WO2022221260A1 - O-cloud lifecycle management service support - Google Patents

O-cloud lifecycle management service support

Info

Publication number: WO2022221260A1
Authority: WO (WIPO (PCT))
Prior art keywords: cloud, ran, request, deployment, consumer
Application number: PCT/US2022/024390
Other languages: French (fr)
Inventors: Joey Chou, Niall POWER
Original assignee: Intel Corporation
Application filed by Intel Corporation
Publication of WO2022221260A1


Classifications

    • H04L 41/40: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks, using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H04L 41/342: Signalling channels for network management communication between virtual entities, e.g. orchestrators, SDN or NFV entities
    • H04L 41/5051: Service on demand, e.g. definition and deployment of services in real time
    • H04W 24/02: Arrangements for optimising operational condition
    • H04L 41/0894: Policy-based network configuration management
    • H04L 41/0897: Bandwidth or capacity management by horizontal or vertical scaling of resources, or by migrating entities, e.g. virtual resources or entities
    • H04L 41/16: Network maintenance, administration or management using machine learning or artificial intelligence
    • H04L 41/5058: Service discovery by the service manager
    • H04W 88/18: Service support devices; network management devices

Definitions

  • This disclosure generally relates to systems and methods for wireless communications and, more particularly, to Open Radio Access Network (O-RAN) implementations.
  • O-RAN Open Radio Access Network
  • O-RAN Open RAN Alliance
  • 3GPP defined network slicing technologies
  • FIG. 1 depicts an illustrative schematic diagram for O-Cloud LCM service, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 2 depicts an illustrative schematic diagram for O-Cloud LCM service, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 3 illustrates a flow diagram of a process for an illustrative O-Cloud LCM service system, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 4 illustrates an example network architecture, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 5 schematically illustrates a wireless network, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 6 illustrates components of a computing device, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 7 illustrates a logical architecture 700 of the O-RAN system architecture, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 8 illustrates an example O-RAN Architecture including Near-RT RIC interfaces, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 9 depicts an example O-RAN architecture/framework for adding 3rd-party xApps, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 10 depicts an example Near-RT RIC Internal Architecture, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 1 illustrates an example Open RAN (O-RAN) system architecture 100.
  • the O-RAN architecture 100 includes four O-RAN defined interfaces, namely the A1 interface, the O1 interface, the O2 interface, and the Open Fronthaul Management (M)-plane interface, which connect the service management and orchestration (SMO) framework 102 to O-RAN network functions (NFs) 104 and the O-Cloud 106.
  • SMO service management and orchestration
  • the O1 interface is an interface between orchestration & management entities (Orchestration/NMS) and O-RAN managed elements, for operation and management, by which FCAPS management, software management, file management, and other similar functions shall be achieved (see e.g., O-RAN Alliance Working Group (WG) 1, “O-RAN Architecture Description” v02.00 (Jul 2020) (“O-RAN.WG1.O-RAN-Architecture-Description-v02.00”); O-RAN Alliance WG6, “Cloud Architecture and Deployment Scenarios for O-RAN Virtualized RAN” v02.00 (“O-RAN.WG6.CAD-v02.00”)).
  • the O2 interface is an interface between the Service Management and Orchestration Framework and the O-Cloud (see e.g., O-RAN.WG1.O-RAN-Architecture-Description-v02.00; O-RAN.WG6.CAD-v02.00).
  • O-Cloud refers to a cloud computing platform made up of the physical infrastructure nodes using O-RAN architecture.
  • the A1 interface is an interface between Non-RT RIC and Near-RT RIC to enable policy-driven guidance of Near-RT RIC applications/functions, and support AI/ML workflow.
  • the O2 Interface is a collection of services and their associated interfaces that are provided by the O-Cloud platform to the SMO.
  • the services are categorized into two logical groups (see the interface sketch below): (i) Infrastructure Management Services (IMS), the subset of O2 functions that are responsible for deploying and managing cloud infrastructure; and (ii) Deployment Management Services (DMS), the subset of O2 functions that are responsible for managing the lifecycle of virtualized/containerized deployments on the cloud infrastructure.
  • IMS Infrastructure Management Services
  • DMS Deployment Management Services
  • the O2 services and their associated interfaces shall be specified in the upcoming O2 specification. Any definitions of SMO functional elements needed to consume these services shall be described in the OAM architecture.
  • LCM life cycle management
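  • To make the IMS/DMS grouping concrete, the following minimal Python sketch models the two logical O2 service groups as abstract interfaces. The class and method names are illustrative assumptions, not identifiers from the O2 specification.

```python
# Minimal sketch of the two logical O2 service groups (all names hypothetical).
from abc import ABC, abstractmethod


class InfrastructureManagementServices(ABC):
    """IMS: O2 functions responsible for deploying and managing cloud infrastructure."""

    @abstractmethod
    def provision_node(self, node_spec: dict) -> str:
        """Bring an O-Cloud infrastructure node into service; return its node ID."""


class DeploymentManagementServices(ABC):
    """DMS: O2 functions responsible for the lifecycle of virtualized/containerized deployments."""

    @abstractmethod
    def create_deployment(self, nf_descriptor: dict) -> str:
        """Create an NF deployment on the cloud infrastructure; return a deployment ID."""

    @abstractmethod
    def delete_deployment(self, deployment_id: str) -> None:
        """Terminate an existing NF deployment."""
```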
  • the SMO 102 also connects with an external system 110, which provides enrichment data to the SMO 102.
  • FIG. 1 also illustrates that the A1 interface terminates at an O-RAN Non-Real Time (RT) RAN Intelligent Controller (RIC) 112.
  • the O-RAN NFs 104 can be VNFs such as VMs or containers, sitting above the O-Cloud 106, and/or Physical Network Functions (PNFs) utilizing customized hardware. All O-RAN NFs 104 are expected to support the O1 interface when interfacing with the SMO framework 102.
  • the O-RAN NFs 104 connect to the NG-Core 108 via the NG interface (which is a 3GPP defined interface).
  • the Open Fronthaul M-plane interface between the SMO 102 and the O-RAN Radio Unit (O-RU) 116 supports the O-RU 116 management in the O-RAN hybrid model as specified in O-RAN Alliance WG4, O-RAN Fronthaul Management Plane Specification, version 2.0 (July 2019) (“ORAN-WG4.MP.0-v02.00.00”).
  • the Open Fronthaul M-plane interface is an optional interface to the SMO 102 that is included for backward compatibility purposes as per ORAN-WG4.MP.0-v02.00.00, and is intended for management of the O-RU 116 in hybrid mode only.
  • OAM-Architecture-v03.00
  • Cloud-native APIs (e.g., Kubernetes, OpenStack) have been proposed to provide deployment management services (DMS) on the O2 interface.
  • DMS deployment management services
  • the DMS provides management of one or more deployments using the O-Cloud resources. If applications, such as slice subnet management functions, are to invoke cloud-native APIs directly, then each application needs to support multiple cloud-native APIs.
  • an O-Cloud LCM service system may provide that O-Cloud life cycle management (LCM) services are provided by the NFO, which decouples the complexity of cloud-native APIs from the applications.
  • O-Cloud LCM services are an abstraction of cloud-native APIs (see the adapter sketch below).
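  • The following minimal Python sketch illustrates this decoupling under stated assumptions: consumers call one O-Cloud LCM API exposed by the NFO, and backend-specific adapters hide the underlying cloud-native APIs. The adapter classes and method names are hypothetical and do not correspond to real Kubernetes or OpenStack client calls.

```python
# Sketch of the abstraction layer; all names are hypothetical.
from abc import ABC, abstractmethod


class CloudNativeAdapter(ABC):
    """Backend-specific adapter hidden behind the O-Cloud LCM service."""

    @abstractmethod
    def deploy(self, nf_descriptor: dict) -> str:
        """Translate an NF LCM descriptor into backend constructs and deploy it."""

    @abstractmethod
    def undeploy(self, instance_id: str) -> None:
        """Tear down a previously created deployment."""


class KubernetesAdapter(CloudNativeAdapter):
    def deploy(self, nf_descriptor: dict) -> str:
        # Would map the descriptor to container constructs (stubbed here).
        return "k8s-deployment-1"

    def undeploy(self, instance_id: str) -> None:
        pass  # Would delete the corresponding container deployment.


class OCloudLcmService:
    """Single API surface that consumers (e.g., slice management) see."""

    def __init__(self, adapter: CloudNativeAdapter):
        self._adapter = adapter

    def instantiate_nf(self, nf_descriptor: dict) -> str:
        return self._adapter.deploy(nf_descriptor)

    def terminate_nf(self, instance_id: str) -> None:
        self._adapter.undeploy(instance_id)
```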
  • an O-Cloud LCM service system may provide mechanisms for load balancing optimization (LBO) and mobility robustness optimization (MRO).
  • LBO load balancing optimization
  • MRO mobility robustness optimization
  • FIG. 2 depicts an illustrative schematic diagram for O-Cloud LCM service, in accordance with one or more example embodiments of the present disclosure.
  • FIG. 2 shows a framework that describes the O-Cloud LCM services exposed by the network function orchestration (NFO) function to allow consumers, such as a slice management function, to instantiate and/or terminate O-RAN network functions.
  • NFO network function orchestration
  • an O-Cloud LCM service system may provide an O-Cloud LCM service that enables an authorized consumer in the SMO to send a request to upload NF descriptors and receive instantiate responses.
  • an O-Cloud LCM service system may invoke UploadNfDescriptorsRequest to onboard the NF LCM descriptors.
  • the NF LCM descriptors are an abstraction of VM/container descriptors or Application packages.
  • the O-Cloud LCM service system may receive InstantiateNfResponse with output parameters indicating the result of onboarding.
  • Table 1 shows the messages and directions (an illustrative sketch follows below).
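  • The body of Table 1 is not reproduced in this text; the following minimal Python sketch reconstructs the onboarding exchange from the surrounding description. Field names are illustrative assumptions.

```python
# Onboarding message sketch; directions follow the description above.
from dataclasses import dataclass


@dataclass
class UploadNfDescriptorsRequest:
    # Direction: consumer (in SMO) -> O-Cloud LCM service (NFO).
    nf_lcm_descriptors: list  # abstraction of VM/container descriptors or application packages


@dataclass
class InstantiateNfResponse:
    # Direction: O-Cloud LCM service (NFO) -> consumer; per the text above,
    # its output parameters indicate the result of onboarding.
    output_parameters: dict
```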
  • an O-Cloud LCM service system may facilitate O-Cloud LCM operations such as instantiation.
  • the O-cloud LCM service may enable an authorized consumer in SMO to request instantiation and receive a response to the instantiation request.
  • an O-Cloud LCM service system may invoke InstantiateNfRequest with a reference to NF LCM descriptors that identify the information needed to instantiate the O-RAN NF.
  • if the application is expected to provide an O-Cloud ID for the SMO to select the O-Cloud, then a way is needed to expose the O-Cloud ID to applications.
  • because the DMS supports both VM and container solutions, it needs to be determined how the solution is chosen when a request is received from the applications.
  • an O-Cloud LCM service system may receive InstantiateNfResponse with output parameters indicating the status of instantiation.
  • an O-Cloud LCM service system may receive an LCM notification to indicate the result of instantiation.
  • the following Table 2 shows the messages and directions (an illustrative sketch follows below).
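  • The body of Table 2 is likewise not reproduced; the following minimal Python sketch reconstructs the instantiation exchange from the surrounding description. Field names are illustrative assumptions.

```python
# Instantiation message sketch; directions follow the description above.
from dataclasses import dataclass
from typing import Optional


@dataclass
class InstantiateNfRequest:
    # Direction: consumer (in SMO) -> O-Cloud LCM service (NFO).
    nf_lcm_descriptor_ref: str        # identifies the information needed to instantiate the O-RAN NF
    o_cloud_id: Optional[str] = None  # open issue above: whether the application supplies the O-Cloud ID


@dataclass
class InstantiateNfResponse:
    # Direction: NFO -> consumer.
    output_parameters: dict  # status of instantiation


@dataclass
class LcmNotification:
    # Direction: NFO -> consumer; indicates the result of instantiation.
    nf_instance_id: str
    result: str
```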
  • an O-Cloud LCM service system may execute an O-Cloud LCM termination operation.
  • the O-Cloud LCM service system may provide an O-Cloud LCM service that enables an authorized consumer in the SMO to send a termination request and receive a termination response.
  • the O-Cloud LCM service system may invoke TerminateNfRequest with the NF identifier to terminate the NF instance.
  • the O-Cloud LCM service system may receive TerminateNfResponse with output parameters indicating the result of NF termination (see the illustrative sketch below).
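  • A minimal Python sketch of the termination exchange described above; field names are illustrative assumptions.

```python
# Termination message sketch; directions follow the description above.
from dataclasses import dataclass


@dataclass
class TerminateNfRequest:
    # Direction: consumer (in SMO) -> O-Cloud LCM service (NFO).
    nf_instance_id: str  # NF identifier of the instance to terminate


@dataclass
class TerminateNfResponse:
    # Direction: NFO -> consumer.
    output_parameters: dict  # result of NF termination
```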
  • an O-Cloud LCM service system may instantiate a Network Function on an O-Cloud.
  • the following use case (Table 4) describes the instantiation of a Network Function as a new deployment on an O-Cloud, and notification to the SMO once the instantiation of resources for the Network Function deployment has been completed.
  • the instantiation on the O-Cloud Node may be part of a larger procedure instantiating multiple connected Network Functions, in which case the SMO will coordinate the timing of instantiation across O-Clouds and O-Cloud Nodes, the configuration of transport needed between O-Cloud Nodes, and other requirements such as addressing and security used for connecting the Network Functions as below.
  • Instantiation of multiple connected NFs is not addressed in the use case shown in Table 4.
  • an O-Cloud LCM service system may terminate a Network Function on an O-Cloud.
  • the following use case (Table 5) describes the termination of a Network Function deployment on an O-Cloud, and notification to the SMO once the termination of resources for the Network Function deployment has been completed.
  • an O-Cloud LCM service system may define requirements for O-RAN managed function IM elements and FM/PM elements needed for orchestration using the O1 interface. This is shown in the following Tables 6 and 7. Table 6: Orchestration Requirements Relating to O1
  • FIG. 3 illustrates a flow diagram of an illustrative process 300 for an illustrative O-Cloud LCM service system, in accordance with one or more example embodiments of the present disclosure (a code-level sketch of this flow follows the items below).
  • a device may identify a first request received from a consumer in the SMO in an open radio access network (O-RAN), wherein the request is to instantiate a network function (NF) on an O-Cloud in the 5GS.
  • the request specifies placement requirements indicating where the NF needs to be instantiated.
  • the request is an UploadNfDescriptorsRequest from the consumer.
  • the device may cause to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF.
  • DMS deployment management services
  • the device may identify an indication received from the DMS that the deployment of the NF has been completed.
  • the device may cause to send a notification to the consumer that the NF has been instantiated on the O-Cloud.
  • the notification is an instantiateNfResponse response to the consumer.
  • the notification is an O-Cloud lifecycle management (LCM) notification.
  • the instantiateNfResponse response comprises output parameters indicating the result of the deployment of the NF on the O-Cloud.
  • the device may onboard the NF, wherein the NF comprises a descriptor providing requirements associated with the deployment of the NF.
  • the device may identify a termination request received from the consumer, wherein the termination request is for NF termination.
  • the device may cause to send a termination response to the consumer indicating the result of the NF termination.
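  • Under stated assumptions, the following minimal Python sketch walks through the instantiation branch of process 300: identify the consumer's request, send a second request to the DMS, wait for the completion indication, and notify the consumer. The Dms and Consumer classes and their methods are hypothetical stand-ins, not interfaces from the O-RAN specifications.

```python
# Hypothetical sketch of the process-300 instantiation flow.


class Dms:
    """Stand-in for the deployment management services on the O-Cloud."""

    def create_deployment(self, nf_descriptor: dict) -> str:
        # Second request: ask the DMS to create a deployment of the NF.
        return "deployment-1"

    def wait_for_completion(self, deployment_id: str) -> bool:
        # Indication from the DMS that the deployment has been completed.
        return True


class Consumer:
    """Stand-in for the authorized consumer in the SMO."""

    def notify(self, message: dict) -> None:
        print(f"notification to consumer: {message}")


def handle_instantiate_request(request: dict, dms: Dms, consumer: Consumer) -> None:
    # 1. Identify the first request received from the consumer, including
    #    placement requirements indicating where the NF needs to be instantiated.
    nf_descriptor = request["nf_descriptor"]
    # 2. Send the second request to the DMS on the O-Cloud.
    deployment_id = dms.create_deployment(nf_descriptor)
    # 3. Identify the indication that the deployment of the NF has completed.
    if dms.wait_for_completion(deployment_id):
        # 4. Notify the consumer (e.g., an instantiateNfResponse or an
        #    O-Cloud LCM notification) that the NF has been instantiated.
        consumer.notify({"event": "NF_INSTANTIATED", "deployment_id": deployment_id})


# Usage example with the hypothetical stand-ins:
handle_instantiate_request({"nf_descriptor": {"name": "o-du"}}, Dms(), Consumer())
```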
  • FIGs. 4-6 illustrate various systems, devices, and components that may implement aspects of disclosed embodiments.
  • FIG. 4 illustrates an example network architecture 400 according to various embodiments.
  • the network 400 may operate in a manner consistent with 3GPP technical specifications for LTE or 5G/NR systems.
  • the example embodiments are not limited in this regard, and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like.
  • the network 400 includes a UE 402, which is any mobile or non-mobile computing device designed to communicate with a RAN 404 via an over-the-air connection.
  • the UE 402 is communicatively coupled with the RAN 404 by a Uu interface, which may be applicable to both LTE and NR systems.
  • Examples of the UE 402 include, but are not limited to, a smartphone, tablet computer, wearable computer, desktop computer, laptop computer, in-vehicle infotainment system, in-car entertainment system, instrument cluster, head-up display (HUD) device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, machine-to-machine (M2M), device-to-device (D2D), machine-type communication (MTC) device, Internet of Things (IoT) device, and/or the like.
  • HUD head-up display
  • the network 400 may include a plurality of UEs 402 coupled directly with one another via a D2D, ProSe, PC5, and/or sidelink (SL) interface.
  • UEs 402 may be M2M/D2D/MTC/IoT devices and/or vehicular systems that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc.
  • the UE 402 may perform blind decoding attempts of SL channels/links according to the various embodiments herein.
  • the UE 402 may additionally communicate with an AP 406 via an over-the-air (OTA) connection.
  • the AP 406 manages a WLAN connection, which may serve to offload some/all network traffic from the RAN 404.
  • the connection between the UE 402 and the AP 406 may be consistent with any IEEE 802.11 protocol.
  • the UE 402, RAN 404, and AP 406 may utilize cellular-WLAN aggregation/integration (e.g., LWA/LWIP).
  • Cellular-WLAN aggregation may involve the UE 402 being configured by the RAN 404 to utilize both cellular radio resources and WLAN resources.
  • the RAN 404 includes one or more access network nodes (ANs) 408.
  • the ANs 408 terminate air-interface(s) for the UE 402 by providing access stratum protocols including RRC, PDCP, RLC, MAC, and PHY/L1 protocols. In this manner, the AN 408 enables data/voice connectivity between CN 420 and the UE 402.
  • the ANs 408 may be a macrocell base station or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells; or some combination thereof.
  • an AN 408 may be referred to as a BS, gNB, RAN node, eNB, ng-eNB, NodeB, RSU, TRxP, etc.
  • One example implementation is a “CU/DU split” architecture where the ANs 408 are embodied as a gNB-Central Unit (CU) that is communicatively coupled with one or more gNB-Distributed Units (DUs), where each DU may be communicatively coupled with one or more Radio Units (RUs) (also referred to as RRHs, RRUs, or the like) (see e.g., 3GPP TS 38.401 v16.1.0 (2020-03)).
  • RUs Radio Units
  • the one or more RUs may be individual RSUs.
  • the CU/DU split may include an ng-eNB-CU and one or more ng-eNB-DUs instead of, or in addition to, the gNB-CU and gNB-DUs, respectively.
  • the ANs 408 employed as the CU may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network including a virtual Base Band Unit (BBU) or BBU pool, cloud RAN (CRAN), Radio Equipment Controller (REC), Radio Cloud Center (RCC), centralized RAN (C-RAN), virtualized RAN (vRAN), and/or the like (although these terms may refer to different implementation concepts). Any other type of architectures, arrangements, and/or configurations can be used.
  • BBU Virtual Base Band Unit
  • CRAN cloud RAN
  • REC Radio Equipment Controller
  • RCC Radio Cloud Center
  • C-RAN centralized RAN
  • vRAN virtualized RAN
  • the plurality of ANs may be coupled with one another via an X2 interface (if the RAN 404 is an LTE RAN or Evolved Universal Terrestrial Radio Access Network (E-UTRAN) 410) or an Xn interface (if the RAN 404 is a NG-RAN 414).
  • the X2/Xn interfaces which may be separated into control/user plane interfaces in some embodiments, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, etc.
  • the ANs of the RAN 404 may each manage one or more cells, cell groups, component carriers, etc. to provide the UE 402 with an air interface for network access.
  • the UE 402 may be simultaneously connected with a plurality of cells provided by the same or different ANs 408 of the RAN 404.
  • the UE 402 and RAN 404 may use carrier aggregation to allow the UE 402 to connect with a plurality of component carriers, each corresponding to a Pcell or Scell.
  • a first AN 408 may be a master node that provides an MCG and a second AN 408 may be a secondary node that provides an SCG.
  • the first/second ANs 408 may be any combination of eNB, gNB, ng-eNB, etc.
  • the RAN 404 may provide the air interface over a licensed spectrum or an unlicensed spectrum.
  • the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/Scells.
  • prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.
  • LBT listen-before-talk
  • the UE 402 or AN 408 may be or act as a roadside unit (RSU), which may refer to any transportation infrastructure entity used for V2X communications.
  • RSU may be implemented in or by a suitable AN or a stationary (or relatively stationary) UE.
  • An RSU implemented in or by: a UE may be referred to as a “UE-type RSU”; an eNB may be referred to as an “eNB-type RSU”; a gNB may be referred to as a “gNB-type RSU”; and the like.
  • an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs.
  • the RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic.
  • the RSU may provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may provide other cellular/WLAN communications services.
  • the components of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network.
  • the RAN 404 may be an E-UTRAN 410 with one or more eNBs 412.
  • the E-UTRAN 410 provides an LTE air interface (Uu) with the following characteristics: SCS of 15 kHz; CP-OFDM waveform for DL and SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc.
  • the LTE air interface may rely on CSI-RS for CSI acquisition and beam management; PDSCH/PDCCH DMRS for PDSCH/PDCCH demodulation; and CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE.
  • the LTE air interface may operate on sub-6 GHz bands.
  • the RAN 404 may be a next generation (NG)-RAN 414 with one or more gNBs 416 and/or one or more ng-eNBs 418.
  • the gNB 416 connects with 5G-enabled UEs 402 using a 5G NR interface.
  • the gNB 416 connects with a 5GC 440 through an NG interface, which includes an N2 interface or an N3 interface.
  • the ng-eNB 418 also connects with the 5GC 440 through an NG interface, but may connect with a UE 402 via the Uu interface.
  • the gNB 416 and the ng-eNB 418 may connect with each other over an Xn interface.
  • the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 414 and a UPF 448 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 414 and an AMF 444 (e.g., N2 interface).
  • NG-U NG user plane
  • NG-C NG control plane
  • the NG-RAN 414 may provide a 5G-NR air interface (which may also be referred to as a Uu interface) with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data.
  • the 5G-NR air interface may rely on CSI-RS, PDSCH/PDCCH DMRS similar to the LTE air interface.
  • the 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking for PDSCH; and tracking reference signal for time tracking.
  • the 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz.
  • the 5G-NR air interface may include an SSB that is an area of a downlink resource grid that includes PSS/SSS/PBCH.
  • the 5G-NR air interface may utilize BWPs for various purposes.
  • BWP can be used for dynamic adaptation of the SCS.
  • the UE 402 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 402, the SCS of the transmission is changed as well.
  • Another use case example of BWP is related to power saving.
  • multiple BWPs can be configured for the UE 402 with different amounts of frequency resources (e.g., PRBs) to support data transmission under different traffic loading scenarios.
  • a BWP containing a smaller number of PRBs can be used for data transmission with small traffic load while allowing power saving at the UE 402 and in some cases at the gNB 416.
  • a BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
  • the RAN 404 is communicatively coupled to CN 420 that includes network elements and/or network functions (NFs) to provide various functions to support data and telecommunications services to customers/subscribers (e.g., UE 402).
  • the components of the CN 420 may be implemented in one physical node or separate physical nodes.
  • NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 420 onto physical compute/storage resources in servers, switches, etc.
  • a logical instantiation of the CN 420 may be referred to as a network slice, and a logical instantiation of a portion of the CN 420 may be referred to as a network sub-slice.
  • the CN 420 may be an LTE CN 422 (also referred to as an Evolved Packet Core (EPC) 422).
  • the EPC 422 may include MME 424, SGW 426, SGSN 428, HSS 430, PGW 432, and PCRF 434 coupled with one another over interfaces (or “reference points”) as shown.
  • the NFs in the EPC 422 are briefly introduced as follows.
  • the MME 424 implements mobility management functions to track a current location of the UE 402 to facilitate paging, bearer activation/deactivation, handovers, gateway selection, authentication, etc.
  • the SGW 426 terminates an S1 interface toward the RAN 410 and routes data packets between the RAN 410 and the EPC 422.
  • the SGW 426 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.
  • the SGSN 428 tracks a location of the UE 402 and performs security functions and access control.
  • the SGSN 428 also performs inter-EPC node signaling for mobility between different RAT networks; PDN and S-GW selection as specified by MME 424; MME 424 selection for handovers; etc.
  • the S3 reference point between the MME 424 and the SGSN 428 enables user and bearer information exchange for inter-3GPP access network mobility in idle/active states.
  • the HSS 430 includes a database for network users, including subscription-related information to support the network entities’ handling of communication sessions.
  • the HSS 430 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc.
  • An S6a reference point between the HSS 430 and the MME 424 may enable transfer of subscription and authentication data for authenticating/authorizing user access to the EPC 422.
  • the PGW 432 may terminate an SGi interface toward a data network (DN) 436 that may include an application (app)/content server 438.
  • the PGW 432 routes data packets between the EPC 422 and the data network 436.
  • the PGW 432 is communicatively coupled with the SGW 426 by an S5 reference point to facilitate user plane tunneling and tunnel management.
  • the PGW 432 may further include a node for policy enforcement and charging data collection (e.g., PCEF).
  • the SGi reference point may communicatively couple the PGW 432 with the same or different data network 436.
  • the PGW 432 may be communicatively coupled with a PCRF 434 via a Gx reference point.
  • the PCRF 434 is the policy and charging control element of the EPC 422.
  • the PCRF 434 is communicatively coupled to the app/content server 438 to determine appropriate QoS and charging parameters for service flows.
  • the PCRF 434 also provisions associated rules into a PCEF (via the Gx reference point) with appropriate TFT and QCI.
  • the CN 420 may be a 5GC 440 including an AUSF 442, AMF 444, SMF 446, UPF 448, NSSF 450, NEF 452, NRF 454, PCF 456, UDM 458, and AF 460 coupled with one another over various interfaces as shown.
  • the NFs in the 5GC 440 are briefly introduced as follows.
  • the AUSF 442 stores data for authentication of UE 402 and handles authentication-related functionality.
  • the AUSF 442 may facilitate a common authentication framework for various access types.
  • the AMF 444 allows other functions of the 5GC 440 to communicate with the UE 402 and the RAN 404 and to subscribe to notifications about mobility events with respect to the UE 402.
  • the AMF 444 is also responsible for registration management (e.g., for registering UE 402), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization.
  • the AMF 444 provides transport for SM messages between the UE 402 and the SMF 446, and acts as a transparent proxy for routing SM messages.
  • AMF 444 also provides transport for SMS messages between UE 402 and an SMSF.
  • AMF 444 interacts with the AUSF 442 and the UE 402 to perform various security anchor and context management functions.
  • AMF 444 is a termination point of a RAN-CP interface, which includes the N2 reference point between the RAN 404 and the AMF 444.
  • the AMF 444 is also a termination point of NAS (N1) signaling, and performs NAS ciphering and integrity protection.
  • AMF 444 also supports NAS signaling with the UE 402 over an N3IWF interface.
  • the N3IWF provides access to untrusted entities.
  • the N3IWF may be a termination point for the N2 interface between the (R)AN 404 and the AMF 444 for the control plane, and may be a termination point for the N3 reference point between the (R)AN 414 and the UPF 448 for the user plane.
  • the N3IWF handles N2 signaling from the SMF 446 and the AMF 444 for PDU sessions and QoS, encapsulates/de-encapsulates packets for IPSec and N3 tunneling, marks N3 user-plane packets in the uplink, and enforces QoS corresponding to N3 packet marking, taking into account QoS requirements associated with such marking received over N2.
  • the N3IWF may also relay UL and DL control-plane NAS signaling between the UE 402 and AMF 444 via an N1 reference point between the UE 402 and the AMF 444, and relay uplink and downlink user-plane packets between the UE 402 and UPF 448.
  • the N3IWF also provides mechanisms for IPsec tunnel establishment with the UE 402.
  • the AMF 444 may exhibit an Namf service-based interface, and may be a termination point for an N14 reference point between two AMFs 444 and an N17 reference point between the AMF 444 and a 5G-EIR (not shown by FIG. 4).
  • the SMF 446 is responsible for SM (e.g., session establishment, tunnel management between UPF 448 and AN 408); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 448 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; downlink data notification; initiating AN specific SM information, sent via AMF 444 over N2 to AN 408; and determining SSC mode of a session.
  • SM refers to management of a PDU session
  • a PDU session or “session” refers to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 402 and the DN 436.
  • the UPF 448 acts as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to data network 436, and a branching point to support multi-homed PDU sessions.
  • the UPF 448 also performs packet routing and forwarding, packet inspection, enforcement of the user-plane part of policy rules, lawful intercept of packets (UP collection), traffic usage reporting, QoS handling for the user plane (e.g., packet filtering, gating, UL/DL rate enforcement), uplink traffic verification (e.g., SDF-to-QoS flow mapping), transport-level packet marking in the uplink and downlink, downlink packet buffering, and downlink data notification triggering.
  • UPF 448 may include an uplink classifier to support routing traffic flows to a data network.
  • the NSSF 450 selects a set of network slice instances serving the UE 402.
  • the NSSF 450 also determines allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed.
  • the NSSF 450 also determines an AMF set to be used to serve the UE 402, or a list of candidate AMFs 444 based on a suitable configuration and possibly by querying the NRF 454.
  • the selection of a set of network slice instances for the UE 402 may be triggered by the AMF 444 with which the UE 402 is registered by interacting with the NSSF 450; this may lead to a change of AMF 444.
  • the NSSF 450 interacts with the AMF 444 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown).
  • the NEF 452 securely exposes services and capabilities provided by 3GPP NFs for third-party use, internal exposure/re-exposure, AFs 460, edge computing or fog computing systems (e.g., edge compute nodes), etc.
  • the NEF 452 may authenticate, authorize, or throttle the AFs.
  • the NEF 452 may also translate information exchanged with the AF 460 and information exchanged with internal network functions. For example, the NEF 452 may translate between an AF-Service-Identifier and internal 5GC information.
  • NEF 452 may also receive information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 452 as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF 452 to other NFs and AFs, or used for other purposes such as analytics.
  • the NRF 454 supports service discovery functions, receives NF discovery requests from NF instances, and provides information of the discovered NF instances to the requesting NF instances. NRF 454 also maintains information of available NF instances and their supported services. The NRF 454 also supports service discovery functions, wherein the NRF 454 receives NF Discovery Request from NF instance or an SCP (not shown), and provides information of the discovered NF instances to the NF instance or SCP.
  • the PCF 456 provides policy rules to control plane functions to enforce them, and may also support unified policy framework to govern network behavior.
  • the PCF 456 may also implement a front end to access subscription information relevant for policy decisions in a UDR of the UDM 458.
  • the PCF 456 exhibits an Npcf service-based interface.
  • the UDM 458 handles subscription-related information to support the network entities’ handling of communication sessions, and stores subscription data of UE 402. For example, subscription data may be communicated via an N8 reference point between the UDM 458 and the AMF 444.
  • the UDM 458 may include two parts, an application front end and a UDR.
  • the UDR may store subscription data and policy data for the UDM 458 and the PCF 456, and/or structured data for exposure and application data (including PFDs for application detection, application request information for multiple UEs 402) for the NEF 452.
  • the Nudr service-based interface may be exhibited by the UDR to allow the UDM 458, PCF 456, and NEF 452 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR.
  • the UDM may include a UDM-FE, which is in charge of processing credentials, location management, subscription management and so on. Several different front ends may serve the same user in different transactions.
  • the UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management.
  • the UDM 458 may exhibit the Nudm service-based interface.
  • the AF 460 provides application influence on traffic routing, provides access to the NEF 452, and interacts with the policy framework for policy control.
  • the AF 460 may influence UPF 448 (re)selection and traffic routing. Based on operator deployment, when the AF 460 is considered to be a trusted entity, the network operator may permit the AF 460 to interact directly with relevant NFs. Additionally, the AF 460 may be used for edge computing implementations.
  • the 5GC 440 may enable edge computing by selecting operator/3rd-party services to be geographically close to a point that the UE 402 is attached to the network. This may reduce latency and load on the network.
  • the 5GC 440 may select a UPF 448 close to the UE 402 and execute traffic steering from the UPF 448 to DN 436 via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF 460, which allows the AF 460 to influence UPF (re)selection and traffic routing.
  • the data network (DN) 436 may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, application (app)/content server 438.
  • the DN 436 may be an operator-external public PDN, a private PDN, or an intra-operator packet data network, for example, for provision of IMS services.
  • the app server 438 can be coupled to an IMS via an S-CSCF or the I-CSCF.
  • the DN 436 may represent one or more local area DNs (LADNs), which are DNs 436 (or DN names (DNNs)) that is/are accessible by a UE 402 in one or more specific areas. Outside of these specific areas, the UE 402 is not able to access the LADN/DN 436.
  • LADNs local area DNs
  • DNNs DN names
  • the DN 436 may be an Edge DN 436, which is a (local) Data Network that supports the architecture for enabling edge applications.
  • the app server 438 may represent the physical hardware systems/devices providing app server functionality and/or the application software resident in the cloud or at an edge compute node that performs server function(s).
  • the app/content server 438 provides an edge hosting environment that provides support required for Edge Application Server's execution.
  • the 5GS can use one or more edge compute nodes to provide an interface and offload processing of wireless communication traffic.
  • the edge compute nodes may be included in, or co-located with, one or more RANs 410, 414.
  • the edge compute nodes can provide a connection between the RAN 414 and UPF 448 in the 5GC 440.
  • the edge compute nodes can use one or more NFV instances instantiated on virtualization infrastructure within the edge compute nodes to process wireless connections to and from the RAN 414 and UPF 448.
  • the interfaces of the 5GC 440 include reference points and service-based interfaces.
  • the reference points include: N1 (between the UE 402 and the AMF 444), N2 (between RAN 414 and AMF 444), N3 (between RAN 414 and UPF 448), N4 (between the SMF 446 and UPF 448), N5 (between PCF 456 and AF 460), N6 (between UPF 448 and DN 436), N7 (between SMF 446 and PCF 456), N8 (between UDM 458 and AMF 444), N9 (between two UPFs 448), N10 (between the UDM 458 and the SMF 446), N11 (between the AMF 444 and the SMF 446), N12 (between AUSF 442 and AMF 444), N13 (between AUSF 442 and UDM 458), N14 (between two AMFs 444; not shown), N15 (between PCF 456 and AMF 444 in case of a non-roaming scenario), etc.
  • the service-based representation of FIG. 4 represents NFs within the control plane that enable other authorized NFs to access their services.
  • the service-based interfaces include: Namf (SBI exhibited by AMF 444), Nsmf (SBI exhibited by SMF 446), Nnef (SBI exhibited by NEF 452), Npcf (SBI exhibited by PCF 456), Nudm (SBI exhibited by the UDM 458), Naf (SBI exhibited by AF 460), Nnrf (SBI exhibited by NRF 454), Nnssf (SBI exhibited by NSSF 450), Nausf (SBI exhibited by AUSF 442).
  • NEF 452 can provide an interface to edge compute nodes 436x, which can be used to process wireless connections with the RAN 414.
  • the system 400 may include an SMSF, which is responsible for SMS subscription checking and verification, and relaying SM messages to/from the UE 402 to/from other entities, such as an SMS-GMSC/IWMSC/SMS-router.
  • the SMSF may also interact with the AMF 444 and UDM 458 for a notification procedure that the UE 402 is available for SMS transfer (e.g., set a UE not reachable flag, and notify UDM 458 when UE 402 is available for SMS).
  • the 5GS may also include an SCP (or individual instances of the SCP) that supports indirect communication (see e.g., 3GPP TS 23.501 section 7.1.1); delegated discovery (see e.g., 3GPP TS 23.501 section 7.1.1); message forwarding and routing to destination NF/NF service(s), communication security (e.g., authorization of the NF Service Consumer to access the NF Service Producer API) (see e.g., 3GPP TS 33.501), load balancing, monitoring, overload control, etc.; and discovery and selection functionality for UDM(s), AUSF(s), UDR(s), PCF(s) with access to subscription data stored in the UDR based on UE's SUPI, SUCI or GPSI (see e.g., 3GPP TS 23.501 section 6.3).
  • Load balancing, monitoring, overload control functionality provided by the SCP may be implementation specific.
  • the SCP may be deployed in a distributed manner. More than one SCP can be present in the communication path between various NF Services.
  • the SCP, although not an NF instance, can also be deployed in a distributed, redundant, and scalable manner.
  • FIG. 5 schematically illustrates a wireless network 500 in accordance with various embodiments.
  • the wireless network 500 may include a UE 502 in wireless communication with an AN 504.
  • the UE 502 and AN 504 may be similar to, and substantially interchangeable with, like-named components described with respect to FIG. 4.
  • the UE 502 may be communicatively coupled with the AN 504 via connection 506.
  • the connection 506 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6GHz frequencies.
  • the UE 502 may include a host platform 508 coupled with a modem platform 510.
  • the host platform 508 may include application processing circuitry 512, which may be coupled with protocol processing circuitry 514 of the modem platform 510.
  • the application processing circuitry 512 may run various applications for the UE 502 that source/sink application data.
  • the application processing circuitry 512 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations may include transport (for example, UDP) and Internet (for example, IP) operations.
  • the protocol processing circuitry 514 may implement one or more of layer operations to facilitate transmission or reception of data over the connection 506.
  • the layer operations implemented by the protocol processing circuitry 514 may include, for example, MAC, RLC, PDCP, RRC and NAS operations.
  • the modem platform 510 may further include digital baseband circuitry 516 that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry 514 in a network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ acknowledgement (ACK) functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
  • the modem platform 510 may further include transmit circuitry 518, receive circuitry 520, RF circuitry 522, and RF front end (RFFE) 524, which may include or connect to one or more antenna panels 526.
  • the transmit circuitry 518 may include a digital-to-analog converter, mixer, intermediate frequency (IF) components, etc.
  • the receive circuitry 520 may include an analog-to-digital converter, mixer, IF components, etc.
  • the RF circuitry 522 may include a low-noise amplifier, a power amplifier, power tracking components, etc.
  • RFFE 524 may include filters (for example, surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (for example, phase-array antenna components), etc.
  • transmit/receive components may be specific to details of a specific implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, etc.
  • the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, etc.
  • the protocol processing circuitry 514 may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.
  • a UE 502 reception may be established by and via the antenna panels 526, RFFE 524, RF circuitry 522, receive circuitry 520, digital baseband circuitry 516, and protocol processing circuitry 514.
  • the antenna panels 526 may receive a transmission from the AN 504 by receive-beamforming signals received by a plurality of antennas/antenna elements of the one or more antenna panels 526.
  • a UE 502 transmission may be established by and via the protocol processing circuitry 514, digital baseband circuitry 516, transmit circuitry 518, RF circuitry 522, RFFE 524, and antenna panels 526.
  • the transmit components of the UE 502 may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels 526.
  • the AN 504 may include a host platform 528 coupled with a modem platform 530.
  • the host platform 528 may include application processing circuitry 532 coupled with protocol processing circuitry 534 of the modem platform 530.
  • the modem platform may further include digital baseband circuitry 536, transmit circuitry 538, receive circuitry 540, RF circuitry 542, RFFE circuitry 544, and antenna panels 546.
  • the components of the AN 504 may be similar to and substantially interchangeable with like-named components of the UE 502.
  • the components of the AN 504 may perform various logical functions that include, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.
  • FIG. 6 illustrates components of a computing device 600 according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein.
  • FIG. 6 shows a diagrammatic representation of hardware resources 600 including one or more processors (or processor cores) 610, one or more memory/storage devices 620, and one or more communication resources 630, each of which may be communicatively coupled via a bus 640 or other interface circuitry.
  • a hypervisor 602 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 600.
  • the processors 610 include, for example, processor 612 and processor 614.
  • the processors 610 include circuitry such as, but not limited to, one or more processor cores and one or more of cache memory, low drop-out voltage regulators (LDOs), interrupt controllers, serial interfaces such as SPI, I2C or universal programmable serial interface circuit, real time clock (RTC), timer-counters including interval and watchdog timers, general purpose I/O, memory card controllers such as secure digital/multi-media card (SD/MMC) or similar, mobile industry processor interface (MIPI) interfaces, and Joint Test Access Group (JTAG) test access ports.
  • LDOs low drop-out voltage regulators
  • RTC real time clock
  • the processors 610 may be, for example, a central processing unit (CPU), reduced instruction set computing (RISC) processors, Acorn RISC Machine (ARM) processors, complex instruction set computing (CISC) processors, graphics processing units (GPUs), one or more Digital Signal Processors (DSPs) such as a baseband processor, Application-Specific Integrated Circuits (ASICs), a Field-Programmable Gate Array (FPGA), a radio-frequency integrated circuit (RFIC), one or more microprocessors or controllers, another processor (including those discussed herein), or any suitable combination thereof.
  • the processor circuitry 610 may include one or more hardware accelerators, which may be microprocessors, programmable processing devices (e.g., FPGA, complex programmable logic devices (CPLDs), etc.), or the like.
  • the memory/storage devices 620 may include main memory, disk storage, or any suitable combination thereof.
  • the memory/storage devices 620 may include, but are not limited to, any type of volatile, non-volatile, or semi-volatile memory such as random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, phase change RAM (PRAM), resistive memory such as magnetoresistive random access memory (MRAM), etc., and may incorporate three-dimensional (3D) cross-point (XPOINT) memories from Intel® and Micron®.
  • the memory/storage devices 620 may also comprise persistent storage devices, which may be temporal and/or persistent storage of any type, including, but not limited to, non-volatile memory, optical, magnetic, and/or solid state mass storage, and so forth.
  • the communication resources 630 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 604 or one or more databases 606 or other network elements via a network 608.
  • the communication resources 630 may include wired communication components (e.g., for coupling via USB, Ethernet, Ethernet over GRE Tunnels, Ethernet over Multiprotocol Label Switching (MPLS), Ethernet over USB, Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, WiFi® components, and other communication components.
  • Network connectivity may be provided to/from the computing device 600 via the communication resources 630 using a physical connection, which may be electrical (e.g., a “copper interconnect”) or optical.
  • the physical connection also includes suitable input connectors (e.g., ports, receptacles, sockets, etc.) and output connectors (e.g., plugs, pins, etc.).
  • the communication resources 630 may include one or more dedicated processors and/or FPGAs to communicate using one or more of the aforementioned network interface protocols.
  • Instructions 650 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors 610 to perform any one or more of the methodologies discussed herein.
  • the instructions 650 may reside, completely or partially, within at least one of the processors 610 (e.g., within the processor’s cache memory), the memory/storage devices 620, or any suitable combination thereof.
  • any portion of the instructions 650 may be transferred to the hardware resources 600 from any combination of the peripheral devices 604 or the databases 606. Accordingly, the memory of processors 610, the memory/storage devices 620, the peripheral devices 604, and the databases 606 are examples of computer-readable and machine-readable media.
  • FIG. 7 illustrates a logical architecture 700 of the O-RAN system architecture 100 of FIG. 1.
  • the SMO 702 corresponds to the SMO 102
  • O-Cloud 706 corresponds to the O-Cloud 106
  • the non-RT RIC 712 corresponds to the non-RT RIC 112
  • the near-RT RIC 714 corresponds to the near-RT RIC 114
• the O-RU 716 corresponds to the O-RU 116 of FIG. 1, respectively.
  • the O-RAN logical architecture 700 includes a radio portion and a management portion.
• the management portion/side of the architecture 700 includes the SMO Framework 702 containing the non-RT RIC 712, and may include the O-Cloud 706.
  • the O-Cloud 706 is a cloud computing platform including a collection of physical infrastructure nodes to host the relevant O-RAN functions (e.g., the near-RT RIC 714, O-CU-CP 721, O-CU-UP 722, and the O-DU 715), supporting software components (e.g., OSs, VMMs, container runtime engines, ML engines, etc.), and appropriate management and orchestration functions.
• An O-Cloud instance 706 refers to a collection of O-Cloud Resource Pools at one or more locations and the software to manage Nodes and Deployments hosted on them.
• An O-Cloud will include functionality to support both Deployment-plane (a.k.a. user-plane) and Management services.
  • the O-Cloud provides a single logical reference point for all O-Cloud Resource Pools within the O-Cloud boundary.
• An O-Cloud Resource Pool is a collection of O-Cloud Nodes with homogeneous profiles in one location which can be used for either Management services or Deployment Plane functions.
  • the allocation of NF deployment to a resource pool is determined by the SMO.
• An O-Cloud Node is a collection of CPUs, memory, storage, NICs, accelerators, BIOSes, BMCs, etc., and can be thought of as a server. Each O-Cloud Node will support one or more “roles,” described next.
• An O-Cloud Node Role refers to the functionalities that a given node may support. These include Compute, Storage, and Networking for the Deployment-plane (i.e., user-plane related functions such as the O-RAN NF); they may include optional acceleration functions, and they may also include the appropriate Management services.
  • An O-Cloud Deployment Plane is a logical construct representing the O-Cloud Nodes across the Resource Pools which are used to create NF Deployments.
• An O-Cloud 706 NF Deployment is a deployment of a cloud native Network Function (all or part of one), resources shared within an NF, or resources shared across network functions.
  • the NF Deployment configures and assembles user-plane resources required for the cloud native construct used to establish the NF Deployment and manage its life cycle from creation to deletion.
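To make the containment relationships above concrete (O-Cloud → Resource Pools → Nodes → NF Deployments), the following is a minimal illustrative sketch in Python; the class and field names are assumptions chosen for readability, not identifiers from any O-RAN specification.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OCloudNode:
    """A server-like unit: CPUs, memory, storage, NICs, accelerators, etc."""
    name: str
    roles: List[str]  # e.g., ["Compute", "Storage", "Networking"]

@dataclass
class ResourcePool:
    """Nodes with homogeneous profiles in one location."""
    location: str
    profile: str
    nodes: List[OCloudNode] = field(default_factory=list)

@dataclass
class NFDeployment:
    """A deployment of a (possibly partial) cloud-native network function."""
    nf_name: str
    pool: ResourcePool  # allocation of an NF deployment to a pool is decided by the SMO

@dataclass
class OCloud:
    """Single logical reference point for all resource pools within its boundary."""
    pools: List[ResourcePool] = field(default_factory=list)
    deployments: List[NFDeployment] = field(default_factory=list)
```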
  • the radio portion/side of the logical architecture 700 includes the near-RT RIC 714, the O-RAN Distributed Unit (O-DU) 715, the O-RU 716, the O-RAN Central Unit - Control Plane (O-CU-CP) 721, and the O-RAN Central Unit - User Plane (O-CU-UP) 722 functions.
  • the radio portion/side of the logical architecture 700 may also include the O-e/gNB 710.
  • the O-DU 715 is a logical node hosting RLC, MAC, and higher PHY layer entities/elements (High-PHY layers) based on a lower layer functional split.
  • the O-RU 716 is a logical node hosting lower PHY layer entities/elements (Low-PHY layer) (e.g., FFT/iFFT, PRACH extraction, etc.) and RF processing elements based on a lower layer functional split. Virtualization of O-RU 716 is FFS.
  • the O-CU-CP 721 is a logical node hosting the RRC and the control plane (CP) part of the PDCP protocol.
• the O-CU-UP 722 is a logical node hosting the user plane part of the PDCP protocol and the SDAP protocol.
  • An E2 interface terminates at a plurality of E2 nodes.
• the E2 interface connects the near-RT RIC 714 and one or more O-CU-CP 721, one or more O-CU-UP 722, one or more O-DU 715, and one or more O-e/gNB 710.
  • the E2 nodes are logical nodes/entities that terminate the E2 interface.
• the E2 nodes include the O-CU-CP 721, O-CU-UP 722, O-DU 715, or any combination of elements as defined in O-RAN Alliance WG3, “O-RAN Near-Real-time RAN Intelligent Controller Architecture & E2 General Aspects and Principles” v01.01 (“O-RAN.WG3.E2GAP-v01.01”).
• the E2 nodes include the O-e/gNB 710. As shown in FIG. 7, the E2 interface also connects the O-e/gNB 710 to the Near-RT RIC 714.
  • the protocols over E2 interface are based exclusively on Control Plane (CP) protocols.
  • the E2 functions are grouped into the following categories: (a) near-RT RIC 714 services (REPORT, INSERT, CONTROL and POLICY, as described in O- RAN.WG3.E2GAP-v01.01); and (b) near-RT RIC 714 support functions, which include E2 Interface Management (E2 Setup, E2 Reset, Reporting of General Error Situations, etc.) and Near-RT RIC Service Update (e.g., capability exchange related to the list of E2 Node functions exposed over E2).
  • An RIC Service is a Service provided on an E2 Node to provide access to messages and measurements and / or enable control of the E2 Node from the Near-RT RIC.
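For illustration, the four RIC service actions named above can be modeled as a simple enumeration; the comments paraphrase the REPORT/INSERT/CONTROL/POLICY semantics, and the function and class names are hypothetical.

```python
from enum import Enum, auto

class RicServiceAction(Enum):
    REPORT = auto()   # E2 Node reports messages/measurements to the Near-RT RIC
    INSERT = auto()   # E2 Node suspends a procedure and indicates it to the RIC
    CONTROL = auto()  # Near-RT RIC commands the E2 Node to execute an action
    POLICY = auto()   # Near-RT RIC installs a policy the E2 Node applies autonomously

def handle_subscription(action: RicServiceAction) -> str:
    # Illustrative dispatch only; a real E2 termination would encode
    # RIC Subscription Request messages per the E2AP specification.
    return f"subscribed to {action.name} service"
```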
• The injection and control of AI/ML-based intelligence into RAN networks are realized via the E2 interface from the Near-RT RIC, where the Near-RT RIC subscribes to various RIC services (REPORT, INSERT, CONTROL, POLICY) based on RAN functions exposed from RAN nodes. These exposed RAN functions are specified by E2 service models (E2SMs). Among those, E2SM RAN control (E2SM-RC) (see e.g., O-RAN Alliance WG3, “O-RAN Near-Real-time RAN Intelligent Controller E2 Service Model (E2SM) KPM” v01.01 (“[ORAN-WG3.E2SM-KPM-v01.00.00]”)) has recently been agreed to be specified to support injection of resource and mobility control commands from the Near-RT RIC, spanning from radio admission and bearer control to HO, dual connectivity, and carrier aggregation decisions that are required to support traffic steering and QoS optimization use cases (see e.g., O-RAN Alliance WG3, “Use Cases and Requirements” v01.00.03 (Dec. 2020) (“[O-RAN.WG3.UCR-v01.00.03]”)).
  • FIG. 7 shows the Uu interface between a UE 701 and O-e/gNB 710 as well as between the UE 701 and O-RAN components.
• the Uu interface is a 3GPP defined interface (see e.g., sections 5.2 and 5.3 of 3GPP TS 38.401 v16.3.0 (2020-10-02) (“[TS38401]”)), which includes a complete protocol stack from L1 to L3 and terminates in the NG-RAN or E-UTRAN.
• the O-e/gNB 710 is an LTE eNB (see e.g., 3GPP TS 36.401 v16.0.0 (2020-07-16)), or a 5G gNB or ng-eNB (see e.g., 3GPP TS 38.300 v16.3.0 (2020-10-02) (“[TS38300]”)), that supports the E2 interface.
• the O-e/gNB 710 may be the same or similar as RAN 404 and/or ANs 408, and UE 701 may correspond to UE 402 discussed with respect to FIG. 4, and/or the like. There may be multiple UEs 701 and/or multiple O-e/gNB 710, each of which may be connected to one another via respective Uu interfaces.
  • the O-e/gNB 710 supports O-DU 715 and O-RU 716 functions with an Open Fronthaul interface between them.
• the Open Fronthaul (OF) interface(s) is/are between the O-DU 715 and O-RU 716 functions (see e.g., [ORAN-WG4.MP.0-v02.00.00]; O-RAN Alliance WG4, “O-RAN Fronthaul Control, User and Synchronization Plane Specification 4.0” (Jul 2020) (“[ORAN-WG4.CUS.0-v04.00]”)).
  • the OF interface(s) includes the Control User Synchronization (CUS) Plane and Management (M) Plane.
• Figures 1 and 7 also show that the O-RU 716 terminates the OF M-Plane interface towards the O-DU 715 and optionally towards the SMO 702 as specified in ORAN-WG4.MP.0-v02.00.00.
  • the O-RU 716 terminates the OF CUS-Plane interface towards the O-DU 715 and the SMO 702.
• the F1-c interface connects the O-CU-CP 721 with the O-DU 715.
• the F1-c interface is between the gNB-CU-CP and gNB-DU nodes (see [TS38401]; 3GPP TS 38.470 v16.3.0 (2020-10-02) (“[TS38470]”)).
• the F1-c interface is adopted between the O-CU-CP 721 and the O-DU 715 functions while reusing the principles and protocol stack defined by 3GPP and the definition of interoperability profile specifications.
• the F1-u interface connects the O-CU-UP 722 with the O-DU 715.
• the F1-u interface is between the gNB-CU-UP and gNB-DU nodes (see [TS38401], [TS38470]).
• the F1-u interface is adopted between the O-CU-UP 722 and the O-DU 715 functions while reusing the principles and protocol stack defined by 3GPP and the definition of interoperability profile specifications.
• the NG-c interface is defined by 3GPP as an interface between the gNB-CU-CP and the AMF in the 5GC (see [TS38300]).
• the NG-c interface is also referred to as the N2 interface (see [TS38300]).
• the NG-u interface is defined by 3GPP as an interface between the gNB-CU-UP and the UPF in the 5GC (see e.g., [TS38300]).
• the NG-u interface is also referred to as the N3 interface (see e.g., [TS38300]).
  • NG-c and NG-u protocol stacks defined by 3GPP are reused and may be adapted for O-RAN purposes.
• the X2-c interface is defined in 3GPP for transmitting control plane information between eNBs or between an eNB and en-gNB in EN-DC.
• the X2-u interface is defined in 3GPP for transmitting user plane information between eNBs or between an eNB and en-gNB in EN-DC (see e.g., 3GPP TS 36.420 v16.0.0 (2020-07-17), [TS38300]).
• X2-c and X2-u protocol stacks defined by 3GPP are reused and may be adapted for O-RAN purposes.
• the Xn-c interface is defined in 3GPP for transmitting control plane information between gNBs, ng-eNBs, or between an ng-eNB and gNB.
• the Xn-u interface is defined in 3GPP for transmitting user plane information between gNBs, ng-eNBs, or between an ng-eNB and gNB (see e.g., [TS38300], 3GPP TS 38.420 v16.0.0 (2020-07-16)).
• Xn-c and Xn-u protocol stacks defined by 3GPP are reused and may be adapted for O-RAN purposes.
• the E1 interface is defined by 3GPP as being an interface between the gNB-CU-CP (e.g., gNB-CU-CP 3728) and gNB-CU-UP (see e.g., [TS38401]; 3GPP TS 38.460 v16.1.0 (2020-07-17)).
• E1 protocol stacks defined by 3GPP are reused and adapted as being an interface between the O-CU-CP 721 and the O-CU-UP 722 functions.
  • the O-RAN Non-Real Time (RT) RAN Intelligent Controller (RIC) 712 is a logical function within the SMO framework 102, 702 that enables non-real-time control and optimization of RAN elements and resources; AI/machine learning (ML) workflow(s) including model training, inferences, and updates; and policy-based guidance of applications/features in the Near-RT RIC 714.
  • the O-RAN near-RT RIC 714 is a logical function that enables near-real-time control and optimization of RAN elements and resources via fine-grained data collection and actions over the E2 interface.
  • the near-RT RIC 714 may include one or more AI/ML workflows including model training, inferences, and updates.
  • the non-RT RIC 712 can be an ML training host to host the training of one or more ML models. ML training can be performed offline using data collected from the RIC, O-DU 715 and O-RU 716.
  • non-RT RIC 712 is part of the SMO 702
  • the ML training host and/or ML model host/actor can be part of the non-RT RIC 712 and/or the near-RT RIC 714.
  • the ML training host and ML model host/actor may be co-located as part of the non-RT RIC 712 and/or the near-RT RIC 714.
  • the non-RT RIC 712 may request or trigger ML model training in the training hosts regardless of where the model is deployed and executed.
  • ML models may be trained and not currently deployed.
  • the non-RT RIC 712 provides a query-able catalog for an ML designer/developer to publish/install trained ML models (e.g., executable software components).
• the non-RT RIC 712 may provide a discovery mechanism to determine whether a particular ML model can be executed in a target ML inference host (MF), and what number and types of ML models can be executed in the MF.
• there may be three types of ML catalogs made discoverable by the non-RT RIC 712: a design-time catalog (e.g., residing outside the non-RT RIC 712 and hosted by some other ML platform(s)), a training/deployment-time catalog (e.g., residing inside the non-RT RIC 712), and a run-time catalog (e.g., residing inside the non-RT RIC 712).
• the non-RT RIC 712 supports the necessary capabilities for ML model inference in support of ML assisted solutions running in the non-RT RIC 712 or some other ML inference host. These capabilities enable executable software, such as VMs, containers, etc., to be installed.
  • the non-RT RIC 712 may also include and/or operate one or more ML engines, which are packaged software executable libraries that provide methods, routines, data types, etc., used to run ML models.
  • the non-RT RIC 712 may also implement policies to switch and activate ML model instances under different operating conditions.
• the non-RT RIC 712 is able to access feedback data (e.g., FM and PM statistics) over the O1 interface on ML model performance and perform the necessary evaluations. If the ML model fails during runtime, an alarm can be generated as feedback to the non-RT RIC 712. How well the ML model is performing in terms of prediction accuracy or other operating statistics it produces can also be sent to the non-RT RIC 712 over O1.
• the non-RT RIC 712 can also scale ML model instances running in a target MF over the O1 interface by observing resource utilization in the MF.
  • the environment where the ML model instance is running (e.g., the MF) monitors resource utilization of the running ML model.
• the scaling mechanism may include a scaling factor such as a number, percentage, and/or other like data used to scale up/down the number of ML instances.
  • ML model instances running in the target ML inference hosts may be automatically scaled by observing resource utilization in the MF. For example, the Kubernetes® (K8s) runtime environment typically provides an auto-scaling feature.
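A minimal sketch of this scaling decision, assuming the inference host exposes an average utilization metric and that the proportional rule used by the Kubernetes horizontal autoscaler is a reasonable stand-in; all names are illustrative.

```python
import math

def scale_ml_instances(current_instances: int,
                       cpu_utilization: float,
                       target_utilization: float = 0.6) -> int:
    """Return a new instance count so average utilization approaches the target.

    Mirrors the proportional rule used by Kubernetes' horizontal autoscaler:
    desired = ceil(current * observed / target).
    """
    if current_instances == 0:
        return 1  # always keep at least one instance of a deployed model
    desired = math.ceil(current_instances * cpu_utilization / target_utilization)
    return max(1, desired)

# Example: 4 instances at 90% utilization with a 60% target -> scale up to 6.
assert scale_ml_instances(4, 0.9) == 6
```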
  • the A1 interface is between the non-RT RIC 712 (within or outside the SMO 702) and the near-RT RIC 714.
• the A1 interface supports three types of services as defined in O-RAN Alliance WG2, “O-RAN A1 interface: General Aspects and Principles Specification,” version 1.0 (Oct 2019) (“ORAN-WG2.A1.GA&P-v01.00”), including a Policy Management Service, an Enrichment Information Service, and an ML Model Management Service.
• A1 policies have the following characteristics compared to persistent configuration (see e.g., [ORAN-WG2.A1.GA&P-v01.00]): A1 policies are not critical to traffic; A1 policies have temporary validity; A1 policies may handle individual UEs or dynamically defined groups of UEs; A1 policies act within and take precedence over the configuration; and A1 policies are non-persistent, i.e., they do not survive a restart of the near-RT RIC.
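As a concrete (purely hypothetical) illustration of these characteristics, an A1 policy instance might carry temporary, UE-group-scoped guidance such as the following; the field names are assumptions, since actual A1 policy types are defined by schemas per ORAN-WG2.A1.GA&P-v01.00.

```python
# Hypothetical A1 policy instance (illustrative field names, not a normative schema).
a1_policy_instance = {
    "policy_type_id": 20008,                   # assumed example type identifier
    "policy_instance_id": "ts-0001",
    "scope": {"ue_group": "cell-edge-users"},  # may target dynamically defined UE groups
    "statement": {"qos_preference": "prioritize_throughput"},
}
# Non-persistent by design: such an instance does not survive a Near-RT RIC restart,
# and it acts within, but takes precedence over, the persistent configuration.
```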
  • O-RAN is currently developing a framework for adding 3rd party xApps to a Base Station product, which is assembled from components from different suppliers.
  • FIG. 8 illustrates an example O-RAN Architecture 800 including Near-RT RIC interfaces according to various embodiments.
  • the Near-RT RIC is a logical network node placed between the Service Management & Orchestration layer, which hosts the Non-RT RIC, and the E2 Nodes.
  • the Near-RT-RIC logical architecture and related interfaces are shown in FIG. 8.
• the Near-RT RIC is connected to the Non-RT RIC through the A1 interface (see e.g., ORAN-WG2.A1.GA&P-v01.00).
  • a Near-RT RIC is connected to only one Non-RT RIC.
  • E2 is a logical interface connecting the Near-RT RIC with an E2 Node.
  • the Near-RT RIC is connected to the O-CU-CP.
  • the Near-RT RIC is connected to the O-CU- UP.
  • the Near-RT RIC is connected to the O-DU.
• the Near-RT RIC is connected to the O-eNB.
  • An E2 Node is connected to only one Near-RT RIC.
  • a Near-RT RIC can be connected to multiple E2 Nodes, i.e. multiple O-CU-CPs, O-CU-UPs, O-DUs and O-eNBs.
• F1 (F1-C, F1-U) and E1 are logical 3GPP interfaces, whose protocols, termination points, and cardinalities are specified in [TS38401].
• the near-RT RIC and other RAN nodes have O1 interfaces as defined in O-RAN.WG1.OAM-Architecture-v03.00 and O-RAN.WG1.O-RAN-Architecture-Description-v02.00.
  • the Near-RT RIC hosts one or more xApps that use the E2 interface to collect near real-time information (e.g., UE basis, Cell basis) and provide value added services.
• the Near-RT RIC may receive declarative Policies and obtain Data Enrichment information over the A1 interface (see e.g., ORAN-WG2.A1.GA&P-v01.00).
• the protocols over the E2 interface are based exclusively on Control plane protocols and are defined in O-RAN Alliance WG3, “Near-Real-time RAN Intelligent Controller, E2 Application Protocol (E2AP)” v01.01 (Jul 2020) (“O-RAN.WG3.E2AP-v01.01”).
• An E2 Node will be able to provide services, but there may be an outage for certain value-added services that may only be provided using the Near-RT RIC.
  • the Near-RT RIC provides a database function that stores the configurations relating to E2 nodes, Cells, Bearers, Flows, UEs and the mappings between them.
  • the Near-RT RIC provides ML tools that support data pipelining.
  • the Near-RT RIC provides a messaging infrastructure.
• the Near-RT RIC provides logging, tracing, and metrics collection from the Near-RT RIC framework and xApps to the SMO.
  • the Near-RT RIC provides security functions.
  • the Near-RT RIC supports conflict resolution to resolve the potential conflicts or overlaps which may be caused by the requests from xApps.
  • the Near-RT RIC also provides an open API enabling the hosting of 3rd party xApps and xApps from the Near-RT RIC platform vendor.
  • Near-RT RIC also provides an open API decoupled from specific implementation solutions, including a Shared Data Layer (SDL) that works as an overlay for underlying databases and enables simplified data access.
  • An xApp is an application designed to run on the Near-RT RIC. Such an application is likely to include or provide one or more microservices and at the point of on-boarding will identify which data it consumes and which data it provides.
  • An xApp is independent of the Near-RT RIC and may be provided by any third party.
  • the E2 enables a direct association between the xApp and the RAN functionality.
• a RAN Function is a specific Function in an E2 Node; examples include the X2AP, F1AP, E1AP, S1AP, and NGAP interfaces and RAN internal functions handling UEs, Cells, etc.
• the architecture of an xApp comprises the code implementing the xApp's logic and the RIC libraries that allow the xApp to: send and receive messages; read from, write to, and get notifications from the SDL layer; and write log messages. Additional libraries will be available in future versions, including libraries for setting and resetting alarms and sending statistics. Furthermore, xApps can use access libraries to access specific namespaces in the SDL layer. For example, the R-NIB, which provides information about which E2 nodes (e.g., CU/DU) the RIC is connected to and which SMs are supported by each E2 node, can be read by using the R-NIB access library.
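The shape of such an xApp can be sketched as application logic plus thin library wrappers for SDL access and logging; the wrapper names below (SdlClient, XApp, connected_e2_nodes) are assumptions for illustration, not the actual O-RAN SC framework API.

```python
import logging
from typing import Dict, Optional, Tuple

class SdlClient:
    """Illustrative stand-in for a Shared Data Layer (SDL) key-value client."""
    def __init__(self) -> None:
        self._store: Dict[Tuple[str, str], bytes] = {}

    def set(self, namespace: str, key: str, value: bytes) -> None:
        self._store[(namespace, key)] = value

    def get(self, namespace: str, key: str) -> Optional[bytes]:
        return self._store.get((namespace, key))

class XApp:
    """Application logic plus thin RIC library wrappers (messaging omitted)."""
    def __init__(self, sdl: SdlClient) -> None:
        self.sdl = sdl
        self.log = logging.getLogger("xapp")

    def connected_e2_nodes(self) -> Optional[bytes]:
        # An R-NIB access library would expose typed reads of the R-NIB
        # namespace; here it is modeled as a plain key-value lookup.
        return self.sdl.get("R-NIB", "e2-node-list")
```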
• the O-RAN standard interfaces (e.g., O1, A1, and E2) are exposed to the xApps as follows: an xApp will receive its configuration via a K8s ConfigMap; the configuration can be updated while the xApp is running, and the xApp can be notified of this modification by using inotify(); an xApp can send statistics (PM) either by (a) sending them directly to the VES collector in VES format, or (b) exposing statistics via a REST interface for Prometheus to collect; an xApp will receive A1 policy guidance via an RMR message of a specific kind (policy instance creation and deletion operations); and an xApp can subscribe to E2 events by constructing the E2 subscription ASN.1 payload and sending it as an RMR message, and will receive E2 messages (e.g., E2 INDICATION) as RMR messages with the ASN.1 payload. Similarly, an xApp can issue E2 control messages.
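A minimal sketch of the configuration-reload behavior described above, assuming the descriptor-derived configuration is mounted as a JSON file (as a K8s ConfigMap volume would provide) and that polling the file's modification time stands in for an inotify() watch; the path and names are hypothetical.

```python
import json
import os
import time

CONFIG_PATH = "/opt/xapp/config.json"  # assumed ConfigMap mount point

def watch_config(path: str = CONFIG_PATH, interval: float = 2.0):
    """Yield the parsed configuration whenever the mounted file changes."""
    last_mtime = 0.0
    while True:
        try:
            mtime = os.stat(path).st_mtime
            if mtime != last_mtime:  # ConfigMap update detected
                last_mtime = mtime
                with open(path) as f:
                    yield json.load(f)
        except FileNotFoundError:
            pass  # volume not mounted yet
        time.sleep(interval)  # a real xApp would use inotify instead of polling
```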
• xApps can send messages that are processed by other xApps and can receive messages produced by other xApps.
• Communication inside the RIC is policy driven; that is, an xApp cannot specify the target of a message. It simply sends a message of a specific type, and the routing policies specified for the RIC instance determine to which destinations the message will be delivered (logical pub/sub).
• an xApp is an entity that implements a well-defined function. Mechanically, an xApp is a K8s pod that includes one or multiple containers. For an xApp to be deployable, it needs an xApp descriptor (e.g., JSON) that describes the xApp's configuration parameters and the information the RIC platform needs to configure itself for the xApp. The xApp developer will also need to provide a JSON schema for the descriptor.
• an xApp may do any of the following: read initial configuration parameters (passed in the xApp descriptor); receive updated configuration parameters; send and receive messages; read and write into a persistent shared data storage (key-value store); receive A1-P policy guidance messages, specifically operations to create or delete a policy instance (JSON payload on an RMR message) related to a given policy type; define a new A1 policy type; make subscriptions via the E2 interface to the RAN, receive E2 INDICATION messages from the RAN, and issue E2 POLICY and CONTROL messages to the RAN; and report metrics related to its own execution or observed RAN events.
  • the lifecycle of xApp development and deployment consists of the following states:
• the xApp code and xApp descriptor are committed to the LF Gerrit repo and included in an O-RAN release.
• the xApp is packaged as a Docker container and its image is released to the LF Release registry.
• On-boarded/Distributed: The xApp descriptor (and potentially a helm chart) is customized for a given RIC environment, and the resulting customized helm chart is stored in a local helm chart repo used by the RIC environment's xApp Manager.
• Run-time Parameters Configuration: Before the xApp can be deployed, run-time helm chart parameters are provided by the operator to customize the xApp Kubernetes deployment instance. This procedure is mainly used to configure run-time-unique helm chart parameters such as the instance UUID, liveness check, and east-bound and north-bound service endpoints (e.g., DBAAS entry, VES collector endpoint).
• Deployed: The xApp has been deployed via the xApp Manager and the xApp pod is running on a RIC instance.
• the deployed status may be further divided into additional states controlled via xApp configuration updates, for example, Running and Stopped.
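For illustration, this lifecycle can be summarized as a simple state progression; the first two state names below are assumptions (the source list gives only their descriptions), while the remaining labels follow the list above.

```python
from enum import Enum

class XAppLifecycle(Enum):
    # First two labels are assumed; the source describes these stages without naming them.
    RELEASED_CODE = "code and descriptor committed to the LF Gerrit repo"
    RELEASED_IMAGE = "packaged as a Docker container in the LF release registry"
    ONBOARDED_DISTRIBUTED = "customized helm chart stored in the local chart repo"
    RUNTIME_CONFIGURED = "run-time helm chart parameters provided by the operator"
    DEPLOYED = "xApp pod running on a RIC instance (e.g., Running or Stopped)"
```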
• The general principles guiding the definition of the Near-RT RIC architecture, as well as the interfaces between the Near-RT RIC, E2 Nodes, and Service Management & Orchestration, are the following: Near-RT RIC and E2 Node functions are fully separated from transport functions, and the addressing scheme used in the Near-RT RIC and the E2 Nodes shall not be tied to the addressing schemes of transport functions.
• the E2 Nodes support all protocol layers and interfaces defined within 3GPP radio access networks, including the eNB for E-UTRAN [5] and the gNB/ng-eNB for NG-RAN [16].
• the Near-RT RIC and hosted “xApp” applications shall use a set of services exposed by an E2 Node that is described by a series of RAN function and Radio Access Technology (RAT) dependent “E2 Service Models”.
  • the Near-RT RIC interfaces are defined along the following principles:
  • Interfaces are based on a logical model of the entity controlled through this interface.
  • xApps may enhance the RRM capabilities of the Near-RT RIC. xApps provide logging, tracing and metrics collection to the Near-RT RIC.
  • xApps include an xApp descriptor and xApp image.
  • the xApp image is the software package.
  • the xApp image contains all the files needed to deploy an xApp.
• An xApp can have multiple versions of its xApp image, which are tagged by the xApp image version number.
• the xApp descriptor describes the packaging format of the xApp image.
  • the xApp descriptor also provides the necessary data to enable their management and orchestration.
  • the xApp descriptor provides xApp management services with necessary information for the LCM of xApps, such as deployment, deletion, upgrade etc.
• the xApp descriptor also provides extra parameters related to the health management of xApps, such as auto-scaling when the load of the xApp is too heavy and auto-healing when the xApp becomes unhealthy.
• the xApp descriptor provides FCAPS and control parameters to an xApp when the xApp is launched.
  • xApp descriptor includes:
• the basic information of the xApp, including name, version, provider, URL of the xApp image, virtual resource requirements (e.g., CPU), etc. This information is used to support the LCM of xApps. Additionally or alternatively, the basic information may include or indicate configuration, metrics, and control data about an xApp.
  • FCAPS management specifications that specify the options of configuration, performance metrics collection, etc. for the xApp.
• the control specifications that specify the data types consumed and provided by the xApp for control capabilities (e.g., Performance Management (PM) data that the xApp subscribes to, and the message type of control messages).
• the xApp descriptor components include the following:
• Configuration: the xApp configuration specification shall include a data dictionary for the configuration data, i.e., metadata such as a yang definition or a list of configuration parameters and their semantics. Additionally, it may include an initial configuration of xApps.
• Controls: the xApp controls specification shall include the types of data the xApp consumes and provides that enable control capabilities (e.g., xApp URL, parameters, input/output type).
• Metrics: the xApp metrics specification shall include a list of metrics (e.g., metric name, type, unit, and semantics) provided by the xApp.
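Putting these components together, a descriptor covering the basic information, configuration, controls, and metrics sections might look like the following sketch (shown as a Python dict mirroring the JSON the document mentions); every field name is an illustrative assumption, since the normative schema is supplied by the xApp developer.

```python
# Hypothetical xApp descriptor; field names are assumptions, not a normative schema.
xapp_descriptor = {
    "name": "traffic-steering",
    "version": "1.0.0",
    "provider": "example-vendor",
    "image_url": "registry.example.org/xapps/traffic-steering:1.0.0",
    "resources": {"cpu": "500m", "memory": "256Mi"},  # virtual resource requirements
    "configuration": {
        # data dictionary plus an optional initial configuration
        "data_dictionary": {"steering_threshold_dbm": "integer, RSRP threshold"},
        "initial": {"steering_threshold_dbm": -105},
    },
    "controls": {
        "consumes": ["E2SM-KPM indications"],   # data types the xApp subscribes to
        "provides": ["E2 CONTROL messages"],    # message types it can issue
    },
    "metrics": [
        {"name": "handover_attempts", "type": "counter", "unit": "events"},
    ],
}
```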
  • FIG. 9 depicts an example ORAN architectures/frameworks 900 for adding 3rd party xApps according to various embodiments.
  • FIG. 10 depicts an example Near-RT RIC Internal Architecture 1000 according to various embodiments.
  • the Near-RT RIC hosts the following functions: Database functionality, which allows reading and writing of RAN/UE information; xApp subscription management, which merges subscriptions from different xApps and provides unified data distribution to xApps; Conflict mitigation, which resolves potentially overlapping or conflicting requests from multiple xApps; Messaging infrastructure, which enables message interaction amongst Near-RT RIC internal functions; Security, which provides the security scheme for the xApps; and
  • Management services including: fault management, configuration management, and performance management as a service producer to SMO; life-cycle management of xApps; and logging, tracing and metrics collection, which capture, monitor and collect the status of Near- RT RIC internals and can be transferred to external system for further evaluation; and
• E2 termination, which terminates the E2 interface from an E2 Node;
• A1 termination, which terminates the A1 interface from the non-RT RIC; and
• O1 termination, which terminates the O1 interface from the SMO.
  • xApps may provide UE related information to be stored in the UE-NIB (UE-Network Information Base) database.
  • UE-NIB maintains a list of UEs and associated data.
• the UE-NIB maintains tracking and correlation of the UE identities associated with the connected E2 nodes.
  • xApps may provide radio access network related information to be stored in the R-NIB (Radio-Network Information Base) database.
  • the R-NIB stores the configurations and near real-time information relating to connected E2 Nodes and the mappings between them.
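A minimal sketch of the two information bases, assuming simple dictionaries keyed by identifiers; real implementations sit behind the SDL, and all names here are illustrative.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UeNib:
    """Tracks UEs and correlates their identities across connected E2 nodes."""
    ue_ids_by_e2_node: Dict[str, List[str]] = field(default_factory=dict)

    def correlate(self, e2_node_id: str, ue_id: str) -> None:
        self.ue_ids_by_e2_node.setdefault(e2_node_id, []).append(ue_id)

@dataclass
class RNib:
    """Stores configurations of connected E2 nodes and mappings between them."""
    e2_node_config: Dict[str, dict] = field(default_factory=dict)
    supported_service_models: Dict[str, List[str]] = field(default_factory=dict)
```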
  • xApp subscription management manages subscriptions from the xApps to the E2 Nodes.
  • xApp subscription management enforces authorization of policies controlling xApp access to messages.
  • xApp subscription management enables merging of identical subscriptions from different xApps into a single subscription to the E2 Node.
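The merging behavior can be sketched as deduplication on a subscription key: identical (E2 node, event trigger) requests from different xApps map onto one E2 subscription whose indications are fanned out to every subscriber. The class and function names are assumptions.

```python
from collections import defaultdict
from typing import Callable, Dict, Set, Tuple

SubscriptionKey = Tuple[str, str]  # (e2_node_id, event_trigger_definition)

class SubscriptionManager:
    def __init__(self, send_to_e2: Callable[[SubscriptionKey], None]) -> None:
        self._send_to_e2 = send_to_e2
        self._subscribers: Dict[SubscriptionKey, Set[str]] = defaultdict(set)

    def subscribe(self, xapp_id: str, key: SubscriptionKey) -> None:
        # Only the first xApp triggers an actual E2 subscription;
        # identical requests from other xApps are merged onto it.
        if not self._subscribers[key]:
            self._send_to_e2(key)
        self._subscribers[key].add(xapp_id)

    def fan_out(self, key: SubscriptionKey, indication: bytes) -> Dict[str, bytes]:
        # Distribute one E2 indication to every merged subscriber.
        return {xapp: indication for xapp in self._subscribers.get(key, set())}
```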
  • At least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below.
  • the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below.
  • circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section.
• Example 1 may include an apparatus comprising processing circuitry configured to: identify a request received from a consumer in a service management and orchestration (SMO) framework in an open radio access network (O-RAN), wherein the request may be to instantiate a network function (NF) on an O-Cloud in a 5G system (5GS); cause to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF; identify an indication received from the DMS that the deployment of the NF has been completed; and cause to send a notification to the consumer that the NF has been instantiated on the O-Cloud.
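A minimal sketch of the Example 1 flow, with simple callable stand-ins for the SMO function, the O-Cloud DMS, and the consumer notification; the instantiateNfResponse name follows the examples below, while everything else is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class InstantiateNfRequest:
    nf_descriptor_id: str
    placement: dict  # e.g., {"o_cloud": "edge-site-1"} placement requirements

class Dms:
    """Stand-in for the O-Cloud Deployment Management Services."""
    def create_deployment(self, nf_descriptor_id: str, placement: dict) -> str:
        return f"deployment-of-{nf_descriptor_id}"  # returns a deployment id

def handle_instantiate(request: InstantiateNfRequest, dms: Dms) -> dict:
    # 1) request received from the consumer in the SMO (function input)
    # 2) second request sent to the DMS on the O-Cloud to create the NF deployment
    deployment_id = dms.create_deployment(request.nf_descriptor_id, request.placement)
    # 3) indication from the DMS that the deployment completed (modeled as the return)
    # 4) notify the consumer that the NF has been instantiated on the O-Cloud
    return {"instantiateNfResponse": {"result": "SUCCESS", "deployment": deployment_id}}
```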
• Example 2 may include the apparatus of example 1 and/or some other example herein, wherein the request specifies placement requirements indicating where the NF needs to be instantiated.
  • Example 3 may include the apparatus of example 1 and/or some other example herein, wherein the request may be an UploadNfDescriptorsRequest from the consumer.
  • Example 4 may include the apparatus of example 1 and/or some other example herein, wherein the notification may be an instantiateNfResponse response to the consumer.
  • Example 5 may include the apparatus of example 1 and/or some other example herein, wherein the instantiateNfResponse response comprises output parameters indicating the result of the deployment of the NF on the O- Cloud.
  • Example 6 may include the apparatus of example 1 and/or some other example herein, wherein the processing circuitry may be further configured to onboard the NF, wherein the NF comprises a descriptor providing requirements associated with the deployment of the NF.
  • Example 7 may include the apparatus of example 1 and/or some other example herein, wherein the notification may be an O-Cloud lifecycle management (LCM) notification.
  • Example 8 may include the apparatus of example 1 and/or some other example herein, wherein the processing circuitry may be further configured to identify a termination request received from the consumer, wherein the termination request may be for NF termination.
  • Example 9 may include the apparatus of example 8 and/or some other example herein, wherein the processing circuitry may be further configured to cause to send a termination response to the consumer indicating the result of the NF termination.
  • Example 10 may include a computer-readable storage medium comprising instructions to cause processing circuitry, upon execution of the instructions by the processing circuitry, to: identify a request received from a consumer in a service management and orchestration (SMO) in an open radio access network (O-RAN), wherein the request may be to instantiate a network function (NF) on an O-Cloud in a 5G system (5GS); cause to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF; identify an indication received from the DMS that the deployment of the NF has been completed; and cause to send a notification to the consumer that the NF has been instantiated on the O-Cloud.
• Example 11 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the request specifies placement requirements indicating where the NF needs to be instantiated.
  • Example 12 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the request may be an UploadNfDescriptorsRequest from the consumer.
  • Example 13 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the notification may be an instantiateNfResponse response to the consumer.
• Example 14 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the instantiateNfResponse response comprises output parameters indicating the result of the deployment of the NF on the O-Cloud.
• Example 15 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the operations further comprise onboarding the NF, wherein the NF comprises a descriptor providing requirements associated with the deployment of the NF.
  • Example 16 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the notification may be an O-Cloud lifecycle management (LCM) notification.
  • Example 17 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the operations further comprise identifying a termination request received from the consumer, wherein the termination request may be for NF termination.
  • Example 18 may include the computer-readable storage medium of example 17 and/or some other example herein, wherein the operations further comprise causing to send a termination response to the consumer indicating the result of the NF termination.
  • Example 19 may include a method comprising: identifying a request received from a consumer in a service management and orchestration (SMO) in an open radio access network (O-RAN), wherein the request may be to instantiate a network function (NF) on an O-Cloud in a 5G system (5GS); causing to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF; identifying an indication received from the DMS that the deployment of the NF has been completed; and causing to send a notification to the consumer that the NF has been instantiated on the O-Cloud.
• Example 20 may include the method of example 19 and/or some other example herein, wherein the request specifies placement requirements indicating where the NF needs to be instantiated.
  • Example 21 may include the method of example 19 and/or some other example herein, wherein the request may be an UploadNfDescriptorsRequest from the consumer.
  • Example 22 may include the method of example 19 and/or some other example herein, wherein the notification may be an instantiateNfResponse response to the consumer.
  • Example 23 may include the method of example 19 and/or some other example herein, wherein the instantiateNfResponse response comprises output parameters indicating the result of the deployment of the NF on the O-Cloud.
• Example 24 may include the method of example 19 and/or some other example herein, further comprising onboarding the NF, wherein the NF comprises a descriptor providing requirements associated with the deployment of the NF.
  • Example 25 may include the method of example 19 and/or some other example herein, wherein the notification may be an O-Cloud lifecycle management (LCM) notification.
  • Example 26 may include the method of example 19 and/or some other example herein, further comprising identifying a termination request received from the consumer, wherein the termination request may be for NF termination.
  • Example 27 may include the method of example 26 and/or some other example herein, further comprising causing to send a termination response to the consumer indicating the result of the NF termination.
  • Example 28 may include an apparatus comprising means for: identifying a request received from a consumer in a service management and orchestration (SMO) in an open radio access network (O-RAN), wherein the request may be to instantiate a network function (NF) on an O-Cloud in a 5G system (5GS); causing to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF; identifying an indication received from the DMS that the deployment of the NF has been completed; and causing to send a notification to the consumer that the NF has been instantiated on the O-Cloud.
• Example 29 may include the apparatus of example 28 and/or some other example herein, wherein the request specifies placement requirements indicating where the NF needs to be instantiated.
  • Example 30 may include the apparatus of example 28 and/or some other example herein, wherein the request may be an UploadNfDescriptorsRequest from the consumer.
  • Example 31 may include the apparatus of example 28 and/or some other example herein, wherein the notification may be an instantiateNfResponse response to the consumer.
  • Example 32 may include the apparatus of example 28 and/or some other example herein, wherein the instantiateNfResponse response comprises output parameters indicating the result of the deployment of the NF on the O-Cloud.
• Example 33 may include the apparatus of example 28 and/or some other example herein, further comprising means for onboarding the NF, wherein the NF comprises a descriptor providing requirements associated with the deployment of the NF.
  • Example 34 may include the apparatus of example 28 and/or some other example herein, wherein the notification may be an O-Cloud lifecycle management (LCM) notification.
  • Example 35 may include the apparatus of example 28 and/or some other example herein, further comprising identifying a termination request received from the consumer, wherein the termination request may be for NF termination.
  • Example 36 may include the apparatus of example 35 and/or some other example herein, further comprising causing to send a termination response to the consumer indicating the result of the NF termination.
  • Example 37 may include an apparatus comprising means for performing any of the methods of examples 1-36.
• Example 38 may include a network node comprising a communication interface and processing circuitry connected thereto and configured to perform the methods of examples 1-36.
  • Example 39 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-36, or any other method or process described herein.
  • Example 40 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-36, or any other method or process described herein.
  • Example 41 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-36, or any other method or process described herein.
  • Example 42 may include a method, technique, or process as described in or related to any of examples 1-36, or portions or parts thereof.
  • Example 43 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-36, or portions thereof.
  • Example 44 may include a signal as described in or related to any of examples 1-36, or portions or parts thereof.
  • Example 45 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-36, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example 46 may include a signal encoded with data as described in or related to any of examples 1-36, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example 47 may include a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-36, or portions or parts thereof, or otherwise described in the present disclosure.
  • Example 48 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-36, or portions thereof.
  • Example 49 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, techniques, or process as described in or related to any of examples 1-36, or portions thereof.
  • Example 50 may include a signal in a wireless network as shown and described herein.
  • Example 51 may include a method of communicating in a wireless network as shown and described herein.
  • Example 52 may include a system for providing wireless communication as shown and described herein.
  • Example 53 may include a device for providing wireless communication as shown and described herein.
  • An example implementation is an edge computing system, including respective edge processing devices and nodes to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is a client endpoint node, operable to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is an aggregation node, network hub node, gateway node, or core data processing node, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is an access point, base station, road-side unit, street-side unit, or on-premise unit, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is an edge provisioning node, service orchestration node, application orchestration node, or multi-tenant management node, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is an edge node operating an edge provisioning service, application or service orchestration service, virtual machine deployment, container deployment, function deployment, and compute management, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is an edge computing system operable as an edge mesh, as an edge mesh with side car loading, or with mesh-to-mesh communications, operable to invoke or perform the operations of the examples above, or other subject matter described herein.
  • Another example implementation is an edge computing system including aspects of network functions, acceleration functions, acceleration hardware, storage hardware, or computation hardware resources, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein.
  • Another example implementation is an edge computing system adapted for supporting client mobility, vehicle-to-vehicle (V2V), vehicle-to-everything (V2X), or vehicle-to-infrastructure (V2I) scenarios, and optionally operating according to ETSI MEC specifications, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein.
• Another example implementation is an edge computing system adapted for mobile wireless communications, including configurations according to 3GPP 4G/LTE or 5G network capabilities, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein.
• Another example implementation is a computing system adapted for network communications, including configurations according to O-RAN capabilities, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein.
  • the phrase “A and/or B” means (A), (B), or (A and B).
  • the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).
• the description may use the phrases “in an embodiment” or “in some embodiments,” which may each refer to one or more of the same or different embodiments.
  • the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure are synonymous.
  • Coupled may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other.
  • directly coupled may mean that two or more elements are in direct contact with one another.
• communicatively coupled may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.
  • circuitry refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable SoC), digital signal processors (DSPs), etc., that are configured to provide the described functionality.
  • the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality.
  • the term “circuitry” may also refer to a combination of one or more hardware elements (or a combination of circuits used in an electrical or electronic system) with the program code used to carry out the functionality of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry.
  • the term “processor circuitry” as used herein refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data. Processing circuitry may include one or more processing cores to execute instructions and one or more memory structures to store program and data information.
  • processor circuitry may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes.
• Processing circuitry may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like.
  • the one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators.
  • memory and/or “memory circuitry” as used herein refers to one or more hardware devices for storing data, including RAM, MRAM, PRAM, DRAM, and/or SDRAM, core memory, ROM, magnetic disk storage mediums, optical storage mediums, flash memory devices or other machine readable mediums for storing data.
  • computer-readable medium may include, but is not limited to, memory, portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying instructions or data.
  • interface circuitry refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices.
  • interface circuitry may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like.
  • user equipment refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network.
  • the term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc.
  • the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface.
  • network element refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services.
  • network element may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized VNF, NFVI, and/or the like.
• computer system refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources.
  • appliance refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource.
  • a “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource.
  • element refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary, wherein an element may be any type of entity including, for example, one or more devices, systems, controllers, network elements, modules, etc., or combinations thereof.
  • device refers to a physical entity embedded inside, or attached to, another physical entity in its vicinity, with capabilities to convey digital information from or to that physical entity.
  • entity refers to a distinct component of an architecture or device, or information transferred as a payload.
  • controller refers to an element or entity that has the capability to affect a physical entity, such as by changing its state or causing the physical entity to move.
  • cloud computing refers to a paradigm for enabling network access to a scalable and elastic pool of shareable computing resources with self-service provisioning and administration on-demand and without active management by users.
  • Cloud computing provides cloud computing services (or cloud services), which are one or more capabilities offered via cloud computing that are invoked using a defined interface (e.g., an API or the like).
  • computing resource or simply “resource” refers to any physical or virtual component, or usage of such components, of limited availability within a computer system or network.
  • Examples of computing resources include usage/access to, for a period of time, servers, processor(s), storage equipment, memory devices, memory areas, networks, electrical power, input/output (peripheral) devices, mechanical devices, network connections (e.g., channels/links, ports, network sockets, etc.), operating systems, virtual machines (VMs), software/applications, computer files, and/or the like.
  • a “hardware resource” may refer to compute, storage, and/or network resources provided by physical hardware element(s).
  • a “virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc.
  • the term “network resource” or “communication resource” may refer to resources that are accessible by computer devices/systems via a communications network.
  • system resources may refer to any kind of shared entities to provide services, and may include computing and/or network resources.
  • System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable.
  • cloud service provider or CSP indicates an organization which operates typically large-scale “cloud” resources comprised of centralized, regional, and edge data centers (e.g., as used in the context of the public cloud).
  • a CSP may also be referred to as a Cloud Service Operator (CSO).
  • References to “cloud computing” generally refer to computing resources and services offered by a CSP or a CSO, at remote locations with at least some increased latency, distance, or constraints relative to edge computing.
  • data center refers to a purpose-designed structure that is intended to house multiple high-performance compute and data storage nodes such that a large amount of compute, data storage and network resources are present at a single location. This often entails specialized rack and enclosure systems, suitable heating, cooling, ventilation, security, fire suppression, and power delivery systems.
  • the term may also refer to a compute and data storage node in some contexts.
  • a data center may vary in scale between a centralized or cloud data center (e.g., largest), regional data center, and edge data center (e.g., smallest).
• edge computing refers to the implementation, coordination, and use of computing and resources at locations closer to the “edge” or collection of “edges” of a network. Deploying computing resources at the network's edge may reduce application and network latency, reduce network backhaul traffic and associated energy consumption, improve service capabilities, improve compliance with security or data privacy requirements (especially as compared to conventional cloud computing), and improve total cost of ownership.
  • edge compute node refers to a real-world, logical, or virtualized implementation of a compute-capable element in the form of a device, gateway, bridge, system or subsystem, component, whether operating in a server, client, endpoint, or peer mode, and whether located at an “edge” of a network or at a connected location further within the network.
  • references to a “node” used herein are generally interchangeable with a “device”, “component”, and “sub-system”; however, references to an “edge computing system” or “edge computing network” generally refer to a distributed architecture, organization, or collection of multiple nodes and devices, and which is organized to accomplish or offer some aspect of services or resources in an edge computing setting.
  • the term “Edge Computing” refers to a concept, as described in [6], that enables operator and 3rd party services to be hosted close to the UE's access point of attachment, to achieve an efficient service delivery through the reduced end-to-end latency and load on the transport network.
  • the term “Edge Computing Service Provider” refers to a mobile network operator or a 3rd party service provider offering Edge Computing service.
  • the term “Edge Data Network” refers to a local Data Network (DN) that supports the architecture for enabling edge applications.
  • the term “Edge Hosting Environment” refers to an environment providing support required for Edge Application Server's execution.
  • the term “Application Server” refers to application software resident in the cloud performing the server function.
  • IoT: Internet of Things
  • IoT devices are usually low-power devices without heavy compute or storage capabilities.
  • Edge IoT devices may be any kind of IoT devices deployed at a network’s edge.
  • cluster refers to a set or grouping of entities as part of an edge computing system (or systems), in the form of physical entities (e.g., different computing systems, networks or network groups), logical entities (e.g., applications, functions, security constructs, containers), and the like.
  • a “cluster” is also referred to as a “group” or a “domain”.
  • the membership of a cluster may be modified or affected based on conditions or functions, including from dynamic or property-based membership, from network or system management scenarios, or from various example techniques discussed below which may add, modify, or remove an entity in a cluster.
  • Clusters may also include or be associated with multiple layers, levels, or properties, including variations in security features and results based on such layers, levels, or properties.
  • the term “application” may refer to a complete and deployable package or environment that achieves a certain function in an operational environment.
  • AI/ML application or the like may be an application that contains some AI/ML models and application-level descriptions.
  • machine learning or “ML” refers to the use of computer systems implementing algorithms and/or statistical models to perform specific task(s) without using explicit instructions, but instead relying on patterns and inferences.
  • ML algorithms build or estimate mathematical model(s) (referred to as “ML models” or the like) based on sample data (referred to as “training data,” “model training information,” or the like) in order to make predictions or decisions without being explicitly programmed to perform such tasks.
  • an ML algorithm is a computer program that learns from experience with respect to some task and some performance measure.
  • an ML model may be any object or data structure created after an ML algorithm is trained with one or more training datasets. After training, an ML model may be used to make predictions on new datasets.
  • although the term “ML algorithm” refers to a different concept than the term “ML model,” these terms as discussed herein may be used interchangeably for the purposes of the present disclosure.
  • machine learning model may also refer to ML methods and concepts used by an ML-assisted solution.
  • An “ML-assisted solution” is a solution that addresses a specific use case using ML algorithms during operation.
  • ML models include supervised learning (e.g., linear regression, k-nearest neighbor (KNN), decision tree algorithms, support vector machines, Bayesian algorithms, ensemble algorithms, etc.), unsupervised learning (e.g., K-means clustering, principal component analysis (PCA), etc.), reinforcement learning (e.g., Q-learning, multi-armed bandit learning, deep RL, etc.), neural networks, and the like.
  • An “ML pipeline” is a set of functionalities, functions, or functional entities specific for an ML-assisted solution; an ML pipeline may include one or several data sources in a data pipeline, a model training pipeline, a model evaluation pipeline, and an actor.
  • the “actor” is an entity that hosts an ML-assisted solution using the output of the ML model inference.
  • ML training host refers to an entity, such as a network function, that hosts the training of the model.
  • ML inference host refers to an entity, such as a network function, that hosts the model during inference mode (which includes both the model execution as well as any online learning, if applicable).
  • the ML-host informs the actor about the output of the ML algorithm, and the actor takes a decision for an action (an “action” is performed by an actor as a result of the output of an ML assisted solution).
  • model inference information refers to information used as an input to the ML model for determining inference(s); the data used to train an ML model and the data used to determine inferences may overlap, however, “training data” and “inference data” refer to different concepts.
  • instantiate refers to the creation of an instance.
  • An “instance” also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code.
  • information element refers to a structural element containing one or more fields.
  • field refers to individual contents of an information element, or a data element that contains content.
  • a “database object”, “data structure”, or the like may refer to any representation of information that is in the form of an object, attribute-value pair (AVP), key-value pair (KVP), tuple, etc., and may include variables, data structures, functions, methods, classes, database records, database fields, database entities, associations between data and/or database entities (also referred to as a “relation”), blocks and links between blocks in blockchain implementations, and/or the like.
  • An “information object,” as used herein, refers to a collection of structured data and/or any representation of information, and may include, for example electronic documents (or “documents”), database objects, data structures, files, audio data, video data, raw data, archive files, application packages, and/or any other like representation of information.
  • electronic document or “document,” may refer to a data structure, computer file, or resource used to record data, and includes various file types and/or data formats such as word processing documents, spreadsheets, slide presentations, multimedia items, webpage and/or source code documents, and/or the like.
  • the information objects may include markup and/or source code documents such as HTML, XML, JSON, Apex®, CSS, JSP, MessagePack™, Apache® Thrift™, ASN.1, Google® Protocol Buffers (protobuf), or some other document(s)/format(s) such as those discussed herein.
  • An information object may have both a logical and a physical structure. Physically, an information object comprises one or more units called entities. An entity is a unit of storage that contains content and is identified by a name. An entity may refer to other entities to cause their inclusion in the information object. An information object begins in a document entity, which is also referred to as a root element (or "root"). Logically, an information object comprises one or more declarations, elements, comments, character references, and processing instructions, all of which are indicated in the information object (e.g., using markup).
  • data item refers to an atomic state of a particular object with at least one specific property at a certain point in time.
  • Such an object is usually identified by an object name or object identifier, and properties of such an object are usually defined as database objects (e.g., fields, records, etc.), object instances, or data elements (e.g., mark-up language elements/tags, etc.).
  • data item may refer to data elements and/or content items, although these terms may refer to different concepts.
  • data element or “element” as used herein refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary.
  • a data element is a logical component of an information object (e.g., electronic document) that may begin with a start tag (e.g., “<element>”) and end with a matching end tag (e.g., “</element>”), or only has an empty element tag (e.g., “<element/>”). Any characters between the start tag and end tag, if any, are the element’s content (referred to herein as “content items” or the like).
  • the content of an entity may include one or more content items, each of which has an associated datatype representation.
  • a content item may include, for example, attribute values, character values, URIs, qualified names (qnames), parameters, and the like.
  • a qname is a fully qualified name of an element, attribute, or identifier in an information object.
  • a qname associates a URI of a namespace with a local name of an element, attribute, or identifier in that namespace. To make this association, the qname assigns a prefix to the local name that corresponds to its namespace.
  • the qname comprises a URI of the namespace, the prefix, and the local name. Namespaces are used to provide uniquely named elements and attributes in information objects.
  • an element may contain child elements (e.g., “<element1><element2>content item</element2></element1>”).
  • An “attribute” may refer to a markup construct including a name-value pair that exists within a start tag or empty element tag. Attributes contain data related to its element and/or control the element’s behavior.
  • channel refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream.
  • channel may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated.
  • link refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information.
  • radio technology refers to technology for wireless transmission and/or reception of electromagnetic radiation for information transfer.
  • radio access technology refers to the technology used for the underlying physical connection to a radio based communication network.
  • communication protocol refers to a set of standardized rules or instructions implemented by a communication device and/or system to communicate with other devices and/or systems, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocols stacks, and/or the like.
  • radio access technology or “RAT” refers to the technology used for the underlying physical connection to a radio based communication network.
  • communication protocol (either wired or wireless) refers to a set of standardized rules or instructions implemented by a communication device and/or system to communicate with other devices and/or systems, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocol stacks, and/or the like.
  • Examples of wireless communications protocols that may be used in various embodiments include a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, and/or a Third Generation Partnership Project (3GPP) radio communication technology including, for example, 3GPP Fifth Generation (5G) or New Radio (NR), Universal Mobile Telecommunications System (UMTS), Freedom of Multimedia Access (FOMA), Long Term Evolution (LTE), LTE-Advanced (LTE Advanced), LTE Extra, LTE-A Pro, cdmaOne (2G), Code Division Multiple Access 2000 (CDMA 2000), Cellular Digital Packet Data (CDPD), Mobitex, Circuit Switched Data (CSD), High-Speed CSD (HSCSD), Wideband Code Division Multiple Access (W-CDMA), High Speed Packet Access (HSPA), HSPA Plus (HSPA+), Time Division-Code Division Multiple Access (TD-CDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), and the like.
  • LPWAN: Low-Power Wide-Area Network
  • LoRa: Long Range Wide Area Network (LoRaWAN™, developed by Semtech and the LoRa Alliance)
  • Sigfox
  • WiGig: Wireless Gigabit Alliance
  • WiMAX: Worldwide Interoperability for Microwave Access
  • mmWave standards in general (e.g., wireless systems operating at 10-300 GHz and above, such as WiGig, IEEE 802.11ad, IEEE 802.11ay, etc.), V2X communication technologies (including 3GPP C-V2X).
  • DSRC: Dedicated Short Range Communications
  • ITS: Intelligent Transport Systems
  • ITU: International Telecommunication Union
  • ETSI: European Telecommunications Standards Institute
  • access network refers to any network, using any combination of radio technologies, RATs, and/or communication protocols, used to connect user devices and service providers.
  • an “access network” is an IEEE 802 local area network (LAN) or metropolitan area network (MAN) between terminals and access routers connecting to provider services.
  • LAN: local area network
  • MAN: metropolitan area network
  • access router refers to a router that terminates a medium access control (MAC) service from terminals and forwards user traffic to information servers according to Internet Protocol (IP) addresses.
  • MAC: medium access control
  • SMTC refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration.
  • SSB refers to a synchronization signal/Physical Broadcast Channel (SS/PBCH) block, which includes a Primary Synchronization Signal (PSS), a Secondary Synchronization Signal (SSS), and a PBCH.
  • PSS: Primary Synchronization Signal
  • SSS: Secondary Synchronization Signal
  • PBCH: Physical Broadcast Channel
  • a “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure.
  • Primary SCG Cell refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation.
  • Secondary Cell refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA.
  • Secondary Cell Group refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC.
  • Serving Cell refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; there is only one serving cell, comprising the primary cell.
  • serving cells refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA.
  • Special Cell refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.
  • A1 policy refers to a type of declarative policy expressed using formal statements that enable the non-RT RIC function in the SMO to guide the near-RT RIC function, and hence the RAN, towards better fulfilment of the RAN intent.
  • A1 Enrichment information refers to information utilized by near-RT RIC that is collected or derived at SMO/non-RT RIC either from non-network data sources or from network functions themselves.
  • A1-Policy Based Traffic Steering Process Mode refers to an operational mode in which the Near-RT RIC is configured through A1 Policy to use Traffic Steering Actions to ensure a more specific notion of network performance (for example, applying to smaller groups of E2 Nodes and UEs in the RAN) than that which it ensures in the Background Traffic Steering.
  • Background Traffic Steering Processing Mode refers to an operational mode in which the Near-RT RIC is configured through O1 to use Traffic Steering Actions to ensure a general background network performance which applies broadly across E2 Nodes and UEs in the RAN.
  • Baseline RAN Behavior refers to the default RAN behavior as configured at the E2 Nodes by the SMO.
  • E2 refers to an interface connecting the Near-RT RIC and one or more O-CU-CPs, one or more O-CU-UPs, one or more O-DUs, and one or more O-eNBs.
  • E2 Node refers to a logical node terminating the E2 interface.
  • O-RAN nodes terminating the E2 interface are: for NR access: O-CU-CP, O-CU-UP, O-DU or any combination; and for E-UTRA access: O-eNB.
  • non-RT RIC refers to a logical function that enables non-real-time control and optimization of RAN elements and resources, AI/ML workflow including model training and updates, and policy-based guidance of applications/features in Near-RT RIC.
  • Near-RT RIC or “O-RAN near-real-time RAN Intelligent Controller” refers to a logical function that enables near-real-time control and optimization of RAN elements and resources via fine-grained (e.g., UE basis, Cell basis) data collection and actions over E2 interface.
  • O-RAN Central Unit refers to a logical node hosting RRC, SDAP and PDCP protocols.
  • O-RAN Central Unit - Control Plane or “O-CU-CP” refers to a logical node hosting the RRC and the control plane part of the PDCP protocol.
  • O-RAN Central Unit - User Plane or “O-CU-UP” refers to a logical node hosting the user plane part of the PDCP protocol and the SDAP protocol.
  • O-RAN Distributed Unit refers to a logical node hosting RLC/MAC/High-PHY layers based on a lower layer functional split.
  • O-RAN eNB or “O-eNB” refers to an eNB or ng-eNB that supports E2 interface.
  • O-RAN Radio Unit refers to a logical node hosting Low-PHY layer and RF processing based on a lower layer functional split. This is similar to 3GPP’s “TRP” or “RRH” but more specific in including the Low-PHY layer (FFT/iFFT, PRACH extraction).
  • the term “O1” refers to an interface between orchestration & management entities (Orchestration/NMS) and O-RAN managed elements, for operation and management, by which FCAPS management, Software management, File management and other similar functions shall be achieved.
  • the term “RAN UE Group” refers to an aggregation of UEs whose grouping is set in the E2 nodes through E2 procedures, also based on the scope of A1 policies. These groups can then be the target of E2 CONTROL or POLICY messages.
  • Traffic Steering Action refers to the use of a mechanism to alter RAN behavior. Such actions include E2 procedures such as CONTROL and POLICY.
  • Traffic Steering Inner Loop refers to the part of the Traffic Steering processing, triggered by the arrival of periodic TS related KPM (Key Performance Measurement) from E2 Node, which includes UE grouping, setting additional data collection from the RAN, as well as selection and execution of one or more optimization actions to enforce Traffic Steering policies.
  • KPM: Key Performance Measurement
  • Traffic Steering Outer Loop refers to the part of the Traffic Steering processing, triggered by the near-RT RIC setting up or updating a Traffic Steering aware resource optimization procedure based on information from A1 Policy setup or update, A1 Enrichment Information (EI), and/or the outcome of Near-RT RIC evaluation, which includes the initial configuration (preconditions) and injection of related A1 policies, and triggering conditions for TS changes.
  • Traffic Steering Processing Mode refers to an operational mode in which either the RAN or the Near-RT RIC is configured to ensure a particular network performance. This performance includes such aspects as cell load and throughput, and can apply differently to different E2 nodes and UEs. Throughout this process, Traffic Steering Actions are used to fulfill the requirements of this configuration.
  • Traffic Steering Target refers to the intended performance result that is desired from the network, which is configured to the Near-RT RIC over O1.
  • any of the disclosed embodiments and example implementations can be embodied in the form of various types of hardware, software, firmware, middleware, or combinations thereof, including in the form of control logic, and using such hardware or software in a modular or integrated manner.
  • any of the software components or functions described herein can be implemented as software, program code, script, instructions, etc., operable to be executed by processor circuitry.
  • the software code can be stored as computer- or processor-executable instructions or commands on a physical non-transitory computer-readable medium.
  • suitable media include RAM, ROM, magnetic media such as a hard-drive or a floppy disk, or an optical medium such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like, or any combination of such storage or transmission devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

This disclosure describes systems, methods, and devices related to an O-Cloud LCM service. A device may identify a first request received from a consumer in the SMO in an open radio access network (O-RAN), wherein the request is to instantiate a network function (NF) on an O-Cloud in the 5GS. The device may cause to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF. The device may identify an indication received from the DMS that the deployment of the NF has been completed. The device may cause to send a notification to the consumer that the NF has been instantiated on the O-Cloud.

Description

O-CLOUD LIFECYCLE MANAGEMENT SERVICE SUPPORT
CROSS-REFERENCE TO RELATED PATENT APPLICATION(S)
This application claims the benefit of U.S. Provisional Application No. 63/173,790, filed April 12, 2021, the disclosure of which is incorporated by reference as set forth in full.
TECHNICAL FIELD
This disclosure generally relates to systems and methods for wireless communications and, more particularly, to the field of wireless communications, and in particular, Open Radio Access Network (O-RAN) implementations.
BACKGROUND
Wireless devices are becoming widely prevalent and are increasingly requesting access to wireless channels. The Open RAN Alliance (O-RAN) is committed to evolving radio access networks. O-RAN networks will be deployed based on 3GPP-defined network slicing technologies.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 depicts an illustrative schematic diagram for O-Cloud LCM service, in accordance with one or more example embodiments of the present disclosure.
FIG. 2 depicts an illustrative schematic diagram for O-Cloud LCM service, in accordance with one or more example embodiments of the present disclosure.
FIG. 3 illustrates a flow diagram of a process for an illustrative O-Cloud LCM service system, in accordance with one or more example embodiments of the present disclosure.
FIG. 4 illustrates an example network architecture, in accordance with one or more example embodiments of the present disclosure.
FIG. 5 schematically illustrates a wireless network, in accordance with one or more example embodiments of the present disclosure.
FIG. 6 illustrates components of a computing device, in accordance with one or more example embodiments of the present disclosure.
FIG. 7 illustrates a logical architecture 700 of the O-RAN system architecture, in accordance with one or more example embodiments of the present disclosure.
FIG. 8 illustrates an example O-RAN Architecture including Near-RT RIC interfaces, in accordance with one or more example embodiments of the present disclosure.
FIG. 9 depicts example O-RAN architectures/frameworks for adding 3rd party xApps, in accordance with one or more example embodiments of the present disclosure.
FIG. 10 depicts an example Near-RT RIC Internal Architecture, in accordance with one or more example embodiments of the present disclosure.
DETAILED DESCRIPTION
The following detailed description refers to the accompanying drawings. The same reference numbers may be used in different drawings to identify the same or similar elements. In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular structures, architectures, interfaces, techniques, etc. in order to provide a thorough understanding of the various aspects of various embodiments. However, it will be apparent to those skilled in the art having the benefit of the present disclosure that the various aspects of the various embodiments may be practiced in other examples that depart from these specific details. In certain instances, descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the various embodiments with unnecessary detail. For the present document, the phrases “A or B” and “A/B” mean (A), (B), or (A and B).
The Open RAN Alliance (O-RAN) is committed to evolving radio access networks, making them more open and smarter than previous generations, and will leverage real-time analytics to drive embedded machine learning systems and artificial intelligence back-end modules in the RAN.
FIG. 1 depicts an illustrative schematic diagram for O-Cloud LCM service, in accordance with one or more example embodiments of the present disclosure.
FIG. 1 illustrates an example Open RAN (O-RAN) system architecture 100.
The O-RAN architecture 100 includes four O-RAN defined interfaces - namely, the A1 interface, the O1 interface, the O2 interface, and the Open Fronthaul Management (M)-plane interface - which connect the service management and orchestration (SMO) framework 102 to O-RAN network functions (NFs) 104 and the O-Cloud 106.
The O1 interface is an interface between orchestration & management entities (Orchestration/NMS) and O-RAN managed elements, for operation and management, by which FCAPS management, Software management, File management and other similar functions shall be achieved (see e.g., O-RAN Alliance Working Group (WG) 1, “O-RAN Architecture Description” v02.00 (Jul 2020) (“O-RAN.WG1.O-RAN-Architecture-Description-v02.00”); O-RAN Alliance WG6, “Cloud Architecture and Deployment Scenarios for O-RAN Virtualized RAN” v02.00 (“O-RAN.WG6.CAD-v02.00”)). The O2 interface is an interface between the Service Management and Orchestration Framework and the O-Cloud (see e.g., O-RAN.WG1.O-RAN-Architecture-Description-v02.00, O-RAN.WG6.CAD-v02.00). O-Cloud refers to a cloud computing platform made up of the physical infrastructure nodes using the O-RAN architecture.
The A1 interface is an interface between Non-RT RIC and Near-RT RIC to enable policy-driven guidance of Near-RT RIC applications/functions, and support AI/ML workflow.
The O2 Interface is a collection of services and their associated interfaces that are provided by the O-Cloud platform to the SMO. The services are categorized into two logical groups: (i) Infrastructure Management Services (IMS), which include the subset of O2 functions that are responsible for deploying and managing cloud infrastructure; and (ii) Deployment Management Services (DMS), which include the subset of O2 functions that are responsible for managing the lifecycle of virtualized/containerized deployments on the cloud infrastructure. The O2 services and their associated interfaces shall be specified in the upcoming O2 specification. Any definitions of SMO functional elements needed to consume these services shall be described in the OAM architecture life cycle management (LCM).
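As a structural illustration only, the two logical groups above can be pictured as two service interfaces. The class and method names in this sketch are assumptions for illustration; the normative service definitions belong to the O2 specification.

    # Structural sketch of the two O2 service groups; all names are illustrative.
    from abc import ABC, abstractmethod

    class InfrastructureManagementServices(ABC):
        """IMS: O2 functions that deploy and manage the cloud infrastructure."""

        @abstractmethod
        def provision_infrastructure(self, node_spec: dict) -> str: ...

        @abstractmethod
        def decommission_infrastructure(self, node_id: str) -> None: ...

    class DeploymentManagementServices(ABC):
        """DMS: O2 functions that manage the lifecycle of virtualized or
        containerized deployments on that infrastructure."""

        @abstractmethod
        def create_deployment(self, nf_descriptor: dict) -> str: ...

        @abstractmethod
        def terminate_deployment(self, deployment_id: str) -> None: ...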
The SMO 102 (see e.g., O-RAN Alliance WG1, “O-RAN Operations and Maintenance Interface Specification” v03.00 (Apr 2020) (“O-RAN.WG1.O1-Interface.0-v03.00”)) also connects with an external system 110, which provides enrichment data to the SMO 102. FIG. 1 also illustrates that the A1 interface terminates at an O-RAN Non-Real Time (RT) RAN Intelligent Controller (RIC) 112 in or at the SMO 102 and at the O-RAN Near-RT RIC 114 in or at the O-RAN NFs 104. The O-RAN NFs 104 can be VNFs such as VMs or containers, sitting above the O-Cloud 106, and/or Physical Network Functions (PNFs) utilizing customized hardware. All O-RAN NFs 104 are expected to support the O1 interface when interfacing with the SMO framework 102. The O-RAN NFs 104 connect to the NG-Core 108 via the NG interface (which is a 3GPP defined interface). The Open Fronthaul M-plane interface between the SMO 102 and the O-RAN Radio Unit (O-RU) 116 supports O-RU 116 management in the O-RAN hybrid model as specified in O-RAN Alliance WG4, O-RAN Fronthaul Management Plane Specification, version 2.0 (July 2019) (“ORAN-WG4.MP.0-v02.00.00”). The Open Fronthaul M-plane interface is an optional interface to the SMO 102 that is included for backward compatibility purposes as per ORAN-WG4.MP.0-v02.00.00, and is intended for management of the O-RU 116 in hybrid mode only. The management architecture of flat mode (see O-RAN Alliance WG1, “O-RAN Operations and Maintenance Architecture Specification” v03.00 (Apr 2020) (“O-RAN.WG1.OAM-Architecture-v03.00”)) and its relation to the O1 interface for the O-RU 116 is for future study. The O-RU 116 termination of the O1 interface towards the SMO 102 is specified in O-RAN.WG1.OAM-Architecture-v03.00.
Cloud-native APIs (e.g., Kubernetes, OpenStack, ...) have been proposed to provide deployment management services (DMS) on the O2 interface. The DMS provides management of one or more deployments using the O-Cloud resources. If applications, such as slice subnet management functions, are to invoke cloud-native APIs directly, then each application needs to support multiple cloud-native APIs.
In one or more embodiments, an O-Cloud LCM service system may provide that O-Cloud life cycle management (LCM) services are provided by the network function orchestration (NFO) function, which decouples the complexity of cloud-native APIs from the applications. Indeed, O-Cloud LCM services are an abstraction of cloud-native APIs.
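This decoupling can be sketched as a simple adapter pattern: the NFO exposes a single LCM interface while per-cloud drivers hide the individual cloud-native APIs. Everything below is hypothetical and only illustrates the abstraction; it does not reproduce any normative API.

    # Hypothetical sketch: the NFO hides heterogeneous cloud-native APIs
    # (e.g., Kubernetes, OpenStack) behind one O-Cloud LCM service.
    class KubernetesDriver:
        def create_deployment(self, descriptor: dict) -> str:
            # A real driver would call the Kubernetes API here; stubbed out.
            return "k8s-" + descriptor["nf_name"]

        def delete_deployment(self, deployment_id: str) -> None:
            pass  # the corresponding Kubernetes deletion call would go here

    class NfOrchestrator:
        """Consumers invoke these LCM operations and never see driver APIs."""

        def __init__(self, drivers: dict):
            self._drivers = drivers  # one driver per O-Cloud, keyed by O-Cloud ID

        def instantiate_nf(self, o_cloud_id: str, descriptor: dict) -> str:
            return self._drivers[o_cloud_id].create_deployment(descriptor)

        def terminate_nf(self, o_cloud_id: str, deployment_id: str) -> None:
            self._drivers[o_cloud_id].delete_deployment(deployment_id)

With this shape, a consumer such as a slice management function calls instantiate_nf once, regardless of which cloud-native API the target O-Cloud exposes.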
In one or more embodiments, an O-Cloud LCM service system may provide mechanisms for load balancing optimization (LBO) and mobility robustness optimization (MRO).
FIG. 2 depicts an illustrative schematic diagram for O-Cloud LCM service, in accordance with one or more example embodiments of the present disclosure.
FIG. 2 shows a framework that describes the O-Cloud LCM services exposed by the NFO function to allow consumers, such as a slice management function, to instantiate and/or terminate O-RAN network functions.
In one or more embodiments, an O-Cloud LCM service system may facilitate that the O-Cloud LCM service may enable an authorized consumer in the SMO to send a request to upload NF descriptors and receive instantiate responses. For example, an O-Cloud LCM service system may invoke UploadNfDescriptorsRequest to onboard the NF LCM descriptors. It should be noted that the NF LCM descriptors are an abstraction of VM/container descriptors or Application packages. The O-Cloud LCM service system may receive InstantiateNfResponse with output parameters indicating the result of onboarding. The following Table 1 shows the messages and directions:
Table 1:
[Table 1 is reproduced as an image in the original publication.]
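A minimal sketch of the Table 1 message pair follows. The field names are assumptions, and the response type is given a neutral name here; per the text above, the onboarding result is returned in an InstantiateNfResponse-style message with output parameters.

    # Illustrative message shapes for onboarding NF LCM descriptors (Table 1).
    from dataclasses import dataclass

    @dataclass
    class UploadNfDescriptorsRequest:
        nf_name: str
        nf_lcm_descriptors: list  # abstraction of VM/container descriptors
                                  # or application packages

    @dataclass
    class OnboardingResult:  # hypothetical name for the response payload
        nf_descriptor_id: str  # handle later referenced by InstantiateNfRequest
        result: str            # output parameter, e.g., "SUCCESS" or a cause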
In one or more embodiments, an O-Cloud LCM service system may facilitate O-Cloud LCM operations such as instantiation. The O-Cloud LCM service may enable an authorized consumer in the SMO to request instantiation and receive a response to the instantiation request. For example, an O-Cloud LCM service system may invoke InstantiateNfRequest with reference to NF LCM descriptors that identify the information needed to instantiate the O-RAN NF. It should be noted that if the application is expected to provide an O-Cloud ID for the SMO to select the O-Cloud, then the SMO needs a way to expose the O-Cloud ID to applications. It should also be noted that if the DMS supports both VM and container solutions, it needs to be determined how the solution is chosen when a request is received from the applications.
In one or more embodiments, an O-Cloud LCM service system may receive InstantiateNfResponse with output parameters indicating the status of instantiation.
In one or more embodiments, an O-Cloud LCM service system may receive an LCM notification to indicate the result of instantiation. The following Table 2 shows the messages and directions:
Table 2:
[Table 2 is reproduced as an image in the original publication.]
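The instantiation exchange of Table 2 might be handled along the following lines. The O-Cloud selection policy, the VM-versus-container choice, and the on_complete callback are all assumptions made for illustration, echoing the two notes above.

    # Illustrative NFO-side handling of InstantiateNfRequest (Table 2).
    def handle_instantiate_nf(request: dict, dms_clients: dict, notify) -> dict:
        # Use a consumer-supplied O-Cloud ID if present; otherwise fall back
        # to a placeholder selection policy.
        o_cloud_id = request.get("o_cloud_id", "o-cloud-1")
        dms = dms_clients[o_cloud_id]

        # If the DMS supports both VM and container solutions, choose one
        # from the NF LCM descriptors (assumed flag, for illustration).
        solution = "container" if request["descriptor"].get("containerized") else "vm"
        deployment_id = dms.create_deployment(request["descriptor"], solution)

        # Completion arrives asynchronously as an O-Cloud LCM notification.
        dms.on_complete(deployment_id,
                        lambda: notify({"event": "NfInstantiated",
                                        "deployment_id": deployment_id}))

        # InstantiateNfResponse: output parameters with the instantiation status.
        return {"status": "IN_PROGRESS", "deployment_id": deployment_id}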
In one or more embodiments, an O-Cloud LCM service system may execute an O-Cloud LCM termination operation. The O-Cloud LCM service may enable an authorized consumer in the SMO to send a termination request and receive a termination response.
For example, the O-Cloud LCM service system may invoke TerminateNfRequest with the NF identifier to terminate the NF instance. The O-Cloud LCM service system may receive TerminateNfResponse with output parameters indicating the result of NF termination.
The following Table 3 shows the messages and directions:
Table 3:
[Table 3 is reproduced as an image in the original publication.]
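The Table 3 exchange reduces to a single request/response pair. The sketch below assumes a generic lcm_service client object; the names are illustrative only.

    # Consumer-side view of the termination exchange (Table 3).
    def terminate_nf(lcm_service, nf_instance_id: str) -> dict:
        # TerminateNfRequest carries the NF identifier of the instance to remove.
        response = lcm_service.terminate({"nf_instance_id": nf_instance_id})
        # TerminateNfResponse carries output parameters with the termination result.
        return response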
In one or more embodiments, an O-Cloud LCM service system may instantiate a Network Function on an O-Cloud. The following use case (Table 4) describes the instantiation of a Network Function as a new deployment on an O-Cloud, and notification to the SMO once the instantiation of resources for the Network Function deployment has been completed.
The instantiation on the O-Cloud Node may be part of a larger procedure instantiating multiple connected Network Functions, in which case the SMO will coordinate the timing of instantiation across O-Clouds and O-Cloud Nodes, the configuration of transport needed between O-Cloud Nodes, and other requirements such as addressing and security used for connecting the Network Functions. Instantiation of multiple connected NFs is not addressed in the use case shown in Table 4.
Table 4: Sequence Description
[Table 4 is reproduced as images in the original publication.]
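One way to picture the DMS side of this sequence is sketched below: the DMS accepts the create-deployment request, allocates resources asynchronously, and signals completion back toward the NFO in the SMO. The threading and callback details are assumptions for illustration.

    # Hypothetical DMS-side sketch of the Table 4 sequence.
    import threading

    class DeploymentManagementService:
        def __init__(self, completion_callback):
            self._completed = completion_callback  # invoked toward the NFO/SMO

        def create_deployment(self, descriptor: dict) -> str:
            deployment_id = "dep-" + descriptor["nf_name"]
            # Resource allocation proceeds asynchronously on the O-Cloud nodes.
            threading.Thread(target=self._deploy, args=(deployment_id,)).start()
            return deployment_id

        def _deploy(self, deployment_id: str) -> None:
            # ... allocate compute/storage/network for the NF deployment ...
            self._completed(deployment_id)  # DMS indicates completion to the NFO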
In one or more embodiments, an O-Cloud LCM service system may terminate a Network Function on an O-Cloud.
The following use case (Table 5) describes the termination of a Network Function deployment on an O-Cloud, and notification to the SMO once the termination of resources for the Network Function deployment has been completed.
Table 5: Sequence Description
[Table 5 is reproduced as an image in the original publication.]
In one or more embodiments, an O-Cloud LCM service system may define requirements for O-RAN managed function IM elements and FM/PM elements needed for orchestration using the O1 interface. This is shown in the following Tables 6 and 7.
Table 6: Orchestration Requirements Relating to O1
[Table 6 is reproduced as an image in the original publication.]
Table 7: Orchestration Requirements Relating to O2
[Table 7 is reproduced as an image in the original publication.]
It is understood that the above descriptions are for purposes of illustration and are not meant to be limiting.
FIG. 3 illustrates a flow diagram of illustrative process 300 for an illustrative O-Cloud LCM service system, in accordance with one or more example embodiments of the present disclosure.
At block 302, a device may identify a first request received from a consumer in the SMO in an open radio access network (O-RAN), wherein the request is to instantiate a network function (NF) on an O-Cloud in the 5GS. The request specifies placement requirements indicating where the NF needs to be instantiated. The request is an UploadNfDescriptorsRequest from the consumer.
At block 304, the device may cause to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF.
At block 306, the device may identify an indication received from the DMS that the deployment of the NF has been completed.
At block 308, the device may cause to send a notification to the consumer that the NF has been instantiated on the O-Cloud. The notification is an InstantiateNfResponse to the consumer. The notification is an O-Cloud lifecycle management (LCM) notification. The InstantiateNfResponse comprises output parameters indicating the result of the deployment of the NF on the O-Cloud.
In one or more embodiments, the device may onboard the NF, wherein the NF comprises a descriptor providing requirements associated with the deployment of the NF. The device may identify a termination request received from the consumer, wherein the termination request is for NF termination. The device may cause to send a termination response to the consumer indicating the result of the NF termination.
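Blocks 302-308, together with the onboarding and termination exchanges just described, compose into a single flow. The sketch below is a compact restatement under assumed method names; it is not a normative implementation.

    # Compact restatement of process 300 (blocks 302-308); names are assumed.
    def o_cloud_lcm_instantiate(consumer_request: dict, dms, consumer) -> None:
        # Block 302: first request from the SMO consumer, including the
        # placement requirements for the NF.
        descriptor = consumer_request["descriptor"]
        placement = consumer_request["placement_requirements"]

        # Block 304: second request to the DMS to create the NF deployment.
        deployment_id = dms.create_deployment(descriptor, placement)

        # Block 306: indication from the DMS that the deployment completed.
        dms.wait_for_completion(deployment_id)

        # Block 308: O-Cloud LCM notification back to the consumer.
        consumer.notify({"notification": "InstantiateNfResponse",
                         "deployment_id": deployment_id})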
It is understood that the above descriptions are for purposes of illustration and are not meant to be limiting.
FIGs. 4-6 illustrate various systems, devices, and components that may implement aspects of disclosed embodiments.
FIG. 4 illustrates an example network architecture 400 according to various embodiments. The network 400 may operate in a manner consistent with 3GPP technical specifications for LTE or 5G/NR systems. However, the example embodiments are not limited in this regard and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like.
The network 400 includes a UE 402, which is any mobile or non-mobile computing device designed to communicate with a RAN 404 via an over-the-air connection. The UE 402 is communicatively coupled with the RAN 404 by a Uu interface, which may be applicable to both LTE and NR systems. Examples of the UE 402 include, but are not limited to, a smartphone, tablet computer, wearable computer, desktop computer, laptop computer, in-vehicle infotainment system, in-car entertainment system, instrument cluster, head-up display (HUD) device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, machine-to-machine (M2M), device-to-device (D2D), machine-type communication (MTC) device, Internet of Things (IoT) device, and/or the like. The network 400 may include a plurality of UEs 402 coupled directly with one another via a D2D, ProSe, PC5, and/or sidelink (SL) interface. These UEs 402 may be M2M/D2D/MTC/IoT devices and/or vehicular systems that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc. The UE 402 may perform blind decoding attempts of SL channels/links according to the various embodiments herein.
In some embodiments, the UE 402 may additionally communicate with an AP 406 via an over-the-air (OTA) connection. The AP 406 manages a WLAN connection, which may serve to offload some/all network traffic from the RAN 404. The connection between the UE 402 and the AP 406 may be consistent with any IEEE 802.11 protocol. Additionally, the UE 402, RAN 404, and AP 406 may utilize cellular- WLAN aggregation/integration (e.g., LWA/LWIP). Cellular- WLAN aggregation may involve the UE 402 being configured by the RAN 404 to utilize both cellular radio resources and WLAN resources.
The RAN 404 includes one or more access network nodes (ANs) 408. The ANs 408 terminate air-interface(s) for the UE 402 by providing access stratum protocols including RRC, PDCP, RLC, MAC, and PHY/L1 protocols. In this manner, the AN 408 enables data/voice connectivity between CN 420 and the UE 402. The ANs 408 may be a macrocell base station or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells; or some combination thereof. In these implementations, an AN 408 may be referred to as a BS, gNB, RAN node, eNB, ng-eNB, NodeB, RSU, TRxP, etc.
One example implementation is a “CU/DU split” architecture where the ANs 408 are embodied as a gNB-Central Unit (CU) that is communicatively coupled with one or more gNB-Distributed Units (DUs), where each DU may be communicatively coupled with one or more Radio Units (RUs) (also referred to as RRHs, RRUs, or the like) (see e.g., 3GPP TS 38.401 v16.1.0 (2020-03)). In some implementations, the one or more RUs may be individual RSUs. In some implementations, the CU/DU split may include an ng-eNB-CU and one or more ng-eNB-DUs instead of, or in addition to, the gNB-CU and gNB-DUs, respectively. The ANs 408 employed as the CU may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network including a virtual Base Band Unit (BBU) or BBU pool, cloud RAN (CRAN), Radio Equipment Controller (REC), Radio Cloud Center (RCC), centralized RAN (C-RAN), virtualized RAN (vRAN), and/or the like (although these terms may refer to different implementation concepts). Any other type of architectures, arrangements, and/or configurations can be used.
The plurality of ANs may be coupled with one another via an X2 interface (if the RAN 404 is an LTE RAN or Evolved Universal Terrestrial Radio Access Network (E-UTRAN) 410) or an Xn interface (if the RAN 404 is a NG-RAN 414). The X2/Xn interfaces, which may be separated into control/user plane interfaces in some embodiments, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, etc.
The ANs of the RAN 404 may each manage one or more cells, cell groups, component carriers, etc. to provide the UE 402 with an air interface for network access. The UE 402 may be simultaneously connected with a plurality of cells provided by the same or different ANs 408 of the RAN 404. For example, the UE 402 and RAN 404 may use carrier aggregation to allow the UE 402 to connect with a plurality of component carriers, each corresponding to a Pcell or Scell. In dual connectivity scenarios, a first AN 408 may be a master node that provides an MCG and a second AN 408 may be secondary node that provides an SCG. The first/second ANs 408 may be any combination of eNB, gNB, ng-eNB, etc.
The RAN 404 may provide the air interface over a licensed spectrum or an unlicensed spectrum. To operate in the unlicensed spectrum, the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/Scells. Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.
In V2X scenarios the UE 402 or AN 408 may be or act as a roadside unit (RSU), which may refer to any transportation infrastructure entity used for V2X communications. An RSU may be implemented in or by a suitable AN or a stationary (or relatively stationary) UE. An RSU implemented in or by: a UE may be referred to as a “UE-type RSU”; an eNB may be referred to as an “eNB-type RSU”; a gNB may be referred to as a “gNB-type RSU”; and the like. In one example, an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs. The RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic. The RSU may provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may provide other cellular/WLAN communications services. The components of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network.
In some embodiments, the RAN 404 may be an E-UTRAN 410 with one or more eNBs 412. The E-UTRAN 410 provides an LTE air interface (Uu) with the following characteristics: SCS of 15 kHz; CP-OFDM waveform for DL and SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc. The LTE air interface may rely on CSI-RS for CSI acquisition and beam management; PDSCH/PDCCH DMRS for PDSCH/PDCCH demodulation; and CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE. The LTE air interface may operate on sub-6 GHz bands.
In some embodiments, the RAN 404 may be a next generation (NG)-RAN 414 with one or more gNBs 416 and/or one or more ng-eNBs 418. The gNB 416 connects with 5G-enabled UEs 402 using a 5G NR interface. The gNB 416 connects with a 5GC 440 through an NG interface, which includes an N2 interface or an N3 interface. The ng-eNB 418 also connects with the 5GC 440 through an NG interface, but may connect with a UE 402 via the Uu interface. The gNB 416 and the ng-eNB 418 may connect with each other over an Xn interface.
In some embodiments, the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 414 and a UPF 448 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 414 and an AMF 444 (e.g., N2 interface).
The NG-RAN 414 may provide a 5G-NR air interface (which may also be referred to as a Uu interface) with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data. The 5G-NR air interface may rely on CSI-RS and PDSCH/PDCCH DMRS similar to the LTE air interface. The 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking for PDSCH; and tracking reference signal for time tracking. The 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz. The 5G-NR air interface may include an SSB that is an area of a downlink resource grid that includes PSS/SSS/PBCH. The 5G-NR air interface may utilize BWPs for various purposes. For example, a BWP can be used for dynamic adaptation of the SCS. For example, the UE 402 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 402, the SCS of the transmission is changed as well. Another use case example of BWP is related to power saving. In particular, multiple BWPs can be configured for the UE 402 with different amounts of frequency resources (e.g., PRBs) to support data transmission under different traffic loading scenarios. A BWP containing a smaller number of PRBs can be used for data transmission with small traffic load while allowing power saving at the UE 402 and in some cases at the gNB 416. A BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
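As a toy illustration of the BWP-based power-saving idea described above, a scheduler-like function could pick a narrow BWP under light traffic and a wide BWP under heavy traffic. The PRB counts, SCS values, and threshold below are invented for illustration only.

    # Toy BWP selection by traffic load; all values are illustrative only.
    BWP_CONFIGS = [
        {"bwp_id": 0, "prbs": 24, "scs_khz": 15},   # narrow BWP: power saving
        {"bwp_id": 1, "prbs": 273, "scs_khz": 30},  # wide BWP: heavy traffic
    ]

    def select_bwp(buffered_bytes: int, threshold: int = 100_000) -> dict:
        """Return the wide BWP when queued traffic exceeds the threshold."""
        return BWP_CONFIGS[1] if buffered_bytes > threshold else BWP_CONFIGS[0]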
The RAN 404 is communicatively coupled to CN 420 that includes network elements and/or network functions (NFs) to provide various functions to support data and telecommunications services to customers/subscribers (e.g., UE 402). The components of the CN 420 may be implemented in one physical node or separate physical nodes. In some embodiments, NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 420 onto physical compute/storage resources in servers, switches, etc. A logical instantiation of the CN 420 may be referred to as a network slice, and a logical instantiation of a portion of the CN 420 may be referred to as a network sub-slice.
The CN 420 may be an LTE CN 422 (also referred to as an Evolved Packet Core (EPC) 422). The EPC 422 may include MME 424, SGW 426, SGSN 428, HSS 430, PGW 432, and PCRF 434 coupled with one another over interfaces (or “reference points”) as shown. The NFs in the EPC 422 are briefly introduced as follows.
The MME 424 implements mobility management functions to track a current location of the UE 402 to facilitate paging, bearer activation/deactivation, handovers, gateway selection, authentication, etc.
The SGW 426 terminates an S1 interface toward the RAN 410 and routes data packets between the RAN 410 and the EPC 422. The SGW 426 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.
The SGSN 428 tracks a location of the UE 402 and performs security functions and access control. The SGSN 428 also performs inter-EPC node signaling for mobility between different RAT networks; PDN and S-GW selection as specified by MME 424; MME 424 selection for handovers; etc. The S3 reference point between the MME 424 and the SGSN 428 enables user and bearer information exchange for inter-3GPP access network mobility in idle/active states.
The HSS 430 includes a database for network users, including subscription-related information to support the network entities’ handling of communication sessions. The HSS 430 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc. An S6a reference point between the HSS 430 and the MME 424 may enable transfer of subscription and authentication data for authenticating/authorizing user access to the EPC 422.
The PGW 432 may terminate an SGi interface toward a data network (DN) 436 that may include an application (app)/content server 438. The PGW 432 routes data packets between the EPC 422 and the data network 436. The PGW 432 is communicatively coupled with the SGW 426 by an S5 reference point to facilitate user plane tunneling and tunnel management. The PGW 432 may further include a node for policy enforcement and charging data collection (e.g., PCEF). Additionally, the SGi reference point may communicatively couple the PGW 432 with the same or different data network 436. The PGW 432 may be communicatively coupled with a PCRF 434 via a Gx reference point.
The PCRF 434 is the policy and charging control element of the EPC 422. The PCRF 434 is communicatively coupled to the app/content server 438 to determine appropriate QoS and charging parameters for service flows. The PCRF 434 also provisions associated rules into a PCEF (via Gx reference point) with appropriate TFT and QCI.
The CN 420 may be a 5GC 440 including an AUSF 442, AMF 444, SMF 446, UPF 448, NSSF 450, NEF 452, NRF 454, PCF 456, UDM 458, and AF 460 coupled with one another over various interfaces as shown. The NFs in the 5GC 440 are briefly introduced as follows.
The AUSF 442 stores data for authentication of UE 402 and handles authentication-related functionality. The AUSF 442 may facilitate a common authentication framework for various access types.
The AMF 444 allows other functions of the 5GC 440 to communicate with the UE 402 and the RAN 404 and to subscribe to notifications about mobility events with respect to the UE 402. The AMF 444 is also responsible for registration management (e.g., for registering UE 402), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization. The AMF 444 provides transport for SM messages between the UE 402 and the SMF 446, and acts as a transparent proxy for routing SM messages. AMF 444 also provides transport for SMS messages between UE 402 and an SMSF. AMF 444 interacts with the AUSF 442 and the UE 402 to perform various security anchor and context management functions. Furthermore, AMF 444 is a termination point of a RAN-CP interface, which includes the N2 reference point between the RAN 404 and the AMF 444. The AMF 444 is also a termination point of NAS (N1) signaling, and performs NAS ciphering and integrity protection.
AMF 444 also supports NAS signaling with the UE 402 over an N3IWF interface. The N3IWF provides access to untrusted entities. The N3IWF may be a termination point for the N2 interface between the (R)AN 404 and the AMF 444 for the control plane, and may be a termination point for the N3 reference point between the (R)AN 414 and the UPF 448 for the user plane. As such, the N3IWF handles N2 signalling from the SMF 446 (relayed by the AMF 444) for PDU sessions and QoS, encapsulates/de-encapsulates packets for IPsec and N3 tunnelling, marks N3 user-plane packets in the uplink, and enforces QoS corresponding to N3 packet marking, taking into account QoS requirements associated with such marking received over N2. The N3IWF may also relay UL and DL control-plane NAS signalling between the UE 402 and AMF 444 via an N1 reference point between the UE 402 and the AMF 444, and relay uplink and downlink user-plane packets between the UE 402 and UPF 448. The N3IWF also provides mechanisms for IPsec tunnel establishment with the UE 402. The AMF 444 may exhibit an Namf service-based interface, and may be a termination point for an N14 reference point between two AMFs 444 and an N17 reference point between the AMF 444 and a 5G-EIR (not shown by FIG. 4).
The SMF 446 is responsible for SM (e.g., session establishment, tunnel management between UPF 448 and AN 408); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 448 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; downlink data notification; initiating AN specific SM information, sent via AMF 444 over N2 to AN 408; and determining SSC mode of a session. SM refers to management of a PDU session, and a PDU session or “session” refers to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 402 and the DN 436.
The UPF 448 acts as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to data network 436, and a branching point to support multi-homed PDU sessions. The UPF 448 also performs packet routing and forwarding and packet inspection, enforces the user plane part of policy rules, performs lawful intercept of packets (UP collection), performs traffic usage reporting, performs QoS handling for the user plane (e.g., packet filtering, gating, UL/DL rate enforcement), performs uplink traffic verification (e.g., SDF-to-QoS flow mapping), performs transport level packet marking in the uplink and downlink, and performs downlink packet buffering and downlink data notification triggering. UPF 448 may include an uplink classifier to support routing traffic flows to a data network.
The NSSF 450 selects a set of network slice instances serving the UE 402. The NSSF 450 also determines allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed. The NSSF 450 also determines an AMF set to be used to serve the UE 402, or a list of candidate AMFs 444 based on a suitable configuration and possibly by querying the NRF 454. The selection of a set of network slice instances for the UE 402 may be triggered by the AMF 444 with which the UE 402 is registered by interacting with the NSSF 450; this may lead to a change of AMF 444. The NSSF 450 interacts with the AMF 444 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown).
The NEF 452 securely exposes services and capabilities provided by 3GPP NFs for third party, internal exposure/re-exposure, AFs 460, edge computing or fog computing systems (e.g., edge compute nodes, etc.). In such embodiments, the NEF 452 may authenticate, authorize, or throttle the AFs. NEF 452 may also translate information exchanged with the AF 460 and information exchanged with internal network functions. For example, the NEF 452 may translate between an AF-Service-Identifier and internal 5GC information. NEF 452 may also receive information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 452 as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF 452 to other NFs and AFs, or used for other purposes such as analytics.
The NRF 454 supports service discovery functions: it receives NF discovery requests from NF instances or an SCP (not shown), and provides information about the discovered NF instances to the requesting NF instance or SCP. The NRF 454 also maintains information about available NF instances and their supported services.
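By way of non-limiting illustration only, the discovery interaction described above can be exercised as a simple REST query. The sketch below assumes the 3GPP Nnrf_NFDiscovery API (see 3GPP TS 29.510) and a hypothetical NRF address; neither the host name nor the response handling is mandated by this disclosure.

```python
# Minimal sketch of an NF discovery query toward an NRF, assuming the
# 3GPP Nnrf_NFDiscovery REST API (TS 29.510) and a hypothetical NRF host.
import requests

NRF_BASE = "http://nrf.example.com"  # hypothetical NRF endpoint

def discover_nf_instances(target_nf_type: str, requester_nf_type: str) -> list:
    """Ask the NRF for NF instances of the requested type."""
    resp = requests.get(
        f"{NRF_BASE}/nnrf-disc/v1/nf-instances",
        params={
            "target-nf-type": target_nf_type,
            "requester-nf-type": requester_nf_type,
        },
        timeout=5,
    )
    resp.raise_for_status()
    # The NRF answers with a SearchResult carrying the discovered NF profiles.
    return resp.json().get("nfInstances", [])

# Example (not executed here): an AMF discovering candidate SMFs.
# smfs = discover_nf_instances("SMF", "AMF")
```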
The PCF 456 provides policy rules to control-plane functions that enforce them, and may also support a unified policy framework to govern network behavior. The PCF 456 may also implement a front end to access subscription information relevant for policy decisions in a UDR of the UDM 458. In addition to communicating with functions over reference points as shown, the PCF 456 exhibits an Npcf service-based interface. The UDM 458 handles subscription-related information to support the network entities’ handling of communication sessions, and stores subscription data of the UE 402. For example, subscription data may be communicated via an N8 reference point between the UDM 458 and the AMF 444. The UDM 458 may include two parts: an application front end and a UDR. The UDR may store subscription data and policy data for the UDM 458 and the PCF 456, and/or structured data for exposure and application data (including PFDs for application detection and application request information for multiple UEs 402) for the NEF 452. The Nudr service-based interface may be exhibited by the UDR to allow the UDM 458, PCF 456, and NEF 452 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR. The UDM may include a UDM-FE, which is in charge of processing credentials, location management, subscription management, and so on. Several different front ends may serve the same user in different transactions. The UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management. In addition to communicating with other NFs over reference points as shown, the UDM 458 may exhibit the Nudm service-based interface.
The AF 460 provides application influence on traffic routing, provides access to the NEF 452, and interacts with the policy framework for policy control. The AF 460 may influence UPF 448 (re)selection and traffic routing. Based on operator deployment, when the AF 460 is considered to be a trusted entity, the network operator may permit the AF 460 to interact directly with relevant NFs. Additionally, the AF 460 may be used for edge computing implementations.
The 5GC 440 may enable edge computing by selecting operator/3rd-party services to be geographically close to a point at which the UE 402 is attached to the network. This may reduce latency and load on the network. In edge computing implementations, the 5GC 440 may select a UPF 448 close to the UE 402 and execute traffic steering from the UPF 448 to the DN 436 via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF 460, which allows the AF 460 to influence UPF (re)selection and traffic routing.
The data network (DN) 436 may represent various network operator services, Internet access, or third-party services that may be provided by one or more servers including, for example, an application (app)/content server 438. The DN 436 may be an operator-external public PDN, a private PDN, or an intra-operator packet data network, for example, for provision of IMS services. In this embodiment, the app server 438 can be coupled to an IMS via an S-CSCF or the I-CSCF. In some implementations, the DN 436 may represent one or more local area DNs (LADNs), which are DNs 436 (or DN names (DNNs)) that is/are accessible by a UE 402 in one or more specific areas. Outside of these specific areas, the UE 402 is not able to access the LADN/DN 436.
Additionally or alternatively, the DN 436 may be an Edge DN 436, which is a (local) Data Network that supports the architecture for enabling edge applications. In these embodiments, the app server 438 may represent the physical hardware systems/devices providing app server functionality and/or the application software resident in the cloud or at an edge compute node that performs server function(s). In some embodiments, the app/content server 438 provides an edge hosting environment that provides the support required for an Edge Application Server's execution.
In some embodiments, the 5GS can use one or more edge compute nodes to provide an interface and offload processing of wireless communication traffic. In these embodiments, the edge compute nodes may be included in, or co-located with, one or more RANs 410, 414. For example, the edge compute nodes can provide a connection between the RAN 414 and UPF 448 in the 5GC 440. The edge compute nodes can use one or more NFV instances instantiated on virtualization infrastructure within the edge compute nodes to process wireless connections to and from the RAN 414 and UPF 448.
The interfaces of the 5GC 440 include reference points and service-based interfaces. The reference points include: N1 (between the UE 402 and the AMF 444), N2 (between RAN 414 and AMF 444), N3 (between RAN 414 and UPF 448), N4 (between the SMF 446 and UPF 448), N5 (between PCF 456 and AF 460), N6 (between UPF 448 and DN 436), N7 (between SMF 446 and PCF 456), N8 (between UDM 458 and AMF 444), N9 (between two UPFs 448), N10 (between the UDM 458 and the SMF 446), N11 (between the AMF 444 and the SMF 446), N12 (between AUSF 442 and AMF 444), N13 (between AUSF 442 and UDM 458), N14 (between two AMFs 444; not shown), N15 (between PCF 456 and AMF 444 in the case of a non-roaming scenario, or between the PCF 456 in a visited network and AMF 444 in the case of a roaming scenario), N16 (between two SMFs 446; not shown), and N22 (between AMF 444 and NSSF 450). Other reference point representations not shown in FIG. 4 can also be used. The service-based representation of FIG. 4 represents NFs within the control plane that enable other authorized NFs to access their services. The service-based interfaces (SBIs) include: Namf (SBI exhibited by AMF 444), Nsmf (SBI exhibited by SMF 446), Nnef (SBI exhibited by NEF 452), Npcf (SBI exhibited by PCF 456), Nudm (SBI exhibited by the UDM 458), Naf (SBI exhibited by AF 460), Nnrf (SBI exhibited by NRF 454), Nnssf (SBI exhibited by NSSF 450), and Nausf (SBI exhibited by AUSF 442). Other service-based interfaces (e.g., Nudr, N5g-eir, and Nudsf) not shown in FIG. 4 can also be used. In some embodiments, the NEF 452 can provide an interface to edge compute nodes 436x, which can be used to process wireless connections with the RAN 414.
In some implementations, the system 400 may include an SMSF, which is responsible for SMS subscription checking and verification, and for relaying SM messages to/from the UE 402 to/from other entities, such as an SMS-GMSC/IWMSC/SMS-router. The SMSF may also interact with the AMF 444 and UDM 458 for a notification procedure indicating that the UE 402 is available for SMS transfer (e.g., setting a UE-not-reachable flag, and notifying the UDM 458 when the UE 402 is available for SMS).
The 5GS may also include an SCP (or individual instances of the SCP) that supports indirect communication (see e.g., 3GPP TS 23.501 section 7.1.1); delegated discovery (see e.g., 3GPP TS 23.501 section 7.1.1); message forwarding and routing to destination NF/NF service(s); communication security (e.g., authorization of the NF Service Consumer to access the NF Service Producer API) (see e.g., 3GPP TS 33.501), load balancing, monitoring, overload control, etc.; and discovery and selection functionality for UDM(s), AUSF(s), UDR(s), and PCF(s) with access to subscription data stored in the UDR based on the UE's SUPI, SUCI, or GPSI (see e.g., 3GPP TS 23.501 section 6.3). The load balancing, monitoring, and overload control functionality provided by the SCP may be implementation specific. The SCP may be deployed in a distributed manner, and more than one SCP can be present in the communication path between various NF services. Although not an NF instance, the SCP can be deployed in a distributed, redundant, and scalable fashion.
FIG. 5 schematically illustrates a wireless network 500 in accordance with various embodiments. The wireless network 500 may include a UE 502 in wireless communication with an AN 504. The UE 502 and AN 504 may be similar to, and substantially interchangeable with, like-named components described with respect to FIG. 4.
The UE 502 may be communicatively coupled with the AN 504 via connection 506. The connection 506 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6GHz frequencies.
The UE 502 may include a host platform 508 coupled with a modem platform 510. The host platform 508 may include application processing circuitry 512, which may be coupled with protocol processing circuitry 514 of the modem platform 510. The application processing circuitry 512 may run various applications for the UE 502 that source/sink application data. The application processing circuitry 512 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations may include transport (for example, UDP) and Internet (for example, IP) operations.
The protocol processing circuitry 514 may implement one or more layer operations to facilitate transmission or reception of data over the connection 506. The layer operations implemented by the protocol processing circuitry 514 may include, for example, MAC, RLC, PDCP, RRC, and NAS operations.
The modem platform 510 may further include digital baseband circuitry 516 that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry 514 in a network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ acknowledgement (ACK) functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
The modem platform 510 may further include transmit circuitry 518, receive circuitry 520, RF circuitry 522, and RF front end (RFFE) 524, which may include or connect to one or more antenna panels 526. Briefly, the transmit circuitry 518 may include a digital-to-analog converter, mixer, intermediate frequency (IF) components, etc.; the receive circuitry 520 may include an analog-to-digital converter, mixer, IF components, etc.; the RF circuitry 522 may include a low-noise amplifier, a power amplifier, power tracking components, etc.; RFFE 524 may include filters (for example, surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (for example, phase-array antenna components), etc. The selection and arrangement of the components of the transmit circuitry 518, receive circuitry 520, RF circuitry 522, RFFE 524, and antenna panels 526 (referred to generically as “transmit/receive components”) may be specific to the details of a particular implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, etc. In some embodiments, the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, etc.
In some embodiments, the protocol processing circuitry 514 may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components. A UE 502 reception may be established by and via the antenna panels 526, RFFE 524, RF circuitry 522, receive circuitry 520, digital baseband circuitry 516, and protocol processing circuitry 514. In some embodiments, the antenna panels 526 may receive a transmission from the AN 504 by receive-beamforming signals received by a plurality of antennas/antenna elements of the one or more antenna panels 526.
A UE 502 transmission may be established by and via the protocol processing circuitry 514, digital baseband circuitry 516, transmit circuitry 518, RF circuitry 522, RFFE 524, and antenna panels 526. In some embodiments, the transmit components of the UE 502 may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels 526.
Similar to the UE 502, the AN 504 may include a host platform 528 coupled with a modem platform 530. The host platform 528 may include application processing circuitry 532 coupled with protocol processing circuitry 534 of the modem platform 530. The modem platform may further include digital baseband circuitry 536, transmit circuitry 538, receive circuitry 540, RF circuitry 542, RFFE circuitry 544, and antenna panels 546. The components of the AN 504 may be similar to and substantially interchangeable with like-named components of the UE 502. In addition to performing data transmission/reception as described above, the components of the AN 504 may perform various logical functions that include, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.
FIG. 6 illustrates components of a computing device 600 according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically, FIG. 6 shows a diagrammatic representation of hardware resources 600 including one or more processors (or processor cores) 610, one or more memory/storage devices 620, and one or more communication resources 630, each of which may be communicatively coupled via a bus 640 or other interface circuitry. For embodiments where node virtualization (e.g., NFV) is utilized, a hypervisor 602 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 600.
The processors 610 include, for example, processor 612 and processor 614. The processors 610 include circuitry such as, but not limited to, one or more processor cores and one or more of cache memory, low drop-out voltage regulators (LDOs), interrupt controllers, serial interfaces such as SPI, I2C, or a universal programmable serial interface circuit, a real time clock (RTC), timer-counters including interval and watchdog timers, general purpose I/O, memory card controllers such as secure digital/multi-media card (SD/MMC) or similar, mobile industry processor interface (MIPI) interfaces, and Joint Test Access Group (JTAG) test access ports. The processors 610 may be, for example, a central processing unit (CPU), reduced instruction set computing (RISC) processors, Acorn RISC Machine (ARM) processors, complex instruction set computing (CISC) processors, graphics processing units (GPUs), one or more Digital Signal Processors (DSPs) such as a baseband processor, Application-Specific Integrated Circuits (ASICs), a Field-Programmable Gate Array (FPGA), a radio-frequency integrated circuit (RFIC), one or more microprocessors or controllers, another processor (including those discussed herein), or any suitable combination thereof. In some implementations, the processor circuitry 610 may include one or more hardware accelerators, which may be microprocessors, programmable processing devices (e.g., FPGAs, complex programmable logic devices (CPLDs), etc.), or the like.
The memory/storage devices 620 may include main memory, disk storage, or any suitable combination thereof. The memory/storage devices 620 may include, but are not limited to, any type of volatile, non-volatile, or semi-volatile memory such as random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, phase change RAM (PRAM), resistive memory such as magnetoresistive random access memory (MRAM), etc., and may incorporate three-dimensional (3D) cross-point (XPOINT) memories from Intel® and Micron®. The memory/storage devices 620 may also comprise persistent storage devices, which may be temporary and/or persistent storage of any type, including, but not limited to, non-volatile memory, optical, magnetic, and/or solid state mass storage, and so forth.
The communication resources 630 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 604 or one or more databases 606 or other network elements via a network 608. For example, the communication resources 630 may include wired communication components (e.g., for coupling via USB, Ethernet, Ethernet over GRE Tunnels, Ethernet over Multiprotocol Label Switching (MPLS), Ethernet over USB, Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, WiFi® components, and other communication components. Network connectivity may be provided to/from the computing device 600 via the communication resources 630 using a physical connection, which may be electrical (e.g., a “copper interconnect”) or optical. The physical connection also includes suitable input connectors (e.g., ports, receptacles, sockets, etc.) and output connectors (e.g., plugs, pins, etc.). The communication resources 630 may include one or more dedicated processors and/or FPGAs to communicate using one or more of the aforementioned network interface protocols.
Instructions 650 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors 610 to perform any one or more of the methodologies discussed herein. The instructions 650 may reside, completely or partially, within at least one of the processors 610 (e.g., within the processor’s cache memory), the memory/storage devices 620, or any suitable combination thereof. Furthermore, any portion of the instructions 650 may be transferred to the hardware resources 600 from any combination of the peripheral devices 604 or the databases 606. Accordingly, the memory of processors 610, the memory/storage devices 620, the peripheral devices 604, and the databases 606 are examples of computer-readable and machine-readable media.
FIG. 7 illustrates a logical architecture 700 of the O-RAN system architecture 100 of FIG. 1. In FIG. 7, the SMO 702 corresponds to the SMO 102, the O-Cloud 706 corresponds to the O-Cloud 106, the non-RT RIC 712 corresponds to the non-RT RIC 112, the near-RT RIC 714 corresponds to the near-RT RIC 114, and the O-RU 716 corresponds to the O-RU 116 of FIG. 1, respectively. The O-RAN logical architecture 700 includes a radio portion and a management portion.
The management portion/side of the architectures 700 includes the SMO Framework 702 containing the non-RT RIC 712, and may include the O-Cloud 706. The O-Cloud 706 is a cloud computing platform including a collection of physical infrastructure nodes to host the relevant O-RAN functions (e.g., the near-RT RIC 714, O-CU-CP 721, O-CU-UP 722, and the O-DU 715), supporting software components (e.g., OSs, VMMs, container runtime engines, ML engines, etc.), and appropriate management and orchestration functions.
An O-Cloud instance 706 refers to a collection of O-Cloud Resource Pools at one or more locations and the software to manage Nodes and Deployments hosted on them. An O-Cloud includes functionality to support both Deployment-plane (a.k.a. user-plane) and Management services. The O-Cloud provides a single logical reference point for all O-Cloud Resource Pools within the O-Cloud boundary. An O-Cloud Resource Pool is a collection of O-Cloud Nodes with homogeneous profiles in one location, which can be used for either Management services or Deployment-plane functions. The allocation of an NF deployment to a resource pool is determined by the SMO. An O-Cloud Node is a collection of CPUs, memory, storage, NICs, accelerators, BIOSes, BMCs, etc., and can be thought of as a server. Each O-Cloud Node supports one or more “roles”, as described next. An O-Cloud Node Role refers to the functionalities that a given node may support. These include Compute, Storage, and Networking for the Deployment plane (i.e., user-plane related functions such as the O-RAN NFs); they may include optional acceleration functions, and they may also include the appropriate Management services. An O-Cloud Deployment Plane is a logical construct representing the O-Cloud Nodes across the Resource Pools which are used to create NF Deployments. An O-Cloud 706 NF Deployment is a deployment of a cloud-native network function (all or partial), resources shared within an NF, or resources shared across network functions. The NF Deployment configures and assembles the user-plane resources required for the cloud-native construct used to establish the NF Deployment and manages its life cycle from creation to deletion.
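The relationships among the O-Cloud constructs just described can be pictured with a small, non-normative data model; all class and field names in the sketch below are assumptions for illustration, not normative O-RAN definitions.

```python
# Illustrative data model of the O-Cloud constructs described above
# (Resource Pool, Node, Node Role, NF Deployment); all names are assumptions.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class NodeRole(Enum):
    COMPUTE = "compute"            # Deployment-plane compute
    STORAGE = "storage"            # Deployment-plane storage
    NETWORKING = "networking"      # Deployment-plane networking
    ACCELERATION = "acceleration"  # optional acceleration functions
    MANAGEMENT = "management"      # Management services

@dataclass
class OCloudNode:
    """A server-like collection of CPUs, memory, storage, NICs, etc."""
    name: str
    roles: List[NodeRole]

@dataclass
class ResourcePool:
    """O-Cloud Nodes with homogeneous profiles at one location."""
    location: str
    nodes: List[OCloudNode] = field(default_factory=list)

@dataclass
class NfDeployment:
    """A deployment of an (all or partial) cloud-native NF."""
    nf_name: str
    pool: ResourcePool  # pool allocation is determined by the SMO

@dataclass
class OCloud:
    """Single logical reference point for all pools within its boundary."""
    pools: List[ResourcePool] = field(default_factory=list)
    deployments: List[NfDeployment] = field(default_factory=list)
```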
The radio portion/side of the logical architecture 700 includes the near-RT RIC 714, the O-RAN Distributed Unit (O-DU) 715, the O-RU 716, the O-RAN Central Unit - Control Plane (O-CU-CP) 721, and the O-RAN Central Unit - User Plane (O-CU-UP) 722 functions. The radio portion/side of the logical architecture 700 may also include the O-e/gNB 710.
The O-DU 715 is a logical node hosting RLC, MAC, and higher PHY layer entities/elements (High-PHY layers) based on a lower-layer functional split. The O-RU 716 is a logical node hosting lower PHY layer entities/elements (Low-PHY layer) (e.g., FFT/iFFT, PRACH extraction, etc.) and RF processing elements based on a lower-layer functional split. Virtualization of the O-RU 716 is FFS. The O-CU-CP 721 is a logical node hosting the RRC and the control plane (CP) part of the PDCP protocol. The O-CU-UP 722 is a logical node hosting the user plane part of the PDCP protocol and the SDAP protocol.
An E2 interface terminates at a plurality of E2 nodes. The E2 interface connects the near-RT RIC 714 and one or more O-CU-CP 721, one or more O-CU-UP 722, one or more O-DU 715, and one or more O-e/gNB 710. The E2 nodes are logical nodes/entities that terminate the E2 interface. For NR/5G access, the E2 nodes include the O-CU-CP 721, O-CU-UP 722, O-DU 715, or any combination of elements as defined in O-RAN Alliance WG3, “O-RAN Near-Real-time RAN Intelligent Controller Architecture & E2 General Aspects and Principles” v01.01 (“O-RAN.WG3.E2GAP-v01.01”). For E-UTRA access, the E2 nodes include the O-e/gNB 710. As shown in FIG. 7, the E2 interface also connects the O-e/gNB 710 to the Near-RT RIC 714. The protocols over the E2 interface are based exclusively on Control Plane (CP) protocols. The E2 functions are grouped into the following categories: (a) near-RT RIC 714 services (REPORT, INSERT, CONTROL, and POLICY, as described in O-RAN.WG3.E2GAP-v01.01); and (b) near-RT RIC 714 support functions, which include E2 Interface Management (E2 Setup, E2 Reset, Reporting of General Error Situations, etc.) and Near-RT RIC Service Update (e.g., capability exchange related to the list of E2 Node functions exposed over E2). A RIC Service is a service provided on an E2 Node to provide access to messages and measurements and/or enable control of the E2 Node from the Near-RT RIC.
The injection and control guided by AI/ML-based intelligence into RAN networks are realized via the E2 interface from the Near-RT RIC, where the Near-RT RIC subscribes to various RIC services (REPORT, INSERT, CONTROL, POLICY) based on RAN functions exposed from RAN nodes. These exposed RAN functions are specified by E2 service models (E2SMs). Among those, E2SM RAN control (E2SM-RC) (see e.g., O-RAN Alliance WG3, “O-RAN Near-Real-time RAN Intelligent Controller E2 Service Model (E2SM) KPM” v01.01 (Feb. 2020) (“ORAN-WG3.E2SM-KPM-v01.00.00”)) has recently been agreed to be specified to support injection of resource and mobility control commands from the Near-RT RIC, spanning from radio admission and bearer control to HO, dual connectivity, and carrier aggregation decisions that are required to support traffic steering and QoS optimization use cases (see e.g., O-RAN Alliance WG3, “Use Cases and Requirements” v01.00.03 (Dec. 2020) (“[O-RAN.WG3.UCR-v01.00.03]”)).
FIG. 7 shows the Uu interface between a UE 701 and O-e/gNB 710 as well as between the UE 701 and O-RAN components. The Uu interface is a 3GPP-defined interface (see e.g., sections 5.2 and 5.3 of 3GPP TS 38.401 v16.3.0 (2020-10-02) (“[TS38401]”)), which includes a complete protocol stack from L1 to L3 and terminates in the NG-RAN or E-UTRAN. The O-e/gNB 710 is an LTE eNB (see e.g., 3GPP TS 36.401 v16.0.0 (2020-07-16)), or a 5G gNB or ng-eNB (see e.g., 3GPP TS 38.300 v16.3.0 (2020-10-02) (“[TS38300]”)) that supports the E2 interface. The O-e/gNB 710 may be the same as or similar to RAN 404 and/or ANs 408, and UE 701 may correspond to UE 402 discussed with respect to FIG. 4, and/or the like. There may be multiple UEs 701 and/or multiple O-e/gNBs 710, each of which may be connected to one another via respective Uu interfaces. Although not shown in FIG. 7, the O-e/gNB 710 supports O-DU 715 and O-RU 716 functions with an Open Fronthaul interface between them.
The Open Fronthaul (OF) interface(s) is/are between the O-DU 715 and O-RU 716 functions (see e.g., ORAN-WG4.MP.0-v02.00.00; O-RAN Alliance WG4, “O-RAN Fronthaul Control, User and Synchronization Plane Specification 4.0” (Jul 2020) (“ORAN-WG4.CUS.0-v04.00”)). The OF interface(s) includes the Control User Synchronization (CUS) Plane and Management (M) Plane. FIGS. 1 and 7 also show that the O-RU 716 terminates the OF M-Plane interface towards the O-DU 715 and optionally towards the SMO 702 as specified in ORAN-WG4.MP.0-v02.00.00. The O-RU 716 terminates the OF CUS-Plane interface towards the O-DU 715 and the SMO 702.
The F1-c interface connects the O-CU-CP 721 with the O-DU 715. As defined by 3GPP, the F1-c interface is between the gNB-CU-CP and gNB-DU nodes ([TS38401]; 3GPP TS 38.470 v16.3.0 (2020-10-02) (“[TS38470]”)). However, for purposes of O-RAN, the F1-c interface is adopted between the O-CU-CP 721 and the O-DU 715 functions while reusing the principles and protocol stack defined by 3GPP and the definition of interoperability profile specifications.
The F1-u interface connects the O-CU-UP 722 with the O-DU 715. As defined by 3GPP, the F1-u interface is between the gNB-CU-UP and gNB-DU nodes ([TS38401], [TS38470]). However, for purposes of O-RAN, the F1-u interface is adopted between the O-CU-UP 722 and the O-DU 715 functions while reusing the principles and protocol stack defined by 3GPP and the definition of interoperability profile specifications.
The NG-c interface is defined by 3GPP as an interface between the gNB-CU-CP and the AMF in the 5GC ([TS38300]). The NG-c interface is also referred to as the N2 interface (see [TS38300]). The NG-u interface is defined by 3GPP as an interface between the gNB-CU-UP and the UPF in the 5GC (see e.g., [TS38300]). The NG-u interface is also referred to as the N3 interface (see e.g., [TS38300]). In O-RAN, the NG-c and NG-u protocol stacks defined by 3GPP are reused and may be adapted for O-RAN purposes.
The X2-c interface is defined in 3GPP for transmitting control plane information between eNBs or between an eNB and an en-gNB in EN-DC. The X2-u interface is defined in 3GPP for transmitting user plane information between eNBs or between an eNB and an en-gNB in EN-DC (see e.g., 3GPP TS 36.420 v16.0.0 (2020-07-17), [TS38300]). In O-RAN, the X2-c and X2-u protocol stacks defined by 3GPP are reused and may be adapted for O-RAN purposes.
The Xn-c interface is defined in 3GPP for transmitting control plane information between gNBs, between ng-eNBs, or between an ng-eNB and a gNB. The Xn-u interface is defined in 3GPP for transmitting user plane information between gNBs, between ng-eNBs, or between an ng-eNB and a gNB (see e.g., [TS38300], 3GPP TS 38.420 v16.0.0 (2020-07-16)). In O-RAN, the Xn-c and Xn-u protocol stacks defined by 3GPP are reused and may be adapted for O-RAN purposes.
The E1 interface is defined by 3GPP as an interface between the gNB-CU-CP and gNB-CU-UP (see e.g., [TS38401], 3GPP TS 38.460 v16.1.0 (2020-07-17)). In O-RAN, the E1 protocol stacks defined by 3GPP are reused and adapted as an interface between the O-CU-CP 721 and the O-CU-UP 722 functions.
The O-RAN Non-Real Time (RT) RAN Intelligent Controller (RIC) 712 is a logical function within the SMO framework 102, 702 that enables non-real-time control and optimization of RAN elements and resources; AI/machine learning (ML) workflow(s) including model training, inferences, and updates; and policy-based guidance of applications/features in the Near-RT RIC 714.
The O-RAN near-RT RIC 714 is a logical function that enables near-real-time control and optimization of RAN elements and resources via fine-grained data collection and actions over the E2 interface. The near-RT RIC 714 may include one or more AI/ML workflows including model training, inferences, and updates.
The non-RT RIC 712 can be an ML training host to host the training of one or more ML models. ML training can be performed offline using data collected from the RIC, O-DU 715, and O-RU 716. For supervised learning, the non-RT RIC 712 is part of the SMO 702, and the ML training host and/or ML model host/actor can be part of the non-RT RIC 712 and/or the near-RT RIC 714. For unsupervised learning, the ML training host and ML model host/actor can be part of the non-RT RIC 712 and/or the near-RT RIC 714. For reinforcement learning, the ML training host and ML model host/actor may be co-located as part of the non-RT RIC 712 and/or the near-RT RIC 714. In some implementations, the non-RT RIC 712 may request or trigger ML model training in the training hosts regardless of where the model is deployed and executed. ML models may be trained without being currently deployed.
In some implementations, the non-RT RIC 712 provides a query-able catalog for an ML designer/developer to publish/install trained ML models (e.g., executable software components). In these implementations, the non-RT RIC 712 may provide a discovery mechanism to determine whether a particular ML model can be executed in a target ML inference host (MF), and what number and types of ML models can be executed in the MF. For example, there may be three types of ML catalogs made discoverable by the non-RT RIC 712: a design-time catalog (e.g., residing outside the non-RT RIC 712 and hosted by some other ML platform(s)), a training/deployment-time catalog (e.g., residing inside the non-RT RIC 712), and a run-time catalog (e.g., residing inside the non-RT RIC 712). The non-RT RIC 712 supports the necessary capabilities for ML model inference in support of ML-assisted solutions running in the non-RT RIC 712 or some other ML inference host. These capabilities enable executable software to be installed, such as VMs, containers, etc. The non-RT RIC 712 may also include and/or operate one or more ML engines, which are packaged software executable libraries that provide methods, routines, data types, etc., used to run ML models. The non-RT RIC 712 may also implement policies to switch and activate ML model instances under different operating conditions.
The non-RT RIC 712 is able to access feedback data (e.g., FM and PM statistics) over the O1 interface on ML model performance and perform the necessary evaluations. If the ML model fails during runtime, an alarm can be generated as feedback to the non-RT RIC 712. How well the ML model is performing in terms of prediction accuracy or other operating statistics it produces can also be sent to the non-RT RIC 712 over O1. The non-RT RIC 712 can also scale ML model instances running in a target MF over the O1 interface by observing resource utilization in the MF. The environment where the ML model instance is running (e.g., the MF) monitors resource utilization of the running ML model. This can be done, for example, using an ORAN-SC component called ResourceMonitor in the near-RT RIC 714 and/or in the non-RT RIC 712, which continuously monitors resource utilization. If resources are low or fall below a certain threshold, the runtime environment in the near-RT RIC 714 and/or the non-RT RIC 712 provides a scaling mechanism to add more ML instances. The scaling mechanism may include a scaling factor such as a number, percentage, and/or other like data used to scale up/down the number of ML instances. ML model instances running in the target ML inference hosts may be automatically scaled by observing resource utilization in the MF. For example, the Kubernetes® (K8s) runtime environment typically provides an auto-scaling feature.
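As a non-normative illustration of the threshold-based scaling described above, the following sketch adjusts the number of ML model instances when observed utilization crosses assumed thresholds; the thresholds, step size, and bounds are invented for the example and are not taken from any specification.

```python
# Sketch of threshold-based scaling of ML model instances in an inference
# host; thresholds, step size, and bounds are illustrative assumptions.
def scale_ml_instances(current_instances: int,
                       cpu_utilization: float,
                       scale_up_threshold: float = 0.80,
                       scale_down_threshold: float = 0.30,
                       step: int = 1,
                       max_instances: int = 10) -> int:
    """Return the new instance count given observed CPU utilization (0..1)."""
    if cpu_utilization > scale_up_threshold and current_instances < max_instances:
        return current_instances + step   # add ML instances under load
    if cpu_utilization < scale_down_threshold and current_instances > 1:
        return current_instances - step   # release resources when idle
    return current_instances

# A monitor loop would call, e.g.:
# n = scale_ml_instances(n, read_cpu_utilization_from_resource_monitor())
```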
The A1 interface is between the non-RT RIC 712 (within or outside the SMO 702) and the near-RT RIC 714. The A1 interface supports three types of services as defined in O-RAN Alliance WG2, O-RAN A1 interface: General Aspects and Principles Specification, version 1.0 (Oct 2019) (“ORAN-WG2.A1.GA&P-v01.00”), including a Policy Management Service, an Enrichment Information Service, and an ML Model Management Service. A1 policies have the following characteristics compared to persistent configuration (see e.g., ORAN-WG2.A1.GA&P-v01.00): A1 policies are not critical to traffic; A1 policies have temporary validity; A1 policies may handle individual UEs or dynamically defined groups of UEs; A1 policies act within and take precedence over the configuration; and A1 policies are non-persistent, i.e., they do not survive a restart of the near-RT RIC.
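A hypothetical A1 policy instance reflecting these characteristics (temporary validity, a dynamically defined UE group, non-persistence) might be modeled as follows; the policy type identifier, field names, and values are assumptions for illustration only and are not normative A1 content.

```python
# Illustrative (non-normative) A1 policy instance: scoped to a UE group,
# with temporary validity, held only in memory (non-persistent).
import time

a1_policy_instance = {
    "policy_type_id": 20008,                 # hypothetical policy type
    "policy_instance_id": "qos-boost-001",
    "scope": {"ue_group": "stadium-users"},  # dynamically defined UE group
    "statement": {"priority_boost": 2},
    "not_after": time.time() + 600,          # temporary validity: 10 minutes
}

def is_policy_active(policy: dict) -> bool:
    """A1 policies expire and do not survive a restart (in-memory only)."""
    return time.time() < policy["not_after"]
```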
O-RAN is currently developing a framework for adding 3rd party xApps to a Base Station product, which is assembled from components from different suppliers.
FIG. 8 illustrates an example O-RAN Architecture 800 including Near-RT RIC interfaces according to various embodiments. The Near-RT RIC is a logical network node placed between the Service Management & Orchestration layer, which hosts the Non-RT RIC, and the E2 Nodes.
The Near-RT RIC logical architecture and related interfaces are shown in FIG. 8. The Near-RT RIC is connected to the Non-RT RIC through the A1 interface (see e.g., ORAN-WG2.A1.GA&P-v01.00). A Near-RT RIC is connected to only one Non-RT RIC. As mentioned previously, E2 is a logical interface connecting the Near-RT RIC with an E2 Node. The Near-RT RIC is connected to the O-CU-CP, the O-CU-UP, the O-DU, and the O-eNB. An E2 Node is connected to only one Near-RT RIC, while a Near-RT RIC can be connected to multiple E2 Nodes, i.e., multiple O-CU-CPs, O-CU-UPs, O-DUs, and O-eNBs. F1 (F1-C, F1-U) and E1 are logical 3GPP interfaces, whose protocols, termination points, and cardinalities are specified in [TS38401]. In addition, the near-RT RIC and other RAN nodes have O1 interfaces as defined in O-RAN.WG1.OAM-Architecture-v03.00 and O-RAN.WG1.O-RAN-Architecture-Description-v02.00.
The Near-RT RIC hosts one or more xApps that use the E2 interface to collect near real-time information (e.g., on a UE basis or Cell basis) and provide value-added services. The Near-RT RIC may receive declarative Policies and obtain Data Enrichment information over the A1 interface (see e.g., ORAN-WG2.A1.GA&P-v01.00).
The protocols over the E2 interface are based exclusively on Control plane protocols and are defined in O-RAN Alliance WG3, “Near-Real-time RAN Intelligent Controller, E2 Application Protocol (E2AP)” v01.01 (Jul 2020) (“O-RAN.WG3.E2AP-v01.01”). On E2 or Near-RT RIC failure, the E2 Node will still be able to provide services, but there may be an outage for certain value-added services that may only be provided using the Near-RT RIC.
The Near-RT RIC provides a database function that stores the configurations relating to E2 Nodes, Cells, Bearers, Flows, UEs, and the mappings between them. The Near-RT RIC provides ML tools that support data pipelining. The Near-RT RIC provides a messaging infrastructure. The Near-RT RIC provides logging, tracing, and metrics collection from the Near-RT RIC framework and xApps to the SMO. The Near-RT RIC provides security functions. The Near-RT RIC supports conflict resolution to resolve the potential conflicts or overlaps which may be caused by the requests from xApps.
The Near-RT RIC also provides an open API enabling the hosting of 3rd-party xApps and xApps from the Near-RT RIC platform vendor. The Near-RT RIC also provides an open API decoupled from specific implementation solutions, including a Shared Data Layer (SDL) that works as an overlay for underlying databases and enables simplified data access. An xApp is an application designed to run on the Near-RT RIC. Such an application is likely to include or provide one or more microservices and, at the point of on-boarding, will identify which data it consumes and which data it provides. An xApp is independent of the Near-RT RIC and may be provided by any third party. The E2 interface enables a direct association between the xApp and the RAN functionality. A RAN Function is a specific function in an E2 Node; examples include the X2AP, F1AP, E1AP, S1AP, and NGAP interfaces, and RAN internal functions handling UEs, Cells, etc.
The architecture of an xApp comprises the code implementing the xApp's logic and the RIC libraries that allow the xApp to: send and receive messages; read from, write to, and get notifications from the SDL layer; and write log messages. Additional libraries will be available in future versions, including libraries for setting and resetting alarms and sending statistics. Furthermore, xApps can use access libraries to access specific name-spaces in the SDL layer. For example, the R-NIB, which provides information about which E2 nodes (e.g., CU/DU) the RIC is connected to and which SMs are supported by each E2 node, can be read by using the R-NIB access library.
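For illustration, the SDL-style namespaced read/write/notify access pattern described above can be mocked as follows; this is not the actual RIC SDL library API, only a self-contained sketch of the access pattern it provides.

```python
# Hypothetical in-memory mock of SDL-style namespaced key-value access;
# real xApps use the RIC's SDL access libraries, which this only imitates.
from typing import Any, Callable, Dict, List

class MockSdl:
    def __init__(self) -> None:
        self._store: Dict[str, Dict[str, Any]] = {}
        self._watchers: Dict[str, List[Callable[[str, Any], None]]] = {}

    def write(self, namespace: str, key: str, value: Any) -> None:
        self._store.setdefault(namespace, {})[key] = value
        for cb in self._watchers.get(namespace, []):
            cb(key, value)  # notify subscribers of the change

    def read(self, namespace: str, key: str) -> Any:
        return self._store.get(namespace, {}).get(key)

    def subscribe(self, namespace: str, cb: Callable[[str, Any], None]) -> None:
        self._watchers.setdefault(namespace, []).append(cb)

# e.g., reading connected-E2-node info from a hypothetical "r-nib" namespace:
sdl = MockSdl()
sdl.write("r-nib", "e2node:du-1", {"supported_sms": ["E2SM-KPM"]})
print(sdl.read("r-nib", "e2node:du-1"))
```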
The O-RAN standard interfaces (e.g., O1, A1, and E2) are exposed to the xApps as follows: an xApp receives its configuration via a K8s ConfigMap; the configuration can be updated while the xApp is running, and the xApp can be notified of this modification by using inotify(). An xApp can send statistics (PM) either (a) by sending them directly to the VES collector in VES format, or (b) by exposing statistics via a REST interface for Prometheus to collect. An xApp receives A1 policy guidance via an RMR message of a specific kind (policy instance creation and deletion operations). An xApp can subscribe to E2 events by constructing the E2 subscription ASN.1 payload and sending it as an RMR message, and receives E2 messages (e.g., E2 INDICATION) as RMR messages with the ASN.1 payload. Similarly, an xApp can issue E2 control messages.
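As a non-normative sketch of the first of these mechanisms, the following polls a ConfigMap-mounted configuration file for updates (a real xApp may instead rely on inotify() as noted above); the file path and configuration contents are assumptions for the example.

```python
# Sketch of an xApp picking up configuration updates from a ConfigMap-mounted
# file by polling its modification time; path and keys are assumptions.
import json
import os
import time

CONFIG_PATH = "/opt/ric/config/config-file.json"  # hypothetical mount point

def watch_config(on_change, poll_seconds: float = 2.0) -> None:
    """Invoke on_change(config) whenever the mounted config file changes."""
    last_mtime = 0.0
    while True:
        mtime = os.stat(CONFIG_PATH).st_mtime
        if mtime != last_mtime:
            last_mtime = mtime
            with open(CONFIG_PATH) as f:
                on_change(json.load(f))  # apply the updated configuration
        time.sleep(poll_seconds)

# watch_config(lambda cfg: print("reconfigured:", cfg))
```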
In addition to A1- and E2-related messages, xApps can send messages that are processed by other xApps and can receive messages produced by other xApps. Communication inside the RIC is policy driven, that is, an xApp cannot specify the target of a message. It simply sends a message of a specific type, and the routing policies specified for the RIC instance determine to which destinations this message will be delivered (logical pub/sub).
Logically, an xApp is an entity that implements a well-defined function. Mechanically, an xApp is a K8s pod that includes one or multiple containers. In order for an xApp to be deployable, it needs to have an xApp descriptor (e.g., JSON) that describes the xApp's configuration parameters and the information the RIC platform needs to configure the RIC platform for the xApp. The xApp developer will also need to provide a JSON schema for the descriptor.
In addition to these basic requirements, an xApp may do any of the following: read initial configuration parameters (passed in the xApp descriptor); receive updated configuration parameters; send and receive messages; read and write into a persistent shared data storage (key-value store); receive A1-P policy guidance messages, specifically operations to create or delete a policy instance (JSON payload on an RMR message) related to a given policy type; define a new A1 policy type; make subscriptions via the E2 interface to the RAN, receive E2 INDICATION messages from the RAN, and issue E2 POLICY and CONTROL messages to the RAN; and report metrics related to its own execution or observed RAN events.
The lifecycle of xApp development and deployment consists of the following states (a simple state-machine sketch follows the list):
Development: Design, implementation, local testing.
Released: The xApp code and xApp descriptor are committed to the LF Gerrit repo and included in an O-RAN release. The xApp is packaged as a Docker container and its image released to the LF Release registry.
On-boarded/Distributed: The xApp descriptor (and potentially helm chart) is customized for a given RIC environment and the resulting customized helm chart is stored in a local helm chart repo used by the RIC environment's xApp Manager.
Run-time Parameters Configuration: Before the xApp can be deployed, run-time helm chart parameters will be provided by the operator to customize the xApp Kubernetes deployment instance. This procedure is mainly used to configure run-time unique helm chart parameters such as instance UUID, liveness check, east-bound and north-bound service endpoints (e.g., DBAAS entry, VES collector endpoint), and so on.
Deployed: The xApp has been deployed via the xApp Manager and the xApp pod is running on a RIC instance. For xApps where it makes sense, the deployed status may be further divided into additional states controlled via xApp configuration updates. For example, Running, Stopped.
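These lifecycle states can be pictured as a simple state machine; the permitted transitions in the table below are an illustrative assumption, not a normative definition.

```python
# The xApp lifecycle states above as a simple transition table; the
# permitted transitions are an illustrative assumption.
from enum import Enum, auto

class XAppState(Enum):
    DEVELOPMENT = auto()
    RELEASED = auto()
    ONBOARDED = auto()    # On-boarded/Distributed
    CONFIGURED = auto()   # Run-time Parameters Configuration
    DEPLOYED = auto()     # may subdivide further (e.g., Running, Stopped)

TRANSITIONS = {
    XAppState.DEVELOPMENT: {XAppState.RELEASED},
    XAppState.RELEASED: {XAppState.ONBOARDED},
    XAppState.ONBOARDED: {XAppState.CONFIGURED},
    XAppState.CONFIGURED: {XAppState.DEPLOYED},
    XAppState.DEPLOYED: set(),
}

def advance(state: XAppState, target: XAppState) -> XAppState:
    """Move to the next lifecycle state, rejecting illegal jumps."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {target.name}")
    return target
```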
The general principles guiding the definition of the Near-RT RIC architecture, as well as the interfaces between the Near-RT RIC, E2 Nodes, and Service Management & Orchestration, are the following: Near-RT RIC and E2 Node functions are fully separated from transport functions, and the addressing scheme used in the Near-RT RIC and the E2 Nodes shall not be tied to the addressing schemes of transport functions.
The E2 Nodes support all protocol layers and interfaces defined within 3GPP radio access networks, including eNB for E-UTRAN [5] and gNB/ng-eNB for NG-RAN [16].
Near-RT RIC and hosted “xApp” applications shall use a set of services exposed by an E2 Node that is described by a series of RAN function and Radio Access Technology (RAT) dependent “E2 Service Models”.
The Near-RT RIC interfaces are defined along the following principles:
The functional division across the interfaces has as few options as possible.
Interfaces are based on a logical model of the entity controlled through this interface.
One physical network element can implement multiple logical nodes. xApps may enhance the RRM capabilities of the Near-RT RIC, and xApps provide logging, tracing, and metrics collection to the Near-RT RIC.
According to various embodiments, xApps include an xApp descriptor and an xApp image. The xApp image is the software package and contains all the files needed to deploy an xApp. An xApp can have multiple versions of its xApp image, which are tagged by the xApp image version number.
The xApp descriptor describes the packaging format of the xApp image and provides the necessary data to enable its management and orchestration. The xApp descriptor provides xApp management services with the necessary information for the LCM of xApps, such as deployment, deletion, upgrade, etc. The xApp descriptor also provides extra parameters related to the health management of the xApps, such as auto-scaling when the load of an xApp is too heavy and auto-healing when an xApp becomes unhealthy. The xApp descriptor provides FCAPS and control parameters to xApps when an xApp is launched.
The definition of the xApp descriptor includes:
The basic information of the xApp, including name, version, provider, URL of the xApp image, virtual resource requirements (e.g., CPU), etc. This information is used to support the LCM of xApps. Additionally or alternatively, the basic information includes or indicates configuration, metrics, and control data about an xApp.
The FCAPS management specifications, which specify the options for configuration, performance metrics collection, etc. for the xApp. The control specifications, which specify the data types consumed and provided by the xApp for control capabilities (e.g., Performance Management (PM) data to which the xApp subscribes, and the message types of control messages).
Additionally or alternatively, the xApp descriptor components include the following (an illustrative descriptor instance is sketched after this list): Configuration: The xApp configuration specification shall include a data dictionary for the configuration data, i.e., metadata such as a yang definition or a list of configuration parameters and their semantics. Additionally, it may include an initial configuration of xApps.
Control: xApp controls specification shall include the types of data it consumes and provides that enable control capabilities (e.g., xApp URL, parameters, input/output type).
Metrics: The xApp metrics specification shall include a list of metrics (e.g., metric name, type, unit and semantics) provided by the xApp.
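A hypothetical xApp descriptor covering the components above, together with a schema check, might look as follows; every field name is an assumption rather than a normative O-RAN definition, and the sketch uses the third-party jsonschema package (pip install jsonschema).

```python
# Hypothetical xApp descriptor reflecting the components above (basic info,
# configuration, control, metrics); all field names are assumptions.
from jsonschema import validate

descriptor = {
    "name": "traffic-steering-xapp",
    "version": "1.0.0",
    "provider": "example-vendor",
    "image_url": "registry.example.com/xapps/traffic-steering:1.0.0",
    "resources": {"cpu": "500m", "memory": "256Mi"},   # virtual resources
    "config": {"report_period_ms": 1000},              # initial configuration
    "control": {"consumes": ["E2-INDICATION"], "provides": ["E2-CONTROL"]},
    "metrics": [{"name": "ue_throughput", "type": "gauge", "unit": "Mbps"}],
}

# A (simplified) JSON schema the xApp developer would also provide.
schema = {
    "type": "object",
    "required": ["name", "version", "provider", "image_url"],
    "properties": {
        "name": {"type": "string"},
        "version": {"type": "string"},
        "provider": {"type": "string"},
        "image_url": {"type": "string"},
    },
}

validate(instance=descriptor, schema=schema)  # raises on a malformed descriptor
```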
FIG. 9 depicts an example O-RAN architecture/framework 900 for adding 3rd-party xApps according to various embodiments. FIG. 10 depicts an example Near-RT RIC Internal Architecture 1000 according to various embodiments.
In these examples, the Near-RT RIC hosts the following functions: Database functionality, which allows reading and writing of RAN/UE information; xApp subscription management, which merges subscriptions from different xApps and provides unified data distribution to xApps; Conflict mitigation, which resolves potentially overlapping or conflicting requests from multiple xApps; Messaging infrastructure, which enables message interaction amongst Near-RT RIC internal functions; Security, which provides the security scheme for the xApps; and
Management services including: fault management, configuration management, and performance management as a service producer to the SMO; life-cycle management of xApps; and logging, tracing, and metrics collection, which capture, monitor, and collect the status of Near-RT RIC internals and can be transferred to an external system for further evaluation; and
Interface Termination including: E2 termination, which terminates the E2 interface from an E2 Node; A1 termination, which terminates the A1 interface from the non-RT RIC; and O1 termination, which terminates the O1 interface from the SMO; and
Functions hosted by xApps, which allow services to be executed at the Near-RT RIC and the outcomes sent to the E2 Nodes via the E2 interface. xApps may provide UE-related information to be stored in the UE-NIB (UE-Network Information Base) database. The UE-NIB maintains a list of UEs and associated data, and maintains tracking and correlation of the UE identities associated with the connected E2 Nodes. xApps may provide radio access network related information to be stored in the R-NIB (Radio-Network Information Base) database. The R-NIB stores the configurations and near real-time information relating to connected E2 Nodes and the mappings between them. xApp subscription management manages subscriptions from the xApps to the E2 Nodes, enforces authorization of policies controlling xApp access to messages, and enables merging of identical subscriptions from different xApps into a single subscription to the E2 Node.
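The subscription-merging behavior just described can be sketched as follows; the subscription key fields and return conventions are assumptions for illustration, not the actual subscription-manager interface.

```python
# Sketch of merging identical E2 subscriptions from multiple xApps into a
# single subscription toward an E2 Node; key fields are assumptions.
from collections import defaultdict
from typing import Dict, Set, Tuple

# (e2_node_id, ran_function_id, event_trigger) -> set of subscribing xApps
SubKey = Tuple[str, int, str]
merged: Dict[SubKey, Set[str]] = defaultdict(set)

def subscribe(xapp_id: str, e2_node_id: str, ran_function_id: int,
              event_trigger: str) -> bool:
    """Return True only if a new subscription must be sent to the E2 Node."""
    key = (e2_node_id, ran_function_id, event_trigger)
    first = not merged[key]
    merged[key].add(xapp_id)
    return first  # identical requests from other xApps piggyback on it

def unsubscribe(xapp_id: str, key: SubKey) -> bool:
    """Return True only if the E2 Node subscription should be torn down."""
    merged[key].discard(xapp_id)
    return not merged[key]
```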
For one or more embodiments, at least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below. For example, the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below. For another example, circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section.
Additional examples of the presently described embodiments include the following, non-limiting implementations. Each of the following non-limiting examples may stand on its own or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure.
The following examples pertain to further embodiments.
Example 1 may include an apparatus comprising processing circuitry configured to: identify a request received from a consumer in a service management and orchestration (SMO) framework in an open radio access network (O-RAN), wherein the request may be to instantiate a network function (NF) on an O-Cloud in a 5G system (5GS); cause to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF; identify an indication received from the DMS that the deployment of the NF has been completed; and cause to send a notification to the consumer that the NF has been instantiated on the O-Cloud.
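A non-normative sketch of the flow recited in Example 1 follows; the consumer and DMS interfaces are modeled as plain callables, and all names and message contents are assumptions for illustration, not the claimed message formats.

```python
# Non-normative sketch of the Example 1 flow: an SMO-side LCM service
# receives an instantiation request from a consumer, asks the O-Cloud DMS
# to create the NF deployment, and notifies the consumer on completion.
from dataclasses import dataclass
from typing import Callable

@dataclass
class InstantiateNfRequest:
    nf_descriptor_id: str
    placement: dict  # e.g., {"o_cloud": "edge-site-1"} (placement requirements)

def handle_instantiate_nf(request: InstantiateNfRequest,
                          dms_create_deployment: Callable[[str, dict], str],
                          notify_consumer: Callable[[dict], None]) -> None:
    # 1) Send a second request to the DMS on the O-Cloud to create the
    #    deployment of the NF.
    deployment_id = dms_create_deployment(request.nf_descriptor_id,
                                          request.placement)
    # 2) The DMS indicating completion is modeled here as the call returning;
    #    a real service would wait for an asynchronous completion indication.
    # 3) Notify the consumer that the NF has been instantiated on the O-Cloud.
    notify_consumer({"result": "SUCCESS", "deployment_id": deployment_id})
```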
Example 2 may include the apparatus of example 1 and/or some other example herein, wherein the request specifies placement requirements where the NF needs to be instantiated.
Example 3 may include the apparatus of example 1 and/or some other example herein, wherein the request may be an UploadNfDescriptorsRequest from the consumer.
Example 4 may include the apparatus of example 1 and/or some other example herein, wherein the notification may be an instantiateNfResponse response to the consumer.
Example 5 may include the apparatus of example 1 and/or some other example herein, wherein the instantiateNfResponse response comprises output parameters indicating the result of the deployment of the NF on the O- Cloud.
Example 6 may include the apparatus of example 1 and/or some other example herein, wherein the processing circuitry may be further configured to onboard the NF, wherein the NF comprises a descriptor providing requirements associated with the deployment of the NF.
Example 7 may include the apparatus of example 1 and/or some other example herein, wherein the notification may be an O-Cloud lifecycle management (LCM) notification.
Example 8 may include the apparatus of example 1 and/or some other example herein, wherein the processing circuitry may be further configured to identify a termination request received from the consumer, wherein the termination request may be for NF termination.
Example 9 may include the apparatus of example 8 and/or some other example herein, wherein the processing circuitry may be further configured to cause to send a termination response to the consumer indicating the result of the NF termination.
Example 10 may include a computer-readable storage medium comprising instructions to cause processing circuitry, upon execution of the instructions by the processing circuitry, to: identify a request received from a consumer in a service management and orchestration (SMO) in an open radio access network (O-RAN), wherein the request may be to instantiate a network function (NF) on an O-Cloud in a 5G system (5GS); cause to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF; identify an indication received from the DMS that the deployment of the NF has been completed; and cause to send a notification to the consumer that the NF has been instantiated on the O-Cloud.
Example 11 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the request specifies placement requirements where the NF needs to be instantiated.
Example 12 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the request may be an UploadNfDescriptorsRequest from the consumer.
Example 13 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the notification may be an instantiateNfResponse response to the consumer.
Example 14 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the instantiateNfResponse response comprises output parameters indicating the result of the deployment of the NF on the O-Cloud.
Example 15 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the operations further comprise onboarding the NF, wherein the NF comprises a descriptor providing requirements associated with the deployment of the NF.
Example 16 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the notification may be an O-Cloud lifecycle management (LCM) notification.
Example 17 may include the computer-readable storage medium of example 10 and/or some other example herein, wherein the operations further comprise identifying a termination request received from the consumer, wherein the termination request may be for NF termination.
Example 18 may include the computer-readable storage medium of example 17 and/or some other example herein, wherein the operations further comprise causing to send a termination response to the consumer indicating the result of the NF termination.
Example 19 may include a method comprising: identifying a request received from a consumer in a service management and orchestration (SMO) in an open radio access network (O-RAN), wherein the request may be to instantiate a network function (NF) on an O-Cloud in a 5G system (5GS); causing to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF; identifying an indication received from the DMS that the deployment of the NF has been completed; and causing to send a notification to the consumer that the NF has been instantiated on the O-Cloud.
Example 20 may include the method of example 19 and/or some other example herein, wherein the request specifies placement requirements where the NF needs to be instantiated.
Example 21 may include the method of example 19 and/or some other example herein, wherein the request may be an UploadNfDescriptorsRequest from the consumer.
Example 22 may include the method of example 19 and/or some other example herein, wherein the notification may be an instantiateNfResponse response to the consumer.
Example 23 may include the method of example 19 and/or some other example herein, wherein the instantiateNfResponse response comprises output parameters indicating the result of the deployment of the NF on the O-Cloud.
Example 24 may include the method of example 19 and/or some other example herein, further comprising onboarding the NF, wherein the NF comprises a descriptor providing requirements associated with the deployment of the NF.
Example 25 may include the method of example 19 and/or some other example herein, wherein the notification may be an O-Cloud lifecycle management (LCM) notification.
Example 26 may include the method of example 19 and/or some other example herein, further comprising identifying a termination request received from the consumer, wherein the termination request may be for NF termination.
Example 27 may include the method of example 26 and/or some other example herein, further comprising causing to send a termination response to the consumer indicating the result of the NF termination.
Example 28 may include an apparatus comprising means for: identifying a request received from a consumer in a service management and orchestration (SMO) in an open radio access network (O-RAN), wherein the request may be to instantiate a network function (NF) on an O-Cloud in a 5G system (5GS); causing to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF; identifying an indication received from the DMS that the deployment of the NF has been completed; and causing to send a notification to the consumer that the NF has been instantiated on the O-Cloud.
Example 29 may include the apparatus of example 28 and/or some other example herein, wherein the request specifies placement requirements indicating where the NF is to be instantiated.
Example 30 may include the apparatus of example 28 and/or some other example herein, wherein the request may be an UploadNfDescriptorsRequest from the consumer.
Example 31 may include the apparatus of example 28 and/or some other example herein, wherein the notification may be an instantiateNfResponse response to the consumer.
Example 32 may include the apparatus of example 31 and/or some other example herein, wherein the instantiateNfResponse response comprises output parameters indicating the result of the deployment of the NF on the O-Cloud.
Example 33 may include the apparatus of example 28 and/or some other example herein, further comprising onboarding the NF, wherein the NF comprises a descriptor providing requirements associated with the deployment of the NF.
Example 34 may include the apparatus of example 28 and/or some other example herein, wherein the notification may be an O-Cloud lifecycle management (LCM) notification.
Example 35 may include the apparatus of example 28 and/or some other example herein, further comprising identifying a termination request received from the consumer, wherein the termination request may be for NF termination.
Example 36 may include the apparatus of example 35 and/or some other example herein, further comprising causing to send a termination response to the consumer indicating the result of the NF termination.
Example 37 may include an apparatus comprising means for performing any of the methods of examples 1-36.
Example 38 may include a network node comprising a communication interface and processing circuitry connected thereto and configured to perform the methods of examples 1-36.
Example 39 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-36, or any other method or process described herein.
Example 40 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-36, or any other method or process described herein.
Example 41 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-36, or any other method or process described herein.
Example 42 may include a method, technique, or process as described in or related to any of examples 1-36, or portions or parts thereof.
Example 43 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, technique, or process as described in or related to any of examples 1-36, or portions thereof.
Example 44 may include a signal as described in or related to any of examples 1-36, or portions or parts thereof.
Example 45 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-36, or portions or parts thereof, or otherwise described in the present disclosure.
Example 46 may include a signal encoded with data as described in or related to any of examples 1-36, or portions or parts thereof, or otherwise described in the present disclosure.
Example 47 may include a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-36, or portions or parts thereof, or otherwise described in the present disclosure.
Example 48 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, technique, or process as described in or related to any of examples 1-36, or portions thereof.
Example 49 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, technique, or process as described in or related to any of examples 1-36, or portions thereof.
Example 50 may include a signal in a wireless network as shown and described herein.
Example 51 may include a method of communicating in a wireless network as shown and described herein.
Example 52 may include a system for providing wireless communication as shown and described herein.
Example 53 may include a device for providing wireless communication as shown and described herein.
An example implementation is an edge computing system, including respective edge processing devices and nodes to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is a client endpoint node, operable to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is an aggregation node, network hub node, gateway node, or core data processing node, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is an access point, base station, road-side unit, street-side unit, or on-premise unit, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is an edge provisioning node, service orchestration node, application orchestration node, or multi-tenant management node, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is an edge node operating an edge provisioning service, application or service orchestration service, virtual machine deployment, container deployment, function deployment, and compute management, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is an edge computing system operable as an edge mesh, as an edge mesh with sidecar loading, or with mesh-to-mesh communications, operable to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is an edge computing system including aspects of network functions, acceleration functions, acceleration hardware, storage hardware, or computation hardware resources, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein. Another example implementation is an edge computing system adapted for supporting client mobility, vehicle-to-vehicle (V2V), vehicle-to-everything (V2X), or vehicle-to-infrastructure (V2I) scenarios, and optionally operating according to ETSI MEC specifications, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein. Another example implementation is an edge computing system adapted for mobile wireless communications, including configurations according to 3GPP 4G/LTE or 5G network capabilities, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein. Another example implementation is a computing system adapted for network communications, including configurations according to O-RAN capabilities, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein.
Any of the above-described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an” and “the” are intended to include plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
For the purposes of the present disclosure, the phrase “A and/or B” means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The description may use the phrases “in an embodiment” or “in some embodiments,” which may each refer to one or more of the same or different embodiments. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure, are synonymous.
The terms “coupled,” “communicatively coupled,” along with derivatives thereof are used herein. The term “coupled” may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term “directly coupled” may mean that two or more elements are in direct contact with one another. The term “communicatively coupled” may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.
The term “circuitry” as used herein refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable SoC), digital signal processors (DSPs), etc., that are configured to provide the described functionality. In some embodiments, the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. The term “circuitry” may also refer to a combination of one or more hardware elements (or a combination of circuits used in an electrical or electronic system) with the program code used to carry out the functionality of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry. The term “processor circuitry” as used herein refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data. Processing circuitry may include one or more processing cores to execute instructions and one or more memory structures to store program and data information. The term “processor circuitry” may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes. Processing circuitry may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like. The one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators. The terms “application circuitry” and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.”
The term “memory” and/or “memory circuitry” as used herein refers to one or more hardware devices for storing data, including RAM, MRAM, PRAM, DRAM, and/or SDRAM, core memory, ROM, magnetic disk storage mediums, optical storage mediums, flash memory devices or other machine readable mediums for storing data. The term “computer-readable medium” may include, but is not limited to, memory, portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying instructions or data.
The term “interface circuitry” as used herein refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices. The term “interface circuitry” may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like.
The term “user equipment” or “UE” as used herein refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network. The term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc. Furthermore, the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface. The term “network element” as used herein refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services. The term “network element” may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized VNF, NFVI, and/or the like.
The term “computer system” as used herein refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources.
The term “appliance,” “computer appliance,” or the like, as used herein refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource. A “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource. The term “element” refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary, wherein an element may be any type of entity including, for example, one or more devices, systems, controllers, network elements, modules, etc., or combinations thereof. The term “device” refers to a physical entity embedded inside, or attached to, another physical entity in its vicinity, with capabilities to convey digital information from or to that physical entity. The term “entity” refers to a distinct component of an architecture or device, or information transferred as a payload. The term “controller” refers to an element or entity that has the capability to affect a physical entity, such as by changing its state or causing the physical entity to move.
The term “cloud computing” or “cloud” refers to a paradigm for enabling network access to a scalable and elastic pool of shareable computing resources with self-service provisioning and administration on-demand and without active management by users. Cloud computing provides cloud computing services (or cloud services), which are one or more capabilities offered via cloud computing that are invoked using a defined interface (e.g., an API or the like). The term “computing resource” or simply “resource” refers to any physical or virtual component, or usage of such components, of limited availability within a computer system or network. Examples of computing resources include usage/access to, for a period of time, servers, processor(s), storage equipment, memory devices, memory areas, networks, electrical power, input/output (peripheral) devices, mechanical devices, network connections (e.g., channels/links, ports, network sockets, etc.), operating systems, virtual machines (VMs), software/applications, computer files, and/or the like. A “hardware resource” may refer to compute, storage, and/or network resources provided by physical hardware element(s). A “virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc. The term “network resource” or “communication resource” may refer to resources that are accessible by computer devices/systems via a communications network. The term “system resources” may refer to any kind of shared entities to provide services, and may include computing and/or network resources. System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable. As used herein, the term “cloud service provider” (or CSP) indicates an organization which typically operates large-scale “cloud” resources comprised of centralized, regional, and edge data centers (e.g., as used in the context of the public cloud). In other examples, a CSP may also be referred to as a Cloud Service Operator (CSO). References to “cloud computing” generally refer to computing resources and services offered by a CSP or a CSO, at remote locations with at least some increased latency, distance, or constraints relative to edge computing.
As used herein, the term “data center” refers to a purpose-designed structure that is intended to house multiple high-performance compute and data storage nodes such that a large amount of compute, data storage and network resources are present at a single location. This often entails specialized rack and enclosure systems, suitable heating, cooling, ventilation, security, fire suppression, and power delivery systems. The term may also refer to a compute and data storage node in some contexts. A data center may vary in scale between a centralized or cloud data center (e.g., largest), regional data center, and edge data center (e.g., smallest).
As used herein, the term “edge computing” refers to the implementation, coordination, and use of computing and resources at locations closer to the “edge” or collection of “edges” of a network. Deploying computing resources at the network’s edge may reduce application and network latency, reduce network backhaul traffic and associated energy consumption, improve service capabilities, improve compliance with security or data privacy requirements (especially as compared to conventional cloud computing), and improve total cost of ownership. As used herein, the term “edge compute node” refers to a real-world, logical, or virtualized implementation of a compute-capable element in the form of a device, gateway, bridge, system or subsystem, component, whether operating in a server, client, endpoint, or peer mode, and whether located at an “edge” of a network or at a connected location further within the network. References to a “node” used herein are generally interchangeable with a “device”, “component”, and “sub-system”; however, references to an “edge computing system” or “edge computing network” generally refer to a distributed architecture, organization, or collection of multiple nodes and devices, and which is organized to accomplish or offer some aspect of services or resources in an edge computing setting.
Additionally or alternatively, the term “Edge Computing” refers to a concept, as described in [6], that enables operator and 3rd party services to be hosted close to the UE's access point of attachment, to achieve an efficient service delivery through the reduced end-to-end latency and load on the transport network. As used herein, the term “Edge Computing Service Provider” refers to a mobile network operator or a 3rd party service provider offering Edge Computing service. As used herein, the term “Edge Data Network” refers to a local Data Network (DN) that supports the architecture for enabling edge applications. As used herein, the term “Edge Hosting Environment” refers to an environment providing support required for Edge Application Server's execution. As used herein, the term “Application Server” refers to application software resident in the cloud performing the server function.
The term “Internet of Things” or “IoT” refers to a system of interrelated computing devices, mechanical and digital machines capable of transferring data with little or no human interaction, and may involve technologies such as real-time analytics, machine learning and/or AI, embedded systems, wireless sensor networks, control systems, automation (e.g., smarthome, smart building and/or smart city technologies), and the like. IoT devices are usually low-power devices without heavy compute or storage capabilities. “Edge IoT devices” may be any kind of IoT devices deployed at a network’s edge.
As used herein, the term “cluster” refers to a set or grouping of entities as part of an edge computing system (or systems), in the form of physical entities (e.g., different computing systems, networks or network groups), logical entities (e.g., applications, functions, security constructs, containers), and the like. In some locations, a “cluster” is also referred to as a “group” or a “domain”. The membership of a cluster may be modified or affected based on conditions or functions, including from dynamic or property-based membership, from network or system management scenarios, or from various example techniques discussed below which may add, modify, or remove an entity in a cluster. Clusters may also include or be associated with multiple layers, levels, or properties, including variations in security features and results based on such layers, levels, or properties.
The term “application” may refer to a complete and deployable package or environment to achieve a certain function in an operational environment. The term “AI/ML application” or the like may be an application that contains some AI/ML models and application-level descriptions. The term “machine learning” or “ML” refers to the use of computer systems implementing algorithms and/or statistical models to perform specific task(s) without using explicit instructions, but instead relying on patterns and inferences. ML algorithms build or estimate mathematical model(s) (referred to as “ML models” or the like) based on sample data (referred to as “training data,” “model training information,” or the like) in order to make predictions or decisions without being explicitly programmed to perform such tasks. Generally, an ML algorithm is a computer program that learns from experience with respect to some task and some performance measure, and an ML model may be any object or data structure created after an ML algorithm is trained with one or more training datasets. After training, an ML model may be used to make predictions on new datasets. Although the term “ML algorithm” refers to different concepts than the term “ML model,” these terms as discussed herein may be used interchangeably for the purposes of the present disclosure.
The term “machine learning model,” “ML model,” or the like may also refer to ML methods and concepts used by an ML-assisted solution. An “ML-assisted solution” is a solution that addresses a specific use case using ML algorithms during operation. ML models include supervised learning (e.g., linear regression, k-nearest neighbor (KNN), decision tree algorithms, support vector machines, Bayesian algorithms, ensemble algorithms, etc.), unsupervised learning (e.g., K-means clustering, principal component analysis (PCA), etc.), reinforcement learning (e.g., Q-learning, multi-armed bandit learning, deep RL, etc.), neural networks, and the like. Depending on the implementation, a specific ML model could have many sub-models as components, and the ML model may train all sub-models together. Separately trained ML models can also be chained together in an ML pipeline during inference. An “ML pipeline” is a set of functionalities, functions, or functional entities specific to an ML-assisted solution; an ML pipeline may include one or several data sources in a data pipeline, a model training pipeline, a model evaluation pipeline, and an actor. The “actor” is an entity that hosts an ML-assisted solution using the output of the ML model inference. The term “ML training host” refers to an entity, such as a network function, that hosts the training of the model. The term “ML inference host” refers to an entity, such as a network function, that hosts the model during inference mode (which includes both the model execution as well as any online learning, if applicable). The ML host informs the actor about the output of the ML algorithm, and the actor takes a decision for an action (an “action” is performed by an actor as a result of the output of an ML-assisted solution). The term “model inference information” refers to information used as an input to the ML model for determining inference(s); the data used to train an ML model and the data used to determine inferences may overlap; however, “training data” and “inference data” refer to different concepts.
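As a minimal sketch of the pipeline vocabulary above (training host, inference host, actor), assuming invented function names and a deliberately trivial threshold model:

```python
# Illustrative ML pipeline: a training host fits a model, an inference
# host runs it on new data, and an actor acts on the output.

def train_model(training_data):
    # "ML training host": fit a trivial threshold model from samples.
    threshold = sum(training_data) / len(training_data)
    return {"threshold": threshold}

def infer(model, inference_data):
    # "ML inference host": run the trained model on new data.
    return [x > model["threshold"] for x in inference_data]

def actor(decisions):
    # The "actor" takes an action based on the inference output.
    for i, exceeded in enumerate(decisions):
        if exceeded:
            print(f"sample {i}: take corrective action")

model = train_model([0.2, 0.4, 0.6])  # training data
actor(infer(model, [0.1, 0.9]))       # inference data is distinct from training data
```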
The terms “instantiate,” “instantiation,” and the like as used herein refer to the creation of an instance. An “instance” also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code. The term “information element” refers to a structural element containing one or more fields. The term “field” refers to individual contents of an information element, or a data element that contains content. As used herein, a “database object”, “data structure”, or the like may refer to any representation of information that is in the form of an object, attribute-value pair (AVP), key-value pair (KVP), tuple, etc., and may include variables, data structures, functions, methods, classes, database records, database fields, database entities, associations between data and/or database entities (also referred to as a “relation”), blocks and links between blocks in blockchain implementations, and/or the like.
An “information object,” as used herein, refers to a collection of structured data and/or any representation of information, and may include, for example, electronic documents (or “documents”), database objects, data structures, files, audio data, video data, raw data, archive files, application packages, and/or any other like representation of information. The terms “electronic document” or “document” may refer to a data structure, computer file, or resource used to record data, and includes various file types and/or data formats such as word processing documents, spreadsheets, slide presentations, multimedia items, webpage and/or source code documents, and/or the like. As examples, the information objects may include markup and/or source code documents such as HTML, XML, JSON, Apex®, CSS, JSP, MessagePack™, Apache® Thrift™, ASN.1, Google® Protocol Buffers (protobuf), or some other document(s)/format(s) such as those discussed herein. An information object may have both a logical and a physical structure. Physically, an information object comprises one or more units called entities. An entity is a unit of storage that contains content and is identified by a name. An entity may refer to other entities to cause their inclusion in the information object. An information object begins in a document entity, which is also referred to as a root element (or "root"). Logically, an information object comprises one or more declarations, elements, comments, character references, and processing instructions, all of which are indicated in the information object (e.g., using markup).
The term “data item” as used herein refers to an atomic state of a particular object with at least one specific property at a certain point in time. Such an object is usually identified by an object name or object identifier, and properties of such an object are usually defined as database objects (e.g., fields, records, etc.), object instances, or data elements (e.g., mark-up language elements/tags, etc.). Additionally or alternatively, the term “data item” as used herein may refer to data elements and/or content items, although these terms may refer to different concepts. The term “data element” or “element” as used herein refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary. A data element is a logical component of an information object (e.g., electronic document) that may begin with a start tag (e.g., “<element>”) and end with a matching end tag (e.g., “</element>”), or only has an empty element tag (e.g., “<element />”). Any characters between the start tag and end tag, if any, are the element’s content (referred to herein as “content items” or the like).
The content of an entity may include one or more content items, each of which has an associated datatype representation. A content item may include, for example, attribute values, character values, URIs, qualified names (qnames), parameters, and the like. A qname is a fully qualified name of an element, attribute, or identifier in an information object. A qname associates a URI of a namespace with a local name of an element, attribute, or identifier in that namespace. To make this association, the qname assigns a prefix to the local name that corresponds to its namespace. The qname comprises a URI of the namespace, the prefix, and the local name. Namespaces are used to provide uniquely named elements and attributes in information objects. Content items may include text content (e.g., “<element>content item</element>”), attributes (e.g., “<element attribute="attributeValue">”), and other elements referred to as “child elements” (e.g., “<element1><element2>content item</element2></element1>”). An “attribute” may refer to a markup construct including a name-value pair that exists within a start tag or empty element tag. Attributes contain data related to their element and/or control the element’s behavior.
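The short sketch below walks such an information object; the namespace URI and element names are invented for illustration:

```python
# Walk an information object: a root (document entity), a child element,
# an attribute (name-value pair in the start tag), and a content item.
import xml.etree.ElementTree as ET

doc = """<ex:element1 xmlns:ex="http://example.invalid/ns">
  <ex:element2 attribute="attributeValue">content item</ex:element2>
</ex:element1>"""

root = ET.fromstring(doc)               # the root element
for child in root:                      # child elements
    print(child.tag)                    # qname as {namespace URI}local name
    print(child.attrib["attribute"])    # attribute value
    print(child.text)                   # the element's content item
```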
The term “channel” as used herein refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream. The term “channel” may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated. Additionally, the term “link” as used herein refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information. As used herein, the term “radio technology” refers to technology for wireless transmission and/or reception of electromagnetic radiation for information transfer. The term “radio access technology” or “RAT” refers to the technology used for the underlying physical connection to a radio based communication network. As used herein, the term “communication protocol” (either wired or wireless) refers to a set of standardized rules or instructions implemented by a communication device and/or system to communicate with other devices and/or systems, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocols stacks, and/or the like.
Examples of wireless communications protocols that may be used in various embodiments include a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, and/or a Third Generation Partnership Project (3GPP) radio communication technology including, for example, 3GPP Fifth Generation (5G) or New Radio (NR), Universal Mobile Telecommunications System (UMTS), Freedom of Multimedia Access (FOMA), Long Term Evolution (LTE), LTE-Advanced (LTE Advanced), LTE Extra, LTE-A Pro, cdmaOne (2G), Code Division Multiple Access 2000 (CDMA 2000), Cellular Digital Packet Data (CDPD), Mobitex, Circuit Switched Data (CSD), High-Speed CSD (HSCSD), Wideband Code Division Multiple Access (W-CDMA), High Speed Packet Access (HSPA), HSPA Plus (HSPA+), Time Division-Code Division Multiple Access (TD-CDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), LTE LAA, MuLTEfire, UMTS Terrestrial Radio Access (UTRA), Evolved UTRA (E-UTRA), Evolution-Data Optimized or Evolution-Data Only (EV-DO), Advanced Mobile Phone System (AMPS), Digital AMPS (D-AMPS), Total Access Communication System/Extended Total Access Communication System (TACS/ETACS), Push-to-talk (PTT), Mobile Telephone System (MTS), Improved Mobile Telephone System (IMTS), Advanced Mobile Telephone System (AMTS), DataTAC, Integrated Digital Enhanced Network (iDEN), Personal Digital Cellular (PDC), Personal Handy-phone System (PHS), Wideband Integrated Digital Enhanced Network (WiDEN), iBurst, Unlicensed Mobile Access (UMA) (also referred to as the 3GPP Generic Access Network, or GAN standard), Bluetooth®, Bluetooth Low Energy (BLE), IEEE 802.15.4 based protocols (e.g., IPv6 over Low power Wireless Personal Area Networks (6LoWPAN), WirelessHART, MiWi, Thread, 802.11a, etc.), WiFi-direct, ANT/ANT+, ZigBee, Z-Wave, 3GPP device-to-device (D2D) or Proximity Services (ProSe), Universal Plug and Play (UPnP), Low-Power Wide-Area Network (LPWAN), Long Range Wide Area Network (LoRA) or LoRaWAN™ developed by Semtech and the LoRa Alliance, Sigfox, Wireless Gigabit Alliance (WiGig) standard, Worldwide Interoperability for Microwave Access (WiMAX), mmWave standards in general (e.g., wireless systems operating at 10-300 GHz and above such as WiGig, IEEE 802.11ad, IEEE 802.11ay, etc.), V2X communication technologies (including 3GPP C-V2X), and Dedicated Short Range Communications (DSRC) communication systems such as Intelligent Transport Systems (ITS) including the European ITS-G5, ITS-G5B, ITS-G5C, etc. In addition to the standards listed above, any number of satellite uplink technologies may be used for purposes of the present disclosure including, for example, radios compliant with standards issued by the International Telecommunication Union (ITU), or the European Telecommunications Standards Institute (ETSI), among others. The examples provided herein are thus understood as being applicable to various other communication technologies, both existing and not yet formulated.
The term “access network” refers to any network, using any combination of radio technologies, RATs, and/or communication protocols, used to connect user devices and service providers. In the context of WLANs, an “access network” is an IEEE 802 local area network (LAN) or metropolitan area network (MAN) between terminals and access routers connecting to provider services. The term “access router” refers to a router that terminates a medium access control (MAC) service from terminals and forwards user traffic to information servers according to Internet Protocol (IP) addresses.
The term “SMTC” refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration. The term “SSB” refers to a synchronization signal/Physical Broadcast Channel (SS/PBCH) block, which includes a Primary Synchronization Signal (PSS), a Secondary Synchronization Signal (SSS), and a PBCH. The term “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure. The term “Primary SCG Cell” refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation. The term “Secondary Cell” refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA. The term “Secondary Cell Group” refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC. The term “Serving Cell” refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; there is only one serving cell, comprising the primary cell. The term “serving cell” or “serving cells” refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA. The term “Special Cell” refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.
The term “A1 policy” refers to a type of declarative policy expressed using formal statements that enable the non-RT RIC function in the SMO to guide the near-RT RIC function, and hence the RAN, towards better fulfilment of the RAN intent.
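Purely as a non-normative illustration of such a declarative policy, and not the O-RAN A1 policy schema (every field name below is invented), an A1-style policy might be represented as structured data:

```python
# Invented shape of a declarative A1-style policy: a scope identifying
# what it applies to, plus formal statements expressing the RAN intent.
a1_policy = {
    "policyId": "ts-policy-001",
    "scope": {"cellIdList": ["cell-1", "cell-2"]},  # which parts of the RAN it guides
    "statements": {
        "loadBalancingObjective": {"targetCellLoadPercent": 70},
    },
}
```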
The term “A1 Enrichment information” refers to information utilized by near-RT RIC that is collected or derived at SMO/non-RT RIC either from non-network data sources or from network functions themselves.
The term “A1-Policy Based Traffic Steering Process Mode” refers to an operational mode in which the Near-RT RIC is configured through A1 Policy to use Traffic Steering Actions to ensure a more specific notion of network performance (for example, applying to smaller groups of E2 Nodes and UEs in the RAN) than that which it ensures in the Background Traffic Steering.
The term “Background Traffic Steering Processing Mode” refers to an operational mode in which the Near-RT RIC is configured through O1 to use Traffic Steering Actions to ensure a general background network performance which applies broadly across E2 Nodes and UEs in the RAN.
The term “Baseline RAN Behavior” refers to the default RAN behavior as configured at the E2 Nodes by the SMO.
The term “E2” refers to an interface connecting the Near-RT RIC and one or more O-CU-CPs, one or more O-CU-UPs, one or more O-DUs, and one or more O-eNBs. The term “E2 Node” refers to a logical node terminating the E2 interface. In this version of the specification, O-RAN nodes terminating the E2 interface are: for NR access: O-CU-CP, O-CU-UP, O-DU, or any combination; and for E-UTRA access: O-eNB.
The term “Intents”, in the context of O-RAN systems/implementations, refers to declarative policy to steer or guide the behavior of RAN functions, allowing the RAN function to calculate the optimal result to achieve the stated objective.
The term “O-RAN non-real-time RAN Intelligent Controller” or “non-RT RIC” refers to a logical function that enables non-real-time control and optimization of RAN elements and resources, AI/ML workflow including model training and updates, and policy-based guidance of applications/features in Near-RT RIC.
The term “Near-RT RIC” or “O-RAN near-real-time RAN Intelligent Controller” refers to a logical function that enables near-real-time control and optimization of RAN elements and resources via fine-grained (e.g., UE basis, Cell basis) data collection and actions over E2 interface.
The term “O-RAN Central Unit” or “O-CU” refers to a logical node hosting RRC, SDAP and PDCP protocols.
The term “O-RAN Central Unit - Control Plane” or “O-CU-CP” refers to a logical node hosting the RRC and the control plane part of the PDCP protocol.
The term “O-RAN Central Unit - User Plane” or “O-CU-UP” refers to a logical node hosting the user plane part of the PDCP protocol and the SDAP protocol.
The term “O-RAN Distributed Unit” or “O-DU” refers to a logical node hosting RLC/MAC/High-PHY layers based on a lower layer functional split.
The term “O-RAN eNB” or “O-eNB” refers to an eNB or ng-eNB that supports E2 interface.
The term “O-RAN Radio Unit” or “O-RU” refers to a logical node hosting Low-PHY layer and RF processing based on a lower layer functional split. This is similar to 3GPP’s “TRP” or “RRH” but more specific in including the Low-PHY layer (FFT/iFFT, PRACH extraction).
The term “O1” refers to an interface between orchestration & management entities (Orchestration/NMS) and O-RAN managed elements, for operation and management, by which FCAPS management, Software management, File management and other similar functions shall be achieved. The term “RAN UE Group” refers to an aggregation of UEs whose grouping is set in the E2 nodes through E2 procedures also based on the scope of A1 policies. These groups can then be the target of E2 CONTROL or POLICY messages.
The term “Traffic Steering Action” refers to the use of a mechanism to alter RAN behavior. Such actions include E2 procedures such as CONTROL and POLICY.
The term “Traffic Steering Inner Loop” refers to the part of the Traffic Steering processing, triggered by the arrival of periodic TS-related KPM (Key Performance Measurement) reports from an E2 Node, which includes UE grouping, setting additional data collection from the RAN, as well as selection and execution of one or more optimization actions to enforce Traffic Steering policies.
The term “Traffic Steering Outer Loop” refers to the part of the Traffic Steering processing, triggered by the near-RT RIC setting up or updating the Traffic Steering aware resource optimization procedure based on information from A1 Policy setup or update, A1 Enrichment Information (EI) and/or the outcome of Near-RT RIC evaluation, which includes the initial configuration (preconditions) and injection of related A1 policies, and triggering conditions for TS changes.
The term “Traffic Steering Processing Mode” refers to an operational mode in which either the RAN or the Near-RT RIC is configured to ensure a particular network performance. This performance includes such aspects as cell load and throughput, and can apply differently to different E2 nodes and UEs. Throughout this process, Traffic Steering Actions are used to fulfill the requirements of this configuration.
The term “Traffic Steering Target” refers to the intended performance result that is desired from the network, which is configured in the Near-RT RIC over O1.
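As a schematic sketch of how the two loops defined above relate (all names hypothetical; this is not Near-RT RIC code), the outer loop derives a configuration from A1 input and the inner loop reacts to periodic KPM reports:

```python
# Hypothetical rendering of the Traffic Steering loops described above.

def outer_loop(a1_policy, enrichment_info):
    # Triggered by A1 Policy setup/update: produce the preconditions and
    # triggering conditions that the inner loop will enforce.
    return {"policy": a1_policy, "trigger_load": 0.8, "ei": enrichment_info}

def inner_loop(config, kpm_report):
    # Triggered by arrival of periodic TS-related KPM from an E2 Node:
    # group UEs and select Traffic Steering Actions (e.g., E2 CONTROL).
    actions = []
    for ue_group, load in kpm_report.items():
        if load > config["trigger_load"]:
            actions.append(("CONTROL", ue_group))
    return actions

config = outer_loop({"policyId": "ts-policy-001"}, enrichment_info=None)
print(inner_loop(config, {"group-a": 0.9, "group-b": 0.5}))  # [('CONTROL', 'group-a')]
```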
Furthermore, any of the disclosed embodiments and example implementations can be embodied in the form of various types of hardware, software, firmware, middleware, or combinations thereof, including in the form of control logic, and using such hardware or software in a modular or integrated manner. Additionally, any of the software components or functions described herein can be implemented as software, program code, script, instructions, etc., operable to be executed by processor circuitry. These components, functions, programs, etc., can be developed using any suitable computer language such as, for example, Python, PyTorch, NumPy, Ruby, Ruby on Rails, Scala, Smalltalk, Java™, C++, C#, “C”, Kotlin, Swift, Rust, Go (or “Golang”), ECMAScript, JavaScript, TypeScript, Jscript, ActionScript, Server-Side JavaScript (SSJS), PHP, Perl, Lua, Torch/Lua with Just-In-Time compiler (LuaJIT), Accelerated Mobile Pages Script (AMPscript), VBScript, JavaServer Pages (JSP), Active Server Pages (ASP), Node.js, ASP.NET, JAMscript, Hypertext Markup Language (HTML), Extensible HTML (XHTML), Extensible Markup Language (XML), XML User Interface Language (XUL), Scalable Vector Graphics (SVG), RESTful API Modeling Language (RAML), wiki markup or Wikitext, Wireless Markup Language (WML), JavaScript Object Notation (JSON), Apache® MessagePack™, Cascading Stylesheets (CSS), Extensible Stylesheet Language (XSL), Mustache template language, Handlebars template language, Guide Template Language (GTL), Apache® Thrift, Abstract Syntax Notation One (ASN.1), Google® Protocol Buffers (protobuf), Bitcoin Script, EVM® bytecode, Solidity™, Vyper (Python derived), Bamboo, Lisp Like Language (LLL), Simplicity provided by Blockstream™, Rholang, Michelson, Counterfactual, Plasma, Plutus, Sophia, Salesforce® Apex®, and/or any other programming language or development tools including proprietary programming languages and/or development tools. The software code can be stored as computer- or processor-executable instructions or commands on a physical non-transitory computer-readable medium. Examples of suitable media include RAM, ROM, magnetic media such as a hard-drive or a floppy disk, or an optical medium such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like, or any combination of such storage or transmission devices.
Unless used differently herein, terms, definitions, and abbreviations may be consistent with terms, definitions, and abbreviations defined in 3GPP TR 21.905 v16.0.0 (2019-06). For the purposes of the present document, the following abbreviations may apply to the examples and embodiments discussed herein.
Table 8 Abbreviations: [abbreviation table rendered as images (imgf000058_0001 through imgf000068_0001) in the original publication; not reproduced as text]
The foregoing description provides illustration and description of various example embodiments, but is not intended to be exhaustive or to limit the scope of embodiments to the precise forms disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. Where specific details are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the disclosure can be practiced without, or with variation of, these specific details. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.

Claims

What is claimed is:
1. An apparatus for a service management and orchestration (SMO) in a 5G system (5GS), the apparatus comprising: a processor configured to: identify a first request received from a consumer in the SMO in an open radio access network (O-RAN), wherein the first request is to instantiate a network function (NF) on an O-Cloud in the 5GS; cause to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF; identify an indication received from the DMS that the deployment of the NF has been completed; and cause to send a notification to the consumer that the NF has been instantiated on the O-Cloud.
2. The apparatus of claim 1, wherein the request specifies placement requirements indicating where the NF is to be instantiated.
3. The apparatus of claim 1, wherein the request is an UploadNfDescriptorsRequest from the consumer.
4. The apparatus of claim 1, wherein the notification is an instantiateNfResponse response to the consumer.
5. The apparatus of claim 4, wherein the instantiateNfResponse response comprises output parameters indicating the result of the deployment of the NF on the O-Cloud.
6. The apparatus of claim 1, wherein the processor is further configured to onboard the NF, wherein the NF comprises a descriptor providing requirements associated with the deployment of the NF.
7. The apparatus of claim 1, wherein the notification is an O-Cloud lifecycle management (LCM) notification.
8. The apparatus of claim 1, wherein the processor is further configured to identify a termination request received from the consumer, wherein the termination request is for NF termination.
9. The apparatus of claim 8, wherein the processor is further configured to cause to send a termination response to the consumer indicating the result of the NF termination.
10. A computer-readable storage medium comprising instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations comprising: identifying a first request received from a consumer in a service management and orchestration (SMO) in an open radio access network (O-RAN), wherein the first request is to instantiate a network function (NF) on an O-Cloud in a 5G system (5GS); causing to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF; identifying an indication received from the DMS that the deployment of the NF has been completed; and causing to send a notification to the consumer that the NF has been instantiated on the O-Cloud.
11. The computer-readable storage medium of claim 10, wherein the request specifies placement requirements indicating where the NF is to be instantiated.
12. The computer-readable storage medium of claim 10, wherein the request is an UploadNfDescriptorsRequest from the consumer.
13. The computer-readable storage medium of claim 10, wherein the notification is an instantiateNfResponse response to the consumer.
14. The computer-readable storage medium of claim 13, wherein the instantiateNfResponse response comprises output parameters indicating the result of the deployment of the NF on the O-Cloud.
15. The computer-readable storage medium of claim 10, wherein the operations further comprise onboarding the NF, wherein the NF comprises a descriptor providing requirements associated with the deployment of the NF.
16. The computer-readable storage medium of claim 10, wherein the notification is an O-Cloud lifecycle management (LCM) notification.
17. The computer-readable storage medium of claim 10, wherein the operations further comprise identifying a termination request received from the consumer, wherein the termination request is for NF termination.
18. The computer-readable storage medium of claim 17, wherein the operations further comprise causing to send a termination response to the consumer indicating the result of the NF termination.
19. A method for a service management and orchestration (SMO) in a 5G system (5GS), the method comprising: identifying a first request received from a consumer in the SMO in an open radio access network (O-RAN), wherein the first request is to instantiate a network function (NF) on an O-Cloud in the 5GS; causing to send a second request to a deployment management services (DMS) on the O-Cloud to create a deployment of the NF; identifying an indication received from the DMS that the deployment of the NF has been completed; and causing to send a notification to the consumer that the NF has been instantiated on the O-Cloud.
20. The method of claim 19, wherein the request specifies placement requirements indicating where the NF is to be instantiated.
21. The method of claim 19, wherein the request is an UploadNfDescriptorsRequest from the consumer.
22. The method of claim 19, wherein the notification is an instantiateNfResponse response to the consumer.
23. The method of claim 22, wherein the instantiateNfResponse response comprises output parameters indicating the result of the deployment of the NF on the O-Cloud.
24. An apparatus comprising means for performing any of the methods of claims 19-23.
25. A network node comprising a communication interface and processing circuitry connected thereto and configured to perform the methods of claims 19-23.
PCT/US2022/024390 2021-04-12 2022-04-12 O-cloud lifecycle management service support WO2022221260A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163173790P 2021-04-12 2021-04-12
US63/173,790 2021-04-12

Publications (1)

Publication Number Publication Date
WO2022221260A1 true WO2022221260A1 (en) 2022-10-20

Family

ID=83640984

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/024390 WO2022221260A1 (en) 2021-04-12 2022-04-12 O-cloud lifecycle management service support

Country Status (1)

Country Link
WO (1) WO2022221260A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115643263A (en) * 2022-12-08 2023-01-24 阿里巴巴(中国)有限公司 Cloud native platform resource allocation method, storage medium and electronic device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200329381A1 (en) * 2019-07-19 2020-10-15 Joey Chou Orchestration and configuration of e2e network slices across 3gpp core network and oran
WO2020242987A1 (en) * 2019-05-24 2020-12-03 Apple Inc. 5g new radio load balancing and mobility robustness

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020242987A1 (en) * 2019-05-24 2020-12-03 Apple Inc. 5g new radio load balancing and mobility robustness
US20200329381A1 (en) * 2019-07-19 2020-10-15 Joey Chou Orchestration and configuration of e2e network slices across 3gpp core network and oran

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
'O-RAN Use Cases and Deployment Scenarios', White Paper, 23 February 2020 [retrieved on 2022.07.12]. Retrieved from <https://assets-global.website-files.com/60b4ffd4ca081979751b5ed2/60e5aff9fc5c8d496515d7fe_O-RAN%2BUse%2BCases%2Band%2BDeployment%2BScenarios%2BWhitepaper%2BFebruary%2B2020.pdf>. *
ENG WEI KOO et al., 'VIAVI Test Proposals for RIC-assisted Dynamic Traffic Steering Use Cases Design and Verifications', 19 February 2020 [retrieved on 2022.07.12]. Retrieved from <https://wiki.o-ran-sc.org/display/RSAC/RSAC+Meetings?preview=/1179971/17269330/VIAVI%20ORAN%20OSC%20High%20Level%20Overview%202020Feb01E%20Ex.pdf>. *
MARTIN SKORUPSKI, 'O-RAN Architecture', 14 July 2020 [retrieved on 2022.07.12]. Retrieved from <https://wiki.o-ran-sc.org/display/OAM/OAM+Architecture?preview=%2F3605245%2F20875811%2Fo-ran-architecture.pptx>. *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115643263A (en) * 2022-12-08 2023-01-24 阿里巴巴(中国)有限公司 Cloud native platform resource allocation method, storage medium and electronic device

Similar Documents

Publication Publication Date Title
WO2022087474A1 (en) Intra-user equipment prioritization for handling overlap of uplink control and uplink data channels
EP4233419A1 (en) Resource allocation for new radio multicast-broadcast service
WO2022031553A1 (en) Data plane for big data and data as a service in next generation cellular networks
WO2022221260A1 (en) O-cloud lifecycle management service support
WO2022221495A1 (en) Machine learning support for management services and management data analytics services
WO2022125296A1 (en) Mechanisms for enabling in-network computing services
WO2022087489A1 (en) Downlink control information (dci) based beam indication for new radio (nr)
WO2022261028A1 (en) Data functions and procedures in the non-real time radio access network intelligent controller
WO2024026515A1 (en) Artificial intelligence and machine learning entity testing
WO2023049345A1 (en) Load balancing optimization for 5g systems
US20240155393A1 (en) Measurement reporting efficiency enhancement
WO2024092132A1 (en) Artificial intelligence and machine learning entity loading in cellular networks
WO2024081642A1 (en) Pipelining services in next-generation cellular networks
WO2023014745A1 (en) Performance measurements for network exposure function
WO2023122043A1 (en) Performance measurements for location management function on location management
WO2024015747A1 (en) Session management function selection in cellular networks supporting distributed non-access stratum between a device and network functions
WO2022232038A1 (en) Performance measurements for unified data repository (udr)
WO2024091970A1 (en) Performance evaluation for artificial intelligence/machine learning inference
WO2024076852A1 (en) Data collection coordination function and network data analytics function framework for sensing services in next generation cellular networks
WO2023122037A1 (en) Measurements and location data supporting management data analytics (mda) for coverage problem analysis
WO2024020519A1 (en) Systems and methods for sharing unstructured data storage function services
WO2023055852A1 (en) Performance measurements for policy authorization and event exposure for network exposure functions
WO2023069750A1 (en) Good cell quality criteria
WO2024039950A2 (en) Constrained application protocol for computing services in cellular networks
WO2022240850A1 (en) Time domain restriction for channel state information reference signal configuration

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22788761

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22788761

Country of ref document: EP

Kind code of ref document: A1