WO2023114017A1 - Network resource model based solutions for ai-ml model training - Google Patents

Network resource model based solutions for AI-ML model training

Info

Publication number
WO2023114017A1
WO2023114017A1 (PCT/US2022/051545)
Authority
WO
WIPO (PCT)
Prior art keywords
training
entity
attribute
mns
consumer
Prior art date
Application number
PCT/US2022/051545
Other languages
French (fr)
Inventor
Yizhi Yao
Joey Chou
Original Assignee
Intel Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corporation filed Critical Intel Corporation
Priority to CN202280046922.9A priority Critical patent/CN117716674A/en
Publication of WO2023114017A1 publication Critical patent/WO2023114017A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14Network analysis or design

Definitions

  • Wireless communication systems are rapidly growing in usage. Further, wireless communication technology has evolved from voice-only communications to also include the transmission of data, such as Internet and multimedia content, to a variety of devices. To accommodate a growing number of devices communicating, many wireless communication systems share the available communication channel resources among devices. Further, Internet-of-Things (IoT) devices are also growing in usage and can coexist with user devices in various wireless communication systems such as cellular networks.
  • FIG. 1 illustrates a wireless communication system
  • FIG. 2 illustrates a management data analytics (MDA) system in accordance with one embodiment.
  • FIG. 3 illustrates an artificial intelligence (AI) system in accordance with one embodiment.
  • FIG. 4 illustrates a MDA machine learning (ML) system in accordance with one embodiment.
  • FIG. 5 illustrates a chart in accordance with one embodiment.
  • FIG. 6 illustrates a machine learning training system in accordance with one embodiment.
  • FIG. 7 illustrates a message flow in accordance with one embodiment.
  • FIG. 8 illustrates a logic flow in accordance with one embodiment.
  • FIG. 9 illustrates a machine learning software architecture in accordance with one embodiment.
  • FIG. 10 illustrates an apparatus in accordance with one embodiment.
  • FIG. 11 A illustrates a first class diagram in accordance with one embodiment.
  • FIG. 11B illustrates a second class diagram in accordance with one embodiment.
  • FIG. 12A illustrates a first class hierarchy in accordance with one embodiment.
  • FIG. 12B illustrates a second class hierarchy in accordance with one embodiment.
  • FIG. 13 illustrates a first network in accordance with one embodiment.
  • FIG. 14 illustrates a second network in accordance with one embodiment.
  • FIG. 15 illustrates a third network in accordance with one embodiment.
  • FIG. 16 illustrates computer readable medium in accordance with one embodiment.
  • Various embodiments may generally relate to the field of wireless communications. More particularly, various embodiments are directed to principles for radio access network (RAN) intelligence to enable artificial intelligence (AI) and machine learning (ML) techniques (collectively referred to as “AI” or “ML” or “AI/ML”), a functional framework for AI/ML functionality, input/output (I/O) of components for AI/ML enabled optimization, and use cases and solutions of AI/ML enabled RAN.
  • RAN radio access network
  • I/O input/output
  • the RAN intelligence enabled by AI/ML can be implemented, for example, as part of a management data analytics (MDA) system or platform in alignment with the SA5 5G Services Based Management Architecture (SBMA).
  • MDA management data analytics
  • SBMA 5G Services Based Management Architecture
  • RAN3 is responsible for an overall universal mobile telecommunications system (UMTS) terrestrial radio access network (UTRAN), an evolved UMTS terrestrial radio access network (E-UTRAN), and a next generation RAN (NG-RAN) architecture and the specification of protocols for the related network interfaces.
  • UMTS universal mobile telecommunications system
  • UTRAN UMTS terrestrial radio access network
  • E-UTRAN evolved UMTS terrestrial radio access network
  • NG-RAN next generation RAN
  • Embodiments may relate to, for example, 3GPP technical report (TR) 28.809 titled “Study on enhancement of Management Data Analytics” Release 17 version 17.0.0 (2021-03); 3GPP technical standard (TS) 28.104 titled “Management Data Analytics (MDA)” Release 17 version 17.1.1 (2022-09); 3GPP TS 28.620 titled “Telecommunication management; Generic Network Resource Model (NRM) Integration Reference Point (IRP); Information Service (IS)”; 3GPP TS 32.156 titled “Telecommunication management; Fixed Mobile Convergence (FMC) Model Repertoire”; 3GPP TS 28.104 titled “Management and orchestration; Management Data Analytics (MDA)”; 3GPP TS 23.288 titled “Architecture enhancements for 5G System (5GS) to support network data analytics services”; and 3GPP TS 28.532 titled “Management and orchestration; Generic management services”, including any progeny, revisions and variants.
  • TR technical report
  • TS Technical standard
  • Some embodiments may be implemented to support management data analytics (MDA) for a 3GPP system.
  • MDA management data analytics
  • 3GPP TS 28.104 specifies MDA capabilities with corresponding analytics inputs and analytics outputs (reports), as well as processes and requirements for Management Data Analytics Service (MDAS), historical data handling for MDA, and ML support for MDA.
  • MDAS Management Data Analytics Service
  • This document also describes an MDA functionality and service framework, and the MDA role in a management loop.
  • 3GPP TR 28.809 generally studies enhancements for MDA. More particularly, 3GPP TR 28.809 describes MDA use cases, identifies corresponding potential requirements, and presents possible solutions with analytics input and output (report).
  • The study also captures the MDA functionality and service framework, MDA process, MDA role in the management loop, and management aspects of MDA. Moreover, the study provides recommendations for the normative specification work in full alignment with the 3GPP TSG SA RAN3 and/or Working Group Five (SA5) 5G SBMA. The main objectives of SA5 are Management, Orchestration and Charging for 3GPP systems. Both functional and service perspectives are covered.
  • MDA is a key enabler of automation and intelligence, and it is considered a foundational capability for mobile networks and services management and orchestration.
  • The MDA provides a capability of processing and analyzing data related to network and service events and status, such as performance measurements, key performance indicators (KPIs), reports, alarms, configuration data, network analytics data, and service experience data from analytics functions (AFs).
  • KPIs key performance indicators
  • AFs analytics functions
  • The MDA may provide analytics outputs, such as statistics or predictions, root cause analysis of issues, and recommendations, to enable necessary actions for network and service operations.
  • the MDA output is provided by a Management Data Analytics Service (MDAS) producer to corresponding consumers that request the analytics.
  • MDAS Management Data Analytics Service
  • the MDA can identify ongoing issues impacting the performance of the network and services, and help to identify in advance potential issues that may cause potential failure and/or performance degradation.
  • the MDA can also assist to predict the network and service demand to enable the timely resource provisioning and deployments which would allow fast time-to-market network and service deployments.
  • the MDAS are services exposed by the MDA.
  • The MDAS can be consumed by various consumers, including for instance management functions (MnFs) such as management service (MnS) producers and MnS consumers for network and service management, network functions (NFs) (e.g., network data analytics function (NWDAF)), self-organizing network (SON) functions, network and service optimization tools/functions, service level specification (SLS) assurance functions, human operators, application functions (AFs), and so forth.
  • MnFs management functions
  • MnS management service
  • NFs network functions
  • SON self-organizing network
  • SLS service level specification
  • A MDA MnS (also referred to as a MDAS) in the context of SBMA enables any authorized consumer to request and receive analytics. It is worthy to note that the terms MDAS and MDA MnS are equivalent and may be used interchangeably throughout this document.
  • 3GPP TS 28.105 specifies AI/ML management capabilities and services for 5GS where AI/ML is used, including management and orchestration (e.g., MDA as defined in 3GPP TS 28.104) and 5G networks (e.g., a network data analytics function (NWDAF) as defined in 3GPP TS 23.288).
  • 3GPP TS 28.105 also describes the functionality and service framework for AI/ML management.
  • the AI/ML inference function in the 5GS uses an ML model for inference.
  • An ML entity (which could be an ML model or an entity that contains one or more ML models) and the AI/ML inference function need to be managed.
  • 3GPP TS 28.105 specifies the AI/ML management related capabilities and services, which include ML training for training the ML model(s) associated with an ML entity.
  • 3GPP TS 28.105 specifies AI/ML functionality and a service framework for ML training.
  • An entity playing the role of an ML Training MnS producer may consume various data for ML training purposes.
  • The ML entity training capability is provided by the ML Training MnS producer, in the context of SBMA, to the authorized consumers.
  • the ML entity training refers to the training of ML model(s) associated with the ML entity.
  • The ML Training MnS producer may implement internal business logic related to ML training in order to leverage current and historical data related to MDA and 5G networks to monitor the networks and/or services that are relevant to the ML entity, prepare the data for model training, and trigger and conduct the appropriate ML training.
  • An ML entity is an entity that is either an ML model or contains one or more ML model(s) and ML model related metadata. It can be managed as a single composite entity. Metadata may include, e.g., the applicable runtime context for the ML model.
  • An AI decision entity is an entity that applies a non-ML based logic for making AI decisions and that can be managed as a single composite entity.
  • An ML model or AI/ML model is a mathematical algorithm that can be "trained" by data and human expert input as examples to replicate a decision an expert would make when provided that same information.
  • ML model training refers to capabilities of an ML training function to take data, run it through an ML model, derive the associated loss and adjust the parameterization of that ML model based on the computed loss.
  • ML training refers to capabilities and associated end-to-end processes to enable an ML training function to perform ML model training (as defined above).
  • ML training capabilities may include interaction with other parties to collect and format the data required for training the ML model, and ML model training.
  • An ML training function is a function with ML training capabilities; it is also referred to as an MLT function.
  • An AI/ML inference function is a function that employs an ML entity and/or AI decision entity to conduct inference.
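  • The ML model training capability defined above (take data, run it through an ML model, derive the associated loss, and adjust the parameterization of that ML model based on the computed loss) can be sketched as a minimal gradient-descent loop. The linear model, squared-error loss, and sample data below are illustrative placeholders, not part of any specification.

```python
# Minimal sketch of ML model training as defined above: run data through
# the model, derive the loss, and adjust the model's parameterization
# based on the computed loss. Model and data are illustrative only.

def train_step(weights, bias, samples, learning_rate=0.01):
    """One training iteration over (x, y) samples for y ≈ weights*x + bias."""
    grad_w = grad_b = total_loss = 0.0
    for x, y in samples:
        pred = weights * x + bias      # run the data through the ML model
        error = pred - y
        total_loss += error * error    # derive the associated (squared) loss
        grad_w += 2 * error * x
        grad_b += 2 * error
    n = len(samples)
    # adjust the parameterization based on the computed loss gradient
    weights -= learning_rate * grad_w / n
    bias -= learning_rate * grad_b / n
    return weights, bias, total_loss / n

samples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # underlying rule: y = 2x + 1
w, b, loss = 0.0, 0.0, None
for _ in range(2000):
    w, b, loss = train_step(w, b, samples)
```

After enough iterations the parameterization approaches the underlying rule (w near 2, b near 1), which is the behavior the ML training function is expected to drive end to end.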
  • Embodiments attempt to solve these and other challenges.
  • Embodiments define a set of standard apparatus, systems, procedures, methods and techniques for ML training for wireless communications systems, such as a 5GS or a sixth generation system (6GS).
  • Embodiments also provide a set of information model definitions suitable for AI/ML management.
  • Before an AI/ML model is deployed, such as for an AI/ML inference function (referred to as an “inference function”) to conduct inference, it needs to be trained.
  • ML training can be performed by an entity external to the inference function.
  • the ML model is trained by an ML Training (MLT) MnS producer.
  • the training can be triggered by one or more requests from one or more MLT MnS consumers, or initiated by the MLT MnS producer (e.g., as a result of model evaluation).
  • MLT ML Training
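  • As described above, training may be triggered by requests from one or more MLT MnS consumers or initiated by the MLT MnS producer itself (e.g., as a result of model evaluation). The sketch below models that distinction; the class and field names are illustrative assumptions, not the information object class attributes standardized in 3GPP TS 28.105.

```python
# Hypothetical sketch of a training trigger: either requested by an MLT MnS
# consumer or initiated by the MLT MnS producer (requested_by is None).
# Names are illustrative, not 3GPP TS 28.105 attribute names.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MLTrainingTrigger:
    ml_entity_id: str                  # ML entity whose model(s) to train
    inference_type: str                # inference capability being trained for
    requested_by: Optional[str] = None # consumer id, or None if the producer
                                       # initiated training (e.g., evaluation)

    @property
    def producer_initiated(self) -> bool:
        return self.requested_by is None

requests = [
    MLTrainingTrigger("mle-001", "CoverageProblemAnalysis", "consumer-a"),
    MLTrainingTrigger("mle-002", "SLSAnalysis"),  # producer-initiated
]
producer_initiated = [r for r in requests if r.producer_initiated]
```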
  • an apparatus suitable to train an ML entity or model for a network node in a 3GPP system may comprise a memory interface communicatively coupled to processor circuitry.
  • the memory interface may send or receive, to or from a data storage device, management information for a network resource model (NRM) of a fifth generation system (5GS).
  • the processor circuitry may determine to initiate training of an ML entity using the management information.
  • the training may be performed by a MnS producer of the 5GS.
  • the processor circuitry may determine an inference type associated with the ML entity, select training data to train the ML entity, and train the ML entity according to the inference type using the selected training data by the MnS producer.
  • the trained ML entity may be used to conduct inference operations for a MnS consumer of the 5GS.
  • Other embodiments are described and claimed.
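  • The producer-side flow of the apparatus described above (determine to initiate training, determine the inference type associated with the ML entity, select training data, and train the entity accordingly) can be sketched as follows. All identifiers and the dictionary layout are hypothetical illustrations; the real behavior is governed by the NRM and MnS operations.

```python
# Illustrative sketch of the MnS-producer-side training flow described above.
# Data records and entity fields are hypothetical placeholders.

def select_training_data(data_store, inference_type):
    """Select the subset of collected data relevant to the inference type."""
    return [d for d in data_store if d["inference_type"] == inference_type]

def train_ml_entity(ml_entity, data_store):
    """Train the ML entity according to its inference type (sketch)."""
    inference_type = ml_entity["inference_type"]  # determine inference type
    data = select_training_data(data_store, inference_type)
    if not data:
        return ml_entity  # no relevant data; leave the entity untrained
    # training itself is elided; record that the entity was trained
    return {**ml_entity, "trained": True, "training_samples": len(data)}

data_store = [
    {"inference_type": "coverage-analysis", "kpi": 0.91},
    {"inference_type": "coverage-analysis", "kpi": 0.88},
    {"inference_type": "mobility-analysis", "kpi": 0.75},
]
entity = {"id": "mle-001", "inference_type": "coverage-analysis"}
trained = train_ml_entity(entity, data_store)
```

The trained entity would then be available to conduct inference operations for an MnS consumer, per the flow above.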
  • FIG. 1 illustrates an example of a wireless communications system 100.
  • the example wireless communications system 100 is described in the context of the long-term evolution (LTE) and fifth generation (5G) new radio (NR) (5G NR) cellular networks communication standards as defined by one or more 3GPP technical specifications (TSs) and/or technical reports (TRs).
  • LTE long-term evolution
  • NR new radio
  • TSs 3GPP technical specifications
  • TRs technical reports
  • the wireless communications system 100 includes UE 102a and UE 102b (collectively referred to as the "UEs 102").
  • the UEs 102 are illustrated as smartphones (e.g., handheld touchscreen mobile computing devices connectable to one or more cellular networks).
  • Any of the UEs 102 can include other mobile or non-mobile computing devices, such as consumer electronics devices, cellular phones, smartphones, feature phones, tablet computers, wearable computer devices, personal digital assistants (PDAs), pagers, wireless handsets, desktop computers, laptop computers, in-vehicle infotainment (IVI), in-car entertainment (ICE) devices, an Instrument Cluster (IC), head-up display (HUD) devices, onboard diagnostic (OBD) devices, dashtop mobile equipment (DME), mobile data terminals (MDTs), Electronic Engine Management System (EEMS), electronic/engine control units (ECUs), electronic/engine control modules (ECMs), embedded systems, microcontrollers, control modules, engine management systems (EMS), networked or “smart” appliances, machine-type communications (MTC) devices, machine-to-machine (M2M) devices, Internet of Things (IoT) devices, or combinations of them, among others.
  • PDAs personal digital assistants
  • IVI in-vehicle infotainment
  • ICE in-car entertainment
  • Any of the UEs 102 may be IoT UEs, which can include a network access layer designed for low-power IoT applications utilizing short-lived UE connections.
  • An IoT UE can utilize technologies such as M2M or MTC for exchanging data with an MTC server or device using, for example, a public land mobile network (PLMN), proximity services (ProSe), device-to-device (D2D) communication, sensor networks, IoT networks, or combinations of them, among others.
  • PLMN public land mobile network
  • ProSe proximity services
  • D2D device-to-device
  • the M2M or MTC exchange of data may be a machine-initiated exchange of data.
  • An IoT network describes interconnecting IoT UEs, which can include uniquely identifiable embedded computing devices (within the Internet infrastructure), with short-lived connections.
  • The IoT UEs may execute background applications (e.g., keep-alive messages or status updates) to facilitate the connections of the IoT network.
  • the UEs 102 are configured to connect (e.g., communicatively couple) with a radio access network (RAN) 112.
  • The RAN 112 may be a next generation RAN (NG RAN), an evolved UMTS terrestrial radio access network (E-UTRAN), or a legacy RAN, such as a UMTS terrestrial radio access network (UTRAN) or a GSM EDGE radio access network (GERAN).
  • NG RAN may refer to a RAN 112 that operates in a 5G NR wireless communications system 100
  • E-UTRAN may refer to a RAN 112 that operates in an LTE or 4G wireless communications system 100.
  • connections 118 and 120 are illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols, such as a global system for mobile communications (GSM) protocol, a code-division multiple access (CDMA) network protocol, a push-to-talk (PTT) protocol, a PTT over cellular (POC) protocol, a universal mobile telecommunications system (UMTS) protocol, a 3GPP LTE protocol, a 5G NR protocol, or combinations of them, among other communication protocols.
  • GSM global system for mobile communications
  • CDMA code-division multiple access
  • PTT push-to-talk
  • POC PTT over cellular
  • UMTS universal mobile telecommunications system
  • 3GPP LTE Long Term Evolution
  • the UE 102b is shown to be configured to access an access point (AP) 104 (also referred to as "WLAN node 104," “WLAN 104,” “WLAN Termination 104,” “WT 104" or the like) using a connection 122.
  • the connection 122 can include a local wireless connection, such as a connection consistent with any IEEE 802.11 protocol, in which the AP 104 would include a wireless fidelity (Wi-Fi) router.
  • Wi-Fi wireless fidelity
  • the AP 104 is shown to be connected to the Internet without connecting to the core network of the wireless system, as described in further detail below.
  • the RAN 112 can include one or more nodes such as RAN nodes 106a and 106b (collectively referred to as “RAN nodes 106" or “RAN node 106") that enable the connections 118 and 120.
  • RAN nodes 106 nodes 106a and 106b
  • RAN node 106 nodes 106
  • the terms "access node,” “access point,” or the like may describe equipment that provides the radio baseband functions for data or voice connectivity, or both, between a network and one or more users.
  • These access nodes can be referred to as base stations (BS), gNodeBs, gNBs, eNodeBs, eNBs, NodeBs, RAN nodes, road side units (RSUs), transmission reception points (TRxPs or TRPs), and the like, and can include ground stations (e.g., terrestrial access points) or satellite stations providing coverage within a geographic area (e.g., a cell), among others.
  • BS base stations
  • gNBs gNodeBs
  • eNBs eNodeBs
  • RSUs road side units
  • TRxPs or TRPs transmission reception points
  • The term "NG RAN node” may refer to a RAN node 106 that operates in a 5G NR wireless communications system 100 (for example, a gNB), and the term “E-UTRAN node” may refer to a RAN node 106 that operates in an LTE or 4G wireless communications system 100 (e.g., an eNB).
  • the RAN nodes 106 may be implemented as one or more of a dedicated physical device such as a macrocell base station, or a low power (LP) base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells.
  • LP low power
  • some or all of the RAN nodes 106 may be implemented as one or more software entities running on server computers as part of a virtual network, which may be referred to as a cloud RAN (CRAN) or a virtual baseband unit pool (vBBUP).
  • CRAN cloud RAN
  • vBBUP virtual baseband unit pool
  • the CRAN or vBBUP may implement a RAN function split, such as a packet data convergence protocol (PDCP) split in which radio resource control (RRC) and PDCP layers are operated by the CRAN/vBBUP and other layer two (e.g., data link layer) protocol entities are operated by individual RAN nodes 106; a medium access control (MAC)/physical layer (PHY) split in which RRC, PDCP, MAC, and radio link control (RLC) layers are operated by the CRAN/vBBUP and the PHY layer is operated by individual RAN nodes 106; or a "lower PHY" split in which RRC, PDCP, RLC, and MAC layers and upper portions of the PHY layer are operated by the CRAN/vBBUP and lower portions of the PHY layer are operated by individual RAN nodes 106.
  • PDCP packet data convergence protocol
  • RRC radio resource control
  • RLC radio link control
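  • The three RAN function split options described above can be summarized as a mapping from split name to the layers operated centrally by the CRAN/vBBUP versus at the individual RAN nodes 106. This restates the text; it is not a configuration format from any specification.

```python
# Summary of the RAN function splits described above: which protocol layers
# run centrally (CRAN/vBBUP) and which remain at individual RAN nodes.

RAN_FUNCTION_SPLITS = {
    "PDCP split": {
        "central": ["RRC", "PDCP"],
        "ran_node": ["RLC", "MAC", "PHY"],  # other layer-2 entities and below
    },
    "MAC/PHY split": {
        "central": ["RRC", "PDCP", "RLC", "MAC"],
        "ran_node": ["PHY"],
    },
    "lower PHY split": {
        "central": ["RRC", "PDCP", "RLC", "MAC", "upper PHY"],
        "ran_node": ["lower PHY"],
    },
}

def central_layers(split_name):
    """Layers operated by the CRAN/vBBUP under the given split."""
    return RAN_FUNCTION_SPLITS[split_name]["central"]
```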
  • An individual RAN node 106 may represent individual gNB distributed units (DUs) that are connected to a gNB central unit (CU) using individual F1 interfaces (not shown in FIG. 1).
  • The gNB-DUs can include one or more remote radio heads or RFEMs, and the gNB-CU may be operated by a server that is located in the RAN 112 (not shown) or by a server pool in a similar manner as the CRAN/vBBUP.
  • one or more of the RAN nodes 106 may be next generation eNBs (ng-eNBs), including RAN nodes that provide E-UTRA user plane and control plane protocol terminations toward the UEs 102, and are connected to a 5G core network (e.g., core network 114) using a next generation interface.
  • ng-eNBs next generation eNBs
  • 5G core network e.g., core network 114
  • RSU road side unit
  • UE-type RSU an RSU implemented in or by a UE
  • eNB-type RSU an RSU implemented in or by an eNB
  • gNB-type RSU an RSU implemented in or by a gNB
  • an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs 102 (vUEs 102).
  • the RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications or other software to sense and control ongoing vehicular and pedestrian traffic.
  • The RSU may operate on the 5.9 GHz Dedicated Short Range Communications (DSRC) band to provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may operate on the cellular V2X band to provide the aforementioned low latency communications, as well as other cellular communications services.
  • DSRC Dedicated Short Range Communications
  • the RSU may operate as a Wi-Fi hotspot (2.4 GHz band) or provide connectivity to one or more cellular networks to provide uplink and downlink communications, or both.
  • the computing device(s) and some or all of the radiofrequency circuitry of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and can include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network, or both.
  • Any of the RAN nodes 106 can terminate the air interface protocol and can be the first point of contact for the UEs 102.
  • any of the RAN nodes 106 can fulfill various logical functions for the RAN 112 including, but not limited to, radio network controller (RNC) functions such as radio bearer management, uplink and downlink dynamic radio resource management and data packet scheduling, and mobility management.
  • RNC radio network controller
  • The UEs 102 can be configured to communicate using orthogonal frequency division multiplexing (OFDM) communication signals with each other or with any of the RAN nodes 106 over a multicarrier communication channel in accordance with various communication techniques, such as, but not limited to, OFDMA communication techniques (e.g., for downlink communications) or SC-FDMA communication techniques (e.g., for uplink communications), although the scope of the techniques described here is not limited in this respect.
  • OFDM signals can comprise a plurality of orthogonal subcarriers.
  • the RAN nodes 106 can transmit to the UEs 102 over various channels.
  • Various examples of downlink communication channels include Physical Broadcast Channel (PBCH), Physical Downlink Control Channel (PDCCH), and Physical Downlink Shared Channel (PDSCH). Other types of downlink channels are possible.
  • the UEs 102 can transmit to the RAN nodes 106 over various channels.
  • Various examples of uplink communication channels include Physical Uplink Shared Channel (PUSCH), Physical Uplink Control Channel (PUCCH), and Physical Random Access Channel (PRACH). Other types of uplink channels are possible.
  • PUSCH Physical Uplink Shared Channel
  • PUCCH Physical Uplink Control Channel
  • PRACH Physical Random Access Channel
  • a downlink resource grid can be used for downlink transmissions from any of the RAN nodes 106 to the UEs 102, while uplink transmissions can utilize similar techniques.
  • The grid can be a time-frequency grid, called a resource grid or time-frequency resource grid, which is the physical resource in the downlink in each slot.
  • a time-frequency plane representation is a common practice for OFDM systems, which makes it intuitive for radio resource allocation.
  • Each column and each row of the resource grid corresponds to one OFDM symbol and one OFDM subcarrier, respectively.
  • the duration of the resource grid in the time domain corresponds to one slot in a radio frame.
  • the smallest time-frequency unit in a resource grid is denoted as a resource element.
  • Each resource grid comprises a number of resource blocks, which describe the mapping of certain physical channels to resource elements.
  • Each resource block comprises a collection of resource elements; in the frequency domain, this may represent the smallest quantity of resources that currently can be allocated. There are several different physical downlink channels that are conveyed using such resource blocks.
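  • The resource grid structure described above can be made concrete with a worked example. The figures below use the LTE normal-cyclic-prefix numerology (12 subcarriers per resource block, 7 OFDM symbols per slot) purely as an illustration; the grid concept itself is generic.

```python
# Worked example of the resource grid arithmetic described above, using the
# LTE normal-cyclic-prefix numerology as an illustrative assumption.

SUBCARRIERS_PER_RB = 12   # frequency-domain width of one resource block
SYMBOLS_PER_SLOT = 7      # time-domain length of the grid (one slot)

def resource_elements_per_rb(subcarriers=SUBCARRIERS_PER_RB,
                             symbols=SYMBOLS_PER_SLOT):
    """A resource element is one subcarrier for one OFDM symbol."""
    return subcarriers * symbols

def grid_resource_elements(num_rbs, symbols=SYMBOLS_PER_SLOT):
    """Total resource elements in a grid of num_rbs resource blocks."""
    return num_rbs * resource_elements_per_rb(symbols=symbols)
```

With this numerology one resource block spans 84 resource elements per slot, so a carrier with 100 resource blocks offers 8400 resource elements per slot for mapping physical channels.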
  • the PDSCH carries user data and higher-layer signaling to the UEs 102.
  • the PDCCH carries information about the transport format and resource allocations related to the PDSCH channel, among other things. It may also inform the UEs 102 about the transport format, resource allocation, and hybrid automatic repeat request (HARQ) information related to the uplink shared channel.
  • HARQ hybrid automatic repeat request
  • Downlink scheduling e.g., assigning control and shared channel resource blocks to the UE 102b within a cell
  • the downlink resource assignment information may be sent on the PDCCH used for (e.g., assigned to) each of the UEs 102.
  • the PDCCH uses control channel elements (CCEs) to convey the control information.
  • CCEs control channel elements
  • The PDCCH complex-valued symbols may first be organized into quadruplets, which may then be permuted using a sub-block interleaver for rate matching.
  • each PDCCH may be transmitted using one or more of these CCEs, in which each CCE may correspond to nine sets of four physical resource elements collectively referred to as resource element groups (REGs).
  • REGs resource element groups
  • QPSK Quadrature Phase Shift Keying
  • the PDCCH can be transmitted using one or more CCEs, depending on the size of the downlink control information (DCI) and the channel condition.
  • DCI downlink control information
  • Some implementations may use concepts for resource allocation for control channel information that are an extension of the above-described concepts.
  • EPDCCH enhanced PDCCH
  • the EPDCCH may be transmitted using one or more enhanced CCEs (ECCEs). Similar to above, each ECCE may correspond to nine sets of four physical resource elements collectively referred to as an enhanced REG (EREG). An ECCE may have other numbers of EREGs.
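  • The control channel arithmetic described above (each CCE corresponding to nine REGs of four physical resource elements, with a PDCCH occupying one or more CCEs depending on DCI size and channel condition) works out as follows.

```python
# Arithmetic for the PDCCH structure described above: a CCE corresponds to
# nine resource element groups (REGs) of four resource elements each.

RES_PER_REG = 4    # physical resource elements per REG
REGS_PER_CCE = 9   # REGs per control channel element

def pdcch_resource_elements(num_cces):
    """Resource elements consumed by a PDCCH spanning num_cces CCEs."""
    return num_cces * REGS_PER_CCE * RES_PER_REG
```

A single CCE thus carries 36 resource elements, and a PDCCH aggregating 8 CCEs (a higher aggregation level, typically used under poor channel conditions for robustness) carries 288.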
  • the RAN nodes 106 are configured to communicate with one another using an interface 132.
  • the interface 132 may be an X2 interface 132.
  • the X2 interface may be defined between two or more RAN nodes 106 (e.g., two or more eNBs and the like) that connect to the EPC 114, or between two eNBs connecting to EPC 114, or both.
  • the X2 interface can include an X2 user plane interface (X2-U) and an X2 control plane interface (X2-C).
  • the X2-U may provide flow control mechanisms for user data packets transferred over the X2 interface, and may be used to communicate information about the delivery of user data between eNBs.
  • the X2-U may provide specific sequence number information for user data transferred from a master eNB to a secondary eNB; information about successful in sequence delivery of PDCP protocol data units (PDUs) to a UE 102 from a secondary eNB for user data; information of PDCP PDUs that were not delivered to a UE 102; information about a current minimum desired buffer size at the secondary eNB for transmitting to the UE user data, among other information.
  • The X2-C may provide intra-LTE access mobility functionality, including context transfers from source to target eNBs or user plane transport control; load management functionality; inter-cell interference coordination functionality, among other functionality.
  • the interface 132 may be an Xn interface 132.
  • the Xn interface may be defined between two or more RAN nodes 106 (e.g., two or more gNBs and the like) that connect to the 5G core network 114, between a RAN node 106 (e.g., a gNB) connecting to the 5G core network 114 and an eNB, or between two eNBs connecting to the 5G core network 114, or combinations of them.
  • the Xn interface can include an Xn user plane (Xn-U) interface and an Xn control plane (Xn-C) interface.
  • the Xn-U may provide non-guaranteed delivery of user plane PDUs and support/provide data forwarding and flow control functionality.
  • The Xn-C may provide management and error handling functionality, functionality to manage the Xn-C interface, and mobility support for the UE 102 in a connected mode (e.g., CM-CONNECTED), including functionality to manage the UE mobility for connected mode between one or more RAN nodes 106, among other functionality.
  • a connected mode e.g., CM-CONNECTED
  • the mobility support can include context transfer from an old (source) serving RAN node 106 to new (target) serving RAN node 106, and control of user plane tunnels between old (source) serving RAN node 106 to new (target) serving RAN node 106.
  • a protocol stack of the Xn-U can include a transport network layer built on Internet Protocol (IP) transport layer, and a GPRS tunneling protocol for user plane (GTP-U) layer on top of a user datagram protocol (UDP) or IP layer(s), or both, to carry user plane PDUs.
  • the Xn-C protocol stack can include an application layer signaling protocol (referred to as Xn Application Protocol (Xn-AP or XnAP)) and a transport network layer (TNL) that is built on a stream control transmission protocol (SCTP).
  • the SCTP may be on top of an IP layer, and may provide the guaranteed delivery of application layer messages.
  • point-to-point transmission is used to deliver the signaling PDUs.
  • the Xn-U protocol stack or the Xn-C protocol stack, or both may be same or similar to the user plane and/or control plane protocol stack(s) shown and described herein.
  • the RAN 112 is shown to be communicatively coupled to a core network 114 (referred to as a "CN 114").
  • the CN 114 includes multiple network elements, such as network element 108a and network element 108b (collectively referred to as the "network elements 108"), which are configured to offer various data and telecommunications services to customers/subscribers (e.g., users of UEs 102) who are connected to the CN 114 using the RAN 112.
  • the components of the CN 114 may be implemented in one physical node or separate physical nodes and can include components to read and execute instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium).
  • network functions virtualization may be used to virtualize some or all of the network node functions described here using executable instructions stored in one or more computer-readable storage mediums, as described in further detail below.
  • a logical instantiation of the CN 114 may be referred to as a network slice, and a logical instantiation of a portion of the CN 114 may be referred to as a network sub-slice.
  • NFV architectures and infrastructures may be used to virtualize one or more network functions, alternatively performed by proprietary hardware, onto physical resources comprising a combination of industry-standard server hardware, storage hardware, or switches. In other words, NFV systems can be used to execute virtual or reconfigurable implementations of one or more network components or functions, or both.
  • An application server 110 may be an element offering applications that use IP bearer resources with the core network (e.g., UMTS packet services (PS) domain, LTE PS data services, among others).
  • the application server 110 can also be configured to support one or more communication services (e.g., VoIP sessions, PTT sessions, group communication sessions, social networking services, among others) for the UEs 102 using the CN 114.
  • the application server 110 can use an IP communications interface 130 to communicate with one or more network elements 108a.
  • the CN 114 may be a 5G core network (referred to as “5GC 114" or “5G core network 114"), and the RAN 112 may be connected with the CN 114 using a next generation interface 124.
• the next generation interface 124 may be split into two parts, a next generation user plane (NG-U) interface 114, which carries traffic data between the RAN nodes 106 and a user plane function (UPF), and the next generation control plane (NG-C) interface 126, which is a signaling interface between the RAN nodes 106 and access and mobility management functions (AMFs). Examples where the CN 114 is a 5G core network are discussed in more detail with regard to later figures.
• the CN 114 may be an EPC (referred to as "EPC 114" or the like), and the RAN 112 may be connected with the CN 114 using an S1 interface 124.
• the S1 interface 124 may be split into two parts, an S1 user plane (S1-U) interface 128, which carries traffic data between the RAN nodes 106 and the serving gateway (S-GW), and the S1-MME interface 126, which is a signaling interface between the RAN nodes 106 and mobility management entities (MMEs).
• Energy saving is a critical issue for 5G operators. Energy saving is achieved by activating the energy saving mode of the NR capacity booster cell or of a 5GC NF (e.g., a UPF), and the energy saving activation decision making may be based on various information, such as load information of the related cells/UPFs and the energy saving policies set by operators, as specified in a 3GPP TS or TR, such as TR 28.809, TR 37.817, TR 36.887, and TS 38.423.
• a management system, node or logic has an overall view of network load information and may also take inputs from control plane analysis (e.g., the analytics provided by NWDAF).
  • the management system may provide network wide analytics and cooperate with core network and RAN domains and decide on which cell/UPF should move into energy saving mode in a coordinated manner.
• Various performance measurements could be used as inputs by MDA for energy saving analysis, for example energy efficiency (EE) related performance measurements.
• the composition of the traffic load could also be considered as input for energy saving analysis (e.g., the percentage of high-value traffic in the traffic load).
  • the variation of traffic load may be related to the network data (e.g., historical handover information of the UEs or network congestion status, packet delay). Collecting and analyzing the network data with machine learning tools may provide predictions related to the trends of traffic load.
  • the composition and the trend of the traffic load may be used as references for making decision on energy saving.
• prediction data models may use machine learning tools for predicting energy saving related information, such as traffic load. MDAS may also take these prediction data models as input, perform analysis and select the optimal prediction data models to provide more accurate prediction results as references for making the energy saving decision. The more accurate the prediction results are, the better the energy-saving decision based on them will be.
• the prediction data models are related to services (e.g., traffic load, resource utilization, service experience), which can be provided by the consumer.
• MDAS may also obtain NF location or other inventory information, such as the energy efficiency and the energy cost of the data centers, while analyzing historical network information. Based on the collected information, the MDAS producer performs analysis and gives optimization suggestions to network management for 5G Core NF deployment options in high-value traffic regions (e.g., the location of a VNF in the context of energy saving).
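The energy-saving flow described above — predict the traffic load from history, then decide under operator policy whether a cell or NF may enter energy saving mode — can be sketched as follows. This is a minimal illustrative sketch: the moving-average predictor, the threshold values, and the high-value-traffic policy are assumptions for illustration, not values defined by any 3GPP specification.

```python
# Illustrative sketch (hypothetical names and thresholds): predict the next
# traffic load sample from history, then decide on energy saving mode.

def predict_load(history, window=3):
    """Predict the next load value as the mean of the last `window` samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def energy_saving_decision(history, threshold=0.2, high_value_fraction=0.0):
    """Recommend energy saving only when the predicted load is below the
    operator-set threshold and high-value traffic would not be affected."""
    if high_value_fraction > 0.05:  # assumed operator policy: protect high-value traffic
        return False
    return predict_load(history) < threshold

# A cell whose load has been steadily falling is a candidate for energy saving.
print(energy_saving_decision([0.5, 0.3, 0.15, 0.1, 0.05]))  # True
```

A real MDAS producer would replace the moving average with a trained prediction data model and would coordinate the decision across RAN and core domains, but the shape of the decision input (predicted load plus traffic composition plus policy) is the same.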
  • UE Communication analytics may also be used as input for energy saving analysis and instruction.
  • FIG. 2 illustrates an MDA system 200 suitable for use by a management system to implement AI/ML functionality and services for the wireless communications system 100.
  • the MDA system 200 illustrates an MDA functionality and service framework. As depicted in FIG.
  • the MDA system 200 may include a MDA platform 204, at least one MDA service (MDAS) consumer 202, and multiple MDAS producers, such as an other MDAS producer 216, a management service (MnS) producer 218, and a network data analytics function (NWDAF) 220.
• the MDA platform 204 includes an MDAS producer 206, an MDAS analyzer 208, and multiple MDAS consumers.
• the multiple MDAS consumers include an MDAS consumer 210, an MnS consumer 212 and an NWDAF subscriber 214, each communicating with a corresponding other MDAS producer 216, MnS producer 218 and NWDAF 220 via an MDAS interface, MnS interface and NWDAF interface, respectively.
  • the MDA platform 204 may collect data for analysis by acting as the MnS consumer 212, and/or as the NWDAF subscriber 214, and/or as a consumer of the other MDAS producer 216. After analysis, the MDAS producer 206 exposes the analysis results to the one or more MDAS consumers 202.
• the MDA system 200 forms a part of a management loop (which can be open loop or closed loop), and it brings intelligence and generates value by processing and analysis of management and network data, where AI and ML techniques may be utilized.
  • the MDA system 200 plays the role of analytics in the management loop, which includes an observation state, an analytics state, a decision state and an execution state. In the observation state, the MDA system 200 conducts observation of the managed networks and services.
• the observation state involves monitoring and collection of events, status and performance of the managed networks and services, and providing the observed/collected data (e.g., performance measurements, Trace/MDT/RLF/RCEF reports, network analytics reports, QoE reports, alarms, etc.).
  • the data analytics state for the managed networks and services prepares, processes and analyzes the data related to the managed networks and services, and provides the analytics reports for root cause analysis of ongoing issues, prevention of potential issues and prediction of network or service demands.
  • the analytics report contains the description of the issues or predictions with optionally a degree of confidence indicator, the possible causes for the issue and the recommended actions.
• AI and ML may be utilized by the MDA platform 204 with the input data including not only the observed data of the managed networks and services, but also the execution reports of actions (taken by the execution step).
  • the MDAS analyzer 208 classifies and correlates the input data (current and historical data), learns and recognizes the data patterns, and makes analysis to derive inference, insight and predictions.
  • the decision state involves making decisions for the management actions for the managed networks and services.
  • the management actions are decided based on the analytics reports (provided by the MDAS analyzer 208) and other management data (e.g., historical decisions made previously) if necessary.
  • the decision may be made by the consumer of MDAS (in the closed management loop), or a human operator (in the open management loop).
  • the decision includes what actions to take, and when to take the actions.
  • the execution state involves execution of the management actions according to the decisions. During the execution state, the actions are carried out to the managed networks and services, and the reports (e.g., notifications, logs) of the executed actions are provided.
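The four states of the management loop described above can be sketched as a single pass through observation, analytics, decision and execution. All function, report and action names below are illustrative stand-ins, not normative 3GPP definitions; the analytics report fields mirror the description above (issue, degree of confidence, recommended actions).

```python
# Hypothetical sketch of one pass through the management loop
# (observation -> analytics -> decision -> execution).

def run_management_loop(observed_load):
    trace = []
    # Observation: collect events, status and performance of the managed network.
    trace.append("observation")
    data = {"load": observed_load}
    # Analytics: process the data and produce an analytics report containing a
    # description of the issue, a degree of confidence, and recommended actions.
    trace.append("analytics")
    report = {
        "issue": "low load" if data["load"] < 0.2 else "normal",
        "confidence": 0.9,
        "recommended_action": "activate_energy_saving" if data["load"] < 0.2 else "none",
    }
    # Decision: in a closed loop the MDAS consumer decides what action to take
    # and when; in an open loop a human operator decides.
    trace.append("decision")
    action = report["recommended_action"]
    # Execution: carry out the action and report (notify/log) the result.
    trace.append("execution")
    return trace, {"action": action, "status": "executed"}

trace, result = run_management_loop(0.1)
print(result["action"])  # activate_energy_saving
```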
  • FIG. 3 illustrates an AI/ML system 300 suitable for use by the MDAS analyzer 208 of the MDA system 200 for the wireless communications system 100.
  • the AI/ML system 300 comprises four major operational states, including a data collection state, an ML entity state, an ML training state, and an AI/ML inference state.
  • the AI/ML system 300 may use various ML entities. Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions.
• the AI/ML system 300 may use various models or ML entities, such as models derived using an artificial neural network (ANN), convolutional neural network (CNN), deep learning, decision tree learning, support-vector machine, regression analysis, Bayesian networks, genetic algorithms, federated learning, distributed artificial intelligence, and other suitable models. Embodiments are not limited in this context.
  • the AI/ML system 300 implements a function that provides input data to model training and model inference functions.
  • the AI/ML system 300 implements a data driven algorithm by applying machine learning techniques that generates a set of outputs comprising predicted information and/or decision parameters, based on a given set of inputs 310.
  • the AI/ML system 300 implements an online or offline process to train an ML entity by learning features and patterns that best present data and get the trained ML entity for inference.
  • the AI/ML inference state the AI/ML system 300 implements a process of using a trained ML entity to make a prediction or guide the decision based on collected data and the ML entity.
  • the AI/ML system 300 collects data from the network nodes, management entity or UE, as a basis for ML entity training, data analytics and inference.
  • a data collection 302 is a function that provides input data to ML training 304 and AI/ML inference 306 functions.
• Examples of input data may include measurements from UEs, NG-RAN nodes, OAM nodes, or different network entities, feedback from an actor 308, and output from an ML entity.
  • the data collection 302 collects at least two types of data. The first is training data, which comprises data needed as input 310 for the ML training 304 function. The second is inference data, which comprises data needed as input 312 for the AI/ML inference 306 function.
  • the ML training 304 is a function that performs the ML training, validation, and testing which may generate model performance metrics as part of the ML entity testing procedure.
  • the ML training 304 function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on training data (e.g., input 310) delivered by the data collection 302 function, if required.
• the ML training 304 can initially deploy a trained, validated, and tested ML entity to the AI/ML inference 306 function, or deliver an updated entity to the AI/ML inference 306 function.
  • the AI/ML inference 306 is a function that provides AI/ML inference output (e.g., predictions or decisions).
  • the AI/ML inference 306 function may provide model performance feedback 314, 316 to the ML training 304 function when applicable.
  • the AI/ML inference 306 function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on inference data (e.g., input 312) delivered by the data collection 302 function, if required.
  • the inference output of the ML entity produced by an AI/ML inference 306 function is use case specific.
  • the ML performance feedback information may be used for monitoring the performance of the ML entity, when available.
  • the actor 308 is a function that receives the output 318 from the AI/ML inference 306 function and triggers or performs corresponding actions.
  • the actor 308 may trigger actions directed to other entities or to itself.
  • the actor 308 may provide feedback information 320 to the data collection 302.
  • the feedback information may comprise data needed to derive training data, inference data or to monitor the performance of the ML entity and its impact to the network through updating of KPIs and performance counters.
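The wiring of the functional blocks described above — data collection feeding ML training (input 310) and inference (input 312), inference output (318) driving the actor, and actor feedback (320) returning to data collection — can be sketched as follows. The classes and the toy "model" (which simply remembers the mean of the training data) are illustrative assumptions, not part of the framework definition.

```python
# Minimal sketch of the AI/ML functional framework wiring; all names are
# hypothetical and the "model" is a toy placeholder.

class DataCollection:
    def __init__(self):
        self.training_data = [1.0, 2.0, 3.0]   # input 310 to ML training
        self.inference_data = [4.0]            # input 312 to inference
        self.feedback = []                     # feedback 320 from the actor

class MLTraining:
    def train(self, training_data):
        # Toy "ML entity": remember the mean of the training data.
        return sum(training_data) / len(training_data)

class Inference:
    def __init__(self, entity):
        self.entity = entity
    def infer(self, inference_data):
        # Toy prediction: offset each sample by the trained mean.
        return [x - self.entity for x in inference_data]

class Actor:
    def act(self, output, data_collection):
        # The actor triggers actions and returns feedback to data collection.
        data_collection.feedback.append(output)
        return f"acted on {output}"

dc = DataCollection()
entity = MLTraining().train(dc.training_data)
output = Inference(entity).infer(dc.inference_data)   # output 318
print(Actor().act(output, dc))
```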
  • the AI/ML system 300 may be applicable to various use cases and solutions for AI/ML in a RAN node 106 of the wireless communications system 100.
  • One use case is network energy saving or energy efficiency (EE).
  • FIG. 4 illustrates an MDA ML system 400 suitable for use in the wireless communications system 100.
  • a management system that implements the MDA system 200 and/or the AI/ML system 300 can be coalesced into the MDA ML system 400.
• the MDA ML system 400 illustrates an example of an MDA process scenario where the ML entity and the management data analysis module reside in an MDAS producer, although other scenarios are possible.
• the MDA ML system 400 may generally rely on ML technologies, which may need an MDAS consumer to be involved to optimize the accuracy of the MDA results.
  • the MDA process in terms of the interaction with the MDAS consumer, when utilizing ML technologies, is described in FIG. 4.
  • an MDAS producer 206 serves as ML training producer, trains an ML entity 406 and provides an ML training report 414.
  • the process for ML training may also get an MDAS consumer 202 involved, by allowing the MDAS consumer 202 to provide input for ML training.
  • the ML training may be performed on an un-trained ML entity 406 or a trained ML entity 406.
  • the MDAS producer 206 analyzes the data by the trained ML entity, and provides an ML analytics report 416 to the MDAS consumer 202.
• the MDAS consumer 202 may validate the ML training report 414 and ML analytics report 416 and provide a report validation feedback 418 to the MDAS producer 206. For each received report, the MDAS consumer 202 may provide a feedback 418 towards the MDAS producer 206, which may be used to optimize the ML entity 406.
  • the MDAS producer 206 may receive analytics input 412.
  • the analytics input 412 could be used by an ML entity trainer 404 for ML training or a management data analyzer 408 for management data analysis.
  • a data classifier 402 of the MDAS producer 206 classifies data from the analytics input 412 and passes the classified data along to a corresponding entity for further processing.
  • An ML trainer 404 of the MDAS producer 206 trains the ML entity 406.
  • the ML trainer 404 trains the ML entity 406 to be able to provide the expected training output by analysis of the training input.
  • the data for ML training may be training data, including the training input and the expected output, and/or the report validation feedback 418 provided by the MDAS consumer 202.
  • the MDAS producer 206 provides an ML training report 414 to the MDAS consumer 202.
• the MDAS producer 206 uses the trained ML entity 406 to analyze the classified data from the data classifier 402 and generates the ML analytics report 416.
  • the ML analytics report 416 is output from the MDAS producer 206 to the MDAS consumer 202.
  • the MDAS consumer 202 may validate the ML analytics report 416 provided by the MDAS producer 206.
  • the analytics report 416 to be validated may be the ML analytics report 416 and/or the ML training report 414 as previously described.
  • the MDAS consumer 202 may provide a feedback 418 to the MDAS producer 206.
  • the MDAS consumer 202 may also provide training data and request to train the ML entity 406 and/or provide feedback indicating a scope of inaccuracy, e.g. time, geographical area, etc.
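The producer/consumer interaction of FIG. 4 — train, emit a training report (414) and an analytics report (416), receive validation feedback (418) — can be sketched as below. The class and field names are hypothetical; in particular the `scope_of_inaccuracy` field is an illustrative rendering of the "scope of inaccuracy, e.g. time, geographical area" feedback described above.

```python
# Hypothetical sketch of the MDAS producer/consumer report-and-feedback loop.

class MDASProducer:
    def __init__(self):
        self.entity_trained = False
        self.feedback_log = []
    def train(self, training_data):
        self.entity_trained = True
        return {"report": "ML training report", "samples": len(training_data)}
    def analyze(self, data):
        assert self.entity_trained, "analysis requires a trained ML entity"
        return {"report": "ML analytics report", "findings": len(data)}
    def receive_feedback(self, feedback):
        # Feedback 418 may be used to optimize the ML entity.
        self.feedback_log.append(feedback)

class MDASConsumer:
    def validate(self, report):
        # The consumer may flag a scope of inaccuracy (e.g. time, area).
        ok = report.get("findings", 0) > 0
        return {"valid": ok, "scope_of_inaccuracy": None if ok else "whole report"}

producer, consumer = MDASProducer(), MDASConsumer()
producer.train([1, 2, 3])                                # training report 414
analytics = producer.analyze(["cell-1 KPI"])             # analytics report 416
producer.receive_feedback(consumer.validate(analytics))  # feedback 418
print(producer.feedback_log[0]["valid"])  # True
```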
  • the MDA ML system 400 is implemented as part of a network node in a 3GPP system, such as a 3GPP RAN3 5G NR system
• various embodiments herein describe new information that a RAN node may exchange with its neighboring nodes, as well as other metrics such as cell KPIs and ML entities, in order to facilitate better decision making from the ML entities to improve performance for an apparatus, device or system in a 3GPP system.
  • FIG. 5 illustrates a table 500.
• the AI/ML system 300 and the MDA ML system 400 may implement various AI and ML algorithms suitable for supporting one or more operations for the wireless communications system 100.
  • machine learning approaches are traditionally divided into four broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system.
  • One approach is supervised learning, where a computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
• Another approach is semi-supervised learning, which is similar to supervised learning, but includes both labelled data and unlabelled data.
• Another approach is unsupervised learning, where no labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).
• Another approach is reinforcement learning, where a computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback analogous to rewards, which it tries to maximize.
  • Other approaches exist as well, such as dimensionality reduction, self-learning, feature learning, sparse dictionary learning, anomaly detection, robot learning, association rules, and so forth.
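The first paradigm above, supervised learning, can be illustrated with a minimal example: from "teacher"-provided (input, output) pairs the learner fits a general rule mapping inputs to outputs, here a one-dimensional least-squares line y = a*x + b. This example is generic and not tied to any of the systems described above.

```python
# Minimal supervised-learning example: fit y = a*x + b by least squares
# from labelled (x, y) training pairs.

def fit_line(pairs):
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# "Teacher"-provided examples generated by the rule y = 2x + 1; the learner
# recovers the rule from the examples alone.
a, b = fit_line([(0, 1), (1, 3), (2, 5), (3, 7)])
print(round(a, 6), round(b, 6))  # 2.0 1.0
```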
  • FIG. 6 illustrates a MLT system 600.
  • the MLT system 600 implements a novel functional overview and service framework for ML training (MLT) of an ML entity 612 for the MLT system 600.
  • the MLT system 600 can be used to train other ML entities as well, such as the ML entity 406 for AI/ML systems 300 and MDA ML system 400, for example.
  • the MLT system 600 may implement an AI/ML functional and service framework as defined by 3GPP TS 28.104 and/or 3GPP TS 28.105, among other 3GPP and non-3GPP standards. Embodiments are not limited in this context.
  • a MLT MnS producer 602 may implement an MLT ML training logic 604, which is a MLT function that consumes various data from one or more data sources 606 suitable for ML training purposes.
• the MLT capability is provided via an ML training MnS 610, in the context of a service-based management architecture (SBMA), to one or more authorized MLT MnS consumers 608 by the MLT MnS producer 602.
  • the MLT MnS producer 602 may train an ML entity 612 using the ML training logic 604.
  • the ML training logic 604 represents internal business logic suitable for a given AI/ML inference function or ML entity.
  • the ML training logic 604 leverages current and historical relevant data, including those listed below to monitor the networks and/or services where relevant to the ML entity, prepare the data, trigger and conduct the training: (1) Performance Measurements (PM) as per 3GPP TS 28.552, 3GPP TS 32.425 and Key Performance Indicators (KPIs) as per 3GPP TS 28.554; (2) Trace/MDT/RLF/RCEF data, as per 3GPP TS 32.422 and 3GPP TS 32.423; (3) QoE and service experience data as per 3GPP TS 28.405 and 3GPP TS 28.406; (4) Analytics data offered by NWDAF as per 3GPP TS 23.288; (5) Alarm information and notifications as per 3GPP TS 28.532; (6) CM
  • the MLT MnS producer 602 may train the ML entity 612 using the ML training logic 604 and management information 614.
  • the management information 614 may comprise a standardized set of requirements and information model definitions for AI/ML management. Examples of requirements include those set forth in Table 1 below.
  • the information model definitions for AI/ML management may include information such as imported and associated information entities, imported information entities and local labels, classes, class diagrams, class relationships, class inheritance, class definitions, class attributes, attributes, attribute constraints, notifications, data type definitions, attribute definitions, attribute properties, common notifications, service components, solution sets, program code, and other software and hardware constructs.
• the management information 614 may be defined in view of a network resource model (NRM) for a network, such as a 3GPP network like the wireless communications system 100.
• NRM configuration management allows service providers to control and monitor the actual configuration of network resources, which are the fundamental resources of mobility networks. Considering the huge number of existing information object classes (IOCs) and increasing IOCs in various domains, NRM configuration management should be handled in a dynamic manner.
• the management information 614 may be defined by one or more 3GPP standards, such as 3GPP TS 28.105, among other 3GPP and non-3GPP standards. Embodiments are not limited in this context.
• FIG. 7 illustrates a message flow 700 for an AI/ML system, such as the MLT system 600.
  • the message flow 700 illustrates messages communicated for MLT to support various AI/ML management use cases and requirements.
  • an ML entity 702 is deployed to conduct inference operations for a network node in the wireless communications system 100.
  • the ML entity 702 represents an AI/ML inference function.
• Prior to deployment, or after deployment, the MLT MnS producer 602 implements an ML training function defined by the ML training logic 604 to train the ML entity 612 associated with the management information 614.
  • the ML training function may be implemented as a combined or an internal entity to the AI/ML inference function.
  • the ML training function may be implemented as a separate or an external entity to the AI/ML inference function.
  • ML entity training refers to training of ML model(s) associated with an ML entity.
  • the MLT MnS producer 602 of the MLT system 600 trains the ML entity 612 associated with the ML entity 702.
  • the ML training can be triggered by requests from one or more MLT MnS consumers 608, or initiated by the MLT MnS producer 602 (e.g. as result of model evaluation).
  • the ML training capabilities are provided by an MLT MnS producer 602 to one or more MLT MnS consumers 608.
  • the ML training may be triggered by one or more ML training requests 704 from one or more MLT MnS consumers 608.
• the MLT MnS consumer 608 may be, for example, a network function, a management function, an operator, or another functional differentiation that triggers ML training.
• the MLT MnS consumer 608 requests the MLT MnS producer 602 to train the ML model(s) associated with an ML entity.
• the MLT MnS consumer 608 should specify an inference type which indicates the function or purpose of the ML entity, e.g., CoverageProblemAnalysis.
  • the MLT MnS producer 602 can perform the training according to the designated inference type.
• the MLT MnS consumer 608 may provide the data sources 606 that contain the training data, which are considered as input candidates for training. To obtain valid training outcomes, MLT MnS consumers 608 may also designate their requirements for model performance (e.g., accuracy) in the training request.
  • the MLT MnS producer 602 provides an ML training response 706 to the MLT MnS consumer 608 indicating whether the request was accepted. If not accepted, the ML training response 706 may include a reason for non-acceptance, such as insufficient training data, overcapacity, insufficient priority, and so forth.
  • the MLT MnS producer 602 decides when to start the ML training with consideration of the ML training request 704 from the MLT MnS consumer 608. Once the training is decided, the MLT MnS producer 602 selects the training data, with consideration of the consumer provided candidate training data. Since the training data directly influences the algorithm and performance of the trained ML entity, the MLT MnS producer 602 may examine the consumer's provided training data and decide to select none, some or all of them. In addition, the MLT MnS producer 602 may select some other training data that are available. The MLT MnS producer 602 trains the ML entity using the selected training data. The MLT MnS producer 602 provides training results 708 to the MLT MnS consumer 608. The training result 708 may include a location of the trained ML model or entity, among other types of information.
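The request/response exchange described above can be sketched as follows. The attribute names (`inference_type`, `candidate_data_sources`, `performance_requirements`) and the data-source selection rule are hypothetical stand-ins for the corresponding NRM attributes, not the normative 3GPP TS 28.105 definitions; the producer's freedom to select none, some or all of the consumer's candidate data, and to add data of its own, follows the text above.

```python
# Illustrative sketch of the consumer-requested ML training exchange
# (ML training request 704 / response 706 / results 708).

from dataclasses import dataclass, field

@dataclass
class MLTrainingRequest:            # ML training request 704 (names hypothetical)
    inference_type: str             # e.g. "CoverageProblemAnalysis"
    candidate_data_sources: list = field(default_factory=list)
    performance_requirements: dict = field(default_factory=dict)

def handle_training_request(request, available_data):
    # The producer may reject the request, indicating a reason.
    if not request.candidate_data_sources and not available_data:
        return {"accepted": False, "reason": "insufficient training data"}
    # The producer examines the consumer's candidates and selects none, some
    # or all of them (here: an assumed rule keeping only PM data sources),
    # and may add other training data available to it.
    selected = [s for s in request.candidate_data_sources if s.startswith("pm:")]
    selected += available_data
    return {
        "accepted": True,
        "selected_data": selected,
        # Training results 708 may include the location of the trained entity.
        "training_result": {"model_location": "file:///models/" + request.inference_type},
    }

resp = handle_training_request(
    MLTrainingRequest("CoverageProblemAnalysis", ["pm:cell-kpi", "qoe:raw"]),
    available_data=["trace:mdt"],
)
print(resp["accepted"], resp["selected_data"])
```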
  • MLT may be initiated by the MLT MnS producer 602.
  • the MLT MnS producer 602 may initiate MLT, for instance, as a result of a performance evaluation of the ML entity, based on feedback or new training data received from the MLT MnS consumer 608, or when new training data which are not from the MLT MnS consumer 608 describing a new network status/events become available.
• when the MLT MnS producer 602 decides to start the ML training, it selects training data, trains the ML entity using the selected training data, and provides the training results (e.g., including the location of the trained ML entity) to the MLT MnS consumers 608 who have subscribed to receive the ML training results.
  • different entities that apply the respective ML entity/model or AI/ML inference function may have different inference requirements and capabilities.
• another MLT MnS consumer 608 for the same use case may support a rural environment and as such may wish to have an ML entity and AI/ML inference function fitting that type of environment.
  • the different consumers need to know the available versions of ML entities, with the variants of trained ML models or entities and to select the appropriate one for their respective conditions.
  • the models that have been trained may differ in terms of complexity and performance.
• a generic comprehensive and complex model may have been trained in a cloud-like environment, but where such a model cannot be used in the gNB, a less complex model, trained as a derivative of this generic model, could be a better candidate.
• multiple less complex models could be trained with different levels of complexity and performance, which would then allow different relevant models to be delivered to different network functions depending on operating conditions and performance requirements.
  • the network functions need to know the alternative models available and interactively request and replace them when needed and depending on the observed inference-related constraints and performance requirements.
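The variant selection described above — a network function picking, from the available alternatives, the most capable ML model variant that fits its environment and compute budget — can be sketched as below. The variant catalogue, cost units and scores are invented for illustration.

```python
# Hypothetical sketch of selecting an ML model variant by operating conditions.

VARIANTS = [
    # (name, environment, complexity_cost, performance_score) -- all invented
    ("generic-cloud", "any",   100, 0.95),
    ("urban-lite",    "urban",  10, 0.88),
    ("rural-lite",    "rural",  10, 0.85),
]

def select_variant(environment, compute_budget):
    candidates = [
        v for v in VARIANTS
        if v[1] in (environment, "any") and v[2] <= compute_budget
    ]
    if not candidates:
        return None
    # Prefer the best-performing variant that still fits the budget.
    return max(candidates, key=lambda v: v[3])[0]

# A gNB in a rural area with a small compute budget gets the rural derivative;
# a cloud host with ample budget gets the generic comprehensive model.
print(select_variant("rural", 20))   # rural-lite
print(select_variant("rural", 200))  # generic-cloud
```

A consumer could re-run such a selection whenever observed inference constraints change, and then request delivery of the newly selected variant from the producer.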
  • This machine learning capability relates to means for managing and controlling ML model/entity training processes.
  • the ML model applied for such analytics and decision making needs to be trained with the appropriate data.
• the training may be undertaken in a managed function or in a management function.
• the network not only needs to have the required training capabilities, but also needs to have the means to manage the training of the ML models/entities.
  • the consumers need to be able to interact with the training process, e.g. to suspend or restart the process; and also need to manage and control the requests related to any such training process.
  • a given MLT may have certain requirements.
  • MLT requirements are set forth in Table 1 as follows:
• The MLT MnS producer shall have a capability allowing the consumer to request training of an ML entity (use case: ML training requested by consumer, e.g., for MDA).
• The MLT MnS producer shall have a capability allowing the MLT MnS producer to initiate ML training (use case: ML training initiated by producer).
• The MLT MnS producer shall have a capability to provide the ML training results to the consumer (use case: ML training reporting).
  • a particular MLT task may have other MLT requirements as well, such as those set forth in 3GPP TS 28.105, among other 3GPP and non-3GPP standards.
  • FIG. 8 illustrates an embodiment of a logic flow 800.
  • the logic flow 800 may be representative of some or all of the operations executed by one or more embodiments described herein.
  • the logic flow 800 may include some or all of the operations performed by the AI/ML system 300, the MDA ML system 400, and/or the MLT system 600 of the wireless communications system 100.
  • the logic flow 800 illustrates the AI/ML system 300, the MDA ML system 400, and/or the MLT system 600 utilizing a message exchange and message format discussed with reference to the message flow 700. Embodiments are not limited in this context.
  • logic flow 800 determines to initiate training of an ML entity using management information for a network resource model (NRM) of a fifth generation system (5GS), the training to be performed by a management service (MnS) producer of the 5GS.
  • an MLT MnS producer 602 determines to initiate training of an ML entity 612 using management information 614 for a NRM of the wireless communications system 100.
  • the MLT MnS producer 602 may receive an ML training request 704 for ML training from the MLT MnS consumer 608, and determine to initiate training of the ML entity 612 in response to a request for ML entity training from the MLT MnS consumer 608.
  • the MLT MnS producer 602 may itself determine to initiate training of the ML entity 612 as a result of evaluation of performance of the ML entity 612, based on feedback 418 received from the MLT MnS consumer 608, or when new training data describing new network status or events become available.
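The producer-side triggers above (a consumer request, performance evaluation, consumer feedback, or newly available training data) can be sketched as a simple decision helper. This is an illustrative assumption only; none of the flag names below are 3GPP-defined.

```python
# Hypothetical sketch of the MLT MnS producer's decision to initiate training.
# The trigger flags mirror the conditions described above.

def should_initiate_training(consumer_request=False,
                             performance_degraded=False,
                             negative_consumer_feedback=False,
                             new_training_data_available=False):
    """Return True when any described trigger for ML entity training holds."""
    return any((consumer_request,
                performance_degraded,
                negative_consumer_feedback,
                new_training_data_available))
```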
  • logic flow 800 determines an inference type associated with the ML entity.
  • the MLT MnS producer 602 may determine an inference type associated with the ML entity.
  • the MLT MnS producer 602 may receive an ML training request 704 specifying the inference type for the ML entity 612 to be trained from the MLT MnS consumer 608.
  • logic flow 800 selects training data to train the ML entity.
  • the MLT MnS producer 602 may select training data to train the ML entity.
  • the training data may be stored in one or more data sources 606.
  • the MLT MnS producer 602 may receive an ML training request 704 specifying one or more data sources containing candidate training data for training the ML entity 612 from the MLT MnS consumer 608.
  • the MLT MnS producer 602 may optionally select at least a portion of the training data to train the ML entity 612 from the candidate training data received from the MLT MnS consumer 608.
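The selection step above, where the producer may use none, some, or all of the consumer-provided candidate training data and may add sources of its own, can be sketched as follows. The function name and the source identifiers are hypothetical illustrations.

```python
# Hypothetical sketch of training-data selection by the MLT MnS producer:
# filter the consumer-provided candidate sources with a suitability check,
# then append producer-side sources, preserving order and avoiding duplicates.

def select_training_data(candidate_sources, producer_sources, is_suitable):
    selected = [src for src in candidate_sources if is_suitable(src)]
    for src in producer_sources:
        if src not in selected:
            selected.append(src)
    return selected
```

For example, the producer might reject one consumer candidate and add one of its own sources, yielding a combined selection used for the training.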
  • logic flow 800 trains the ML entity according to the inference type using the selected training data by the MnS producer, the trained ML entity to conduct inference operations for a management service (MnS) consumer of the 5GS.
  • MnS management service
  • the MLT MnS producer 602 may train the ML entity 612 according to the inference type using the selected training data.
  • the MLT MnS producer 602 may generate a training result 708 for the trained ML entity 612.
  • the MLT MnS producer 602 may provide the training result 708 to the MLT MnS consumer 608.
  • the training result 708 may include a location of the trained ML entity 612, so that the MLT MnS consumer 608 can retrieve the trained ML entity 612 from the MLT MnS producer 602.
  • the MLT MnS consumer 608 may deploy the trained ML entity 612. The MLT MnS consumer 608 can then use the trained ML entity 612 to conduct inference operations for the MLT MnS consumer 608 in the wireless communications system 100.
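The producer/consumer exchange above (train the entity, report a training result that carries the location of the trained ML entity, and let the consumer retrieve it from that location) can be sketched as below. The URI scheme and field names are illustrative assumptions, not 3GPP-defined formats.

```python
# Hypothetical sketch of the training-result exchange: the producer publishes
# the trained entity and returns a result containing its location; the
# consumer then retrieves the trained entity from that location.

def produce_training_result(entity_id, inference_type, model_store):
    """Producer side: train (stubbed) and publish the trained ML entity."""
    location = f"model-store://{entity_id}/{inference_type}"
    model_store[location] = {"mLEntityId": entity_id, "trained": True}
    return {"mLEntityId": entity_id, "location": location}

def retrieve_trained_entity(training_result, model_store):
    """Consumer side: fetch the trained entity from the reported location."""
    return model_store[training_result["location"]]
```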
  • the logic flow 800 may further include various logic blocks, in various combinations, that are not necessarily shown in FIG. 8. Some examples are described below.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML training.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML training and a class hierarchy for ML training related to the NRM.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an ML training request, the ML training request to represent an ML entity training request that is created by the MnS consumer, and where the ML training request managed object instance (MOI) is contained under one ML training function MOI.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an ML training report, the ML training report to represent an ML entity training report that is provided by the MnS producer, and where the ML entity training report managed object instance (MOI) is contained under one ML training function MOI.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the ML entity to the MnS producer.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity supports.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate a performance of the ML entity.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether training data provided by the MnS consumer has been used for the AI/ML training.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses where lists of training data provided by the MnS consumer are located, which have been used for the ML training.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML training request that is created by the MnS consumer.
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an ML training report managed object instance (MOI) that represents a last training report for the ML entity.
  • MOI ML training report managed object instance
  • the logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
  • FIG. 9 illustrates an MLT software architecture 900 suitable for supporting MLT operations as performed by the MLT system 600 in a wireless communications system 100, such as a 5GS, for example.
  • SBMA service based management architecture
  • MnS Management Service
  • MnS components are used to build 3GPP-defined and vendor-specific Management Services and Management Functions. This approach combines the power of standardized (interoperable) interfaces for multi-vendor integrations with support for diverse deployment scenarios.
  • the SBMA provides a comprehensive toolset of RESTful management service components for building 5G management and orchestration solutions enabling improved operability and automation of 5G radio and core networks and services.
  • 3GPP follows a strictly model-driven approach relying on generic yet powerful Create, Read, Update and Delete (CRUD) operations and rich Network Resource Models (NRMs). No task-specific operations are defined.
  • This approach is also referred to as Representational State Transfer (REST).
  • REST is a software architectural style that describes a uniform interface between physically separate components, often across a network in a client-server architecture.
  • Release-16 contains Network Resource Models for the NR, 5GC and Network Slicing. Models for interactions with verticals and external management systems are available as well.
  • control NRM fragments have been introduced for different management tasks such as subscribing to receiving notifications or managing performance metric production jobs, often replacing and extending legacy approaches based on dedicated operations.
  • the main benefit of a fully model-driven approach is that the same set of basic CRUD operations can be used to generate sophisticated requests for manipulating and retrieving Network Resource Models. No task-specific operations are required.
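The benefit just described, one generic CRUD surface serving every NRM fragment with no task-specific operations, can be illustrated with a minimal in-memory MOI store. The class, its method names, and the DN strings in the usage below are illustrative assumptions, not the 3GPP-defined provisioning MnS.

```python
# Minimal sketch of a model-driven CRUD surface over managed object instances
# (MOIs) keyed by distinguished name (DN). The same four generic operations
# manipulate any IOC instance; no per-task operations are needed.

class MoiStore:
    def __init__(self):
        self._mois = {}

    def create(self, dn, attributes):
        self._mois[dn] = dict(attributes)

    def read(self, dn):
        return self._mois[dn]

    def update(self, dn, changes):
        self._mois[dn].update(changes)

    def delete(self, dn):
        del self._mois[dn]
```

For example, a consumer could create an MLTrainingRequest MOI under an MLTrainingFunction DN and later read back its status, using only these generic operations.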
  • An additional benefit of the strict separation of model and access is that the 3GPP-defined Network Resource Models can be reused easily by other management frameworks following the same separation of concerns.
  • the SBMA uses Network Resource Models, such as those defined in 3GPP TS 28.622 titled “Generic Network Resource Model (NRM) Integration Reference Point (IRP)” Release 18 (2022-09).
  • the 3GPP TS 28.622 specifies generic network resource information, referred to herein as the management information 614, that can be communicated between an MnS producer and an MnS consumer in deployment scenarios using the SBMA as defined in 3GPP TS 28.533 for telecommunication network management purposes, including management of converged networks and networks that include virtualized network functions. It specifies semantics of information object class (IOC) attributes and relations visible across the reference point in a protocol and technology neutral way.
  • IOC information object class
  • FNIM Federated Network Information Model
  • IOC Information Object Class
  • UIM Umbrella Information Model
  • an NRM is a collection of IOCs, inclusive of their associations, attributes and operations, representing a set of network resources under management.
  • a network resource is a discrete entity represented by an Information Object Class (IOC) for the purpose of network and service management.
  • IOC Information Object Class
  • a network resource may represent intelligence, information, hardware and software of a telecommunication network.
  • An IOC represents the management aspect of a network resource. It describes the information that can be passed/used in management interfaces.
  • An IOC has attributes that represent the various properties of the class of objects. Furthermore, an IOC can support operations providing network management services invocable on demand for that class of objects. An IOC may support notifications that report event occurrences relevant for that class of objects. It is modeled using the stereotype "Class" in the UML meta-model.
  • a Managed Object (MO) is an instance of a Managed Object Class (MOC) representing the management aspects of a network resource. Its representation is a technology specific software object. It is sometimes called MO instance (MOI).
  • MOI MO instance
  • the MOC is a class of such technology specific software objects. An MOC is the same as an IOC except that the former is defined in technology specific terms and the latter is defined in technology agnostic terms. MOCs are used/defined in solution set (SS) level specifications. IOCs are used/defined in information service (IS) level specifications.
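The IOC-to-MOI relationship above, a technology-agnostic class whose instances carry concrete attribute values for managed network resources, can be sketched as a plain class. The attribute names are drawn from the surrounding text, but the class shape itself is an illustrative assumption.

```python
# Sketch of the IOC/MOI relationship: the class captures the information
# model, and each instance (an MOI) holds concrete attribute values for one
# managed network resource. The attribute list is not exhaustive.

class MLTrainingRequestIOC:
    attribute_names = ("mLEntityId", "candidateTraingDataSource", "requestStatus")

    def __init__(self, **attributes):
        # Unset attributes default to None, mirroring optional properties.
        for name in self.attribute_names:
            setattr(self, name, attributes.get(name))

# Instantiating the class yields an MOI-like object.
moi = MLTrainingRequestIOC(mLEntityId="entity-1", requestStatus="NOT_STARTED")
```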
  • Embodiments define NRM-based solutions for ML training by defining standardized objects, both data and code, specifically designed to support MLT operations in a SBMA of a 5GS, such as the wireless communications system 100.
  • MLT operations for the MLT system 600 may be managed through management information 614.
  • the MLT system 600 may use the management information 614 to support MLT operations for ML entities deployed throughout the SBMA of the 5GS.
  • the MLT system 600 may use the management information 614 to manage MLT operations, such as CRUD operations for one or more software Managed Object Instances (MOIs), such as a MOI 902 to support MLT operations.
  • MOIs software Managed Object Instances
  • a given MOI, such as the MOI 902 may be instantiated using one or more Information Object Classes (IOCs), such as an IOC 904, in accordance with the management information 614.
  • IOCs Information Object Classes
  • the management information 614, the MOI 902 and/or the IOC 904 may be implemented in accordance with at least 3GPP TS 28.105 titled “Artificial Intelligence / Machine Learning (AI/ML) management” Release 17, versions 0.1.0 (2022-02) to 17.1.1 (2022-09), including any progeny, revisions and variants. It may be appreciated that certain embodiments may relate to other standards as well. Embodiments are not limited in this context.
  • This clause depicts a set of classes (e.g., IOCs) that encapsulates the information relevant to ML training.
  • a class diagram 1100a for the set of classes is depicted in FIG. 11A.
  • This clause depicts a class hierarchy for ML training related NRMs.
  • a class hierarchy 1200a for the class diagram 1100a is depicted in FIG. 12A.
  • the IOC MLTrainingRequest represents the ML entity training request that is created by the ML training MnS consumer.
  • the MLTrainingRequest MOI is contained under one MLTrainingFunction MOI. Each MLTrainingRequest is associated with at least one MLEntity.
  • the MLTrainingRequest may have a source to identify where it is coming from, and which may be used to prioritize the training resources for different sources.
  • the sources may be for example the network functions, operator roles, or other functional differentiations.
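The source-based prioritization described above, where requests from certain sources (e.g., particular network functions or operator roles) are served first, can be sketched as below. The priority table and source labels are illustrative assumptions.

```python
# Hypothetical sketch of prioritizing training resources by request source.
# Lower numbers are served first; unknown sources are handled last.

SOURCE_PRIORITY = {"operator_role": 0, "network_function": 1}

def order_by_source(training_requests):
    """Sort training requests ascending by source priority."""
    return sorted(training_requests,
                  key=lambda req: SOURCE_PRIORITY.get(req["source"], 99))
```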
  • Each MLTrainingRequest may indicate the expectedRunTimeContext that describes the specific conditions for which the MLEntity should be trained.
  • the ML training MnS producer decides when to start the ML training. Once the MnS producer decides to start the training based on the request, the ML training MnS producer instantiates one or more MLTrainingProcess MOI(s) that are responsible to perform the following: collect (more) data for training, if the training data are not available or the data are available but not sufficient for the training;
  • the ML training MnS producer may examine the consumer-provided candidate training data and select none, some or all of them for training. In addition, the ML training MnS producer may select some other training data that are available in order to meet the consumer’s requirements for the MLEntity training;
  • the MLTrainingRequest may have a requestStatus field to represent the status of the specific MLTrainingRequest:
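A request-status lifecycle for an MLTrainingRequest can be sketched as a small state machine. The concrete allowed values are specified in 3GPP TS 28.105; the particular states and transition table below are illustrative assumptions, not the normative state machine.

```python
# Sketch of a requestStatus lifecycle for an MLTrainingRequest, supporting
# the consumer interactions described earlier (e.g., suspend and restart).

from enum import Enum

class RequestStatus(Enum):
    NOT_STARTED = "NOT_STARTED"
    IN_PROGRESS = "IN_PROGRESS"
    SUSPENDED = "SUSPENDED"
    FINISHED = "FINISHED"
    CANCELLED = "CANCELLED"

ALLOWED = {
    RequestStatus.NOT_STARTED: {RequestStatus.IN_PROGRESS, RequestStatus.CANCELLED},
    RequestStatus.IN_PROGRESS: {RequestStatus.SUSPENDED, RequestStatus.FINISHED,
                                RequestStatus.CANCELLED},
    RequestStatus.SUSPENDED: {RequestStatus.IN_PROGRESS, RequestStatus.CANCELLED},
    RequestStatus.FINISHED: set(),
    RequestStatus.CANCELLED: set(),
}

def can_transition(current, nxt):
    """True when moving from `current` to `nxt` is allowed in this sketch."""
    return nxt in ALLOWED[current]
```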
  • the ML training MnS producer instantiates one or more MLTrainingProcess MOI(s) representing the training process(es) being performed per the request and notifies the MLT MnS consumer(s) who subscribed to the notification.
  • the IOC MLTrainingReport represents the ML entity training report that is provided by the training MnS producer. The MLTrainingReport MOI is contained under one MLTrainingFunction MOI.
  • condition: the MLTrainingReport MOI represents the report for the ML entity training that was requested by the MnS consumer (via an MLTrainingRequest MOI).
  • condition: the MLTrainingReport MOI represents the report for the ML entity training that was not initial training (i.e., the ML entity has been trained before).
  • attribute candidateTraingDataSource: it provides the address(es) of the candidate training data source provided by the MnS consumer; the detailed training data format is vendor specific. Properties: type: String; multiplicity: *; isOrdered: False; isUnique: True; allowedValues: N/A.
  • attribute usedConsumerTrainingData: it provides the address(es) where lists of the consumer-provided training data are located, which have been used for the ML entity training. Properties: type: String; multiplicity: *; isOrdered: False; isUnique: True; allowedValues: N/A.
  • attribute trainingRequestRef: it is the DN(s) of the related MLTrainingRequest MOI(s). Properties: type: DN (see TS 32.156 [13]); allowedValues: DN.
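The MLTrainingReport attributes above can be sketched as a simple data holder in which multiplicity "*" attributes become lists. Field names follow the attribute names in the text (including its spelling of candidateTraingDataSource); the class shape itself is an illustrative assumption.

```python
# Sketch of an MLTrainingReport data holder mirroring the attribute table
# above: each multiplicity-"*" attribute is modeled as a list of strings.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MLTrainingReport:
    candidateTraingDataSource: List[str] = field(default_factory=list)
    usedConsumerTrainingData: List[str] = field(default_factory=list)
    trainingRequestRef: List[str] = field(default_factory=list)
```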
  • This clause presents a list of notifications, defined in 3GPP TS 28.532, that an MnS consumer may receive.
  • the notification header attribute objectClass/objectInstance shall capture the DN of an instance of a class defined in the present document.
  • notifications and qualifiers: notifyMOICreation (O), notifyMOIDeletion (O), notifyMOIAttributeValueChanges (O), notifyEvent (O)
  • This clause depicts a set of classes (e.g., IOCs) that encapsulates the information relevant to ML training.
  • a class diagram 1100b for the set of classes is depicted in FIG. 11B.
  • the IOC MLTrainingRequests represents the container of the MLTrainingRequest IOC(s).
  • the IOC MLTrainingRequest represents the ML entity training request that is created by the MnS consumer.
  • the MLTrainingRequest MOI is contained under one MLTrainingRequests MOI.
  • the IOC MLTrainingReports represents the container of the MLTrainingReport IOC(s).
  • the IOC MLTrainingReport represents the AI/ML model training report that is provided by the MnS producer.
  • the MLTrainingReport MOI is contained under one MLTrainingReports MOI.
  • X.3.4.2 Attributes
  • trainingRequestRef (Support Qualifier: Condition): the MLTrainingReport MOI represents the report for the AI/ML model training that was requested by the MnS consumer (via an MLTrainingRequest MOI).
  • lastTrainingRef (Support Qualifier: Condition): the MLTrainingReport MOI represents the report for the ML training that was not initial training (i.e., the model has been trained before).
  • attribute MLEntityPackageAddress: it provides the address where the ML entity package is located. The ML entity package may contain the ML entity (e.g., a software image or file) and the model descriptor; the model descriptor may contain more detailed information about the model, such as version, resource requirements, etc. Properties: type: String; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
  • attribute candidateTraingDataSource: it provides the address(es) of the candidate training data source provided by the MnS consumer. Properties: type: String; multiplicity: *.
  • the algorithm for calculating the accuracy score is vendor specific. Properties: allowedValues: { 0 .. 100 }; isNullable: True.
  • attribute areConsumerTrainingDataUsed: it indicates whether the consumer-provided training data have been used for the ML entity training. Properties: type: Enum; multiplicity: 1; isOrdered: N/A; isUnique: N/A; allowedValues: ALL, ...
  • This clause presents a list of notifications, defined in TS 28.532 [6], that an MnS consumer may receive.
  • the notification header attribute objectClass/objectInstance shall capture the DN of an instance of a class defined in the present document.
  • notifications and qualifiers: notifyMOICreation (O), notifyMOIDeletion (O), notifyMOIAttributeValueChanges (O), notifyEvent (O)
  • FIG. 10 illustrates an apparatus 1000 suitable for an MLT system 600 of a 5G NR wireless system to implement MLT operations, procedures or methods, such as those defined in 3GPP TS 28.105, using one or more of the MOI 902, IOC 904, and/or the management information 614.
  • the apparatus 1000 to train an ML entity for a network node may include a processor circuitry 1002, a memory interface 1004, a data storage device 1006, and a transmitter/receiver ("transceiver") 1008.
  • the processor circuitry 1002 may implement the logic flow 800 and/or some or all of the message flow 700.
  • the memory interface 1004 may send or receive, to or from a data storage device 1006 (e.g., volatile or non-volatile memory), management information 614 for a network resource model (NRM) of a fifth generation system (5GS), such as the wireless communications system 100.
  • a data storage device 1006 e.g., volatile or non-volatile memory
  • NRM network resource model
  • the apparatus 1000 also includes processor circuitry 1002 communicatively coupled to the memory interface 1004, the processor circuitry 1002 to determine to initiate training of an ML entity 612 using the management information 614, the training to be performed by an MLT MnS producer 602 of the 5GS in accordance with ML training logic 604, determine an inference type associated with the ML entity 612, select training data to train the ML entity 612, and train the ML entity 612 according to the inference type using the selected training data by the MLT MnS producer 602, the trained ML entity 612 to conduct inference operations for an MLT MnS consumer 608 of the 5GS.
  • the apparatus 1000 may also include the processor circuitry 1002 to receive a ML training request 704 for ML entity training from the MLT MnS consumer 608, and determine to initiate training of the ML entity 612 in response to a request for AI/ML model training from the MLT MnS consumer 608.
  • the apparatus 1000 may also include the processor circuitry 1002 to receive an ML training request 704 specifying one or more data sources 606 containing candidate training data for training the ML entity 612 from the MLT MnS consumer 608, and select at least a portion of the training data to train the ML entity 612 from the candidate training data received from the MLT MnS consumer 608.
  • the apparatus 1000 may also include the processor circuitry 1002 to receive an ML training request 704 specifying the inference type for the ML entity 612 to be trained from the MLT MnS consumer 608.
  • the apparatus 1000 may also include the processor circuitry 1002 to determine to initiate training of the ML entity 612 by the MLT MnS producer 602 as a result of evaluation of performance of the ML entity 612, based on feedback 418 received from the MLT MnS consumer 608, or when new training data describing new network status or events are available.
  • the apparatus 1000 may also include the processor circuitry 1002 to generate a training result 708 for the trained ML entity 612 by the MLT MnS producer 602.
  • the apparatus 1000 may also include the processor circuitry 1002 to provide a training result 708 that includes a location of the trained ML entity 612 to the MLT MnS consumer 608 from the MLT MnS producer 602.
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management.
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram 1100a, 1100b, the class diagram 1100a, 1100b to include a set of classes that encapsulates information relevant to ML entity training.
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram 1100a, 1100b, the class diagram 1100a, 1100b to include a set of classes that encapsulates information relevant to ML entity training and a class hierarchy 1200a, 1200b for ML entity training related to the NRM.
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training request, the AI/ML training request to represent an ML entity training request that is created by the MLT MnS consumer 608, and where the ML entity training request managed object instance (MOI) such as MOI 902 is contained under one AI/ML training function MOI.
  • MOI ML entity training request managed object instance
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training report, the AI/ML training report to represent an ML entity training report that is provided by the MnS producer, and where the ML entity training report managed object instance (MOI) such as MOI 902 is contained under one AI/ML training function MOI.
  • MOI ML entity training report managed object instance
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the ML entity 612 to the MLT MnS producer 602.
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source.
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity 612 supports.
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate a performance of the ML entity 612.
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether training data provided by the MLT MnS consumer 608 has been used for the AI/ML training.
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses where lists of training data provided by the MLT MnS consumer 608 are located, which have been used for the ML entity training.
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML training request that is created by the MLT MnS consumer 608.
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an ML training report managed object instance (MOI) such as MOI 902 that represents a last training report for the ML entity 612.
  • MOI ML training report managed object instance
  • the apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
  • FIG. 11 A illustrates a class diagram 1100a.
  • the class diagram 1100a may comprise an example of a first set of classes suitable for a MOI 902, an IOC 904 and/or management information 614 of the MLT system 600.
  • the first set of classes are by way of example and not limitation. Other classes may be used as well. Embodiments are not limited in this context.
  • FIG. 11B illustrates a class diagram 1100b.
  • the class diagram 1100b may comprise an example of a second set of classes suitable for a MOI 902, an IOC 904 and/or management information 614 of the MLT system 600.
  • the second set of classes are by way of example and not limitation. Other classes may be used as well. Embodiments are not limited in this context.
  • FIG. 12A illustrates an embodiment of a class hierarchy 1200a.
  • the class hierarchy 1200a may comprise an example of a first class hierarchy for the first set of classes set forth in the class diagram 1100a suitable for a MOI 902, an IOC 904 and/or management information 614 of the MLT system 600.
  • the first class hierarchy is by way of example and not limitation. Other class hierarchies may be used as well. Embodiments are not limited in this context.
  • FIG. 12B illustrates an embodiment of a class hierarchy 1200b.
  • the class hierarchy 1200b may comprise an example of a second class hierarchy for the second set of classes set forth in the class diagram 1100b suitable for a MOI 902, an IOC 904 and/or management information 614 of the MLT system 600.
  • the second class hierarchy is by way of example and not limitation. Other class hierarchies may be used as well. Embodiments are not limited in this context.
  • FIGS. 13-16 illustrate various systems, devices and components that may implement aspects of disclosed embodiments.
  • the systems, devices, and components may be the same, or similar to, the systems, device and components described with reference to FIG. 1.
  • FIG. 13 illustrates a network 1300 in accordance with various embodiments.
  • the network 1300 may operate in a manner consistent with 3GPP technical specifications for LTE or 5G/NR systems.
  • the example embodiments are not limited in this regard and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like.
  • the network 1300 may include a UE 1302, which may include any mobile or non- mobile computing device designed to communicate with a RAN 1330 via an over-the-air connection.
  • the UE 1302 may be communicatively coupled with the RAN 1330 by a Uu interface.
  • the UE 1302 may be, but is not limited to, a smartphone, tablet computer, wearable computer device, desktop computer, laptop computer, in-vehicle infotainment, in-car entertainment device, instrument cluster, head-up display device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, M2M or D2D device, IoT device, etc.
  • the network 1300 may include a plurality of UEs coupled directly with one another via a sidelink interface.
  • the UEs may be M2M/D2D devices that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc.
  • the UE 1302 may additionally communicate with an AP 1304 via an over-the-air connection.
  • the AP 1304 may manage a WLAN connection, which may serve to offload some/all network traffic from the RAN 1330.
  • the connection between the UE 1302 and the AP 1304 may be consistent with any IEEE 802.11 protocol, wherein the AP 1304 could be a wireless fidelity (Wi-Fi®) router.
  • the UE 1302, RAN 1330, and AP 1304 may utilize cellular-WLAN aggregation (for example, LWA/LWIP).
  • Cellular-WLAN aggregation may involve the UE 1302 being configured by the RAN 1330 to utilize both cellular radio resources and WLAN resources.
  • the RAN 1330 may include one or more access nodes, for example, AN 1360.
  • AN 1360 may terminate air-interface protocols for the UE 1302 by providing access stratum protocols including RRC, PDCP, RLC, MAC, and L1 protocols.
  • the AN 1360 may enable data/voice connectivity between CN 1318 and the UE 1302.
  • the AN 1360 may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network, which may be referred to as a CRAN or virtual baseband unit pool.
  • the AN 1360 may be referred to as a BS, gNB, RAN node, eNB, ng-eNB, NodeB, RSU, TRxP, TRP, etc.
  • the AN 1360 may be a macrocell base station or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells.
  • the ANs of the RAN 1330 may be coupled with one another via an X2 interface (if the RAN 1330 is an LTE RAN) or an Xn interface (if the RAN 1330 is a 5G RAN).
  • the X2/Xn interfaces which may be separated into control/user plane interfaces in some embodiments, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, etc.
  • the ANs of the RAN 1330 may each manage one or more cells, cell groups, component carriers, etc. to provide the UE 1302 with an air interface for network access.
  • the UE 1302 may be simultaneously connected with a plurality of cells provided by the same or different ANs of the RAN 1330.
  • the UE 1302 and RAN 1330 may use carrier aggregation to allow the UE 1302 to connect with a plurality of component carriers, each corresponding to a Pcell or Scell.
  • a first AN may be a master node that provides an MCG and a second AN may be a secondary node that provides an SCG.
  • the first/second ANs may be any combination of eNB, gNB, ng-eNB, etc.
  • the RAN 1330 may provide the air interface over a licensed spectrum or an unlicensed spectrum.
  • the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/Scells.
  • Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.
  • the UE 1302 or AN 1360 may be or act as a RSU, which may refer to any transportation infrastructure entity used for V2X communications.
  • An RSU may be implemented in or by a suitable AN or a stationary (or relatively stationary) UE.
  • An RSU implemented in or by: a UE may be referred to as a “UE-type RSU”; an eNB may be referred to as an “eNB-type RSU”; a gNB may be referred to as a “gNB-type RSU”; and the like.
  • an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs.
  • the RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic.
  • the RSU may provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may provide other cellular/WLAN communications services.
  • the components of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network.
  • the RAN 1330 may be an LTE RAN 1326 with eNBs, for example, eNB 1354.
  • the LTE RAN 1326 may provide an LTE air interface with the following characteristics: SCS of 15 kHz; CP-OFDM waveform for DL and SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc.
  • the LTE air interface may rely on CSI-RS for CSI acquisition and beam management; PDSCH/PDCCH DMRS for PDSCH/PDCCH demodulation; and CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE.
  • the LTE air interface may operate on sub-6 GHz bands.
  • the RAN 1330 may be an NG-RAN 1328 with gNBs, for example, gNB 1356, or ng-eNBs, for example, ng-eNB 1358.
  • the gNB 1356 may connect with 5G-enabled UEs using a 5G NR interface.
  • the gNB 1356 may connect with a 5G core through an NG interface, which may include an N2 interface or an N3 interface.
  • the ng- eNB 1358 may also connect with the 5G core through an NG interface, but may connect with a UE via an LTE air interface.
  • the gNB 1356 and the ng-eNB 1358 may connect with each other over an Xn interface.
  • the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 1328 and a UPF 1338 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 1328 and an AMF 1334 (e.g., N2 interface).
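The NG-U/NG-C split described above can be sketched as a small dispatcher; the function and the string labels are illustrative assumptions, not a 3GPP API:

```python
def route_ng_message(plane: str) -> str:
    """Return the 5G core function that terminates the given NG
    interface part (illustrative only)."""
    if plane == "user":
        # NG-U (N3 interface): traffic data between NG-RAN nodes and the UPF
        return "UPF"
    if plane == "control":
        # NG-C (N2 interface): signaling between NG-RAN nodes and the AMF
        return "AMF"
    raise ValueError(f"unknown plane: {plane}")
```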
  • the NG-RAN 1328 may provide a 5G-NR air interface with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data.
  • the 5G- NR air interface may rely on CSI-RS, PDSCH/PDCCH DMRS similar to the LTE air interface.
  • the 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking for PDSCH; and tracking reference signal for time tracking.
  • the 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz.
  • the 5G-NR air interface may include an SSB that is an area of a downlink resource grid that includes PSS/SSS/PBCH.
  • the 5G-NR air interface may utilize BWPs for various purposes.
  • BWP can be used for dynamic adaptation of the SCS.
  • the UE 1302 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 1302, the SCS of the transmission is changed as well.
  • Another use case example of BWP is related to power saving.
  • multiple BWPs can be configured for the UE 1302 with different amounts of frequency resources (for example, PRBs) to support data transmission under different traffic loading scenarios.
  • a BWP containing a smaller number of PRBs can be used for data transmission with small traffic load while allowing power saving at the UE 1302 and in some cases at the gNB 1356.
  • a BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
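The BWP-based power-saving behavior described above can be sketched as a selection rule: pick the smallest configured BWP that still covers the offered load. The configuration format and function below are assumptions for illustration, not a standardized procedure:

```python
def select_bwp(bwps, traffic_load_prbs):
    """Pick the configured BWP with the fewest PRBs that still covers
    the offered load, saving power under light traffic (sketch).

    bwps: list of (bwp_id, num_prbs) tuples (assumed configuration format).
    """
    suitable = [b for b in bwps if b[1] >= traffic_load_prbs]
    if not suitable:
        # heavy load exceeds every BWP: fall back to the largest one
        return max(bwps, key=lambda b: b[1])[0]
    return min(suitable, key=lambda b: b[1])[0]

# Hypothetical BWP configuration for a UE
bwps = [("bwp0", 24), ("bwp1", 106), ("bwp2", 273)]
low_load_bwp = select_bwp(bwps, traffic_load_prbs=20)    # small BWP for power saving
high_load_bwp = select_bwp(bwps, traffic_load_prbs=300)  # largest BWP for heavy load
```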
  • the RAN 1330 is communicatively coupled to CN 1318 that includes network elements to provide various functions to support data and telecommunications services to customers/subscribers (for example, users of UE 1302).
  • the components of the CN 1318 may be implemented in one physical node or separate physical nodes.
  • NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 1318 onto physical compute/storage resources in servers, switches, etc.
  • a logical instantiation of the CN 1318 may be referred to as a network slice, and a logical instantiation of a portion of the CN 1318 may be referred to as a network sub- slice.
  • the CN 1318 may be an LTE CN 1324, which may also be referred to as an EPC.
  • the LTE CN 1324 may include MME 1306, SGW 1308, SGSN 1314, HSS 1316, PGW 1310, and PCRF 1312 coupled with one another over interfaces (or “reference points”) as shown. Functions of the elements of the LTE CN 1324 may be briefly introduced as follows.
  • the MME 1306 may implement mobility management functions to track a current location of the UE 1302 to facilitate paging, bearer activation/deactivation, handovers, gateway selection, authentication, etc.
  • the SGW 1308 may terminate an S1 interface toward the RAN and route data packets between the RAN and the LTE CN 1324.
  • the SGW 1308 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.
  • the SGSN 1314 may track a location of the UE 1302 and perform security functions and access control. In addition, the SGSN 1314 may perform inter-EPC node signaling for mobility between different RAT networks; PDN and S-GW selection as specified by MME 1306; MME selection for handovers; etc.
  • the S3 reference point between the MME 1306 and the SGSN 1314 may enable user and bearer information exchange for inter-3GPP access network mobility in idle/active states.
  • the HSS 1316 may include a database for network users, including subscription- related information to support the network entities’ handling of communication sessions.
  • the HSS 1316 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc.
  • An S6a reference point between the HSS 1316 and the MME 1306 may enable transfer of subscription and authentication data for authenticating/authorizing user access to the LTE CN 1324.
  • the PGW 1310 may terminate an SGi interface toward a data network (DN) 1322 that may include an application/content server 1320.
  • the PGW 1310 may route data packets between the LTE CN 1324 and the data network 1322.
  • the PGW 1310 may be coupled with the SGW 1308 by an S5 reference point to facilitate user plane tunneling and tunnel management.
  • the PGW 1310 may further include a node for policy enforcement and charging data collection (for example, PCEF).
  • the SGi reference point between the PGW 1310 and the data network 1322 may be an operator external public, a private PDN, or an intra-operator packet data network, for example, for provision of IMS services.
  • the PGW 1310 may be coupled with a PCRF 1312 via a Gx reference point.
  • the PCRF 1312 is the policy and charging control element of the LTE CN 1324.
  • the PCRF 1312 may be communicatively coupled to the app/content server 1320 to determine appropriate QoS and charging parameters for service flows.
  • the PCRF 1312 may provision associated rules into a PCEF (via Gx reference point) with appropriate TFT and QCI.
  • the CN 1318 may be a 5GC 1352.
  • the 5GC 1352 may include an AUSF 1332, AMF 1334, SMF 1336, UPF 1338, NSSF 1340, NEF 1342, NRF 1344, PCF 1346, UDM 1348, and AF 1350 coupled with one another over interfaces (or “reference points”) as shown.
  • Functions of the elements of the 5GC 1352 may be briefly introduced as follows.
  • the AUSF 1332 may store data for authentication of UE 1302 and handle authentication-related functionality.
  • the AUSF 1332 may facilitate a common authentication framework for various access types.
  • the AUSF 1332 may exhibit an Nausf service-based interface.
  • the AMF 1334 may allow other functions of the 5GC 1352 to communicate with the UE 1302 and the RAN 1330 and to subscribe to notifications about mobility events with respect to the UE 1302.
  • the AMF 1334 may be responsible for registration management (for example, for registering UE 1302), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization.
  • the AMF 1334 may provide transport for SM messages between the UE 1302 and the SMF 1336, and act as a transparent proxy for routing SM messages.
  • AMF 1334 may also provide transport for SMS messages between UE 1302 and an SMSF.
  • AMF 1334 may interact with the AUSF 1332 and the UE 1302 to perform various security anchor and context management functions.
  • AMF 1334 may be a termination point of a RAN CP interface, which may include or be an N2 reference point between the RAN 1330 and the AMF 1334; and the AMF 1334 may be a termination point of NAS (N1) signaling, and perform NAS ciphering and integrity protection.
  • AMF 1334 may also support NAS signaling with the UE 1302 over an N3IWF interface.
  • the SMF 1336 may be responsible for SM (for example, session establishment, tunnel management between UPF 1338 and AN 1360); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 1338 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; downlink data notification; initiating AN specific SM information, sent via AMF 1334 over N2 to AN 1360; and determining SSC mode of a session.
  • SM may refer to management of a PDU session, and a PDU session or “session” may refer to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 1302 and the data network 1322.
  • the UPF 1338 may act as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to data network 1322, and a branching point to support multi-homed PDU session.
  • the UPF 1338 may also perform packet routing and forwarding, perform packet inspection, enforce the user plane part of policy rules, lawfully intercept packets (UP collection), perform traffic usage reporting, perform QoS handling for a user plane (e.g., packet filtering, gating, UL/DL rate enforcement), perform uplink traffic verification (e.g., SDF-to-QoS flow mapping), transport level packet marking in the uplink and downlink, and perform downlink packet buffering and downlink data notification triggering.
  • UPF 1338 may include an uplink classifier to support routing traffic flows to a data network.
  • the NSSF 1340 may select a set of network slice instances serving the UE 1302.
  • the NSSF 1340 may also determine allowed NSSAI and the mapping to the subscribed S- NSSAIs, if needed.
  • the NSSF 1340 may also determine the AMF set to be used to serve the UE 1302, or a list of candidate AMFs based on a suitable configuration and possibly by querying the NRF 1344.
  • the selection of a set of network slice instances for the UE 1302 may be triggered by the AMF 1334 with which the UE 1302 is registered by interacting with the NSSF 1340, which may lead to a change of AMF.
  • the NSSF 1340 may interact with the AMF 1334 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown). Additionally, the NSSF 1340 may exhibit an Nnssf service-based interface.
  • the NEF 1342 may securely expose services and capabilities provided by 3GPP network functions for third party, internal exposure/re-exposure, AFs (e.g., AF 1350), edge computing or fog computing systems, etc.
  • the NEF 1342 may authenticate, authorize, or throttle the AFs.
  • NEF 1342 may also translate information exchanged with the AF 1350 and information exchanged with internal network functions. For example, the NEF 1342 may translate between an AF-Service-Identifier and internal 5GC information.
  • NEF 1342 may also receive information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 1342 as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF 1342 to other NFs and AFs, or used for other purposes such as analytics. Additionally, the NEF 1342 may exhibit an Nnef service-based interface.
  • the NRF 1344 may support service discovery functions, receive NF discovery requests from NF instances, and provide the information of the discovered NF instances to the NF instances. NRF 1344 also maintains information of available NF instances and their supported services. As used herein, the terms “instantiate,” “instantiation,” and the like may refer to the creation of an instance, and an “instance” may refer to a concrete occurrence of an object, which may occur, for example, during execution of program code. Additionally, the NRF 1344 may exhibit the Nnrf service-based interface.
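The NRF's registration and discovery role described above can be sketched as a minimal registry. The class and method names are assumptions for illustration, not the actual Nnrf service operations:

```python
class NRF:
    """Minimal sketch of NF instance registration and service discovery."""

    def __init__(self):
        # nf_instance_id -> set of services that instance supports
        self._instances = {}

    def register(self, nf_instance_id, services):
        """Maintain information of an available NF instance."""
        self._instances[nf_instance_id] = set(services)

    def discover(self, service):
        """Answer a discovery request with the instances offering a service."""
        return sorted(i for i, s in self._instances.items() if service in s)

nrf = NRF()
nrf.register("amf-1", ["namf-comm"])
nrf.register("smf-1", ["nsmf-pdusession"])
```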
  • the PCF 1346 may provide policy rules to control plane functions to enforce them, and may also support unified policy framework to govern network behavior.
  • the PCF 1346 may also implement a front end to access subscription information relevant for policy decisions in a UDR of the UDM 1348.
  • the PCF 1346 may exhibit an Npcf service-based interface.
  • the UDM 1348 may handle subscription-related information to support the network entities’ handling of communication sessions, and may store subscription data of UE 1302. For example, subscription data may be communicated via an N8 reference point between the UDM 1348 and the AMF 1334.
  • the UDM 1348 may include two parts, an application front end and a UDR.
  • the UDR may store subscription data and policy data for the UDM 1348 and the PCF 1346, and/or structured data for exposure and application data (including PFDs for application detection, application request information for multiple UEs 1302) for the NEF 1342.
  • the Nudr service-based interface may be exhibited by the UDR to allow the UDM 1348, PCF 1346, and NEF 1342 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR.
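The read/update/delete/subscribe pattern that the Nudr interface exposes, as described above, can be sketched as follows; the data model and method names are assumptions, not the standardized Nudr operations:

```python
class UDR:
    """Sketch of a repository offering read/update/delete plus
    change-notification subscriptions (illustrative only)."""

    def __init__(self):
        self._data = {}
        self._subscribers = []

    def read(self, key):
        return self._data.get(key)

    def update(self, key, value):
        """Add or modify an entry and notify subscribers of the change."""
        self._data[key] = value
        for callback in self._subscribers:
            callback(key, value)

    def delete(self, key):
        self._data.pop(key, None)

    def subscribe(self, callback):
        """Register for notifications of relevant data changes."""
        self._subscribers.append(callback)

udr = UDR()
seen = []
udr.subscribe(lambda key, value: seen.append((key, value)))
udr.update("sub-1", {"qci": 9})  # add; subscriber is notified
```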
  • the UDM may include a UDM-FE, which is in charge of processing credentials, location management, subscription management and so on. Several different front ends may serve the same user in different transactions.
  • the UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management.
  • the UDM 1348 may exhibit the Nudm service-based interface.
  • the AF 1350 may provide application influence on traffic routing, provide access to NEF, and interact with the policy framework for policy control.
  • the 5GC 1352 may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE 1302 is attached to the network. This may reduce latency and load on the network.
  • the 5GC 1352 may select a UPF 1338 close to the UE 1302 and execute traffic steering from the UPF 1338 to data network 1322 via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF 1350. In this way, the AF 1350 may influence UPF (re)selection and traffic routing.
  • the network operator may permit AF 1350 to interact directly with relevant NFs. Additionally, the AF 1350 may exhibit an Naf service-based interface.
  • the data network 1322 may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, application/content server 1320.
  • FIG. 14 schematically illustrates a wireless network 1400 in accordance with various embodiments.
  • the wireless network 1400 may include a UE 1402 in wireless communication with an AN 1424.
  • the UE 1402 and AN 1424 may be similar to, and substantially interchangeable with, like-named components described elsewhere herein.
  • the UE 1402 may be communicatively coupled with the AN 1424 via connection 1446.
  • the connection 1446 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6GHz frequencies.
  • the UE 1402 may include a host platform 1404 coupled with a modem platform 1408.
  • the host platform 1404 may include application processing circuitry 1406, which may be coupled with protocol processing circuitry 1410 of the modem platform 1408.
  • the application processing circuitry 1406 may run various applications for the UE 1402 that source/sink application data.
  • the application processing circuitry 1406 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations may include transport (for example, UDP) and Internet (for example, IP) operations.
  • the protocol processing circuitry 1410 may implement one or more of layer operations to facilitate transmission or reception of data over the connection 1446.
  • the layer operations implemented by the protocol processing circuitry 1410 may include, for example, MAC, RLC, PDCP, RRC and NAS operations.
  • the modem platform 1408 may further include digital baseband circuitry 1412 that may implement one or more layer operations that are "below" layer operations performed by the protocol processing circuitry 1410 in a network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ-ACK functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
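Of the PHY operations listed above, scrambling/descrambling is simple to illustrate: the bit stream is XORed with a pseudo-random sequence, and applying the same sequence again recovers the original bits. The generator below is a toy linear-feedback sketch, not the Gold sequence 3GPP specifies:

```python
def prbs(seed, n):
    """Toy linear-feedback bit generator (not the 3GPP Gold sequence)."""
    state, out = seed, []
    for _ in range(n):
        bit = state & 1
        out.append(bit)
        state = (state >> 1) ^ (0b10100 if bit else 0)
    return out

def scramble(bits, seed):
    """XOR the bit stream with the pseudo-random sequence."""
    return [b ^ c for b, c in zip(bits, prbs(seed, len(bits)))]

data = [1, 0, 1, 1, 0, 0, 1, 0]
scrambled = scramble(data, seed=0b11011)
# descrambling is the same XOR with the same sequence
assert scramble(scrambled, seed=0b11011) == data
```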
  • the modem platform 1408 may further include transmit circuitry 1414, receive circuitry 1416, RF circuitry 1418, and RF front end (RFFE) 1420, which may include or connect to one or more antenna panels 1422.
  • the transmit circuitry 1414 may include a digital-to-analog converter, mixer, intermediate frequency (IF) components, etc.
  • the receive circuitry 1416 may include an analog-to-digital converter, mixer, IF components, etc.
  • the RF circuitry 1418 may include a low-noise amplifier, a power amplifier, power tracking components, etc.
  • RFFE 1420 may include filters (for example, surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (for example, phase-array antenna components), etc.
  • transmit/receive components may be specific to details of a specific implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, etc.
  • the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, etc.
  • the protocol processing circuitry 1410 may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.
  • a UE reception may be established by and via the antenna panels 1422, RFFE 1420, RF circuitry 1418, receive circuitry 1416, digital baseband circuitry 1412, and protocol processing circuitry 1410.
  • the antenna panels 1422 may receive a transmission from the AN 1424 by receive-beamforming signals received by a plurality of antennas/antenna elements of the one or more antenna panels 1422.
  • a UE transmission may be established by and via the protocol processing circuitry 1410, digital baseband circuitry 1412, transmit circuitry 1414, RF circuitry 1418, RFFE 1420, and antenna panels 1422.
  • the transmit components of the UE 1402 may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels 1422.
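The spatial filtering described above amounts to multiplying each transmit symbol by a per-antenna-element complex weight. The weights below steer a hypothetical 4-element array; the spacing and phase slope are illustrative assumptions:

```python
import cmath

def apply_spatial_filter(symbol, weights):
    """Map one transmit symbol onto antenna elements by multiplying it
    with per-element complex weights (a sketch of transmit beamforming)."""
    return [symbol * w for w in weights]

# Linear-phase steering weights for a 4-element array; the phase slope
# (here pi/2 per element) is an assumed, illustrative steering angle
weights = [cmath.exp(-1j * cmath.pi * 0.5 * k) for k in range(4)]
per_antenna = apply_spatial_filter(1 + 0j, weights)
```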
  • the AN 1424 may include a host platform 1426 coupled with a modem platform 1430.
  • the host platform 1426 may include application processing circuitry 1428 coupled with protocol processing circuitry 1432 of the modem platform 1430.
  • the modem platform may further include digital baseband circuitry 1434, transmit circuitry 1436, receive circuitry 1438, RF circuitry 1440, RFFE circuitry 1442, and antenna panels 1444.
  • the components of the AN 1424 may be similar to and substantially interchangeable with like-named components of the UE 1402.
  • the components of the AN 1424 may perform various logical functions that include, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.
  • FIG. 15 is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein.
  • FIG. 15 shows a diagrammatic representation of hardware resources 1530 including one or more processors (or processor cores) 1510, one or more memory/storage devices 1522, and one or more communication resources 1526, each of which may be communicatively coupled via a bus 1520 or other interface circuitry.
  • a hypervisor 1502 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 1530.
  • the processors 1510 may include, for example, a processor 1512 and a processor 1514.
  • the processors 1510 may be, for example, a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a DSP such as a baseband processor, an ASIC, an FPGA, a radio-frequency integrated circuit (RFIC), another processor (including those discussed herein), or any suitable combination thereof.
  • the memory/storage devices 1522 may include main memory, disk storage, or any suitable combination thereof.
  • the memory/storage devices 1522 may include, but are not limited to, any type of volatile, non-volatile, or semi-volatile memory such as dynamic random access memory (DRAM), static random access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, etc.
  • the communication resources 1526 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 1504 or one or more databases 1506 or other network elements via a network 1508.
  • the communication resources 1526 may include wired communication components (e.g., for coupling via USB, Ethernet, etc.), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, Wi-Fi® components, and other communication components.
  • Instructions 106, 1518, 1524, 1528, 1532 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors 1510 to perform any one or more of the methodologies discussed herein.
  • the instructions 106, 1518, 1524, 1528, 1532 may reside, completely or partially, within at least one of the processors 1510 (e.g., within the processor’s cache memory), the memory/storage devices 1522, or any suitable combination thereof.
  • any portion of the instructions 106, 1518, 1524, 1528, 1532 may be transferred to the hardware resources 1530 from any combination of the peripheral devices 1504 or the databases 1506. Accordingly, the memory of processors 1510, the memory/storage devices 1522, the peripheral devices 1504, and the databases 1506 are examples of computer-readable and machine-readable media.
  • At least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below.
  • the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below.
  • circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section.
  • FIG. 16 illustrates computer readable storage medium 1600.
  • Computer readable storage medium 1600 may comprise any non-transitory computer-readable storage medium or machine-readable storage medium, such as an optical, magnetic or semiconductor storage medium.
  • computer readable storage medium 1600 may comprise an article of manufacture.
  • computer readable storage medium 1600 may store computer executable instructions 1602 that circuitry can execute.
  • computer executable instructions 1602 can include instructions to implement operations described with respect to logic flows 500 (deleted), 1200a, and 900.
  • Examples of computer readable storage medium 1600 or machine-readable storage medium 1600 may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or nonremovable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth.
  • Examples of computer executable instructions 1602 may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like.
  • Example 1 An apparatus for a network node, comprising:
  • a memory interface to send or receive, to or from a data storage device, management information for artificial intelligence (AI) and machine learning (ML) management based on a network resource model (NRM) of a fifth generation system (5GS); and
  • processor circuitry communicatively coupled to the memory interface, the processor circuitry to:
  • determine to initiate training of an ML entity using the management information, the training to be performed by a management service (MnS) producer of the 5GS; and
  • MnS management service
  • train the ML entity according to the inference type using the selected training data by the MnS producer, the trained ML entity to conduct inference operations for a management service (MnS) consumer of the 5GS.
  • Example 2 The apparatus of any previous example such as example 1, the processor circuitry to determine to initiate training of the ML entity in response to a request for ML entity training from the MnS consumer.
  • Example 3 The apparatus of any previous example such as example 1, the processor circuitry to:
  • receive a request specifying one or more data sources containing candidate training data for training the ML entity from the MnS consumer; and
  • select at least a portion of the training data to train the ML entity from the candidate training data received from the MnS consumer.
  • Example 4 The apparatus of any previous example such as example 1, the processor circuitry to receive a request specifying the inference type for the ML entity to be trained from the MnS consumer.
  • Example 5 The apparatus of any previous example such as example 1, the processor circuitry to determine to initiate training of the ML entity by the MnS producer as a result of evaluation of performance of the ML entity, based on feedback information received from the MnS consumer, or when new training data describing new network status or events are available.
  • Example 6 The apparatus of any previous example such as example 1, the processor circuitry to generate a training result for the trained ML entity by the MnS producer.
  • Example 7 The apparatus of any previous example such as example 1, the processor circuitry to provide a training result that includes a location of the trained ML entity to the MnS consumer from the MnS producer.
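The training workflow recited in Examples 1 through 7 — receive a request with an inference type and candidate data sources, select training data, train, and report the trained entity's location — can be sketched as follows. All class, method, and path names here are hypothetical illustrations, not taken from any 3GPP specification.

```python
from dataclasses import dataclass


@dataclass
class TrainingRequest:
    inference_type: str                # inference type the ML entity should support
    candidate_data_sources: list[str]  # addresses of candidate training data


@dataclass
class TrainingReport:
    ml_entity_id: str
    trained_entity_location: str       # location of the trained ML entity


class MnSProducer:
    def select_training_data(self, candidates: list[str]) -> list[str]:
        # Select at least a portion of the consumer-provided candidate data.
        return candidates[:1] if candidates else []

    def train(self, ml_entity_id: str, request: TrainingRequest) -> TrainingReport:
        selected = self.select_training_data(request.candidate_data_sources)
        # ... train the ML entity for request.inference_type using `selected` ...
        return TrainingReport(
            ml_entity_id=ml_entity_id,
            trained_entity_location=f"/ml-entities/{ml_entity_id}/trained",
        )
```

A consumer would create a `TrainingRequest` and receive back a `TrainingReport` whose `trained_entity_location` tells it where the trained entity resides, mirroring Example 7.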
  • Example 8 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management.
  • Example 9 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training.
  • Example 10 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training and a class hierarchy for ML entity training related to the NRM.
  • Example 11 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training request, the AI/ML training request to represent an ML entity training request that is created by the MnS consumer, and wherein the ML entity training request managed object instance (MOI) is contained under one ML training function MOI.
  • Example 12 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training report, the AI/ML training report to represent an ML entity training report that is provided by the MnS producer, and wherein the ML entity training report managed object instance (MOI) is contained under one AI/ML training function MOI.
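As a rough illustration of the containment recited in Examples 11 and 12, the request MOI (created by the MnS consumer) and the report MOI (provided by the MnS producer) can both be modeled as children of a single training function MOI. The class and attribute names below are assumptions for illustration only, not normative NRM names.

```python
from dataclasses import dataclass, field


@dataclass
class MLTrainingRequest:
    dn: str                 # distinguished name of this MOI
    requested_by: str = ""  # MnS consumer that created the request


@dataclass
class MLTrainingReport:
    dn: str
    training_request_ref: str = ""  # DN of the related training request


@dataclass
class MLTrainingFunction:
    dn: str
    # Request and report MOIs are contained under one training function MOI.
    requests: list[MLTrainingRequest] = field(default_factory=list)
    reports: list[MLTrainingReport] = field(default_factory=list)
```

Linking a report back to its originating request by distinguished name corresponds to the "related ML entity training request" attribute in Example 19.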
  • Example 13 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the AI/ML model to the MnS producer.
  • Example 14 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source.
  • Example 15 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity supports.
  • Example 16 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate a performance of the ML entity.
  • Example 17 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether the MnS consumer provided training data has been used for the AI/ML training.
  • Example 18 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses of where a list of MnS consumer provided training data is located, which have been used for the ML entity training.
  • Example 19 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML entity training request that is created by the MnS consumer.
  • Example 20 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an AI/ML model training report managed object instance (MOI) that represents a last training report for the ML entity.
  • MOI AI/ML model training report managed object instance
  • Example 21 The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
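The attribute properties recited in Examples 13 through 20, together with the notification header of Example 21, might be grouped as in the following sketch. The field names are placeholders chosen to track the example text; they are not normative NRM attribute names.

```python
from dataclasses import dataclass, field


@dataclass
class MLTrainingReportAttributes:
    ml_entity_id: str            # uniquely identifies the ML entity to the MnS producer
    candidate_data_source: str   # address of a candidate training data source
    inference_type: str          # inference type the ML entity supports
    performance_metric: str      # metric used to evaluate the ML entity's performance
    consumer_data_used: bool     # whether consumer-provided data was used for training
    consumer_data_addresses: list[str] = field(default_factory=list)  # where used consumer data is located
    training_request_dn: str = ""  # DN of the related training request created by the consumer
    last_report_dn: str = ""       # DN of the last training report MOI for this ML entity


@dataclass
class NotificationHeader:
    # Captures the distinguished name of the object class/instance that the
    # notification refers to, as in Example 21.
    object_class: str
    object_instance: str
```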
  • Example 22 A method for a network node, comprising:
  • determining to initiate training of a machine learning (ML) entity using management information for artificial intelligence (AI) and ML management based on a network resource model (NRM) of a fifth generation system (5GS), the training to be performed by a management service (MnS) producer of the 5GS; and
  • training the ML entity according to the inference type using the selected training data by the MnS producer, the trained ML entity to conduct inference operations for an MnS consumer of the 5GS.
  • ML machine learning
  • NRM network resource model
  • MnS management service
  • Example 23 The method of any previous example such as example 22, comprising determining to initiate training of the ML entity in response to a request for ML entity training from the MnS consumer.
  • Example 24 The method of any previous example such as example 22, comprising: receiving a request specifying one or more data sources containing candidate training data for training the ML entity from the MnS consumer; and selecting at least a portion of the training data to train the ML entity from the candidate training data received from the MnS consumer.
  • Example 25 The method of any previous example such as example 22, comprising receiving a request specifying the inference type for the ML entity to be trained from the MnS consumer.
  • Example 26 The method of any previous example such as example 22, comprising determining to initiate training of the ML entity by the MnS producer as a result of evaluation of performance of the ML entity, based on feedback information received from the MnS consumer, or when new training data describing new network status or events are available.
  • Example 27 The method of any previous example such as example 22, comprising generating a training result for the trained ML entity by the MnS producer.
  • Example 28 The method of any previous example such as example 22, comprising providing a training result that includes a location of the trained ML entity to the MnS consumer from the MnS producer.
  • Example 29 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management.
  • Example 30 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training.
  • Example 31 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training and a class hierarchy for ML entity training related to the NRM.
  • Example 32 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training request, the AI/ML training request to represent an ML entity training request that is created by the MnS consumer, and wherein the ML entity training request managed object instance (MOI) is contained under one ML training function MOI.
  • Example 33 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training report, the AI/ML training report to represent an ML entity training report that is provided by the MnS producer, and wherein the ML entity training report managed object instance (MOI) is contained under one ML training function MOI.
  • Example 34 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the ML entity to the MnS producer.
  • Example 35 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source.
  • Example 36 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity supports.
  • Example 37 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate a performance of the ML entity.
  • Example 38 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether the MnS consumer provided training data has been used for the AI/ML training.
  • Example 39 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses of where a list of MnS consumer provided training data is located, which have been used for the ML entity training.
  • Example 40 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML training request that is created by the MnS consumer.
  • Example 41 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an ML entity training report managed object instance (MOI) that represents a last training report for the ML entity.
  • MOI ML entity training report managed object instance
  • Example 42 The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
  • Example 43 A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to:
  • determine to initiate training of a machine learning (ML) entity using management information for artificial intelligence (AI) and ML management based on a network resource model (NRM) of a fifth generation system (5GS), the training to be performed by a management service (MnS) producer of the 5GS;
  • ML machine learning
  • NRM network resource model
  • MnS management service
  • train the ML entity according to the inference type using the selected training data by the MnS producer, the trained ML entity to conduct inference operations for a management service (MnS) consumer of the 5GS.
  • MnS management service
  • Example 44 The computer-readable storage medium of any previous example such as example 43, comprising:
  • determine to initiate training of the ML entity in response to a request for ML entity training from the MnS consumer.
  • Example 45 The computer-readable storage medium of any previous example such as example 43, comprising: receive a request specifying one or more data sources containing candidate training data for training the ML entity from the MnS consumer; and
  • select at least a portion of the training data to train the ML entity from the candidate training data received from the MnS consumer.
  • Example 46 The computer-readable storage medium of any previous example such as example 43, comprising receive a request specifying the inference type for the ML entity to be trained from the MnS consumer.
  • Example 47 The computer-readable storage medium of any previous example such as example 43, comprising determine to initiate training of the ML entity by the MnS producer as a result of evaluation of performance of the ML entity, based on feedback information received from the MnS consumer, or when new training data describing new network status or events are available.
  • Example 48 The computer-readable storage medium of any previous example such as example 43, comprising generate a training result for the trained ML entity by the MnS producer.
  • Example 49 The computer-readable storage medium of any previous example such as example 43, comprising provide a training result that includes a location of the trained ML entity to the MnS consumer from the MnS producer.
  • Example 50 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management.
  • Example 51 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training.
  • Example 52 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training and a class hierarchy for ML training related to the NRM.
  • Example 53 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training request, the AI/ML training request to represent an ML entity training request that is created by the MnS consumer, and wherein the ML entity training request managed object instance (MOI) is contained under one ML training function MOI.
  • Example 54 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training report, the AI/ML training report to represent an ML entity training report that is provided by the MnS producer, and wherein the ML entity training report managed object instance (MOI) is contained under one AI/ML training function MOI.
  • Example 55 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the ML entity to the MnS producer.
  • Example 56 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source.
  • Example 57 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity supports.
  • Example 58 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate a performance of the ML entity.
  • Example 59 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether the MnS consumer provided training data has been used for the AI/ML training.
  • Example 60 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses of where a list of MnS consumer provided training data is located, which have been used for the ML entity training.
  • Example 61 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML training request that is created by the MnS consumer.
  • Example 62 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an ML training report managed object instance (MOI) that represents a last training report for the ML entity.
  • MOI ML training report managed object instance
  • Example 63 The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
  • Any of the above-described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. Any of the above-described examples may be implemented as system examples and means plus function examples, unless explicitly stated otherwise.
  • the foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise forms disclosed.
  • circuitry refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable SoC), digital signal processors (DSPs), etc., that are configured to provide the described functionality.
  • FPD field-programmable device
  • FPGA field-programmable gate array
  • PLD programmable logic device
  • CPLD complex PLD
  • HCPLD high-capacity PLD
  • DSPs digital signal processors
  • the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality.
  • the term “circuitry” may also refer to a combination of one or more hardware elements (or a combination of circuits used in an electrical or electronic system) with the program code used to carry out the functionality of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry.
  • processor circuitry refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data.
  • Processing circuitry may include one or more processing cores to execute instructions and one or more memory structures to store program and data information.
  • processor circuitry may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes.
  • Processing circuitry may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like.
  • the one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators.
  • CV computer vision
  • DL deep learning
  • application circuitry and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.”
  • interface circuitry refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices.
  • interface circuitry may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like.
  • the term “user equipment” or “UE” as used herein refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network.
  • the term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc.
  • the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface.
  • network element refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services.
  • network element may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized network function (VNF), network functions virtualization infrastructure (NFVI), and/or the like.
  • computer system refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources.
  • appliance refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource.
  • a “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource.
  • resource refers to a physical or virtual device, a physical or virtual component within a computing environment, and/or a physical or virtual component within a particular device, such as computer devices, mechanical devices, memory space, processor/CPU time, processor/CPU usage, processor and accelerator loads, hardware time or usage, electrical power, input/output operations, ports or network sockets, channel/link allocation, throughput, memory usage, storage, network, database and applications, workload units, and/or the like.
  • a “hardware resource” may refer to compute, storage, and/or network resources provided by physical hardware element(s).
  • a “virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc.
  • network resource or “communication resource” may refer to resources that are accessible by computer devices/ systems via a communications network.
  • system resources may refer to any kind of shared entities to provide services, and may include computing and/or network resources. System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable.
  • channel refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream.
  • channel may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated.
  • link refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information.
  • instantiate refers to the creation of an instance.
  • An “instance” also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code.
  • Coupled may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other.
  • directly coupled may mean that two or more elements are in direct contact with one another.
  • communicatively coupled may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.
  • information element refers to a structural element containing one or more fields.
  • field refers to individual contents of an information element, or a data element that contains content.
  • SMTC refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration.
  • SSB refers to an SS/PBCH block.
  • a “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure.
  • Primary SCG Cell refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation.
  • Secondary Cell refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA.
  • Secondary Cell Group refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC.
  • the term “Serving Cell” refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; there is only one serving cell, comprising the primary cell.
  • serving cell refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA/DC.
  • Special Cell refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.

Abstract

An apparatus to train a machine learning (ML) entity for a network node in a 3GPP system may comprise a memory interface communicatively coupled to processor circuitry. The memory interface may send or receive, to or from a data storage device, management information for a network resource model (NRM) of a fifth generation system (5GS). The processor circuitry may determine to initiate training of an ML entity using the management information. The training may be performed by an MnS producer of the 5GS. The processor circuitry may determine an inference type associated with the ML entity, select training data to train the ML entity, and train the ML entity according to the inference type using the selected training data by the MnS producer. The trained ML entity may be used to conduct inference operations for an MnS consumer. Other embodiments are described and claimed.

Description

NETWORK RESOURCE MODEL BASED SOLUTIONS FOR AI-ML MODEL TRAINING
[0001] This application claims the benefit of and priority to previously filed United States Provisional Patent Application Serial Number 63/288,778, filed December 13, 2021, entitled “NRM BASED SOLUTIONS FOR AI-ML MODEL TRAINING”, which is hereby incorporated by reference in its entirety.
BACKGROUND
[0002] Wireless communication systems are rapidly growing in usage. Further, wireless communication technology has evolved from voice-only communications to also include the transmission of data, such as Internet and multimedia content, to a variety of devices. To accommodate a growing number of devices communicating, many wireless communication systems share the available communication channel resources among devices. Further, Internet-of-Things (IoT) devices are also growing in usage and can coexist with user devices in various wireless communication systems such as cellular networks.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0003] To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.
[0004] FIG. 1 illustrates a wireless communication system.
[0005] FIG. 2 illustrates a management data analytic (MDA) system in accordance with one embodiment.
[0006] FIG. 3 illustrates an artificial intelligence (AI) system in accordance with one embodiment.
[0007] FIG. 4 illustrates a MDA machine learning (ML) system in accordance with one embodiment.
[0008] FIG. 5 illustrates a chart in accordance with one embodiment.
[0009] FIG. 6 illustrates a machine learning training system in accordance with one embodiment.
[0010] FIG. 7 illustrates a message flow in accordance with one embodiment.
[0011] FIG. 8 illustrates a logic flow in accordance with one embodiment.
[0012] FIG. 9 illustrates a machine learning software architecture in accordance with one embodiment.
[0013] FIG. 10 illustrates an apparatus in accordance with one embodiment.
[0014] FIG. 11 A illustrates a first class diagram in accordance with one embodiment.
[0015] FIG. 11B illustrates a second class diagram in accordance with one embodiment.
[0016] FIG. 12A illustrates a first class hierarchy in accordance with one embodiment.
[0017] FIG. 12B illustrates a second class hierarchy in accordance with one embodiment.
[0018] FIG. 13 illustrates a first network in accordance with one embodiment.
[0019] FIG. 14 illustrates a second network in accordance with one embodiment.
[0020] FIG. 15 illustrates a third network in accordance with one embodiment.
[0021] FIG. 16 illustrates computer readable medium in accordance with one embodiment.
DETAILED DESCRIPTION
[0022] The following detailed description refers to the accompanying drawings. The same reference numbers may be used in different drawings to identify the same or similar elements. In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular structures, architectures, interfaces, techniques, etc. in order to provide a thorough understanding of the various aspects of various embodiments. However, it will be apparent to those skilled in the art having the benefit of the present disclosure that the various aspects of the various embodiments may be practiced in other examples that depart from these specific details. In certain instances, descriptions of well-known devices, circuits, and methods are omitted so as not to obscure the description of the various embodiments with unnecessary detail. For the purposes of the present document, the phrases “A or B” and “A/B” mean (A), (B), or (A and B).
[0023] Various embodiments may generally relate to the field of wireless communications. More particularly, various embodiments are directed to principles for radio access network (RAN) intelligence to enable artificial intelligence (AI) and machine learning (ML) techniques (collectively referred to as “AI” or “ML” or “AI/ML”), a functional framework for AI/ML functionality, and input/output (I/O) of components for AI/ML enabled optimization, and use cases and solutions of AI/ML enabled RAN. For a Third Generation Partnership Project (3GPP) system, such as a 3GPP system compliant with a Technical Specification Group Service and System Aspects (TSG SA) working group five (SA5) Fifth Generation (5G) system, the RAN intelligence enabled by AI/ML can be implemented, for example, as part of a management data analytics (MDA) system or platform in alignment with the SA5 5G Services Based Management Architecture (SBMA). Embodiments are not limited to this example.
[0024] Some embodiments are directed to ongoing standardization activity in TSG RAN Working Group Three (WG3) (RAN3). RAN3 is responsible for an overall universal mobile telecommunications system (UMTS) terrestrial radio access network (UTRAN), an evolved UMTS terrestrial radio access network (E-UTRAN), and a next generation RAN (NG-RAN) architecture and the specification of protocols for the related network interfaces. Embodiments may relate to, for example, 3GPP technical report (TR) 28.809 titled “Study on enhancement of Management Data Analytics” Release 17 version 17.0.0 (2021-03); 3GPP technical standard (TS) 28.104 titled "Management Data Analytics (MDA)" Release 17 version 17.1.1 (2022-09); 3GPP TS 28.620 (deleted) titled "Telecommunication management; Generic Network Resource Model (NRM) Integration Reference Point (IRP); Information Service (IS)"; 3GPP TS 32.156 titled "Telecommunication management; Fixed Mobile Convergence (FMC) Model Repertoire"; 3GPP TS 28.104 titled "Management and orchestration; Management Data Analytics (MDA)"; 3GPP TS 23.288 titled "Architecture enhancements for 5G System (5GS) to support network data analytics services"; and 3GPP TS 28.532 titled "Management and orchestration; Generic management services", including any progeny, revisions and variants. Various embodiments have been adopted into at least 3GPP TS 28.105 titled “Artificial Intelligence / Machine Learning (AI/ML) management” Release 17, versions 0.1.0 (2022-02) to 17.1.1 (2022-09), including any progeny, revisions and variants. It may be appreciated that certain embodiments may relate to other standards as well. Embodiments are not limited in this context.
[0025] Some embodiments may be implemented to support management data analytics (MDA) for a 3GPP system. For example, 3GPP TS 28.104 specifies MDA capabilities with corresponding analytics inputs and analytics outputs (reports), as well as processes and requirements for Management Data Analytics Service (MDAS), historical data handling for MDA, and ML support for MDA. This document also describes an MDA functionality and service framework, and the MDA role in a management loop. In another example, 3GPP TR 28.809 generally studies enhancements for MDA. More particularly, 3GPP TR 28.809 describes MDA use cases, identifies corresponding potential requirements, and presents possible solutions with analytics input and output (report). The study also captures the MDA functionality and service framework, MDA process, MDA role in management loop and management aspects of MDA. Moreover, the study provides recommendations for the normative specifications work in full alignment with the 3GPP TSG RAN WG3 (RAN3) and/or TSG SA Working Group Five (SA5) 5G SBMA. The main objectives of SA5 are Management, Orchestration and Charging for 3GPP systems. Both functional and service perspectives are covered.
[0026] In general, MDA is a key enabler of automation and intelligence, and it is considered a foundational capability for mobile networks and services management and orchestration. The MDA provides a capability of processing and analyzing data related to network and service events and status, such as performance measurements, key performance indicators (KPIs), reports, alarms, configuration data, network analytics data, and service experience data from analytics functions (AFs). The MDA may provide analytics output, such as statistics or predictions, root cause analysis of issues, and recommendations to enable necessary actions for network and service operations. The MDA output is provided by a Management Data Analytics Service (MDAS) producer to corresponding consumers that request the analytics.
[0027] The MDA can identify ongoing issues impacting the performance of the network and services, and can help to identify in advance potential issues that may cause failure and/or performance degradation. The MDA can also assist in predicting the network and service demand to enable timely resource provisioning and deployments, which would allow fast time-to-market network and service deployments.
[0028] The MDAS are services exposed by the MDA. The MDAS can be consumed by various consumers, including for instance management functions (MnFs) such as management service (MnS) producers and MnS consumers for network and service management, network functions (NFs) (e.g., network data analytics function (NWDAF)), self-organizing network (SON) functions, network and service optimization tools/functions, service level specification (SLS) assurance functions, human operators, application functions (AFs), and so forth. An MDA MnS (also referred to as an MDAS) in the context of a SBMA enables any authorized consumer to request and receive analytics. It is worth noting that the terms MDAS and MDA MnS are equivalent and may be used interchangeably throughout this document.
[0029] One significant area of research and development in 3GPP standards is AI/ML techniques to support various functions for a 3GPP system, such as MDA and others. For example, 3GPP TS 28.105 specifies AI/ML management capabilities and services for 5GS where AI/ML is used, including management and orchestration (e.g., MDA as defined in 3GPP TS 28.104) and 5G networks (e.g., a network data analytics function (NWDAF) as defined in 3GPP TS 23.288). 3GPP TS 28.105 also describes the functionality and service framework for AI/ML management. The AI/ML inference function in the 5GS uses an ML model for inference. To enable and facilitate AI/ML operations, an ML entity (which may be an ML model or an entity that contains one or more ML models) and the AI/ML inference function need to be managed. 3GPP TS 28.105 specifies the AI/ML management related capabilities and services, which include ML training for training the ML model(s) associated with an ML entity. For example, 3GPP TS 28.105 specifies AI/ML functionality and a service framework for ML training. An entity playing the role of an ML Training MnS producer may consume various data for ML training purposes. The ML entity training capability is provided, in the context of SBMA, to authorized consumers by the ML Training MnS producer. The ML entity training refers to the training of ML model(s) associated with the ML entity. The ML Training MnS producer may implement internal business logic related to ML training in order to leverage current and historical data related to MDA and 5G networks to monitor the networks and/or services that are relevant to the ML entity, prepare the data for model training, and trigger and conduct the appropriate ML training.
[0030] For purposes of the present document, with specific reference to 3GPP TS 28.105, an ML entity is an entity that is either an ML model or contains one or more ML model(s) and ML model related metadata. It can be managed as a single composite entity. Metadata may include, for example, the applicable runtime context for the ML model. An AI decision entity is an entity that applies a non-ML based logic for making AI decisions and that can be managed as a single composite entity. An ML model or AI/ML model is a mathematical algorithm that can be "trained" by data and human expert input as examples to replicate a decision an expert would make when provided that same information. ML model training refers to capabilities of an ML training function to take data, run it through an ML model, derive the associated loss, and adjust the parameterization of that ML model based on the computed loss. ML training refers to capabilities and associated end-to-end processes to enable an ML training function to perform ML model training (as defined above). ML training capabilities may include interaction with other parties to collect and format the data required for training the ML model, and ML model training. An ML training function is a function with ML training capabilities; it is also referred to as an MLT function. An AI/ML inference function is a function that employs an ML entity and/or AI decision entity to conduct inference.
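The ML model training cycle defined above (take data, run it through an ML model, derive the associated loss, and adjust the parameterization based on the computed loss) can be illustrated with a minimal gradient-descent loop. The linear model and mean-squared-error loss below are illustrative choices, not requirements of the specification.

```python
# Minimal sketch of one ML model training cycle: forward pass, loss,
# parameter adjustment. The model form (y = w*x + b) is an assumption
# made purely for illustration.

def train_step(w, b, xs, ys, lr=0.05):
    # Run the data through the (linear) model.
    preds = [w * x + b for x in xs]
    n = len(xs)
    # Derive the associated loss (mean squared error).
    loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / n
    # Adjust the parameterization based on the computed loss (gradients).
    dw = sum(2 * (p - y) * x for p, x, y in zip(preds, xs, ys)) / n
    db = sum(2 * (p - y) for p, y in zip(preds, ys)) / n
    return w - lr * dw, b - lr * db, loss

w, b = 0.0, 0.0
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # target relation: y = 2x
for _ in range(500):
    w, b, loss = train_step(w, b, xs, ys)
# After training, w is approximately 2 and b is approximately 0.
```

Repeated application of the step drives the loss toward zero, which is the end-to-end process the ML training function is responsible for.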
[0031] There are a number of challenges associated with conventional 3GPP systems that remain unresolved with respect to implementing a comprehensive AI/ML strategy for a wireless system. For instance, use cases and requirements for ML training, such as a comprehensive set of information model definitions for AI/ML management in a 3GPP system, are not standardized.
[0032] Embodiments attempt to solve these and other challenges. Embodiments define a set of standard apparatus, systems, procedures, methods and techniques for ML training for wireless communications systems, such as a 5GS or sixth generation system (6GS). Embodiments also provide a set of information model definitions suitable for AI/ML management. For example, in an operational environment, before an AI/ML model is deployed, such as for an AI/ML inference function (referred to as an “inference function”) to conduct inference, it needs to be trained. ML training can be performed by an entity external to the inference function. In various embodiments, the ML model is trained by an ML Training (MLT) MnS producer. The training can be triggered by one or more requests from one or more MLT MnS consumers, or initiated by the MLT MnS producer (e.g., as a result of model evaluation).
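The two training triggers described above (consumer-requested and producer-initiated) can be sketched as follows. This is an illustrative sketch only; the class, method, and attribute names (MLTrainingProducer, MLTrainingRequest, and so forth) are hypothetical and are not identifiers defined by 3GPP TS 28.105.

```python
from dataclasses import dataclass, field

@dataclass
class MLTrainingRequest:
    # Hypothetical request record from an MLT MnS consumer.
    consumer_id: str
    inference_type: str          # e.g. an MDA or analytics type identifier
    training_data_refs: list = field(default_factory=list)

class MLTrainingProducer:
    """Models an MLT MnS producer that accepts consumer requests or
    triggers training itself (e.g. as a result of model evaluation)."""

    def __init__(self):
        self.pending = []

    def request_training(self, request: MLTrainingRequest) -> str:
        # Consumer-triggered training: queue the request, return an id.
        self.pending.append(request)
        return f"training-req-{len(self.pending)}"

    def trigger_internal_training(self, inference_type: str) -> str:
        # Producer-initiated training reuses the same queue internally.
        return self.request_training(
            MLTrainingRequest(consumer_id="producer-internal",
                              inference_type=inference_type))
```

In this sketch both trigger paths converge on the same internal queue, mirroring the point above that training has one producer regardless of who initiates it.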
[0033] In one embodiment, for example, an apparatus suitable to train an ML entity or model for a network node in a 3GPP system may comprise a memory interface communicatively coupled to processor circuitry. The memory interface may send or receive, to or from a data storage device, management information for a network resource model (NRM) of a fifth generation system (5GS). The processor circuitry may determine to initiate training of an ML entity using the management information. The training may be performed by an MnS producer of the 5GS. The processor circuitry may determine an inference type associated with the ML entity, select training data to train the ML entity, and train the ML entity according to the inference type using the selected training data by the MnS producer. The trained ML entity may be used to conduct inference operations for an MnS consumer of the 5GS. Other embodiments are described and claimed.
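The sequence of determinations made by the processor circuitry above can be sketched as a single procedure. The function and field names (train_ml_entity, "training_requested", and so forth) are hypothetical illustrations, not attributes of the NRM.

```python
# Hedged sketch of the training procedure in paragraph [0033]: decide to
# initiate training from the NRM management information, determine the
# inference type, select training data, and train the ML entity.

def train_ml_entity(management_info: dict, training_data: list):
    # 1. Determine whether to initiate training from the management info.
    if not management_info.get("training_requested", False):
        return None
    # 2. Determine the inference type associated with the ML entity.
    inference_type = management_info.get("inference_type", "unspecified")
    # 3. Select training data relevant to that inference type.
    selected = [d for d in training_data
                if d.get("inference_type") == inference_type]
    # 4. "Train" the entity (placeholder: record what was actually used,
    #    standing in for the producer's real training logic).
    return {"inference_type": inference_type,
            "trained_on": len(selected)}
```

The returned record stands in for the trained ML entity that an MnS consumer would subsequently use for inference.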
[0034] Reference is now made to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. However, the novel embodiments can be practiced without these specific details. In other instances, well known structures and devices are shown in block diagram form in order to facilitate a description thereof. The intention is to cover all modifications, equivalents, and alternatives consistent with the claimed subject matter.
[0035] FIG. 1 illustrates an example of a wireless communications system 100. For purposes of convenience and without limitation, the example wireless communications system 100 is described in the context of the long-term evolution (LTE) and fifth generation (5G) new radio (NR) (5G NR) cellular networks communication standards as defined by one or more 3GPP technical specifications (TSs) and/or technical reports (TRs). However, other types of wireless standards are possible.
[0036] The wireless communications system 100 includes UE 102a and UE 102b (collectively referred to as the "UEs 102"). In this example, the UEs 102 are illustrated as smartphones (e.g., handheld touchscreen mobile computing devices connectable to one or more cellular networks). In other examples, any of the UEs 102 can include other mobile or non-mobile computing devices, such as consumer electronics devices, cellular phones, smartphones, feature phones, tablet computers, wearable computer devices, personal digital assistants (PDAs), pagers, wireless handsets, desktop computers, laptop computers, in-vehicle infotainment (IVI), in-car entertainment (ICE) devices, an Instrument Cluster (IC), head-up display (HUD) devices, onboard diagnostic (OBD) devices, dashtop mobile equipment (DME), mobile data terminals (MDTs), Electronic Engine Management System (EEMS), electronic/engine control units (ECUs), electronic/engine control modules (ECMs), embedded systems, microcontrollers, control modules, engine management systems (EMS), networked or "smart" appliances, machine-type communications (MTC) devices, machine-to-machine (M2M) devices, Internet of Things (IoT) devices, or combinations of them, among others.
[0037] In some implementations, any of the UEs 102 may be IoT UEs, which can include a network access layer designed for low-power IoT applications utilizing short-lived UE connections. An IoT UE can utilize technologies such as M2M or MTC for exchanging data with an MTC server or device using, for example, a public land mobile network (PLMN), proximity services (ProSe), device-to-device (D2D) communication, sensor networks, IoT networks, or combinations of them, among others. The M2M or MTC exchange of data may be a machine-initiated exchange of data. An IoT network describes interconnecting IoT UEs, which can include uniquely identifiable embedded computing devices (within the Internet infrastructure), with short-lived connections. The IoT UEs may execute background applications (e.g., keep-alive messages or status updates) to facilitate the connections of the IoT network.
[0038] The UEs 102 are configured to connect (e.g., communicatively couple) with a radio access network (RAN) 112. In some implementations, the RAN 112 may be a next generation RAN (NG RAN), an evolved UMTS terrestrial radio access network (E-UTRAN), or a legacy RAN, such as a UMTS terrestrial radio access network (UTRAN) or a GSM EDGE radio access network (GERAN). As used herein, the term "NG RAN" may refer to a RAN 112 that operates in a 5G NR wireless communications system 100, and the term "E-UTRAN" may refer to a RAN 112 that operates in an LTE or 4G wireless communications system 100.
[0039] To connect to the RAN 112, the UEs 102 utilize connections (or channels) 118 and 120, respectively, each of which can include a physical communications interface or layer, as described below. In this example, the connections 118 and 120 are illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols, such as a global system for mobile communications (GSM) protocol, a code-division multiple access (CDMA) network protocol, a push-to-talk (PTT) protocol, a PTT over cellular (POC) protocol, a universal mobile telecommunications system (UMTS) protocol, a 3GPP LTE protocol, a 5G NR protocol, or combinations of them, among other communication protocols.
[0040] The UE 102b is shown to be configured to access an access point (AP) 104 (also referred to as "WLAN node 104," "WLAN 104," "WLAN Termination 104," "WT 104" or the like) using a connection 122. The connection 122 can include a local wireless connection, such as a connection consistent with any IEEE 802.11 protocol, in which the AP 104 would include a wireless fidelity (Wi-Fi) router. In this example, the AP 104 is shown to be connected to the Internet without connecting to the core network of the wireless system, as described in further detail below.
[0041] The RAN 112 can include one or more nodes such as RAN nodes 106a and 106b (collectively referred to as "RAN nodes 106" or "RAN node 106") that enable the connections 118 and 120. As used herein, the terms "access node," "access point," or the like may describe equipment that provides the radio baseband functions for data or voice connectivity, or both, between a network and one or more users. These access nodes can be referred to as base stations (BS), gNodeBs, gNBs, eNodeBs, eNBs, NodeBs, RAN nodes, road side units (RSUs), transmission reception points (TRxPs or TRPs), and the like, and can include ground stations (e.g., terrestrial access points) or satellite stations providing coverage within a geographic area (e.g., a cell), among others. As used herein, the term "NG RAN node" may refer to a RAN node 106 that operates in a 5G NR wireless communications system 100 (for example, a gNB), and the term "E-UTRAN node" may refer to a RAN node 106 that operates in an LTE or 4G wireless communications system 100 (e.g., an eNB). In some implementations, the RAN nodes 106 may be implemented as one or more of a dedicated physical device such as a macrocell base station, or a low power (LP) base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells.
[0042] In some implementations, some or all of the RAN nodes 106 may be implemented as one or more software entities running on server computers as part of a virtual network, which may be referred to as a cloud RAN (CRAN) or a virtual baseband unit pool (vBBUP). The CRAN or vBBUP may implement a RAN function split, such as a packet data convergence protocol (PDCP) split in which radio resource control (RRC) and PDCP layers are operated by the CRAN/vBBUP and other layer two (e.g., data link layer) protocol entities are operated by individual RAN nodes 106; a medium access control (MAC)/physical layer (PHY) split in which RRC, PDCP, MAC, and radio link control (RLC) layers are operated by the CRAN/vBBUP and the PHY layer is operated by individual RAN nodes 106; or a "lower PHY" split in which RRC, PDCP, RLC, and MAC layers and upper portions of the PHY layer are operated by the CRAN/vBBUP and lower portions of the PHY layer are operated by individual RAN nodes 106. This virtualized framework allows the freed-up processor cores of the RAN nodes 106 to perform, for example, other virtualized applications. In some implementations, an individual RAN node 106 may represent individual gNB distributed units (DUs) that are connected to a gNB central unit (CU) using individual F1 interfaces (not shown in FIG. 1). In some implementations, the gNB-DUs can include one or more remote radio heads or RFEMs, and the gNB-CU may be operated by a server that is located in the RAN 112 (not shown) or by a server pool in a similar manner as the CRAN/vBBUP. Additionally or alternatively, one or more of the RAN nodes 106 may be next generation eNBs (ng-eNBs), including RAN nodes that provide E-UTRA user plane and control plane protocol terminations toward the UEs 102, and are connected to a 5G core network (e.g., core network 114) using a next generation interface.
[0043] In vehicle-to-everything (V2X) scenarios, one or more of the RAN nodes 106 may be or act as RSUs. The term "Road Side Unit" or "RSU" refers to any transportation infrastructure entity used for V2X communications. A RSU may be implemented in or by a suitable RAN node or a stationary (or relatively stationary) UE, where a RSU implemented in or by a UE may be referred to as a "UE-type RSU," a RSU implemented in or by an eNB may be referred to as an "eNB-type RSU," a RSU implemented in or by a gNB may be referred to as a "gNB-type RSU," and the like. In some implementations, an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs 102 (vUEs 102). The RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications or other software to sense and control ongoing vehicular and pedestrian traffic. The RSU may operate on the 5.9 GHz Direct Short Range Communications (DSRC) band to provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may operate on the cellular V2X band to provide the aforementioned low latency communications, as well as other cellular communications services. Additionally or alternatively, the RSU may operate as a Wi-Fi hotspot (2.4 GHz band) or provide connectivity to one or more cellular networks to provide uplink and downlink communications, or both. The computing device(s) and some or all of the radiofrequency circuitry of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and can include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network, or both.
[0044] Any of the RAN nodes 106 can terminate the air interface protocol and can be the first point of contact for the UEs 102. In some implementations, any of the RAN nodes 106 can fulfill various logical functions for the RAN 112 including, but not limited to, radio network controller (RNC) functions such as radio bearer management, uplink and downlink dynamic radio resource management and data packet scheduling, and mobility management.
[0045] In some implementations, the UEs 102 can be configured to communicate using orthogonal frequency division multiplexing (OFDM) communication signals with each other or with any of the RAN nodes 106 over a multicarrier communication channel in accordance with various communication techniques, such as, but not limited to, OFDMA communication techniques (e.g., for downlink communications) or SC-FDMA communication techniques (e.g., for uplink communications), although the scope of the techniques described here is not limited in this respect. The OFDM signals can comprise a plurality of orthogonal subcarriers.
[0046] The RAN nodes 106 can transmit to the UEs 102 over various channels. Various examples of downlink communication channels include Physical Broadcast Channel (PBCH), Physical Downlink Control Channel (PDCCH), and Physical Downlink Shared Channel (PDSCH). Other types of downlink channels are possible. The UEs 102 can transmit to the RAN nodes 106 over various channels. Various examples of uplink communication channels include Physical Uplink Shared Channel (PUSCH), Physical Uplink Control Channel (PUCCH), and Physical Random Access Channel (PRACH). Other types of uplink channels are possible. [0047] In some implementations, a downlink resource grid can be used for downlink transmissions from any of the RAN nodes 106 to the UEs 102, while uplink transmissions can utilize similar techniques. The grid can be a time-frequency grid, called a resource grid or time-frequency resource grid, which is the physical resource in the downlink in each slot. Such a time-frequency plane representation is a common practice for OFDM systems, which makes it intuitive for radio resource allocation. Each column and each row of the resource grid corresponds to one OFDM symbol and one OFDM subcarrier, respectively. The duration of the resource grid in the time domain corresponds to one slot in a radio frame. The smallest time-frequency unit in a resource grid is denoted as a resource element. Each resource grid comprises a number of resource blocks, which describe the mapping of certain physical channels to resource elements. Each resource block comprises a collection of resource elements; in the frequency domain, this may represent the smallest quantity of resources that currently can be allocated. There are several different physical downlink channels that are conveyed using such resource blocks.
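For illustration only, the resource grid structure described above may be sketched as follows. This sketch is not part of the described implementations; the numerology constants (12 subcarriers per resource block, 14 OFDM symbols per slot) are assumptions corresponding to a common configuration, and the class and method names are invented for the example.

```python
# Illustrative sketch of a one-slot OFDM downlink resource grid:
# columns are OFDM symbols, rows are subcarriers, and the smallest
# time-frequency unit (one row/column intersection) is a resource element.

SUBCARRIERS_PER_RB = 12  # subcarriers per resource block (assumed numerology)
SYMBOLS_PER_SLOT = 14    # OFDM symbols per slot (assumed numerology)

class ResourceGrid:
    """A time-frequency grid for one slot, indexed [subcarrier][symbol]."""

    def __init__(self, num_resource_blocks: int):
        self.num_rbs = num_resource_blocks
        self.num_subcarriers = num_resource_blocks * SUBCARRIERS_PER_RB
        # each cell holds the physical channel mapped to that resource element
        self.grid = [[None] * SYMBOLS_PER_SLOT for _ in range(self.num_subcarriers)]

    def allocate_rb(self, rb_index: int, symbol: int, channel: str) -> None:
        """Map a physical channel onto one resource block in one OFDM symbol."""
        base = rb_index * SUBCARRIERS_PER_RB
        for sc in range(base, base + SUBCARRIERS_PER_RB):
            self.grid[sc][symbol] = channel

grid = ResourceGrid(num_resource_blocks=4)
grid.allocate_rb(rb_index=0, symbol=0, channel="PDCCH")
grid.allocate_rb(rb_index=1, symbol=3, channel="PDSCH")
```

The sketch only captures the indexing convention (one column per OFDM symbol, one row per subcarrier) and the role of a resource block as an allocation unit.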
[0048] The PDSCH carries user data and higher-layer signaling to the UEs 102. The PDCCH carries information about the transport format and resource allocations related to the PDSCH channel, among other things. It may also inform the UEs 102 about the transport format, resource allocation, and hybrid automatic repeat request (HARQ) information related to the uplink shared channel. Downlink scheduling (e.g., assigning control and shared channel resource blocks to the UE 102b within a cell) may be performed at any of the RAN nodes 106 based on channel quality information fed back from any of the UEs 102. The downlink resource assignment information may be sent on the PDCCH used for (e.g., assigned to) each of the UEs 102.
[0049] The PDCCH uses control channel elements (CCEs) to convey the control information. Before being mapped to resource elements, the PDCCH complex-valued symbols may first be organized into quadruplets, which may then be permuted using a sub-block interleaver for rate matching. In some implementations, each PDCCH may be transmitted using one or more of these CCEs, in which each CCE may correspond to nine sets of four physical resource elements collectively referred to as resource element groups (REGs). Four Quadrature Phase Shift Keying (QPSK) symbols may be mapped to each REG. The PDCCH can be transmitted using one or more CCEs, depending on the size of the downlink control information (DCI) and the channel condition. In LTE, there can be four or more different PDCCH formats defined with different numbers of CCEs (e.g., aggregation level, L=1, 2, 4, or 8). [0050] Some implementations may use concepts for resource allocation for control channel information that are an extension of the above-described concepts. For example, some implementations may utilize an enhanced PDCCH (EPDCCH) that uses PDSCH resources for control information transmission. The EPDCCH may be transmitted using one or more enhanced CCEs (ECCEs). Similar to above, each ECCE may correspond to nine sets of four physical resource elements collectively referred to as an enhanced REG (EREG). An ECCE may have other numbers of EREGs.
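The CCE/REG arithmetic above (one CCE corresponds to nine REGs of four resource elements, with four QPSK symbols per REG) can be made concrete with a short sketch. The function name, return shape, and bit count are illustrative assumptions, not part of the described implementations.

```python
# Sketch of the PDCCH resource arithmetic described above.
RES_PER_REG = 4          # four resource elements per REG
REGS_PER_CCE = 9         # nine REGs per CCE
QPSK_BITS_PER_SYMBOL = 2 # one QPSK symbol per resource element

def pdcch_resources(aggregation_level: int) -> dict:
    """Resource counts for a PDCCH at a given LTE aggregation level."""
    assert aggregation_level in (1, 2, 4, 8)  # aggregation levels named in the text
    regs = aggregation_level * REGS_PER_CCE
    res = regs * RES_PER_REG
    return {
        "regs": regs,
        "resource_elements": res,
        "bits": res * QPSK_BITS_PER_SYMBOL,
    }
```

For example, at aggregation level L=8 this yields 72 REGs and 288 resource elements, which is why larger DCI payloads or poorer channel conditions call for higher aggregation levels.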
[0051] The RAN nodes 106 are configured to communicate with one another using an interface 132. In examples, such as where the wireless communications system 100 is an LTE system (e.g., when the core network 114 is an evolved packet core (EPC) network), the interface 132 may be an X2 interface 132. The X2 interface may be defined between two or more RAN nodes 106 (e.g., two or more eNBs and the like) that connect to the EPC 114, or between two eNBs connecting to EPC 114, or both. In some implementations, the X2 interface can include an X2 user plane interface (X2-U) and an X2 control plane interface (X2-C). The X2-U may provide flow control mechanisms for user data packets transferred over the X2 interface, and may be used to communicate information about the delivery of user data between eNBs. For example, the X2-U may provide specific sequence number information for user data transferred from a master eNB to a secondary eNB; information about successful in-sequence delivery of PDCP protocol data units (PDUs) to a UE 102 from a secondary eNB for user data; information of PDCP PDUs that were not delivered to a UE 102; information about a current minimum desired buffer size at the secondary eNB for transmitting to the UE user data, among other information. The X2-C may provide intra-LTE access mobility functionality, including context transfers from source to target eNBs or user plane transport control; load management functionality; inter-cell interference coordination functionality, among other functionality.
[0052] In some implementations, such as where the wireless communications system 100 is a 5G NR system (e.g., when the core network 114 is a 5G core network), the interface 132 may be an Xn interface 132. The Xn interface may be defined between two or more RAN nodes 106 (e.g., two or more gNBs and the like) that connect to the 5G core network 114, between a RAN node 106 (e.g., a gNB) connecting to the 5G core network 114 and an eNB, or between two eNBs connecting to the 5G core network 114, or combinations of them. In some implementations, the Xn interface can include an Xn user plane (Xn-U) interface and an Xn control plane (Xn-C) interface. The Xn-U may provide non-guaranteed delivery of user plane PDUs and support/provide data forwarding and flow control functionality. The Xn-C may provide management and error handling functionality, functionality to manage the Xn-C interface; mobility support for UE 102 in a connected mode (e.g., CM-CONNECTED) including functionality to manage the UE mobility for connected mode between one or more RAN nodes 106, among other functionality. The mobility support can include context transfer from an old (source) serving RAN node 106 to new (target) serving RAN node 106, and control of user plane tunnels between old (source) serving RAN node 106 to new (target) serving RAN node 106. A protocol stack of the Xn-U can include a transport network layer built on Internet Protocol (IP) transport layer, and a GPRS tunneling protocol for user plane (GTP-U) layer on top of a user datagram protocol (UDP) or IP layer(s), or both, to carry user plane PDUs. The Xn-C protocol stack can include an application layer signaling protocol (referred to as Xn Application Protocol (Xn-AP or XnAP)) and a transport network layer (TNL) that is built on a stream control transmission protocol (SCTP). The SCTP may be on top of an IP layer, and may provide the guaranteed delivery of application layer messages.
In the transport IP layer, point-to-point transmission is used to deliver the signaling PDUs. In other implementations, the Xn-U protocol stack or the Xn-C protocol stack, or both, may be same or similar to the user plane and/or control plane protocol stack(s) shown and described herein.
[0053] The RAN 112 is shown to be communicatively coupled to a core network 114 (referred to as a "CN 114"). The CN 114 includes multiple network elements, such as network element 108a and network element 108b (collectively referred to as the "network elements 108"), which are configured to offer various data and telecommunications services to customers/subscribers (e.g., users of UEs 102) who are connected to the CN 114 using the RAN 112. The components of the CN 114 may be implemented in one physical node or separate physical nodes and can include components to read and execute instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium). In some implementations, network functions virtualization (NFV) may be used to virtualize some or all of the network node functions described here using executable instructions stored in one or more computer-readable storage mediums, as described in further detail below. A logical instantiation of the CN 114 may be referred to as a network slice, and a logical instantiation of a portion of the CN 114 may be referred to as a network sub-slice. NFV architectures and infrastructures may be used to virtualize one or more network functions, otherwise performed by proprietary hardware, onto physical resources comprising a combination of industry-standard server hardware, storage hardware, or switches. In other words, NFV systems can be used to execute virtual or reconfigurable implementations of one or more network components or functions, or both. [0054] An application server 110 may be an element offering applications that use IP bearer resources with the core network (e.g., UMTS packet services (PS) domain, LTE PS data services, among others).
The application server 110 can also be configured to support one or more communication services (e.g., VoIP sessions, PTT sessions, group communication sessions, social networking services, among others) for the UEs 102 using the CN 114. The application server 110 can use an IP communications interface 130 to communicate with one or more network elements 108a.
[0055] In some implementations, the CN 114 may be a 5G core network (referred to as "5GC 114" or "5G core network 114"), and the RAN 112 may be connected with the CN 114 using a next generation interface 124. In some implementations, the next generation interface 124 may be split into two parts, a next generation user plane (NG-U) interface 114, which carries traffic data between the RAN nodes 106 and a user plane function (UPF), and the next generation control plane (NG-C) interface 126, which is a signaling interface between the RAN nodes 106 and access and mobility management functions (AMFs). Examples where the CN 114 is a 5G core network are discussed in more detail with regard to later figures.
[0056] In some implementations, the CN 114 may be an EPC (referred to as "EPC 114" or the like), and the RAN 112 may be connected with the CN 114 using an S1 interface 124. In some implementations, the S1 interface 124 may be split into two parts, an S1 user plane (S1-U) interface 128, which carries traffic data between the RAN nodes 106 and the serving gateway (S-GW), and the S1-MME interface 126, which is a signaling interface between the RAN nodes 106 and mobility management entities (MMEs).
[0057] Various embodiments address energy efficiency related issues for a cellular system such as wireless communications system 100. Energy saving is a critical issue for 5G operators. Energy saving is achieved by activating the energy saving mode of the NR capacity booster cell or 5GC NF (e.g., a UPF), and the energy saving activation decision making may be based on various information, such as load information of the related cells/UPFs and the energy saving policies set by operators as specified in a 3GPP TS or TR, such as TR 28.809, TR 37.817, TR 36.887, and TS 38.423.
[0058] A management system, node or logic has an overall view of network load information and could also take inputs from control plane analysis (e.g., the analytics provided by NWDAF). The management system may provide network wide analytics, cooperate with the core network and RAN domains, and decide which cell/UPF should move into energy saving mode in a coordinated manner. [0059] There are various performance measurements that could be used as inputs by MDA for energy saving analysis, for example: energy efficiency (EE) related performance measurements (e.g., PDCP data volume of cells, PNF temperature, and PNF power consumption) for the gNBs; the data volume, number of PDU sessions with SSC mode 1, delay related measurements, and VR usage for UPFs; and traffic load variation related performance measurements (e.g., the PRB utilization rate and RRC connection number). [0060] The composition of the traffic load could also be considered as an input for energy saving analysis (e.g., the percentage of high-value traffic in the traffic load). The variation of traffic load may be related to network data (e.g., historical handover information of the UEs, network congestion status, or packet delay). Collecting and analyzing the network data with machine learning tools may provide predictions related to the trends of traffic load. The composition and the trend of the traffic load may be used as references for making decisions on energy saving.
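By way of a non-limiting sketch, an energy-saving decision of the kind described above might combine current and predicted load inputs as follows. The field names, thresholds, and decision rule here are invented for illustration and are not taken from any 3GPP specification or from the described embodiments.

```python
# Hedged sketch: deciding whether a capacity booster cell may enter
# energy saving mode, using the example inputs named in the text
# (PRB utilization, RRC connection count, traffic composition).
from dataclasses import dataclass

@dataclass
class CellLoad:
    prb_utilization: float         # PRB utilization rate, 0.0-1.0
    rrc_connections: int           # current RRC connection count
    high_value_traffic_pct: float  # share of high-value traffic, 0.0-1.0

def should_enter_energy_saving(load: CellLoad, predicted_prb: float) -> bool:
    """Illustrative decision rule; thresholds are assumptions."""
    lightly_loaded = load.prb_utilization < 0.2 and predicted_prb < 0.2
    few_users = load.rrc_connections < 10
    # keep cells that carry mostly high-value traffic active
    low_value_mix = load.high_value_traffic_pct < 0.5
    return lightly_loaded and few_users and low_value_mix
```

The sketch shows why both a current measurement and an ML-based prediction of traffic load matter: a cell that is momentarily idle but predicted to become loaded should not be switched off.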
[0061] There are many prediction data models which may use machine learning tools for predicting energy saving related information, such as traffic load. MDAS may also take these prediction data models as input, perform analysis, and select the optimal prediction data models to provide more accurate prediction results as references for making energy saving decisions. The more accurate the prediction results are, the better the energy-saving decisions based on those results will be. The prediction data models are related to services (e.g., traffic load, resource utilization, service experience), which can be provided by the consumer.
[0062] MDAS may also obtain NF location or other inventory information, such as the energy efficiency and energy cost of the data centers, while analyzing historical network information. Based on the collected information, the MDAS producer performs analysis and provides optimization suggestions to network management for 5G Core NF deployment options in high-value traffic regions (e.g., the location of a VNF in the context of energy saving). Information from control plane data analysis from NWDAF, such as UE Communication analytics, may also be used as input for energy saving analysis and instruction.
[0063] The decision on core NF and RAN node energy saving should be coordinated by the management system so that the overall network and service performance is affected as little as possible. To achieve an optimized balance between the energy consumed and the performance provided by the network, MDAS can be used to provide an analytics report by analyzing the above information comprehensively to assist energy saving. [0064] FIG. 2 illustrates an MDA system 200 suitable for use by a management system to implement AI/ML functionality and services for the wireless communications system 100. The MDA system 200 illustrates an MDA functionality and service framework. As depicted in FIG. 2, the MDA system 200 may include an MDA platform 204, at least one MDA service (MDAS) consumer 202, and multiple MDAS producers, such as an other MDAS producer 216, a management service (MnS) producer 218, and a network data analytics function (NWDAF) 220. The MDA platform 204 includes an MDAS producer 206, an MDAS analyzer 208, and multiple MDAS consumers. The multiple MDAS consumers include an MDAS consumer 210, an MnS consumer 212 and an NWDAF subscriber 214, each communicating with the corresponding other MDAS producer 216, MnS producer 218 and NWDAF 220 via an MDAS interface, MnS interface and NWDAF interface, respectively. [0065] In general, the MDA platform 204 may collect data for analysis by acting as the MnS consumer 212, and/or as the NWDAF subscriber 214, and/or as a consumer of the other MDAS producer 216. After analysis, the MDAS producer 206 exposes the analysis results to the one or more MDAS consumers 202. The MDA system 200 forms part of a management loop (which can be open loop or closed loop), and it brings intelligence and generates value by processing and analyzing management and network data, where AI and ML techniques may be utilized.
The MDA system 200 plays the role of analytics in the management loop, which includes an observation state, an analytics state, a decision state and an execution state. In the observation state, the MDA system 200 conducts observation of the managed networks and services. The observation state involves monitoring and collection of events, status and performance of the managed networks and services, and providing the observed/collected data (e.g., performance measurements, Trace/MDT/RLF/RCEF reports, network analytics reports, QoE reports, alarms, etc.). The analytics state prepares, processes and analyzes the data related to the managed networks and services, and provides the analytics reports for root cause analysis of ongoing issues, prevention of potential issues and prediction of network or service demands. The analytics report contains the description of the issues or predictions, optionally with a degree of confidence indicator, the possible causes for the issue and the recommended actions. Techniques such as AI and ML (e.g., an ML model) may be utilized by the MDA platform 204, with the input data including not only the observed data of the managed networks and services, but also the execution reports of actions (taken in the execution state). The MDAS analyzer 208 classifies and correlates the input data (current and historical data), learns and recognizes the data patterns, and performs analysis to derive inferences, insights and predictions. The decision state involves making decisions on the management actions for the managed networks and services. The management actions are decided based on the analytics reports (provided by the MDAS analyzer 208) and other management data (e.g., historical decisions made previously) if necessary. The decision may be made by the consumer of MDAS (in the closed management loop), or a human operator (in the open management loop).
The decision includes what actions to take, and when to take the actions. Finally, the execution state involves execution of the management actions according to the decisions. During the execution state, the actions are carried out to the managed networks and services, and the reports (e.g., notifications, logs) of the executed actions are provided.
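The four-state management loop described above (observation, analytics, decision, execution, with execution reports feeding back into observation in a closed loop) can be sketched as a simple state machine. The names below are assumptions for illustration only.

```python
# Illustrative sketch of the closed management loop described above.
from enum import Enum

class LoopState(Enum):
    OBSERVATION = "observation"
    ANALYTICS = "analytics"
    DECISION = "decision"
    EXECUTION = "execution"

# in a closed loop, execution reports feed the next observation
NEXT = {
    LoopState.OBSERVATION: LoopState.ANALYTICS,
    LoopState.ANALYTICS: LoopState.DECISION,
    LoopState.DECISION: LoopState.EXECUTION,
    LoopState.EXECUTION: LoopState.OBSERVATION,
}

def run_loop(start: LoopState, steps: int) -> list:
    """Walk the loop for a given number of transitions and return the trace."""
    state = start
    trace = [state]
    for _ in range(steps):
        state = NEXT[state]
        trace.append(state)
    return trace
```

In the open-loop variant, the transition from DECISION to EXECUTION would instead pass through a human operator rather than an automated consumer.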
[0066] FIG. 3 illustrates an AI/ML system 300 suitable for use by the MDAS analyzer 208 of the MDA system 200 for the wireless communications system 100. The AI/ML system 300 comprises four major operational states, including a data collection state, an ML entity state, an ML training state, and an AI/ML inference state.
[0067] The AI/ML system 300 may use various ML entities. Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions. The AI/ML system 300 may use various models or ML entities, such as derived using an artificial neural network (ANN), convolutional neural network (CNN), deep learning, decision tree learning, support-vector machine, regression analysis, Bayesian networks, genetic algorithms, federated learning, distributed artificial intelligence, and other suitable models. Embodiments are not limited in this context.
[0068] Generally, in the data collection state, the AI/ML system 300 implements a function that provides input data to model training and model inference functions. Note that AI/ML algorithm-specific data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) is typically not carried out in the data collection state. In the ML entity state, the AI/ML system 300 implements a data driven algorithm, applying machine learning techniques that generate a set of outputs comprising predicted information and/or decision parameters, based on a given set of inputs 310. In the ML training state, the AI/ML system 300 implements an online or offline process to train an ML entity by learning the features and patterns that best represent the data, yielding a trained ML entity for inference. In the AI/ML inference state, the AI/ML system 300 implements a process of using a trained ML entity to make a prediction or guide a decision based on collected data and the ML entity.
[0069] More particularly, in the data collection state, the AI/ML system 300 collects data from the network nodes, management entity or UE, as a basis for ML entity training, data analytics and inference. As depicted in FIG. 3, a data collection 302 is a function that provides input data to ML training 304 and AI/ML inference 306 functions. AI/ML algorithm-specific data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) is not carried out in the data collection 302. Examples of input data may include measurements from UEs, NG-RAN nodes, OAM nodes, or different network entities, feedback from an actor 308, and output from an ML entity. The data collection 302 collects at least two types of data. The first is training data, which comprises data needed as input 310 for the ML training 304 function. The second is inference data, which comprises data needed as input 312 for the AI/ML inference 306 function.
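A minimal sketch of the data collection function's routing role described above follows. The record shape and the `"label"` field used to distinguish training data from inference data are assumptions made for the example; they are not defined by the embodiments.

```python
# Sketch: the data collection function routes incoming records either to
# ML training (records carrying an expected output) or to AI/ML inference
# (unlabeled live records), without algorithm-specific preparation.

def collect(records):
    """Split collected data into training data and inference data."""
    training, inference = [], []
    for rec in records:
        # a record carrying an expected output can serve as training data
        (training if "label" in rec else inference).append(rec)
    return training, inference

records = [
    {"measurement": 0.7, "label": "high_load"},  # measurement with ground truth
    {"measurement": 0.1},                        # live data for inference
]
train_data, infer_data = collect(records)
```

Algorithm-specific preparation (cleaning, formatting, transformation) would happen downstream, inside the training and inference functions, consistent with the text above.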
[0070] In the ML training state, the ML training 304 is a function that performs the ML training, validation, and testing, which may generate model performance metrics as part of the ML entity testing procedure. The ML training 304 function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on training data (e.g., input 310) delivered by the data collection 302 function, if required. For deployment of or updates to a given ML entity, the ML training 304 can initially deploy a trained, validated, and tested ML entity to the AI/ML inference 306 function, or deliver an updated entity to the AI/ML inference 306 function.
[0071] In the AI/ML inference state, the AI/ML inference 306 is a function that provides AI/ML inference output (e.g., predictions or decisions). The AI/ML inference 306 function may provide model performance feedback 314, 316 to the ML training 304 function when applicable. The AI/ML inference 306 function is also responsible for data preparation (e.g., data pre-processing and cleaning, formatting, and transformation) based on inference data (e.g., input 312) delivered by the data collection 302 function, if required. The inference output of the ML entity produced by an AI/ML inference 306 function is use case specific. The ML performance feedback information may be used for monitoring the performance of the ML entity, when available.
[0072] In the actor state, the actor 308 is a function that receives the output 318 from the AI/ML inference 306 function and triggers or performs corresponding actions. The actor 308 may trigger actions directed to other entities or to itself. The actor 308 may provide feedback information 320 to the data collection 302. The feedback information may comprise data needed to derive training data or inference data, or to monitor the performance of the ML entity and its impact on the network through updating of KPIs and performance counters. [0073] The AI/ML system 300 may be applicable to various use cases and solutions for AI/ML in a RAN node 106 of the wireless communications system 100. One use case is network energy saving or energy efficiency (EE). To meet the 5G network requirements of key performance and the demands of the unprecedented growth of the mobile subscribers, millions of base stations (BSs) are being deployed. Such rapid growth brings the issues of high energy consumption, CO2 emissions and operation expenditures (OPEX). Therefore, energy saving is an important use case which may involve different layers of the network, with mechanisms operating at different time scales. Other use cases and solutions for AI/ML are possible as well, and embodiments are not limited in this context.
[0074] FIG. 4 illustrates an MDA ML system 400 suitable for use in the wireless communications system 100. Referring again to FIGS. 2, 3, a management system that implements the MDA system 200 and/or the AI/ML system 300 can be coalesced into the MDA ML system 400.
[0075] As depicted in FIG. 4, the MDA ML system 400 illustrates an example of an MDA process scenario where the ML entity and the management data analysis module reside in an MDAS producer, although other scenarios are possible. The MDA ML system 400 may generally rely on ML technologies, which may need an MDAS consumer to be involved to optimize the accuracy of the MDA results. The MDA process in terms of the interaction with the MDAS consumer, when utilizing ML technologies, is described in FIG. 4.
[0076] There are two kinds of processes related to MDA: a process for ML training and a process for management data analysis. In the process for ML training, an MDAS producer 206 serves as ML training producer, trains an ML entity 406 and provides an ML training report 414. The process for ML training may also get an MDAS consumer 202 involved, by allowing the MDAS consumer 202 to provide input for ML training. The ML training may be performed on an un-trained ML entity 406 or a trained ML entity 406. In the process for management data analysis, the MDAS producer 206 analyzes the data by the trained ML entity, and provides an ML analytics report 416 to the MDAS consumer 202. The MDAS consumer 202 may validate the ML training report 414 and ML analytics report 416 and provide a report validation feedback 418 to the MDAS producer 206. For each received report the MDAS consumer 202 may provide a feedback 418 towards the MDAS producer 206, which may be used to optimize the ML entity 406.
[0077] As depicted in FIG. 4, the MDAS producer 206 may receive analytics input 412. The analytics input 412 could be used by an ML entity trainer 404 for ML training or a management data analyzer 408 for management data analysis. A data classifier 402 of the MDAS producer 206 classifies data from the analytics input 412 and passes the classified data along to a corresponding entity for further processing.
[0078] An ML trainer 404 of the MDAS producer 206 trains the ML entity 406. The ML trainer 404 trains the ML entity 406 to be able to provide the expected training output by analysis of the training input. The data for ML training may be training data, including the training input and the expected output, and/or the report validation feedback 418 provided by the MDAS consumer 202. After training the ML entity 406, the MDAS producer 206 provides an ML training report 414 to the MDAS consumer 202.
[0079] The MDAS producer 206 uses the trained ML entity 406, analyzes the classified data from the data classifier 402, and generates the ML analytics report 416. The ML analytics report 416 is output from the MDAS producer 206 to the MDAS consumer 202. The MDAS consumer 202 may validate the ML analytics report 416 provided by the MDAS producer 206. The report to be validated may be the ML analytics report 416 and/or the ML training report 414 as previously described. The MDAS consumer 202 may provide a feedback 418 to the MDAS producer 206. As a result of validation, the MDAS consumer 202 may also provide training data and request to train the ML entity 406, and/or provide feedback indicating a scope of inaccuracy, e.g., time, geographical area, etc.
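The producer/consumer exchange of reports and validation feedback described for FIG. 4 can be sketched as follows. All class and field names (including the confidence check used for validation) are illustrative assumptions, not 3GPP-defined structures or claimed features.

```python
# Hedged sketch of the MDAS report validation feedback loop: the producer
# emits training/analytics reports, and the consumer returns feedback that
# may carry a scope of inaccuracy (e.g., time, geographical area).
from dataclasses import dataclass, field

@dataclass
class Report:
    kind: str      # "ml_training" or "ml_analytics"
    content: dict

@dataclass
class ValidationFeedback:
    report_kind: str
    accurate: bool
    inaccuracy_scope: dict = field(default_factory=dict)

def validate(report: Report) -> ValidationFeedback:
    """Consumer-side validation: flag reports whose confidence is low."""
    confident = report.content.get("confidence", 1.0) >= 0.8  # assumed threshold
    scope = {} if confident else {"area": report.content.get("area", "unknown")}
    return ValidationFeedback(report.kind, confident, scope)

fb = validate(Report("ml_analytics", {"confidence": 0.6, "area": "cell-17"}))
# the producer may use fb to provide further training data and retrain the ML entity
```

The feedback object is the hook through which the consumer can steer retraining, matching the role of feedback 418 in the figure.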
[0080] When the MDA ML system 400 is implemented as part of a network node in a 3GPP system, such as a 3GPP RAN3 5G NR system, various embodiments herein describe new information that a RAN node may exchange with its neighboring nodes, as well as other metrics such as cell KPIs and ML entities, in order to facilitate better decision making from the ML entities to improve performance for an apparatus, device or system in a 3GPP system.
[0081] FIG. 5 illustrates a table 500. The AI/ML system 300 and the MDA ML system 400 may implement various AI and ML algorithms suitable for supporting one or more operations for the wireless communications system 100. As depicted in table 500, machine learning approaches are traditionally divided into four broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system. One approach is supervised learning, where a computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs. Another approach is semi-supervised learning, which is similar to supervised learning, but includes both labelled data and unlabelled data. Still another approach is unsupervised learning, where no labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning). Yet another approach is reinforcement learning, where a computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that's analogous to rewards, which it tries to maximize. Other approaches exist as well, such as dimensionality reduction, self-learning, feature learning, sparse dictionary learning, anomaly detection, robot learning, association rules, and so forth.
[0082] FIG. 6 illustrates a MLT system 600. The MLT system 600 implements a novel functional overview and service framework for ML training (MLT) of an ML entity 612 for the MLT system 600. The MLT system 600 can be used to train other ML entities as well, such as the ML entity 406 for AI/ML systems 300 and MDA ML system 400, for example. In one embodiment, the MLT system 600 may implement an AI/ML functional and service framework as defined by 3GPP TS 28.104 and/or 3GPP TS 28.105, among other 3GPP and non-3GPP standards. Embodiments are not limited in this context.
[0083] As depicted in FIG. 6, a MLT MnS producer 602 (also referred to as an AI/ML training (AIMLT) MnS producer) may implement an ML training logic 604, which is a MLT function that consumes various data from one or more data sources 606 suitable for ML training purposes. The MLT capability is provided via an ML training MnS 610 in the context of an SBMA to one or more authorized MLT MnS consumers 608 by the MLT MnS producer 602.
[0084] The MLT MnS producer 602 may train an ML entity 612 using the ML training logic 604. The ML training logic 604 represents internal business logic suitable for a given AI/ML inference function or ML entity. The ML training logic 604 leverages current and historical relevant data, including those listed below, to monitor the networks and/or services relevant to the ML entity, prepare the data, and trigger and conduct the training: (1) Performance Measurements (PM) as per 3GPP TS 28.552, 3GPP TS 32.425 and Key Performance Indicators (KPIs) as per 3GPP TS 28.554; (2) Trace/MDT/RLF/RCEF data, as per 3GPP TS 32.422 and 3GPP TS 32.423; (3) QoE and service experience data as per 3GPP TS 28.405 and 3GPP TS 28.406; (4) Analytics data offered by NWDAF as per 3GPP TS 23.288; (5) Alarm information and notifications as per 3GPP TS 28.532; (6) CM information and notifications; (7) MDA reports from MDA MnS producers as per 3GPP TS 28.104; (8) Management data from non-3GPP systems; and (9) other data that can be used for training. Embodiments are not limited in this context. [0085] The MLT MnS producer 602 may train the ML entity 612 using the ML training logic 604 and management information 614. The management information 614 may comprise a standardized set of requirements and information model definitions for AI/ML management. Examples of requirements include those set forth in Table 1 below. The information model definitions for AI/ML management may include information such as imported and associated information entities, imported information entities and local labels, classes, class diagrams, class relationships, class inheritance, class definitions, class attributes, attributes, attribute constraints, notifications, data type definitions, attribute definitions, attribute properties, common notifications, service components, solution sets, program code, and other software and hardware constructs.
[0086] In one embodiment, for example, the management information 614 may be defined in view of a network resource model (NRM) for a network, such as a 3GPP network like the wireless communications system 100. NRM configuration management allows service providers to control and monitor the actual configuration of network resources, which are the fundamental resources of mobile networks. Considering the huge number of existing information object classes (IOCs) and the increasing number of IOCs in various domains, NRM configuration management should be handled in a dynamic manner. In one embodiment, for example, the management information 614 may be defined by one or more 3GPP standards, such as 3GPP TS 28.105, among other 3GPP and non-3GPP standards. Embodiments are not limited in this context.
[0087] FIG. 7 illustrates a message flow 700 for an AI/ML system, such as the MLT system 600. The message flow 700 illustrates messages communicated for MLT to support various AI/ML management use cases and requirements. In an operational environment, an ML entity 702 is deployed to conduct inference operations for a network node in the wireless communications system 100. The ML entity 702 represents an AI/ML inference function. Prior to deployment, or after deployment, the MLT MnS producer 602 implements an ML training function defined by the ML training logic 604 to train the ML entity 612 associated with the management information 614. In one embodiment, the ML training function may be implemented as an entity combined with, or internal to, the AI/ML inference function. In one embodiment, the ML training function may be implemented as an entity separate from, or external to, the AI/ML inference function. In the present document, ML entity training refers to training of the ML model(s) associated with an ML entity.
[0088] As depicted in the message flow 700, the MLT MnS producer 602 of the MLT system 600 trains the ML entity 612 associated with the ML entity 702. The ML training can be triggered by requests from one or more MLT MnS consumers 608, or initiated by the MLT MnS producer 602 (e.g., as a result of model evaluation).
[0089] In a case where MLT is requested by the MLT MnS consumer 608, the ML training capabilities are provided by an MLT MnS producer 602 to one or more MLT MnS consumers 608. The ML training may be triggered by one or more ML training requests 704 from one or more MLT MnS consumers 608. The MLT MnS consumer 608 may be, for example, a network function, a management function, an operator, or another functional differentiation that triggers an ML training. The MLT MnS consumer 608 requests the MLT MnS producer 602 to train the ML model(s) associated with an ML entity. In the ML training request, the MLT MnS consumer 608 should specify an inference type which indicates the function or purpose of the ML entity, e.g., CoverageProblemAnalysis. The MLT MnS producer 602 can perform the training according to the designated inference type. The MLT MnS consumer 608 may provide the data sources 606 that contain the training data which are considered as input candidates for training. To obtain valid training outcomes, MLT MnS consumers 608 may also designate their requirements for model performance (e.g., accuracy) in the training request.
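The information a consumer places in an ML training request, as described above, can be sketched as a simple data structure. This is a hypothetical illustration; the field names are assumptions and do not reproduce the normative attribute names of 3GPP TS 28.105:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a consumer-originated ML training request:
# inference type, requesting source, candidate training data sources,
# and performance requirements. Names are illustrative assumptions.
@dataclass
class MLTrainingRequestSketch:
    inference_type: str                   # e.g. "CoverageProblemAnalysis"
    source: str                           # requesting NF, operator role, etc.
    candidate_training_data_sources: list = field(default_factory=list)
    performance_requirements: dict = field(default_factory=dict)

req = MLTrainingRequestSketch(
    inference_type="CoverageProblemAnalysis",
    source="management-function-1",
    candidate_training_data_sources=["http://data.example/pm-measurements"],
    performance_requirements={"accuracy": 0.95},
)
```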
[0090] The MLT MnS producer 602 provides an ML training response 706 to the MLT MnS consumer 608 indicating whether the request was accepted. If not accepted, the ML training response 706 may include a reason for non-acceptance, such as insufficient training data, overcapacity, insufficient priority, and so forth.
[0091] If the request is accepted, the MLT MnS producer 602 decides when to start the ML training with consideration of the ML training request 704 from the MLT MnS consumer 608. Once the training is decided, the MLT MnS producer 602 selects the training data, with consideration of the consumer-provided candidate training data. Since the training data directly influences the algorithm and performance of the trained ML entity, the MLT MnS producer 602 may examine the consumer-provided training data and decide to select none, some or all of them. In addition, the MLT MnS producer 602 may select some other training data that are available. The MLT MnS producer 602 trains the ML entity using the selected training data. The MLT MnS producer 602 provides a training result 708 to the MLT MnS consumer 608. The training result 708 may include a location of the trained ML model or entity, among other types of information.
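The producer-side handling just described — examining consumer-provided candidates, selecting none/some/all of them, possibly adding producer-side data, then reporting a result that includes the trained entity's location — can be sketched as follows. This is a non-normative illustration; the function names, the validity predicate, and the location scheme are all assumptions:

```python
# Non-normative sketch of producer-side training-data selection and
# result reporting. All names and the location scheme are illustrative.
def select_training_data(candidates, producer_data, is_valid):
    """Keep only the valid consumer-provided candidates, then add
    producer-side data (the producer may select none, some, or all)."""
    selected = [c for c in candidates if is_valid(c)]
    return selected + producer_data

def train_and_report(request_id, selected):
    """Train on the selected data and return a training-result record."""
    if not selected:
        # Rejection with a reason, as in the ML training response above.
        return {"request": request_id, "accepted": False,
                "reason": "insufficient training data"}
    # ... actual model training would happen here ...
    return {"request": request_id, "accepted": True,
            "trained_entity_location": f"/ml-entities/{request_id}"}
```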
[0092] In some cases, MLT may be initiated by the MLT MnS producer 602. The MLT MnS producer 602 may initiate MLT, for instance, as a result of a performance evaluation of the ML entity, based on feedback or new training data received from the MLT MnS consumer 608, or when new training data, not from the MLT MnS consumer 608 and describing a new network status or events, become available.
[0093] When the MLT MnS producer 602 decides to start the ML training, the MLT MnS producer 602 selects training data, trains the ML entity using the selected training data, and provides the training results (e.g., including the location of the trained ML entity, etc.) to the MLT MnS consumers 608 who have subscribed to receive the ML training results.
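The producer-initiated triggers described above (degraded performance, consumer feedback, newly available network data) can be condensed into a single decision predicate. This is an illustrative sketch only; the threshold semantics and parameter names are assumptions, not part of the specification:

```python
# Illustrative decision predicate for producer-initiated ML training.
# A producer might retrain when measured performance drops below a
# threshold, when consumer feedback arrives, or when new training data
# describing new network status/events becomes available.
def should_initiate_training(performance: float, threshold: float,
                             has_consumer_feedback: bool,
                             has_new_network_data: bool) -> bool:
    return (performance < threshold
            or has_consumer_feedback
            or has_new_network_data)
```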
[0094] For a given machine learning-based use case, different entities that apply the respective ML entity/model or AI/ML inference function may have different inference requirements and capabilities. For example, one MLT MnS consumer 608 with a specific responsibility may wish to have an AI/ML inference function supported by an ML entity trained for a city central business district where mobile users move at speeds not exceeding 30 km/hr. Another MLT MnS consumer 608 for the same use case may support a rural environment and, as such, wishes to have an ML entity and AI/ML inference function fitting that type of environment. The different consumers need to know the available versions of ML entities, with the variants of trained ML models or entities, and to select the appropriate one for their respective conditions.
[0095] It is worth noting that there is no guarantee that the available ML models/entities have been trained according to the characteristics that the consumers expect. As such, the consumers need to know the conditions for which the ML models or ML entities have been trained, to enable them to select the models that best fit their conditions and needs.
[0096] The models that have been trained may differ in terms of complexity and performance. For example, a generic, comprehensive and complex model may have been trained in a cloud-like environment, but such a model may not be usable in the gNB; instead, a less complex model, trained as a derivative of this generic model, could be a better candidate. Moreover, multiple less complex models could be trained with different levels of complexity and performance, which would then allow different relevant models to be delivered to different network functions depending on operating conditions and performance requirements. The network functions need to know the alternative models available, and to interactively request and replace them when needed, depending on the observed inference-related constraints and performance requirements.
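Consumer-side selection among trained model variants, as motivated in the two paragraphs above (urban ≤30 km/h vs. rural), can be sketched as a simple matching over the conditions each variant was trained for. The condition keys and variant records are illustrative assumptions:

```python
# Hypothetical sketch: a consumer picks a trained model variant whose
# trained-for context matches its own operating conditions. Keys such
# as "area" and "max_speed_kmh" are illustrative assumptions.
def select_variant(variants, conditions):
    """Return the first variant whose trained-for context matches all
    of the requested conditions, or None if no variant fits."""
    for v in variants:
        ctx = v["trained_for"]
        if all(ctx.get(k) == val for k, val in conditions.items()):
            return v
    return None

variants = [
    {"id": "m-urban", "trained_for": {"area": "urban", "max_speed_kmh": 30}},
    {"id": "m-rural", "trained_for": {"area": "rural", "max_speed_kmh": 120}},
]
```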
[0097] This machine learning capability relates to means for managing and controlling ML model/entity training processes. To achieve the desired outcomes of any machine learning relevant use case, the ML model applied for such analytics and decision making needs to be trained with the appropriate data. The training may be undertaken in a managed function or in a management function. In either case, the network (or the OAM system thereof) not only needs to have the required training capabilities but also needs the means to manage the training of the ML models/entities. The consumers need to be able to interact with the training process, e.g., to suspend or restart the process, and also need to manage and control the requests related to any such training process.
[0098] A given MLT may have certain requirements. Some examples of MLT requirements are set forth in Table 1 as follows:
[0099] TABLE 1
Requirement label | Description | Related use case(s)
REQ-MDA_ML-FUN-1 | The MLT MnS producer shall have a capability allowing the consumer to request ML entity training. | ML training requested by consumer
REQ-MDA_ML-FUN-2 | The MLT MnS producer shall have a capability allowing the consumer to specify the data sources containing the candidate training data for ML entity training. | ML training requested by consumer
REQ-MDA_ML-FUN-3 | The MLT MnS producer shall have a capability to provide the training result (including the location of the trained model) to the consumer. | ML training requested by consumer, and ML training initiated by producer
[0100] A particular MLT task may have other MLT requirements as well, such as those set forth in 3GPP TS 28.105, among other 3GPP and non-3GPP standards.
[0101] Operations for the disclosed embodiments may be further described with reference to the following figures. Some of the figures may include a logic flow. Although such figures presented herein may include a particular logic flow, it can be appreciated that the logic flow merely provides an example of how the general functionality as described herein can be implemented. Further, a given logic flow does not necessarily have to be executed in the order presented unless otherwise indicated. Moreover, not all acts illustrated in a logic flow may be required in some embodiments. In addition, the given logic flow may be implemented by a hardware element, a software element executed by a processor, or any combination thereof. The embodiments are not limited in this context.
[0102] FIG. 8 illustrates an embodiment of a logic flow 800. The logic flow 800 may be representative of some or all of the operations executed by one or more embodiments described herein. For example, the logic flow 800 may include some or all of the operations performed by the AI/ML system 300, the MDA ML system 400, and/or the MLT system 600 of the wireless communications system 100. More particularly, the logic flow 800 illustrates the AI/ML system 300, the MDA ML system 400, and/or the MLT system 600 utilizing a message exchange and message format discussed with reference to the message flow 700. Embodiments are not limited in this context.
[0103] In block 802, logic flow 800 determines to initiate training of an ML entity using management information for a network resource model (NRM) of a fifth generation system (5GS), the training to be performed by a management service (MnS) producer of the 5GS. For example, an MLT MnS producer 602 determines to initiate training of an ML entity 612 using management information 614 for a NRM of the wireless communications system 100. In one embodiment, the MLT MnS producer 602 may receive an ML training request 704 for ML training from the MLT MnS consumer 608, and determine to initiate training of the ML entity 612 in response to a request for ML entity training from the MLT MnS consumer 608. In one embodiment, the MLT MnS producer 602 may itself determine to initiate training of the ML entity 612 as a result of evaluation of performance of the ML entity 612, based on feedback 418 received from the MLT MnS consumer 608, or when new training data describing new network status or events become available.
[0104] In block 804, logic flow 800 determines an inference type associated with the ML entity. For example, the MLT MnS producer 602 may determine an inference type associated with the ML entity. In one embodiment, the MLT MnS producer 602 may receive, from the MLT MnS consumer 608, an ML training request 704 specifying the inference type for the ML entity 612 to be trained.
[0105] In block 806, logic flow 800 selects training data to train the ML entity. For example, the MLT MnS producer 602 may select training data to train the ML entity. The training data may be stored in one or more data sources 606. In one embodiment, the MLT MnS producer 602 may receive, from the MLT MnS consumer 608, an ML training request 704 specifying one or more data sources containing candidate training data for training the ML entity 612. The MLT MnS producer 602 may optionally select at least a portion of the training data to train the ML entity 612 from the candidate training data received from the MLT MnS consumer 608.
[0106] In block 808, logic flow 800 trains the ML entity according to the inference type using the selected training data by the MnS producer, the trained ML entity to conduct inference operations for a management service (MnS) consumer of the 5GS. For example, the MLT MnS producer 602 may train the ML entity 612 according to the inference type using the selected training data. Once MLT operations are complete, the MLT MnS producer 602 may generate a training result 708 for the trained ML entity 612. The MLT MnS producer 602 may provide the training result 708 to the MLT MnS consumer 608. In one embodiment, the training result 708 may include a location of the trained ML entity 612, so that the MLT MnS consumer 608 can retrieve the trained ML entity 612 from the MLT MnS producer 602.
[0107] Once the MLT MnS consumer 608 retrieves or receives the trained ML entity 612, the MLT MnS consumer 608 may deploy the trained ML entity 612. The MLT MnS consumer 608 can then use the trained ML entity 612 to conduct inference operations for the MLT MnS consumer 608 in the wireless communications system 100.
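Blocks 802 through 808 of logic flow 800 can be sketched as a single pipeline. This is a non-normative illustration; the request dictionary shape and the placeholder behavior of each block are assumptions:

```python
# Non-normative sketch of logic flow 800 as one pipeline. The block
# comments mirror FIG. 8; the implementations are placeholders.
def logic_flow_800(request: dict) -> dict:
    # Block 802: determine to initiate training (consumer request or
    # producer-internal trigger such as a performance evaluation).
    if not request.get("initiate", True):
        return {"trained": False}
    # Block 804: determine the inference type associated with the ML entity.
    inference_type = request.get("inference_type", "unspecified")
    # Block 806: select training data (possibly from consumer candidates).
    training_data = request.get("candidate_data", [])
    # Block 808: train the ML entity according to the inference type
    # using the selected training data (placeholder).
    return {"trained": True, "inference_type": inference_type,
            "samples_used": len(training_data)}
```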
[0108] The logic flow 800 may further include various logic blocks, in various combinations, that are not necessarily shown in FIG. 8. Some examples are described below.
[0109] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management.
[0110] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML training.
[0111] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML training and a class hierarchy for ML training related to the NRM.
[0112] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an ML training request, the ML training request to represent an ML entity training request that is created by the MnS consumer, and where the ML training request managed object instance (MOI) is contained under one ML training function MOI.
[0113] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an ML training report, the ML training report to represent an ML training report that is provided by the MnS producer, and where the ML entity training report managed object instance (MOI) is contained under one ML training function MOI.
[0114] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the ML entity to the MnS producer.
[0115] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source.
[0116] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity supports.
[0117] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate a performance of the ML entity.
[0118] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether the MnS consumer provided training data has been used for the AI/ML training.
[0119] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses of where a list of MnS consumer provided training data is located, which have been used for the ML training.
[0120] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML training request that is created by the MnS consumer.
[0121] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an ML training report managed object instance (MOI) that represents a last training report for the ML entity.
[0122] The logic flow 800 may also include where the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
[0123] FIG. 9 illustrates an MLT software architecture 900 suitable for supporting MLT operations as performed by the MLT system 600 in a wireless communications system 100, such as a 5GS, for example.
[0124] The 3GPP standards define a new management architecture referred to as a service based management architecture (SBMA). SBMA is an architectural style that defines only Management Service (MnS) components in normative fashion, and that strictly follows a model-driven approach for consumer/producer interactions. MnS components are used to build 3GPP-defined and vendor-specific Management Services and Management Functions. This approach combines the power of standardized (interoperable) interfaces for multi-vendor integrations with support for diverse deployment scenarios.
[0125] In general, the SBMA provides a comprehensive toolset of RESTful management service components for building 5G management and orchestration solutions, enabling improved operability and automation of 5G radio and core networks and services. Since Release-16, 3GPP follows a strictly model-driven approach relying on generic yet powerful Create, Read, Update and Delete (CRUD) operations and rich Network Resource Models (NRMs). No task-specific operations are defined. This approach is also referred to as Representational State Transfer (REST). REST is a software architectural style that describes a uniform interface between physically separate components, often across a network in a client-server architecture. Release-16 contains Network Resource Models for the NR, 5GC and Network Slicing. Models for interactions with verticals and external management systems are available as well. Starting from Release-15, control NRM fragments have been introduced for different management tasks, such as subscribing to receive notifications or managing performance metric production jobs, often replacing and extending legacy approaches based on dedicated operations. The main benefit of a fully model-driven approach is that the same set of basic CRUD operations can be used to generate sophisticated requests for manipulating and retrieving Network Resource Models. No task-specific operations are required. An additional benefit of the strict separation of model and access is that the 3GPP-defined Network Resource Models can be reused easily by other management frameworks following the same separation of concerns.
[0126] The SBMA uses Network Resource Models, such as those defined in 3GPP TS 28.622, titled "Generic Network Resource Model (NRM) Integration Reference Point (IRP)," Release 18 (2022-09). 3GPP TS 28.622 specifies generic network resource information, referred to herein as the management information 614, that can be communicated between an MnS producer and an MnS consumer in deployment scenarios using the SBMA as defined in 3GPP TS 28.533 for telecommunication network management purposes, including management of converged networks and networks that include virtualized network functions. It specifies semantics of information object class (IOC) attributes and relations visible across the reference point in a protocol and technology neutral way. It supports the Federated Network Information Model (FNIM) concept described in 3GPP TS 32.107 in that the relevant Information Object Classes (IOCs) defined are directly or indirectly inherited from those specified in the Umbrella Information Model (UIM) of 3GPP TS 28.620.

[0127] As defined by 3GPP, an NRM is a collection of IOCs, inclusive of their associations, attributes and operations, representing a set of network resources under management. A network resource is a discrete entity represented by an Information Object Class (IOC) for the purpose of network and service management. For instance, a network resource may represent intelligence, information, hardware and software of a telecommunication network. An IOC represents the management aspect of a network resource. It describes the information that can be passed/used in management interfaces. Its representations are technology agnostic software objects. An IOC has attributes that represent the various properties of the class of objects. Furthermore, an IOC can support operations providing network management services invocable on demand for that class of objects. An IOC may support notifications that report event occurrences relevant for that class of objects.
It is modeled using the stereotype "Class" in the UML meta-model. A Managed Object (MO) is an instance of a Managed Object Class (MOC) representing the management aspects of a network resource. Its representation is a technology specific software object. It is sometimes called MO instance (MOI). The MOC is a class of such technology specific software objects. An MOC is the same as an IOC except that the former is defined in technology specific terms and the latter is defined in technology agnostic terms. MOCs are used/defined in solution set (SS) level specifications. IOCs are used/defined in information service (IS) level specifications.
[0128] Embodiments define NRM-based solutions for ML training by defining standardized objects, both data and code, specifically designed to support MLT operations in an SBMA of a 5GS, such as the wireless communications system 100. As depicted in FIG. 9, MLT operations for the MLT system 600 may be managed through management information 614. The MLT system 600 may use the management information 614 to support MLT operations for ML entities deployed throughout the SBMA of the 5GS. The MLT system 600 may use the management information 614 to manage MLT operations, such as CRUD operations for one or more software Managed Object Instances (MOIs), such as an MOI 902, to support MLT operations. A given MOI, such as the MOI 902, may be instantiated using one or more Information Object Classes (IOCs), such as an IOC 904, in accordance with the management information 614.
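The generic CRUD operations over MOIs described above can be sketched with a minimal in-memory store keyed by distinguished name (DN). This is an illustrative, non-normative sketch; the store, its method names, and the example DN are assumptions:

```python
# Minimal sketch of generic CRUD over managed object instances (MOIs),
# reflecting the model-driven SBMA approach. The in-memory store and
# the example DN scheme are illustrative assumptions.
class MoiStore:
    def __init__(self):
        self._store = {}  # DN -> attribute dict

    def create(self, dn: str, attributes: dict) -> None:
        self._store[dn] = dict(attributes)

    def read(self, dn: str):
        return self._store.get(dn)

    def update(self, dn: str, changes: dict) -> None:
        self._store[dn].update(changes)

    def delete(self, dn: str) -> None:
        self._store.pop(dn, None)

store = MoiStore()
store.create("SubNetwork=1,MLTrainingRequest=1",
             {"mLEntityId": "e-1", "inferenceType": "CoverageProblemAnalysis"})
store.update("SubNetwork=1,MLTrainingRequest=1", {"requestStatus": "FINISHED"})
```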
[0129] In various embodiments, the management information 614, the MOI 902 and/or the IOC 904 may be implemented in accordance with at least 3GPP TS 28.105, titled "Artificial Intelligence / Machine Learning (AI/ML) management," Release 17, versions 0.1.0 (2022-02) to 17.1.1 (2022-09), including any progeny, revisions and variants. It may be appreciated that certain embodiments may relate to other standards as well. Embodiments are not limited in this context.
[0130] With reference to 3GPP TS 28.105, a set of examples of information model definitions for AI/ML management that may be implemented by the MLT system 600 are presented as follows.
[0131] 7 Information model definitions for AI/ML management
[0132] 7.2 Class diagram
[0133] 7.2.1 Relationships
[0134] This clause depicts a set of classes (e.g., IOCs) that encapsulates the information relevant to ML training. A class diagram 1100a for the set of classes is depicted in FIG. 11A.
[0135] 7.2.2 Inheritance
[0136] This clause depicts a class hierarchy for ML training related NRMs. A class hierarchy 1200a for the class diagram 1100a is depicted in FIG. 12A.
[0137] 7.3 Class definitions
[0138] 7.3.2 MLTrainingRequest
[0139] 7.3.2.1 Definition
[0140] The IOC MLTrainingRequest represents the ML entity training request that is created by the ML training MnS consumer.
[0141] The MLTrainingRequest MOI is contained under one MLTrainingFunction MOI. Each MLTrainingRequest is associated with at least one MLEntity.
[0142] The MLTrainingRequest may have a source to identify where it is coming from, and which may be used to prioritize the training resources for different sources. The sources may be for example the network functions, operator roles, or other functional differentiations.
[0143] Each MLTrainingRequest may indicate the expectedRunTimeContext that describes the specific conditions for which the MLEntity should be trained.
[0144] In case the request is accepted, the ML training MnS producer decides when to start the ML training. Once the MnS producer decides to start the training based on the request, the ML training MnS producer instantiates one or more MLTrainingProcess MOI(s) that are responsible to perform the following:

[0145] - collect (more) data for training, if the training data are not available, or the data are available but not sufficient for the training;
[0146] - prepare and select the required training data, with consideration of the candidate training data provided in the consumer's request, if any. The ML training MnS producer may examine the consumer-provided candidate training data and select none, some or all of them for training. In addition, the ML training MnS producer may select some other training data that are available in order to meet the consumer's requirements for the ML entity training;
[0147] - train the MLEntity using the selected and prepared training data.
[0148] The MLTrainingRequest may have a requestStatus field to represent the status of the specific MLTrainingRequest:
[0149] - The attribute values are "NOT STARTED", "TRAINING IN PROGRESS", "SUSPENDED", "FINISHED", and "CANCELLED".
[0150] - When the value turns to "TrainingInProcess", the ML training MnS producer instantiates one or more MLTrainingProcess MOI(s) representing the training process(es) being performed per the request and notifies the MLT MnS consumer(s) who subscribed to the notification.
[0151] When all of the training processes associated with this request are completed, the value turns to "FINISHED".
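The requestStatus lifecycle described above can be sketched as a small state machine. The value names with underscores and the allowed-transition table below are assumptions for illustration; the specification lists the values but this sketch invents a plausible set of legal transitions:

```python
# Illustrative state machine for the requestStatus values described
# above. The transition table is an assumption, not normative text.
TRANSITIONS = {
    "NOT_STARTED": {"TRAINING_IN_PROGRESS", "CANCELLED"},
    "TRAINING_IN_PROGRESS": {"SUSPENDED", "FINISHED", "CANCELLED"},
    "SUSPENDED": {"TRAINING_IN_PROGRESS", "CANCELLED"},
    "FINISHED": set(),     # terminal
    "CANCELLED": set(),    # terminal
}

def advance(status: str, new_status: str) -> str:
    """Move to new_status if the transition is legal, else raise."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status
```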
[0152] 7.3.2.2 Attributes
[0153] Table 7.3.2.2-1
Attribute name | Support Qualifier | isReadable | isWritable | isInvariant | isNotifyable
mLEntityId | M | T | T | F | T
candidateTraingDataSource | M | T | T | F | T
Attribute related to role
[0154] 7.3.2.3 Attribute constraints
[0155] None.
[0156] 7.3.2.4 Notifications
[0157] The common notifications defined in clause 7.6 are valid for this IOC, without exceptions or additions.
[0158] 7.3.3 MLTrainingReport
[0159] 7.3.3.1 Definition
[0160] The IOC MLTrainingReport represents the ML entity training report that is provided by the training MnS producer.

[0161] The MLTrainingReport MOI is contained under one MLTrainingFunction MOI.
[0162] 7.3.3.2 Attributes
[0163] Table 7.3.3.2-1
Attribute name | Support Qualifier | isReadable | isWritable | isInvariant | isNotifyable
mLEntityId | M | T | F | F | T
areConsumerTrainingDataUsed | M | T | F | F | T
usedConsumerTrainingData | CM | T | F | F | T
confidenceIndication | O | T | F | F | T
modelPerformanceTraining | CM | T | F | F | T
Attribute related to role
trainingRequestRef | CM | T | F | F | T
lastTrainingRef | CM | T | F | F | T
[0164] 7.3.3.3 Attribute constraints
[0165] Table 7.3.3.3-1
Name | Definition
usedConsumerTrainingData | Condition: The value of areConsumerTrainingDataUsed attribute is ALL or PARTIALLY.
trainingRequestRef | Condition: The MLTrainingReport MOI represents the report for the ML entity training that was requested by the MnS consumer.
lastTrainingRef | Condition: The MLTrainingReport MOI represents the report for the ML entity training that was not initial training.
[0166] 7.3.3.4 Notifications
[0167] The common notifications defined in clause 7.6 are valid for this IOC, without exceptions or additions.
[0168] 7.5 Attribute definitions
[0169] 7.5.1 Attribute properties
[0170] Table 7.5.1-1
Attribute Name / Documentation and Allowed Values / Properties
mLEntityId: It identifies the ML entity. It is unique in each MnS producer. allowedValues: N/A. Properties: type: String; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
candidateTraingDataSource: It provides the address(es) of the candidate training data source provided by the MnS consumer. The detailed training data format is vendor specific. allowedValues: N/A. Properties: type: String; multiplicity: *; isOrdered: False; isUnique: True; defaultValue: None; isNullable: True.
inferenceType: It indicates the type of inference that the ML entity supports. allowedValues: the values of the MDA type (see 3GPP TS 28.104 [2]), Analytics ID(s) of NWDAF (see 3GPP TS 23.288 [3]), types of inference for RAN-intelligence, and vendor's specific extensions. Properties: type: String; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
areConsumerTrainingDataUsed: It indicates whether the consumer provided training data have been used for the ML entity training. allowedValues: ALL, PARTIALLY, NONE. Properties: type: Enum; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
usedConsumerTrainingData: It provides the address(es) where lists of the consumer-provided training data, which have been used for the ML entity training, are located. allowedValues: N/A. Properties: type: String; multiplicity: *; isOrdered: False; isUnique: True; defaultValue: None; isNullable: True.
trainingRequestRef: It is the DN(s) of the related MLTrainingRequest MOI(s). allowedValues: DN. Properties: type: DN (see TS 32.156 [13]); multiplicity: *; isOrdered: False; isUnique: True; defaultValue: None; isNullable: True.
trainingReportRef: It is the DN of the MLTrainingReport MOI that represents the reports of the ML training. allowedValues: DN. Properties: type: DN (see 3GPP TS 32.156 [12]); multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
lastTrainingRef: It is the DN of the MLTrainingReport MOI that represents the reports for the last training of the ML entity. allowedValues: DN. Properties: type: DN (see 3GPP TS 32.156 [13]); multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
confidenceIndication: It indicates the confidence (in unit of percentage) that the ML entity would perform for inference on the data with the same distribution as training data. allowedValues: { 0..100 }. Properties: type: Integer; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: False.
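The attribute set above can be pictured as a plain data structure. The following sketch is illustrative only: field names and the range check on confidenceIndication come from the table, while the class itself and its defaults are assumptions, not part of the specification.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class ConsumerTrainingDataUsed(Enum):
    """allowedValues of areConsumerTrainingDataUsed."""
    ALL = "ALL"
    PARTIALLY = "PARTIALLY"
    NONE = "NONE"


@dataclass
class MLTrainingReportAttributes:
    """Illustrative container mirroring the attribute properties table."""
    mLEntityId: str                      # unique within each MnS producer
    inferenceType: str                   # e.g. an MDA type or an NWDAF Analytics ID
    areConsumerTrainingDataUsed: ConsumerTrainingDataUsed
    confidenceIndication: int            # percentage; isNullable: False
    usedConsumerTrainingData: List[str] = field(default_factory=list)
    trainingRequestRef: List[str] = field(default_factory=list)  # DN(s)
    lastTrainingRef: Optional[str] = None                        # DN

    def __post_init__(self) -> None:
        # allowedValues for confidenceIndication: { 0..100 }
        if not 0 <= self.confidenceIndication <= 100:
            raise ValueError("confidenceIndication must be in 0..100")
```

A consumer-side sketch would populate this object from a received MLTrainingReport MOI; the range check rejects out-of-bounds confidence values at construction time.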
[0171] 7.6 Common notifications
[0172] 7.6.1 Configuration notifications
[0173] This clause presents a list of notifications, defined in 3GPP TS 28.532, that an MnS consumer may receive. The notification header attribute objectClass/objectInstance shall capture the DN of an instance of a class defined in the present document.
[0174] Table 7.6.1-1
Name                              Qualifier    Notes
notifyMOICreation                 O
notifyMOIDeletion                 O
notifyMOIAttributeValueChanges    O
notifyEvent                       O
[0175] With reference to 3GPP TS 28.105, another set of examples of information model definitions for AI/ML management that may be implemented by the MLT system 600 are presented as follows.
[0176] X.2 Class diagram
[0177] This clause depicts a set of classes (e.g., IOCs) that encapsulates the information relevant to ML training. A class diagram 1100b for the set of classes is depicted in FIG. 11B.
[0178] X.2.1 Relationships
[0179] This clause depicts the set of classes (e.g., IOCs) that encapsulates the information relevant to ML training. For the UML semantics, see 3GPP TS 32.156. A class hierarchy 1200b for the class diagram 1100b is depicted in FIG. 12B.
[0180] X.2.2 Inheritance
[0181] X.3 Class definitions
[0182] X.3.1 MLTrainingRequests
[0183] X.3.1.1 Definition
[0184] The IOC MLTrainingRequests represents the container of the MLTrainingRequest
IOC(s).
[0185] X.3.1.2 Attributes
[0186] None.
[0187] X.3.1.3 Attribute constraints
[0188] None.
[0189] X.3.1.4 Notifications
[0190] The common notifications defined in clause X.5 are valid for this IOC, without exceptions or additions.
[0191] X.3.2 MLTrainingRequest
[0192] X.3.2.1 Definition
[0193] The IOC MLTrainingRequest represents the ML entity training request that is created by the MnS consumer.
[0194] The MLTrainingRequest MOI is contained under one MLTrainingRequests MOI.
[0195] X.3.2.2 Attributes
Attribute name               Support Qualifier    isReadable    isWritable    isInvariant    isNotifyable
aIMLModelId                  M                    T             T             F              T
candidateTraingDataSource    M                    T             T             F              T
Attribute related to role: None.
[0196] X.3.2.3 Attribute constraints
[0197] None.
[0198] X.3.1.4 Notifications
[0199] The common notifications defined in clause X.5 are valid for this IOC, without exceptions or additions.
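The containment relationship described above (each MLTrainingRequest MOI contained under one MLTrainingRequests MOI) can be sketched as follows. Attribute names follow the IOC definitions; the container class' methods are illustrative assumptions, not defined by the model.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class MLTrainingRequest:
    """An ML entity training request created by the MnS consumer."""
    aIMLModelId: str                          # identifies the model to be trained
    candidateTraingDataSource: List[str] = field(default_factory=list)
    # ^ address(es) of consumer-provided candidate training data;
    #   the detailed data format is vendor specific.


@dataclass
class MLTrainingRequests:
    """Container IOC holding the MLTrainingRequest instance(s)."""
    requests: List[MLTrainingRequest] = field(default_factory=list)

    def add(self, request: MLTrainingRequest) -> None:
        # A consumer creates a request; the producer holds it under
        # the single MLTrainingRequests container MOI.
        self.requests.append(request)
```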
[0200] X.3.3 MLTrainingReports
[0201] X.3.3.1 Definition
[0202] The IOC MLTrainingReports represents the container of the MLTrainingReport IOC(s).
[0203] X.3.3.2 Attributes
[0204] None.
[0205] X.3.3.3 Attribute constraints
[0206] None.
[0207] X.3.3.4 Notifications
[0208] The common notifications defined in clause X.5 are valid for this IOC, without exceptions or additions.
[0209] X.3.4 MLTrainingReport
[0210] X.3.4.1 Definition
[0211] The IOC MLTrainingReport represents the AI/ML model training report that is provided by the MnS producer.
[0212] The MLTrainingReport MOI is contained under one MLTrainingReports MOI.
[0213] X.3.4.2 Attributes
Attribute name                 Support Qualifier    isReadable    isWritable    isInvariant    isNotifyable
MLEntityId                     M                    T             F             F              T
aIMLModelPackageAddress        M                    T             F             F              T
inferenceType                  M                    T             F             F              T
trainingAccuracyScore          M                    T             F             F              T
areConsumerTrainingDataUsed    M                    T             F             F              T
usedConsumerTrainingData       CM                   T             F             F              T
Attribute related to role:
trainingRequestRef             CM                   T             F             F              T
lastTrainingRef                CM                   T             F             F              T
[0214] X.3.4.3 Attribute constraints
usedConsumerTrainingData (Support Qualifier): Condition: The value of areConsumerTrainingDataUsed attribute is PARTIALLY.
trainingRequestRef (Support Qualifier): Condition: The MLTrainingReport MOI represents the report for the AI/ML model training that was requested by the MnS consumer (via AIMLTrainingRequest MOI).
lastTrainingRef (Support Qualifier): Condition: The MLTrainingReport MOI represents the report for the ML training that was not initial training (i.e., the model has been trained before).
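The conditional (CM) support qualifiers above can be evaluated mechanically. A minimal sketch, under the assumption that the report's state is available as simple flags:

```python
def required_conditional_attributes(area_used: str,
                                    consumer_requested: bool,
                                    previously_trained: bool) -> set:
    """Return which CM attributes of MLTrainingReport must be supported,
    following the conditions of the attribute constraints clause."""
    required = set()
    if area_used == "PARTIALLY":
        # Only then must the report list which consumer data was used.
        required.add("usedConsumerTrainingData")
    if consumer_requested:
        # Training was requested via a training request MOI.
        required.add("trainingRequestRef")
    if previously_trained:
        # Not the initial training of the model.
        required.add("lastTrainingRef")
    return required
```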
[0215] X.3.4.4 Notifications
[0216] The common notifications defined in clause X.5 are valid for this IOC, without exceptions or additions.
[0217] X.4 Attribute definitions
[0218] X.4.1 Attribute properties
Attribute Name / Documentation and Allowed Values / Properties
MLEntityId: It identifies the ML entity. It is unique in each MnS producer. Properties: type: String; multiplicity: 1; isOrdered: N/A; isUnique: True; defaultValue: None; isNullable: True.
MLEntityPackageAddress: It provides the address where the ML entity package is located. The ML entity package may contain the ML entity (e.g., software image or file) and the model descriptor. The model descriptor may contain more detailed information about the model, such as version, resource requirements, etc. The model descriptor is not specified in the present document. Properties: type: String; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
candidateTraingDataSource: It provides the address(es) of the candidate training data source provided by the MnS consumer. The detailed training data format is vendor specific. Properties: type: String; multiplicity: *; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
inferenceType: It indicates the type of inference that the ML entity supports. allowedValues: the values of the MDA type (see TS 28.104 [4]), Analytics ID(s) of NWDAF (see TS 23.288 [5]), RAN-intelligence inference function name(s), and vendor's specific extensions. Properties: type: String; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
trainingAccuracyScore: It indicates the training accuracy score of the ML model. It is on a scale from 0 to 100, where 100 indicates the best. The algorithm for calculating the accuracy score is vendor specific. allowedValues: { 0 .. 100 }. Properties: type: Real; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
areConsumerTrainingDataUsed: It indicates whether the consumer provided training data have been used for the ML entity training. allowedValues: ALL, PARTIALLY, NONE. Properties: type: Enum; multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
usedConsumerTrainingData: It provides the address(es) where lists of the consumer provided training data, which have been used for the ML entity training, are located. Properties: type: String; multiplicity: *; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
trainingRequestRef: It is the DN(s) of the related MLTrainingRequest MOI(s). Properties: type: DN (see TS 32.156 [3]); multiplicity: *; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
lastTrainingRef: It is the DN of the MLTrainingReport MOI that represents the reports for the last training of the ML entity. Properties: type: DN (see TS 32.156 [3]); multiplicity: 1; isOrdered: N/A; isUnique: N/A; defaultValue: None; isNullable: True.
[0219] X.5 Common notifications
[0220] X.5.1 Configuration notifications
[0221] This clause presents a list of notifications, defined in TS 28.532 [6], that an MnS consumer may receive. The notification header attribute objectClass/objectInstance shall capture the DN of an instance of a class defined in the present document.
Name                              Qualifier    Notes
notifyMOICreation                 O
notifyMOIDeletion                 O
notifyMOIAttributeValueChanges    O
notifyEvent                       O
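The notification header behavior described in this clause (objectClass/objectInstance carrying the DN of an MOI of a class defined here) can be sketched as follows. The field names are assumptions loosely modeled on the TS 28.532 header, not an exact rendering of that interface.

```python
from dataclasses import dataclass

# The optional (qualifier O) notification types listed in the table above.
NOTIFICATION_TYPES = {
    "notifyMOICreation",
    "notifyMOIDeletion",
    "notifyMOIAttributeValueChanges",
    "notifyEvent",
}


@dataclass
class NotificationHeader:
    """Illustrative header; object_instance captures the DN of the MOI."""
    notification_type: str
    object_class: str     # e.g. "MLTrainingReport"
    object_instance: str  # DN of the MOI the notification concerns

    def __post_init__(self) -> None:
        if self.notification_type not in NOTIFICATION_TYPES:
            raise ValueError(f"unknown notification type: {self.notification_type}")
```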
[0222] FIG. 10 illustrates an apparatus 1000 suitable for an MLT system 600 of a 5G NR wireless system to implement MLT operations, procedures or methods such as defined in 3GPP TS 28.105 using one or more of the MOI 902, IOC 904, and/or the management information 614.
[0223] As depicted in FIG. 10, the apparatus 1000 to train an ML entity for a network node may include a processor circuitry 1002, a memory interface 1004, a data storage device 1006, and a transmitter/receiver ("transceiver") 1008. The processor circuitry 1002 may implement the logic flow 800 and/or some or all of the message flow 700. The memory interface 1004 may send or receive, to or from a data storage device 1006 (e.g., volatile or non-volatile memory), management information 614 for a network resource model (NRM) of a fifth generation system (5GS), such as the wireless communications system 100. The apparatus 1000 also includes processor circuitry 1002 communicatively coupled to the memory interface 1004, the processor circuitry 1002 to determine to initiate training of an ML entity 612 using the management information 614, the training to be performed by an MLT MnS producer 602 of the 5GS in accordance with ML training logic 604, determine an inference type associated with the ML entity 612, select training data to train the ML entity 612, and train the ML entity 612 according to the inference type using the selected training data by the MLT MnS producer 602, the trained ML entity 612 to conduct inference operations for an MLT MnS consumer 608 of the 5GS.
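The processing steps attributed to the processor circuitry above (initiate training, determine the inference type, select training data, train, report) can be pictured as a simple sequence. Everything below is a hypothetical stand-in with stubbed-out training; none of the function or field names are defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class TrainingRequest:
    inference_type: str
    candidate_training_data: List[str]   # addresses supplied by the MnS consumer


def train_ml_entity(request: TrainingRequest, producer_sources: List[str]) -> Dict:
    """Illustrative MLT MnS producer flow for one training request."""
    # 1. Determine the inference type the trained entity must support.
    inference_type = request.inference_type

    # 2. Select training data: this sketch simply takes every consumer
    #    candidate plus the producer's own sources; a real producer may
    #    use all, part, or none of the candidates.
    selected = list(request.candidate_training_data) + list(producer_sources)

    # 3. "Training" is stubbed out; the result mirrors the report
    #    attributes, including the location of the trained entity.
    return {
        "inferenceType": inference_type,
        "areConsumerTrainingDataUsed": "ALL" if request.candidate_training_data else "NONE",
        "usedConsumerTrainingData": list(request.candidate_training_data),
        "trainingDataCount": len(selected),
        "location": f"producer://trained/{inference_type}",  # hypothetical address
    }
```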
[0224] The apparatus 1000 may also include the processor circuitry 1002 to receive an ML training request 704 for ML entity training from the MLT MnS consumer 608, and determine to initiate training of the ML entity 612 in response to a request for AI/ML model training from the MLT MnS consumer 608.
[0225] The apparatus 1000 may also include the processor circuitry 1002 to receive an ML training request 704 specifying one or more data sources 606 containing candidate training data for training the ML entity 612 from the MLT MnS consumer 608, and select at least a portion of the training data to train the ML entity 612 from the candidate training data received from the MLT MnS consumer 608.
[0226] The apparatus 1000 may also include the processor circuitry 1002 to receive an ML training request 704 specifying the inference type for the ML entity 612 to be trained from the MLT MnS consumer 608.
[0227] The apparatus 1000 may also include the processor circuitry 1002 to determine to initiate training of the ML entity 612 by the MLT MnS producer 602 as a result of evaluation of performance of the ML entity 612, based on feedback 418 received from the MLT MnS consumer 608, or when new training data describing new network status or events are available.
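The producer-initiated triggers listed above (performance evaluation, consumer feedback, newly available training data) amount to a small decision rule. The threshold and field names here are illustrative assumptions only:

```python
def should_retrain(accuracy_score: float,
                   negative_feedback: bool,
                   new_training_data_available: bool,
                   accuracy_threshold: float = 80.0) -> bool:
    """Decide whether the MLT MnS producer should initiate (re)training."""
    if accuracy_score < accuracy_threshold:
        # Trigger 1: the evaluation of the ML entity's performance is poor.
        return True
    if negative_feedback:
        # Trigger 2: feedback received from the MLT MnS consumer.
        return True
    # Trigger 3: new training data describing new network status/events.
    return new_training_data_available
```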
[0228] The apparatus 1000 may also include the processor circuitry 1002 to generate a training result 708 for the trained ML entity 612 by the MLT MnS producer 602.
[0229] The apparatus 1000 may also include the processor circuitry 1002 to provide a training result 708 that includes a location of the trained ML entity 612 to the MLT MnS consumer 608 from the MLT MnS producer 602.
[0230] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management.
[0231] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram 1100a, 1100b, the class diagram 1100a, 1100b to include a set of classes that encapsulates information relevant to ML entity training.
[0232] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram 1100a, 1100b, the class diagram 1100a, 1100b to include a set of classes that encapsulates information relevant to ML entity training and a class hierarchy 1200a, 1200b for ML entity training related to the NRM.
[0233] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training request, the AI/ML training request to represent an ML entity training request that is created by the MLT MnS consumer 608, and where the ML entity training request managed object instance (MOI) such as MOI 902 is contained under one AI/ML training function MOI.
[0234] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training report, the AI/ML training report to represent an ML entity training report that is provided by the MnS producer, and where the ML entity training report managed object instance (MOI) such as MOI 902 is contained under one AI/ML training function MOI.
[0235] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the ML entity 612 to the MLT MnS producer 602.
[0236] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source.
[0237] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity 612 supports.
[0238] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate a performance of the ML entity 612.
[0239] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether the MLT MnS consumer 608 provided training data has been used for the AI/ML training.
[0240] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses of where a list of MLT MnS consumer 608 provided training data is located, which have been used for the ML entity training.
[0241] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML training request that is created by the MLT MnS consumer 608.
[0242] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an ML training report managed object instance (MOI) such as MOI 902 that represents a last training report for the ML entity 612.
[0243] The apparatus 1000 may also include where the management information 614 includes one or more information model definitions for AI/ML management, the information model definitions to include one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
[0244] FIG. 11 A illustrates a class diagram 1100a. The class diagram 1100a may comprise an example of a first set of classes suitable for a MOI 902, an IOC 904 and/or management information 614 of the MLT system 600. The first set of classes are by way of example and not limitation. Other classes may be used as well. Embodiments are not limited in this context.
[0245] FIG. 11B illustrates a class diagram 1100b. The class diagram 1100b may comprise an example of a second set of classes suitable for a MOI 902, an IOC 904 and/or management information 614 of the MLT system 600. The second set of classes are by way of example and not limitation. Other classes may be used as well. Embodiments are not limited in this context.
[0246] FIG. 12A illustrates an embodiment of a class hierarchy 1200a. The class hierarchy 1200a may comprise an example of a first class hierarchy for the first set of classes set forth in the class diagram 1100a suitable for a MOI 902, an IOC 904 and/or management information 614 of the MLT system 600. The first class hierarchy is by way of example and not limitation. Other class hierarchies may be used as well. Embodiments are not limited in this context.
[0247] FIG. 12B illustrates an embodiment of a class hierarchy 1200b. The class hierarchy 1200b may comprise an example of a second class hierarchy for the second set of classes set forth in the class diagram 1100b suitable for a MOI 902, an IOC 904 and/or management information 614 of the MLT system 600. The second class hierarchy is by way of example and not limitation. Other class hierarchies may be used as well. Embodiments are not limited in this context.
[0248] FIGS. 13-16 illustrate various systems, devices and components that may implement aspects of disclosed embodiments. The systems, devices, and components may be the same, or similar to, the systems, devices and components described with reference to FIG. 1.
[0249] FIG. 13 illustrates a network 1300 in accordance with various embodiments. The network 1300 may operate in a manner consistent with 3GPP technical specifications for LTE or 5G/NR systems. However, the example embodiments are not limited in this regard and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like.
[0250] The network 1300 may include a UE 1302, which may include any mobile or non-mobile computing device designed to communicate with a RAN 1330 via an over-the-air connection. The UE 1302 may be communicatively coupled with the RAN 1330 by a Uu interface. The UE 1302 may be, but is not limited to, a smartphone, tablet computer, wearable computer device, desktop computer, laptop computer, in-vehicle infotainment, in-car entertainment device, instrument cluster, head-up display device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, M2M or D2D device, IoT device, etc.
[0251] In some embodiments, the network 1300 may include a plurality of UEs coupled directly with one another via a sidelink interface. The UEs may be M2M/D2D devices that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc.
[0252] In some embodiments, the UE 1302 may additionally communicate with an AP 1304 via an over-the-air connection. The AP 1304 may manage a WLAN connection, which may serve to offload some/all network traffic from the RAN 1330. The connection between the UE 1302 and the AP 1304 may be consistent with any IEEE 802.11 protocol, wherein the AP 1304 could be a wireless fidelity (Wi-Fi®) router. In some embodiments, the UE 1302, RAN 1330, and AP 1304 may utilize cellular-WLAN aggregation (for example, LWA/LWIP). Cellular-WLAN aggregation may involve the UE 1302 being configured by the RAN 1330 to utilize both cellular radio resources and WLAN resources.
[0253] The RAN 1330 may include one or more access nodes, for example, AN 1360. AN 1360 may terminate air-interface protocols for the UE 1302 by providing access stratum protocols including RRC, PDCP, RLC, MAC, and L1 protocols. In this manner, the AN 1360 may enable data/voice connectivity between CN 1318 and the UE 1302. In some embodiments, the AN 1360 may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network, which may be referred to as a CRAN or virtual baseband unit pool. The AN 1360 may be referred to as a BS, gNB, RAN node, eNB, ng-eNB, NodeB, RSU, TRxP, TRP, etc. The AN 1360 may be a macrocell base station or a low power base station for providing femtocells, picocells or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells.
[0254] In embodiments in which the RAN 1330 includes a plurality of ANs, they may be coupled with one another via an X2 interface (if the RAN 1330 is an LTE RAN) or an Xn interface (if the RAN 1330 is a 5G RAN). The X2/Xn interfaces, which may be separated into control/user plane interfaces in some embodiments, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, etc.
[0255] The ANs of the RAN 1330 may each manage one or more cells, cell groups, component carriers, etc. to provide the UE 1302 with an air interface for network access. The UE 1302 may be simultaneously connected with a plurality of cells provided by the same or different ANs of the RAN 1330. For example, the UE 1302 and RAN 1330 may use carrier aggregation to allow the UE 1302 to connect with a plurality of component carriers, each corresponding to a Pcell or Scell. In dual connectivity scenarios, a first AN may be a master node that provides an MCG and a second AN may be a secondary node that provides an SCG. The first/second ANs may be any combination of eNB, gNB, ng-eNB, etc.
[0256] The RAN 1330 may provide the air interface over a licensed spectrum or an unlicensed spectrum. To operate in the unlicensed spectrum, the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/Scells. Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.
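The listen-before-talk behavior mentioned above can be caricatured as a sense-then-transmit loop. The energy-detection threshold, slot model, and backoff below are illustrative assumptions, not the standardized ETSI/3GPP procedure:

```python
import random


def lbt_transmit(channel_energy_dbm, threshold_dbm=-72.0, max_attempts=4):
    """Toy listen-before-talk: sense the medium, transmit only when idle.

    `channel_energy_dbm` yields one sensed energy value per sensing slot;
    values below the threshold mean the channel is sensed idle.
    """
    for attempt, energy in enumerate(channel_energy_dbm):
        if energy < threshold_dbm:          # channel sensed idle
            return f"transmit after {attempt} busy slot(s)"
        if attempt + 1 >= max_attempts:
            break
        # Random backoff before sensing again (illustrative only).
        _ = random.randint(0, 2 ** attempt)
    return "channel busy, transmission deferred"
```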
[0257] In V2X scenarios the UE 1302 or AN 1360 may be or act as a RSU, which may refer to any transportation infrastructure entity used for V2X communications. An RSU may be implemented in or by a suitable AN or a stationary (or relatively stationary) UE. An RSU implemented in or by: a UE may be referred to as a “UE-type RSU”; an eNB may be referred to as an “eNB-type RSU”; a gNB may be referred to as a “gNB-type RSU”; and the like. In one example, an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs. The RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic. The RSU may provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may provide other cellular/WLAN communications services. The components of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network.
[0258] In some embodiments, the RAN 1330 may be an LTE RAN 1326 with eNBs, for example, eNB 1354. The LTE RAN 1326 may provide an LTE air interface with the following characteristics: SCS of 15 kHz; CP-OFDM waveform for DL and SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc. The LTE air interface may rely on CSI-RS for CSI acquisition and beam management; PDSCH/PDCCH DMRS for PDSCH/PDCCH demodulation; and CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE. The LTE air interface may operate on sub-6 GHz bands.
[0259] In some embodiments, the RAN 1330 may be an NG-RAN 1328 with gNBs, for example, gNB 1356, or ng-eNBs, for example, ng-eNB 1358. The gNB 1356 may connect with 5G-enabled UEs using a 5G NR interface. The gNB 1356 may connect with a 5G core through an NG interface, which may include an N2 interface or an N3 interface. The ng-eNB 1358 may also connect with the 5G core through an NG interface, but may connect with a UE via an LTE air interface. The gNB 1356 and the ng-eNB 1358 may connect with each other over an Xn interface.
[0260] In some embodiments, the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 1328 and a UPF 1338 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 1328 and an AMF 1334 (e.g., N2 interface).
[0261] The NG-RAN 1328 may provide a 5G-NR air interface with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data. The 5G-NR air interface may rely on CSI-RS, PDSCH/PDCCH DMRS similar to the LTE air interface. The 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation; PTRS for phase tracking for PDSCH; and tracking reference signal for time tracking. The 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz. The 5G-NR air interface may include an SSB that is an area of a downlink resource grid that includes PSS/SSS/PBCH.
[0262] In some embodiments, the 5G-NR air interface may utilize BWPs for various purposes. For example, BWP can be used for dynamic adaptation of the SCS. For example, the UE 1302 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 1302, the SCS of the transmission is changed as well. Another use case example of BWP is related to power saving. In particular, multiple BWPs can be configured for the UE 1302 with different amounts of frequency resources (for example, PRBs) to support data transmission under different traffic loading scenarios. A BWP containing a smaller number of PRBs can be used for data transmission with small traffic load while allowing power saving at the UE 1302 and in some cases at the gNB 1356. A BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
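The BWP power-saving use described above (fewer PRBs for light traffic, more PRBs for heavy traffic) reduces to picking the configured BWP whose size fits the load. The capacity model and numbers below are illustrative assumptions, not 3GPP-defined behavior:

```python
def select_bwp(configured_bwps, buffered_bytes):
    """Pick the smallest configured BWP (by PRB count) whose assumed
    capacity covers the current traffic load, to save power.

    `configured_bwps` maps a BWP id to its number of PRBs; capacity is
    modeled crudely as proportional to the PRB count.
    """
    BYTES_PER_PRB = 1000  # illustrative per-interval capacity assumption
    # Try the narrowest BWPs first: fewer PRBs -> lower UE power use.
    for bwp_id, prbs in sorted(configured_bwps.items(), key=lambda kv: kv[1]):
        if prbs * BYTES_PER_PRB >= buffered_bytes:
            return bwp_id
    # Fall back to the widest BWP under heavy load.
    return max(configured_bwps, key=configured_bwps.get)
```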
[0263] The RAN 1330 is communicatively coupled to CN 1318 that includes network elements to provide various functions to support data and telecommunications services to customers/subscribers (for example, users of UE 1302). The components of the CN 1318 may be implemented in one physical node or separate physical nodes. In some embodiments, NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 1318 onto physical compute/storage resources in servers, switches, etc. A logical instantiation of the CN 1318 may be referred to as a network slice, and a logical instantiation of a portion of the CN 1318 may be referred to as a network sub-slice.
[0264] In some embodiments, the CN 1318 may be an LTE CN 1324, which may also be referred to as an EPC. The LTE CN 1324 may include MME 1306, SGW 1308, SGSN 1314, HSS 1316, PGW 1310, and PCRF 1312 coupled with one another over interfaces (or “reference points”) as shown. Functions of the elements of the LTE CN 1324 may be briefly introduced as follows.
[0265] The MME 1306 may implement mobility management functions to track a current location of the UE 1302 to facilitate paging, bearer activation/deactivation, handovers, gateway selection, authentication, etc.
[0266] The SGW 1308 may terminate an S1 interface toward the RAN and route data packets between the RAN and the LTE CN 1324. The SGW 1308 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.
[0267] The SGSN 1314 may track a location of the UE 1302 and perform security functions and access control. In addition, the SGSN 1314 may perform inter-EPC node signaling for mobility between different RAT networks; PDN and S-GW selection as specified by MME 1306; MME selection for handovers; etc. The S3 reference point between the MME 1306 and the SGSN 1314 may enable user and bearer information exchange for inter-3GPP access network mobility in idle/active states.
[0268] The HSS 1316 may include a database for network users, including subscription-related information to support the network entities' handling of communication sessions. The HSS 1316 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc. An S6a reference point between the HSS 1316 and the MME 1306 may enable transfer of subscription and authentication data for authenticating/authorizing user access to the LTE CN 1324.
[0269] The PGW 1310 may terminate an SGi interface toward a data network (DN) 1322 that may include an application/content server 1320. The PGW 1310 may route data packets between the LTE CN 1324 and the data network 1322. The PGW 1310 may be coupled with the SGW 1308 by an S5 reference point to facilitate user plane tunneling and tunnel management. The PGW 1310 may further include a node for policy enforcement and charging data collection (for example, PCEF). Additionally, the SGi reference point between the PGW 1310 and the data network 1322 may be an operator external public, a private PDN, or an intra-operator packet data network, for example, for provision of IMS services. The PGW 1310 may be coupled with a PCRF 1312 via a Gx reference point.
[0270] The PCRF 1312 is the policy and charging control element of the LTE CN 1324. The PCRF 1312 may be communicatively coupled to the app/content server 1320 to determine appropriate QoS and charging parameters for service flows. The PCRF 1312 may provision associated rules into a PCEF (via Gx reference point) with appropriate TFT and QCI.
[0271] In some embodiments, the CN 1318 may be a 5GC 1352. The 5GC 1352 may include an AUSF 1332, AMF 1334, SMF 1336, UPF 1338, NSSF 1340, NEF 1342, NRF 1344, PCF 1346, UDM 1348, and AF 1350 coupled with one another over interfaces (or “reference points”) as shown. Functions of the elements of the 5GC 1352 may be briefly introduced as follows.
[0272] The AUSF 1332 may store data for authentication of UE 1302 and handle authentication-related functionality. The AUSF 1332 may facilitate a common authentication framework for various access types. In addition to communicating with other elements of the 5GC 1352 over reference points as shown, the AUSF 1332 may exhibit an Nausf service-based interface.
[0273] The AMF 1334 may allow other functions of the 5GC 1352 to communicate with the UE 1302 and the RAN 1330 and to subscribe to notifications about mobility events with respect to the UE 1302. The AMF 1334 may be responsible for registration management (for example, for registering UE 1302), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization. The AMF 1334 may provide transport for SM messages between the UE 1302 and the SMF 1336, and act as a transparent proxy for routing SM messages. AMF 1334 may also provide transport for SMS messages between UE 1302 and an SMSF. AMF 1334 may interact with the AUSF 1332 and the UE 1302 to perform various security anchor and context management functions. Furthermore, AMF 1334 may be a termination point of a RAN CP interface, which may include or be an N2 reference point between the RAN 1330 and the AMF 1334; and the AMF 1334 may be a termination point of NAS (N1) signaling, and perform NAS ciphering and integrity protection. AMF 1334 may also support NAS signaling with the UE 1302 over an N3IWF interface.
[0274] The SMF 1336 may be responsible for SM (for example, session establishment, tunnel management between UPF 1338 and AN 1360); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 1338 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; downlink data notification; initiating AN specific SM information, sent via AMF 1334 over N2 to AN 1360; and determining SSC mode of a session. SM may refer to management of a PDU session, and a PDU session or “session” may refer to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 1302 and the data network 1322.
[0275] The UPF 1338 may act as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to data network 1322, and a branching point to support multi-homed PDU sessions. The UPF 1338 may also perform packet routing and forwarding, perform packet inspection, enforce the user plane part of policy rules, lawfully intercept packets (UP collection), perform traffic usage reporting, perform QoS handling for a user plane (e.g., packet filtering, gating, UL/DL rate enforcement), perform uplink traffic verification (e.g., SDF-to-QoS flow mapping), perform transport level packet marking in the uplink and downlink, and perform downlink packet buffering and downlink data notification triggering. UPF 1338 may include an uplink classifier to support routing traffic flows to a data network.
[0276] The NSSF 1340 may select a set of network slice instances serving the UE 1302. The NSSF 1340 may also determine allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed. The NSSF 1340 may also determine the AMF set to be used to serve the UE 1302, or a list of candidate AMFs, based on a suitable configuration and possibly by querying the NRF 1344. The selection of a set of network slice instances for the UE 1302 may be triggered by the AMF 1334 with which the UE 1302 is registered by interacting with the NSSF 1340, which may lead to a change of AMF. The NSSF 1340 may interact with the AMF 1334 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown). Additionally, the NSSF 1340 may exhibit an Nnssf service-based interface.
[0277] The NEF 1342 may securely expose services and capabilities provided by 3GPP network functions for third party, internal exposure/re-exposure, AFs (e.g., AF 1350), edge computing or fog computing systems, etc. In such embodiments, the NEF 1342 may authenticate, authorize, or throttle the AFs. NEF 1342 may also translate information exchanged with the AF 1350 and information exchanged with internal network functions. For example, the NEF 1342 may translate between an AF-Service-Identifier and internal 5GC information. NEF 1342 may also receive information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 1342 as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF 1342 to other NFs and AFs, or used for other purposes such as analytics. Additionally, the NEF 1342 may exhibit an Nnef service-based interface.
[0278] The NRF 1344 may support service discovery functions, receive NF discovery requests from NF instances, and provide the information of the discovered NF instances to the NF instances. NRF 1344 also maintains information of available NF instances and their supported services. As used herein, the terms “instantiate,” “instantiation,” and the like may refer to the creation of an instance, and an “instance” may refer to a concrete occurrence of an object, which may occur, for example, during execution of program code. Additionally, the NRF 1344 may exhibit the Nnrf service-based interface.
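The register/discover behavior described for the NRF above can be sketched as a minimal service registry. This is an illustrative sketch only: the class and method names (`NFRegistry`, `register`, `discover`) are hypothetical and are not part of any 3GPP-defined API.

```python
# Illustrative sketch of NRF-style service discovery: the registry maintains
# available NF instances and their supported services, and answers discovery
# requests with the matching instances. All names are hypothetical.

class NFRegistry:
    """Maintains available NF instances and the services they support."""

    def __init__(self):
        self._instances = {}  # nf_instance_id -> set of supported services

    def register(self, nf_instance_id, services):
        """Record an NF instance and its supported services."""
        self._instances[nf_instance_id] = set(services)

    def deregister(self, nf_instance_id):
        """Remove an NF instance from the registry."""
        self._instances.pop(nf_instance_id, None)

    def discover(self, service):
        """Return the NF instances that support the requested service."""
        return sorted(nf for nf, svcs in self._instances.items() if service in svcs)


registry = NFRegistry()
registry.register("amf-001", ["namf-comm", "namf-evts"])
registry.register("smf-001", ["nsmf-pdusession"])
print(registry.discover("namf-comm"))  # ['amf-001']
```

A discovery request for a service the registry does not know simply returns an empty list, mirroring an NF discovery response with no matching instances.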
[0279] The PCF 1346 may provide policy rules to the control plane functions that enforce them, and may also support a unified policy framework to govern network behavior. The PCF 1346 may also implement a front end to access subscription information relevant for policy decisions in a UDR of the UDM 1348. In addition to communicating with functions over reference points as shown, the PCF 1346 may exhibit an Npcf service-based interface.
[0280] The UDM 1348 may handle subscription-related information to support the network entities’ handling of communication sessions, and may store subscription data of UE 1302. For example, subscription data may be communicated via an N8 reference point between the UDM 1348 and the AMF 1334. The UDM 1348 may include two parts, an application front end and a UDR. The UDR may store subscription data and policy data for the UDM 1348 and the PCF 1346, and/or structured data for exposure and application data (including PFDs for application detection and application request information for multiple UEs 1302) for the NEF 1342. The Nudr service-based interface may be exhibited by the UDR to allow the UDM 1348, PCF 1346, and NEF 1342 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR. The UDM may include a UDM-FE, which is in charge of processing credentials, location management, subscription management, and so on. Several different front ends may serve the same user in different transactions. The UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management. In addition to communicating with other NFs over reference points as shown, the UDM 1348 may exhibit the Nudm service-based interface.
[0281] The AF 1350 may provide application influence on traffic routing, provide access to NEF, and interact with the policy framework for policy control.
[0282] In some embodiments, the 5GC 1352 may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE 1302 is attached to the network. This may reduce latency and load on the network. To provide edge computing implementations, the 5GC 1352 may select a UPF 1338 close to the UE 1302 and execute traffic steering from the UPF 1338 to data network 1322 via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF 1350. In this way, the AF 1350 may influence UPF (re)selection and traffic routing. Based on operator deployment, when AF 1350 is considered to be a trusted entity, the network operator may permit AF 1350 to interact directly with relevant NFs. Additionally, the AF 1350 may exhibit an Naf service-based interface.
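The UPF selection for edge computing described above, which picks a UPF geographically close to the UE, can be sketched as a simple proximity choice. This is an illustrative sketch only; the function name, coordinate representation, and UPF names are hypothetical, and real selection would also weigh subscription data and AF-provided information.

```python
# Illustrative sketch: choose the candidate UPF nearest the UE's location.
# Locations are hypothetical planar coordinates, not a 3GPP-defined encoding.
import math


def select_upf(ue_location, candidate_upfs):
    """Return the candidate UPF with the smallest distance to the UE."""
    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    return min(candidate_upfs, key=lambda upf: distance(ue_location, upf["location"]))


chosen = select_upf(
    (0.0, 0.0),
    [
        {"name": "upf-central", "location": (50.0, 50.0)},
        {"name": "upf-edge", "location": (1.0, 1.0)},
    ],
)
print(chosen["name"])  # upf-edge
```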
[0283] The data network 1322 may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, application/content server 1320.
[0284] FIG. 14 schematically illustrates a wireless network 1400 in accordance with various embodiments. The wireless network 1400 may include a UE 1402 in wireless communication with an AN 1424. The UE 1402 and AN 1424 may be similar to, and substantially interchangeable with, like-named components described elsewhere herein.
[0285] The UE 1402 may be communicatively coupled with the AN 1424 via connection 1446. The connection 1446 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6GHz frequencies.
[0286] The UE 1402 may include a host platform 1404 coupled with a modem platform 1408. The host platform 1404 may include application processing circuitry 1406, which may be coupled with protocol processing circuitry 1410 of the modem platform 1408. The application processing circuitry 1406 may run various applications for the UE 1402 that source/sink application data. The application processing circuitry 1406 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations may include transport (for example, UDP) and Internet (for example, IP) operations.
[0287] The protocol processing circuitry 1410 may implement one or more layer operations to facilitate transmission or reception of data over the connection 1446. The layer operations implemented by the protocol processing circuitry 1410 may include, for example, MAC, RLC, PDCP, RRC, and NAS operations.
[0288] The modem platform 1408 may further include digital baseband circuitry 1412 that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry 1410 in a network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ-ACK functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.
[0289] The modem platform 1408 may further include transmit circuitry 1414, receive circuitry 1416, RF circuitry 1418, and RF front end (RFFE) 1420, which may include or connect to one or more antenna panels 1422. Briefly, the transmit circuitry 1414 may include a digital-to-analog converter, mixer, intermediate frequency (IF) components, etc.; the receive circuitry 1416 may include an analog-to-digital converter, mixer, IF components, etc.; the RF circuitry 1418 may include a low-noise amplifier, a power amplifier, power tracking components, etc.; RFFE 1420 may include filters (for example, surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (for example, phase-array antenna components), etc. The selection and arrangement of the components of the transmit circuitry 1414, receive circuitry 1416, RF circuitry 1418, RFFE 1420, and antenna panels 1422 (referred to generically as “transmit/receive components”) may be specific to details of a specific implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, etc. In some embodiments, the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, etc.
[0290] In some embodiments, the protocol processing circuitry 1410 may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.
[0291] A UE reception may be established by and via the antenna panels 1422, RFFE 1420, RF circuitry 1418, receive circuitry 1416, digital baseband circuitry 1412, and protocol processing circuitry 1410. In some embodiments, the antenna panels 1422 may receive a transmission from the AN 1424 by receive-beamforming signals received by a plurality of antennas/antenna elements of the one or more antenna panels 1422.
[0292] A UE transmission may be established by and via the protocol processing circuitry 1410, digital baseband circuitry 1412, transmit circuitry 1414, RF circuitry 1418, RFFE 1420, and antenna panels 1422. In some embodiments, the transmit components of the UE 1402 may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels 1422.
[0293] Similar to the UE 1402, the AN 1424 may include a host platform 1426 coupled with a modem platform 1430. The host platform 1426 may include application processing circuitry 1428 coupled with protocol processing circuitry 1432 of the modem platform 1430. The modem platform may further include digital baseband circuitry 1434, transmit circuitry 1436, receive circuitry 1438, RF circuitry 1440, RFFE circuitry 1442, and antenna panels 1444. The components of the AN 1424 may be similar to and substantially interchangeable with like-named components of the UE 1402. In addition to performing data transmission/reception as described above, the components of the AN 1424 may perform various logical functions that include, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.
[0294] FIG. 15 is a block diagram illustrating components, according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically, FIG. 15 shows a diagrammatic representation of hardware resources 1530 including one or more processors (or processor cores) 1510, one or more memory/storage devices 1522, and one or more communication resources 1526, each of which may be communicatively coupled via a bus 1520 or other interface circuitry. For embodiments where node virtualization (e.g., NFV) is utilized, a hypervisor 1502 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 1530.
[0295] The processors 1510 may include, for example, a processor 1512 and a processor 1514. The processors 1510 may be, for example, a central processing unit (CPU), a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, a graphics processing unit (GPU), a DSP such as a baseband processor, an ASIC, an FPGA, a radio-frequency integrated circuit (RFIC), another processor (including those discussed herein), or any suitable combination thereof.
[0296] The memory/storage devices 1522 may include main memory, disk storage, or any suitable combination thereof. The memory/storage devices 1522 may include, but are not limited to, any type of volatile, non-volatile, or semi-volatile memory such as dynamic random access memory (DRAM), static random access memory (SRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, etc.
[0297] The communication resources 1526 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 1504 or one or more databases 1506 or other network elements via a network 1508. For example, the communication resources 1526 may include wired communication components (e.g., for coupling via USB, Ethernet, etc.), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, Wi-Fi® components, and other communication components.
[0298] Instructions 106, 1518, 1524, 1528, 1532 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least any of the processors 1510 to perform any one or more of the methodologies discussed herein. The instructions 106, 1518, 1524, 1528, 1532 may reside, completely or partially, within at least one of the processors 1510 (e.g., within the processor’s cache memory), the memory/storage devices 1522, or any suitable combination thereof. Furthermore, any portion of the instructions 106, 1518, 1524, 1528, 1532 may be transferred to the hardware resources 1530 from any combination of the peripheral devices 1504 or the databases 1506. Accordingly, the memory of processors 1510, the memory/storage devices 1522, the peripheral devices 1504, and the databases 1506 are examples of computer-readable and machine-readable media.
[0299] For one or more embodiments, at least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below. For example, the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below. For another example, circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section.
[0300] FIG. 16 illustrates computer readable storage medium 1600. Computer readable storage medium 1600 may comprise any non-transitory computer-readable storage medium or machine-readable storage medium, such as an optical, magnetic or semiconductor storage medium. In various embodiments, computer readable storage medium 1600 may comprise an article of manufacture. In some embodiments, computer readable storage medium 1600 may store computer executable instructions 1602 that circuitry can execute. For example, computer executable instructions 1602 can include computer executable instructions 1602 to implement operations described with respect to logic flows 500 (deleted), 1200a and 900. Examples of computer readable storage medium 1600 or machine-readable storage medium 1600 may include any tangible media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of computer executable instructions 1602 may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like.
[0301] Examples
[0302] Example 1. An apparatus for a network node, comprising:
[0303] a memory interface to send or receive, to or from a data storage device, management information for artificial intelligence (AI) and machine learning (ML) management based on a network resource model (NRM) of a fifth generation system (5GS); and
[0304] processor circuitry communicatively coupled to the memory interface, the processor circuitry to:
[0305] determine to initiate training of a ML entity using the management information, the training to be performed by a management service (MnS) producer of the 5GS;
[0306] determine an inference type associated with the ML entity;
[0307] select training data to train the ML entity; and
[0308] train the ML entity according to the inference type using the selected training data by the MnS producer, the trained ML entity to conduct inference operations for a management service (MnS) consumer of the 5GS.
[0309] Example 2. The apparatus of any previous example such as example 1, the processor circuitry to:
[0310] receive a request for ML entity training from the MnS consumer; and
[0311] determine to initiate training of the ML entity in response to a request for ML entity training from the MnS consumer.
[0312] Example 3. The apparatus of any previous example such as example 1, the processor circuitry to:
[0313] receive a request specifying one or more data sources containing candidate training data for training the ML entity from the MnS consumer; and
[0314] select at least a portion of the training data to train the ML entity from the candidate training data received from the MnS consumer.
[0315] Example 4. The apparatus of any previous example such as example 1, the processor circuitry to receive a request specifying the inference type for the ML entity to be trained from the MnS consumer.
[0316] Example 5. The apparatus of any previous example such as example 1, the processor circuitry to determine to initiate training of the ML entity by the MnS producer as a result of evaluation of performance of the ML entity, based on feedback information received from the MnS consumer, or when new training data describing new network status or events are available.
[0317] Example 6. The apparatus of any previous example such as example 1, the processor circuitry to generate a training result for the trained ML entity by the MnS producer.
[0318] Example 7. The apparatus of any previous example such as example 1, the processor circuitry to provide a training result that includes a location of the trained ML entity to the MnS consumer from the MnS producer.
[0319] Example 8. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management.
[0320] Example 9. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training.
[0321] Example 10. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training and a class hierarchy for ML entity training related to the NRM.
[0322] Example 11. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training request, the AI/ML training request to represent an ML entity training request that is created by the MnS consumer, and wherein the ML entity training request managed object instance (MOI) is contained under one ML training function MOI.
[0323] Example 12. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training report, the AI/ML training report to represent an ML entity training report that is provided by the MnS producer, and wherein the ML entity training report managed object instance (MOI) is contained under one AI/ML training function MOI.
[0324] Example 13. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the AI/ML model to the MnS producer.
[0325] Example 14. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source.
[0326] Example 15. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity supports.
[0327] Example 16. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate a performance of the ML entity.
[0328] Example 17. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether the MnS consumer provided training data has been used for the AI/ML training.
[0329] Example 18. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses of where a list of MnS consumer provided training data is located, which have been used for the ML entity training.
[0330] Example 19. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML entity training request that is created by the MnS consumer.
[0331] Example 20. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an AI/ML model training report managed object instance (MOI) that represents a last training report for the ML entity.
[0332] Example 21. The apparatus of any previous example such as example 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
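The training request and report classes described in Examples 11 through 20 can be approximated as plain data structures. This is a hedged sketch of the information model only: the class and field names below are hypothetical stand-ins for illustration, not the attribute names defined in the 3GPP NRM.

```python
# Illustrative sketch of the ML training request/report information model:
# a request created by the MnS consumer and a report provided by the MnS
# producer, each conceptually contained under an ML training function MOI.
# All field names are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class MLTrainingRequest:
    """Created by the MnS consumer (Example 11)."""
    ml_entity_id: str                  # uniquely identifies the ML entity (Example 13)
    inference_type: str                # inference type the ML entity supports (Example 15)
    candidate_data_sources: List[str] = field(default_factory=list)  # addresses (Example 14)


@dataclass
class MLTrainingReport:
    """Provided by the MnS producer (Example 12)."""
    ml_entity_id: str
    performance_metric: dict           # metrics used to evaluate the ML entity (Example 16)
    consumer_data_used: bool           # whether consumer-provided data was used (Example 17)
    used_consumer_data: List[str] = field(default_factory=list)  # addresses used (Example 18)
    training_request_dn: Optional[str] = None  # DN of the related training request (Example 19)
    last_report_dn: Optional[str] = None       # DN of the last training report MOI (Example 20)
```

A consumer would create an `MLTrainingRequest` instance under a training function MOI; after training, the producer would attach an `MLTrainingReport` whose `training_request_dn` points back to that request.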
[0333] Example 22. A method for a network node, comprising:
[0334] determining to initiate training of a machine learning (ML) entity using management information for artificial intelligence (AI) and machine learning (ML) management based on a network resource model (NRM) of a fifth generation system (5GS), the training to be performed by a management service (MnS) producer of the 5GS;
[0335] determining an inference type associated with an ML entity;
[0336] selecting training data to train the ML entity; and
[0337] training the ML entity according to the inference type using the selected training data by the MnS producer, the trained ML entity to conduct inference operations for a management service (MnS) consumer of the 5GS.
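The method steps above (determine to initiate training, determine the inference type, select training data, train and report) can be sketched as a simple procedure run by an MnS producer. This is an illustrative sketch only: the training step is a placeholder callable, and all function and field names are hypothetical rather than 3GPP-defined.

```python
# Illustrative sketch of the Example 22 method: the MnS producer selects
# training data matching the requested inference type, trains the ML entity,
# and returns a training result including the trained entity's location
# (per Examples 27-28). Names are hypothetical.

def train_ml_entity(request, data_sources, train_fn):
    """Select training data for the requested inference type, train, and report."""
    # Select training data matching the inference type from the candidate sources.
    selected = [d for d in data_sources if d["inference_type"] == request["inference_type"]]
    # Train the ML entity with the selected data (placeholder training callable).
    model_location = train_fn(request["ml_entity_id"], selected)
    # Return a training result that includes the trained entity's location.
    return {
        "ml_entity_id": request["ml_entity_id"],
        "location": model_location,
        "num_samples": len(selected),
    }


result = train_ml_entity(
    {"ml_entity_id": "mle-1", "inference_type": "coverage"},
    [{"inference_type": "coverage"}, {"inference_type": "mobility"}],
    lambda entity_id, data: f"/models/{entity_id}",  # placeholder trainer
)
print(result["location"])  # /models/mle-1
```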
[0338] Example 23. The method of any previous example such as example 22, comprising:
[0339] receiving a request for ML entity training from the MnS consumer; and
[0340] determining to initiate training of the ML entity in response to a request for ML entity training from the MnS consumer.
[0341] Example 24. The method of any previous example such as example 22, comprising:
[0342] receiving a request specifying one or more data sources containing candidate training data for training the ML entity from the MnS consumer; and
[0343] selecting at least a portion of the training data to train the ML entity from the candidate training data received from the MnS consumer.
[0344] Example 25. The method of any previous example such as example 22, comprising receiving a request specifying the inference type for the ML entity to be trained from the MnS consumer.
[0345] Example 26. The method of any previous example such as example 22, comprising determining to initiate training of the ML entity by the MnS producer as a result of evaluation of performance of the ML entity, based on feedback information received from the MnS consumer, or when new training data describing new network status or events are available.
[0346] Example 27. The method of any previous example such as example 22, comprising generating a training result for the trained ML entity by the MnS producer.
[0347] Example 28. The method of any previous example such as example 22, comprising providing a training result that includes a location of the trained ML entity to the MnS consumer from the MnS producer.
[0348] Example 29. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management.
[0349] Example 30. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training.
[0350] Example 31. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training and a class hierarchy for ML entity training related to the NRM.
[0351] Example 32. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training request, the AI/ML training request to represent an ML entity training request that is created by the MnS consumer, and wherein the ML entity training request managed object instance (MOI) is contained under one ML training function MOI.
[0352] Example 33. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training report, the AI/ML training report to represent an ML entity training report that is provided by the MnS producer, and wherein the ML entity training report managed object instance (MOI) is contained under one ML training function MOI.
[0353] Example 34. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the ML entity to the MnS producer.
[0354] Example 35. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source.
[0355] Example 36. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity supports.
[0356] Example 37. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate a performance of the ML entity.
[0357] Example 38. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether the MnS consumer provided training data has been used for the AI/ML training.
[0358] Example 39. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses of where a list of MnS consumer provided training data is located, which have been used for the ML entity training.
[0359] Example 40. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML training request that is created by the MnS consumer.
[0360] Example 41. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an ML entity training report managed object instance (MOI) that represents a last training report for the ML entity.
[0361] Example 42. The method of any previous example such as example 22, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
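The information-model classes and attributes described in Examples 32 through 42 can be sketched as plain data classes. The class and attribute names below (MLTrainingRequest, MLTrainingReport, and their fields) are illustrative assumptions made for this sketch, not the normative NRM definitions.

```python
# Hypothetical sketch of the NRM classes for ML entity training:
# an MLTrainingRequest MOI created by the MnS consumer and an
# MLTrainingReport MOI provided by the MnS producer, each contained
# under an ML training function MOI.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class MLTrainingRequest:
    """Training request MOI created by the MnS consumer."""
    ml_entity_id: str                     # uniquely identifies the ML entity (Example 34)
    inference_type: str                   # inference type the ML entity supports (Example 36)
    candidate_data_sources: List[str] = field(default_factory=list)  # source addresses (Example 35)


@dataclass
class MLTrainingReport:
    """Training report MOI provided by the MnS producer."""
    ml_entity_id: str
    performance_metric: dict              # metric name -> value (Example 37)
    consumer_data_used: bool              # consumer-provided data used? (Example 38)
    used_consumer_data: List[str] = field(default_factory=list)  # addresses of used data (Example 39)
    training_request_dn: Optional[str] = None  # DN of the related training request (Example 40)
    last_report_dn: Optional[str] = None       # DN of the last training report MOI (Example 41)
```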
[0362] Example 43. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to:
[0363] determine to initiate training of a machine learning (ML) entity using management information for artificial intelligence (AI) and ML management based on a network resource model (NRM) of a fifth generation system (5GS), the training to be performed by a management service (MnS) producer of the 5GS;
[0364] determine an inference type associated with the ML entity;
[0365] select training data to train the ML entity; and
[0366] train the ML entity according to the inference type using the selected training data by the MnS producer, the trained ML entity to conduct inference operations for a management service (MnS) consumer of the 5GS.
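The four-step producer flow of Example 43 (initiate, determine the inference type, select training data, train) can be sketched end to end. Every function, field, and path below is an assumed placeholder for illustration, not an interface defined by this publication.

```python
# Minimal sketch of the producer-side flow in Example 43: given a training
# request and candidate training data, determine the inference type, select
# matching training data, "train" the ML entity (placeholder), and return a
# training result including the location of the trained ML entity.
def train_ml_entity(request: dict, candidate_data: list) -> dict:
    inference_type = request["inference_type"]        # determine the inference type
    # select training data: here, keep samples matching the inference type
    selected = [s for s in candidate_data if s.get("type") == inference_type]
    # train the ML entity (placeholder) and generate a training result
    return {
        "ml_entity_id": request["ml_entity_id"],
        "inference_type": inference_type,
        "samples_used": len(selected),
        # location of the trained ML entity, returned to the MnS consumer
        "model_location": f"/models/{request['ml_entity_id']}",
    }
```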
[0367] Example 44. The computer-readable storage medium of any previous example such as example 43, comprising:
[0368] receive a request for ML entity training from the MnS consumer; and
[0369] determine to initiate training of the ML entity in response to a request for ML entity training from the MnS consumer.
[0370] Example 45. The computer-readable storage medium of any previous example such as example 43, comprising:
[0371] receive a request specifying one or more data sources containing candidate training data for training the ML entity from the MnS consumer; and
[0372] select at least a portion of the training data to train the ML entity from the candidate training data received from the MnS consumer.
[0373] Example 46. The computer-readable storage medium of any previous example such as example 43, comprising receive a request specifying the inference type for the ML entity to be trained from the MnS consumer.
[0374] Example 47. The computer-readable storage medium of any previous example such as example 43, comprising determine to initiate training of the ML entity by the MnS producer as a result of evaluation of performance of the ML entity, based on feedback information received from the MnS consumer, or when new training data describing new network status or events are available.
[0375] Example 48. The computer-readable storage medium of any previous example such as example 43, comprising generate a training result for the trained ML entity by the MnS producer.
[0376] Example 49. The computer-readable storage medium of any previous example such as example 43, comprising provide a training result that includes a location of the trained ML entity to the MnS consumer from the MnS producer.
[0377] Example 50. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management.
[0378] Example 51. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training.
[0379] Example 52. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training and a class hierarchy for ML training related to the NRM.
[0380] Example 53. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training request, the AI/ML training request to represent an ML entity training request that is created by the MnS consumer, and wherein the ML entity training request managed object instance (MOI) is contained under one ML training function MOI.
[0381] Example 54. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more class definitions, with one class definition to include an AI/ML training report, the AI/ML training report to represent an ML entity training report that is provided by the MnS producer, and wherein the ML entity training report managed object instance (MOI) is contained under one AI/ML training function MOI.
[0382] Example 55. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the ML entity to the MnS producer.
[0383] Example 56. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source.
[0384] Example 57. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity supports.
[0385] Example 58. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate a performance of the ML entity.
[0386] Example 59. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether the MnS consumer provided training data has been used for the AI/ML training.
[0387] Example 60. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses of where a list of MnS consumer provided training data is located, which have been used for the ML entity training.
[0388] Example 61. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML training request that is created by the MnS consumer.
[0389] Example 62. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an ML training report managed object instance (MOI) that represents a last training report for the ML entity.
[0390] Example 63. The computer-readable storage medium of any previous example such as example 43, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
[0391] Any of the above-described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. Any of the above-described examples may be implemented as system examples and means plus function examples, unless explicitly stated otherwise. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments.
[0392] Terminology
[0393] For the purposes of the present document, the following terms and definitions are applicable to the examples and embodiments discussed herein.
[0394] The term “circuitry” as used herein refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable SoC), digital signal processors (DSPs), etc., that are configured to provide the described functionality. In some embodiments, the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. The term “circuitry” may also refer to a combination of one or more hardware elements (or a combination of circuits used in an electrical or electronic system) with the program code used to carry out the functionality of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry.
[0395] The term “processor circuitry” as used herein refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data. Processing circuitry may include one or more processing cores to execute instructions and one or more memory structures to store program and data information. The term “processor circuitry” may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes. Processing circuitry may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like. The one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators. The terms “application circuitry” and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.”
[0396] The term “interface circuitry” as used herein refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices. The term “interface circuitry” may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like.
[0397] The term “user equipment” or “UE” as used herein refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network. The term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc. Furthermore, the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface.
[0398] The term “network element” as used herein refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services. The term “network element” may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized VNF, NFVI, and/or the like.
[0399] The term “computer system” as used herein refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources.
[0400] The term “appliance,” “computer appliance,” or the like, as used herein refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource. A “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource.
[0401] The term “resource” as used herein refers to a physical or virtual device, a physical or virtual component within a computing environment, and/or a physical or virtual component within a particular device, such as computer devices, mechanical devices, memory space, processor/CPU time, processor/CPU usage, processor and accelerator loads, hardware time or usage, electrical power, input/output operations, ports or network sockets, channel/link allocation, throughput, memory usage, storage, network, database and applications, workload units, and/or the like. A “hardware resource” may refer to compute, storage, and/or network resources provided by physical hardware element(s). A “virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc. The term “network resource” or “communication resource” may refer to resources that are accessible by computer devices/systems via a communications network. The term “system resources” may refer to any kind of shared entities to provide services, and may include computing and/or network resources. System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable.
[0402] The term “channel” as used herein refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream. The term “channel” may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated. Additionally, the term “link” as used herein refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information.
[0403] The terms “instantiate,” “instantiation,” and the like as used herein refers to the creation of an instance. An “instance” also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code.
[0404] The terms “coupled,” “communicatively coupled,” along with derivatives thereof are used herein. The term “coupled” may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term “directly coupled” may mean that two or more elements are in direct contact with one another. The term “communicatively coupled” may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.
[0405] The term “information element” refers to a structural element containing one or more fields. The term “field” refers to individual contents of an information element, or a data element that contains content.
[0406] The term “SMTC” refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration.
[0407] The term “SSB” refers to an SS/PBCH block.
[0408] The term “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure.
[0409] The term “Primary SCG Cell” refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation.
[0410] The term “Secondary Cell” refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA.
[0411] The term “Secondary Cell Group” refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC.
[0412] The term “Serving Cell” refers to the primary cell for a UE in RRC CONNECTED not configured with CA/DC; there is only one serving cell, comprising the primary cell.
[0413] The term “serving cell” or “serving cells” refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC CONNECTED configured with CA/DC.
[0414] The term “Special Cell” refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.

Claims

CLAIMS
What is claimed is:
1. An apparatus for a network node, comprising: a memory interface to send or receive, to or from a data storage device, management information for artificial intelligence (AI) and machine learning (ML) management based on a network resource model (NRM) of a fifth generation system (5GS); and processor circuitry communicatively coupled to the memory interface, the processor circuitry to: determine to initiate training of an ML entity using the management information, the training to be performed by a management service (MnS) producer of the 5GS; determine an inference type associated with the ML entity; select training data to train the ML entity; and train the ML entity according to the inference type using the selected training data by the MnS producer, the trained ML entity to conduct inference operations for a management service (MnS) consumer of the 5GS.
2. The apparatus of claim 1, the processor circuitry to: receive a request for ML entity training from the MnS consumer; and determine to initiate training of the ML entity in response to a request for ML entity training from the MnS consumer.
3. The apparatus of claim 1, the processor circuitry to: receive a request specifying one or more data sources containing candidate training data for training the ML entity from the MnS consumer; and select at least a portion of the training data to train the ML entity from the candidate training data received from the MnS consumer.
4. The apparatus of claim 1, the processor circuitry to receive a request specifying the inference type for the ML entity to be trained from the MnS consumer.
5. The apparatus of claim 1, the processor circuitry to determine to initiate training of the ML entity by the MnS producer as a result of evaluation of performance of the ML entity, based on feedback information received from the MnS consumer, or when new training data describing new network status or events are available.
6. The apparatus of claim 1, the processor circuitry to generate a training result for the trained ML entity by the MnS producer.
7. The apparatus of claim 1, the processor circuitry to provide a training result that includes a location of the trained ML entity to the MnS consumer from the MnS producer.
8. The apparatus of claim 1, wherein the management information includes one or more information model definitions for AI/ML management.
9. The apparatus of claim 1, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include: a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training; a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training and a class hierarchy for ML entity training related to the NRM; one or more class definitions, with one class definition to include an AI/ML training request, the AI/ML training request to represent an ML entity training request that is created by the MnS consumer, and wherein the ML entity training request managed object instance (MOI) is contained under one ML training function MOI; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the AI/ML model to the MnS producer; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity supports; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate a performance of the ML entity; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether the MnS consumer provided training data has been used for the AI/ML training; one or more attribute definitions, with one
attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses of where a list of MnS consumer provided training data is located, which have been used for the ML entity training; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML entity training request that is created by the MnS consumer; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an AI/ML model training report managed object instance (MOI) that represents a last training report for the ML entity; or one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
10. A method for a network node, comprising: determining to initiate training of a machine learning (ML) entity using management information for artificial intelligence (AI) and machine learning (ML) management based on a network resource model (NRM) of a fifth generation system (5GS), the training to be performed by a management service (MnS) producer of the 5GS; determining an inference type associated with the ML entity; selecting training data to train the ML entity; and training the ML entity according to the inference type using the selected training data by the MnS producer, the trained ML entity to conduct inference operations for a management service (MnS) consumer of the 5GS.
11. The method of claim 10, comprising: receiving a request for ML entity training from the MnS consumer; and determining to initiate training of the ML entity in response to a request for ML entity training from the MnS consumer.
12. The method of claim 10, comprising: receiving a request specifying one or more data sources containing candidate training data for training the ML entity from the MnS consumer; and selecting at least a portion of the training data to train the ML entity from the candidate training data received from the MnS consumer.
13. The method of claim 10, comprising receiving a request specifying the inference type for the ML entity to be trained from the MnS consumer.
14. The method of claim 10, comprising determining to initiate training of the ML entity by the MnS producer as a result of evaluation of performance of the ML entity, based on feedback information received from the MnS consumer, or when new training data describing new network status or events are available.
15. The method of claim 10, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include: a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training; a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training and a class hierarchy for ML entity training related to the NRM; one or more class definitions, with one class definition to include an AI/ML training request, the AI/ML training request to represent an ML entity training request that is created by the MnS consumer, and wherein the ML entity training request managed object instance (MOI) is contained under one ML training function MOI; one or more class definitions, with one class definition to include an AI/ML training report, the AI/ML training report to represent an ML entity training report that is provided by the MnS producer, and wherein the ML entity training report managed object instance (MOI) is contained under one ML training function MOI; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the ML entity to the MnS producer; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity supports; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate a performance of the ML
entity; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether the MnS consumer provided training data has been used for the AI/ML training; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses of where a list of MnS consumer provided training data is located, which have been used for the ML entity training; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML training request that is created by the MnS consumer; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an ML entity training report managed object instance (MOI) that represents a last training report for the ML entity; or one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
16. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that when executed by a computer, cause the computer to: determine to initiate training of a machine learning (ML) entity using management information for artificial intelligence (AI) and ML management based on a network resource model (NRM) of a fifth generation system (5GS), the training to be performed by a management service (MnS) producer of the 5GS; determine an inference type associated with the ML entity; select training data to train the ML entity; and train the ML entity according to the inference type using the selected training data by the MnS producer, the trained ML entity to conduct inference operations for a management service (MnS) consumer of the 5GS.
17. The computer-readable storage medium of claim 16, wherein the instructions further cause the computer to: receive a request for ML entity training from the MnS consumer; and determine to initiate training of the ML entity in response to a request for ML entity training from the MnS consumer.
18. The computer-readable storage medium of claim 16, wherein the instructions further cause the computer to: receive a request specifying one or more data sources containing candidate training data for training the ML entity from the MnS consumer; and select at least a portion of the training data to train the ML entity from the candidate training data received from the MnS consumer.
19. The computer-readable storage medium of claim 16, wherein the instructions further cause the computer to receive a request specifying the inference type for the ML entity to be trained from the MnS consumer.
20. The computer-readable storage medium of claim 16, wherein the management information includes one or more information model definitions for AI/ML management, the information model definitions to include: a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training; a class diagram, the class diagram to include a set of classes that encapsulates information relevant to ML entity training and a class hierarchy for ML entity training related to the NRM; one or more class definitions, with one class definition to include an AI/ML training request, the AI/ML training request to represent an ML entity training request that is created by the MnS consumer, and wherein the ML entity training request managed object instance (MOI) is contained under one ML training function MOI; one or more class definitions, with one class definition to include an AI/ML training report, the AI/ML training report to represent an ML entity training report that is provided by the MnS producer, and wherein the ML entity training report managed object instance (MOI) is contained under one ML training function MOI; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an identifier to uniquely identify the ML entity to the MnS producer; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an address of a candidate training data source; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate the inference type that the ML entity supports; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a performance metric used to evaluate
a performance of the ML entity; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise an indicator to indicate whether the MnS consumer provided training data has been used for the AI/ML training; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise one or more addresses of where a list of MnS consumer provided training data is located, which have been used for the ML entity training; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of a related ML training request that is created by the MnS consumer; one or more attribute definitions, with one attribute definition to include one or more attribute properties, with one attribute property to comprise a distinguished name of an ML entity training report managed object instance (MOI) that represents a last training report for the ML entity; or one or more common notifications that the MnS consumer may receive, with one common notification to have a notification header attribute for an object class or object instance that captures a distinguished name of the object class or object instance.
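The information model recited in the claims (an ML training function MOI containing consumer-created training request MOIs and producer-created training report MOIs, with attributes such as the ML entity identifier, inference type, candidate data sources, and distinguished-name references) can be sketched as plain data classes. This is a hypothetical illustration only: class and attribute names merely approximate the claim language and are not the normative 3GPP NRM definitions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MLTrainingRequest:
    """ML entity training request MOI, created by the MnS consumer."""
    ml_entity_id: str                        # uniquely identifies the ML entity
    inference_type: str                      # inference type the entity supports
    candidate_data_sources: List[str] = field(default_factory=list)

@dataclass
class MLTrainingReport:
    """ML entity training report MOI, provided by the MnS producer."""
    ml_entity_id: str
    consumer_data_used: bool                 # was consumer-provided data used?
    used_consumer_data: List[str] = field(default_factory=list)
    request_ref: Optional[str] = None        # DN of the related training request
    last_report_ref: Optional[str] = None    # DN of the previous report MOI
    performance_metric: Optional[float] = None

@dataclass
class MLTrainingFunction:
    """MOI under which request and report MOIs are contained."""
    dn: str
    requests: List[MLTrainingRequest] = field(default_factory=list)
    reports: List[MLTrainingReport] = field(default_factory=list)

    def train(self, request: MLTrainingRequest) -> MLTrainingReport:
        # Producer-side sketch: select training data from the consumer-supplied
        # candidates (here, trivially all of them), "train", then report.
        selected = list(request.candidate_data_sources)
        report = MLTrainingReport(
            ml_entity_id=request.ml_entity_id,
            consumer_data_used=bool(selected),
            used_consumer_data=selected,
            request_ref=f"{self.dn}/MLTrainingRequest={len(self.requests) + 1}",
            last_report_ref=(f"{self.dn}/MLTrainingReport={len(self.reports)}"
                             if self.reports else None),
        )
        self.requests.append(request)
        self.reports.append(report)
        return report
```

A consumer would create an MLTrainingRequest MOI under the function; the producer then trains the entity and exposes a report whose `last_report_ref` chains back to the preceding report, matching the "last training report" attribute the claims describe.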
PCT/US2022/051545 2021-12-13 2022-12-01 Network resource model based solutions for ai-ml model training WO2023114017A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202280046922.9A CN117716674A (en) 2021-12-13 2022-12-01 Network resource model-based solution for AI-ML model training

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163288778P 2021-12-13 2021-12-13
US63/288,778 2021-12-13

Publications (1)

Publication Number Publication Date
WO2023114017A1 true WO2023114017A1 (en) 2023-06-22

Family

ID=86773347

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/051545 WO2023114017A1 (en) 2021-12-13 2022-12-01 Network resource model based solutions for ai-ml model training

Country Status (2)

Country Link
CN (1) CN117716674A (en)
WO (1) WO2023114017A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210021494A1 (en) * 2019-10-03 2021-01-21 Intel Corporation Management data analytics
US20210089921A1 (en) * 2019-09-25 2021-03-25 Nvidia Corporation Transfer learning for neural networks
EP3869847A1 (en) * 2020-02-24 2021-08-25 INTEL Corporation Multi-access traffic management in open ran (o-ran)


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Management and orchestration; Management Data Analytics (MDA) (Release 17)", 3GPP TS 28.104, no. V0.3.0, 8 December 2021 (2021-12-08), pages 1 - 40, XP052083073 *
"3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Management and orchestration; Study on enhancement of Management Data Analytics (MDA) (Release 17)", 3GPP TR 28.809, no. V17.0.0, 6 April 2021 (2021-04-06), pages 1 - 96, XP052000543 *

Also Published As

Publication number Publication date
CN117716674A (en) 2024-03-15

Similar Documents

Publication Publication Date Title
US11943122B2 (en) Management data analytics
US11917527B2 (en) Resource allocation and activation/deactivation configuration of open radio access network (O-RAN) network slice subnets
KR20220092366A (en) Interoperable framework for secure dual mode edge application programming interface consumption in hybrid edge computing platforms
CN115119331A (en) Reinforcement learning for multi-access traffic management
CN115175130A (en) Method and apparatus for multiple access edge computing service for mobile user equipment
US11968559B2 (en) Apparatus and method for 5G quality of service indicator management
US20230141237A1 (en) Techniques for management data analytics (mda) process and service
CN114443556A (en) Device and method for man-machine interaction of AI/ML training host
CN117897980A (en) Intelligent application manager for wireless access network
WO2022159400A1 (en) Quality of service monitoring in integrated cellular time sensitive bridged network
WO2022087482A1 (en) Resource allocation for new radio multicast-broadcast service
WO2023069534A1 (en) Using ai-based models for network energy savings
WO2023283102A1 (en) Radio resource planning and slice-aware scheduling for intelligent radio access network slicing
WO2022235525A1 (en) Enhanced collaboration between user equpiment and network to facilitate machine learning
WO2022154961A1 (en) Support for edge enabler server and edge configuration server lifecycle management
WO2023114017A1 (en) Network resource model based solutions for ai-ml model training
EP4240050A1 (en) A1 enrichment information related functions and services in the non-real time ran intelligent controller
WO2024091970A1 (en) Performance evaluation for artificial intelligence/machine learning inference
WO2022221495A1 (en) Machine learning support for management services and management data analytics services
WO2024026515A1 (en) Artificial intelligence and machine learning entity testing
CN116998137A (en) Machine learning support for management services and management data analysis services
WO2024091862A1 (en) Artificial intelligence/machine learning (ai/ml) models for determining energy consumption in virtual network function instances
WO2024064534A1 (en) Non-grid of beams (gob) beamforming control and policy over e2 interface
WO2024092132A1 (en) Artificial intelligence and machine learning entity loading in cellular networks
WO2022217083A1 (en) Methods and apparatus to support radio resource management (rrm) optimization for network slice instances in 5g systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22908225

Country of ref document: EP

Kind code of ref document: A1