WO2023240592A1 - Apparatus, methods, and computer programs - Google Patents


Info

Publication number
WO2023240592A1
Authority
WO
WIPO (PCT)
Prior art keywords
function
request
requested information
reporting
signalling
Prior art date
Application number
PCT/CN2022/099429
Other languages
French (fr)
Inventor
Stephen MWANJE
Shu Qiang SUN
Original Assignee
Nokia Shanghai Bell Co., Ltd.
Nokia Solutions And Networks Oy
Nokia Technologies Oy
Priority date
Filing date
Publication date
Application filed by Nokia Shanghai Bell Co., Ltd., Nokia Solutions And Networks Oy, Nokia Technologies Oy
Priority to PCT/CN2022/099429
Publication of WO2023240592A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Definitions

  • Various examples described herein generally relate to apparatus, methods, and computer programs, and more particularly (but not exclusively) to apparatus, methods and computer programs for network apparatuses.
  • a communication system can be seen as a facility that enables communication sessions between two or more entities such as user terminals, access nodes and/or other nodes by providing carriers between the various entities involved in the communications path.
  • a communication system can be provided, for example, by means of a communication network and one or more compatible communication devices.
  • the communication sessions may comprise, for example, communication of data for carrying communications such as voice, electronic mail (email) , text message, multimedia and/or content data and so on.
  • Content may be multicast or uni-cast to communication devices.
  • a user can access the communication system by means of an appropriate communication device or terminal.
  • a communication device of a user is often referred to as user equipment (UE) or user device.
  • the communication device may access a carrier provided by an access node and transmit and/or receive communications on the carrier.
  • the communication system and associated devices typically operate in accordance with a required standard or specification which sets out what the various entities associated with the system are permitted to do and how that should be achieved. Communication protocols and/or parameters which shall be used for the connection are also typically defined.
  • One example of an architecture is UTRAN (3G radio access).
  • Another example of an architecture is the long-term evolution (LTE) or the Universal Mobile Telecommunications System (UMTS) radio-access technology.
  • Another example communication system is so called 5G system that allows user equipment (UE) or user device to contact a 5G core via e.g. new radio (NR) access technology or via other access technology such as Untrusted access to 5GC or wireline access technology.
  • a UE Registration Area comprises a list of one or more Tracking Areas (TA) .
  • a Tracking Area is a logical concept of an area where a UE can move around without updating the network.
  • the network can allocate a list with one or more TAs to the UE.
  • an apparatus for a first function comprising means for performing: signalling, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; and receiving the requested information from the second function.
  • the apparatus may comprise means for performing: signalling, to the second function, a request to delete any reporting associated with the request; and receiving an indication that the second function will no longer report the requested information to the first function.
  • the request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
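  • As a non-limiting illustration of the request parameters listed above, the following Python sketch models such a request as a simple data structure; the class and field names (e.g. InferenceHistoryRequest, ml_model_id) are illustrative assumptions and are not defined by this disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InferenceHistoryRequest:
    """Hypothetical container for the request parameters described above."""
    ml_model_id: Optional[str] = None             # identifier of the machine learning model
    ml_app_id: Optional[str] = None               # identifier of the ML application containing the model
    reporting_context: Optional[dict] = None      # context the second function should consider
    reporting_frequency_s: Optional[int] = None   # how often the first function wants reports, in seconds

# Example: request the decision history of one model, reported every 15 minutes.
req = InferenceHistoryRequest(ml_model_id="model-42", reporting_frequency_s=900)
```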
  • the means for obtaining the requested information may comprise means for performing: retrieving information related to said at least one decision from a storage function; and processing the retrieved information to form the requested information.
  • the apparatus may comprise means for performing: in response to said receiving, determining that the second function is currently maintaining a first reporting instance in respect of said requested information; and updating the first instance to comprise an instruction to signal the requested information to the first function.
  • the apparatus may comprise means for performing: in response to said receiving, determining that the second function is not currently maintaining a first reporting instance in respect of said requested information; and instantiating a second instance that comprises an instruction to signal the requested information to the first function.
  • the means for performing determining that the second function is not currently maintaining a first reporting instance in respect of said requested information may comprise means for performing at least one of: determining that the received request comprises an indication of a different machine learning model to any machine learning model that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different context to any context that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different termination condition to any termination condition that is a subject of the first reporting instance.
  • the apparatus may comprise means for performing: providing the first function with an identifier of the first instance and/or with an identifier of the second instance.
  • the means for obtaining the requested information may comprise means for performing: determining filter conditions for obtaining the requested information; and applying the determined filter conditions to collect the requested information.
  • the apparatus may comprise means for performing: receiving, from the first function, a request to delete any reporting associated with the request; and signalling an indication that the second function will no longer report the requested information to the first function.
  • the request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
  • the apparatus may be caused to perform: signalling, to the second function, a request to modify at least one parameter comprised in the request; and receiving an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
  • the apparatus may be caused to perform: signalling, to the second function, a request to delete any reporting associated with the request; and receiving an indication that the second function will no longer report the requested information to the first function.
  • the obtaining the requested information may comprise performing: retrieving information related to said at least one decision from a storage function; and processing the retrieved information to form the requested information.
  • the apparatus may be caused to perform: in response to said receiving, determining that the second function is currently maintaining a first reporting instance in respect of said requested information; and updating the first instance to comprise an instruction to signal the requested information to the first function.
  • the performing determining that the second function is not currently maintaining a first reporting instance in respect of said requested information may comprise performing at least one of: determining that the received request comprises an indication of a different machine learning model to any machine learning model that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different context to any context that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different termination condition to any termination condition that is a subject of the first reporting instance.
  • the obtaining the requested information may comprise performing: determining filter conditions for obtaining the requested information; and applying the determined filter conditions to collect the requested information.
  • the apparatus may be caused to perform: receiving, from the first function, a request to modify at least one parameter comprised in the request; and signalling, to the first function, an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
  • the apparatus may be caused to perform: receiving, from the first function, a request to delete any reporting associated with the request; and signalling an indication that the second function will no longer report the requested information to the first function.
  • the request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
  • the method may comprise performing: signalling, to the second function, a request to modify at least one parameter comprised in the request; and receiving an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
  • a method for an apparatus for a second function comprising performing: receiving, from a first function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; obtaining the requested information; and signalling the requested information to the first function.
  • the updating the first instance may comprise performing: associating, in the first instance, an indication of a frequency at which the first function is requesting to be provided with the requested information with an identifier of the first function.
  • the method may comprise performing: receiving, from the first function, a request to modify at least one parameter comprised in the request; and signalling, to the first function, an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
  • the request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
  • an apparatus for a first function comprising: signalling circuitry for signalling, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; and receiving circuitry for receiving the requested information from the second function.
  • the apparatus may comprise: signalling circuitry for signalling, to the second function, a request to delete any reporting associated with the request; and receiving circuitry for receiving an indication that the second function will no longer report the requested information to the first function.
  • the request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
  • the obtaining circuitry for obtaining the requested information may comprise: retrieving circuitry for retrieving information related to said at least one decision from a storage function; and processing circuitry for processing the retrieved information to form the requested information.
  • the updating circuitry for updating the first instance may comprise: associating circuitry for associating, in the first instance, an indication of a frequency at which the first function is requesting to be provided with the requested information with an identifier of the first function.
  • the apparatus may comprise: determining circuitry for, in response to said receiving, determining that the second function is not currently maintaining a first reporting instance in respect of said requested information; and instantiating circuitry for instantiating a second instance that comprises an instruction to signal the requested information to the first function.
  • the determining circuitry for determining that the second function is not currently maintaining a first reporting instance in respect of said requested information may comprise performing circuitry for performing at least one of: determining that the received request comprises an indication of a different machine learning model to any machine learning model that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different context to any context that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different termination condition to any termination condition that is a subject of the first reporting instance.
  • the apparatus may comprise: providing circuitry for providing the first function with an identifier of the first instance and/or with an identifier of the second instance.
  • the obtaining circuitry for obtaining the requested information may comprise: determining circuitry for determining filter conditions for obtaining the requested information; and applying circuitry for applying the determined filter conditions to collect the requested information.
  • the apparatus may comprise: receiving circuitry for receiving, from the first function, a request to delete any reporting associated with the request; and signalling circuitry for signalling an indication that the second function will no longer report the requested information to the first function.
  • the request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
  • non-transitory computer readable medium comprising program instructions for causing an apparatus for a first function to perform at least the following: signal, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; and receive the requested information from the second function.
  • the apparatus may be caused to perform: signalling, to the second function, a request to modify at least one parameter comprised in the request; and receiving an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
  • the apparatus may be caused to perform: signalling, to the second function, a request to delete any reporting associated with the request; and receiving an indication that the second function will no longer report the requested information to the first function.
  • the request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
  • non-transitory computer readable medium comprising program instructions for causing an apparatus for a second function to perform at least the following: receive, from a first function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; obtain the requested information; and signal the requested information to the first function.
  • the obtaining the requested information may comprise performing: retrieving information related to said at least one decision from a storage function; and processing the retrieved information to form the requested information.
  • the apparatus may be caused to perform: in response to said receiving, determining that the second function is currently maintaining a first reporting instance in respect of said requested information; and updating the first instance to comprise an instruction to signal the requested information to the first function.
  • the apparatus may be caused to perform: in response to said receiving, determining that the second function is not currently maintaining a first reporting instance in respect of said requested information; and instantiating a second instance that comprises an instruction to signal the requested information to the first function.
  • the performing determining that the second function is not currently maintaining a first reporting instance in respect of said requested information may comprise performing at least one of: determining that the received request comprises an indication of a different machine learning model to any machine learning model that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different context to any context that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different termination condition to any termination condition that is a subject of the first reporting instance.
  • the apparatus may be caused to perform: providing the first function with an identifier of the first instance and/or with an identifier of the second instance.
  • the obtaining the requested information may comprise performing: determining filter conditions for obtaining the requested information; and applying the determined filter conditions to collect the requested information.
  • the apparatus may be caused to perform: receiving, from the first function, a request to modify at least one parameter comprised in the request; and signalling, to the first function, an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
  • the apparatus may be caused to perform: receiving, from the first function, a request to delete any reporting associated with the request; and signalling an indication that the second function will no longer report the requested information to the first function.
  • the request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
  • a computer program product stored on a medium that may cause an apparatus to perform any method as described herein.
  • a chipset that may comprise an apparatus as described herein.
  • Figures 1A and 1B show a schematic representation of a 5G system
  • Figure 2 shows a schematic representation of a network apparatus
  • Figure 3 shows a schematic representation of a user equipment
  • Figure 4 shows a schematic representation of a non-volatile memory medium storing instructions which when executed by a processor allow a processor to perform one or more of the steps of the methods of some examples;
  • Figure 5 shows a schematic representation of a network
  • Figures 6 to 10 illustrate example architectures
  • Figure 1A shows a schematic representation of a 5G system (5GS) 100.
  • the 5GS may comprise a user equipment (UE) 102 (which may also be referred to as a communication device or a terminal), a 5G access network (AN) (which may be a 5G Radio Access Network (RAN) or any other type of 5G AN such as a Non-3GPP Interworking Function (N3IWF) / Trusted Non-3GPP Gateway Function (TNGF) for Untrusted/Trusted Non-3GPP access or a Wireline Access Gateway Function (W-AGF) for Wireline access) 104, a 5G core (5GC) 106, one or more application functions (AF) 108 and one or more data networks (DN) 110.
  • the network may further comprise a management data analytics service (MDAS) producer or MDAS Management Service (MnS) producer.
  • MDAS MnS producer may provide data analytics in the management plane considering parameters including, for example, load level and/or resource utilization.
  • the MDAS MnS producer for a network function (NF) may collect the NF’s load-related performance data, e.g., resource usage status of the NF.
  • the analysis of the collected data may provide forecast of resource usage information in a predefined future time window. This analysis may also recommend appropriate actions e.g., scaling of resources, admission control, load balancing of traffic, and so forth.
  • Figure 1B shows a schematic representation of a 5GC as represented in current 3GPP specifications. It is understood that this architecture is intended to illustrate potential components that may be comprised in a core network, and the presently described principles are not limited to core networks comprising only the described components.
  • Cognitive Autonomous Networks promise to provide intelligence and autonomy in network Operations, Administration and Management (OAM) as well as in network procedures for supporting increased flexibility and complexity of a radio network.
  • CANs may be able to: 1) take higher level goals and derive the appropriate performance targets, 2) learn from their environment and their individual or shared experiences therein, 3) learn to contextualize their operating conditions, and 4) learn their “optimal” behavior fitting to the specific environment and contexts.
  • the results of these four operations may be used to implement at least one decision based on information derived while performing them.
  • the historical decisions may be used in decision making as part of input data.
  • current 3GPP specifications do not comprise solutions for managing the history of inference, in terms of information modeling and procedures for lifecycle management of inference history records.
  • the network function and/or management systems thereof may model information objects for requesting and reporting on ML inference history.
  • the network function and/or management systems may further be usefully configured to provide operations that support procedures for requesting and reporting on ML inference history.
  • the ML Inference History Function is configured to receive a request for the history of contextualised decisions taken by an MLApp. This received request is hereinafter referred to as ML Inference History Function Request.
  • the ML Inference History Function may assume the existence of an ML Inference History Log that stores decisions of the MLApp that were made for different contexts.
  • the different contexts may be stored in respect of each decision.
  • the requested history may be compiled from these stored decisions.
  • the ML Inference History Function Request may map to an existing ML Inference History Function Reporting.
  • the requestor is added to the recipients of the report provided by the existing ML Inference History Function Reporting.
  • the request may not map to any existing ML Inference History Function Reporting.
  • the ML Inference History Function may create a new ML Inference History Function Reporting to be associated with the received ML Inference History Function Request and the MLApp specified in the request.
  • an information model for the ML Inference History Function report may be defined that provides data on the captured inference history.
  • the information model may be of a single MLInferencerecord.
  • Procedures for requesting and reporting on the ML Inference History Function may be defined.
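  • As a rough sketch of the log structure implied above (one record per decision, with the context in which the decision was taken stored alongside it), the following Python example shows how the requested history could be compiled from such a log; all names (InferenceRecord, InferenceHistoryLog, etc.) are illustrative assumptions rather than the disclosed information model.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class InferenceRecord:
    """One stored decision of an MLApp together with the context in which it was taken."""
    ml_app_id: str
    decision: Any
    context: Dict[str, Any]
    timestamp: float

@dataclass
class InferenceHistoryLog:
    """Hypothetical ML Inference History Log holding records for many MLApps."""
    records: List[InferenceRecord] = field(default_factory=list)

    def compile_history(self, ml_app_id: str) -> List[InferenceRecord]:
        # Compile the requested history from the stored decisions of one MLApp.
        return [r for r in self.records if r.ml_app_id == ml_app_id]
```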
  • Figure 6 illustrates one example architecture in which an ML Inference History Function may be provided in a management plane.
  • Figure 7 illustrates an example architecture in which an ML Inference History Function may be provided over an air interface.
  • Figure 7 illustrates an ML Inference History Module 701 comprised in a UE 702.
  • the ML Inference History Module 701 may receive a request for ML Model Inference History from an access point 703 (e.g., a gNB) .
  • the ML Inference History Module 701 may report the ML Model Inference History information through the access point 703.
  • information object classes (IOCs) and dataTypes may be defined for realizing an ML Inference History Function.
  • each Information Object Class is partitioned with similar attributes being grouped together. These groupings of Attributes are specified as independent modules and may be reused by other Information Object Classes.
  • Figure 8 illustrates a Managed Entity Proxy Class 801 that provides class names (such as, for example, Subnetwork, ManagedFunction, ManagementFunction) to an ML Inference History Function IOC 802.
  • a Proxy class is a class that implements a list of interfaces specified at runtime when the class is created.
  • the ML Inference History Function IOC attributes will now be described.
  • the ML Inference History Function IOC may be associated with one or more ML Models.
  • the ML Inference History Function IOC may be associated with ML Models via a list of MLModelIdentifiers.
  • the list of MLModelIdentifiers may be defined in the model of the IOC within an operating communication protocol (e.g., within a 3GPP specification) .
  • the list may be created each time the IOC is instantiated (e.g., by the operator).
  • the ML Inference History Function IOC may comprise attributes inherited from a Managed Function IOC (such as, for example, a ManagedFunction IOC defined in 3GPP TS 28.622 [30] ) and the following attributes:
  • IOC representing properties of ML Inference History Function Request IOC.
  • a consumer may create a new ML Inference History Function Request on the ML Inference History Function.
  • the ML Inference History Function Request may be an information object class that is instantiated for each request for ML Inference History.
  • the ML Inference History Function Request may be associated with an ML Inference History Function Reporting. However, this association is not necessarily a one to one mapping between an instance of ML Inference History Function Request and an ML Inference History Function Reporting instance, since multiple requests may be mapped to a single reporting instance. As such the ML Inference History Function Requests could be managed independently of the ML Inference History Function Reporting.
  • the association between ML Inference History Function Request and its corresponding ML Inference History Function Reporting may be written into the request by the ML Inference History Function after determining which ML Inference History Function Reporting IOC will provide reports for the specific request.
  • the ML Inference History Function Request may comprise a source to identify where it is coming from (e.g., to identify the consumer requesting the report).
  • the source may be, for example, an enumeration defined for network functions, operator roles, and/or other functional differentiations.
  • the source may be implemented as a value associated with an index, where the value and the index may be used to point towards/identify from where the request is being generated.
  • the ML Inference History Function Request IOC may comprise attributes inherited from Top IOC (such as, for example, the TOP IOC defined in 3GPP TS 28.622) .
  • the ML Inference History Function Request may comprise the following attributes:
  • IOC representing properties of ML Inference History Function Reporting IOC.
  • the ML Inference History Function Reporting IOC may represent the properties of ML Inference History Function Reporting.
  • the ML Inference History Function Reporting may represent the capability of compiling and delivering reports and notifications about ML Inference History Function or its associated ML Inference History Function Requests.
  • the ML Inference History Function may be associated with one or more instances of ML Inference History Function Reporting.
  • the ML Inference History Function Reporting may report on, and as such may be associated with, one or more ML Inference History Function Requests.
  • the ML Inference History Function Reporting may comprise an ML Inference History Function Reporting Matrix that defines the frequencies at which the report is to be sent and the report recipients which are the specific entities to which each report may be sent at each time instant.
  • the ML Inference History Function Reporting may be managed separately from the ML Inference History Function Requests. This is because an ML Inference History Function Reporting instance may provide reports on more than one ML Inference History Function Request.
  • the ML Inference History Function Reporting shall be associated with one or more ML Inference History Function Reports.
  • the ML Inference History Function Reporting IOC may comprise attributes inherited from a Top IOC (defined, for example, in TS 28.622) .
  • the ML Inference History Function Reporting IOC may comprise the following attributes:
  • the ML Model dataType (also referred to herein as an MLApp dataType) may represent the properties of an MLModel.
  • Multiple ML Inference History Function Requests may be submitted on a single MLModel but the ML Inference History Function Requests may be independent of the MLModel.
  • while the ML Inference History Function Requests may be associated with the MLModel, the MLModel does not need to be associated with any specific ML Inference History Function Request.
  • the MLModel does not need to be associated with any specific ML Inference History Function Reporting instance, since the data used to compile the ML Inference History Function Reports may be collected from a log that is independent of the MLModel.
  • the MLModel may comprise the attributes inherited from TOP IOC (for example, as defined in 3GPP TS 28.622) . Further, the model may be represented in a repository, such that its attributes (including a current state) may be retrieved from a repository function. Although the MLModel is modelled here as a dataType, it is understood that the MLModel may be modelled as an Information Object Class.
  • the ML Inference History Reporting IOC 803 and the ML Inference History Request IOC 805 output to the MLApp dataType 804.
  • the ML Inference History Request IOC 805 further outputs to a Reporting Context dataType 806 and to the ML Inference History Reporting IOC 803.
  • the ML Inference History Reporting IOC 803 further outputs to the ML Inference History Request IOC 805, the Reporting Context dataType 806, an ML Reporting Matrix dataType 807, and to an ML Inference History Report dataType 808.
  • the MLReportingMatrix may define the frequencies at which reports should be sent and the specific entities to which each report shall be sent at each time instant.
  • the frequencies at which reports are to be sent may be defined in terms of a ReportingPeriod (for example, in seconds as the time between 2 successive reports for that MLTrainingReporting instance) .
  • the MLReportingMatrix may comprise the following attributes:
  • the consumerList is a list of DistinguishedNames of the managed functions that have requested the related report and to which the report is to be delivered at the time instants defined by the ReportingPeriod.
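  • A minimal sketch of how the MLReportingMatrix described above might be held in memory, assuming a single reporting period shared by all consumers in the consumerList; apart from the attribute names quoted above, the Python names are illustrative.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MLReportingMatrix:
    """Frequency and recipients of reports for one reporting instance (illustrative)."""
    reporting_period_s: int                                   # time between two successive reports, in seconds
    consumer_list: List[str] = field(default_factory=list)    # DistinguishedNames of the requesting managed functions

    def add_consumer(self, distinguished_name: str) -> None:
        # A newly mapped request simply adds its consumer to the recipients of the report.
        if distinguished_name not in self.consumer_list:
            self.consumer_list.append(distinguished_name)
```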
  • the ML Inference History Function Report dataType represents the properties of ML Inference History Function Report.
  • a report may be generated when a value representing the Attribute is obtained, the value falling within the ValueRange while the Condition (s) is fulfilled.
  • the ML Inference History Function may have a capability for instantiating a ML Inference History Function Reporting instance based on a receipt of at least one request from at least one consumer function. This is illustrated with respect to Figure 11.
  • the request may comprise an ML Reporting Frequency.
  • This frequency represents a frequency at which the consumer is requesting to be kept informed about the ML Inference History Function. This frequency is provided when the at least one consumer wishes to receive the report on inference history more than once.
  • the producer 1102 instantiates an ML Inference History Reporting instance for Req1. From 11006, the producer 1102 proceeds to 11008.
  • the producer 1102 adds the requirements associated with Req1 to an existing MLInference History Reporting Instance. From 11007, the producer 1102 proceeds to 11008.
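  • The choice between 11006 (instantiate a new reporting instance) and 11007 (add the request's requirements to an existing instance) can be sketched as follows; the matching criteria follow the earlier description (same ML model, same context, same termination condition), while the class and function names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class ReportingInstance:
    """Hypothetical ML Inference History Reporting instance."""
    ml_model_id: str
    context: Dict[str, Any]
    termination_condition: Optional[str]
    consumers: List[str] = field(default_factory=list)

def map_request_to_reporting(request_model_id: str,
                             request_context: Dict[str, Any],
                             request_termination: Optional[str],
                             consumer: str,
                             existing: List[ReportingInstance]) -> ReportingInstance:
    """Reuse an existing reporting instance when model, context and termination condition all
    match the request; otherwise instantiate a new one (illustrative sketch)."""
    for instance in existing:
        if (instance.ml_model_id == request_model_id
                and instance.context == request_context
                and instance.termination_condition == request_termination):
            instance.consumers.append(consumer)   # add the requester to the report recipients
            return instance
    new_instance = ReportingInstance(request_model_id, request_context,
                                     request_termination, [consumer])
    existing.append(new_instance)
    return new_instance
```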
  • Figure 12 illustrates operations relating to reporting an ML Inference History for a given ML Inference History Function Request Instance.
  • the ML Inference History Function has the capability and a control interface to allow a consumer (e.g., the operator) to configure and manage one or more ML Inference History Function Reporting instances.
  • the control interface may enable the consumer to get the outcomes of the ML Inference History Function Reporting process. This may be achieved using the Notify procedure of the 3GPP provisioning management service as illustrated by Figure 12.
  • the at least one consumer 1301 and the producer 1302 exchange signalling for instantiating an ML Inference History Request (Req1) and Reporting Instance.
  • the at least one consumer 1301 signals the producer 1302.
  • This signalling of 13002 may comprise a request to retrieve an indication of attributes of a managed object instance.
  • the signalling may comprise a request to retrieve an indication of at least one attribute associated with respective at least one ML Inference History Reporting Instances.
  • the status may represent a determination of how far the reporting instance has been completed.
  • the provision of the status may be provided, for example, in response to a triggered data collection, data being collected, a result being reported (when only one report is to be made) , and/or continuous reporting instantiated (when the request relates to repetitive reporting) .
  • the status may correspond to a value assignment for each attribute in MLInferenceHistoryReporting. There may thus exist a job with a status attribute.
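  • A simplified sketch of the status read described above, loosely modelled on a getMOIAttributes-style operation; the job and status representation and all Python names are assumptions for illustration only.

```python
from typing import Dict

# Hypothetical store of MLInferenceHistoryReporting instances keyed by identifier.
reporting_jobs: Dict[str, Dict[str, str]] = {
    "reporting-1": {"status": "DATA_BEING_COLLECTED", "mLModelId": "model-42"},
}

def get_moi_attributes(instance_id: str) -> Dict[str, str]:
    """Return the current attribute values (including the status) of one reporting instance."""
    return dict(reporting_jobs[instance_id])

# The consumer reads how far the reporting instance has progressed.
print(get_moi_attributes("reporting-1")["status"])
```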
  • Figure 14 illustrates operations that may be performed for modifying at least one characteristic of one or more ML Inference History Reporting instances.
  • Figure 14 illustrates signalling that may be performed by at least one consumer 1401 and at least one producer 1402.
  • the at least one consumer and the producer may correspond to the at least one consumer and the producer of Figure 11 and/or Figure 12 and/or Figure 13, and may thus perform at least one of the actions described therein.
  • the at least one consumer 1401 and the producer 1402 may exchange signalling for instantiating an ML Inference History Request (Req1) and Reporting Instance. This may be as described above with reference to Figure 11.
  • the at least one consumer 1401 signals the producer 1402.
  • This signalling of 14002 may comprise a request to change a value associated with at least one attribute of at least one ML Inference History Reporting instance.
  • This signalling of 14002 may comprise a request to modify at least one attribute of a managed object instance.
  • This signalling of 14002 may comprise a modifyMOIAttributes service operation.
  • This signalling of 14002 may comprise at least one identifier associated with the at least one ML Inference History Reporting instances that are the subject of the signalling of 14002.
  • the signalling of 14002 may identify at least one attribute to be modified.
  • the producer 1402 signals the at least one consumer 1401.
  • This signalling may comprise an indication that a value associated with an attribute respectively associated with at least one ML Inference History Reporting instance has been modified from a previous value to a current value.
  • the signalling of 14003 may comprise an indication of the current value for any modified attribute identified in the signalling of 14003.
  • the signalling of 14003 may comprise at least one identifier associated with the at least one ML Inference History Reporting instances that are the subject of the signalling of 14003.
  • the signalling of 14003 may comprise a notifyMOIAttributeValueChanges service operation.
  • the signalling of Figure 14 may be configured to enable the consumer to configure new and ongoing ML Inference History Function Reporting instances.
  • the operator may assign priorities to one or more ML Inference History Function Reporting instances to indicate that in case of resource constraints, some particular ML Inference History Function Reporting instances with higher priority should be executed before ML Inference History Function Reporting instances with lower priorities are executed.
  • the configuration may be achieved using the modifyMOIAttributes procedure of the 3GPP provisioning management service as illustrated by Figure 14, although it is understood that alternative service operations may be used to effect the same function.
  • the ML Inference History Function producer may notify the consumer about the success of the executed update on the ML Inference History Function Reporting instance (s) .
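  • A rough sketch of the modify-and-notify exchange of Figure 14, assuming a dictionary-backed reporting instance; the operation name is kept close to the service operation mentioned above, but the Python interface itself is an illustrative assumption.

```python
from typing import Any, Dict, List, Tuple

reporting_instances: Dict[str, Dict[str, Any]] = {
    "reporting-1": {"priority": 3, "reportingPeriod": 900},
}

def modify_moi_attributes(instance_id: str,
                          changes: Dict[str, Any]) -> List[Tuple[str, Any, Any]]:
    """Apply the requested attribute changes and return (attribute, previous, current) tuples,
    mirroring what a notifyMOIAttributeValueChanges notification would carry (illustrative)."""
    instance = reporting_instances[instance_id]
    notifications = []
    for attribute, new_value in changes.items():
        previous = instance.get(attribute)
        instance[attribute] = new_value
        notifications.append((attribute, previous, new_value))
    return notifications

# The operator raises the priority of one reporting instance.
print(modify_moi_attributes("reporting-1", {"priority": 1}))
```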
  • Figure 15 illustrates operations that may be performed for deleting one or more ML Inference History Reporting instances.
  • Figure 15 illustrates signalling that may be performed by at least one consumer 1501 and at least one producer 1502.
  • the at least one consumer and the producer may correspond to the at least one consumer and the producer of Figure 11 and/or Figure 12 and/or Figure 13 and/or Figure 14, and may thus perform at least one of the actions described therein.
  • the at least one consumer 1501 and the producer 1502 may exchange signalling for instantiating an ML Inference History Request (Req1) and Reporting Instance. This may be as described above with reference to Figure 11.
  • the at least one consumer 1501 may signal the producer 1502.
  • This signalling of 15002 may comprise a request to delete a managed object instance.
  • this signalling may comprise a request to delete an ML Inference History Reporting instance.
  • This signalling of 15002 may comprise an identifier of the ML Inference History Reporting instance that is the subject of the signalling of 15002.
  • the producer 1502 signals the at least one consumer 1501.
  • This signalling of 15003 may comprise an indication that the ML Inference History Reporting instance that was the subject of the signalling of 15002 has been deleted.
  • This signalling of 15003 may comprise an identifier of the ML Inference History Reporting instance that has been deleted.
  • the signalling of 15003 may comprise an explicit indication that the identified ML Inference History Reporting instance has been deleted.
  • the signalling of 15003 may comprise a NotifyMOIDeletion service operation.
  • the signalling of 15003 may be considered as a response to the signalling of 15002 in that it indicates that the signalling of 15002 has been successful in its requested objective.
  • In Figure 15 the consumer is enabled to request the deletion of unwanted ML Inference History Function Reporting instances. This may be achieved using the deleteMOI procedure of the 3GPP provisioning management service as illustrated by Figure 15, although it is understood that other service operations may be used to effect this function. Following a delete request, the ML Inference History Function producer may notify the consumer about the success of the executed deletion of ML Inference History Function Reporting instances.
  • Figures 16 to 18 illustrate signalling that may be used for managing and controlling ML Model Inference History Requests.
  • Figure 16 illustrates operations that may be performed for reading characteristics of one or more ML Model Inference History Requests.
  • Figure 16 illustrates signalling that may be performed by at least one consumer 1601 and at least one producer 1602.
  • the at least one consumer and the producer may correspond to the at least one consumer and the producer of Figure 11 and/or Figure 12 and/or Figure 13 and/or Figure 14, and/or Figure 15, and may thus perform at least one of the actions described therein.
  • the at least one consumer 1601 and the producer 1602 may exchange signalling for instantiating an ML Inference History Request (Req1) and Reporting Instance. This may be as described above with reference to Figure 11.
  • the at least one consumer 1601 signals the producer 1602.
  • This signalling of 16002 may comprise a request for information relating to at least one attribute of an ML Inference History Request.
  • the signalling of 16002 may comprise a ProvMnS.getMOIAttributes service operation.
  • the signalling of 16002 may comprise an identifier of the ML Inference History Request that is the subject of the signalling of 16002.
  • the producer 1602 responds to the signalling of 16002.
  • This response may comprise an identifier of the ML Inference History Request to which the signalling of 16002 related.
  • This response may comprise data.
  • the data may comprise values associated with the instance of MLInferenceHistoryRequests. Therefore, the type of values may be as defined in the IOC of MLInferenceHistoryRequests.
  • This response may be effected by a File/Stream reporting service operation.
  • an ML Inference History Function has the capability and a control interface to allow a consumer (e.g., the operator) to configure and manage one or more ML Inference History Function Requests.
  • the control interface may, among other functionality, enable the consumer to read the characteristics of submitted ML Inference History Function Requests. This may be achieved using the getMOIAttributes procedure of the 3GPP provisioning management service as illustrated by Figure 16. For example, the consumer may request to read the number of submitted ML Inference History Function Requests and/or features (e.g. priorities, sources, ...) of the different ML Inference History Function Requests.
  • Figure 17 illustrates example signalling that may be performed for modifying at least one characteristic of one or more ML Inference History Requests.
  • Figure 17 illustrates signalling that may be performed by at least one consumer 1701 and at least one producer 1702.
  • the at least one consumer and the producer may correspond to the at least one consumer and the producer of Figure 11 and/or Figure 12 and/or Figure 13 and/or Figure 14, and/or Figure 15, and/or Figure 16 and may thus perform at least one of the actions described therein.
  • the at least one consumer 1701 and the producer 1702 may exchange signalling for instantiating an ML Inference History Request (Req1) and Reporting Instance. This may be as described above with reference to Figure 11.
  • the producer 1702 may respond to the signalling of 17002.
  • This response may comprise an indication that a value associated with an attribute respectively associated with at least one ML Inference History Request instance has been modified from a previous value to a current value.
  • the signalling of 17003 may comprise an indication of the current value for any modified attribute identified in the signalling of 17003.
  • the signalling of 17003 may comprise at least one identifier associated with the at least one ML Inference History Request instances that are the subject of the signalling of 17003.
  • the signalling of 17003 may comprise a notifyMOIAttributeValueChanges service operation.
  • the control interface may be configured to enable the consumer to configure the submitted ML Inference History Function Requests, e.g., the operator may change the priorities of one or more ML Inference History Function Requests to indicate those that need to be prioritised regarding the instantiation of the related ML Inference History Function Reporting instances.
  • the control interface may enable a consumer to update the priority of an ML Inference History Function Request previously sent by that consumer.
  • the producer has to decide whether the modified request can still be served by the same original job or whether a new job needs to be instantiated.
  • the configuration may be achieved using the modifyMOIAttributes procedure of the 3GPP provisioning management service as illustrated by Figure 18.
  • the ML Inference History Function may notify the consumer about the success of the executed updates on the ML Inference History Function Requests
  • Figure 18 illustrates signalling that may be performed by at least one consumer 1801 and at least one producer 1802.
  • the at least one consumer and the producer may correspond to the at least one consumer and the producer of Figure 11 and/or Figure 12 and/or Figure 13 and/or Figure 14 and/or Figure 15 and/or Figure 16 and/or Figure 17, and may thus perform at least one of the actions described therein.
  • the at least one consumer 1801 and the producer 1802 may exchange signalling for instantiating an ML Inference History Request (Req1) and Reporting Instance. This may be as described above with reference to Figure 11.
  • the producer 1802 signals the at least one consumer 1801.
  • This signalling of 18003 may comprise an indication that the ML Inference History request instance that was the subject of the signalling of 18002 has been deleted.
  • This signalling of 18003 may comprise an identifier of the ML Inference History request instance that has been deleted.
  • the signalling of 18003 may comprise an explicit indication that the identified ML Inference History request instance has been deleted.
  • the signalling of 18003 may comprise a NotifyMOIDeletion service operation.
  • the signalling of 18003 may be considered as a response to the signalling of 18002 in that it indicates that the signalling of 18002 has been successful in its requested objective.
  • the control interface may be configured to enable the consumer to delete unwanted ML Inference History Function Requests. This may be achieved using the deleteMOI procedure of the 3GPP provisioning management service, as illustrated by Figure 18.
  • the producer receiving this request may either delete the corresponding job when that job serves only that one request, or the producer modifies attributes of the job to remove the requirements of the deleted request.
  • the ML Inference History Function may notify the consumer about the success of the executed deletion of ML Inference History Function Requests.
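  • The producer-side handling described above (drop the whole job when it serves only the deleted request, otherwise strip only that request's requirements) might look as follows; the job structure and all names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ReportingJob:
    """Hypothetical reporting job serving one or more inference-history requests."""
    served_requests: List[str] = field(default_factory=list)

def handle_request_deletion(jobs: Dict[str, ReportingJob], job_id: str, request_id: str) -> str:
    job = jobs[job_id]
    job.served_requests.remove(request_id)
    if not job.served_requests:
        # The job served only the deleted request, so the whole job is removed.
        del jobs[job_id]
        return f"job {job_id} deleted"
    # Otherwise the job is kept and only the deleted request's requirements are dropped.
    return f"request {request_id} removed from job {job_id}"

jobs = {"job-1": ReportingJob(served_requests=["req-1", "req-2"])}
print(handle_request_deletion(jobs, "job-1", "req-1"))
```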
  • Figures 19 and 20 illustrate example aspects of the apparatus described above. It is therefore understood that features of the above examples may be implemented in the following without loss of generality.
  • the apparatus signals, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision.
  • the second function may be as described below in relation to Figure 20.
  • the apparatus receives the requested information from the second function.
  • the apparatus may signal, to the second function, a request to modify at least one parameter comprised in the request, and receive an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
  • the apparatus may signal, to the second function, a request to delete any reporting associated with the request, and receive an indication that the second function will no longer report the requested information to the first function.
  • the apparatus may use the received requested information to determine whether to change at least one network parameter and/or machine learning decision. When it is determined to change at least one network parameter and/or machine learning decision in dependence on the received information, this may be caused to happen.
  • the request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
  • Figure 20 illustrates operations that may be performed by an apparatus for a second function.
  • the second function may be, for example, an ML Inference History Function.
  • the apparatus receives, from a first function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision.
  • the first function may be the apparatus of Figure 19.
  • the apparatus obtains the requested information.
  • the apparatus signals the requested information to the first function.
  • the obtaining the requested information may comprise retrieving information related to said at least one decision from a storage function, and processing the retrieved information to form the requested information.
  • the apparatus may, in response to said receiving, determine that the second function is currently maintaining a first reporting instance in respect of said requested information, and update the first instance to comprise an instruction to signal the requested information to the first function.
  • the updating the first instance may comprise associating, in the first instance, an indication of a frequency at which the first function is requesting to be provided with the requested information with an identifier of the first function.
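  • A minimal sketch of that association, assuming the first (existing) reporting instance keeps a per-consumer map from the requesting function's identifier to its requested reporting frequency; the names are illustrative.

```python
from typing import Dict

# Hypothetical reporting instance: maps each requesting function's identifier
# to the reporting frequency (in seconds) it asked for.
first_instance: Dict[str, int] = {"consumer-A": 900}

def associate_consumer(instance: Dict[str, int], consumer_id: str, frequency_s: int) -> None:
    """Record that reports must also be signalled to this consumer at its requested frequency."""
    instance[consumer_id] = frequency_s

associate_consumer(first_instance, "consumer-B", 300)
```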
  • the apparatus may, in response to said receiving, determine that the second function is not currently maintaining a first reporting instance in respect of said requested information, and instantiate a second instance that comprises an instruction to signal the requested information to the first function.
  • the determining that the second function is not currently maintaining a first reporting instance in respect of said requested information may comprise at least one of: determining that the received request comprises an indication of a different machine learning model to any machine learning model that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different context to any context that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different termination condition to any termination condition that is a subject of the first reporting instance.
  • the apparatus may provide the first function with an identifier of the first instance and/or with an identifier of the second instance.
  • the obtaining the requested information may comprise: determining filter conditions for obtaining the requested information; and applying the determined filter conditions to collect the requested information.
  • the filter conditions may be obtained from the MLInferenceHistoryRequest.
  • a ReportingContext may be indicated, where the ReportingContext is a list having at least one entry, each entry in the list being represented/defined by three fields: <Attribute, Condition, ValueRange>.
  • the Attribute field may indicate a certain Key Performance Indicator name, which could be used by the second function to determine where to apply a filter.
  • the second function may apply further filtering using the Condition and ValueRange fields. When all of the filtering has been performed, the second function may generate a complete report (MLInferenceHistoryReporting) as requested.
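  • As a simplified illustration of applying such <Attribute, Condition, ValueRange> entries to the stored records, the following sketch keeps only the records whose named attribute satisfies every filter entry; the record layout, the "WITHIN" condition name and the function names are assumptions.

```python
from typing import Any, Dict, List, Tuple

# One ReportingContext entry: (Attribute, Condition, ValueRange), e.g. a KPI name,
# an assumed condition keyword and the range of values of interest.
ContextEntry = Tuple[str, str, Tuple[float, float]]

def apply_reporting_context(records: List[Dict[str, Any]],
                            reporting_context: List[ContextEntry]) -> List[Dict[str, Any]]:
    """Keep the records whose attribute values satisfy every filter entry (illustrative)."""
    def matches(record: Dict[str, Any], entry: ContextEntry) -> bool:
        attribute, condition, (low, high) = entry
        value = record.get(attribute)
        if value is None:
            return False
        inside = low <= value <= high
        return inside if condition == "WITHIN" else not inside  # "WITHIN" is an assumed condition name

    return [r for r in records if all(matches(r, e) for e in reporting_context)]

records = [{"kpi.throughput": 40.0, "decision": "scale-up"},
           {"kpi.throughput": 95.0, "decision": "no-op"}]
print(apply_reporting_context(records, [("kpi.throughput", "WITHIN", (0.0, 50.0))]))
```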
  • the apparatus may receive, from the first function, a request to modify at least one parameter comprised in the request, and signal, to the first function, an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
  • the apparatus may receive, from the first function, a request to delete any reporting associated with the request, and signal an indication that the second function will no longer report the requested information to the first function.
  • the request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
  • the presently disclosed mechanisms enable an authorized consumer to request and receive the contextualized history of ML inferences.
  • Such history can be used, for example, to determine the appropriateness of different decisions in different contexts and to determine whether ML model retraining is to be triggered.
  • ML model retraining may be triggered when it is determined that a plurality of relatively poor decisions have been made.
  • a determination of whether or not a plurality of relatively poor decisions have been made may be performed using predetermined criteria. For example, a decision may be determined to be poor when it does not result in the improvement that it was intended to achieve when implemented and/or when any improvement that it was intended to achieve was minimal, relative to a predetermined threshold.
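  • A small numeric sketch of that criterion, assuming each historical decision carries the improvement it actually achieved; the threshold values and the function name are illustrative only.

```python
from typing import List

def should_retrain(achieved_improvements: List[float],
                   min_improvement: float = 0.05,
                   max_poor_decisions: int = 3) -> bool:
    """Trigger retraining once more than a predetermined number of decisions
    fail to reach the predetermined improvement threshold (illustrative)."""
    poor = sum(1 for improvement in achieved_improvements if improvement < min_improvement)
    return poor > max_poor_decisions

# Four of these five decisions achieved less than the 5% improvement threshold.
print(should_retrain([0.01, 0.00, 0.02, 0.10, 0.03]))  # True
```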
  • the provided history information may be used to compare the decisions taken by different ML models or ML applications, e.g. before selecting a model or application to employ in a given context.
  • Figure 2 shows an example of a control apparatus for a communication system, for example to be coupled to and/or for controlling a station of an access system, such as a RAN node, e.g. a base station, gNB, a central unit of a cloud architecture or a node of a core network such as an MME or S-GW, a scheduling entity such as a spectrum management entity, or a server or host, for example an apparatus hosting an NRF, NWDAF, AMF, SMF, UDM/UDR, and so forth.
  • the control apparatus may be integrated with or external to a node or module of a core network or RAN.
  • base stations comprise a separate control apparatus unit or module.
  • control apparatus can be another network element, such as a radio network controller or a spectrum controller.
  • the control apparatus 200 can be arranged to provide control on communications in the service area of the system.
  • the apparatus 200 comprises at least one memory 201, at least one data processing unit 202, 203 and an input/output interface 204. Via the interface the control apparatus can be coupled to a receiver and a transmitter of the apparatus.
  • the receiver and/or the transmitter may be implemented as a radio front end or a remote radio head.
  • the control apparatus 200 or processor 202, 203 can be configured to execute an appropriate software code to provide the control functions.
  • a possible wireless communication device will now be described in more detail with reference to Figure 3 showing a schematic, partially sectioned view of a communication device 300.
  • a communication device is often referred to as user equipment (UE) or terminal.
  • An appropriate mobile communication device may be provided by any device capable of sending and receiving radio signals.
  • Non-limiting examples comprise a mobile station (MS) or mobile device such as a mobile phone or what is referred to as a ’smart phone’ , a computer provided with a wireless interface card or other wireless interface facility (e.g., USB dongle) , personal data assistant (PDA) or a tablet provided with wireless communication capabilities, or any combinations of these or the like.
  • a mobile communication device may provide, for example, communication of data for carrying communications such as voice, electronic mail (email) , text message, multimedia and so on. Users may thus be offered and provided numerous services via their communication devices. Non-limiting examples of these services comprise two-way or multi-way calls, data communication or multimedia services or simply an access to a data communications network system, such as the Internet. Users may also be provided broadcast or multicast data. Non-limiting examples of the content comprise downloads, television and radio programs, videos, advertisements, various alerts and other information.
  • a wireless communication device may be for example a mobile device, that is, a device not fixed to a particular location, or it may be a stationary device.
  • the wireless device may need human interaction for communication, or may not need human interaction for communication.
  • the terms UE or “user” are used to refer to any type of wireless communication device.
  • the wireless device 300 may receive signals over an air or radio interface 307 via appropriate apparatus for receiving and may transmit signals via appropriate apparatus for transmitting radio signals.
  • a transceiver apparatus is designated schematically by block 306.
  • the transceiver apparatus 306 may be provided, for example, by means of a radio part and associated antenna arrangement.
  • the antenna arrangement may be arranged internally or externally to the wireless device.
  • a wireless device is typically provided with at least one data processing entity 301, at least one memory 302 and other possible components 303 for use in software and hardware aided execution of tasks it is designed to perform, including control of access to and communications with access systems and other communication devices.
  • the data processing, storage and other relevant control apparatus can be provided on an appropriate circuit board and/or in chipsets. This feature is denoted by reference 304.
  • the user may control the operation of the wireless device by means of a suitable user interface such as keypad 305, voice commands, touch sensitive screen or pad, combinations thereof or the like.
  • a display 308, a speaker and a microphone can be also provided.
  • a wireless communication device may comprise appropriate connectors (either wired or wireless) to other devices and/or for connecting external accessories, for example hands-free equipment, thereto.
  • Figure 4 shows a schematic representation of non-volatile memory media 400a (e.g. compact disc (CD) or digital versatile disc (DVD) ) and 400b (e.g. universal serial bus (USB) memory stick) storing instructions and/or parameters 402 which when executed by a processor allow the processor to perform one or more of the steps of the methods of Figure 19, and/or Figure 20, and/or methods otherwise described previously.
  • the examples may be implemented by computer software stored in a memory and executable by at least one data processor of the involved entities or by hardware, or by a combination of software and hardware.
  • any procedures e.g., as in Figure 19, and/or Figure 20, and/or otherwise described previously, may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
  • the software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media (such as hard disk or floppy disks) , and optical media (such as for example DVD and the data variants thereof, CD, and so forth) .
  • the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
  • the data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) , application specific integrated circuits (ASIC) , gate level circuits and processors based on multicore processor architecture, as nonlimiting examples.
  • circuitry may be configured to perform one or more of the functions and/or method steps previously described. That circuitry may be provided in the base station and/or in the communications device and/or in a core network entity.
  • circuitry may refer to one or more or all of the following:
  • circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware.
  • circuitry also covers, for example, an integrated device.
  • UMTS universal mobile telecommunications system
  • UTRAN UMTS terrestrial radio access network
  • WiFi wireless local area network
  • WiMAX worldwide interoperability for microwave access
  • PCS personal communications services
  • WCDMA wideband code division multiple access
  • UWB ultra-wideband
  • sensor networks
  • MANETs mobile ad-hoc networks
  • IMS Internet Protocol multimedia subsystems
  • Figure 5 depicts examples of simplified system architectures only showing some elements and functional entities, all being logical units, whose implementation may differ from what is shown.
  • the connections shown in Figure 5 are logical connections; the actual physical connections may be different. It is apparent to a person skilled in the art that the system typically comprises also other functions and structures than those shown in Figure 5.
  • the example of Figure 5 shows a part of an exemplifying radio access network.
  • the radio access network may support sidelink communications described below in more detail.
  • Figure 5 shows devices 500 and 502.
  • the devices 500 and 502 are configured to be in a wireless connection on one or more communication channels with a node 504.
  • the node 504 is further connected to a core network 506.
  • the node 504 may be an access node such as (e/g) NodeB serving devices in a cell.
  • the node 504 may be a non-3GPP access node.
  • the physical link from a device to a (e/g) NodeB is called uplink or reverse link and the physical link from the (e/g) NodeB to the device is called downlink or forward link.
  • (e/g) NodeBs or their functionalities may be implemented by using any node, host, server or access point etc. entity suitable for such a usage.
  • a communications system typically comprises more than one (e/g) NodeB in which case the (e/g) NodeBs may also be configured to communicate with one another over links, wired or wireless, designed for the purpose. These links may be used for signalling purposes.
  • the (e/g) NodeB is a computing device configured to control the radio resources of communication system it is coupled to.
  • the NodeB may also be referred to as a base station, an access point or any other type of interfacing device including a relay station capable of operating in a wireless environment.
  • the (e/g) NodeB includes or is coupled to transceivers. From the transceivers of the (e/g) NodeB, a connection is provided to an antenna unit that establishes bi-directional radio links to devices.
  • the antenna unit may comprise a plurality of antennas or antenna elements.
  • the (e/g) NodeB is further connected to the core network 506 (CN or next generation core NGC) .
  • the (e/g) NodeB is connected to a serving and packet data network gateway (S-GW +P-GW) or user plane function (UPF) , for routing and forwarding user data packets and for providing connectivity of devices to one or more external packet data networks, and to a mobile management entity (MME) or access mobility management function (AMF) , for controlling access and mobility of the devices.
  • Examples of a device are a subscriber unit, a user device, a user equipment (UE) , a user terminal, a terminal device, a mobile station, a mobile device, etc.
  • the device typically refers to a mobile or static device (e.g. a portable or non-portable computing device) that includes wireless mobile communication devices operating with or without a universal subscriber identification module (USIM) , including, but not limited to, the following types of devices: mobile phone, smartphone, personal digital assistant (PDA) , handset, device using a wireless modem (alarm or measurement device, etc. ) , laptop and/or touch screen computer, tablet, game console, notebook, and multimedia device. It should be appreciated that a device may also be a nearly exclusive uplink only device, of which an example is a camera or video camera loading images or video clips to a network.
  • a device may also be a device having capability to operate in Internet of Things (IoT) network which is a scenario in which objects are provided with the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction, e.g. to be used in smart power grids and connected vehicles.
  • the device may also utilise the cloud.
  • a device may comprise a user portable device with radio parts (such as a watch, earphones or eyeglasses) and the computation is carried out in the cloud.
  • the device illustrates one type of an apparatus to which resources on the air interface are allocated and assigned, and thus any feature described herein with a device may be implemented with a corresponding apparatus, such as a relay node.
  • a relay node is a layer 3 relay (self-backhauling relay) towards the base station.
  • the device (or, in some examples, a layer 3 relay node) is configured to perform one or more of user equipment functionalities.
  • CPS: cyber-physical system, comprising interconnected information and communications technology (ICT) devices (sensors, actuators, processors, microcontrollers, etc.)
  • Mobile cyber-physical systems, in which the physical system in question has inherent mobility, are a subcategory of cyber-physical systems. Examples of mobile physical systems include mobile robotics and electronics transported by humans or animals.
  • although apparatuses have been depicted as single entities, different units, processors and/or memory units (not all shown in Figure 5) may be implemented.
  • 5G enables using multiple input – multiple output (MIMO) antennas, many more base stations or nodes than the LTE (a so-called small cell concept) , including macro sites operating in co-operation with smaller stations and employing a variety of radio technologies depending on service needs, use cases and/or spectrum available.
  • 5G mobile communications supports a wide range of use cases and related applications including video streaming, augmented reality, different ways of data sharing and various forms of machine type applications (such as (massive) machine-type communications (mMTC) , including vehicular safety, different sensors and real-time control) .
  • 5G is expected to have multiple radio interfaces, e.g. below 6 GHz, cmWave and mmWave.
  • 5G is planned to support both inter-RAT operability (such as LTE-5G) and inter-RI operability (inter-radio interface operability, such as below 6 GHz – cmWave, 6 or above 24 GHz – cmWave and mmWave) .
  • One of the concepts considered to be used in 5G networks is network slicing in which multiple independent and dedicated virtual sub-networks (network instances) may be created within the same infrastructure to run services that have different requirements on latency, reliability, throughput and mobility.
  • the LTE network architecture is fully distributed in the radio and fully centralized in the core network.
  • the low latency applications and services in 5G require bringing the content close to the radio, which leads to local break out and multi-access edge computing (MEC) .
  • 5G enables analytics and knowledge generation to occur at the source of the data. This approach requires leveraging resources that may not be continuously connected to a network such as laptops, smartphones, tablets and sensors.
  • MEC provides a distributed computing environment for application and service hosting. It also has the ability to store and process content in close proximity to cellular subscribers for faster response time.
  • Edge computing covers a wide range of technologies such as wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing also classifiable as local cloud/fog computing and grid/mesh computing, dew computing, mobile edge computing, cloudlet, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented and virtual reality, data caching, Internet of Things (massive connectivity and/or latency critical) , critical communications (autonomous vehicles, traffic safety, real-time analytics, time-critical control, healthcare applications) .
  • the communication system is also able to communicate with other networks 512, such as a public switched telephone network, or a VoIP network, or the Internet, or a private network, or utilize services provided by them.
  • the communication network may also be able to support the usage of cloud services, for example at least part of core network operations may be carried out as a cloud service (this is depicted in Figure 5 by “cloud” 514) . This may also be referred to as Edge computing when performed away from the core network.
  • the communication system may also comprise a central control entity, or the like, providing facilities for networks of different operators to cooperate for example in spectrum sharing.
  • Edge computing may be brought into a radio access network (RAN) by utilizing network function virtualization (NFV) and software defined networking (SDN) .
  • Using the technology of edge cloud may mean that access node operations are carried out, at least partly, in a server, host or node operationally coupled to a remote radio head or base station comprising radio parts. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts.
  • Application of cloudRAN architecture enables RAN real-time functions to be carried out at or close to a remote antenna site (in a distributed unit, DU 508) and non-real-time functions to be carried out in a centralized manner (in a centralized unit, CU 510) .
  • 5G may also utilize satellite communication to enhance or complement the coverage of 5G service, for example by providing backhauling.
  • Possible use cases are providing service continuity for machine-to-machine (M2M) or Internet of Things (IoT) devices or for passengers on board of vehicles, Mobile Broadband (MBB) , or ensuring service availability for critical communications, and future railway/maritime/aeronautical communications.
  • Satellite communication may utilise geostationary earth orbit (GEO) satellite systems, but also low earth orbit (LEO) satellite systems, in particular mega-constellations (systems in which hundreds of (nano) satellites are deployed) .
  • Each satellite in the mega-constellation may cover several satellite-enabled network entities that create on-ground cells.
  • the on-ground cells may be created through an on-ground relay node or by a gNB located on-ground or in a satellite.
  • the depicted system is only an example of a part of a radio access system and in practice, the system may comprise a plurality of (e/g) NodeBs, the device may have an access to a plurality of radio cells and the system may comprise also other apparatuses, such as physical layer relay nodes or other network elements, etc. At least one of the (e/g) NodeBs may be a Home (e/g) nodeB. Additionally, in a geographical area of a radio communication system a plurality of different kinds of radio cells as well as a plurality of radio cells may be provided.
  • Radio cells may be macro cells (or umbrella cells) which are large cells, usually having a diameter of up to tens of kilometers, or smaller cells such as micro-, femto-or picocells.
  • the (e/g) NodeBs of Figure 5 may provide any kind of these cells.
  • a cellular radio system may be implemented as a multilayer network including several kinds of cells. Typically, in multilayer networks, one access node provides one kind of a cell or cells, and thus a plurality of (e/g) NodeBs are required to provide such a network structure.
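The following is a minimal, hypothetical Python sketch of the retraining-trigger check referred to in the examples above: reported inference-history records are evaluated against a predetermined improvement threshold, and retraining is suggested when a plurality of relatively poor decisions is counted. The record field names, threshold values and function names are illustrative assumptions only and are not mandated by this disclosure.

```python
# Illustrative only: evaluate a received inference-history report to decide
# whether ML model retraining should be triggered.

def is_poor_decision(record, min_improvement=0.05):
    # A decision is treated as poor when the improvement it was intended to
    # achieve was not realised, or was minimal relative to a threshold.
    return record.get("observed_improvement", 0.0) < min_improvement

def retraining_needed(history_report, poor_decision_limit=10):
    # Trigger retraining when a plurality of relatively poor decisions is found.
    poor = sum(1 for record in history_report["decisions"] if is_poor_decision(record))
    return poor >= poor_decision_limit
```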

Abstract

There is provided an apparatus, method and computer program for an apparatus for a first function that causes the apparatus to perform: signalling, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; and receiving the requested information from the second function.

Description

APPARATUS, METHODS, AND COMPUTER PROGRAMS Field
Various examples described herein generally relate to apparatus, methods, and computer programs, and more particularly (but not exclusively) to apparatus, methods and computer programs for network apparatuses.
Background
In general, a communication system can be seen as a facility that enables communication sessions between two or more entities such as user terminals, access nodes and/or other nodes by providing carriers between the various entities involved in the communications path. A communication system can be provided, for example, by means of a communication network and one or more compatible communication devices. The communication sessions may comprise, for example, communication of data for carrying communications such as voice, electronic mail (email) , text message, multimedia and/or content data and so on. Content may be multicast or uni-cast to communication devices.
A user can access the communication system by means of an appropriate communication device or terminal. A communication device of a user is often referred to as user equipment (UE) or user device. The communication device may access a carrier provided by an access node and transmit and/or receive communications on the carrier.
The communication system and associated devices typically operate in accordance with a required standard or specification which sets out what the various entities associated with the system are permitted to do and how that should be achieved. Communication protocols and/or parameters which shall be used for the connection are also typically defined. One example of a communications system is UTRAN (3G radio) . Another example of an architecture is the long-term evolution (LTE) or the Universal Mobile Telecommunications System (UMTS) radio-access technology. Another example communication system is so called 5G system that allows user equipment (UE) or user device to contact a 5G core via e.g. new radio (NR) access technology or via other access technology such as Untrusted access to 5GC or wireline access technology.
In 5G, a UE Registration Area (RA) comprises a list of one or more Tracking Areas (TA) . A Tracking Area is a logical concept of an area where a UE can move around without updating the network. The network can allocate a list with one or more TAs to the UE.
Summary
According to a first aspect, there is provided an apparatus for a first function, the apparatus comprising means for performing: signalling, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; and receiving the requested information from the second function.
The apparatus may comprise means for performing: signalling, to the second function, a request to modify at least one parameter comprised in the request; and receiving an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
The apparatus may comprise means for performing: signalling, to the second function, a request to delete any reporting associated with the request; and receiving an indication that the second function will no longer report the requested information to the first function.
The request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
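As a purely illustrative aid, the following is a minimal Python sketch of the request described above, assuming a simple in-memory representation; apart from the MLInferenceHistoryRequest name used elsewhere in this disclosure, the class layout and field names (including the termination condition, which reflects the instance handling described later) are assumptions and not mandated by the first aspect.

```python
# Illustrative only: one possible representation of the request sent by the
# first function (the consumer) to the second function (the producer).
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MLInferenceHistoryRequest:
    ml_model_id: Optional[str] = None            # identifier of the machine learning model
    ml_app_id: Optional[str] = None              # identifier of the ML application comprising the model
    reporting_context: List[Tuple[str, str, tuple]] = field(default_factory=list)  # context to be considered
    reporting_frequency_s: Optional[int] = None  # how often the first function wishes to receive the information
    termination_condition: Optional[str] = None  # when the second function may stop reporting

# Example: request hourly reports on decisions taken by a hypothetical
# handover-optimisation model.
request = MLInferenceHistoryRequest(ml_model_id="ho-opt-model-1", reporting_frequency_s=3600)
```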
According to a second aspect, there is provided an apparatus for a second function, the apparatus comprising means for performing: receiving, from a first function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; obtaining the requested information; and signalling the requested information to the first function.
The means for obtaining the requested information may comprise means for performing: retrieving information related to said at least one decision from a storage function; and processing the retrieved information to form the requested information.
The apparatus may comprise means for performing: in response to said receiving, determining that the second function is currently maintaining a first reporting instance in respect of said requested information; and updating the first instance to comprise an instruction to signal the requested information to the first function.
The means for updating the first instance may comprise means for performing: associating, in the first instance, an indication of a frequency at which the first function is requesting to be provided with the requested information with an identifier of the first function.
The apparatus may comprise means for performing: in response to said receiving, determining that the second function is not currently maintaining a first reporting instance in respect of said requested information; and instantiating a second instance that comprises an instruction to signal the requested information to the first function.
The means for performing determining that the second function is not currently maintaining a first reporting instance in respect of said requested information may comprise means for performing at least one of: determining that the received request comprises an indication of a different machine learning model to any machine learning model that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different context to any context that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different termination condition to any termination condition that is a subject of the first reporting instance.
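As a purely illustrative aid, the following Python sketch shows one way the second function could decide whether an existing reporting instance already covers a received request (same model, context and termination condition) and either update it with the requesting consumer or instantiate a new instance; the dictionary layout and helper names are assumptions and not mandated by this disclosure.

```python
# Illustrative only: update an existing reporting instance or instantiate a new one.

def matches_existing_instance(request, instance):
    # The request is treated as already covered only if the model, the context
    # and the termination condition all match the existing instance.
    return (request.ml_model_id == instance["ml_model_id"]
            and request.reporting_context == instance["reporting_context"]
            and request.termination_condition == instance["termination_condition"])

def handle_history_request(request, consumer_id, instances):
    for instance in instances:
        if matches_existing_instance(request, instance):
            # Existing (first) instance: associate the consumer and its requested
            # reporting frequency with the instance.
            instance["consumers"][consumer_id] = request.reporting_frequency_s
            return instance["instance_id"]
    # No match: instantiate a second (new) reporting instance for this request.
    new_instance = {
        "instance_id": f"reporting-{len(instances) + 1}",
        "ml_model_id": request.ml_model_id,
        "reporting_context": request.reporting_context,
        "termination_condition": request.termination_condition,
        "consumers": {consumer_id: request.reporting_frequency_s},
    }
    instances.append(new_instance)
    return new_instance["instance_id"]  # identifier that may be provided to the first function
```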
The apparatus may comprise means for performing: providing the first function with an identifier of the first instance and/or with an identifier of the second instance.
The means for obtaining the requested information may comprise means for performing: determining filter conditions for obtaining the requested information; and applying the determined filter conditions to collect the requested information.
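As a purely illustrative aid, the following Python sketch shows one way the determined filter conditions could be applied, assuming the ReportingContext carried in the MLInferenceHistoryRequest is a list of <Attribute, Condition, ValueRange> entries as in the examples given earlier; the record layout, the condition names and the report structure are assumptions and not mandated by this disclosure.

```python
# Illustrative only: apply ReportingContext filter entries to stored
# inference-history records and assemble the report.

def entry_matches(record, attribute, condition, value_range):
    value = record.get(attribute)
    if value is None:
        return False
    if condition == "EQUAL":
        return value in value_range
    if condition == "GREATER_THAN":
        return value > value_range[0]
    if condition == "WITHIN":
        low, high = value_range
        return low <= value <= high
    return False  # unknown condition: exclude the record

def build_report(history_records, reporting_context):
    # Keep only the records that satisfy every filter entry, then return them
    # as the requested report (MLInferenceHistoryReporting).
    filtered = [r for r in history_records
                if all(entry_matches(r, a, c, v) for (a, c, v) in reporting_context)]
    return {"MLInferenceHistoryReporting": filtered}
```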
The apparatus may comprise means for performing: receiving, from the first function, a request to modify at least one parameter comprised in the request; and signalling, to the first function, an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
The apparatus may comprise means for performing: receiving, from the first function, a request to delete any reporting associated with the request; and signalling  an indication that the second function will no longer report the requested information to the first function.
The request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
According to a third aspect, there is provided an apparatus for a first function, the apparatus comprising: at least one processor; and at least one memory comprising code that, when executed by the at least one processor, causes the apparatus to: signal, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; and receive the requested information from the second function.
The apparatus may be caused to perform: signalling, to the second function, a request to modify at least one parameter comprised in the request; and receiving an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
The apparatus may be caused to perform: signalling, to the second function, a request to delete any reporting associated with the request; and receiving an indication that the second function will no longer report the requested information to the first function.
The request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
According to a fourth aspect, there is provided an apparatus for a second function, the apparatus comprising at least one processor; and at least one memory comprising code that, when executed by the at least one processor, causes the apparatus to: receive, from a first function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; obtain the requested information; and signal the requested information to the first function.
The obtaining the requested information may comprise performing: retrieving information related to said at least one decision from a storage function; and processing the retrieved information to form the requested information.
The apparatus may be caused to perform: in response to said receiving, determining that the second function is currently maintaining a first reporting instance in respect of said requested information; and updating the first instance to comprise an instruction to signal the requested information to the first function.
The updating the first instance may comprise performing: associating, in the first instance, an indication of a frequency at which the first function is requesting to be provided with the requested information with an identifier of the first function.
The apparatus may be caused to perform: in response to said receiving, determining that the second function is not currently maintaining a first reporting instance in respect of said requested information; and instantiating a second instance that comprises an instruction to signal the requested information to the first function.
The performing determining that the second function is not currently maintaining a first reporting instance in respect of said requested information may comprise performing at least one of: determining that the received request comprises an indication of a different machine learning model to any machine learning model that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different context to any context that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different termination condition to any termination condition that is a subject of the first reporting instance.
The apparatus may be caused to perform: providing the first function with an identifier of the first instance and/or with an identifier of the second instance.
The obtaining the requested information may comprise performing: determining filter conditions for obtaining the requested information; and applying the determined filter conditions to collect the requested information.
The apparatus may be caused to perform: receiving, from the first function, a request to modify at least one parameter comprised in the request; and signalling, to the first function, an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
The apparatus may be caused to perform: receiving, from the first function, a request to delete any reporting associated with the request; and signalling an indication that the second function will no longer report the requested information to the first function.
The request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
According to a fifth aspect, there is provided a method for an apparatus for a first function, the method comprising performing: signalling, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; and receiving the requested information from the second function.
The method may comprise performing: signalling, to the second function, a request to modify at least one parameter comprised in the request; and receiving an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
The method may comprise performing: signalling, to the second function, a request to delete any reporting associated with the request; and receiving an indication that the second function will no longer report the requested information to the first function.
The request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
According to a sixth aspect, there is provided a method for an apparatus for a second function, the method comprising performing: receiving, from a first function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; obtaining the requested information; and signalling the requested information to the first function.
The obtaining the requested information may comprise performing: retrieving information related to said at least one decision from a storage function; and processing the retrieved information to form the requested information.
The method may comprise performing: in response to said receiving, determining that the second function is currently maintaining a first reporting instance in respect of said requested information; and updating the first instance to comprise an instruction to signal the requested information to the first function.
The updating the first instance may comprise performing: associating, in the first instance, an indication of a frequency at which the first function is requesting to be provided with the requested information with an identifier of the first function.
The method may comprise performing: in response to said receiving, determining that the second function is not currently maintaining a first reporting instance in respect of said requested information; and instantiating a second instance that comprises an instruction to signal the requested information to the first function.
The performing determining that the second function is not currently maintaining a first reporting instance in respect of said requested information may comprise performing at least one of: determining that the received request comprises an indication of a different machine learning model to any machine learning model that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different context to any context that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different termination condition to any termination condition that is a subject of the first reporting instance.
The method may comprise performing: providing the first function with an identifier of the first instance and/or with an identifier of the second instance.
The obtaining the requested information may comprise performing: determining filter conditions for obtaining the requested information; and applying the determined filter conditions to collect the requested information.
The method may comprise performing: receiving, from the first function, a request to modify at least one parameter comprised in the request; and signalling, to the first function, an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
The method may comprise performing: receiving, from the first function, a request to delete any reporting associated with the request; and signalling an indication that the second function will no longer report the requested information to the first function.
The request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
According to a seventh aspect, there is provided an apparatus for a first function, the apparatus comprising: signalling circuitry for signalling, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; and receiving circuitry for receiving the requested information from the second function.
The apparatus may comprise: signalling circuitry for signalling, to the second function, a request to modify at least one parameter comprised in the request; and receiving circuitry for receiving an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
The apparatus may comprise: signalling circuitry for signalling, to the second function, a request to delete any reporting associated with the request; and receiving circuitry for receiving an indication that the second function will no longer report the requested information to the first function.
The request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
According to an eighth aspect, there is provided an apparatus for a second function, the apparatus comprising: receiving circuitry for receiving, from a first function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; obtaining circuitry for obtaining the requested information; and signalling circuitry for signalling the requested information to the first function.
The obtaining circuitry for obtaining the requested information may comprise: retrieving circuitry for retrieving information related to said at least one decision from a storage function; and processing circuitry for processing the retrieved information to form the requested information.
The apparatus may comprise: determining circuitry for, in response to said receiving, determining that the second function is currently maintaining a first reporting instance in respect of said requested information; and updating circuitry for updating the first instance to comprise an instruction to signal the requested information to the first function.
The updating circuitry for updating the first instance may comprise: associating circuitry for associating, in the first instance, an indication of a frequency at which the first function is requesting to be provided with the requested information with an identifier of the first function.
The apparatus may comprise: determining circuitry for, in response to said receiving, determining that the second function is not currently maintaining a first reporting instance in respect of said requested information; and instantiating circuitry for instantiating a second instance that comprises an instruction to signal the requested information to the first function.
The determining circuitry for determining that the second function is not currently maintaining a first reporting instance in respect of said requested information may comprise performing circuitry for performing at least one of: determining that the received request comprises an indication of a different machine learning model to any machine learning model that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different context to any context that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different termination condition to any termination condition that is a subject of the first reporting instance.
The apparatus may comprise: providing circuitry for providing the first function with an identifier of the first instance and/or with an identifier of the second instance.
The obtaining circuitry for obtaining the requested information may comprise: determining circuitry for determining filter conditions for obtaining the requested information; and applying circuitry for applying the determined filter conditions to collect the requested information.
The apparatus may comprise: receiving circuitry for receiving, from the first function, a request to modify at least one parameter comprised in the request; and signalling circuitry for signalling, to the first function, an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
The apparatus may comprise: receiving circuitry for receiving, from the first function, a request to delete any reporting associated with the request; and signalling circuitry for signalling an indication that the second function will no longer report the requested information to the first function.
The request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
According to a ninth aspect, there is provided non-transitory computer readable medium comprising program instructions for causing an apparatus for a first function to perform at least the following: signal, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; and receive the requested information from the second function.
The apparatus may be caused to perform: signalling, to the second function, a request to modify at least one parameter comprised in the request; and receiving an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
The apparatus may be caused to perform: signalling, to the second function, a request to delete any reporting associated with the request; and receiving an indication that the second function will no longer report the requested information to the first function.
The request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
According to a tenth aspect, there is provided non-transitory computer readable medium comprising program instructions for causing an apparatus for a second function to perform at least the following: receive, from a first function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; obtain the requested information; and signal the requested information to the first function.
The obtaining the requested information may comprise performing: retrieving information related to said at least one decision from a storage function; and processing the retrieved information to form the requested information.
The apparatus may be caused to perform: in response to said receiving, determining that the second function is currently maintaining a first reporting instance in respect of said requested information; and updating the first instance to comprise an instruction to signal the requested information to the first function.
The updating the first instance may comprise performing: associating, in the first instance, an indication of a frequency at which the first function is requesting to be provided with the requested information with an identifier of the first function.
The apparatus may be caused to perform: in response to said receiving, determining that the second function is not currently maintaining a first reporting instance in respect of said requested information; and instantiating a second instance that comprises an instruction to signal the requested information to the first function.
The performing determining that the second function is not currently maintaining a first reporting instance in respect of said requested information may comprise performing at least one of: determining that the received request comprises an indication of a different machine learning model to any machine learning model that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different context to any context that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different termination condition to any termination condition that is a subject of the first reporting instance.
The apparatus may be caused to perform: providing the first function with an identifier of the first instance and/or with an identifier of the second instance.
The obtaining the requested information may comprise performing: determining filter conditions for obtaining the requested information; and applying the determined filter conditions to collect the requested information.
The apparatus may be caused to perform: receiving, from the first function, a request to modify at least one parameter comprised in the request; and signalling, to the first function, an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
The apparatus may be caused to perform: receiving, from the first function, a request to delete any reporting associated with the request; and signalling an indication that the second function will no longer report the requested information to the first function.
The request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
According to an eleventh aspect, there is provided a computer program product stored on a medium that may cause an apparatus to perform any method as described herein.
According to a twelfth aspect, there is provided an electronic device that may comprise apparatus as described herein.
According to a thirteenth aspect, there is provided a chipset that may comprise an apparatus as described herein.
Brief description of Figures
Some examples will now be described, by way of illustration only, with reference to the accompanying drawings in which:
Figures 1A and 1B show a schematic representation of a 5G system;
Figure 2 shows a schematic representation of a network apparatus;
Figure 3 shows a schematic representation of a user equipment;
Figure 4 shows a schematic representation of a non-volatile memory medium storing instructions which when executed by a processor allow a processor to perform one or more of the steps of the methods of some examples;
Figure 5 shows a schematic representation of a network;
Figures 6 to 10 illustrate example architectures; and
Figures 11 to 20 illustrate example signalling operations.
Detailed description
In the following description of examples, certain aspects are explained with reference to mobile communication devices capable of communication via a wireless cellular system and mobile communication systems serving such mobile communication devices. For brevity and clarity, the following describes such aspects with reference to a 5G wireless communication system. However, it is understood that such aspects are not limited to 5G wireless communication systems, and may, for example, be applied to other wireless communication systems (for example, current 6G proposals) .
Before describing in detail the examples, certain general principles of a 5G wireless communication system are briefly explained with reference to Figures 1A and 1B.
Figure 1A shows a schematic representation of a 5G system (5GS) 100. The 5GS may comprise a user equipment (UE) 102 (which may also be referred to as a communication device or a terminal) , a 5G access network (AN) (which may be a 5G Radio Access Network (RAN) or any other type of 5G AN such as a Non-3GPP Interworking Function (N3IWF) / a Trusted Non-3GPP Gateway Function (TNGF) for Untrusted / Trusted Non-3GPP access or a Wireline Access Gateway Function (W-AGF) for Wireline access) 104, a 5G core (5GC) 106, one or more application functions (AF) 108 and one or more data networks (DN) 110.
The 5G RAN may comprise one or more gNodeB (gNB) distributed unit functions connected to one or more gNodeB (gNB) centralized unit functions. The RAN may comprise one or more access nodes.
The 5GC 106 may comprise one or more Access and Mobility Management Functions (AMF) 112, one or more Session Management Functions (SMF) 114, one or more authentication server functions (AUSF) 116, one or more unified data management (UDM) functions 118, one or more user plane functions (UPF) 120, one or more unified data repository (UDR) functions 122, one or more network repository functions (NRF) 128, and/or one or more network exposure functions (NEF) 124. The role of an NEF is to provide secure exposure of network services (e.g. voice, data connectivity, charging, subscriber data, and so forth) towards a 3rd party. Although NRF 128 is not depicted with its interfaces, it is understood that this is for clarity  reasons and that NRF 128 may have a plurality of interfaces with other network functions.
The 5GC 106 also comprises a network data analytics function (NWDAF) 126. The NWDAF is responsible for providing network analytics information upon request from one or more network functions or apparatus within the network. Network functions can also subscribe to the NWDAF 126 to receive information therefrom. Accordingly, the NWDAF 126 is also configured to receive and store network information from one or more network functions or apparatus within the network. The data collection by the NWDAF 126 may be performed based on at least one subscription to the events provided by the at least one network function.
The network may further comprise a management data analytics service (MDAS) producer or MDAS Management Service (MnS) producer. The MDAS MnS producer may provide data analytics in the management plane considering parameters including, for example, load level and/or resource utilization. For example, the MDAS MnS producer for a network function (NF) may collect the NF’s load-related performance data, e.g., resource usage status of the NF. The analysis of the collected data may provide forecast of resource usage information in a predefined future time window. This analysis may also recommend appropriate actions e.g., scaling of resources, admission control, load balancing of traffic, and so forth.
Figure 1B shows a schematic representation of a 5GC represented in current 3GPP specifications. It is understood that this architecture is intended to illustrate potential components that may be comprised in a core network, and the presently described principles are not limited to core networks comprising only the described components.
Figure 1B shows a 5GC 106’ comprising a UPF 120’ connected to an SMF 114’ over an N4 interface. The SMF 114’ is connected to each of a UDM 122’, an NEF 124’, an NWDAF 126’, an AF 108’, a Policy Control Function (PCF) 130’, an AMF 112’, and a Charging function 132’ over an interconnect medium that also connects these network functions to each other. The 5G core 106’ further comprises a network repository function (NRF) 133’ and a network function 134’ that connect to the interconnect medium.
3GPP refers to a group of organizations that develop and release different standardized communication protocols. 3GPP develops and publishes documents pertaining to a system of “Releases” (e.g., Release 15, Release 16, and beyond) .
One of the current areas of 3GPP research relates to Cognitive Autonomous Networks (CAN) . A cognitive autonomous network runs with minimal to no human intervention as it is able to configure, monitor, and maintain itself independently. To effect this, various machine learning (ML) models and applications, and Artificial Intelligence systems are deployed within a network in order to make decisions on how to alter network parameters in response to changing network conditions.
Cognitive Autonomous Networks promise to provide intelligence and autonomy in network Operations, Administration and Management (OAM) as well as in network procedures for supporting increased flexibility and complexity of a radio network. Through use of Machine Learning (ML) and Artificial Intelligence (AI) in the Cognitive Functions, CANs may be able to: 1) take higher level goals and derive the appropriate performance targets, 2) learn from their environment and their individual or shared experiences therein, 3) learn to contextualize their operating conditions, and 4) learn their “optimal” behavior fitting to the specific environment and contexts. The results of these four operations may be used by implementing at least one decision based on information derived while performing them.
CAN may be deployed in any of a plurality of different use cases. One example use case for such cognitive automation is handover optimization.
The network automation functions forming at least part of a CAN may apply at least one Machine Learning Model (ML Model) to make appropriate inferences in different contexts, where a specific context represents a specific set of input parameters to the ML Model.
Machine learning (ML) inference is the process of running data points (e.g., current network parameter values) into a machine learning algorithm (or “ML model” ) to calculate an output (such as a single numerical score) . This process is also referred to as “operationalizing an ML model” or “putting an ML model into production. ” When an ML model is running in production, it is often then described as artificial intelligence (AI) since it is performing functions similar to human thinking and analysis.
An ML lifecycle can be broken up into two main, distinct parts. The first is the training phase, in which an ML model is created or “trained” by running a specified subset of data into the model. ML inference is the second phase, in which the model is put into action on live data to produce actionable output.
Depending on the context, an ML App (e.g., an ML Model or an ML-based function) may make different decisions at inference, resulting in respectively different outcomes. The selected decision may, however, be tracked for future reference. Tracking network parameters in response to implementing a selected decision may be used to evaluate whether the decisions/inferences that led to selecting that decision were appropriate for that context, and/or to evaluate degradations in the ML App's decision-making capability. For this, the network may need not only access to information regarding the inference capabilities of an ML App, but also the ability to track and enable usage of the history of the inferences made by the ML applications.
3GPP has previously considered usage of inference history in Management Data Analytics (MDA) roles in the management loop.
For example, 3GPP TS 28.104 relates to decision making for the management actions for the managed networks and services. The management actions are decided based on analytics reports (provided by MDA and indicating, for example, root cause analysis of ongoing issues, predictions of potential issues and corresponding relevant causes and recommended actions for preventions, and/or prediction of network and/or service demands) , and other management data (e.g., historical decisions made previously) if necessary. This historical data is not contextualized in that it does not illustrate how those decisions were arrived at. Instead, it indicates a decision made (e.g., a change deployed in the network) and whether that change led to an improvement or not. The decision may be made by the consumer of MDAS (in the closed management control loop) , or by a human operator (in the case of open management loop) . The decision may comprise, for example, an indication of what action (s) to take, and when to take the action (s) .
The historical decisions may be used in decision making as part of input data. However, current 3GPP specifications do not comprise solutions for managing the history of inference, in terms of information modelling or procedures for lifecycle management of inference history records.
It would therefore be useful for a network to have appropriate services and capabilities to support the management of the inference history of ML Models and ML-based functions.
In particular, it would be useful for a network function and/or the management systems thereof to have appropriate services and capabilities to support the management of the inference history of ML Apps. The network function and/or management systems thereof may model information objects for requesting and reporting on ML inference history. The network function and/or management systems  may further be usefully configured to provide operations that support procedures for requesting and reporting on ML inference history.
The following relates to apparatus and methods for realizing an ML Inference History Function and the related services for supporting the request and delivery of ML inference history.
To help effect this, for an authorized consumer to request an inference history of a specific MLApp, the ML Inference History Function is configured to receive a request for the history of contextualised decisions taken by an MLApp. This received request is hereinafter referred to as ML Inference History Function Request.
The ML Inference History Function may assume the existence of an ML Inference History Log that stores decisions of the MLApp that were made for different contexts. The different contexts may be stored in respect of each decision. The requested history may be compiled from these stored decisions.
The ML Inference History Function derives the requested history with related context from the past decision events recorded in the ML Inference History Log.
In order for a producer of ML inference history (e.g., the ML Inference History Function) to provide to authorized consumers reports on the inference history of a specific MLApp, the ML Inference History function may be configured to map a received ML Inference History Function Request to a reporting process (hereinafter referred to as the ML Inference History Function Reporting) associated with the received ML Inference History Function Request and the specified MLApp.
In one example, the ML Inference History Function Request may map to an existing ML Inference History Function Reporting. In this case, the requestor is added to the recipients of the report provided by the existing ML Inference History Function Reporting.
In another example, the request may not map to any existing ML Inference History Function Reporting. In this case, the ML Inference History Function may create a new ML Inference History Function Reporting to be associated with the received ML Inference History Function Request and the MLApp specified in the request.
In both of these examples, an authorized consumer may include reporting characteristics for the reports in the ML Inference History Function Request. These reporting characteristics may define at least one of the following: the type of information to be comprised in the report; an indication of where the report is to be provided once formed; the frequency of reporting; and/or an indication of the MLApp to which the request relates.
The authorized consumer of both of these examples may be configured to manage one or more requests in respect of the inference history of a specific MLApp, and to receive at least one report on the inference history of a specific MLApp. At least part of the one or more requests may be managed simultaneously, or the one or more requests may be managed at different times to each other.
To help with storing the ML Inference History, an information model for the ML Inference History Function report may be defined that provides data on the captured inference history. The information model may be of a single MLInferenceRecord. Procedures for requesting and reporting on the ML Inference History Function may be defined.
Figure 6 illustrates one example architecture in which an ML Inference History Function may be provided in a management plane.
Figure 6 illustrates an authorized consumer 601 configured to request an ML Model Inference History in respect of an MLApp from an ML Model Inference History Producer 602 over management plane 603. The ML Model Inference History Producer 602 is configured to exchange signalling with an ML Inference History Function 604 using an ML Model Inference History Control Management Service 605. The ML Model Inference History Producer 602 may return a report on the ML Model Inference History to the ML Model Inference History Consumer 601 over an interface 606.
Figure 7 illustrates an example architecture in which an ML Inference History Function may be provided over an air interface.
Figure 7 illustrates an ML Inference History Module 701 comprised in a UE 702. The ML Inference History Module 701 may receive a request for ML Model Inference History from an access point 703 (e.g., a gNB) . In response to this request, the ML Inference History Module 701 may report information on the ML Model Inference History through the access point 703.
The following provides definitions for information object classes (IOCs) and dataTypes for realizing an ML Inference History Function.
Each Information Object Class definition comprises a description of a purpose of the IOC and the attributes that define the IOC. An IOC does not comprise the values for the attributes that comprise its definition.
To simplify IOC definitions, the attributes of each Information Object Class are partitioned with similar attributes being grouped together. These groupings of Attributes are specified as independent modules and may be reused by other Information Object Classes.
A dataType is a classification that specifies which type of value a variable has and what type of mathematical, relational or logical operations can be applied to it without causing an error. A string, for example, is a dataType that is used to classify text, and an integer is a dataType used to classify whole numbers.
The following further provides a description of the relationships among these IOCs and dataTypes in the present case.
Figures 8, 9 and 10 illustrate at least one example for modelling an ML Inference History Function.
Figure 8 is an information model for control of an ML Inference History Function. In general, an information model is an abstract, formal representation of entity types that may include their properties, relationships and the operations that can be performed on them. Typically, they are used to model a constrained domain that can be described by a closed set of entity types, properties, relationships and operations.
Figure 8 illustrates a Managed Entity Proxy Class 801 that provides class names (such as, for example, Subnetwork, ManagedFunction, ManagementFunction) to an ML Inference History Function IOC 802. A Proxy class is a class that implements a list of interfaces specified at runtime when the class is created.
The ML Inference History Function IOC represents the properties of ML Inference History Function. The ML Inference History Function is the function responsible for receiving requests and sending the related reports. It may thus be instantiable from an information object class and name-contained in either a Subnetwork, a Managed Function or a Management Function. Thus, the ML Inference History Function is a type of managed Function (i.e. the ML Inference History Function may be considered a subclass of and inherits the capabilities of a managed Function) .
A Managed Function may be an IOC defined in TS28.622 clause 4.3.4. The ManagedFunction IOC may represent a telecommunication function that is either realized by software running on dedicated hardware or realized by software running on a network functions virtualization infrastructure (NFVI) . Each Managed Function instance communicates with a manager (directly or indirectly) over one or more management interfaces exposed via its containing managed entity instance.
The ML Inference History Function IOC attributes will now be described. The ML Inference History Function IOC may be associated with one or more ML Models. For example, the ML Inference History Function IOC may be associated with ML Models associated via a list of MLModelIdentifiers. The list of MLModelIdentifiers may be defined in the model of the IOC within an operating communication protocol (e.g., within a 3GPP specification) . The list may be created each time the IOC is instantiated (e.g., by the operator) .
The MLModel (also referred to herein as an MLApp, or an AIMLEntity) may be an IOC. In this case, the ML Model IOC may be contained by a ManagedEntity and created by system or by consumer as per request. Alternatively, the MLModel may be a dataType. In this case, the MLModel may be an attribute member of an IOC AIMLTrainingFunction (as defined in current TS 28.105) . The lifecycle of the MLModel datatype may be maintained with the training request that is associated with the AIMLTrainingFunction IOC.
The ML Inference History Function may comprise one or more ML Inference History Function Requests as well as one or more ML Inference History Function Reporting instances responsible for sending the ML Inference History Function reports.
The ML Inference History Function IOC may comprise attributes inherited from a Managed Function IOC (such as, for example, a ManagedFunction IOC defined in 3GPP TS 28.622 [30] ) and the following attributes:
[Table of ML Inference History Function IOC attributes, provided as an image in the original publication]
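By way of a non-normative illustration only, the ML Inference History Function IOC may be sketched as a simple data structure as follows; the attribute names used here (mLModelIdentifierList, requestRefs, reportingRefs) are assumptions chosen for readability, since the normative attribute table is provided as an image in the original publication:

from dataclasses import dataclass, field
from typing import List

@dataclass
class MLInferenceHistoryFunction:
    # Identifiers of the MLModels whose inference history this function can report on
    mLModelIdentifierList: List[str] = field(default_factory=list)
    # Distinguished Names of contained MLInferenceHistoryRequest instances
    requestRefs: List[str] = field(default_factory=list)
    # Distinguished Names of contained MLInferenceHistoryReporting instances
    reportingRefs: List[str] = field(default_factory=list)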
The ML Inference History IOC 802 outputs to an ML Inference History Reporting IOC 803 and to an MLApp dataType 804. The ML Inference History IOC 802 further outputs the names to an ML Inference History Request IOC 805.
The following defines the IOC representing the properties of the ML Inference History Function Request.
For each request to collect ML Inference History, a consumer may create a new ML Inference History Function Request on the ML Inference History Function. In other words, the ML Inference History Function Request may be an information object class that is instantiated for each request for ML Inference History.
Each ML Inference History Function Request may be associated to exactly one MLModel or MLApp. Each ML Inference History Function Request may comprise specific reporting requirements. For example, each ML Inference History Function Request may comprise a reporting requirement for ReportingPeriod that defines how frequently the ML Inference History Function may report about the ML Inference History Function Request.
The ML Inference History Function Request may be associated with an ML Inference History Function Reporting. However, this association is not necessarily a one to one mapping between an instance of ML Inference History Function Request and an ML Inference History Function Reporting instance, since multiple requests may be mapped to a single reporting instance. As such the ML Inference History Function Requests could be managed independently of the ML Inference History Function Reporting. The association between ML Inference History Function Request and its corresponding ML Inference History Function Reporting may be written into the request by the ML Inference History Function after determining which ML Inference History Function Reporting IOC will provide reports for the specific request.
The ML Inference History Function Request may comprise a source to identify where it is coming from (e.g., to identify the consumer requesting the report) . The source may be, for example, an enumeration defined for network functions, operator roles, and/or other functional differentiations. For example, the source may be implemented as a value associated with an index, where the value and the index may be used to point towards/identify from where the request is being generated.
The ML Inference History Function Request IOC may comprise attributes inherited from Top IOC (such as, for example, the TOP IOC defined in 3GPP TS  28.622) . The ML Inference History Function Request may comprise the following attributes:
[Table of ML Inference History Function Request IOC attributes, provided as an image in the original publication]
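A minimal, hypothetical sketch of the ML Inference History Function Request as a data structure is given below; the field names (mLModelId, reportingContext, reportingPeriod, source, reportingRef) are illustrative assumptions rather than normative attribute names:

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MLInferenceHistoryRequest:
    mLModelId: str                                  # MLModel/MLApp the request relates to
    # each reportingContext entry: (Attribute, Condition, ValueRange)
    reportingContext: List[Tuple[str, str, Tuple[float, float]]] = field(default_factory=list)
    reportingPeriod: Optional[int] = None           # seconds between reports, if repeated reporting is wanted
    source: Optional[str] = None                    # identifies the consumer that created the request
    reportingRef: Optional[str] = None              # DN of the serving MLInferenceHistoryReporting, set by the producer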
The following defines the IOC representing the properties of the ML Inference History Function Reporting.
The ML Inference History Function Reporting IOC may represent the properties of ML Inference History Function Reporting. The ML Inference History Function Reporting may represent the capability of compiling and delivering reports and notifications about ML Inference History Function or its associated ML Inference History Function Requests. The ML Inference History Function may be associated with one or more instances of ML Inference History Function Reporting.
The ML Inference History Function Reporting may report on one or more ML Inference History Function Requests (and, as such, may be associated with those requests) .
For each ML Inference History Function Request, the characteristics of the request as well as the reporting requirements may be mapped to an ML Inference History Function Reporting instance. The ML Inference History Function Reporting may comprise an ML Inference History Function Reporting Matrix that defines the frequencies at which the report is to be sent and the report recipients which are the specific entities to which each report may be sent at each time instant.
The ML Inference History Function Reporting may be managed separately from the ML Inference History Function Requests. This is because an ML Inference History Function Reporting instance may provide reports on more than one ML Inference History Function Request. The ML Inference History Function Reporting shall be associated with one or more ML Inference History Function Reports.
The ML Inference History Function Reporting IOC may comprise attributes inherited from a Top IOC (defined, for example, in TS 28.622) . The ML Inference History Function Reporting IOC may comprise the following attributes:
[Table of ML Inference History Function Reporting IOC attributes, provided as an image in the original publication]
The ML Model dataType (also referred to herein as an MLApp dataType) may represent the properties of an MLModel.
Multiple ML Inference History Function Requests may be submitted on a single MLModel but the ML Inference History Function Requests may be independent of the MLModel. Relatedly, although the ML Inference History Function Requests may be associated with the MLModel, the MLModel does not need to be associated with any specific ML Inference History Function Request. Relatedly, the MLModel does not need to be associated with any specific ML Inference History Function Reporting instance, since the data used to compile the ML Inference History Function Reports may be collected from a log that is independent of the MLModel.
The MLModel may comprise the attributes inherited from TOP IOC (for example, as defined in 3GPP TS 28.622) . Further, the model may be represented in a repository, such that its attributes (including a current state) may be retrieved from a repository function. Although the MLModel is modelled here as a dataType, it is understood that the MLModel may be modelled as an Information Object Class.
The ML Inference History Reporting IOC 803 and the ML Inference History Request IOC 805 output to the MLApp dataType 804. The ML Inference History Request IOC 805 further outputs to a Reporting Context dataType 806 and to the ML Inference History Reporting IOC 803.
The ML Inference History Reporting IOC 803 further outputs to the ML Inference History Request IOC 805, the Reporting Context dataType 806, an ML Reporting Matrix dataType 807, and to an ML Inference History Report dataType 808.
The MLReportingMatrix dataType represents the properties of MLReportingMatrix.
The MLReportingMatrix may define the frequencies at which reports should be sent and the specific entities to which each report shall be sent at each time instant. The frequencies at which reports are to be sent may be defined in terms of a ReportingPeriod (for example, in seconds as the time between two successive reports for that reporting instance) .
The MLReportingMatrix may comprise the following attributes:
[Table of MLReportingMatrix attributes, provided as an image in the original publication]
The consumerList is a list of DistinguishedNames of the managed functions that have requested the related report and to which the report is to be delivered at the time instants defined by the ReportingPeriod.
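Purely as an illustration under assumed names, the ML Inference History Function Reporting and its MLReportingMatrix could be sketched as follows, with the matrix mapping each ReportingPeriod to the consumerList of DistinguishedNames to be served at that period:

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MLReportingMatrix:
    # maps a ReportingPeriod (in seconds) to the consumerList of DistinguishedNames served at that period
    periodToConsumers: Dict[int, List[str]] = field(default_factory=dict)

@dataclass
class MLInferenceHistoryReporting:
    requestRefs: List[str] = field(default_factory=list)        # requests this reporting instance serves
    reportingMatrix: MLReportingMatrix = field(default_factory=MLReportingMatrix)
    reportRefs: List[str] = field(default_factory=list)         # reports produced so far

In this sketch, adding a consumer with a new ML Reporting Frequency simply adds a further entry to periodToConsumers, which mirrors the differentiated reporting frequencies described above.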
The ML Inference History Function Report dataType represents the properties of ML Inference History Function Report.
The ML Inference History Function 802 may generate one or more ML Inference History Function Reports. Each generated ML Inference History Function Report may be associated with one or more MLModels/MLApps, since each report is associated with the ML Inference History Function Reporting instance that reports about the ML Inference History Function Requests that are themselves associated with the MLModels/MLApps for which inference history is requested and/or reported.
The ML Inference History Function Report may comprise the following attributes:
[Table of ML Inference History Function Report attributes, provided as an image in the original publication]
The ReportingContext <<dataType>> represents the properties of a context. A context describes the list of constraints or conditions that should evaluate to true when the Reporting is executed.
The ReportingContext <<dataType>> may be defined by the triple <Attribute, Condition, ValueRange>.
Of these, Attribute may describe (or otherwise indicate) a specific attribute of or related to the object or a use case that relates to the MLApp on which reporting is to be executed. “Attribute” may also or alternatively refer to at least one characteristic of such an object (e.g., its control parameter, gauge, counter, key performance indicator (KPI) , weighted metric, etc. ) . “Attribute” may also or alternatively refer to an attribute related to the operating conditions of the object or use case (such as, for example, weather conditions, load conditions, etc. ) .
Further, Condition may express the limits within which the Attribute is allowed/supposed to be. The allowed values for the condition may comprise: "is equal to" ; "is less than" ; "is greater than" ; "is within the range" ; and/or "is outside the range" .
Further, ValueRange may describe (or otherwise indicate) a range of values that are applicable to the Attribute as constrained by the Condition.
A report may be generated when a value representing the Attribute is obtained, the value falling within the ValueRange while the Condition (s) is fulfilled.
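The evaluation of a single <Attribute, Condition, ValueRange> entry can be illustrated with the following sketch; the condition strings mirror the allowed values listed above, and the helper name condition_holds is a hypothetical name introduced only for this illustration:

def condition_holds(value, condition, value_range):
    # Evaluate a single <Attribute, Condition, ValueRange> entry against an observed value.
    low, high = value_range if isinstance(value_range, (list, tuple)) else (value_range, value_range)
    if condition == "is equal to":
        return value == low
    if condition == "is less than":
        return value < low
    if condition == "is greater than":
        return value > low
    if condition == "is within the range":
        return low <= value <= high
    if condition == "is outside the range":
        return value < low or value > high
    raise ValueError("unknown condition: " + condition)

# Example: a report would be generated for a load reading of 72.5 under "is within the range" 60-80
should_report = condition_holds(72.5, "is within the range", (60, 80))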
The Context may comprise the following attributes:
[Table of ReportingContext attributes, provided as an image in the original publication]
Figure 9 illustrates inheritance relations in ML Inference History Functions. Inheritance relates to the relationship between a so-called parent class and a child class. In particular, the child class inherits all attributes of the parent class. For example, assume a parent class A with two attributes (attribute_p_1; attribute_p_2) . When defining a new class B that is a child class of class A, class B inherits the two attributes from class A, and class B further has three of its own attributes (attribute_c_1; attribute_c_2; attribute_c_3) . In other words, due to the inheritance relationship, class B will in the end have five attributes in total: (attribute_p_1; attribute_p_2; attribute_c_1; attribute_c_2; attribute_c_3) .
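The inheritance relationship described above can be illustrated with a small sketch using the generic class and attribute names from the example:

class A:  # parent class with two attributes
    def __init__(self):
        self.attribute_p_1 = None
        self.attribute_p_2 = None

class B(A):  # child class: inherits both parent attributes and adds three of its own
    def __init__(self):
        super().__init__()
        self.attribute_c_1 = None
        self.attribute_c_2 = None
        self.attribute_c_3 = None

assert len(vars(B())) == 5  # an instance of B carries all five attributes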
Figure 9 illustrates an ML Inference History Function IOC 901 providing an output to a Managed Function IOC.
Figure 10 illustrates Inheritance Relations to an ML Inference History request and reporting.
Figure 10 illustrates a “Top” IOC 1001 that receives outputs from an ML Inference History Request IOC 1002 and an ML Inference History Reporting IOC 1003. These entities of Figures 9 and 10 may correspond to those equivalently labelled IOCs in Figure 8.
The following describes procedures that may be performed by the ML Inference History Function, with reference to Figures 11 to 15.
The ML Inference History Function may have a capability for instantiating an ML Inference History Function Reporting instance based on receipt of at least one request from at least one consumer function. This is illustrated with respect to Figure 11.
Figure 11 illustrates at least one consumer 1101 interacting with at least one ML Inference History Management service Producer 1102 (also referred to herein as simply “producer” ) .
At 11001, the at least one consumer 1101 signals a request to the producer 1102. This signalling may be a request to create a managed object instance. This request may comprise a request for ML Inference History. This request may comprise an identification of an ML model to which the request relates. This request may comprise an identifier for the request that may be used in communications comprising information relating to that request.
The request for ML Inference History Function from the at least one consumer may be received using ML Inference History Function Provisioning Management service implemented via CRUD (Create, Read, Update, Delete) operations on the ML Inference History Function Request objects.
As mentioned above, the request may comprise an indication of the ML Model to which the request relates. This may be, for example, a unique identifier of the specific MLModel on which the consumer wishes to receive inference. The request may alternately indicate an ML Model by comprising an identifier of the MLApp that comprises the MLModel on which the consumer wishes to receive inference.
The request may comprise an indication of a Reporting Context. The Reporting Context comprises the conditions and environment characteristics to be considered in reporting on the ML Inference History Function.
The request may comprise an ML Reporting Frequency. This frequency represents the frequency at which the consumer is requesting to be kept informed about the ML Inference History Function. This frequency is provided when the at least one consumer wishes to receive the report on inference history more than once.
At 11002, the producer 1102 may instantiate an ML Inference History request (referred to herein as Req1) .
At 11003, the producer 1102 may identify filters for the history data, for identifying data falling within the scope of the request.
At 11004, the producer 1102 may apply the filter (s) identified at 11003 in order to collect reporting data for fulfilling the request.
At 11005, the producer 1102 may check whether a reporting instance already exists for the applied filters. When no reporting instance already exists for the applied filters, the producer 1102 proceeds to 11006.
At 11006, the producer 1102 instantiates an ML Inference History Reporting instance for Req1. From 11006, the producer 1102 proceeds to 11008.
When the check of 11005 determines that a reporting instance already exists for the applied filters, the producer 1102 proceeds to 11007.
During 11007, the producer 1102 adds the requirements associated with Req1 to an existing MLInference History Reporting Instance. From 11007, the producer 1102 proceeds to 11008.
During 11008, the producer 1102 signals a notification to the at least one consumer 1101. This notification may comprise an indication that the requirements of the ML Inference History Request (Req1) have been comprised as part of an ML Inference History Reporting instance. An identifier may be provided for the ML Inference History Reporting instance. For example, the identifier for the ML inference history reporting instance may be labelled as “JBx” in the present example.
Therefore, in the example of Figure 11, following receipt of an ML Inference History Function Request from a consumer, an ML Inference History Function may create an instance of ML Inference History Function Reporting or may update an existing instance of ML Inference History Function Reporting with characteristics of the ML Inference History Function Request.
The requirement (s) comprised in the ML Inference History Function Request may be added to an existing ML Inference History Function Reporting instance with related characteristics. For example, the requirement (s) comprised in the ML Inference History Function Request may be added to an existing ML Inference History Function Reporting instance when the ML Inference History Function Request and the existing ML Inference History Function Reporting instance both relate to the same model and context but with different frequency. For example, when a Request A with an ML Reporting Frequency of 30 minutes is received when there is an ML Inference History Function Reporting instance with an ML Reporting Frequency of 1 hour, the new request may be added to the existing instance to create a common ML Inference History Function Reporting instance with differentiated ML Reporting Frequency values for the two recipients.
For the alternative case, ML Inference History Function Reporting may be instantiated for the ML Inference History Function Request. For ML Inference History Function Requests with different Expected Runtime Context and/or Termination Conditions, different ML Inference History Function Reporting instances may be instantiated.
Subsequent to instantiating a new ML Inference History Reporting instance, the ML Inference History Function may also notify the consumer who initiated the request of the corresponding action taken regarding the request, such as described in relation to 11008.
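One possible, simplified realisation of the Figure 11 flow on the producer side is sketched below; the function and field names are assumptions, and the matching of an existing reporting instance is reduced here to comparing the model identifier and reporting context carried in the request:

def handle_inference_history_request(request, reporting_instances, notify):
    # Map a new ML Inference History Request onto a new or existing reporting instance.
    # 11003/11004: derive the filter characteristics from the request itself
    filters = (request["mLModelId"], tuple(request.get("reportingContext", ())))

    # 11005: check whether a reporting instance already exists for these filters
    reporting = reporting_instances.get(filters)
    if reporting is None:
        # 11006: instantiate a new reporting instance for this request
        reporting = {"id": "JB" + str(len(reporting_instances) + 1), "requests": []}
        reporting_instances[filters] = reporting
    # 11007: add the requirements of this request to the (new or existing) reporting instance
    reporting["requests"].append(request["id"])

    # 11008: notify the consumer which reporting instance now serves the request
    notify(request["source"], reporting["id"])
    return reporting["id"]

# Example usage with a trivial notification callback
instances = {}
handle_inference_history_request(
    {"id": "Req1", "mLModelId": "model-1", "source": "consumer-A"},
    instances,
    notify=lambda consumer, reporting_id: print(consumer, "->", reporting_id),
)

In this sketch, the first request for a given filter combination yields a new reporting instance (labelled JB1, JB2, and so on), and later requests with matching filters are simply appended to it.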
Figure 12 illustrates operations relating to reporting an ML Inference History for a given ML Inference History Function Request instance.
Figure 12 illustrates operations that may be performed between at least one consumer 1201 and a producer 1202. These entities may correspond to the entities of Figure 11.
During 12001, the at least one consumer 1201 and the producer 1202 exchange signalling for instantiating an ML Inference History Request instance. This may be as described in respect of steps 11001 to 11007 of Figure 11.
In other words, for a given ML Inference History Function Request, the ML Inference History Function producer may instantiate and trigger an ML Inference History Function Reporting as a process. The ML Inference History Function Reporting process may comprise the steps illustrated in Figure 12 (a simplified sketch of which is given after the following list), e.g.:
· identifying the filter characteristics to be used to select the history data from any available logs
· applying the filters to select the data
· compiling the obtained data into Inference History report to be shared with the requesting consumer (s) .
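A minimal sketch of this reporting process is given below, under the assumption that the inference history log is a simple list of decision records, each carrying the model identifier, the context observed at decision time, the decision taken and its outcome; the per-attribute predicate stands in for the Condition/ValueRange check of a ReportingContext entry:

def compile_inference_history_report(log, mLModelId, reporting_context):
    # Select matching decision records from the inference history log and compile a report.
    # reporting_context: list of (attribute, predicate) pairs standing in for <Attribute, Condition, ValueRange>.
    def matches(record):
        if record["mLModelId"] != mLModelId:
            return False
        return all(
            attribute in record["context"] and predicate(record["context"][attribute])
            for attribute, predicate in reporting_context
        )

    selected = [record for record in log if matches(record)]
    return {"mLModelId": mLModelId, "records": selected}

# Example: report decisions taken by "model-1" while the load KPI was within 60-80 percent
log = [
    {"mLModelId": "model-1", "context": {"load": 72.5}, "decision": "handover", "outcome": "improved"},
    {"mLModelId": "model-1", "context": {"load": 35.0}, "decision": "no-op", "outcome": "unchanged"},
]
report = compile_inference_history_report(log, "model-1", [("load", lambda v: 60 <= v <= 80)])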
During 12002, the producer 1202 may signal the at least one consumer 1201. This signalling of 12002 may correspond to the signalling of 11008 of Figure 11. This signalling of 12002 may notify the at least one consumer 1201 of the creation of a managed object instance. This signalling of 12002 may comprise an identifier of the Instance. The identifier of the Instance may be a unique identifier of the reporting instance, which may be the DistinguishedName (DN) of the MLInferenceHistoryReporting. This signaling of 12002 may comprise data. The data  comprised in the signalling of 12002 may correspond to a full representation of MLInferenceHistoryReporting, including the data about the inference history that is to be reported.
During 12003, the producer 1202 compiles ML Inference History according to the requirements of Req1.
During 12004, the producer 1202 signals the at least one consumer 1201. This signalling of 12004 may comprise an indication that data according to the requirements of Req1 are available. This signalling of 12004 may comprise at least a part of the compiled data. This signalling of 12004 may comprise none of the compiled data. This signalling of 12004 may comprise an indication of where at least part of the compiled data may be downloaded from (or is otherwise accessible) . For example, the signaling of 12004 may comprise an address from which at least part (and potentially, all) of the compiled data may be downloaded. The signalling of 12004 may comprise an identifier of the ML Model.
In other words, subsequent to instantiating the ML Inference History Function Reporting, the ML Inference History Function may configure reporting for the corresponding ML Inference History Function Request. If there exists an ML Inference History Function Reporting instance with the same characteristics as those stated in the ML Inference History Function Request, the ML Inference History Function producer may append the new reporting requirements on to the existing ML Inference History Function Reporting instance. Alternatively, the ML Inference History Function producer may instantiate a new ML Inference History Function Reporting instance with the requirements as defined in the ML Inference History Function Request.
The ML Inference History Function has the capability and a control interface to allow a consumer (e.g., the operator) to configure and manage one or more ML Inference History Function Reporting instances. The control interface may enable the consumer to get the outcomes of the ML Inference History Function Reporting process. This may be achieved using the Notify procedure of the 3GPP provisioning management service as illustrated by Figure 12.
Figure 13 illustrates operations relating to reading characteristics of one or more ML Inference History Reporting instances.
Figure 13 illustrates operations that may be performed by at least one consumer 1301 and a producer 1302. The at least one consumer and the producer  may correspond to the at least one consumer and the producer of Figure 11 and/or Figure 12, and may thus perform at least one of the actions described therein.
During 13001, the at least one consumer 1301 and the producer 1302 exchange signalling for instantiating an ML Inference History Request (Req1) and Reporting Instance.
During 13002, the at least one consumer 1301 signals the producer 1302. This signalling of 13002 may comprise a request to retrieve an indication of attributes of a managed object instance. For example, the signalling may comprise a request to retrieve an indication of at least one attribute associated with respective at least one ML Inference History Reporting Instances.
During 13003, the producer 1302 signals the at least one consumer 1301. This signalling of 13003 may comprise a file and/or stream reporting service operation. This signalling of 13003 may comprise information related to at least one ML Inference History Reporting instance. For each ML Inference History Reporting instance being provided, the signalling of 13003 may comprise an identifier of that ML Inference History Reporting instance and associated data. The associated data may comprise values of the instance of MLInferenceHistoryReporting, where the type of values may be as defined in the IOC of MLInferenceHistoryReporting.
In other words, in the example of Figure 13, a consumer is enabled to read the characteristics of an ML Inference History Function Reporting instance. For example, the consumer may determine to read the status of the ML Inference History Function Reporting instance, and/or read outcomes (such as, for example, decisions and/or historical evaluations of past decisions) of the ML Inference History Function Reporting instance. This may be achieved using the getMOIAttributes procedure of the 3GPP provisioning management service, although it is understood that another service operation may be used.
The status may represent a determination of how far the reporting instance has been completed. The provision of the status may be provided, for example, in response to a triggered data collection, data being collected, a result being reported (when only one report is to be made) , and/or continuous reporting instantiated (when the request relates to repetitive reporting) . The status may correspond to a value assignment for each attribute in MLInferenceHistoryReporting. There may thus exist a job with a status attribute.
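As a simplified, assumed analogue of such a read, the following sketch returns selected attributes (for example, a status attribute) of a stored reporting instance from an in-memory store; a real deployment would use the 3GPP provisioning management service operations rather than this toy store, and the names used here are illustrative only:

def get_moi_attributes(moi_store, instance_dn, attribute_names=None):
    # Return the requested attributes (or all attributes) of a managed object instance.
    instance = moi_store[instance_dn]
    if attribute_names is None:
        return dict(instance)
    return {name: instance[name] for name in attribute_names if name in instance}

# Example: read the status of one reporting instance
store = {"MLInferenceHistoryReporting=JB1": {"status": "data being collected", "reportingPeriod": 3600}}
print(get_moi_attributes(store, "MLInferenceHistoryReporting=JB1", ["status"]))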
Figure 14 illustrates operations that may be performed for modifying at least one characteristic of one or more ML Inference History Reporting instances.
Figure 14 illustrates signalling that may be performed by at least one consumer 1401 and at least one producer 1402. The at least one consumer and the producer may correspond to the at least one consumer and the producer of Figure 11 and/or Figure 12 and/or Figure 13, and may thus perform at least one of the actions described therein.
During 14001, the at least one consumer 1401 and the producer 1402 may exchange signalling for instantiating an ML Inference History Request (Req1) and Reporting Instance. This may be as described above with reference to Figure 11.
During 14002, the at least one consumer 1401 signals the producer 1402. This signalling of 14002 may comprise a request to change a value associated with at least one attribute of at least one ML Inference History Reporting instance. This signalling of 14002 may comprise a request to modify at least one attribute of a managed object instance. This signalling of 14002 may comprise a modifyMOIAttributes service operation. This signalling of 14002 may comprise at least one identifier associated with the at least one ML Inference History Reporting instance that is the subject of the signalling of 14002. The signalling of 14002 may identify at least one attribute to be modified.
During 14003, the producer 1402 signals the at least one consumer 1401. This signalling may comprise an indication that a value associated with an attribute respectively associated with at least one ML Inference History Reporting instance has been modified from a previous value to a current value. The signalling of 14003 may comprise an indication of the current value for any modified attribute identified in the signalling of 14003. The signalling of 14003 may comprise at least one identifier associated with the at least one ML Inference History Reporting instance that is the subject of the signalling of 14003. The signalling of 14003 may comprise a notifyMOIAttributeValueChanges service operation.
In other words, the signalling of Figure 14 may be configured to enable the consumer to configure new and ongoing ML Inference History Function Reporting instances. For example, the operator may assign priorities to one or more ML Inference History Function Reporting instances to indicate that, in case of resource constraints, some particular ML Inference History Function Reporting instances with higher priority should be executed before ML Inference History Function Reporting instances with lower priorities are executed. The configuration may be achieved using the modifyMOIAttributes procedure of the 3GPP provisioning management service as illustrated by Figure 14, although it is understood that alternative service operations may be used to effect the same function.
Following receipt of an update request, the ML Inference History Function producer may notify the consumer about the success of the executed update on the ML Inference History Function Reporting instance (s) .
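A toy analogue of the modifyMOIAttributes/notifyMOIAttributeValueChanges exchange, again using assumed names and an in-memory store rather than the actual 3GPP service operations, could look as follows:

def modify_moi_attributes(moi_store, instance_dn, changes, notify):
    # Apply attribute changes to a managed object instance and notify the consumer of the changed values.
    instance = moi_store[instance_dn]
    changed = {}
    for name, new_value in changes.items():
        old_value = instance.get(name)
        if old_value != new_value:
            instance[name] = new_value
            changed[name] = (old_value, new_value)
    notify(instance_dn, changed)  # plays the role of notifyMOIAttributeValueChanges
    return changed

# Example: raise the priority of one reporting instance
store = {"MLInferenceHistoryReporting=JB1": {"priority": 5}}
modify_moi_attributes(store, "MLInferenceHistoryReporting=JB1", {"priority": 1},
                      notify=lambda dn, changed: print(dn, changed))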
Figure 15 illustrates operations that may be performed for deleting one or more ML Inference History Reporting instances.
Figure 15 illustrates signalling that may be performed by at least one consumer 1501 and at least one producer 1502. The at least one consumer and the producer may correspond to the at least one consumer and the producer of Figure 11 and/or Figure 12 and/or Figure 13 and/or Figure 14, and may thus perform at least one of the actions described therein.
During 15001, the at least one consumer 1501 and the producer 1502 may exchange signalling for instantiating an ML Inference History Request (Req1) and Reporting Instance. This may be as described above with reference to Figure 11.
During 15002, the at least one consumer 1501 may signal the producer 1502. This signalling of 15002 may comprise a request to delete a managed object instance. For example, this signalling may comprise a request to delete an ML Inference History Reporting instance. This signalling of 15002 may comprise an identifier of the ML Inference History Reporting instance that is the subject of the signalling of 15002.
During 15003, the producer 1502 signals the at least one consumer 1501. This signalling of 15003 may comprise an indication that the ML Inference History Reporting instance that was the subject of the signalling of 15002 has been deleted. This signalling of 15003 may comprise an identifier of the ML Inference History Reporting instance that has been deleted. The signalling of 15003 may comprise an explicit indication that the identified ML Inference History Reporting instance has been deleted. The signalling of 15003 may comprise a NotifyMOIDeletion service operation. The signalling of 15003 may be considered as a response to the signalling of 15002 in that it indicates that the signalling of 15002 has been successful in its requested objective.
In other words, in Figure 15 the consumer is enabled to request the deletion of unwanted ML Inference History Function Reporting instances. This may be achieved using the deleteMOI procedure of the 3GPP provisioning management service as illustrated by Figure 15, although it is understood that other service operations may be used to effect this function. Following a delete request, the ML Inference History Function producer may notify the consumer about the success of the executed deletion of ML Inference History Function Reporting instances.
Figures 16 to 18 illustrate signalling that may be used for managing and controlling ML Model Inference History Requests.
Figure 16 illustrates operations that may be performed for reading characteristics of one or more ML Model Inference History Requests.
Figure 16 illustrates signalling that may be performed by at least one consumer 1601 and at least one producer 1602. The at least one consumer and the producer may correspond to the at least one consumer and the producer of Figure 11 and/or Figure 12 and/or Figure 13 and/or Figure 14, and/or Figure 15, and may thus perform at least one of the actions described therein.
During 16001, the at least one consumer 1601 and the producer 1602 may exchange signalling for instantiating an ML Inference History Request (Req1) and Reporting Instance. This may be as described above with reference to Figure 11.
During 16002, the at least one consumer 1601 signals the producer 1602. This signalling of 16002 may comprise a request for information relating to at least one attribute of an ML Inference History Request. The signalling of 16002 may comprise a ProvMnS.getMOIAttributes service operation. The signalling of 16002 may comprise an identifier of the ML Inference History Request that is the subject of the signalling of 16002.
During 16003, the producer 1602 responds to the signalling of 16002. This response may comprise an identifier of the ML Inference History Request to which the signalling of 16002 related. This response may comprise data. The data may comprise values associated with the instance of MLInferenceHistoryRequests. Therefore, the type of values may be as defined in the IOC of MLInferenceHistoryRequests. This response may be effected by a File/Stream reporting service operation.
In other words, in the example of Figure 16, an ML Inference History Function has the capability and a control interface to allow a consumer (e.g., the operator) to configure and manage one or more ML Inference History Function Requests. The control interface may, among other functionality, enable the consumer to read the characteristics of submitted ML Inference History Function Requests. This may be achieved using the getMOIAttributes procedure of the 3GPP provisioning management service as illustrated by Figure 16. For example, the consumer may request to read the number of submitted ML Inference History Function Requests and/or features (e.g. priorities, sources, …) of the different ML Inference History Function Requests.
Figure 17 illustrates example signalling that may be performed for modifying at least one characteristic of one or more ML Inference History Requests.
Figure 17 illustrates signalling that may be performed by at least one consumer 1701 and at least one producer 1702. The at least one consumer and the producer may correspond to the at least one consumer and the producer of Figure 11 and/or Figure 12 and/or Figure 13 and/or Figure 14, and/or Figure 15, and/or Figure 16 and may thus perform at least one of the actions described therein.
During 17001, the at least one consumer 1701 and the producer 1702 may exchange signalling for instantiating an ML Inference History Request (Req1) and Reporting Instance. This may be as described above with reference to Figure 11.
During 17002, the at least one consumer 1701 signals the producer 1702. This signalling may comprise an indication that a modification of at least one attribute of an ML Inference History Request is requested. The signalling of 17002 may comprise an identifier of the ML Inference History Request whose at least one attribute is to be modified. The signalling of 17002 may comprise identifier (s) of the at least one attribute that is being requested to be modified. The signalling of 17002 may comprise a ProvMnS.modifyMOIAttributes service operation.
During 17003, the producer 1702 may respond to the signalling of 17002. This response may comprise an indication that a value associated with an attribute respectively associated with at least one ML Inference History Request instance has been modified from a previous value to a current value. The signalling of 17003 may comprise an indication of the current value for any modified attribute identified in the signalling of 17003. The signalling of 17003 may comprise at least one identifier associated with the at least one ML Inference History Request instance that is the subject of the signalling of 17003. The signalling of 17003 may comprise a notifyMOIAttributeValueChanges service operation.
In the example of Figure 17, the control interface may be configured to enable the consumer to configure the submitted ML Inference History Function Requests, e.g., the operator may change the priorities of one or more ML Inference History Function Requests to indicate those that need to be prioritised regarding the instantiation of the related ML Inference History Function Reporting instances. Relatedly, the control interface may enable a consumer to update the priority of the ML Inference History Function Request sent a priori by that consumer.
For example, when a consumer adjusts a request, the producer has to decide whether the modified request can still be served by the same original job or whether a new job needs to be instantiated. The configuration may be achieved using the modifyMOIAttributes procedure of the 3GPP provisioning management service as illustrated by Figure 17. Following an update request, the ML Inference History Function may notify the consumer about the success of the executed updates on the ML Inference History Function Requests.
Figure 18 illustrates signalling that may be performed by at least one consumer 1801 and at least one producer 1802. The at least one consumer and the producer may correspond to the at least one consumer and the producer of Figure 11 and/or Figure 12 and/or Figure 13 and/or Figure 14 and/or Figure 15 and/or Figure 16 and/or Figure 17, and may thus perform at least one of the actions described therein.
During 18001, the at least one consumer 1801 and the producer 1802 may exchange signalling for instantiating an ML Inference History Request (Req1) and Reporting Instance. This may be as described above with reference to Figure 11.
During 18002, the at least one consumer 1801 may signal the producer 1802. This signalling of 18002 may comprise a request to delete a managed object instance. For example, this signalling may comprise a request to delete an ML Inference History request instance. This signalling of 18002 may comprise an identifier of the ML Inference History request instance that is the subject of the signalling of 18002.
During 18003, the producer 1802 signals the at least one consumer 1801. This signalling of 18003 may comprise an indication that the ML Inference History request instance that was the subject of the signalling of 18002 has been deleted. This signalling of 18003 may comprise an identifier of the ML Inference History request instance that has been deleted. The signalling of 18003 may comprise an explicit  indication that the identified ML Inference History request instance has been deleted. The signalling of 18003 may comprise a NotifyMOIDeletion service operation. The signalling of 18003 may be considered as a response to the signalling of 18002 in that it indicates that the signalling of 18002 has been successful in its requested objective.
In the example of Figure 18, the control interface may be configured to enable the consumer to delete unwanted ML Inference History Function Requests. This may be achieved using the deleteMOI procedure of the 3GPP provisioning management service, as illustrated by Figure 18.
Therefore, when a consumer requests that an ML Inference History Function Request be deleted, the producer receiving this request may either delete the corresponding job, when that job serves only that one request, or modify attributes of the job to remove the requirements of the deleted request.
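The choice described above (delete the job outright when it serves only the deleted request, or otherwise strip only that request's requirements from the job) can be sketched as follows; the structures and names mirror the earlier assumed sketches and are not normative:

def delete_inference_history_request(request_id, jobs, notify):
    # Delete a request: remove the job if it served only that request, otherwise trim the job.
    for job_id, job in list(jobs.items()):
        if request_id in job["requests"]:
            job["requests"].remove(request_id)
            if not job["requests"]:
                del jobs[job_id]  # the job served only the deleted request
            notify(request_id, True)
            return True
    notify(request_id, False)
    return False

# Example usage
jobs = {"JB1": {"requests": ["Req1", "Req2"]}}
delete_inference_history_request("Req1", jobs, notify=lambda rid, deleted: print(rid, deleted))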
Following a delete request, the ML Inference History Function may notify the consumer about the success of the executed deletion of ML Inference History Function Requests.
Figures 19 and 20 illustrate example aspects of the apparatus described above. It is therefore understood that features of the above examples may be implemented in the following without loss of generality.
Figure 19 illustrates operations that may be performed by an apparatus for a first function. The operations of Figure 19 may be performed by a consumer (e.g., an authorized consumer) . The operations of Figure 19 may be performed by a producer.
During 1901, the apparatus signals, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision. The second function may be as described below in relation to Figure 20.
During 1902, the apparatus receives the requested information from the second function.
The apparatus may signal, to the second function, a request to modify at least one parameter comprised in the request, and receive an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
The apparatus may signal, to the second function, a request to delete any reporting associated with the request, and receive an indication that the second function will no longer report the requested information to the first function.
The apparatus may use the received requested information to determine whether to change at least one network parameter and/or machine learning decision. When it is determined, in dependence on the received information, to change at least one network parameter and/or machine learning decision, that change may be caused to be effected.
The request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
Figure 20 illustrates operations that may be performed by an apparatus for a second function. The second function may be, for example, an ML Inference History Function.
During 2001, the apparatus receives, from a first function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision. The first function may be the apparatus of Figure 19.
During 2002, the apparatus obtains the requested information.
During 2003, the apparatus signals the requested information to the first function.
The obtaining the requested information may comprise retrieving information related to said at least one decision from a storage function, and processing the retrieved information to form the requested information.
The apparatus may, in response to said receiving, determine that the second function is currently maintaining a first reporting instance in respect of said requested information, and update the first instance to comprise an instruction to signal the requested information to the first function.
The updating the first instance may comprise associating, in the first instance, an indication of a frequency at which the first function is requesting to be provided with the requested information with an identifier of the first function.
The apparatus may, in response to said receiving, determine that the second function is not currently maintaining a first reporting instance in respect of said requested information, and instantiate a second instance that comprises an instruction to signal the requested information to the first function.
The determining that the second function is not currently maintaining a first reporting instance in respect of said requested information may comprise at least one of: determining that the received request comprises an indication of a different machine learning model to any machine learning model that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different context to any context that is a subject of the first reporting instance; and/or determining that the received request comprises an indication of a different termination condition to any termination condition that is a subject of the first reporting instance.
The apparatus may provide the first function with an identifier of the first instance and/or with an identifier of the second instance.
The obtaining the requested information may comprise: determining filter conditions for obtaining the requested information; and applying the determined filter conditions to collect the requested information.
The filter conditions may be obtained from MLInferenceHistoryRequest. For example, ReportingContext may be indicated, where ReportingContext is a list having at least one entry, each entry in the list being represented/defined by three fields, <Attribute, Condition, ValueRange>. The Attribute field may indicate a certain Key Performance Indicator name, which could be used by the second function to determine where to apply a filter. The second function may apply further filtering using the Condition and ValueRange fields. When all of the filtering has been performed, the second function may generate a complete report (MLInferenceHistoryReporting) as requested.
The apparatus may receive, from the first function, a request to modify at least one parameter comprised in the request, and signal, to the first function, an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
The apparatus may receive, from the first function, a request to delete any reporting associated with the request, and signal an indication that the second function will no longer report the requested information to the first function.
The request may comprise at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when  providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
The system described herein has numerous advantages. For example, the presently disclosed mechanisms enable an authorized consumer to request and receive the contextualized history of ML inferences. Such history can be used, for example, to determine the appropriateness of different decisions in different contexts and to determine whether ML model retraining is to be triggered. ML model retraining may be triggered when it is determined that a plurality of relatively poor decisions have been made. A determination of whether or not a plurality of relatively poor decisions have been made may be performed using predetermined criteria. For example, a decision may be determined to be poor when it does not result in the improvement that it was intended to achieve when implemented and/or when any improvement that it was intended to achieve was minimal, relative to a predetermined threshold. Moreover, the provided history information may be used to compare the decisions taken by different ML models or ML applications, e.g. before selecting a model or application to employ in a given context.
Figure 2 shows an example of a control apparatus for a communication system, for example to be coupled to and/or for controlling a station of an access system, such as a RAN node, e.g. a base station, gNB, a central unit of a cloud architecture or a node of a core network such as an MME or S-GW, a scheduling entity such as a spectrum management entity, or a server or host, for example an apparatus hosting an NRF, NWDAF, AMF, SMF, UDM/UDR, and so forth. The control apparatus may be integrated with or external to a node or module of a core network or RAN. In some examples, base stations comprise a separate control apparatus unit or module. In other examples, the control apparatus can be another network element, such as a radio network controller or a spectrum controller. The control apparatus 200 can be arranged to provide control on communications in the service area of the system. The apparatus 200 comprises at least one memory 201, at least one  data processing unit  202, 203 and an input/output interface 204. Via the interface the control apparatus can be coupled to a receiver and a transmitter of the apparatus. The receiver and/or the transmitter may be implemented as a radio front end or a remote radio head. For example, the control apparatus 200 or processor 201 can be configured to execute an appropriate software code to provide the control functions.
A possible wireless communication device will now be described in more detail with reference to Figure 3 showing a schematic, partially sectioned view of a communication device 300. Such a communication device is often referred to as user equipment (UE) or terminal. An appropriate mobile communication device may be provided by any device capable of sending and receiving radio signals. Non-limiting examples comprise a mobile station (MS) or mobile device such as a mobile phone or what is referred to as a ’smart phone’ , a computer provided with a wireless interface card or other wireless interface facility (e.g., USB dongle) , personal data assistant (PDA) or a tablet provided with wireless communication capabilities, or any combinations of these or the like. A mobile communication device may provide, for example, communication of data for carrying communications such as voice, electronic mail (email) , text message, multimedia and so on. Users may thus be offered and provided numerous services via their communication devices. Non-limiting examples of these services comprise two-way or multi-way calls, data communication or multimedia services or simply an access to a data communications network system, such as the Internet. Users may also be provided broadcast or multicast data. Non-limiting examples of the content comprise downloads, television and radio programs, videos, advertisements, various alerts and other information.
A wireless communication device may be for example a mobile device, that is, a device not fixed to a particular location, or it may be a stationary device. The wireless device may need human interaction for communication, or may not need human interaction for communication. As described herein, the terms UE or “user” are used to refer to any type of wireless communication device.
The wireless device 300 may receive signals over an air or radio interface 307 via appropriate apparatus for receiving and may transmit signals via appropriate apparatus for transmitting radio signals. In Figure 3, a transceiver apparatus is designated schematically by block 306. The transceiver apparatus 306 may be provided, for example, by means of a radio part and associated antenna arrangement. The antenna arrangement may be arranged internally or externally to the wireless device.
A wireless device is typically provided with at least one data processing entity 301, at least one memory 302 and other possible components 303 for use in software and hardware aided execution of tasks it is designed to perform, including control of access to and communications with access systems and other communication devices.  The data processing, storage and other relevant control apparatus can be provided on an appropriate circuit board and/or in chipsets. This feature is denoted by reference 304. The user may control the operation of the wireless device by means of a suitable user interface such as keypad 305, voice commands, touch sensitive screen or pad, combinations thereof or the like. A display 308, a speaker and a microphone can be also provided. Furthermore, a wireless communication device may comprise appropriate connectors (either wired or wireless) to other devices and/or for connecting external accessories, for example hands-free equipment, thereto.
Figure 4 shows a schematic representation of non-volatile memory media 400a (e.g. computer disc (CD) or digital versatile disc (DVD) ) and 400b (e.g. universal serial bus (USB) memory stick) storing instructions and/or parameters 402 which when executed by a processor allow the processor to perform one or more of the steps of the methods of Figure 19, and/or Figure 20, and/or methods otherwise described previously.
As provided herein, various aspects are described in the detailed description of examples and in the claims. In general, some examples may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although examples are not limited thereto. While various examples may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
The examples may be implemented by computer software stored in a memory and executable by at least one data processor of the involved entities or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any procedures, e.g., as in Figure 19, and/or Figure 20, and/or otherwise described previously, may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media (such as hard disk  or floppy disks) , and optical media (such as for example DVD and the data variants thereof, CD, and so forth) .
The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), gate level circuits and processors based on multicore processor architecture, as non-limiting examples.
Additionally or alternatively, some examples may be implemented using circuitry. The circuitry may be configured to perform one or more of the functions and/or method steps previously described. That circuitry may be provided in the base station and/or in the communications device and/or in a core network entity.
As used in this application, the term “circuitry” may refer to one or more or all of the following:
(a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry) ;
(b) combinations of hardware circuits and software, such as:
(i) a combination of analogue and/or digital hardware circuit (s) with software/firmware and
(ii) any portions of hardware processor (s) with software (including digital signal processor (s) ) , software, and memory (ies) that work together to cause an apparatus, such as the communications device or base station, to perform the various functions previously described; and
(c) hardware circuit (s) and/or processor (s) , such as a microprocessor (s) or a portion of a microprocessor (s) , that requires software (e.g., firmware) for operation, but the software may not be present when it is not needed for operation.
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or a portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example, an integrated device.
The foregoing description has provided by way of non-limiting examples a full and informative description of some examples. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the claims. All such and similar modifications of the teachings will nonetheless still fall within the scope of the claims.
In the above, different examples are described using, as an example of an access architecture to which the described techniques may be applied, a radio access architecture based on long term evolution advanced (LTE Advanced, LTE-A) or new radio (NR, 5G), without restricting the examples to such an architecture, however. The examples may also be applied to other kinds of communications networks having suitable means by adjusting parameters and procedures appropriately. Some examples of other options for suitable systems are the universal mobile telecommunications system (UMTS) radio access network (UTRAN), wireless local area network (WLAN or WiFi), worldwide interoperability for microwave access (WiMAX), personal communications services (PCS), wideband code division multiple access (WCDMA), systems using ultra-wideband (UWB) technology, sensor networks, mobile ad-hoc networks (MANETs) and Internet Protocol multimedia subsystems (IMS), or any combination thereof.
Figure 5 depicts examples of simplified system architectures only showing some elements and functional entities, all being logical units, whose implementation may differ from what is shown. The connections shown in Figure 5 are logical connections; the actual physical connections may be different. It is apparent to a person skilled in the art that the system typically comprises also other functions and structures than those shown in Figure 5.
The examples are not, however, restricted to the system given as an example but a person skilled in the art may apply the solution to other communication systems provided with necessary properties.
The example of Figure 5 shows a part of an exemplifying radio access network. For example, the radio access network may support sidelink communications described below in more detail.
Figure 5 shows  devices  500 and 502. The  devices  500 and 502 are configured to be in a wireless connection on one or more communication channels with a node 504. The node 504 is further connected to a core network 506. In one example, the node 504 may be an access node such as (e/g) NodeB serving devices in a cell. In one example, the node 504 may be a non-3GPP access node. The physical link from a device to a (e/g) NodeB is called uplink or reverse link and the physical link from the (e/g) NodeB to the device is called downlink or forward link. It should be appreciated that (e/g) NodeBs or their functionalities may be implemented by using any node, host, server or access point etc. entity suitable for such a usage.
A communications system typically comprises more than one (e/g) NodeB, in which case the (e/g) NodeBs may also be configured to communicate with one another over links, wired or wireless, designed for the purpose. These links may be used for signalling purposes. The (e/g) NodeB is a computing device configured to control the radio resources of the communication system it is coupled to. The (e/g) NodeB may also be referred to as a base station, an access point or any other type of interfacing device including a relay station capable of operating in a wireless environment. The (e/g) NodeB includes or is coupled to transceivers. From the transceivers of the (e/g) NodeB, a connection is provided to an antenna unit that establishes bi-directional radio links to devices. The antenna unit may comprise a plurality of antennas or antenna elements. The (e/g) NodeB is further connected to the core network 506 (CN or next generation core NGC). Depending on the deployed technology, the (e/g) NodeB is connected to a serving and packet data network gateway (S-GW + P-GW) or user plane function (UPF), for routing and forwarding user data packets and for providing connectivity of devices to one or more external packet data networks, and to a mobility management entity (MME) or access and mobility management function (AMF), for controlling access and mobility of the devices.
Examples of a device are a subscriber unit, a user device, a user equipment (UE), a user terminal, a terminal device, a mobile station, a mobile device, etc.
The device typically refers to a mobile or static device (e.g. a portable or non-portable computing device) that includes wireless mobile communication devices operating with or without a universal subscriber identification module (USIM), including, but not limited to, the following types of devices: mobile phone, smartphone, personal digital assistant (PDA), handset, device using a wireless modem (alarm or measurement device, etc.), laptop and/or touch screen computer, tablet, game console, notebook, and multimedia device. It should be appreciated that a device may also be a nearly exclusive uplink only device, of which an example is a camera or video camera loading images or video clips to a network. A device may also be a device having capability to operate in an Internet of Things (IoT) network, which is a scenario in which objects are provided with the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction, e.g. to be used in smart power grids and connected vehicles. The device may also utilise the cloud. In some applications, a device may comprise a user portable device with radio parts (such as a watch, earphones or eyeglasses) and the computation is carried out in the cloud.
The device illustrates one type of an apparatus to which resources on the air interface are allocated and assigned, and thus any feature described herein with a device may be implemented with a corresponding apparatus, such as a relay node. An example of such a relay node is a layer 3 relay (self-backhauling relay) towards the base station. The device (or, in some examples, a layer 3 relay node) is configured to perform one or more of user equipment functionalities.
Various techniques described herein may also be applied to a cyber-physical system (CPS) (a system of collaborating computational elements controlling physical entities). CPS may enable the implementation and exploitation of massive amounts of interconnected information and communications technology, ICT, devices (sensors, actuators, processors, microcontrollers, etc.) embedded in physical objects at different locations. Mobile cyber physical systems, in which the physical system in question has inherent mobility, are a subcategory of cyber-physical systems. Examples of mobile physical systems include mobile robotics and electronics transported by humans or animals.
Additionally, although the apparatuses have been depicted as single entities, different units, processors and/or memory units (not all shown in Figure 5) may be implemented.
5G enables using multiple input – multiple output (MIMO) antennas, many more base stations or nodes than the LTE (a so-called small cell concept), including macro sites operating in co-operation with smaller stations and employing a variety of radio technologies depending on service needs, use cases and/or spectrum available. 5G mobile communications supports a wide range of use cases and related applications including video streaming, augmented reality, different ways of data sharing and various forms of machine type applications (such as (massive) machine-type communications (mMTC), including vehicular safety, different sensors and real-time control). 5G is expected to have multiple radio interfaces, e.g. below 6 GHz or above 24 GHz, cmWave and mmWave, and also to be integrable with existing legacy radio access technologies, such as the LTE. Integration with the LTE may be implemented, at least in the early phase, as a system, where macro coverage is provided by the LTE and 5G radio interface access comes from small cells by aggregation to the LTE. In other words, 5G is planned to support both inter-RAT operability (such as LTE-5G) and inter-RI operability (inter-radio interface operability, such as below 6 GHz (cmWave) and above 24 GHz (cmWave and mmWave)). One of the concepts considered to be used in 5G networks is network slicing, in which multiple independent and dedicated virtual sub-networks (network instances) may be created within the same infrastructure to run services that have different requirements on latency, reliability, throughput and mobility.
The LTE network architecture is fully distributed in the radio and fully centralized in the core network. The low latency applications and services in 5G require bringing the content close to the radio, which leads to local break out and multi-access edge computing (MEC). 5G enables analytics and knowledge generation to occur at the source of the data. This approach requires leveraging resources that may not be continuously connected to a network, such as laptops, smartphones, tablets and sensors. MEC provides a distributed computing environment for application and service hosting. It also has the ability to store and process content in close proximity to cellular subscribers for faster response time. Edge computing covers a wide range of technologies such as wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing also classifiable as local cloud/fog computing and grid/mesh computing, dew computing, mobile edge computing, cloudlet, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented and virtual reality, data caching, Internet of Things (massive connectivity and/or latency critical), critical communications (autonomous vehicles, traffic safety, real-time analytics, time-critical control, healthcare applications).
The communication system is also able to communicate with other networks 512, such as a public switched telephone network, or a VoIP network, or the Internet, or a private network, or utilize services provided by them. The communication network may also be able to support the usage of cloud services, for example at least part of core network operations may be carried out as a cloud service (this is depicted in Figure 5 by “cloud” 514). This may also be referred to as Edge computing when performed away from the core network. The communication system may also comprise a central control entity, or the like, providing facilities for networks of different operators to cooperate for example in spectrum sharing.
The technology of Edge computing may be brought into a radio access network (RAN) by utilizing network function virtualization (NFV) and software defined networking (SDN). Using the technology of edge cloud may mean that access node operations are carried out, at least partly, in a server, host or node operationally coupled to a remote radio head or base station comprising radio parts. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts. Application of cloudRAN architecture enables RAN real time functions to be carried out at or close to a remote antenna site (in a distributed unit, DU 508) and non-real time functions to be carried out in a centralized manner (in a centralized unit, CU 510).
It should also be understood that the distribution of labour between core network operations and base station operations may differ from that of the LTE or even be non-existent. Some other technology advancements likely to be used are Big Data and all-IP, which may change the way networks are being constructed and managed. 5G (or new radio, NR) networks are being designed to support multiple hierarchies, where Edge computing servers can be placed between the core and the base station or node B (gNB). One example of Edge computing is MEC, which is defined by the European Telecommunications Standards Institute. It should be appreciated that MEC (and other Edge computing protocols) can be applied in 4G networks as well.
5G may also utilize satellite communication to enhance or complement the coverage of 5G service, for example by providing backhauling. Possible use cases are providing service continuity for machine-to-machine (M2M) or Internet of Things (IoT) devices or for passengers on board vehicles, mobile broadband (MBB), or ensuring service availability for critical communications and future railway/maritime/aeronautical communications. Satellite communication may utilise geostationary earth orbit (GEO) satellite systems, but also low earth orbit (LEO) satellite systems, in particular mega-constellations (systems in which hundreds of (nano) satellites are deployed). Each satellite in the mega-constellation may cover several satellite-enabled network entities that create on-ground cells. The on-ground cells may be created through an on-ground relay node or by a gNB located on-ground or in a satellite.
The depicted system is only an example of a part of a radio access system and, in practice, the system may comprise a plurality of (e/g) NodeBs, the device may have access to a plurality of radio cells and the system may also comprise other apparatuses, such as physical layer relay nodes or other network elements, etc. At least one of the (e/g) NodeBs may be a Home (e/g) nodeB. Additionally, in a geographical area of a radio communication system, a plurality of different kinds of radio cells as well as a plurality of radio cells may be provided. Radio cells may be macro cells (or umbrella cells) which are large cells, usually having a diameter of up to tens of kilometers, or smaller cells such as micro-, femto- or picocells. The (e/g) NodeBs of Figure 5 may provide any kind of these cells. A cellular radio system may be implemented as a multilayer network including several kinds of cells. Typically, in multilayer networks, one access node provides one kind of a cell or cells, and thus a plurality of (e/g) NodeBs are required to provide such a network structure.

Claims (18)

  1. An apparatus for a first function, the apparatus comprising means for performing:
    signalling, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; and
    receiving the requested information from the second function.
  2. An apparatus as claimed in claim 1, comprising means for performing:
    signalling, to the second function, a request to modify at least one parameter comprised in the request; and
    receiving an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
  3. An apparatus as claimed in any preceding claim, comprising means for performing:
    signalling, to the second function, a request to delete any reporting associated with the request; and
    receiving an indication that the second function will no longer report the requested information to the first function.
  4. An apparatus for a second function, the apparatus comprising means for performing:
    receiving, from a first function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision;
    obtaining the requested information; and
    signalling the requested information to the first function.
  5. An apparatus as claimed in claim 4, wherein the means for obtaining the requested information comprises means for performing:
    retrieving information related to said at least one decision from a storage function; and
    processing the retrieved information to form the requested information.
  6. An apparatus as claimed in any of claims 4 to 5, further comprising means for performing:
    in response to said receiving, determining that the second function is currently maintaining a first reporting instance in respect of said requested information; and
    updating the first instance to comprise an instruction to signal the requested information to the first function.
  7. An apparatus as claimed in claim 6, wherein the means for updating the first instance comprises means for performing:
    associating, in the first instance, an indication of a frequency at which the first function is requesting to be provided with the requested information with an identifier of the first function.
  8. An apparatus as claimed in any of claims 4 to 5, further comprising means for performing:
    in response to said receiving, determining that the second function is not currently maintaining a first reporting instance in respect of said requested information; and
    instantiating a second instance that comprises an instruction to signal the requested information to the first function.
  9. An apparatus as claimed in claim 8, wherein the means for performing determining that the second function is not currently maintaining a first reporting instance in respect of said requested information comprises means for performing at least one of:
    determining that the received request comprises an indication of a different machine learning model to any machine learning model that is a subject of the first reporting instance; and/or
    determining that the received request comprises an indication of a different context to any context that is a subject of the first reporting instance; and/or
    determining that the received request comprises an indication of a different termination condition to any termination condition that is a subject of the first reporting instance.
  10. An apparatus as claimed in any of claims 6 to 9, comprising means for performing:
    providing the first function with an identifier of the first instance and/or with an identifier of the second instance.
  11. An apparatus as claimed in any of claims 4 to 10, wherein the means for obtaining the requested information comprises means for performing:
    determining filter conditions for obtaining the requested information; and
    applying the determined filter conditions to collect the requested information.
  12. An apparatus as claimed in any of claims 4 to 11, comprising means for performing:
    receiving, from the first function, a request to modify at least one parameter comprised in the request; and
    signalling, to the first function, an indication that the second function has modified reporting of the requested information in accordance with the modification of the at least one parameter.
  13. An apparatus as claimed in any of claims 4 to 12, comprising means for performing:
    receiving, from the first function, a request to delete any reporting associated with the request; and
    signalling an indication that the second function will no longer report the requested information to the first function.
  14. An apparatus as claimed in any preceding claim, wherein the request comprises at least one of: an identifier of the machine learning model, an identifier of a machine learning application that comprises the machine learning model, an indication of context to be considered by the second function when providing the requested information, and/or an indication of a frequency at which the first function is requesting to be provided with the requested information.
  15. A method for an apparatus for a first function, the method comprising:
    signalling, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; and
    receiving the requested information from the second function.
  16. A method for an apparatus for a second function, the method comprising:
    receiving, from a first function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision;
    obtaining the requested information; and
    signalling the requested information to the first function.
  17. A computer program product that, when run on an apparatus for a first function, causes the apparatus to perform:
    signalling, to a second function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision; and
    receiving the requested information from the second function.
  18. A computer program product that, when run on an apparatus for a second function, causes the apparatus to perform:
    receiving, from a first function, a request for information of at least one decision made by a machine learning model and respective context associated with making said at least one decision;
    obtaining the requested information; and
    signalling the requested information to the first function.
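Purely by way of illustration, and not as a limitation of the claims, the following Python sketch shows one possible realisation of the request handling defined in claims 1, 4, 6 to 9 and 14. All class names, field names and identifier formats are assumptions made for this example only.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class DecisionHistoryRequest:
    # Request sent by the first function (consumer); the optional fields mirror
    # the parameters enumerated in claim 14.
    consumer_id: str
    ml_model_id: Optional[str] = None
    ml_app_id: Optional[str] = None
    context: Optional[str] = None
    reporting_frequency_s: Optional[int] = None
    termination_condition: Optional[str] = None

@dataclass
class ReportingInstance:
    # A reporting instance maintained by the second function (producer), with
    # the consumers subscribed to it and their requested reporting frequencies.
    instance_id: str
    ml_model_id: Optional[str]
    context: Optional[str]
    termination_condition: Optional[str]
    subscribers: Dict[str, Optional[int]] = field(default_factory=dict)

class SecondFunction:
    def __init__(self) -> None:
        self.instances: List[ReportingInstance] = []
        self._next_id = 0

    def handle_request(self, req: DecisionHistoryRequest) -> str:
        # Per claims 6 and 7: reuse an existing reporting instance when the
        # request targets the same model, context and termination condition,
        # and record the requesting consumer and its reporting frequency.
        for inst in self.instances:
            if (inst.ml_model_id == req.ml_model_id
                    and inst.context == req.context
                    and inst.termination_condition == req.termination_condition):
                inst.subscribers[req.consumer_id] = req.reporting_frequency_s
                return inst.instance_id
        # Per claims 8 and 9: otherwise instantiate a new reporting instance.
        inst = ReportingInstance(
            instance_id="ri-{}".format(self._next_id),
            ml_model_id=req.ml_model_id,
            context=req.context,
            termination_condition=req.termination_condition,
            subscribers={req.consumer_id: req.reporting_frequency_s},
        )
        self._next_id += 1
        self.instances.append(inst)
        return inst.instance_id

The identifier returned by handle_request corresponds to the instance identifier that claim 10 provides to the first function; the filtering, modification and deletion behaviour of claims 11 to 13 would be layered on top of this structure.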