WO2024092788A1 - Wireless communication system and method for determining ai/ml model during ue mobility - Google Patents


Info

Publication number
WO2024092788A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
node
wireless communication
gnb
communication system
Prior art date
Application number
PCT/CN2022/130072
Other languages
French (fr)
Inventor
Miao Qu
Original Assignee
Shenzhen Tcl New Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tcl New Technology Co., Ltd. filed Critical Shenzhen Tcl New Technology Co., Ltd.
Priority to PCT/CN2022/130072 priority Critical patent/WO2024092788A1/en
Publication of WO2024092788A1 publication Critical patent/WO2024092788A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W24/00Supervisory, monitoring or testing arrangements
    • H04W24/02Arrangements for optimising operational condition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models

Definitions

  • the present disclosure relates to the field of wireless communication systems, and more particularly, to a wireless communication system and a method for determining an artificial intelligence (AI) /machine learning (ML) model during user equipment (UE) mobility. More specifically, the target is to ensure that an AI/ML model works properly after handover completion.
  • the present disclosure may decrease latency and guarantee service continuity for UE mobility in an AI/ML-based wireless communication system.
  • AI artificial intelligence
  • ML machine learning
  • The study TR 37.817 explored wireless big data acquisition and applications for network automation and intelligence, including the definition of use cases and the processes and information interaction required by different use cases. Then, to investigate the potential benefits of AI/ML algorithms for the air interface, a new study item was approved for Rel-18 at the 3GPP RAN plenary meetings, led by RAN1. This study involves three use cases: CSI feedback enhancement, beam management, and positioning accuracy enhancement, and the related discussion is divided into two parts, common topics and use-case-specific topics. Two meetings later, the study from the RAN2 perspective started in Oct 2022; it mainly focuses on the procedures and signalling based on the RAN1 process and agreements.
  • Regarding model selection, in the RAN1#110bis meeting an initial agreement was made on the mechanism of model selection, for UE-sided models and two-sided models. More specifically, the decision on model selection can be made by the network and the UE.
  • the channel may become worse due to UE mobility, which may consequently result in a handover.
  • the system can use a traditional method or an AI/ML-based method in some use cases. If the AI/ML-based method is adopted, then in order to ensure service continuity, triggering model selection is a candidate method. However, the details of model selection for UE mobility have not been discussed. Furthermore, since the UE moves into a new serving gNB, the AI/ML model and/or its characteristics used in the old serving gNB may not be known to the new serving gNB, so UE mobility may involve model transfer. Besides, considering that the large size of an AI/ML model will generate signaling overhead, model reconfiguration also seems possible.
  • An object of the present disclosure is to propose a wireless communication system and a method for determining an artificial intelligence (AI) /machine learning (ML) model during user equipment (UE) mobility, to provide methods for determining the AI/ML model during UE mobility, which are not discussed in the current study, to guarantee service continuity during UE mobility, and to improve handover performance.
  • a method for determining an artificial intelligence (AI) /machine learning (ML) model in a wireless communication system based on information includes receiving the information from a first node and/or a second node by a third node, wherein the information is used for the third node to determine the AI/ML model in the wireless communication system.
  • a method of processing the AI/ML model by a first node includes the first node receiving a message from a third node and/or a second node, and the first node deciding whether to change the AI/ML model.
  • a wireless communication system comprises a memory, a transceiver, and a processor coupled to the memory and the transceiver.
  • the processor is configured to perform the above method.
  • a non-transitory machine-readable storage medium has stored thereon instructions that, when executed by a computer, cause the computer to perform the above method.
  • a chip includes a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the above method.
  • a computer readable storage medium in which a computer program is stored, causes a computer to execute the above method.
  • a computer program product includes a computer program, and the computer program causes a computer to execute the above method.
  • a computer program causes a computer to execute the above method.
  • FIG. 1 is a block diagram of one or more user equipments and a network/gNB of communication in a communication network system according to an embodiment of the present disclosure.
  • FIG. 2A is a flowchart illustrating a method for determining an artificial intelligence (AI) /machine learning (ML) model in a wireless communication system based on information according to an embodiment of the present disclosure.
  • FIG. 2B is a flowchart illustrating a method of processing about the AI/ML model by a first node according to an embodiment of the present disclosure.
  • FIG. 3A is a schematic diagram illustrating an example of locations of AI/ML model functionalities for a one-sided model according to an embodiment of the present disclosure.
  • FIG. 3B is a schematic diagram illustrating an example of locations of AI/ML model functionalities for a two-sided model according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram illustrating an example of locations of AI/ML model functionalities for two-sided model according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram illustrating an example of locations of AI/ML model functionalities for one sided model according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating an example of locations of AI/ML model functionalities for one-sided model according to an embodiment of the present disclosure.
  • FIG. 7 is a block diagram of a system for wireless communication according to an embodiment of the present disclosure.
  • FIG. 1 illustrates that, in some embodiments, one or more user equipments (UEs) 10 and a network/gNB 20 for communication in a communication network system 40 according to an embodiment of the present disclosure are provided.
  • the communication network system 40 includes one or more UEs 10 and a network/gNB 20.
  • the one or more UEs 10 may include a memory 12, a transceiver 13, and a processor 11 coupled to the memory 12 and the transceiver 13.
  • the network/gNB 20 may include a memory 22, a transceiver 23, and a processor 21 coupled to the memory 22 and the transceiver 23.
  • the processor 11 or 21 may be configured to implement the proposed functions, procedures and/or methods described in this description. Layers of the radio interface protocol may be implemented in the processor 11 or 21.
  • the memory 12 or 22 is operatively coupled with the processor 11 or 21 and stores a variety of information to operate the processor 11 or 21.
  • the transceiver 13 or 23 is operatively coupled with the processor 11 or 21, and the transceiver 13 or 23 transmits and/or receives a radio signal.
  • the processor 11 or 21 may include application-specific integrated circuit (ASIC) , other chipset, logic circuit and/or data processing device.
  • the memory 12 or 22 may include read-only memory (ROM) , random access memory (RAM) , flash memory, memory card, storage medium and/or other storage device.
  • the transceiver 13 or 23 may include baseband circuitry to process radio frequency signals.
  • modules (e.g., procedures, functions, and so on) that perform the functions described in this description can be stored in the memory 12 or 22 and executed by the processor 11 or 21.
  • the memory 12 or 22 can be implemented within the processor 11 or 21 or external to the processor 11 or 21 in which case those can be communicatively coupled to the processor 11 or 21 via various means as is known in the art.
  • FIG. 2A illustrates a method 200 for determining an artificial intelligence (AI) /machine learning (ML) model in a wireless communication system based on information according to an embodiment of the present disclosure.
  • the method 200 includes: a block 202, receiving the information from a first node and/or a second node by a third node, wherein the information is used for the third node to determine the AI/ML model in the wireless communication system.
  • the information is sent from the first node and/or the second node to the third node and comprises a message including the AI/ML model, and/or AI/ML model related assistant information.
  • AI/ML model is transmitted from the second node to the third node; and/or from the first node to the third node.
  • the AI/ML model related assistant information contains one of the following: a model ID; a model group ID; or parameters and/or a structure of the AI/ML model.
  • the third node determines one AI/ML model.
  • the third node determines one AI/ML model if the third node receives parameters and/or structure of AI/ML model.
  • the third node determines the AI/ML model by model selection, and/or model reconfiguration, and/or model update.
  • the model correlation indicates a range for some parameters and/or structure of AI/ML models.
  • the model correlation is configured by a network, or fixed, or indicated by the second node or the first node.
  • the model reconfiguration and/or update comprises the following options: the third node decides by itself; or it is based on model correlation.
  • FIG. 2B is a flowchart illustrating a method of processing about the AI/ML model by a first node according to an embodiment of the present disclosure.
  • the method 300 includes: a block 302, the first node receiving a message from a third node and/or a second node, and the first node deciding whether to change the AI/ML model.
  • the message comprises at least one of the following: a first indicator, which indicates whether the AI/ML model has changed, and based on the first indicator the UE is able to know whether a target gNB uses a new AI/ML model or not; an AI/ML model ID, which uniquely identifies the AI/ML model; and a second indicator, which indicates whether the UE uses the old AI/ML model or the new AI/ML model.
  • the first node changes the AI/ML model or not as follows: if the target gNB sends the second indicator, the UE changes to the new AI/ML model or not according to the second indicator; or if the target gNB does not send the second indicator but the first node knows only from the first indicator that the AI/ML model has changed in the target gNB, the first node decides by itself whether to change to the new AI/ML model.
  • the wireless communication system 40 is configured to perform the above method 200 or the above method 300.
  • the wireless communication system 40 is also configured to perform the methods in some of the following embodiments.
  • the locations of the AI/ML model functionalities are for a one-sided model or a two-sided model
  • the wireless communication system comprises a base station and a UE.
  • the AI/ML model functionalities comprise an AI/ML model training function and an AI/ML model inference function.
  • the AI/ML model training function is deployed in the base station, and the AI/ML model inference function is deployed in the UE; or the AI/ML model training function and the AI/ML model inference function are both deployed in the base station or the UE.
  • the AI/ML model training function is deployed in the base station or in the UE and the base station, and the AI/ML model inference function is deployed in the UE and the base station.
  • a source base station or the UE is able to send a message including AI/ML model related assistant information, and/or transfer an old used AI/ML model to a target base station.
  • the source gNB decides to initiate the handover procedure for the UE based on related AI/ML model report messages, or the UE is notified that it is moving to another base station; the source gNB or the UE sends the message including the AI/ML model related assistant information to the target base station, and then the target gNB selects an AI/ML model or reconfigures an AI/ML model based on the received message, so that the UE can continue working based on the AI/ML operation.
  • the received message comprises a model ID for model selection, a model group ID for model selection, a model correlation for model selection, and/or a model reconfiguration.
  • the source base station decides to initiate the handover procedure for the UE based on related AI/ML model report messages, or the UE is notified that it is moving to another base station; the source base station or the UE sends the AI/ML model to the target gNB, and then the target base station uses the old used AI/ML model, so that the UE can continue working based on the AI/ML operation.
  • a new message is used for transferring the AI/ML model, and/or an enhancement of a legacy message is used for transferring the AI/ML model.
  • a partial AI/ML model is sent from the source base station or the UE to the target base station.
  • the target gNB sends information about the situation of the AI/ML model to the source base station, the source base station forwards the information to the UE, and the UE makes a decision according to the information.
  • the target gNB sends information about the situation of the AI/ML model to the UE, and the UE makes a decision according to the information.
  • the situation of the AI/ML model comprises that: the target base station selects a new AI/ML model, the target base station reconfigures an AI/ML model, or the target base station receives the old used AI/ML model.
  • the information comprises at least one of the following: one indicator which indicates whether the AI/ML model has changed, an AI/ML model ID which uniquely identifies the AI/ML model, and one indicator which indicates whether the UE uses the old AI/ML model or the new AI/ML model.
  • the AI/ML model related assistant information comprises at least one of the following: a UE ID, which uniquely identifies the UE; an AI/ML model ID, which uniquely identifies the AI/ML model; an AI/ML model group ID, which identifies a group of AI/ML models; a type of AI/ML model; a complexity of the AI/ML model; a generalization of the AI/ML model; and KPIs of the AI/ML model.
  • This invention relates to a wireless communication system in which AI/ML algorithms are enabled over the RAN. The relevant new SID on AI/ML for the NR air interface in Rel-18 was approved at the 3GPP RAN plenary meeting #94e in Dec. 2021, and the related discussion is led by RAN1.
  • the study from the RAN2 perspective started in Oct 2022; it mainly focuses on the procedures and signalling based on the RAN1 process.
  • This invention provides methods for determining the AI/ML model during UE mobility. More specifically, the target is to ensure that an AI/ML model works properly after handover completion.
  • the provided methods may decrease latency and guarantee service continuity for UE mobility in an AI/ML-based wireless communication system.
  • methods for determining an AI/ML model for the serving gNB are necessary to support service continuity, and may include the following aspects for different scenarios: model transfer, model selection, and model (re) configuration.
  • here, model (re) configuration may refer to either model configuration or model reconfiguration.
Source gNB is used to denote the old serving gNB.
Target gNB is used to denote the new serving gNB.
  • this disclosure provides the methods for determining an AI/ML model: Methods of AI/ML model selection, transfer, and/or reconfiguration; and transmission and configuration of AI/ML model related assistant information.
  • the effect of the invention is to provide methods for determining the AI/ML model during UE mobility, which are not discussed in the current study, to guarantee service continuity during UE mobility, and to improve handover performance.
  • FIG. 3A and FIG. 3B illustrate locations of AI/ML model functionalities for different scenarios.
  • FIG. 3A shows locations of AI/ML model functionalities for a one-sided model.
  • FIG. 3B shows locations of AI/ML model functionalities for a two-sided model.
  • When a system initiates a handover procedure because of UE mobility, in order to maintain service continuity for the AI/ML operation in the wireless communication, a source gNB or a UE is able to send a message including AI/ML model related assistant information, and/or transfer the old used AI/ML model to a target gNB.
  • the following methods can be considered.
Method 1: Assistant information is sent from the source gNB or UE to the target gNB
  • When the source gNB decides to initiate a handover procedure for the UE based on related AI/ML model report messages, or the UE is notified that it is moving to another gNB, the source gNB or UE may send a message including the AI/ML model related assistant information to the target gNB; the target gNB may then select an AI/ML model or reconfigure an AI/ML model based on the received message, so that the UE can continue working based on the AI/ML operation.
  • the target gNB receives the message including the AI/ML model ID, which is a globally unique indicator of the AI/ML model. The target gNB then searches the AI/ML models on its own side according to the AI/ML model ID. If the target gNB side has an AI/ML model with the same model ID as the old used AI/ML model, it means that the target gNB side has the same AI/ML model as the source gNB, which is also used on the UE side. The target gNB then selects this AI/ML model to be used for the UE to continue working based on the AI/ML operation.
  • the received message for the target gNB is provided by the source gNB and/or the UE.
  • the AI/ML model can be trained in other entities, such as a third-party server, another gNB, OAM, OTT, and so on. Since AI/ML model training results in large resource consumption, to avoid repeated training, the trained AI/ML model can be transferred between different entities in some cases. Hence, it is possible that an AI/ML model exists in different entities.
  • the model ID can be a relative ID or an absolute ID; in either case, the target gNB can identify a unique AI/ML model based on the model ID.
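  • The model-ID-based selection above (Alternative 1) can be sketched as follows. This is an illustrative sketch only; the function name and the dictionary-based model store are assumptions for the example, not part of the disclosure or any 3GPP specification.

```python
# Hypothetical sketch of Alternative 1: the target gNB looks up the AI/ML model
# whose globally unique model ID matches the one reported by the source gNB or UE.

def select_model_by_id(local_models: dict, reported_model_id: str):
    """local_models maps model ID -> model object held by the target gNB.

    Returns the matching model, or None if the target gNB does not hold it
    (in which case model transfer or reconfiguration would be needed).
    """
    return local_models.get(reported_model_id)

# Usage: the target gNB holds two models; the old serving gNB reports "csi-v2".
target_models = {"csi-v1": "model-A", "csi-v2": "model-B"}
selected = select_model_by_id(target_models, "csi-v2")  # -> "model-B"
```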
  • the target gNB receives the message including the AI/ML model group ID; AI/ML models with the same specific function and/or the same characteristics share the same AI/ML model group ID. Then, the target gNB selects an AI/ML model on its own side according to the AI/ML model group ID. If the target gNB side has an AI/ML model with the same model group ID as the old used AI/ML model, it means that the target gNB side has AI/ML models with the same specific function and/or the same characteristics as the old used AI/ML model, which resides in the source gNB and is also used on the UE side. Once an AI/ML model located in the target gNB has the same model group ID as the old used AI/ML model, it is regarded as a candidate AI/ML model. Then, the target gNB selects an AI/ML model among the candidate AI/ML models to be used for the UE to continue working based on the AI/ML operation.
  • the selection rule is described as follows: If there is only one candidate AI/ML model, the candidate AI/ML model can be regarded as the selected AI/ML model directly. If there is more than one candidate AI/ML model, the target gNB can select an AI/ML model among the candidate AI/ML models randomly.
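  • The group-ID-based selection rule above (Alternative 2) can be sketched as follows; the function name and tuple-based model list are illustrative assumptions for this example, not specified signaling.

```python
import random

# Hypothetical sketch of Alternative 2: models sharing a group ID (same function
# and/or characteristics) are candidates; one is selected, randomly if several.

def select_model_by_group(local_models: list, reported_group_id: str, rng=random):
    """local_models is a list of (model_id, group_id) tuples held by the target gNB."""
    candidates = [m for m, g in local_models if g == reported_group_id]
    if not candidates:
        return None                   # no model with the reported group ID
    if len(candidates) == 1:
        return candidates[0]          # the single candidate is selected directly
    return rng.choice(candidates)     # otherwise pick randomly among candidates

# Usage: two models share group "grp-2"; either may be chosen at random.
models = [("beam-a", "grp-1"), ("beam-b", "grp-2"), ("beam-c", "grp-2")]
chosen = select_model_by_group(models, "grp-2")  # "beam-b" or "beam-c"
```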
  • the received message for the target gNB is provided by the source gNB and/or the UE.
  • the target gNB has a model correlation for model selection in the handover procedure, which indicates a range for some parameters and/or structure of the AI/ML model. If the parameters and/or structure of an AI/ML model satisfy the model correlation, it is regarded as a candidate AI/ML model. More narrowly, the source gNB or UE provides the old used AI/ML model related parameters or structure, such as AI/ML model complexity, generalization, KPIs and so on, to the target gNB; this information is regarded as a baseline/reference point for the model correlation. For instance, if the provided number of layers of the old used AI/ML model is 5, then the baseline/reference point for the model layers is 5 for the model correlation.
  • the model correlation for different parameters and structures can be calculated in a variety of ways, such as adding, subtracting, multiplying, dividing, and other methods, and it may be one or a combination of several of these.
  • the model correlation can be configured by the network, or fixed, or indicated by the source gNB or UE. Because there may be more than one candidate AI/ML model, the selection rule is described as follows: If there is only one candidate AI/ML model, the candidate AI/ML model can be regarded as the selected AI/ML model directly. If there is more than one candidate AI/ML model, the target gNB can select an AI/ML model among the candidate AI/ML models randomly. The received message for the target gNB is provided by the source gNB and/or the UE.
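  • The model-correlation check above (Alternative 3) can be sketched as follows: the old model's reported parameters act as a baseline, and the correlation defines an allowed range around it. The specific parameters and tolerance values here are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch of Alternative 3: an AI/ML model is a candidate if its
# parameters fall within the correlation range around the reported baseline.

def within_correlation(candidate: dict, baseline: dict, tolerance: dict) -> bool:
    """True if every checked parameter lies within baseline +/- tolerance."""
    return all(
        abs(candidate[k] - baseline[k]) <= tolerance[k]
        for k in tolerance
    )

baseline = {"layers": 5, "flops_m": 120}      # reported by the source gNB or UE
tolerance = {"layers": 1, "flops_m": 30}      # configured, fixed, or indicated
models = {
    "m1": {"layers": 6, "flops_m": 140},      # within range -> candidate
    "m2": {"layers": 9, "flops_m": 300},      # outside range -> rejected
}
candidates = [m for m, p in models.items() if within_correlation(p, baseline, tolerance)]
# candidates == ["m1"]
```

  • The subtraction-based range here is only one option; as the disclosure notes, the correlation may also be multiplicative or a combination of operations.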
  • the source gNB or UE sends the AI/ML model related parameters and/or structure, such as AI/ML model complexity, generalization, KPIs and so on, to the target gNB, which enables the target gNB to reconfigure the AI/ML model or change some part of the AI/ML model residing on its own side according to the provided information. For instance, if the provided number of layers is 5, and the AI/ML models residing at the target gNB do not satisfy this number of layers, but the target gNB has an AI/ML model whose number of layers is 6, then the target gNB reconfigures the number of AI/ML model layers from 6 to 5.
  • Since the target gNB may hold many AI/ML models, how to determine which AI/ML model should be reconfigured? The options are as follows: Target gNB implementation, i.e., the target gNB decides by itself. Based on model correlation: the model correlation for model selection is set, and the target gNB selects an AI/ML model among the candidate AI/ML models to reconfigure or change. For the details of the model range, refer to Alternative 3.
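  • The layers-from-6-to-5 reconfiguration example above can be sketched as follows; picking the model with the closest layer count is an assumed target-gNB-implementation policy, and the dict-based model representation is illustrative only.

```python
# Hypothetical sketch of model reconfiguration at the target gNB: if no held
# model matches the reported baseline exactly, reconfigure the closest one.

def select_or_reconfigure(models: list, required_layers: int) -> dict:
    """models: list of dicts with 'model_id' and 'layers'.

    If a model already has the required layer count, select it directly;
    otherwise reconfigure the model with the closest layer count
    (an assumed target-gNB-implementation choice).
    """
    exact = [m for m in models if m["layers"] == required_layers]
    if exact:
        return exact[0]
    closest = min(models, key=lambda m: abs(m["layers"] - required_layers))
    adapted = dict(closest)
    adapted["layers"] = required_layers   # e.g. reconfigure from 6 to 5 layers
    return adapted

# Usage: the target gNB holds only a 6-layer model, the baseline requires 5.
held = [{"model_id": "pos-x", "layers": 6}]
result = select_or_reconfigure(held, 5)   # {"model_id": "pos-x", "layers": 5}
```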
  • the received message for the target gNB is provided by the source gNB and/or the UE.
  • the message may include at least one of the following assistant information: a UE ID, which uniquely identifies the UE; an AI/ML model ID, which uniquely or relatively identifies the AI/ML model; an AI/ML model group ID, which identifies a group of AI/ML models, where the AI/ML models have similar characteristics for the same scenario; the type of AI/ML model, which indicates which kind of AI/ML model it is; the complexity of the AI/ML model, such as FLOPs and size of parameters; the generalization of the AI/ML model, which means the capability of the AI/ML model to work in different use cases; KPIs of the AI/ML model, such as rate, precision, power consumption, reliability and so on; and the format of the AI/ML model, which is the format for AI/ML model transfer and can be a proprietary format, an open format (ONNX) , or a new format specified in 3GPP.
Method 2: The AI/ML model is sent from the source gNB to the target gNB
  • When a source gNB decides to initiate a handover procedure for a UE based on related AI/ML model report messages, or the UE is notified that it is moving to another gNB, the source gNB or the UE may send the AI/ML model to the target gNB directly. The target gNB then uses the old used AI/ML model for the UE to continue working based on the AI/ML operation. For how to transmit the AI/ML model, the following alternatives can be considered.
  • Xn message: For transmitting the AI/ML model between the source gNB and target gNB, a new Xn message between the source gNB and target gNB should be introduced.
  • the Xn message may include at least one of the following assistant information:
  • UE ID, which uniquely identifies the UE
  • AI/ML model ID, which uniquely or relatively identifies the AI/ML model
  • AI/ML model group ID, which identifies a group of AI/ML models, where the AI/ML models have similar characteristics for the same scenario
  • Type of AI/ML model, which indicates which kind of AI/ML model it is
  • Complexity of AI/ML model, such as FLOPs and size of parameters
  • Generalization of AI/ML model, which means the capability of the AI/ML model to work in different use cases
  • KPIs of AI/ML model, such as rate, precision, power consumption, reliability and so on
  • Format of AI/ML model, which is the format for AI/ML model transfer; it can be a proprietary format, an open format (ONNX) , or a new format specified in 3GPP.
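  • The assistant-information fields listed above could be modeled as a structured container, sketched below. The field names, types, and defaults are illustrative assumptions for the example; the actual encoding would be a 3GPP-defined message, not this Python structure.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical container mirroring the assistant-information fields above.
@dataclass
class ModelAssistantInfo:
    ue_id: str                              # uniquely identifies the UE
    model_id: Optional[str] = None          # unique or relative AI/ML model ID
    model_group_id: Optional[str] = None    # identifies a group of similar models
    model_type: Optional[str] = None        # which kind of AI/ML model it is
    complexity_flops: Optional[int] = None  # e.g. FLOPs, parameter size
    generalization: Optional[str] = None    # capability across use cases
    kpis: dict = field(default_factory=dict)  # rate, precision, power, ...
    model_format: str = "open-ONNX"         # proprietary, open (ONNX), or new 3GPP format

# Usage: only the fields the sender knows need to be populated.
info = ModelAssistantInfo(ue_id="ue-42", model_id="csi-v2",
                          kpis={"precision": 0.93})
```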
  • the RRC message may include at least one of the following assistant information: a UE ID, which uniquely identifies the UE; an AI/ML model ID, which uniquely or relatively identifies the AI/ML model; an AI/ML model group ID, which identifies a group of AI/ML models, where the AI/ML models have similar characteristics for the same scenario; the type of AI/ML model, which indicates which kind of AI/ML model it is; the complexity of the AI/ML model, such as FLOPs and size of parameters; the generalization of the AI/ML model, which means the capability of the AI/ML model to work in different use cases; KPIs of the AI/ML model, such as rate, precision, power consumption, reliability and so on; and the format of the AI/ML model, which is the format for AI/ML model transfer and can be a proprietary format, an open format (ONNX) , or a new format specified in 3GPP.
  • Case 1: For transmitting the AI/ML model between the source gNB and target gNB, the legacy Xn message between the source gNB and target gNB should be reused. The Handover Request is the first Xn message from the source gNB to the target gNB when the source gNB decides to perform the handover. Hence, this Xn message should introduce some AI/ML related information.
  • the Handover Request message may include at least one of the following assistant information: an AI/ML model ID, which uniquely identifies the AI/ML model; an AI/ML model group ID, which identifies a group of AI/ML models, where the AI/ML models have similar characteristics for the same scenario; the type of AI/ML model, which indicates which kind of AI/ML model it is; the complexity of the AI/ML model, such as FLOPs and size of parameters; the generalization of the AI/ML model, which means the capability of the AI/ML model to work in different use cases; KPIs of the AI/ML model, such as rate, precision, power consumption, reliability and so on; and the format of the AI/ML model, which is the format for AI/ML model transfer and can be a proprietary format, an open format (ONNX) , or a new format specified in 3GPP.
  • Case 2: For transmitting the AI/ML model between the UE and target gNB, the legacy RRC message between the UE and target gNB should be reused. The RRC complete message, which is the first message sent from the UE to the target gNB, should be enhanced. Hence, this RRC message should introduce some AI/ML related information.
  • the RRC complete message may include at least one of the following assistant information: a UE ID, which uniquely identifies the UE; an AI/ML model ID, which uniquely identifies the AI/ML model; an AI/ML model group ID, which identifies a group of AI/ML models, where the AI/ML models have similar characteristics for the same scenario; the type of AI/ML model, which indicates which kind of AI/ML model it is; the complexity of the AI/ML model, such as FLOPs and size of parameters; the generalization of the AI/ML model, which means the capability of the AI/ML model to work in different use cases; KPIs of the AI/ML model, such as rate, precision, power consumption, reliability and so on; and the format of the AI/ML model, which is the format for AI/ML model transfer and can be a proprietary format, an open format (ONNX) , or a new format specified in 3GPP.
Method 3: A partial AI/ML model is sent from the source gNB or UE to the target gNB
  • the source gNB or UE sends a part of the old used AI/ML model to the target gNB, such as the backbone, or a part of the main AI/ML model structure and parameters, such as layers, activation functions, the optimizer, AI/ML model complexity, generalization, KPIs, the type of AI/ML model, and so on.
  • the target gNB reconfigures or changes the AI/ML model based on the received main part of the AI/ML model.
  • When the handover happens, based on the above discussion and according to the different information, there may be three situations for the AI/ML model on the target gNB side: (1) the target gNB selects a new AI/ML model; (2) the target gNB reconfigures an AI/ML model; (3) the target gNB receives the old used AI/ML model. In (1) and (2) , the AI/ML model which will be used in the system is fully or partially changed, so the AI/ML model is a new AI/ML model to some extent. In (3) , the AI/ML model which will be used in the system is not changed, so the AI/ML model is the old AI/ML model to some extent.
  • the target gNB will inform the UE of the situation of the AI/ML model, which may notify the UE whether the AI/ML model at the target gNB has changed or not. Furthermore, if the target gNB uses a new AI/ML model, it may also ask the UE whether it will change the used AI/ML model; or just inform the UE to change the used AI/ML model; or leave it to the UE whether to use the old AI/ML model or the new AI/ML model. To achieve this, the following alternatives should be considered. There are two cases for how the target gNB informs the UE: one is that the target gNB can inform the UE via the source gNB, and the other is that the target gNB notifies the UE directly.
  • The target gNB will send the information about the situation of the AI/ML model to the source gNB, and the source gNB forwards this information to the UE.
  • The UE makes a decision according to this information.
  • the information can be at least one of the following:
  • One indicator which indicates whether the AI/ML model changed, so that the UE, based on the indicator, can know whether the target gNB uses a new AI/ML model or not. For instance, if the indicator is set to 0, it means that the target gNB uses the old AI/ML model, and if the indicator is set to 1, it means that the target gNB uses the new AI/ML model;
  • AI/ML model ID, which uniquely identifies the AI/ML model.
  • the UE can also know whether the AI/ML model changed or not; moreover, it also knows the ID of the new AI/ML model;
  • One indicator which indicates whether the UE uses the old AI/ML model or the new AI/ML model. For instance, if the indicator is set to 0, it means the UE can use the old AI/ML model, and if the indicator is set to 1, it means the UE should use the new AI/ML model. In another instance, if the indicator is configured, it means that the UE uses the old AI/ML model, and vice versa.
  • the above message between the source gNB and the target gNB may be included in the handover command which is in the Xn message HANDOVER REQUEST ACKNOWLEDGE, or may be included in a new dedicated RRC message which is in the Xn message HANDOVER REQUEST ACKNOWLEDGE, or may be included in a new dedicated RRC message which is in a new Xn message; the above message between the source gNB and the UE may be included in the RRC Reconfiguration message, or may be included in a new dedicated RRC message.
  • The target gNB will send the information about the situation of the AI/ML model to the UE directly. More specifically, the procedure may occur after the UE accesses the target gNB, and the UE makes a decision according to this information.
  • the information can be at least one of the following:
  • One indicator which indicates whether the AI/ML model changed, so that the UE, based on the indicator, can know whether the target gNB uses a new AI/ML model or not. For instance, if the indicator is set to 0, it means that the target gNB uses the old AI/ML model, and if the indicator is set to 1, it means that the target gNB uses the new AI/ML model;
  • AI/ML model ID, which uniquely identifies the AI/ML model.
  • the UE can also know whether the AI/ML model changed or not; moreover, it also knows the ID of the new AI/ML model;
  • One indicator which indicates whether the UE uses the old AI/ML model or the new AI/ML model. For instance, if the indicator is set to 0, it means the UE can use the old AI/ML model, and if the indicator is set to 1, it means the UE should use the new AI/ML model. In another instance, if the indicator is configured, it means that the UE uses the old AI/ML model, and vice versa.
  • the UE receives the information which is transmitted from the target gNB; if the target gNB sends the explicit indicator, the UE will, based on the explicit indicator, require the new AI/ML model or not; if the gNB does not send the explicit indicator and the UE only knows that the AI/ML model is changed in the target gNB, the UE may decide by itself whether to require the new AI/ML model or not.
  • the above message between the UE and the target gNB may be included in the RRC Reconfiguration message, or may be included in a new dedicated RRC message.
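The indicator handling described above can be sketched as a UE-side decision function. The 0/1 encodings follow the examples in the text; the function name and the UE-preference flag are hypothetical:

```python
from typing import Optional

def ue_decide_model(change_indicator: int,
                    use_new_indicator: Optional[int] = None,
                    ue_prefers_new: bool = False) -> str:
    """Sketch of the UE-side decision on which AI/ML model to use.

    change_indicator: 0 = target gNB uses the old model, 1 = a new model.
    use_new_indicator: explicit command (0 = keep old, 1 = switch to new),
                       or None when the network leaves the choice to the UE.
    ue_prefers_new: the UE's own preference, consulted only when no
                    explicit indicator is sent.
    """
    if change_indicator == 0:
        return "old"  # nothing changed at the target gNB
    if use_new_indicator is not None:
        return "new" if use_new_indicator == 1 else "old"
    # no explicit indicator: the UE decides by itself
    return "new" if ue_prefers_new else "old"
```

The same function covers both notification paths (via the source gNB or directly from the target gNB), since only the carrying message differs, not the decision logic.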
  • the AI/ML model training function is deployed on the gNB side and the UE side, while the AI/ML model inference function resides within the gNB side and the UE side; more specifically, the scenario can be a two-sided model for joint training and separate training; the locations of AI/ML model functionalities for the two-sided model are depicted in FIG. 4.
  • the source gNB is able to send a message including AI/ML model related assistant information, and/or transfer the old used AI/ML model which resides at the source gNB to the target gNB.
  • Regarding the AI/ML model related assistant information: since there are two parts of the training model on different sides (gNB and UE), there is no model transfer between the UE and the gNB. The following methods can be considered.
  • Method 1: Assistant information is sent from the source gNB to the target gNB
  • when the source gNB decides to initiate a handover procedure for the UE based on related AI/ML model report messages, the source gNB may send a message including the AI/ML model related assistant information to the target gNB.
  • the target gNB may select an AI/ML model or reconfigure an AI/ML model based on the received message, for the UE to continue working based on the AI/ML operation.
  • Method 2: The AI/ML model is sent from the source gNB to the target gNB
  • the source gNB may send the AI/ML model to the target gNB directly. Then, the target gNB uses the old used AI/ML model for the UE to continue working based on the AI/ML operation. For how to transmit the AI/ML model, the following alternatives can be considered. The alternatives are the same as in embodiment 1, method 2, but it is noted that the assistant information is only provided by the source gNB, no UE is involved, and the AI/ML model is only carried in an Xn message.
  • When the handover happens, based on the above discussion and according to the different information, there may be three situations of AI/ML models at the target gNB side: (1) the target gNB selects a new AI/ML model; (2) the target gNB reconfigures an AI/ML model; (3) the target gNB receives the old used AI/ML model. In (1) and (2), the AI/ML model which will be used in the system is fully or partially changed, so the AI/ML model is a new AI/ML model to some extent. In (3), the AI/ML model which will be used in the system is not changed, so the AI/ML model is the old AI/ML model to some extent.
  • the target gNB will inform the UE of the situation of the AI/ML model, which may notify the UE whether the AI/ML model at the target gNB changed or not. To achieve this, the following alternatives should be considered. Actually, there are two cases for how the target gNB informs the UE: one is that the target gNB informs the UE via the source gNB, and the other is that the target gNB notifies the UE directly.
  • The target gNB will send the information about the situation of the AI/ML model to the source gNB, and the source gNB forwards this information to the UE.
  • the information can be at least one of the following:
  • One indicator which indicates whether the AI/ML model changed, so that the UE, based on the indicator, can know whether the target gNB uses a new AI/ML model or not. For instance, if the indicator is set to 0, it means that the target gNB uses the old AI/ML model, and if the indicator is set to 1, it means that the target gNB uses the new AI/ML model; and
  • AI/ML model ID, which uniquely identifies the AI/ML model.
  • the UE can also know whether the AI/ML model changed or not; moreover, it also knows the ID of the new AI/ML model.
  • the UE receives the information which is forwarded by the source gNB, and knows whether the AI/ML model is changed or not in the target gNB. Then the UE notifies the target gNB whether it knows the AI/ML model situation or not.
  • the above message between the source gNB and the target gNB may be included in the handover command which is in the Xn message HANDOVER REQUEST ACKNOWLEDGE, or may be included in a new dedicated RRC message which is in the Xn message HANDOVER REQUEST ACKNOWLEDGE, or may be included in a new dedicated RRC message which is in a new Xn message; the above message between the UE and the target gNB may be included in the RRC Reconfiguration message, or the RRC complete message, or may be included in a new dedicated RRC message.
  • The target gNB will send the information about the situation of the AI/ML model to the UE directly. More specifically, the procedure may occur after the UE accesses the target gNB, and the UE makes a decision according to this information.
  • the information can be at least one of the following:
  • One indicator which indicates whether the AI/ML model changed, so that the UE, based on the indicator, can know whether the target gNB uses a new AI/ML model or not. For instance, if the indicator is set to 0, it means that the target gNB uses the old AI/ML model, and if the indicator is set to 1, it means that the target gNB uses the new AI/ML model; and
  • AI/ML model ID, which uniquely identifies the AI/ML model.
  • the UE can also know whether the AI/ML model changed or not; moreover, it also knows the ID of the new AI/ML model.
  • the UE receives the information which is transmitted from the target gNB, and knows whether the AI/ML model is changed or not in the target gNB. Then, the UE notifies the target gNB whether it knows the AI/ML model situation or not.
  • the above message between the UE and the target gNB may be included in the RRC Reconfiguration message, or may be included in a new dedicated RRC message.
  • the AI/ML model training function is deployed on the gNB side, while the AI/ML model inference function also resides within the gNB side; more specifically, the scenario can be a one-sided model; the locations of AI/ML model functionalities for the one-sided model are depicted in FIG. 5.
  • the methods for determining the AI/ML model and for notification of the UE are the same as in embodiment 3.
  • the AI/ML model training function is deployed on the UE side, while the AI/ML model inference function also resides within the UE side; more specifically, the scenario can be a one-sided model; the locations of AI/ML model functionalities for the one-sided model are depicted in FIG. 6.
  • when the UE moves to the target gNB, since both the model training function and the model inference function are located in the UE, if the current AI/ML model is able to work properly, there is no need for model transfer. However, for other AI/ML functionalities, such as data collection, it may need to collect the data from the target gNB. Besides, the AI/ML model may also be refined or reconfigured, and the target gNB is one of the entities involved. Hence, the target gNB may need to know the related assistant information of the AI/ML model.
  • the UE will send the message which includes the AI/ML model related assistant information to the target gNB after the UE accesses the target gNB; the message may include at least one of the following assistant information:
  • UE ID, which uniquely identifies the UE
  • AI/ML model ID, which uniquely identifies the AI/ML model
  • AI/ML model group ID, which identifies a group of AI/ML models, where the AI/ML models have similar characteristics for the same scenario
  • Type of AI/ML model, which indicates what kind of AI/ML model it is
  • Complexity of AI/ML model, such as FLOPs or size of parameters
  • Generalization of AI/ML model, which means the capability of the AI/ML model to work in different use cases
  • KPIs of AI/ML model, such as rate, precision, power consumption, reliability and so on
  • Format of AI/ML model, which is the format for AI/ML model transfer; it can be a proprietary format, an open format (ONNX), or a new format specified in 3GPP.
  • the above message between the UE and the target gNB may be included in the RRC complete message, or may be included in a new dedicated RRC message.
  • this disclosure provides the methods for determining an AI/ML model: methods of AI/ML model selection, transfer, and/or reconfiguration; and transmission and configuration of AI/ML model related assistant information.
  • the invention effect is to provide the methods for determining the AI/ML model during UE mobility, which is not discussed in the current study, to guarantee the service continuity during UE mobility and improve handover performance.
  • FIG. 7 is a block diagram of an example system 700 for wireless communication according to an embodiment of the present disclosure. Embodiments described herein may be implemented into the system using any suitably configured hardware and/or software.
  • FIG. 7 illustrates the system 700 including a radio frequency (RF) circuitry 710, a baseband circuitry 720, an application circuitry 730, a memory/storage 740, a display 750, a camera 760, a sensor 770, and an input/output (I/O) interface 780, coupled with each other at least as illustrated.
  • the application circuitry 730 may include circuitry such as, but not limited to, one or more single-core or multi-core processors.
  • the processors may include any combination of general-purpose processors and dedicated processors, such as graphics processors and application processors.
  • the processors may be coupled with the memory/storage and configured to execute instructions stored in the memory/storage to enable various applications and/or operating systems running on the system.


Abstract

A method for determining an artificial intelligence (AI)/machine learning (ML) model in a wireless communication system based on information includes receiving the information from a first node and/or a second node by a third node, wherein the information is used by the third node to determine the AI/ML model in the wireless communication system. Further, a method of processing the AI/ML model by a first node includes the first node receiving a message from a third node and/or a second node, and the first node changing the AI/ML model or not.

Description

WIRELESS COMMUNICATION SYSTEM AND METHOD FOR DETERMINING AI/ML MODEL DURING UE MOBILITY
BACKGROUND OF DISCLOSURE
1. Field of the Disclosure
The present disclosure relates to the field of wireless communication systems, and more particularly, to a wireless communication system and a method for determining an artificial intelligence (AI)/machine learning (ML) model during user equipment (UE) mobility. More specifically, the target is to ensure that an AI/ML model works properly after handover completion. The present disclosure may decrease the latency and guarantee the service continuity for UE mobility in an AI/ML based wireless communication system.
2. Description of the Related Art
Our society is undergoing a digitization revolution. In the last few years, artificial intelligence (AI) and machine learning (ML) methods have been widely used in various industries to advance innovation and increase process efficiency. However, for network design, with the dramatic increase of both the extraordinary amount of data and the network complexity, conventional approaches will not be able to provide swift solutions in many cases. Hence, AI/ML will be an indispensable technology to improve the performance of future wireless communication networks.
To support integrating AI/ML into the design of cellular networks, the NWDAF (network data analytics function) was introduced in Rel. 15 and has been enhanced in Rel. 16 and Rel. 17 in SA2; its insights are mainly applied to 5G core (5GC) networks to enhance their functionality. Meanwhile, optimized data collection and storage have been specified, together with training and ML model retrieval. In Nov 2019, 3GPP SA1 TR 22.874 studied the use cases and the potential performance requirements for 5G system (5GS) support of Artificial Intelligence/Machine Learning (AI/ML) model distribution and transfer. From the RAN perspective, a study item about AI/ML enabled RAN in RAN3 Rel. 17 was approved. This study, TR 37.817, was to explore wireless big data acquisition and applications for network automation and intelligence, including the definition of use cases, and the process and information interaction required by different use cases. Then, to investigate the potential benefits of AI/ML algorithms for the air interface, a new study item was approved in Rel. 18 at the 3GPP RAN plenary meetings, which is led by RAN1. This study involves three use cases, namely CSI feedback enhancement, beam management, and positioning accuracy enhancement, and the related discussion is divided into two parts, common topics and use case specific topics. Two meetings later, the study from the RAN2 perspective started in Oct 2022; it mainly focuses on the procedure and signalling based on the RAN1 process and agreements.
Many companies mention that handover is a factor that leads to model selection. In the RAN1#110bis meeting, an initial agreement was made on the mechanism of model selection, for UE-sided models and two-sided models. More specifically, the decision of model selection can be made by the network or the UE.
Actually, in the AI/ML over air interface wireless communication system, the channel may become worse due to UE mobility, which may result in a handover happening. When the UE moves into a new serving gNB, the system can use a traditional method or an AI/ML-based method in some use cases. If the AI/ML-based method is adopted, in order to ensure service continuity, triggering model selection is a candidate method. However, the details of model selection for UE mobility are not discussed. Furthermore, since the UE moves into a new serving gNB, the AI/ML model and/or its characteristics which were used in the old serving gNB may need to be made known to the new serving gNB, so UE mobility may involve model transfer. Besides, considering that the large size of the AI/ML model will generate signaling overhead, model reconfiguration also seems possible.
Therefore, based on the above discussion, the method for determining an AI/ML model for the new serving gNB during UE mobility should be studied.
SUMMARY
An object of the present disclosure is to propose a wireless communication system and a method for determining an artificial intelligence (AI)/machine learning (ML) model during user equipment (UE) mobility, which is not discussed in the current study, to guarantee the service continuity during UE mobility and improve handover performance.
In a first aspect of the present disclosure, a method for determining an artificial intelligence (AI)/machine learning (ML) model in a wireless communication system based on information includes receiving the information from a first node and/or a second node by a third node, wherein the information is used by the third node to determine the AI/ML model in the wireless communication system.
In a second aspect of the present disclosure, a method of processing the AI/ML model by a first node includes the first node receiving a message from a third node and/or a second node, and the first node changing the AI/ML model or not.
In a third aspect of the present disclosure, a wireless communication system comprises a memory, a transceiver, and a processor coupled to the memory and the transceiver. The processor is configured to perform the above method.
In a fourth aspect of the present disclosure, a non-transitory machine-readable storage medium has stored thereon instructions that, when executed by a computer, cause the computer to perform the above method.
In a fifth aspect of the present disclosure, a chip includes a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the above method.
In a sixth aspect of the present disclosure, a computer readable storage medium, in which a computer program is stored, causes a computer to execute the above method.
In a seventh aspect of the present disclosure, a computer program product includes a computer program, and the computer program causes a computer to execute the above method.
In an eighth aspect of the present disclosure, a computer program causes a computer to execute the above method.
BRIEF DESCRIPTION OF DRAWINGS
In order to illustrate the embodiments of the present disclosure or the related art more clearly, the figures used in the description of the embodiments are briefly introduced below. It is obvious that the drawings are merely some embodiments of the present disclosure; a person having ordinary skill in this field can obtain other figures according to these figures without creative effort.
FIG. 1 is a block diagram of one or more user equipments and a network/gNB for communication in a communication network system according to an embodiment of the present disclosure.
FIG. 2A is a flowchart illustrating a method for determining an artificial intelligence (AI) /machine learning (ML) model in a wireless communication system based on information according to an embodiment of the present disclosure.
FIG. 2B is a flowchart illustrating a method of processing the AI/ML model by a first node according to an embodiment of the present disclosure.
FIG. 3A is a schematic diagram illustrating an example of locations of AI/ML model functionalities for a one-sided model according to an embodiment of the present disclosure.
FIG. 3B is a schematic diagram illustrating an example of locations of AI/ML model functionalities for a two-sided model according to an embodiment of the present disclosure.
FIG. 4 is a schematic diagram illustrating an example of locations of AI/ML model functionalities for two-sided model according to an embodiment of the present disclosure.
FIG. 5 is a schematic diagram illustrating an example of locations of AI/ML model functionalities for one sided model according to an embodiment of the present disclosure.
FIG. 6 is a schematic diagram illustrating an example of locations of AI/ML model functionalities for one-sided model according to an embodiment of the present disclosure.
FIG. 7 is a block diagram of a system for wireless communication according to an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Embodiments of the present disclosure are described in detail with the technical matters, structural features, achieved objects, and effects with reference to the accompanying drawings as follows. Specifically, the terminologies in the embodiments of the present disclosure are merely for describing the purpose of the certain embodiment, but not to limit the disclosure.
FIG. 1 illustrates that, in some embodiments, one or more user equipments (UEs) 10 and a network/gNB 20 for communication in a communication network system 40 according to an embodiment of the present disclosure are provided. The communication network system 40 includes one or more UEs 10 and a network/gNB 20. The one or more UEs 10 may include a memory 12, a transceiver 13, and a processor 11 coupled to the memory 12 and the transceiver 13. The network/gNB 20 may include a memory 22, a transceiver 23, and a processor 21 coupled to the memory 22 and the transceiver 23. The processor 11 or 21 may be configured to implement proposed functions, procedures and/or methods described in this description. Layers of radio interface protocol may be implemented in the processor 11 or 21. The memory 12 or 22 is operatively coupled with the processor 11 or 21 and stores a variety of information to operate the processor 11 or 21. The transceiver 13 or 23 is operatively coupled with the processor 11 or 21, and the transceiver 13 or 23 transmits and/or receives a radio signal.
The processor 11 or 21 may include an application-specific integrated circuit (ASIC), other chipset, logic circuit and/or data processing device. The memory 12 or 22 may include read-only memory (ROM), random access memory (RAM), flash memory, memory card, storage medium and/or other storage device. The transceiver 13 or 23 may include baseband circuitry to process radio frequency signals. When the embodiments are implemented in software, the techniques described herein can be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The modules can be stored in the memory 12 or 22 and executed by the processor 11 or 21. The memory 12 or 22 can be implemented within the processor 11 or 21 or external to the processor 11 or 21, in which case it can be communicatively coupled to the processor 11 or 21 via various means as is known in the art.
FIG. 2A illustrates a method 200 for determining an artificial intelligence (AI)/machine learning (ML) model in a wireless communication system based on information according to an embodiment of the present disclosure. In some embodiments, the method 200 includes: a block 202, receiving the information from a first node and/or a second node by a third node, wherein the information is used by the third node to determine the AI/ML model in the wireless communication system.
In some embodiments, the information is sent from the first node and/or the second node to the third node and comprises a message including the AI/ML model and/or AI/ML model related assistant information. In some embodiments, the AI/ML model is transmitted from the second node to the third node, and/or from the first node to the third node. In some embodiments, the AI/ML model related assistant information contains one of the following: a model ID; a model group ID; or parameters and/or structure of the AI/ML model. In some embodiments, if the third node receives the parameters and/or structure of the AI/ML model, and the third node has a model correlation, the third node determines one AI/ML model. In some embodiments, if the third node receives the parameters and/or structure of the AI/ML model, the third node determines one AI/ML model.
In some embodiments, the third node determines the AI/ML model by model selection, and/or model reconfiguration, and/or model update. In some embodiments, the model correlation indicates a range for some parameters and/or structure of AI/ML models. In some embodiments, the model correlation is configured by a network, or fixed, or indicated by the second node or the first node. In some embodiments, the model reconfiguration and/or update comprises the following options: the third node decides by itself; or it is based on the model correlation.
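Under the assumption that the model correlation is a per-parameter range, the determination step above can be sketched as follows. All names, the range representation, and the example values are illustrative:

```python
def within_correlation(received_params: dict, correlation: dict) -> bool:
    """Check whether each received parameter lies inside the (lo, hi)
    range given by the model correlation."""
    return all(
        name in received_params and lo <= received_params[name] <= hi
        for name, (lo, hi) in correlation.items()
    )

def determine_model(received_params: dict, correlation: dict) -> str:
    """If the received parameters/structure fit the correlation, the third
    node can reconfigure an existing model; otherwise it selects a new one."""
    if within_correlation(received_params, correlation):
        return "reconfigure"
    return "select-new"

# Hypothetical correlation: acceptable ranges for structure parameters
correlation = {"num_layers": (2, 8), "hidden_dim": (64, 512)}
```

In this sketch the correlation acts as a compatibility test: received models inside the range can be adapted in place, while models outside it trigger model selection.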
FIG. 2B is a flowchart illustrating a method of processing the AI/ML model by a first node according to an embodiment of the present disclosure. In some embodiments, the method 300 includes: a block 302, the first node receiving a message from a third node and/or a second node, and the first node changing the AI/ML model or not.
In some embodiments, the message comprises at least one of the following: a first indicator, which indicates whether the AI/ML model changed, so that the UE, based on the first indicator, is able to know whether a target gNB uses a new AI/ML model or not; an AI/ML model ID, which uniquely identifies the AI/ML model; and a second indicator, which indicates whether the UE uses the old AI/ML model or the new AI/ML model. In some embodiments, the first node changing the AI/ML model or not comprises: if the target gNB sends the second indicator, the UE changes to the new AI/ML model or not based on the second indicator; or if the target gNB does not send the second indicator, and the first node only knows that the AI/ML model is changed in the target gNB based on the first indicator, the first node decides by itself whether to change to the new AI/ML model or not.
Further, the wireless communication system 40 is configured to perform the above method 200 or the above method 300. The wireless communication system 40 is also configured to perform the methods in the following embodiments.
In some embodiments, the locations of AI/ML model functionalities are for a one-sided model or a two-sided model, and the wireless communication system comprises a base station and a UE. In some embodiments, the AI/ML model functionalities comprise an AI/ML model training function and an AI/ML model inference function. In some embodiments, in the locations of AI/ML model functionalities for the one-sided model, the AI/ML model training function is deployed in the base station and the AI/ML model inference function is deployed in the UE; or the AI/ML model training function and the AI/ML model inference function are both deployed in the base station or the UE. In some embodiments, in the locations of AI/ML model functionalities for the two-sided model, the AI/ML model training function is deployed in the base station, or in the UE and the base station, and the AI/ML model inference function is deployed in the UE and the base station.
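The deployment options above can be summarized in an illustrative mapping. The scenario labels are hypothetical shorthand for the cases just described, not 3GPP terminology:

```python
# Mapping of deployment scenarios to the locations of the AI/ML model
# training and inference functions (illustrative only).
DEPLOYMENTS = {
    "one-sided (split)":    {"training": {"gNB"}, "inference": {"UE"}},
    "one-sided (gNB)":      {"training": {"gNB"}, "inference": {"gNB"}},
    "one-sided (UE)":       {"training": {"UE"},  "inference": {"UE"}},
    "two-sided (joint)":    {"training": {"gNB"}, "inference": {"gNB", "UE"}},
    "two-sided (separate)": {"training": {"gNB", "UE"}, "inference": {"gNB", "UE"}},
}

def is_two_sided(scenario: str) -> bool:
    """A two-sided model runs inference on both the gNB and the UE side."""
    return DEPLOYMENTS[scenario]["inference"] == {"gNB", "UE"}
```

This kind of mapping makes explicit which embodiments require model transfer or assistant-information exchange between the two sides.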
In some embodiments, a source base station or the UE is able to send a message including AI/ML model related assistant information, and/or transfer an old used AI/ML model to a target base station. In some embodiments, when the source gNB decides to initiate the handover procedure for the UE based on related AI/ML model report messages, or the UE is notified that it has moved to another base station, the source gNB or the UE sends the message including the AI/ML model related assistant information to the target base station; then the target gNB selects an AI/ML model or reconfigures an AI/ML model based on the received message, for the UE to continue working based on the AI/ML operation.
In some embodiments, the received message comprises a model ID for model selection, a model group ID for model selection, a model correlation for model selection, and/or a model reconfiguration. In some embodiments, when the source base station decides to initiate the handover procedure for the UE based on related AI/ML model report messages, or the UE is notified that it has moved to another base station, the source base station or the UE sends the AI/ML model to the target gNB; then the target base station uses the old used AI/ML model for the UE to continue working based on the AI/ML operation. In some embodiments, a new message is used for transferring the AI/ML model, and/or an enhancement of a legacy message for the AI/ML model is used for transferring the AI/ML model. In some embodiments, a partial AI/ML model is sent from the source base station or the UE to the target base station.
In some embodiments, the target gNB sends information about a situation of the AI/ML model to the source base station, the source base station forwards the information to the UE, and the UE makes a decision according to the information. In some embodiments, the target gNB sends information about a situation of the AI/ML model to the UE, and the UE makes a decision according to the information. In some embodiments, the situation of the AI/ML model comprises: the target base station selects a new AI/ML model, the target base station reconfigures an AI/ML model, or the target base station receives the old used AI/ML model.
In some embodiments, the information comprises at least one of the following: one indicator which indicates whether the AI/ML model changed, an AI/ML model ID which uniquely identifies the AI/ML model, and one indicator which indicates whether the UE uses the old AI/ML model or the new AI/ML model. In some embodiments, the AI/ML model related assistant information comprises at least one of the following: a UE ID which uniquely identifies the UE, an AI/ML model ID which uniquely identifies the AI/ML model, an AI/ML model group ID which identifies a group of AI/ML models, a type of AI/ML model, a complexity of the AI/ML model, a generalization of the AI/ML model, and KPIs of the AI/ML model.
This invention is related to a wireless communication system where AI/ML algorithms are enabled over the RAN; the relevant new SID on AI/ML for the NR Air Interface in Rel. 18 was approved at the 3GPP RAN plenary meeting 94e in Dec. 2021, and the related discussion is led by RAN1. The study from the RAN2 perspective started in Oct 2022; it mainly focuses on the procedure and signalling based on the RAN1 process. This invention provides the methods for determining the AI/ML model during UE mobility. More specifically, the target is to ensure that an AI/ML model works properly after handover completion. The provided methods may decrease the latency and guarantee the service continuity for UE mobility in an AI/ML based wireless communication system.
Since UE mobility may result in a handover, in an AI/ML based wireless communication system methods for determining an AI/ML model for the serving gNB are necessary to support service continuity. These methods may cover the following aspects for different scenarios: model transfer, model selection, and model (re) configuration. In this invention, these are referred to as model transfer, model selection, and model reconfiguration. In the following, "source gNB" denotes the old serving gNB, and "target gNB" denotes the new serving gNB.
In order to make the AI/ML-based method work properly after handover completion due to UE mobility, one important aspect is determining an AI/ML model for the wireless system. This disclosure provides methods for determining an AI/ML model: methods of AI/ML model selection, transfer, and/or reconfiguration; and transmission and configuration of AI/ML model related assistant information. The invention may thus provide methods for determining an AI/ML model during UE mobility, which is not discussed in the current study, guarantee service continuity during UE mobility, and improve handover performance.
Embodiment 1
Consider the following scenario: the AI/ML model training function is deployed in the gNB, while the AI/ML model inference function resides within the UE and the gNB. More specifically, the scenario can involve a one-sided model and/or a two-sided model; the locations of the AI/ML model functionalities are depicted in FIG. 3. FIG. 3A and FIG. 3B illustrate the locations of AI/ML model functionalities for different scenarios. In detail, FIG. 3A shows the locations of AI/ML model functionalities for a one-sided model, and FIG. 3B shows them for a two-sided model. When the system initiates a handover procedure because of UE mobility, in order to maintain service continuity for AI/ML operation in the wireless communication, a source gNB or a UE is able to send a message including AI/ML model related assistant information, and/or transfer the previously used AI/ML model to a target gNB. The following methods can be considered.
Method 1: Assistant information is sent from source gNB or UE to target gNB
In this method 1, when the source gNB decides to initiate a handover procedure for the UE based on related AI/ML model report messages, or the UE is notified that it has moved to another gNB, the source gNB or the UE may send a message including the AI/ML model related assistant information to the target gNB. The target gNB may then select an AI/ML model or reconfigure an AI/ML model based on the received message, so that the UE can continue working based on AI/ML operation. To achieve this, the following alternatives can be considered.
Alternative 1: Model ID for model selection
In some examples, the target gNB receives the message including the AI/ML model ID, which is a globally unique identifier of the AI/ML model. The target gNB then uses the AI/ML model ID to search the AI/ML models on its own side. If the target gNB side has an AI/ML model with the same model ID as the previously used AI/ML model, this means that the target gNB holds the same AI/ML model as the source gNB and as used on the UE side. The target gNB then selects this AI/ML model to be used for the UE to continue working based on AI/ML operation.
The received message for the target gNB is provided by the source gNB and/or the UE. Moreover, the AI/ML model can be trained in other entities, such as a third-party server, another gNB, OAM, OTT, and so on. Since AI/ML model training consumes significant resources, the trained AI/ML model can be transferred between different entities in some cases to avoid repeated training. Hence, it is possible that an AI/ML model exists in different entities. The model ID can be a relative ID or an absolute ID; in either case, the target gNB can use the model ID to identify a unique AI/ML model.
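As a minimal sketch of Alternative 1 (illustrative only; the field names and the dictionary-based model registry are assumptions, not part of any 3GPP specification), the target gNB's lookup by model ID could look like:

```python
# Hypothetical sketch of model selection by model ID at the target gNB.
# The registry layout and all identifiers below are illustrative assumptions.

def select_model_by_id(local_models, old_model_id):
    """Return the local AI/ML model whose ID matches the previously used
    model's ID, or None if the target gNB does not hold that model."""
    for model in local_models:
        if model["model_id"] == old_model_id:
            return model  # same model as on the source gNB / UE side: reuse it
    return None  # no match: fall back to group ID, correlation, or transfer

# Example registry on the target gNB side
target_models = [
    {"model_id": "csi-compress-001", "num_layers": 5},
    {"model_id": "beam-predict-007", "num_layers": 8},
]
selected = select_model_by_id(target_models, "beam-predict-007")
```

When the lookup returns None, the other alternatives below (group ID, model correlation, reconfiguration) would apply instead.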
Alternative 2: Model group ID for model selection
In some examples, the target gNB receives the message including the AI/ML model group ID, which indicates that AI/ML models with the same specific function and/or the same characteristics share the same AI/ML model group ID. The target gNB then selects an AI/ML model on its own side according to the AI/ML model group ID. If the target gNB side has an AI/ML model with the same model group ID as the previously used AI/ML model, this means that the target gNB holds AI/ML models with the same specific function and/or the same characteristics as the previously used AI/ML model that resides in the source gNB and is used on the UE side. Each AI/ML model located in the target gNB that has the same model group ID as the previously used AI/ML model is regarded as a candidate AI/ML model. The target gNB then selects an AI/ML model among the candidate AI/ML models to be used for the UE to continue working based on AI/ML operation.

Because there may be more than one candidate AI/ML model, the selection rule is as follows: if there is only one candidate AI/ML model, that candidate is regarded as the selected AI/ML model directly; if there is more than one candidate AI/ML model, the target gNB may select an AI/ML model among the candidates randomly. The received message for the target gNB is provided by the source gNB and/or the UE.
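The group-based selection rule above can be sketched as follows (a hypothetical illustration; the group IDs and registry layout are assumed names, not specified values):

```python
import random

def select_model_by_group(local_models, old_group_id, rng=None):
    """Apply the selection rule of Alternative 2: a single candidate is
    selected directly; among several candidates, one is chosen randomly."""
    candidates = [m for m in local_models if m["group_id"] == old_group_id]
    if not candidates:
        return None               # no local model shares the group ID
    if len(candidates) == 1:
        return candidates[0]      # only one candidate: select it directly
    return (rng or random).choice(candidates)  # several: select randomly

target_models = [
    {"model_id": "csi-a", "group_id": "csi-feedback"},
    {"model_id": "csi-b", "group_id": "csi-feedback"},
    {"model_id": "pos-a", "group_id": "positioning"},
]
selected = select_model_by_group(target_models, "csi-feedback")
```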
Alternative 3: Model correlation for model selection
In some examples, the target gNB has a model correlation for model selection in the handover procedure, which indicates a range for some parameters and/or the structure of the AI/ML model. If the parameters and/or structure of an AI/ML model satisfy the model correlation, it is regarded as a candidate AI/ML model. More specifically, the source gNB or the UE provides the parameters or structure related to the previously used AI/ML model, such as the AI/ML model complexity, generalization, KPIs and so on, to the target gNB, and this information is regarded as a baseline/reference point for the model correlation. For instance, if the provided number of layers of the previously used AI/ML model is 5, then the baseline/reference point for the number of model layers is 5 under the model correlation.

Based on this baseline/reference point, assume the model correlation allows a tolerance of plus or minus 1; the number of model layers then falls within the interval [4, 6] . In other words, an AI/ML model whose number of layers is 4, 5, or 6 can be regarded as a candidate AI/ML model by the target gNB.
Note that, because of the different nature of the AI/ML related parameters and structure, the model correlation for different parameters and structures can be calculated in a variety of ways, such as adding, subtracting, multiplying, dividing, and other methods, and it may be one method or a combination of several.
Moreover, the model correlation can be configured by the network, fixed, or indicated by the source gNB or the UE. Because there may be more than one candidate AI/ML model, the selection rule is as follows: if there is only one candidate AI/ML model, that candidate is regarded as the selected AI/ML model directly; if there is more than one candidate AI/ML model, the target gNB may select an AI/ML model among the candidates randomly. The received message for the target gNB is provided by the source gNB and/or the UE.
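The layer-count example for Alternative 3 can be sketched as a simple range check (illustrative only; treating the correlation as an additive plus/minus tolerance is one of several calculation methods mentioned above):

```python
def within_correlation(value, baseline, tolerance):
    """True if `value` lies within [baseline - tolerance, baseline + tolerance]."""
    return baseline - tolerance <= value <= baseline + tolerance

def correlation_candidates(local_models, baseline_layers, tolerance=1):
    """Collect candidate models whose layer count satisfies the correlation,
    e.g. a baseline of 5 with tolerance 1 admits 4, 5, or 6 layers."""
    return [m for m in local_models
            if within_correlation(m["num_layers"], baseline_layers, tolerance)]

target_models = [
    {"model_id": "m4", "num_layers": 4},
    {"model_id": "m6", "num_layers": 6},
    {"model_id": "m9", "num_layers": 9},
]
candidates = correlation_candidates(target_models, baseline_layers=5)
```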
Alternative 4: Model reconfiguration
In some examples, the source gNB or the UE sends the AI/ML model related parameters and/or structure, such as the AI/ML model complexity, generalization, KPIs and so on, to the target gNB, which enables the target gNB to reconfigure the AI/ML model, or change some part of the AI/ML model that resides on its own side, according to the provided information. For instance, if the provided number of layers is 5 and no AI/ML model residing in the target gNB satisfies this number of layers, but the target gNB has an AI/ML model with 6 layers, then the target gNB reconfigures the number of AI/ML model layers from 6 to 5.

Since the target gNB may hold many AI/ML models, the options for determining which AI/ML model should be reconfigured are as follows: target gNB implementation, i.e., the target gNB decides by itself; or based on model correlation, i.e., a model correlation is set for model selection, and the target gNB selects an AI/ML model among the candidate AI/ML models to reconfigure or change. The details of the model range are described in Alternative 3. The received message for the target gNB is provided by the source gNB and/or the UE.
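One possible "target gNB implementation" choice for Alternative 4 can be sketched as follows (a hypothetical policy, picking the locally held model closest to the request and adjusting it, as in the 6-to-5-layer example above):

```python
def reconfigure_closest(local_models, requested_layers):
    """Pick the locally held model whose layer count is closest to the
    requested one, then adjust its layer count to match the request."""
    if not local_models:
        return None
    closest = min(local_models,
                  key=lambda m: abs(m["num_layers"] - requested_layers))
    reconfigured = dict(closest)            # leave the stored model untouched
    reconfigured["num_layers"] = requested_layers
    return reconfigured

target_models = [{"model_id": "m3", "num_layers": 3},
                 {"model_id": "m6", "num_layers": 6}]
result = reconfigure_closest(target_models, requested_layers=5)
```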
One or any combination of the above alternatives can ensure that the UE works properly based on AI/ML operation and also guarantee service continuity during UE mobility.
As mentioned above, in method 1 there are several alternatives to achieve this aim during UE mobility. It is noted that all of the alternatives use the assistant information included in the message from the source gNB or the UE to the target gNB. In detail, the message may include at least one of the following items of assistant information: UE ID, which uniquely identifies the UE; AI/ML model ID, which uniquely or relatively identifies the AI/ML model; AI/ML model group ID, which identifies a group of AI/ML models, where the AI/ML models have similar characteristics for the same scenario; type of AI/ML model, which indicates which kind of AI/ML model it is; complexity of the AI/ML model, such as FLOPs or size of parameters; generalization of the AI/ML model, which means the capability of the AI/ML model to work in different use cases; KPIs of the AI/ML model, such as rate, precision, power consumption, reliability and so on; and format of the AI/ML model, which is the format for AI/ML model transfer and can be a proprietary format, an open format (ONNX) , or a new format specified in 3GPP.
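The assistant information elements listed above could be grouped into one container along the following lines (an illustrative data structure only; the field names and the "at least one of" optionality are modeling assumptions, not message definitions from any specification):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AssistantInfo:
    """Illustrative container for the assistant information listed above."""
    ue_id: str                            # uniquely identifies the UE
    model_id: Optional[str] = None        # unique or relative model identifier
    model_group_id: Optional[str] = None  # groups models with similar characteristics
    model_type: Optional[str] = None      # which kind of AI/ML model it is
    complexity: Optional[dict] = None     # e.g. {"flops": ..., "param_bytes": ...}
    generalization: Optional[str] = None  # capability to work in different use cases
    kpis: dict = field(default_factory=dict)  # rate, precision, power, reliability, ...
    model_format: Optional[str] = None    # proprietary, open (ONNX), or new 3GPP format

info = AssistantInfo(ue_id="ue-42", model_id="csi-001",
                     kpis={"precision": 0.93}, model_format="ONNX")
```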
Method 2: AI/ML model is sent from source gNB to target gNB
In this method 2, when a source gNB decides to initiate a handover procedure for a UE based on related AI/ML model report messages, or the UE is notified that it has moved to another gNB, the source gNB or the UE may send the AI/ML model to the target gNB directly. The target gNB then uses the previously used AI/ML model for the UE to continue working based on AI/ML operation. For transmitting the AI/ML model, the following alternatives can be considered.
Alternative 1: A new message for transferring the AI/ML model
Case 1: To transmit the AI/ML model between the source gNB and the target gNB, a new Xn message between the source gNB and the target gNB should be introduced. The Xn message may include at least one of the following items of assistant information:

UE ID, which uniquely identifies the UE; AI/ML model ID, which uniquely or relatively identifies the AI/ML model; AI/ML model group ID, which identifies a group of AI/ML models, where the AI/ML models have similar characteristics for the same scenario; type of AI/ML model, which indicates which kind of AI/ML model it is; complexity of the AI/ML model, such as FLOPs or size of parameters; generalization of the AI/ML model, which means the capability of the AI/ML model to work in different use cases; KPIs of the AI/ML model, such as rate, precision, power consumption, reliability and so on; and format of the AI/ML model, which is the format for AI/ML model transfer and can be a proprietary format, an open format (ONNX) , or a new format specified in 3GPP.

Case 2: To transmit the AI/ML model between the UE and the target gNB, a new RRC message between the UE and the target gNB should be introduced; meanwhile, it can be transmitted after access is complete. The RRC message may include at least one of the following items of assistant information: UE ID, which uniquely identifies the UE; AI/ML model ID, which uniquely or relatively identifies the AI/ML model; AI/ML model group ID, which identifies a group of AI/ML models, where the AI/ML models have similar characteristics for the same scenario; type of AI/ML model, which indicates which kind of AI/ML model it is; complexity of the AI/ML model, such as FLOPs or size of parameters; generalization of the AI/ML model, which means the capability of the AI/ML model to work in different use cases; KPIs of the AI/ML model, such as rate, precision, power consumption, reliability and so on; and format of the AI/ML model, which is the format for AI/ML model transfer and can be a proprietary format, an open format (ONNX) , or a new format specified in 3GPP.
Alternative 2: Enhancement of a legacy message for the AI/ML model
Case 1: To transmit the AI/ML model between the source gNB and the target gNB, the legacy Xn message between the source gNB and the target gNB should be reused. The Handover Request is the first Xn message from the source gNB to the target gNB when the source gNB decides to perform a handover. Hence, this Xn message should introduce some AI/ML related information.

The Handover Request message may include at least one of the following items of assistant information: AI/ML model ID, which uniquely identifies the AI/ML model; AI/ML model group ID, which identifies a group of AI/ML models, where the AI/ML models have similar characteristics for the same scenario; type of AI/ML model, which indicates which kind of AI/ML model it is; complexity of the AI/ML model, such as FLOPs or size of parameters; generalization of the AI/ML model, which means the capability of the AI/ML model to work in different use cases; KPIs of the AI/ML model, such as rate, precision, power consumption, reliability and so on; and format of the AI/ML model, which is the format for AI/ML model transfer and can be a proprietary format, an open format (ONNX) , or a new format specified in 3GPP.

Case 2: To transmit the AI/ML model between the UE and the target gNB, the legacy RRC message between the UE and the target gNB should be reused. The RRC complete message, which is the first message sent from the UE to the target gNB, should be enhanced. Hence, this RRC message should introduce some AI/ML related information.

The RRC complete message may include at least one of the following items of assistant information: UE ID, which uniquely identifies the UE; AI/ML model ID, which uniquely identifies the AI/ML model; AI/ML model group ID, which identifies a group of AI/ML models, where the AI/ML models have similar characteristics for the same scenario; type of AI/ML model, which indicates which kind of AI/ML model it is; complexity of the AI/ML model, such as FLOPs or size of parameters; generalization of the AI/ML model, which means the capability of the AI/ML model to work in different use cases; KPIs of the AI/ML model, such as rate, precision, power consumption, reliability and so on; and format of the AI/ML model, which is the format for AI/ML model transfer and can be a proprietary format, an open format (ONNX) , or a new format specified in 3GPP.
Method 3: Partial AI/ML model is sent from source gNB or UE to target gNB
In some examples, the source gNB or the UE sends a part of the previously used AI/ML model to the target gNB, such as the backbone, or a part of the main AI/ML model structure and parameters, such as the layers, the activation function, the optimizer, the AI/ML model complexity, generalization, KPIs, the type of AI/ML model, and so on. The target gNB reconfigures or changes the AI/ML model based on the received main part of the AI/ML model.
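Applying a partial model at the target gNB could be sketched as a merge that overwrites only the transferred parts (an illustrative abstraction; the part names are assumptions):

```python
def apply_partial_model(local_model, received_parts):
    """Overwrite only the transferred parts (e.g. backbone, layer count,
    activation function) of a locally held model, keeping the rest."""
    merged = dict(local_model)
    merged.update(received_parts)
    return merged

local = {"model_id": "m6", "num_layers": 6, "activation": "relu",
         "backbone": "local-backbone"}
merged = apply_partial_model(local, {"backbone": "src-backbone",
                                     "num_layers": 5})
```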
Embodiment 2
When the handover happens, based on the above discussion and depending on the different information received, there may be three situations for the AI/ML model on the target gNB side: (1) the target gNB selects a new AI/ML model; (2) the target gNB reconfigures an AI/ML model; (3) the target gNB receives the previously used AI/ML model. In situations (1) and (2) , the AI/ML model to be used in the system is fully or partially changed, so to some extent it is a new AI/ML model. In situation (3) , the AI/ML model to be used in the system is not changed, so to some extent it is the old AI/ML model.

Subsequently, the target gNB informs the UE of the situation of the AI/ML model, which may notify the UE whether the AI/ML model at the target gNB has changed or not. Furthermore, if the target gNB uses a new AI/ML model, it may also ask the UE whether it will change the used AI/ML model; or simply inform the UE to change the used AI/ML model; or leave it up to the UE whether to use the old AI/ML model or the new AI/ML model. To achieve this, the following alternatives should be considered. In fact, there are two cases for how the target gNB informs the UE: one is that the target gNB informs the UE via the source gNB, and the other is that the target gNB notifies the UE directly.
Case 1: Inform the UE by source gNB
The target gNB sends the information about the situation of the AI/ML model to the source gNB, and the source gNB forwards this information to the UE. The UE makes a decision according to this information. The information can be at least one of the following:

An indicator, which indicates whether the AI/ML model changed. Based on the indicator, the UE can know whether the target gNB uses a new AI/ML model or not. For instance, if the indicator is set to 0, it means that the target gNB uses the old AI/ML model, and if the indicator is set to 1, it means that the target gNB uses the new AI/ML model;

An AI/ML model ID, which uniquely identifies the AI/ML model. Based on this element, the UE can also know whether the AI/ML model has changed or not; moreover, it also knows the ID of the new AI/ML model; and

An indicator, which indicates whether the UE uses the old AI/ML model or the new AI/ML model. For instance, if the indicator is set to 0, it means the UE can use the old AI/ML model, and if the indicator is set to 1, it means the UE should use the new AI/ML model. In another instance, if the indicator is configured, it means that the UE uses the old AI/ML model, and vice versa.
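The information elements above can be decoded into a single "switch or not" outcome as follows (a hypothetical sketch; the element names and the precedence of the explicit indicator over the change indicator are illustrative assumptions):

```python
def interpret_situation(changed_indicator=None, model_id=None,
                        use_new_indicator=None, old_model_id=None):
    """Decode the forwarded information elements into one decision:
    True if the UE should switch to the new AI/ML model."""
    if use_new_indicator is not None:       # explicit instruction: 1 = use new
        return use_new_indicator == 1
    if changed_indicator is not None:       # 0 = old model, 1 = new model
        return changed_indicator == 1
    if model_id is not None and old_model_id is not None:
        return model_id != old_model_id     # infer the change from the IDs
    return False                            # no information: keep the old model
```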
Moreover, the above message between the source gNB and the target gNB may be included in the handover command carried in the Xn message HANDOVER REQUEST ACKNOWLEDGE, or may be included in a new dedicated RRC message carried in the Xn message HANDOVER REQUEST ACKNOWLEDGE, or may be included in a new dedicated RRC message carried in a new Xn message. The above message between the source gNB and the UE may be included in the RRC Reconfiguration message, or may be included in a new dedicated RRC message.
Case 2: Inform the UE by target gNB
The target gNB sends the information about the situation of the AI/ML model to the UE directly. More specifically, this procedure may occur after the UE has accessed the target gNB. The UE makes a decision according to this information. The information can be at least one of the following:

An indicator, which indicates whether the AI/ML model changed. Based on the indicator, the UE can know whether the target gNB uses a new AI/ML model or not. For instance, if the indicator is set to 0, it means that the target gNB uses the old AI/ML model, and if the indicator is set to 1, it means that the target gNB uses the new AI/ML model;

An AI/ML model ID, which uniquely identifies the AI/ML model. Based on this element, the UE can also know whether the AI/ML model has changed or not; moreover, it also knows the ID of the new AI/ML model; and

An indicator, which indicates whether the UE uses the old AI/ML model or the new AI/ML model. For instance, if the indicator is set to 0, it means the UE can use the old AI/ML model, and if the indicator is set to 1, it means the UE should use the new AI/ML model. In another instance, if the indicator is configured, it means that the UE uses the old AI/ML model, and vice versa.
The UE receives the information transmitted from the target gNB. If the target gNB sends the explicit indicator, the UE decides whether to request the new AI/ML model based on the explicit indicator; if the target gNB does not send the explicit indicator, but the UE only knows that the AI/ML model has changed in the target gNB, the UE may decide by itself whether to request the new AI/ML model. Moreover, the above message between the UE and the target gNB may be included in the RRC Reconfiguration message, or may be included in a new dedicated RRC message.
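The UE-side branching just described (explicit indicator versus the UE's own decision) can be sketched as follows; the policy callback stands in for the unspecified UE-implementation logic and is purely hypothetical:

```python
def ue_model_decision(explicit_indicator, model_changed, own_policy):
    """Follow the explicit indicator when the target gNB provides one;
    otherwise, when only a change notification was received, defer to a
    UE-implementation-specific policy callback."""
    if explicit_indicator is not None:
        return "request_new" if explicit_indicator == 1 else "keep_old"
    if model_changed:
        return own_policy()       # no explicit indicator: UE decides by itself
    return "keep_old"             # model unchanged: nothing to do

decision = ue_model_decision(explicit_indicator=None, model_changed=True,
                             own_policy=lambda: "request_new")
```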
Embodiment 3
Consider the following scenario: the AI/ML model training function is deployed on the gNB side and the UE side, while the AI/ML model inference function resides on the gNB side and the UE side. More specifically, the scenario can be a two-sided model with joint training or separate training; the locations of the AI/ML model functionalities for the two-sided model are depicted in FIG. 4.
Sub-embodiment 1:
When the system initiates a handover procedure because of UE mobility, in order to maintain service continuity for AI/ML operation in the wireless communication in embodiment 3, the source gNB is able to send a message including AI/ML model related assistant information, and/or transfer the previously used AI/ML model residing in the source gNB to the target gNB. Besides, since the two parts of the training model reside on different sides (gNB, UE) , there is no model transfer between the UE and the gNB. The following methods can be considered.
Method 1: Assistant information is sent from source gNB to target gNB
In this method 1, when the source gNB decides to initiate a handover procedure for the UE based on related AI/ML model report messages, the source gNB may send a message including the AI/ML model related assistant information to the target gNB. The target gNB may then select an AI/ML model or reconfigure an AI/ML model based on the received message, so that the UE can continue working based on AI/ML operation. To achieve this, the following alternatives can be considered:

The alternatives are the same as in embodiment 1, method 1, but it is noted that the assistant information is only provided by the source gNB; no UE is involved.
Method 2: AI/ML model is sent from source gNB to target gNB
In this method 2, when a source gNB decides to initiate a handover procedure for a UE based on related AI/ML model report messages, the source gNB may send the AI/ML model to a target gNB directly. The target gNB then uses the previously used AI/ML model for the UE to continue working based on AI/ML operation. For transmitting the AI/ML model, the following alternatives can be considered. The alternatives are the same as in embodiment 1, method 2, but it is noted that the assistant information is only provided by the source gNB, no UE is involved, and the AI/ML model is only carried in an Xn message.
Sub-embodiment 2:
Furthermore, when the handover happens, based on the above discussion and depending on the different information received, there may be three situations for the AI/ML model on the target gNB side: (1) the target gNB selects a new AI/ML model; (2) the target gNB reconfigures an AI/ML model; (3) the target gNB receives the previously used AI/ML model. In situations (1) and (2) , the AI/ML model to be used in the system is fully or partially changed, so to some extent it is a new AI/ML model. In situation (3) , the AI/ML model to be used in the system is not changed, so to some extent it is the old AI/ML model.

Subsequently, the target gNB informs the UE of the situation of the AI/ML model, which may notify the UE whether the AI/ML model at the target gNB has changed or not. To achieve this, the following alternatives should be considered. In fact, there are two cases for how the target gNB informs the UE: one is that the target gNB informs the UE via the source gNB, and the other is that the target gNB notifies the UE directly.
Case 1: Inform the UE by source gNB
The target gNB sends the information about the situation of the AI/ML model to the source gNB, and the source gNB forwards this information to the UE. The information can be at least one of the following:

An indicator, which indicates whether the AI/ML model changed. Based on the indicator, the UE can know whether the target gNB uses a new AI/ML model or not. For instance, if the indicator is set to 0, it means that the target gNB uses the old AI/ML model, and if the indicator is set to 1, it means that the target gNB uses the new AI/ML model; and

An AI/ML model ID, which uniquely identifies the AI/ML model. Based on this element, the UE can also know whether the AI/ML model has changed or not; moreover, it also knows the ID of the new AI/ML model.

The UE receives the information forwarded by the source gNB, and knows whether the AI/ML model has changed in the target gNB. The UE then notifies the target gNB whether it is aware of the AI/ML model situation.

Moreover, the above message between the source gNB and the target gNB may be included in the handover command carried in the Xn message HANDOVER REQUEST ACKNOWLEDGE, or may be included in a new dedicated RRC message carried in the Xn message HANDOVER REQUEST ACKNOWLEDGE, or may be included in a new dedicated RRC message carried in a new Xn message. The above message between the UE and the target gNB may be included in the RRC Reconfiguration message, the RRC complete message, or a new dedicated RRC message.
Case 2: Inform the UE by target gNB
The target gNB sends the information about the situation of the AI/ML model to the UE directly. More specifically, this procedure may occur after the UE has accessed the target gNB. The UE makes a decision according to this information. The information can be at least one of the following:

An indicator, which indicates whether the AI/ML model changed. Based on the indicator, the UE can know whether the target gNB uses a new AI/ML model or not. For instance, if the indicator is set to 0, it means that the target gNB uses the old AI/ML model, and if the indicator is set to 1, it means that the target gNB uses the new AI/ML model; and

An AI/ML model ID, which uniquely identifies the AI/ML model. Based on this element, the UE can also know whether the AI/ML model has changed or not; moreover, it also knows the ID of the new AI/ML model.

The UE receives the information transmitted from the target gNB, and knows whether the AI/ML model has changed in the target gNB. The UE then notifies the target gNB whether it is aware of the AI/ML model situation.

Moreover, the above message between the UE and the target gNB may be included in the RRC Reconfiguration message, or may be included in a new dedicated RRC message.
Embodiment 4
Consider the following scenario: the AI/ML model training function is deployed on the gNB side, while the AI/ML model inference function also resides on the gNB side. More specifically, the scenario is a one-sided model; the locations of the AI/ML model functionalities for the one-sided model are depicted in FIG. 5. The methods for determining the AI/ML model and notifying the UE are the same as in embodiment 3.
Embodiment 5
Consider the following scenario: the AI/ML model training function is deployed on the UE side, while the AI/ML model inference function also resides on the UE side. More specifically, the scenario is a one-sided model; the locations of the AI/ML model functionalities for the one-sided model are depicted in FIG. 6.
In this embodiment, when the UE moves to the target gNB, since both the model training function and the model inference function are located in the UE, there is no need for model transfer if the current AI/ML model is able to work properly. However, other AI/ML functionalities, such as data collection, may need to collect data from the target gNB. Besides, the AI/ML model may also be refined or reconfigured, and the target gNB is one of the entities involved. Hence, the target gNB may need to know the related assistant information of the AI/ML model.
The UE sends the message including the AI/ML model related assistant information to the target gNB after the UE has accessed the target gNB. The message may include at least one of the following items of assistant information:

UE ID, which uniquely identifies the UE; AI/ML model ID, which uniquely identifies the AI/ML model; AI/ML model group ID, which identifies a group of AI/ML models, where the AI/ML models have similar characteristics for the same scenario; type of AI/ML model, which indicates which kind of AI/ML model it is; complexity of the AI/ML model, such as FLOPs or size of parameters; generalization of the AI/ML model, which means the capability of the AI/ML model to work in different use cases; KPIs of the AI/ML model, such as rate, precision, power consumption, reliability and so on; and format of the AI/ML model, which is the format for AI/ML model transfer and can be a proprietary format, an open format (ONNX) , or a new format specified in 3GPP.

Moreover, the above message between the UE and the target gNB may be included in the RRC complete message, or may be included in a new dedicated RRC message.
In summary, in order to make the AI/ML-based method work properly after handover completion due to UE mobility, one important aspect is determining an AI/ML model for the wireless system. This disclosure provides methods for determining an AI/ML model: methods of AI/ML model selection, transfer, and/or reconfiguration; and transmission and configuration of AI/ML model related assistant information. The invention may thus provide methods for determining an AI/ML model during UE mobility, which is not discussed in the current study, guarantee service continuity during UE mobility, and improve handover performance.
FIG. 7 is a block diagram of an example system 700 for wireless communication according to an embodiment of the present disclosure. Embodiments described herein may be implemented into the system using any suitably configured hardware and/or software. FIG. 7 illustrates the system 700 including a radio frequency (RF) circuitry 710, a baseband circuitry 720, an application circuitry 730, a memory/storage 740, a display 750, a camera 760, a sensor 770, and an input/output (I/O) interface 780, coupled with each other at least as illustrated. The application circuitry 730 may include circuitry such as, but not limited to, one or more single-core or multi-core processors. The processors may include any combination of general-purpose processors and dedicated processors, such as graphics processors and application processors. The processors may be coupled with the memory/storage and configured to execute instructions stored in the memory/storage to enable various applications and/or operating systems running on the system.
While the present disclosure has been described in connection with what is considered the most practical and preferred embodiments, it is understood that the present disclosure is not limited to the disclosed  embodiments but is intended to cover various arrangements made without departing from the scope of the broadest interpretation of the appended claims.

Claims (19)

  1. A method for determining an artificial intelligence (AI) /machine learning (ML) model in a wireless communication system based on information, comprising:
    receiving the information from a first node and/or a second node by a third node, wherein the information is used by the third node to determine the AI/ML model in the wireless communication system.
  2. The method for determining the AI/ML model in the wireless communication system based on information according to claim 1, wherein the information is sent from the first node and/or the second node to the third node and comprises a message including the AI/ML model and/or AI/ML model related assistant information.
  3. The method for determining the AI/ML model in the wireless communication system based on information according to claim 2, wherein the AI/ML model is transmitted from the second node to the third node, and/or from the first node to the third node.
  4. The method for determining the AI/ML model in the wireless communication system based on information according to claim 2, wherein the AI/ML model related assistant information contains one of the followings:
    - model ID;
    - model group ID; or
    - parameters and/or structure of AI/ML model.
  5. The method for determining the AI/ML model in the wireless communication system based on information according to claim 4, wherein if the third node receives the parameters and/or structure of the AI/ML model, and if the third node has a model correlation, the third node determines one AI/ML model.
  6. The method for determining the AI/ML model in the wireless communication system based on information according to claim 4, wherein if the third node receives the parameters and/or structure of the AI/ML model, the third node determines one AI/ML model.
  7. The method for determining the AI/ML model in the wireless communication system based on information according to claim 4, wherein the third node determines the AI/ML model by model selection, and/or model reconfiguration, and/or model update.
  8. The method for determining the AI/ML model in the wireless communication system based on information according to claim 5, wherein the model correlation indicates a range for some parameters and/or structure of AI/ML models.
  9. The method for determining the AI/ML model in the wireless communication system based on information according to claim 5, wherein the model correlation is configured by a network, or fixed, or indicated by the second node or the first node.
  10. The method for determining the AI/ML model in the wireless communication system based on information according to claim 7, wherein the model reconfiguration and/or update comprises one of the following options:
    - the third node decides by itself; or
    - the decision is based on the model correlation.
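As a rough illustration of claims 5 through 10, the helper below shows one way a third node might match received model parameters against locally known models, using a numeric tolerance as a stand-in for the "model correlation" range of claim 8. The registry layout and the tolerance check are assumptions made for this sketch only; the claims do not fix a concrete data model.

```python
def determine_model(received_params, registry, tolerance=None):
    """Return the ID of a model in `registry` whose parameters match the
    received ones (model selection), or None when no correlated model
    exists and the node must reconfigure/update a model by itself
    (claim 10, first option). `tolerance` loosely models the claim-8
    correlation range; None means an exact match is required."""
    for model_id, params in registry.items():
        if len(params) != len(received_params):
            continue  # structurally incompatible model
        if tolerance is None:
            if params == received_params:
                return model_id
        elif all(abs(a - b) <= tolerance
                 for a, b in zip(params, received_params)):
            return model_id
    return None
```

With a tolerance configured (claim 9: the correlation may be configured by the network, fixed, or indicated by another node), a near-match is enough to select an existing model; without one, any mismatch forces the fall-back path.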
  11. A method of processing an AI/ML model by a first node, comprising:
    the first node receiving a message from a third node and/or a second node; and
    the first node changing the AI/ML model or not.
  12. The method of claim 11, wherein the message comprises at least one of the following:
    - a first indicator, which indicates whether the AI/ML model has changed, so that the UE, based on the first indicator, is able to know whether a target gNB uses a new AI/ML model;
    - an AI/ML model ID, which uniquely identifies the AI/ML model;
    - a second indicator, which indicates whether the UE uses the old AI/ML model or the new AI/ML model.
  13. The method of claim 12, wherein the first node changing the AI/ML model or not comprises:
    if the target gNB sends the second indicator, the UE changes to the new AI/ML model or not based on the requirement of the second indicator; or
    if the target gNB does not send the second indicator, and the first node knows only from the first indicator that the AI/ML model has changed in the target gNB, the first node decides by itself whether to change to the new AI/ML model.
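The two branches of claim 13 amount to a small decision rule at the UE. The sketch below is a hedged illustration only: the boolean encoding of the indicators and the `prefer_new_model` policy argument (standing in for the UE deciding "based on itself" in the second branch) are assumptions, not part of the claims.

```python
def decide_model_change(first_indicator, second_indicator,
                        prefer_new_model=True):
    """Return True when the UE should switch to the new AI/ML model.

    first_indicator  -- True if the target gNB uses a new AI/ML model
    second_indicator -- True/False if the gNB mandates the new/old model,
                        or None if the gNB did not send this indicator
    prefer_new_model -- the UE's own policy, applied only when the gNB
                        leaves the decision to the UE
    """
    if second_indicator is not None:
        # Claim 13, first branch: follow the gNB's requirement directly.
        return second_indicator
    if first_indicator:
        # Claim 13, second branch: the model changed at the gNB, but the
        # UE decides by itself whether to follow.
        return prefer_new_model
    # Model unchanged at the target gNB: keep the old model.
    return False
```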
  14. A wireless communication system, comprising:
    a memory;
    a transceiver; and
    a processor coupled to the memory and the transceiver;
    wherein the processor is configured to execute the method of any one of claims 1 to 13.
  15. A non-transitory machine-readable storage medium having stored thereon instructions that, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 13.
  16. A chip, comprising:
    a processor, configured to call and run a computer program stored in a memory, to cause a device in which the chip is installed to execute the method of any one of claims 1 to 13.
  17. A computer readable storage medium, in which a computer program is stored, wherein the computer program causes a computer to execute the method of any one of claims 1 to 13.
  18. A computer program product, comprising a computer program, wherein the computer program causes a computer to execute the method of any one of claims 1 to 13.
  19. A computer program, wherein the computer program causes a computer to execute the method of any one of claims 1 to 13.
PCT/CN2022/130072 2022-11-04 2022-11-04 Wireless communication system and method for determining ai/ml model during ue mobility WO2024092788A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/130072 WO2024092788A1 (en) 2022-11-04 2022-11-04 Wireless communication system and method for determining ai/ml model during ue mobility

Publications (1)

Publication Number Publication Date
WO2024092788A1 true WO2024092788A1 (en) 2024-05-10

Family

ID=90929492

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/130072 WO2024092788A1 (en) 2022-11-04 2022-11-04 Wireless communication system and method for determining ai/ml model during ue mobility

Country Status (1)

Country Link
WO (1) WO2024092788A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021253232A1 (en) * 2020-06-16 2021-12-23 北京小米移动软件有限公司 Communication method and device, and electronic device and computer-readable storage medium
CN113873538A (en) * 2020-06-30 2021-12-31 华为技术有限公司 Model data transmission method and communication device
WO2022000216A1 (en) * 2020-06-29 2022-01-06 北京小米移动软件有限公司 Communication method and device, electronic device, and computer-readable storage medium
CN114143799A (en) * 2020-09-03 2022-03-04 华为技术有限公司 Communication method and device
US20220078637A1 (en) * 2018-12-28 2022-03-10 Telefonaktiebolaget Lm Ericsson (Publ) Wireless device, a network node and methods therein for updating a first instance of a machine learning model
CN115004755A (en) * 2020-10-13 2022-09-02 中兴通讯股份有限公司 Wireless multi-carrier configuration and selection
WO2022186458A1 (en) * 2021-03-04 2022-09-09 Lg Electronics Inc. Method and apparatus for performing handover based on ai model in a wireless communication system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22964092

Country of ref document: EP

Kind code of ref document: A1