WO2023030077A1 - Communication method, communication apparatus and communication system

Communication method, communication apparatus and communication system

Info

Publication number
WO2023030077A1
Authority
WO
WIPO (PCT)
Prior art keywords
network element
training
model
encrypted
type
Prior art date
Application number
PCT/CN2022/114043
Other languages
English (en)
French (fr)
Inventor
封召
辛阳
王远
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2023030077A1



Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 12/00 Security arrangements; Authentication; Protecting privacy or anonymity
    • H04W 12/03 Protecting confidentiality, e.g. by encryption
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 Supervisory, monitoring or testing arrangements
    • H04W 24/02 Arrangements for optimising operational condition

Definitions

  • the present application relates to the technical field of communication, and in particular to a communication method, a communication device and a communication system.
  • the training network element can train the model, and provide the trained model to the inference network element, and the inference network element inputs the data to be analyzed into the model for inference, and obtains the analysis result.
  • the address information of one or more training network elements and the identification information of the analysis type supported by each training network element are generally configured locally on the reasoning network element.
  • the reasoning network element can, according to the analysis type corresponding to the data to be analyzed, select a training network element that can provide a model from the one or more training network elements.
  • In the existing solution, the manufacturer of the inference network element is the same as that of each training network element, and the model deployment platform used is also the same.
  • Embodiments of the present application provide a communication method, a communication device, and a communication system, so as to realize cross-vendor sharing of models.
  • the embodiment of the present application provides a communication method, and the method may be executed by an inference network element or a module (such as a chip) applied to the inference network element.
  • the method includes: the reasoning network element sends a first request message to the training network element, where the first request message includes identification information of the analysis type and is used to request a model that supports the analysis type, the manufacturer type of the training network element is different from that of the reasoning network element, and the model deployment platform types of the reasoning network element and the training network element are the same; the reasoning network element receives a first response message from the training network element, where the first response message includes the encrypted model or the address information of the encrypted model, and the encrypted model supports the analysis type; the reasoning network element obtains the encrypted analysis result according to the encrypted model; and the reasoning network element obtains the decrypted analysis result according to the encrypted analysis result.
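  • For illustration only, the first request and first response messages described above could be represented as simple data structures; the field names (analytics_id, vendor_id, platform_id and so on) are assumptions for this sketch and are not defined by the application:

      # Hypothetical sketch of the "first request message" and "first response message".
      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class FirstRequestMessage:
          analytics_id: str                    # identification information of the analysis type
          vendor_id: Optional[str] = None      # manufacturer type of the inference network element
          platform_id: Optional[str] = None    # model deployment platform type of the inference network element

      @dataclass
      class FirstResponseMessage:
          encrypted_model: Optional[bytes] = None   # the encrypted model itself, or ...
          model_address: Optional[str] = None       # ... address information of the encrypted model
          decrypt_at_training_ne: bool = False      # first indication information
          input_data_type: Optional[str] = None     # second indication information

      # Example: an inference NE on the same platform as the training NE but from a
      # different vendor requests a model supporting "service experience" analytics.
      request = FirstRequestMessage(analytics_id="service experience",
                                    vendor_id="vendor-B", platform_id="PyTorch")
      print(request)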
  • the inference network element and the training network element are deployed by different manufacturers, but the model deployment platform used by the two is the same, which breaks the limitation in the existing solution that a model can only be shared within the same manufacturer.
  • This solution provides a cross-vendor encrypted distribution process for the model, enhances the ability of the training network element to distribute the model in encrypted form, and avoids the risk that the manufacturer deploying the inference network element steals the framework and parameters of the model.
  • the reasoning network element sends the encrypted analysis result to the training network element; the reasoning network element receives the decrypted analysis result from the training network element.
  • since the training network element is the network element that encrypted the model, the encrypted analysis result is decrypted by the training network element, so that accurate decryption of the encrypted analysis result can be realized.
  • the first response message further includes first indication information, where the first indication information indicates that the training network element decrypts the encrypted analysis result.
  • according to the first indication information, the reasoning network element can accurately know that the network element that decrypts the encrypted analysis result is the training network element.
  • the reasoning network element sends the encrypted analysis result and the association identifier to the training network element, and the association identifier is used for the training network element to determine the encryption algorithm corresponding to the encrypted model.
  • the training network element can accurately obtain the encryption algorithm corresponding to the encrypted model, and then accurately know the decryption algorithm to be used for decrypting the encrypted analysis result, which can improve the efficiency of decryption.
  • the first response message further includes address information of the first network element; the reasoning network element sends the encrypted analysis result to the first network element according to the address information of the first network element; the reasoning network element receives the decrypted analysis result from the first network element.
  • the first network element can decrypt the encrypted analysis result, thereby ensuring that the reasoning network element can obtain the decrypted analysis result.
  • the reasoning network element sends the encrypted analysis result and the association identifier to the first network element according to the address information of the first network element, and the association identifier is used for the first network element to determine The encryption algorithm corresponding to the encrypted model.
  • the first network element can accurately obtain the encryption algorithm corresponding to the encrypted model, and then accurately know the decryption algorithm to be used for decrypting the encrypted analysis result, which can improve the efficiency of decryption.
  • the first response message further includes second indication information, where the second indication information is used to indicate the data type of the input data corresponding to the encrypted model.
  • the reasoning network element can perform corresponding preprocessing on the input data to obtain the data to be analyzed that meet the requirements, which can improve the efficiency of data reasoning.
  • the first request message further includes a manufacturer type of the inference network element and a model deployment platform type of the inference network element.
  • the training network element can judge whether the manufacturer types of the training network element and the reasoning network element are the same, and whether the model deployment platform types of the training network element and the reasoning network element are the same, so that the training network element can conveniently choose an appropriate method to provide the data reasoning function for the reasoning network element, which can improve the efficiency of data reasoning.
  • before the reasoning network element sends the first request message to the training network element, it sends a second request message to the data management network element, where the second request message includes identification information of the analysis type, and the second request message is used to request a network element supporting the analysis type; the reasoning network element receives a second response message from the data management network element, and the second response message includes address information of the training network element.
  • the reasoning network element can request the discovery of the training network element from the data management network element, which can realize accurate discovery of the training network element that can provide the model.
  • the embodiment of the present application provides a communication method, which can be executed by a training network element or a module (such as a chip) applied to the training network element.
  • the method includes: the training network element receives a first request message from the reasoning network element, where the first request message includes identification information of the analysis type and is used to request a model that supports the analysis type, the manufacturer type of the training network element is different from that of the reasoning network element, and the model deployment platform types of the reasoning network element and the training network element are the same; the training network element sends a first response message to the reasoning network element, where the first response message includes the encrypted model or the address information of the encrypted model; the training network element receives the encrypted analysis result from the reasoning network element, and the encrypted analysis result is obtained according to the encrypted model; the training network element decrypts the encrypted analysis result to obtain the decrypted analysis result; and the training network element sends the decrypted analysis result to the reasoning network element.
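  • A minimal sketch of the training-network-element side of this exchange (illustration only; the in-memory store, field names and the example model address are invented, not part of the application):

      # Hypothetical handler on the training NE: answer the first request message with the
      # address of an encrypted model and remember which encryption algorithm was used,
      # keyed by the association identifier, so the analysis result can be decrypted later.
      ENCRYPTED_MODEL_STORE = {
          # analytics ID -> (address information of the encrypted model, encryption algorithm)
          "service experience": ("https://training-ne.example/models/svc-exp.enc",
                                 "fully-homomorphic"),
      }
      ASSOCIATION_TABLE = {}  # association identifier -> encryption algorithm

      def handle_first_request(analytics_id: str, association_id: str) -> dict:
          """Build the first response message for a supported analysis type."""
          address, algorithm = ENCRYPTED_MODEL_STORE[analytics_id]
          ASSOCIATION_TABLE[association_id] = algorithm   # needed later for decryption
          return {
              "model_address": address,                   # address information of the encrypted model
              "decrypt_at_training_ne": True,             # first indication information
              "input_data_type": "per-flow QoS samples",  # second indication information (example value)
          }

      print(handle_first_request("service experience", "assoc-0001"))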
  • the inference network element and the training network element are deployed by different manufacturers, but the model deployment platform used by the two is the same, which breaks the limitation in the existing solution that a model can only be shared within the same manufacturer.
  • This solution provides a cross-vendor encrypted distribution process for the model, enhances the ability of the training network element to distribute the model in encrypted form, and avoids the risk that the manufacturer deploying the inference network element steals the framework and parameters of the model.
  • the first request message also includes the manufacturer type of the inference network element and the model deployment platform type of the inference network element; before the training network element sends the first response message to the inference network element, it determines that the manufacturer type of the training network element is different from that of the inference network element and that the model deployment platform types of the inference network element and the training network element are the same.
  • the training network element can determine whether the manufacturer types of the training network element and the reasoning network element are the same, and whether the model deployment platform types of the training network element and the inference network element are the same, so that the training network element can conveniently select an appropriate method to provide the data reasoning function for the reasoning network element, which can improve the efficiency of data reasoning.
  • the first response message further includes first indication information, where the first indication information indicates that the training network element decrypts the encrypted analysis result.
  • according to the first indication information, the reasoning network element can accurately know that the network element that decrypts the encrypted analysis result is the training network element.
  • the first response message further includes second indication information, where the second indication information is used to indicate the data type of the input data corresponding to the encrypted model.
  • the reasoning network element can perform corresponding preprocessing on the input data to obtain the data to be analyzed that meet the requirements, which can improve the efficiency of data reasoning.
  • before the training network element receives the first request message from the reasoning network element, it sends a registration request message to the data management network element, where the registration request message includes the identification information of the analysis type and the model information of the training network element, and the model information includes the manufacturer type of the training network element and the type of the model deployment platform of the training network element.
  • the model information in the registration request message further includes the above-mentioned second indication information.
  • the model information in the registration request message further includes identification information of the second network element.
  • the model information in the registration request message further includes identification information of the first network element.
  • the training network element receives the encrypted analysis result and association identification from the reasoning network element; the training network element determines the encryption algorithm corresponding to the encrypted model according to the association identification; the training The network element determines a decryption algorithm according to the encryption algorithm; the training network element decrypts the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
  • the training network element can accurately obtain the encryption algorithm corresponding to the encrypted model, and then accurately know the decryption algorithm to be used for decrypting the encrypted analysis result, which can improve the efficiency of decryption.
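  • The mapping from association identifier to encryption algorithm, and from there to the decryption algorithm, could look as follows (a toy sketch; the table contents and names are invented):

      # Hypothetical lookup on the training NE when an encrypted analysis result arrives
      # together with an association identifier.
      ASSOCIATION_TABLE = {
          "assoc-0001": "fully-homomorphic",        # association identifier -> encryption algorithm
          "assoc-0002": "random-secure-average",
      }
      DECRYPTION_FOR = {
          "fully-homomorphic": "fhe-decrypt",       # encryption algorithm -> decryption algorithm
          "random-secure-average": "remove-random-mask",
      }

      def select_decryption_algorithm(association_id: str) -> str:
          encryption_algorithm = ASSOCIATION_TABLE[association_id]
          return DECRYPTION_FOR[encryption_algorithm]

      print(select_decryption_algorithm("assoc-0001"))  # -> fhe-decrypt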
  • the embodiment of the present application provides a communication method, and the method may be executed by an inference network element or a module (such as a chip) applied to the inference network element.
  • the method includes: the inference network element sends a request message to the training network element, the request message includes identification information of the analysis type, and the request message is used to request a model that supports the analysis type.
  • the manufacturers of the training network element and the reasoning network element are of different types, and the model deployment platforms of the reasoning network element and the training network element are of different types;
  • the reasoning network element receives a response message from the training network element, where the response message includes first indication information and address information of the second network element, the first indication information indicates that the request for a model supporting the analysis type is rejected, and the model deployment platform types supported by the second network element include the model deployment platform type of the training network element;
  • the reasoning network element sends the data to be analyzed to the second network element according to the address information of the second network element, and the data to be analyzed is used by the second network element to generate the encrypted analysis result according to the encrypted model corresponding to the analysis type; the reasoning network element receives the decrypted analysis result from the training network element or the first network element, and the decrypted analysis result is obtained by the training network element or the first network element according to the encrypted analysis result.
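  • The redirected flow of this aspect, seen from the reasoning network element, might be orchestrated roughly as follows (a sketch under assumed message fields and a stand-in transport function; none of these names are defined by the application):

      # Hypothetical orchestration: request a model, get rejected with the address of the
      # second network element, then hand the data to be analyzed to that second NE.
      def run_redirected_flow(send, analytics_id, data_to_analyze, association_id):
          """'send(target, message)' is an assumed transport helper returning the reply."""
          reply = send("training-ne", {"analytics_id": analytics_id})
          if reply.get("rejected"):                       # first indication information
              second_ne = reply["second_ne_address"]      # address information of the second NE
              send(second_ne, {"data": data_to_analyze,
                               "association_id": association_id})
              # The decrypted analysis result is later delivered by the training NE
              # or the first NE; here we only report where the data was sent.
              return "data to be analyzed sent to " + second_ne
          return reply   # otherwise the model (or its address) was returned directly

      def fake_send(target, message):                     # stand-in transport for the demo
          if target == "training-ne":
              return {"rejected": True, "second_ne_address": "second-ne.example"}
          return {"ack": True}

      print(run_redirected_flow(fake_send, "NF load information", [0.4, 0.7], "assoc-0002"))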
  • the inference network element and the training network element are deployed by different manufacturers, and the model deployment platforms used by the two are different, which breaks the limitation in the existing solution that a model can only be shared within the same manufacturer.
  • This solution provides a cross-vendor encrypted distribution process for the model, enhances the ability of the training network element to distribute the model in encrypted form, and avoids the risk that the manufacturer deploying the inference network element steals the framework and parameters of the model.
  • the response message further includes a rejection reason value, where the rejection reason value indicates that the manufacturer types of the training network element and the inference network element are different and that the model deployment platform types of the inference network element and the training network element are different.
  • the inference network element can be notified of the reason for rejection through the rejection reason value, so that the inference network element no longer requests a model supporting the analysis type from the training network element, which can reduce reasoning overhead.
  • the request message also includes the manufacturer type of the inference network element and the model deployment platform type of the inference network element.
  • the training network element can judge whether the manufacturer types of the training network element and the reasoning network element are the same, and whether the model deployment platform types of the training network element and the reasoning network element are the same, so that the training network element can conveniently select an appropriate method to provide the data reasoning function for the reasoning network element, which can improve the efficiency of data reasoning.
  • the response message further includes second indication information, where the second indication information is used to indicate the data type of the input data corresponding to the encrypted model.
  • the reasoning network element can perform corresponding preprocessing on the input data to obtain the data to be analyzed that meet the requirements, which can improve the efficiency of data reasoning.
  • the reasoning network element sends the data to be analyzed and an association identifier to the second network element according to the address information of the second network element, and the association identifier is used by the first network element or the training network element to determine the encryption algorithm corresponding to the encrypted model.
  • the training network element or the first network element can accurately obtain the encryption algorithm corresponding to the encrypted model, and then accurately know the decryption algorithm to be used for decrypting the encrypted analysis result, which can improve the efficiency of decryption.
  • the embodiment of the present application provides a communication method, which can be executed by a training network element or a module (such as a chip) applied to the training network element.
  • the method includes: the training network element receives a request message from the reasoning network element, where the request message includes identification information of the analysis type and is used to request a model that supports the analysis type, the manufacturer type of the training network element is different from that of the reasoning network element, and the model deployment platform types of the reasoning network element and the training network element are different; the training network element sends a response message to the reasoning network element, where the response message includes first indication information and address information of the second network element, the first indication information indicates that the request for a model supporting the analysis type is rejected, and the model deployment platform types supported by the second network element include the model deployment platform type of the training network element; the training network element receives the encrypted analysis result from the second network element, and the encrypted analysis result is obtained by the second network element according to the encrypted model corresponding to the analysis type and the data to be analyzed.
  • the inference network element and the training network element are deployed by different manufacturers, and the model deployment platforms used by the two are different, which breaks the limitation in the existing solution that a model can only be shared within the same manufacturer.
  • This solution provides a cross-vendor encrypted distribution process for the model, enhances the ability of the training network element to distribute the model in encrypted form, and avoids the risk that the manufacturer deploying the inference network element steals the framework and parameters of the model.
  • the request message also includes the manufacturer type of the inference network element and the model deployment platform type of the inference network element; before the training network element sends the response message to the inference network element, it determines that the manufacturer types of the training network element and the inference network element are different and that the model deployment platform types of the inference network element and the training network element are different.
  • the training network element can determine whether the manufacturer types of the training network element and the inference network element are the same, and whether the model deployment platform types of the training network element and the inference network element are the same, so that the training network element can conveniently choose an appropriate method to provide the data reasoning function for the inference network element, which can improve the efficiency of data reasoning.
  • the response message further includes a rejection reason value, where the rejection reason value indicates that the manufacturer types of the training network element and the inference network element are different and that the model deployment platform types of the inference network element and the training network element are different.
  • the inference network element can be notified of the reason for rejection through the rejection reason value, so that the inference network element no longer requests a model supporting the analysis type from the training network element, which can reduce reasoning overhead.
  • before the training network element receives the request message from the reasoning network element, it sends the identification information of the analysis type and the encrypted model corresponding to the analysis type to the second network element.
  • the response message further includes second indication information, where the second indication information is used to indicate the data type of the input data corresponding to the encrypted model.
  • the reasoning network element can perform corresponding preprocessing on the input data to obtain the data to be analyzed that meet the requirements, which can improve the efficiency of data reasoning.
  • the training network element receives the encrypted analysis result and association identifier from the second network element; the training network element determines the encryption algorithm corresponding to the encrypted model according to the association identifier; the The training network element determines a decryption algorithm according to the encryption algorithm; the training network element decrypts the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
  • the training network element can accurately obtain the encryption algorithm corresponding to the encrypted model, and then accurately know the decryption algorithm to be used for decrypting the encrypted analysis result, which can improve the efficiency of decryption.
  • the embodiment of the present application provides a communication method, and the method may be executed by a first network element or a module (such as a chip) applied to the first network element.
  • the method includes: the first network element receives the encrypted analysis result; the first network element decrypts the encrypted analysis result to obtain the decrypted analysis result; and the first network element sends the decrypted analysis result to the reasoning network element.
  • the first network element receives the encrypted analysis result from the reasoning network element.
  • the first network element receives the encrypted analysis result and the address information of the reasoning network element from the second network element; the first network element sends the decrypted analysis result to the reasoning network element according to the address information of the reasoning network element.
  • before the first network element receives the encrypted analysis result, it receives the association identifier from the training network element and the identifier of the decryption algorithm corresponding to the association identifier; the first network element receives the encrypted analysis result and the association identifier; the first network element determines the decryption algorithm according to the association identifier; and the first network element decrypts the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
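  • As a toy illustration (not a real cryptographic scheme and not the application's algorithm), the first network element could store the decryption material it receives ahead of time per association identifier and apply it when an encrypted analysis result arrives:

      # Hypothetical first-NE logic: pre-provisioned mapping from association identifier
      # to decryption material; here "encryption" is just an additive mask for illustration.
      PRE_PROVISIONED = {}   # association identifier -> decryption material (the mask)

      def provision(association_id: str, mask: float) -> None:
          """Performed before any encrypted analysis result arrives (sent by the training NE)."""
          PRE_PROVISIONED[association_id] = mask

      def decrypt_analysis_result(association_id: str, encrypted_result: float) -> float:
          """Determine the decryption material from the association identifier and apply it."""
          return encrypted_result - PRE_PROVISIONED[association_id]

      provision("assoc-0001", mask=13.5)
      print(decrypt_analysis_result("assoc-0001", encrypted_result=14.0))   # -> 0.5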
  • the embodiment of the present application provides a communication method, and the method may be executed by a second network element or a module (such as a chip) applied to the second network element.
  • the method includes: the second network element receives, from the training network element, the identification information of the analysis type and the encrypted model supporting the analysis type, where the model deployment platform types supported by the second network element include the model deployment platform type of the training network element; the second network element receives the data to be analyzed from the reasoning network element; the second network element obtains the encrypted analysis result according to the encrypted model and the data to be analyzed; the second network element sends the encrypted analysis result and the address information of the reasoning network element for receiving the decrypted analysis result to the training network element or the first network element, and the decrypted analysis result is obtained by the training network element or the first network element according to the encrypted analysis result.
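  • A sketch of the second network element's role (illustration only; the toy "encrypted model" below is just a masked function, and all names and addresses are invented):

      # Hypothetical second-NE step: run inference with the encrypted model on the data to
      # be analyzed, then forward the encrypted analysis result plus the reasoning NE's
      # address to the training NE (or first NE) for decryption.
      def second_ne_handle(encrypted_model, data_to_analyze, reasoning_ne_address, send):
          encrypted_result = encrypted_model(data_to_analyze)   # inference on the encrypted model
          send("training-ne", {"encrypted_analysis_result": encrypted_result,
                               "reasoning_ne_address": reasoning_ne_address})

      # Demo with a toy "encrypted" model: an average with an additive mask that only the
      # training NE knows how to remove.
      toy_encrypted_model = lambda xs: sum(xs) / len(xs) + 13.5
      second_ne_handle(toy_encrypted_model, [0.2, 0.4, 0.6], "reasoning-ne.example",
                       send=lambda target, msg: print("to", target, ":", msg))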
  • the embodiment of the present application provides a communication method, and the method may be executed by an inference network element or a module (such as a chip) applied to the inference network element.
  • the method includes: the inference network element sends a request message to the data management network element, the request message includes identification information of the analysis type, and the request message is used to request the network element supporting the analysis type ;
  • the reasoning network element receives a response message from the data management network element, where the response message includes at least one set of information, each set of information includes address information of a candidate training network element and model information of the candidate training network element, the candidate training network element supports the analysis type, and the model information of the candidate training network element includes the manufacturer type of the candidate training network element and the type of the model deployment platform of the candidate training network element; among the at least one candidate training network element corresponding to the at least one set of information, there are one or more candidate training network elements whose manufacturer type is different from that of the reasoning network element.
  • This solution enhances the function of the data management network element.
  • the training network element first registers or updates the identification information of the supported analysis type and the corresponding model information with the data management network element, and then the reasoning network element discovers an available training network element from the data management network element.
  • Inference NEs and training NEs are deployed by different manufacturers, and the types of model deployment platforms used by the two are the same or different.
  • This solution provides a cross-vendor encrypted distribution process for models, enhances the ability of the training network element to distribute models in encrypted form, avoids the risk that the manufacturer deploying the inference network element steals the framework and parameters of the model, and breaks the limitation in the existing solution that a model can only be shared within the same manufacturer.
  • The reasoning network element determines the address information of the second network element according to the at least one set of information.
  • the model information of the candidate training network element includes address information of the second network element; the reasoning network element obtains the address information of the second network element from the model information of the candidate training network element.
  • the encrypted model in any of the above implementation methods is encrypted using one or more of a fully homomorphic encryption algorithm, a random secure average algorithm, or a differential privacy algorithm.
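  • To make the idea concrete, the following toy round trip hides a linear model's weights behind a random additive mask; it is only a sketch of the masking principle and is neither fully homomorphic encryption nor the specific algorithms named above:

      # Toy illustration: the training NE shares masked weights; the inference NE computes
      # on them and obtains a masked ("encrypted") result; only the training NE, which kept
      # the masks, can recover the true prediction.
      import random

      def training_ne_encrypt(weights):
          masks = [random.uniform(-100, 100) for _ in weights]
          return [w + m for w, m in zip(weights, masks)], masks   # share masked weights, keep masks

      def inference_ne_predict(masked_weights, features):
          return sum(w * x for w, x in zip(masked_weights, features))   # encrypted analysis result

      def training_ne_decrypt(encrypted_result, masks, features):
          return encrypted_result - sum(m * x for m, x in zip(masks, features))

      true_weights = [0.5, -1.0, 2.0]
      features = [1.0, 2.0, 3.0]
      masked_weights, masks = training_ne_encrypt(true_weights)
      encrypted = inference_ne_predict(masked_weights, features)
      print(training_ne_decrypt(encrypted, masks, features))   # approximately 4.5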
  • the reasoning network element in any of the above implementation methods may be an independent core network element or a functional module in the core network element.
  • the training network element in any of the above implementation methods may be an independent core network element or a functional module in the core network element.
  • the first network element in any of the above implementation methods may be an analysis result decryption network element, which can be used to decrypt the encrypted analysis result.
  • the second network element in any of the above implementation methods may be a model deployment and reasoning network element, which can be used to perform reasoning on the data to be analyzed according to the model, and obtain analysis results.
  • if the model used is an encrypted model, reasoning can be performed on the data to be analyzed according to the encrypted model, and an encrypted analysis result can be obtained.
  • the embodiment of the present application provides a communication device, which may be an inference network element or a module (such as a chip) applied to an inference network element.
  • the device has the function of implementing any implementation method of the first aspect, any implementation method of the third aspect, or any implementation method of the seventh aspect. This function may be implemented by hardware, or may be implemented by executing corresponding software on the hardware.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the embodiment of the present application provides a communication device, and the device may be a training network element or a module (such as a chip) applied to the training network element.
  • the device has the function of implementing any implementation method of the above second aspect or any implementation method of the fourth aspect. This function may be implemented by hardware, or may be implemented by executing corresponding software on the hardware.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the embodiment of the present application provides a communication device, and the device may be a first network element or a module (such as a chip) applied to the first network element.
  • the device has the function of implementing any implementation method of the fifth aspect above. This function may be implemented by hardware, or may be implemented by executing corresponding software on the hardware.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the embodiment of the present application provides a communication device, and the device may be a second network element or a module (such as a chip) applied to the second network element.
  • the device has the function of implementing any implementation method of the sixth aspect above. This function may be implemented by hardware, or may be implemented by executing corresponding software on the hardware.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the embodiment of the present application provides a communication device, including a processor and a memory; the memory is used to store computer instructions, and when the device is running, the processor executes the computer instructions stored in the memory, so that the device executes any implementation method of the first aspect to the seventh aspect above.
  • the embodiment of the present application provides a communication device, including a unit or means (means) for performing each step of any implementation method in the first aspect to the seventh aspect.
  • the embodiment of the present application provides a communication device, including a processor and an interface circuit, the processor is used to communicate with other devices through the interface circuit, and execute any implementation method in the first aspect to the seventh aspect above .
  • the processor includes one or more.
  • the embodiment of the present application provides a communication device, including a processor coupled to a memory, where the processor is used to call a program stored in the memory, so as to execute any implementation method of the first aspect to the seventh aspect above.
  • the memory may be located within the device or external to the device. And there may be one or more processors.
  • the embodiment of the present application also provides a computer-readable storage medium, where the computer-readable storage medium stores instructions, and when the instructions are run on a communication device, any implementation method of the above first to seventh aspects is executed.
  • the embodiment of the present application also provides a computer program product, where the computer program product includes a computer program or instructions, and when the computer program or instructions are run by a communication device, any implementation method of the above first to seventh aspects is executed.
  • the embodiment of the present application further provides a chip system, including: a processor, configured to execute any implementation method in the first aspect to the third aspect above.
  • the embodiment of the present application further provides a communication system, including an inference network element for implementing any implementation method of the first aspect above and a training network element for implementing any implementation method of the second aspect above.
  • the embodiment of the present application further provides a communication system, including an inference network element for implementing any implementation method of the above third aspect and a training network element for implementing any implementation method of the above fourth aspect.
  • Figure 1 is a schematic diagram of a 5G network architecture based on a service-based architecture
  • Figure 2 is a schematic diagram of a 5G network architecture based on a point-to-point interface
  • FIG. 3 is a schematic flowchart of a communication method provided in an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a communication device provided in an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a communication device provided by an embodiment of the present application.
  • FIG. 1 is a schematic diagram of a fifth generation (the 5th generation, 5G) network architecture based on a service-based architecture.
  • the 5G network architecture shown in FIG. 1 may include terminal equipment, access network equipment, and core network equipment.
  • the terminal device accesses the data network (data network, DN) through the access network device and the core network.
  • the core network equipment includes some or all of the following network elements: unified data management (unified data management, UDM) network elements, unified database (unified data repository, UDR), network exposure function (network exposure function, NEF) network elements (not shown in the figure), application function (application function, AF) network element, policy control function (policy control function, PCF) network element, access and mobility management function (access and mobility management function, AMF) network element , session management function (session management function, SMF) network element, user plane function (user plane function, UPF) network element, network data analysis function (Network Data Analytics Function, NWDAF) network element, network storage function (Network Repository Function, NRF) network element (not shown in the figure).
  • the access network device may be a radio access network (radio access network, RAN) device, for example, a base station, an evolved base station (evolved NodeB, eNodeB), a transmission reception point (transmission reception point, TRP), a next generation base station (next generation NodeB, gNB), or a unit of a base station, for example, a centralized unit (CU) or a distributed unit (DU).
  • the radio access network equipment may be a macro base station, a micro base station or an indoor station, or a relay node or a donor node.
  • the embodiment of the present application does not limit the specific technology and specific equipment form adopted by the radio access network equipment.
  • the terminal device may be a user equipment (user equipment, UE), a mobile station, a mobile terminal, and the like.
  • Terminal devices can be widely used in various scenarios, such as device-to-device (D2D), vehicle-to-everything (V2X) communication, machine-type communication (MTC), Internet of Things (internet of things, IOT), virtual reality, augmented reality, industrial control, automatic driving, telemedicine, smart grid, smart furniture, smart office, smart wear, smart transportation, smart city, etc.
  • Terminal devices can be mobile phones, tablet computers, computers with wireless transceiver functions, wearable devices, vehicles, urban air vehicles (such as drones, helicopters, etc.), ships, robots, robotic arms, smart home devices, etc.
  • Access network equipment and terminal equipment can be fixed or mobile. Access network equipment and terminal equipment can be deployed on land, including indoor or outdoor, handheld or vehicle-mounted; they can also be deployed on water; they can also be deployed on aircraft, balloons and artificial satellites in the air.
  • the embodiments of the present application do not limit the application scenarios of the access network device and the terminal device.
  • the AMF network element includes functions such as mobility management and access authentication/authorization. In addition, it is also responsible for transferring user policies between terminal equipment and PCF.
  • the SMF network element includes the functions of executing session management, executing control policies issued by PCF, selecting UPF, and assigning Internet Protocol (IP) addresses to terminal devices.
  • the UPF network element as the interface with the data network, includes functions such as user plane data forwarding, session/flow-based accounting statistics, and bandwidth limitation.
  • The UDM network element includes functions such as management of subscription data and user access authorization.
  • The UDR includes functions for accessing subscription data, policy data, application data and other types of data.
  • The NEF network element is used to support the exposure of capabilities and events.
  • the AF network element transmits the requirements from the application side to the network side, such as QoS requirements or user status event subscription.
  • the AF can be a third-party functional entity, or an application server deployed by an operator.
  • the PCF network element includes policy control functions such as charging for sessions and service flow levels, QoS bandwidth guarantee, mobility management, and terminal equipment policy decisions.
  • the NRF network element can be used to provide a network element discovery function, and provide network element information corresponding to the network element type based on the request of other network elements.
  • NRF also provides network element management services, such as network element registration, update, de-registration, network element status subscription and push, etc.
  • The NWDAF network element is mainly used to collect data (including one or more of terminal device data, access network device data, core network element data, and third-party application data) and to provide data analysis services; it can output data analysis results that are used by the network, network management and applications to make policy decisions. The NWDAF can use machine learning models for data analysis. In the 3rd generation partnership project (3GPP) Release 17, the training function and the inference function of the NWDAF are split: one NWDAF may support only the model training function, only the data reasoning function, or both the model training function and the data reasoning function.
  • the NWDAF supporting the model training function may also be called the training NWDAF, or the NWDAF supporting the model training logical function (model training logical function, MTLF) (NWDAF (MTLF) for short).
  • Training NWDAF can perform model training based on the acquired data to obtain the trained model.
  • the NWDAF that supports the data reasoning function may also be called the reasoning NWDAF, or the NWDAF that supports the analysis logic function (analytics logical function, AnLF) (referred to as NWDAF (AnLF) for short).
  • Inference NWDAF can input the input data into the trained model to get analysis results or inference data.
  • the training NWDAF refers to an NWDAF that supports at least a model training function.
  • training NWDAF can also support data reasoning functions.
  • Inference NWDAF refers to NWDAF that supports at least data inference function.
  • inference NWDAF can also support the model training function. If an NWDAF supports both the model training function and the data reasoning function, the NWDAF may be called a training NWDAF, an inference NWDAF, or a training and reasoning NWDAF or NWDAF.
  • a NWDAF can be a single network element, or can be set up together with other network elements, for example, the NWDAF is set in a PCF network element or an AMF network element.
  • DN is a network outside the operator's network.
  • the operator's network can access multiple DNs, and various services can be deployed on the DN, which can provide data and/or voice services for terminal equipment.
  • DN is a private network of a smart factory.
  • the sensors installed in the workshop of the smart factory can be terminal devices.
  • the control server of the sensor is deployed in the DN, and the control server can provide services for the sensor.
  • the sensor can communicate with the control server, obtain instructions from the control server, and transmit the collected sensor data to the control server according to the instructions.
  • DN is a company's internal office network.
  • the mobile phone or computer of the company's employees can be a terminal device, and the employee's mobile phone or computer can access information and data resources on the company's internal office network.
  • Npcf, Nudr, Nudm, Naf, Namf, Nsmf, and Nnwdaf are the service interfaces provided by the above-mentioned PCF, UDR, UDM, AF, AMF, SMF, and NWDAF, respectively, and are used to call corresponding service operations.
  • N1, N2, N3, N4, and N6 are interface serial numbers, and the meanings of these interface serial numbers may refer to the description in FIG. 2 .
  • FIG. 2 is a schematic diagram of a 5G network architecture based on a point-to-point interface.
  • the introduction of the functions of the network elements can refer to the introduction of the functions of the corresponding network elements in FIG. 1 , and will not be repeated here.
  • the main difference between FIG. 2 and FIG. 1 is that: the interface between each control plane network element in FIG. 1 is a service interface, and the interface between each control plane network element in FIG. 2 is a point-to-point interface.
  • N1 the interface between the AMF and the terminal device, which can be used to transmit NAS signaling (such as including QoS rules from the AMF) to the terminal device.
  • N2 the interface between the AMF and the RAN, which can be used to transfer radio bearer control information from the core network side to the RAN.
  • N3 the interface between the RAN and the UPF, mainly used to transfer the uplink and downlink user plane data between the RAN and the UPF.
  • N4 the interface between SMF and UPF, which can be used to transfer information between the control plane and the user plane, including delivery of forwarding rules, QoS control rules and traffic statistics rules from the control plane, and information reporting from the user plane.
  • N5 the interface between the AF and the PCF, which can be used for sending application service requests and reporting network events.
  • N6 the interface between UPF and DN, used to transfer the uplink and downlink user data flow between UPF and DN.
  • N7 the interface between PCF and SMF, which can be used to deliver protocol data unit (protocol data unit, PDU) session granularity and service data flow granularity control policy.
  • N8 the interface between AMF and UDM, which can be used for AMF to obtain subscription data and authentication data related to access and mobility management from UDM, and for AMF to register information related to current mobility management of terminal equipment with UDM.
  • N9 a user plane interface between UPF and UPF, used to transmit uplink and downlink user data flows between UPFs.
  • N10 the interface between SMF and UDM, which can be used for SMF to obtain session management-related subscription data from UDM, and for SMF to register current session-related information of terminal equipment with UDM.
  • N11 the interface between SMF and AMF, which can be used to transfer PDU session tunnel information between RAN and UPF, transfer control messages sent to terminal devices, transfer radio resource control information sent to RAN, etc.
  • N15 the interface between PCF and AMF, which can be used to issue terminal device policies and access control related policies.
  • N23 the interface between PCF and NWDAF, through which NWDAF can collect data on PCF. It should be noted that the NWDAF may also have interfaces with other devices (such as AMF, UPF, access network devices, terminal devices, etc.), which are not fully shown in the figure.
  • N35 the interface between UDM and UDR, which can be used for UDM to obtain user subscription data information from UDR.
  • N36 the interface between PCF and UDR, which can be used for PCF to obtain policy-related subscription data and application data-related information from UDR.
  • the above-mentioned network element or function may be a network element in a hardware device, or a software function running on dedicated hardware, or a virtualization function instantiated on a platform (for example, a cloud platform).
  • the above-mentioned network element or function may be implemented by one device, or jointly implemented by multiple devices, or may be a functional module in one device, which is not specifically limited in this embodiment of the present application.
  • the data management network element in the embodiment of the present application may be the above-mentioned NRF, UDM or UDR, or may be a network element with the above-mentioned NRF, UDM or UDR function in future communications such as 6G networks.
  • the reasoning network element may be the above-mentioned reasoning NWDAF or a network element having the above-mentioned reasoning NWDAF function in a future communication such as a 6G network.
  • the training network element may be the above-mentioned training NWDAF or a network element with the above-mentioned training NWDAF function in future communications such as 6G networks.
  • the data management network element in the embodiment of the present application may be a network management side model management device, a network management side model management network element, or a network management side model management service.
  • the inference network element may be an inference device on the network management side.
  • the training network element may be a network management side training device, a network management side training network element, or a network management side training service.
  • the data management network element in the embodiment of the present application may be a model management device on the access network device side.
  • the inference network element may be an inference device on the access network device side.
  • the training network element may be a training device on the access network device side.
  • an embodiment of the present application provides a communication method.
  • the manufacturer type of the training network element is different from the manufacturer type of the reasoning network element
  • the type of the model deployment platform of the training network element is the same as that of the model deployment platform of the reasoning network element.
  • the model deployment platform is the framework on which a model runs. Different model deployment platforms may differ in dynamic computation graphs, static computation graphs, debugging methods, visualization, or parallelism features.
  • the type of the model deployment platform is used to distinguish different model deployment platforms.
  • the type of model deployment platform can be represented by AI Platform ID (or Platform ID).
  • AI is the abbreviation of artificial intelligence.
  • this method comprises the following steps:
  • step 301 the reasoning network element sends a request message to the training network element.
  • the training network element receives the request message.
  • the request message includes analysis type identification information (analytics ID), and the request message is used to request a model that supports the analysis type indicated by the analysis type identification information.
  • the identification information of the analysis type is used to indicate the analysis type, and the identification information of the analysis type may be, for example, service experience (service experience) or network element load information (NF load information).
  • the request message also includes the type of the manufacturer and the type of the model deployment platform.
  • the manufacturer type and the model deployment platform type in the request message refer to the manufacturer type of the inference network element and the model deployment platform type.
  • the manufacturer type can be, for example, Huawei, Ericsson, or Nokia.
  • the type of model deployment platform can be, for example, Mindspore, Tensorflow, or PyTorch.
  • the request message may also include the version of the model deployment platform of the reasoning network element.
  • the version of the model deployment platform may be, for example, V1.0 or V2.1.
  • the request message may also include an association identifier.
  • the inference network element can request model information from the training network element by calling the Nnwdaf_MLModelProvision_Subscribe service operation. That is, the request message in step 301 may be the Nnwdaf_MLModelProvision_Subscribe service operation.
  • the request message in step 301 is also referred to as a first request message.
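  • Using the example values given in this step, the content of the request message in step 301 might look like the following (field names are illustrative assumptions, not 3GPP-defined information elements):

      # Hypothetical content of the step-301 request message.
      step_301_request = {
          "analytics_id": "NF load information",  # identification information of the analysis type
          "vendor_id": "Ericsson",                # manufacturer type of the inference network element
          "platform_id": "PyTorch",               # model deployment platform type of the inference network element
          "platform_version": "V2.1",             # optional version of the model deployment platform
          "association_id": "assoc-0001",         # optional association identifier
      }
      print(step_301_request)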
  • Step 302 the training network element determines that the manufacturer types of the training network element and the inference network element are different and the model deployment platforms are of the same type.
  • the request message in the above step 301 carries the manufacturer type of the inference network element and the model deployment platform type of the inference network element, and the training network element judges whether the manufacturer type of the training network element is the same as the manufacturer type of the inference network element, and whether the model deployment platform type of the training network element is the same as the model deployment platform type of the inference network element. If the manufacturer types of the training network element and the inference network element are different and the model deployment platform types of the training network element and the inference network element are the same, the following step 303 and subsequent steps are performed; otherwise, the process ends.
  • Alternatively, the manufacturer type and the model deployment platform type of each training network element may be known to (for example, configured on) the inference network element, so the request message in the above step 301 does not need to carry the manufacturer type and the model deployment platform type of the inference network element; instead, it carries indication information indicating whether the manufacturer type of the training network element is the same as the manufacturer type of the inference network element and whether the model deployment platform type of the training network element is the same as the model deployment platform type of the inference network element. The training network element can judge, according to the indication information, whether the manufacturer type of the training network element is the same as the manufacturer type of the inference network element and whether the model deployment platform type of the training network element is the same as the model deployment platform type of the inference network element. If the manufacturer types of the training network element and the inference network element are different and the model deployment platform types of the training network element and the inference network element are the same, the following step 303 and subsequent steps are performed; otherwise, the process ends.
  • in yet another implementation, the manufacturer type and the model deployment platform type of each inference network element can be configured on the training network element in advance, so the request message in the above step 301 does not need to carry the manufacturer type of the inference network element and the type of the model deployment platform of the inference network element, nor the above indication information.
  • in this case, the training network element can judge, according to the local configuration information, whether the manufacturer type of the training network element is the same as the manufacturer type of the inference network element, and whether the type of the model deployment platform of the training network element is the same as the type of the model deployment platform of the inference network element. If the manufacturer types of the training network element and the inference network element are different and the model deployment platforms of the training network element and the inference network element are of the same type, the following step 303 and subsequent steps are performed; otherwise, the process ends.
  • the above-mentioned end of the process refers to the end of the process in the embodiment of FIG. 3 , and other operations may be performed after the end of the process.
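  • A compact sketch of the step-302 decision is given below. The function name and its inputs are assumptions; the decision rule (different manufacturer type, same model deployment platform type) comes from the description above, regardless of whether the inputs are taken from the request message, from an indication information, or from local configuration.

```python
# Sketch of the step-302 check at the training network element (names assumed).
def should_provide_encrypted_model(train_vendor: str, train_platform: str,
                                   infer_vendor: str, infer_platform: str) -> bool:
    """Return True when step 303 and the subsequent steps should be performed."""
    different_vendor = (train_vendor != infer_vendor)
    same_platform = (train_platform == infer_platform)
    return different_vendor and same_platform


# Example: a Huawei training NE serving an Ericsson inference NE, both on TensorFlow.
assert should_provide_encrypted_model("Huawei", "TensorFlow", "Ericsson", "TensorFlow")
# Same vendor, or a different platform type, ends the procedure of this embodiment.
assert not should_provide_encrypted_model("Huawei", "TensorFlow", "Huawei", "TensorFlow")
assert not should_provide_encrypted_model("Huawei", "MindSpore", "Ericsson", "TensorFlow")
```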
  • for example, the training network element can provide the inference network element with an unencrypted model or the address information of an unencrypted model, and the inference network element then obtains an unencrypted analysis result based on the unencrypted model.
  • alternatively, the solution of the embodiment in FIG. 4 below can be adopted, so that the inference network element can still obtain the analysis result.
  • alternatively, the inference network element can provide the data to be analyzed to a third-party network element (such as the second network element); the second network element uses the encrypted model and the data to be analyzed to obtain the encrypted analysis result, then the first network element or the training network element decrypts the encrypted analysis result to obtain the decrypted analysis result, and the decrypted analysis result is sent to the inference network element.
  • in another implementation, the functions of the training network elements may be pre-configured, for example, training network elements 1 to 10 are pre-configured to provide models only for inference network elements whose manufacturer type differs from that of the training network element and whose model deployment platform type is the same.
  • for example, if the training network element 1 receives the request message of the above step 301 from the inference network element, the training network element 1 assumes by default that the manufacturer type of the training network element 1 is different from the manufacturer type of the inference network element and that the type of the platform on which the model of the training network element 1 is deployed is the same as the type of the platform on which the model of the inference network element is deployed. In this implementation, step 302 does not need to be performed.
  • Step 303 the training network element sends a response message to the reasoning network element.
  • the reasoning network element receives the response message.
  • the response message includes the encrypted model or the address information of the encrypted model, where the address information of the encrypted model may be, for example, a uniform resource locator (URL) or a fully qualified domain name (FQDN).
  • the model includes model architecture information and model parameters.
  • the model architecture information includes information such as the number of neural network layers in the model, the connection relationship between layers, and the activation function used by each layer.
  • Model parameters include parameter values for each layer of the neural network.
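  • Purely to make the split between model architecture information and model parameters concrete, a sketch follows. The structure and field names are assumptions, not a format defined by the embodiment; only the distinction between the two parts is from the text.

```python
# Illustrative split of a small model into architecture information and parameters.
model_architecture_info = {
    "num_layers": 2,
    "layers": [
        {"name": "dense_1", "units": 2, "activation": "relu",    "input": "features"},
        {"name": "output",  "units": 1, "activation": "sigmoid", "input": "dense_1"},
    ],
}

model_parameters = {
    # one weight matrix and one bias vector per layer (toy values)
    "dense_1": {"weights": [[0.12, -0.03], [0.45, 0.08]], "bias": [0.0, 0.1]},
    "output":  {"weights": [[0.22], [-0.17]],              "bias": [0.05]},
}

# With the first option below, only model_parameters would be encrypted before delivery;
# with the second option, both parts would be encrypted.
```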
  • in one implementation, the encrypted model in the response message includes unencrypted model architecture information and encrypted model parameters.
  • in another implementation, the encrypted model in the response message includes encrypted model architecture information and encrypted model parameters.
  • the response message also includes the address information of the first network element, or instruction information used to instruct the training network element to decrypt the encrypted analysis result (in this embodiment, this instruction information may also be referred to as the first indication information). The first network element is a third-party network element capable of decrypting analysis results, such as an NWDAF network element, and the first network element may also be called an analysis result decryption network element.
  • the response message further includes indication information for indicating the data type of the input data corresponding to the encrypted model (in this embodiment, the indication information may also be referred to as second indication information).
  • the indication information may be an event identifier (event ID).
  • the data type may be one or more of UE location or QoS Flow parameters.
  • the response message also includes the data format and/or processing parameters corresponding to each data type, and the inference network element preprocesses the input data corresponding to each data type according to the corresponding data format and/or processing parameters to obtain the data to be analyzed.
  • the data format includes one or more of the time window for data reporting (that is, when the data is reported) and the size of the data cache (that is, how long the data is cached before being reported), and the processing parameters include one or more of the maximum value, the minimum value, the average value, or the variance value, as illustrated in the sketch below.
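  • The sketch below illustrates how the inference network element might apply a signalled time window and processing parameters (maximum, minimum, average, variance) to raw collected samples before inference. All names are illustrative assumptions; only the listed data format and processing parameters come from the text.

```python
# Sketch of pre-processing collected input data according to the data format and
# processing parameters provided by the training network element (names assumed).
import statistics


def preprocess(samples, window_start, window_end, processing):
    """samples: (timestamp, value) pairs collected from other network elements."""
    in_window = [value for ts, value in samples if window_start <= ts < window_end]
    ops = {
        "max": max,
        "min": min,
        "average": statistics.fmean,
        "variance": statistics.pvariance,
    }
    return {name: ops[name](in_window) for name in processing if in_window}


# Example: report the average and variance of a measurement over a 60 s window;
# the last sample falls outside the window and is discarded.
raw = [(0.0, 12.0), (10.0, 15.0), (70.0, 99.0)]
print(preprocess(raw, window_start=0.0, window_end=60.0, processing=["average", "variance"]))
```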
  • the response message may also include the above association identifier.
  • the training network element can send the above information to the reasoning network element by calling the Nnwdaf_MLModelProvision_Notify service operation. That is, the response message in step 303 may be the Nnwdaf_MLModelProvision_Notify service operation.
  • the response message in step 303 is also referred to as a first response message.
  • Step 304 the reasoning network element obtains an encrypted analysis result according to the encrypted model.
  • if the first response message includes the address information of the encrypted model, the reasoning network element also needs to obtain the encrypted model according to the address information of the encrypted model.
  • the reasoning network element may download the encrypted model from the address indicated by the address information of the encrypted model according to a file transfer protocol (file transfer protocol, FTP).
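  • Where only the address information of the encrypted model is returned, the inference network element fetches the file itself. Below is a minimal sketch using Python's standard ftplib; the host, path, credentials, and function name are placeholder assumptions, only the use of FTP comes from the text.

```python
# Minimal sketch of downloading the encrypted model from the address indicated in the
# first response message over FTP. Host, credentials, and paths are placeholders.
from ftplib import FTP


def download_encrypted_model(host: str, remote_path: str, local_path: str) -> None:
    with FTP(host) as ftp:
        ftp.login()  # anonymous login here; a real deployment would authenticate
        with open(local_path, "wb") as f:
            ftp.retrbinary(f"RETR {remote_path}", f.write)


# download_encrypted_model("models.example.com", "/models/ue_mobility.enc", "model.enc")
```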
  • the reasoning network element obtains the encrypted analysis result according to the data to be analyzed and the encrypted model, that is, input the data to be analyzed into the encrypted model to obtain the encrypted analysis result.
  • the data to be analyzed is the input data corresponding to the encrypted model, and the data to be analyzed is collected by the inference network element from one or more other network elements (such as UE, SMF, AMF, access network equipment, PCF, UPF, or AF).
  • the reasoning network element obtains the decrypted analysis result according to the encrypted analysis result.
  • Two different implementation methods for the reasoning network element to obtain the decrypted analysis result are introduced below.
  • In implementation method 1, if the response message in step 303 above carries instruction information (that is, the first instruction information) for instructing the training network element to decrypt the encrypted analysis result, then the following steps 305 to 307 are performed after step 304.
  • In implementation method 2, if the response message in step 303 above carries the address information of the first network element, then steps 308 to 310 are performed after step 304.
  • Step 305 the reasoning network element sends a request message to the training network element.
  • the training network element receives the request message.
  • the request message includes analysis type identification information and encrypted analysis results.
  • the request message is used to request decrypted analysis results.
  • the analysis type identification information is the same as the analysis type identification information in step 301 above.
  • the request message may also include the above association identifier.
  • the inference network element can send the identification information of the analysis type and the encrypted analysis result to the training network element by calling the Nnwdaf_AnalyticsDecryption_Request service operation. That is, the request message in step 305 may be the Nnwdaf_AnalyticsDecryption_Request service operation.
  • Step 306 the training network element decrypts the encrypted analysis result to obtain the decrypted analysis result.
  • the encrypted model may be encrypted using one or more of a fully homomorphic encryption algorithm, a random secure average algorithm, or a differential privacy algorithm; the training network element then uses the decryption algorithm corresponding to the encryption algorithm used by the encrypted model to decrypt the encrypted analysis result and obtain the decrypted analysis result.
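  • To make the idea of computing on ciphertexts tangible, the toy below uses the Paillier cryptosystem, which is additively (not fully) homomorphic. It only illustrates the principle that a party without the private key can combine ciphertexts while only the key holder can read the result; it is not the fully homomorphic, secure averaging, or differential privacy machinery named by the embodiment, and the tiny key is deliberately insecure.

```python
# Toy additively homomorphic encryption (Paillier) with an insecure, tiny key,
# for illustration only.
import math
import random

p, q = 293, 433                       # toy primes; never use sizes like this in practice
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)


def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2


def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n


c1, c2 = encrypt(17), encrypt(25)
combined = (c1 * c2) % n2             # homomorphic addition performed on ciphertexts
assert decrypt(combined) == 42        # only the key holder recovers 17 + 25
```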
  • in a possible implementation, when providing the encrypted model, the training network element binds the encryption algorithm used by the encrypted model to the association identifier. In step 306, the training network element can first determine the encryption algorithm corresponding to the encrypted model according to the association identifier in the request message of step 305, then determine the decryption algorithm according to the encryption algorithm, and decrypt the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result, as sketched below.
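  • A sketch of this bookkeeping at the training network element follows; the registry structure and function names are assumptions.

```python
# The training network element binds the encryption algorithm to the association
# identifier at model delivery (step 303) and resolves the matching decryption
# routine when the encrypted analysis result comes back (steps 305-306).
algorithm_by_association_id = {}

decryptors = {
    # placeholder for the real decryption routine of each algorithm
    "fully_homomorphic_v1": lambda encrypted_result: encrypted_result,
}


def bind_on_model_delivery(association_id, algorithm_name):
    algorithm_by_association_id[association_id] = algorithm_name


def decrypt_analysis_result(association_id, encrypted_result):
    algorithm_name = algorithm_by_association_id[association_id]
    return decryptors[algorithm_name](encrypted_result)
```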
  • Step 307 the training network element sends a response message to the reasoning network element.
  • the reasoning network element receives the response message.
  • the response message contains the decrypted analysis result.
  • the training network element can send the decrypted analysis result to the reasoning network element by calling the Nnwdaf_AnalyticsDecryption_Request Response service operation. That is, the response message in step 307 may be the Nnwdaf_AnalyticsDecryption_Request Response service operation.
  • Step 308 the reasoning network element sends a request message to the first network element.
  • the first network element receives the request message.
  • the request message includes analysis type identification information and encrypted analysis results.
  • the request message is used to request decrypted analysis results.
  • the analysis type identification information is the same as the analysis type identification information in step 301 above.
  • the request message may also include the above association identifier.
  • the inference network element may send the identification information of the analysis type and the encrypted analysis result to the first network element by calling the Nnf_AnalyticsDecryption_Request service operation. That is, the request message in step 308 may be an Nnf_AnalyticsDecryption_Request service operation.
  • Step 309 the first network element decrypts the encrypted analysis result to obtain the decrypted analysis result.
  • the encrypted model is encrypted using one or more of the fully homomorphic encryption algorithm, the random secure average algorithm, or the differential privacy algorithm, and the first network element uses the decryption algorithm corresponding to the encryption algorithm used by the encrypted model to decrypt the encrypted analysis result and obtain the decrypted analysis result.
  • in a possible implementation, the training network element also sends the above association identifier and the decryption algorithm corresponding to the encrypted model to the first network element, and the request message in step 308 also carries the association identifier, so that in step 309 the first network element can first determine the decryption algorithm corresponding to the encrypted model according to the association identifier, and then decrypt the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
  • Step 310 the first network element sends a response message to the reasoning network element.
  • the reasoning network element receives the response message.
  • the response message contains the decrypted analysis result.
  • the first network element may send the decrypted analysis result to the reasoning network element by calling the Nnf_AnalyticsDecryption_Response service operation. That is, the response message in step 310 may be the Nnf_AnalyticsDecryption_Response service operation.
  • the inference network element and the training network element are deployed by different manufacturers, but the model deployment platform used by the two is the same.
  • This solution provides a cross-vendor encrypted distribution process for the model, which enhances the ability of the training network element to encrypt and distribute the model. It avoids the risk of the deployment manufacturer of the inference network element stealing the framework and parameters of the model, ensures the security of the model information, and breaks the limitation in existing solutions that a model can only be shared between network elements of the same manufacturer.
  • each training network element can also register its own model information with the data management network element, so that when the address information of the training network element is not configured locally on the inference network element, the inference network element can request the data management network element to find a suitable training network element.
  • the training network element can send a registration request message to the data management network element.
  • the registration request message includes the identification information of the analysis type that the training network element can provide and the model information of the training network element.
  • the model information includes the manufacturer type of the training network element and the type of the model deployment platform of the training network element.
  • the model information further includes indication information for indicating the data type of the input data corresponding to the encrypted model.
  • the model information further includes data formats and/or processing parameters corresponding to each data type.
  • the model information further includes address information of the first network element or instruction information for instructing the training network element to decrypt the encrypted analysis result.
  • the model information also includes address information of the second network element.
  • the second network element may be a trusted third-party network element, specifically, a model deployment and inference network element. The second network element can perform inference on the data to be analyzed according to the model and obtain the analysis result; if the model used is an encrypted model, the second network element performs inference on the data to be analyzed according to the encrypted model and obtains an encrypted analysis result.
  • the first network element in the model information of different training network elements may be the same network element or different network elements.
  • the second network element in the model information of different training network elements may be the same network element, or may be a different network element.
  • in this case, the inference network element may send a request message to the data management network element (in this embodiment, this request message is also referred to as a second request message), where the request message includes the identification information of the analysis type in the above step 301 and is used to request a network element supporting the analysis type; the data management network element then sends a response message to the inference network element (in this embodiment, this response message is also called the second response message), and the response message includes the address information of the training network element described in the above step 301.
  • if the data management network element determines that there are multiple training network elements supporting the above analysis type, the data management network element can provide the address information and model information of the multiple training network elements to the inference network element, and the inference network element selects one training network element from them.
  • An embodiment of the present application (the embodiment of FIG. 4 mentioned above) provides a communication method. In this embodiment, the manufacturer type of the training network element is different from the manufacturer type of the inference network element, and the type of the model deployment platform of the training network element is different from the type of the model deployment platform of the inference network element. The method comprises the following steps:
  • Step 401 the training network element encrypts an existing local model, and sends the encrypted model and identification information of the analysis type corresponding to the encrypted model to the second network element.
  • the type of model deployment platform supported by the second network element is relatively rich, and the type of model deployment platform supported by the second network element in the embodiment of the present application at least includes the type of the model deployment platform of the training network element.
  • for the meaning of the second network element, reference may be made to the foregoing description.
  • the training network element also sends the address information of the first network element to the second network element, and the first network element has the function of decrypting and analyzing the result.
  • the locally existing model of the training network element may be a model trained by the training network element, or may be a model obtained by the training network element from other training network elements.
  • This step 401 is an optional step.
  • other network elements or operators may pre-configure the above information on the second network element, such as one or more of the encrypted model, the identification information of the analysis type corresponding to the encrypted model, and the address information of the first network element.
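  • A sketch of the information pushed to (or pre-configured on) the second network element in step 401 is given below; the field names, addresses, and helper function are illustrative assumptions, while the listed contents follow the text.

```python
# Sketch of the step-401 provisioning of the second network element.
second_nf_model_store = {}

provisioning_message = {
    "analytics_id": "UE_MOBILITY",                                   # analysis type identifier
    "encrypted_model": b"...ciphertext bytes...",                     # model encrypted by the training NE
    "first_nf_address": "http://analytics-decryption.example.net",    # first network element (optional)
}


def provision_second_network_element(store, message):
    # the second network element keeps one encrypted model per analysis type
    store[message["analytics_id"]] = {
        "encrypted_model": message["encrypted_model"],
        "first_nf_address": message.get("first_nf_address"),
    }


provision_second_network_element(second_nf_model_store, provisioning_message)
```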
  • Step 402 the reasoning network element sends a request message to the training network element.
  • the training network element receives the request message.
  • This step 402 is the same as the above step 301, and reference may be made to the foregoing description.
  • Step 403 the training network element sends the encrypted update model and the identification information of the analysis type corresponding to the encrypted update model to the second network element.
  • This step is optional. After the training network element receives the above request message from the inference network element, if it confirms that the local model needs to be further trained, the training network element triggers other network elements to perform data collection and the subsequent model training process, and, after encrypting the updated trained model, resends it to the second network element.
  • Step 404 the training network element determines that the manufacturer types of the training network element and the inference network element are different and that the types of their model deployment platforms are different.
  • This step 404 is an optional step.
  • the implementation method of step 404 and various alternative implementation methods are similar to the description of the foregoing step 302, and reference may be made to the foregoing description.
  • Step 405 the training network element sends a response message to the reasoning network element.
  • the reasoning network element receives the response message.
  • the response message includes the address information of the second network element and indication information used to indicate that the model supporting the above analysis type is rejected (in this embodiment, the indication information is also referred to as first indication information).
  • the response message further includes indication information for indicating the data type of the input data corresponding to the encrypted model (in this embodiment, the indication information may also be referred to as second indication information).
  • the indication information may be an event identifier (event ID).
  • the response message further includes data formats and/or processing parameters corresponding to each data type.
  • the response message also includes a rejection reason value.
  • the rejection reason value indicates that the manufacturer types of the training network element and the inference network element are different and that the types of the model deployment platforms of the inference network element and the training network element are different.
  • the response message includes the association identifier.
  • the training network element can send the above information to the reasoning network element by calling the Nnwdaf_MLModelProvision_Notify service operation. That is, the response message in step 405 may be the Nnwdaf_MLModelProvision_Notify service operation.
  • Step 406 the reasoning network element sends a request message to the second network element according to the address information of the second network element.
  • the second network element receives the request message.
  • the request message includes the identification information of the data to be analyzed and the analysis type, and the request message is used to request to analyze the data to be analyzed.
  • the identification information of the analysis type is the same as the identification information of the analysis type in step 402 above.
  • the data to be analyzed is the input data corresponding to the encrypted model, and the data to be analyzed is collected by the inference network element from one or more other network elements (such as UE, SMF, AMF, access network equipment, PCF, UPF, or AF).
  • the request message may also include the above association identifier.
  • the reasoning network element can send the above information to the second network element by calling the Nnf_AnalyticsInfo_Request service operation. That is, the request message in step 406 may be an Nnf_AnalyticsInfo_Request service operation.
  • Step 407 the second network element obtains an encrypted analysis result according to the encrypted model.
  • the second network element uses the locally deployed encrypted model and the data to be analyzed received from the reasoning network element to calculate and obtain an encrypted analysis result.
  • the encrypted model locally deployed on the second network element is obtained from the training network element or from other network elements, or is pre-configured by the operator.
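  • Below is a sketch of steps 406-407 as seen by the second network element: it receives the data to be analyzed together with the analysis type identifier, looks up the locally deployed encrypted model provisioned in step 401, and produces an encrypted analysis result. The function names are assumptions, and the inference routine is deliberately left abstract because it depends on the model deployment platform.

```python
# Sketch of steps 406-407 at the second network element (names assumed).
def handle_analytics_info_request(store, request):
    entry = store[request["analytics_id"]]
    encrypted_result = run_encrypted_inference(entry["encrypted_model"],
                                               request["data_to_be_analyzed"])
    return {
        "analytics_id": request["analytics_id"],
        "association_id": request.get("association_id"),
        "encrypted_analysis_result": encrypted_result,
    }


def run_encrypted_inference(encrypted_model, data_to_be_analyzed):
    # platform-specific evaluation of the encrypted model; left abstract in this sketch
    raise NotImplementedError
```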
  • the following steps 408 to 410 may be performed, or the following steps 411 to 413 may be performed.
  • Step 408 the second network element sends a request message to the training network element.
  • the training network element receives the request message.
  • the request message contains the identification information of the analysis type, the encrypted analysis result and the address information of the inference network element.
  • the request message is used to request the decrypted analysis result and send the decrypted analysis result to the inference network element.
  • the identification information of the analysis type is the same as the identification information of the analysis type in step 402 above.
  • the request message may also include the above association identifier.
  • the second network element may send the identification information of the analysis type, the encrypted analysis result and the address information of the reasoning network element to the training network element by calling the Nnwdaf_AnalyticsDecryption_Request service operation. That is, the request message in step 408 may be the Nnwdaf_AnalyticsDecryption_Request service operation.
  • Step 409 the training network element decrypts the encrypted analysis result to obtain the decrypted analysis result.
  • This step 409 is the same as the above step 306, and reference may be made to the foregoing description.
  • Step 410 the training network element sends the decrypted analysis result to the reasoning network element.
  • the reasoning network element receives the decrypted analysis result.
  • the training network element can send the decrypted analysis result to the reasoning network element by calling the Nnwdaf_AnalyticsDecryption_Request Response service operation.
  • Step 411 the second network element sends a request message to the first network element.
  • the first network element receives the request message.
  • the request message contains the identification information of the analysis type, the encrypted analysis result and the address information of the reasoning network element.
  • the request message is used to request the decrypted analysis result and send the decrypted analysis result to the reasoning network element.
  • the identification information of the analysis type is the same as the identification information of the analysis type in step 402 above.
  • the second network element may obtain the address information of the first network element through the above step 401 .
  • the request message may also include the above association identifier.
  • Step 412 the first network element decrypts the encrypted analysis result to obtain the decrypted analysis result.
  • This step 412 is the same as the above step 309, and reference may be made to the foregoing description.
  • Step 413 the first network element sends the decrypted analysis result to the reasoning network element.
  • the reasoning network element receives the decrypted analysis result.
  • the first network element can send the decrypted analysis result to the reasoning network element by calling the Nnwdaf_AnalyticsDecryption_Request Response service operation.
  • the inference network element and the training network element are deployed by different manufacturers, and the model deployment platforms used by the two are also different.
  • This solution provides a cross-vendor encrypted distribution process for the model and enhances the ability of the training network element to distribute the model in encrypted form, avoiding the risk of the deployment manufacturer of the inference network element stealing the model's framework and parameters, ensuring the security of the model information, and breaking the limitation in existing solutions that a model can only be shared between network elements of the same manufacturer.
  • FIG. 5 shows a communication method provided by an embodiment of the present application.
  • the method includes the following steps:
  • Step 501 the training network element sends a registration request message to the data management network element.
  • the data management network element receives the registration request message.
  • the registration request message includes identification information of the analysis type and model information, where the model information includes the manufacturer type, the type of the model deployment platform, the address information of the second network element, and also includes the address information of the first network element or instruction information for indicating that the training network element decrypts the encrypted analysis result.
  • the registration request message may also include the version of the model deployment platform.
  • the registration request message further includes indication information for indicating the data type of the input data corresponding to the encrypted model.
  • the indication information may be an event identifier (event ID).
  • the registration request message also includes the data format and/or processing parameters corresponding to each data type.
  • the meanings of the first network element and the second network element can refer to the foregoing description, and will not be repeated here.
  • the training network element can request registration from the data management network element by calling the Nnrf_NFManagement_NFRegister Request service operation. That is, the registration request message in step 501 may be a Nnrf_NFManagement_NFRegister Request service operation.
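  • For illustration, a sketch of the model information that the training network element might register in step 501 follows. The field names and example values are assumptions; the listed elements (analysis type, manufacturer type, platform type and version, second network element address, first network element address or decryption indication, input data event identifiers) follow the text above.

```python
# Sketch of the step-501 registration payload sent to the data management network element.
registration_request = {
    "analytics_ids": ["UE_MOBILITY"],
    "model_info": {
        "vendor_type": "Huawei",
        "platform_type": "TensorFlow",
        "platform_version": "V2.1",
        "second_nf_address": "http://model-host.example.net",
        # either the address of the first network element, or an indication that the
        # training network element itself decrypts the encrypted analysis result:
        "first_nf_address": "http://analytics-decryption.example.net",
        "input_data_event_ids": ["UE_LOCATION", "QOS_FLOW"],
    },
}
```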
  • Step 502 the data management network element sends a registration response message to the training network element.
  • the training network element receives the registration response message.
  • the data management network element can return a response to the registration request message to the training network element by calling the Nnrf_NFManagement_NFRegister Response service operation. That is, the registration response message in step 502 may be an Nnrf_NFManagement_NFRegister Response service operation.
  • Step 503 the training network element sends an update request message to the data management network element.
  • the data management network element receives the update request message.
  • the training network element may send an update request message to the data management network element to re-register the updated model information to the data management network element.
  • the information carried in the update request message is similar to the information carried in the registration request message in step 501 above, and reference may be made to the foregoing description.
  • the training network element can request registration update from the data management network element by calling the Nnrf_NFManagement_NFUpdateRequest service operation.
  • Step 504 the data management network element sends an update response message to the training network element.
  • the training network element receives the update response message.
  • the data management network element can return a response to the update request message to the training network element by calling the Nnrf_NFManagement_NFUpdate Response service operation.
  • step 503 to step 504 are optional steps.
  • Step 505 the reasoning network element sends a request message to the data management network element.
  • the data management network element receives the request message.
  • the request message includes identification information of the analysis type.
  • the request message also includes the manufacturer type of the inference network element and the type of the model deployment platform.
  • the request message is used to request to obtain a network element that supports the analysis type, specifically, to request to obtain a training network element or a third-party network element that supports the analysis type.
  • the inference network element can request the data management network element to discover available training network elements or third-party network elements by calling the Nnrf_NFDiscovery_Request service operation. That is, the request message in step 505 may be a Nnrf_NFDiscovery_Request service operation.
  • Step 506 the data management network element sends a response message to the reasoning network element.
  • the reasoning network element receives the response message.
  • the response message contains at least one set of information, where each set of information includes the address information of at least one candidate training network element and the model information of the candidate training network element, and the candidate training network element supports the analysis type identified in the request message in the above step 505; the content contained in the model information can refer to the description of the foregoing step 501.
  • the address information of the first network element in the model information of different candidate training network elements may be the same or different, and the address information of the second network element in the model information of different candidate training network elements may also be the same or different.
  • the data management network element can respond to the network element discovery request of the reasoning network element by calling the Nnrf_NFDiscovery_Request Response service operation. That is, the response message in step 506 may be the Nnrf_NFDiscovery_Request Response service operation.
  • Step 507 the inference network element selects the training network element or the second network element.
  • In a possible implementation, the inference network element selects the training network element or the second network element in the following order.
  • If, among the at least one candidate training network element corresponding to the at least one set of information, there are one or more candidate training network elements whose manufacturer type is different from that of the inference network element and whose model deployment platform type is the same as that of the inference network element, the inference network element selects one of these candidate training network elements as the training network element, for example, randomly or according to a predetermined rule.
  • Otherwise, the inference network element selects a second network element based on the model information of the at least one candidate training network element corresponding to the at least one set of information. For example, if the address of the second network element in the model information of the at least one candidate training network element is the same, that address of the second network element is selected. For another example, if the addresses of the second network element in the model information of the at least one candidate training network element are not completely the same, one may be selected randomly or according to a predetermined rule.
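  • A sketch of this step-507 selection is given below. It encodes only the rule stated above (prefer a candidate training network element whose manufacturer type differs and whose model deployment platform type matches; otherwise fall back to a second network element taken from the candidates' model information); the field names and the random tie-breaking are assumptions.

```python
# Sketch of the step-507 selection at the inference network element (names assumed).
import random


def select_target(candidates, my_vendor_type, my_platform_type):
    usable = [c for c in candidates
              if c["vendor_type"] != my_vendor_type
              and c["platform_type"] == my_platform_type]
    if usable:
        # the procedure of steps 301-310 follows, towards the selected training NE
        return {"role": "training_nf", "address": random.choice(usable)["address"]}
    # the procedure of steps 406-413 follows, towards a second network element
    second_nf_addresses = sorted({c["second_nf_address"] for c in candidates})
    return {"role": "second_nf", "address": random.choice(second_nf_addresses)}
```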
  • when the training network element is selected, the above step 301 to step 307 may be performed after step 507, or the above step 301 to step 304 and step 308 to step 310 may be performed.
  • when the second network element is selected, step 406 to step 410 may be performed after step 507, or the above step 406 to step 407 and step 411 to step 413 may be performed.
  • the training network element first registers/updates the identification information of the supported analysis type and the corresponding model information to the data management network element, and then the reasoning network element finds available data from the data management network element. training network elements or third-party network elements. Inference NEs and training NEs are deployed by different manufacturers, and the types of model deployment platforms used by the two are the same or different.
  • This solution provides a process for cross-vendor encrypted distribution of models, which enhances the ability of training network elements to encrypt and distribute models, avoids the risk of information such as the framework and parameters of the model being stolen by the deployment manufacturer of the inference network element, ensures the security of the model information, and breaks the limitation in existing solutions that a model can only be shared between network elements of the same manufacturer.
  • the data management network element in the embodiments of the present application is only an example; as a possible implementation method, the role played by the data management network element in the embodiments of the present application may be performed by another network element (such as a model management network element).
  • the inference network element, the training network element, the first network element, and the second network element include corresponding hardware structures and/or software modules for performing respective functions.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software with reference to the units and method steps of the examples described in the embodiments disclosed in the present application. Whether a certain function is executed by hardware or by computer software driving the hardware depends on the specific application scenario and design constraints of the technical solution.
  • FIG. 6 and FIG. 7 are schematic structural diagrams of possible communication devices provided by the embodiments of the present application. These communication devices can be used to realize the functions of the reasoning network element, the training network element, the first network element, or the second network element in the above method embodiments, and thus can also realize the beneficial effects of the above method embodiments.
  • the communication device may be an inference network element, a training network element, a first network element, or a second network element, or may be a module (such as a chip) applied to the inference network element, the training network element, the first network element, or the second network element.
  • a communication device 600 includes a processing unit 610 and a transceiver unit 620 .
  • the communication device 600 is configured to implement functions of the inference network element, the training network element, the first network element, or the second network element in the foregoing method embodiments.
  • When the communication device is an inference network element or a module (such as a chip) applied to the inference network element: the transceiver unit 620 is configured to send a first request message to the training network element, where the first request message includes identification information of the analysis type, the first request message is used to request a model supporting the analysis type, the manufacturer type of the training network element is different from that of the inference network element, and the types of the model deployment platforms of the inference network element and the training network element are the same; and to receive a first response message from the training network element, where the first response message includes the encrypted model or the address information of the encrypted model, and the encrypted model supports the analysis type;
  • the processing unit 610 is configured to obtain an encrypted analysis result according to the encrypted model, and to obtain a decrypted analysis result according to the encrypted analysis result.
  • the transceiver unit 620 is configured to send the encrypted analysis result to the training network element; and receive the decrypted analysis result from the training network element.
  • the first response message further includes first indication information, where the first indication information indicates that the training network element decrypts the encrypted analysis result.
  • the transceiving unit 620 is configured to send the encrypted analysis result and the association identifier to the training network element, and the association identifier is used for the training network element to determine the encryption algorithm corresponding to the encrypted model.
  • the first response message further includes the address information of the first network element; the processing unit 610 is configured to send the encrypted analysis result to the first network element through the transceiver unit 620 according to the address information of the first network element, and to receive the decrypted analysis result from the first network element.
  • the processing unit 610 is configured to send the encrypted analysis result and the association identifier to the first network element through the transceiver unit 620 according to the address information of the first network element, and the association identifier is used for The first network element determines an encryption algorithm corresponding to the encrypted model.
  • the first response message further includes second indication information, where the second indication information is used to indicate the data type of the input data corresponding to the encrypted model.
  • the first request message further includes a manufacturer type of the inference network element and a model deployment platform type of the inference network element.
  • the transceiver unit 620 is configured to send a second request message to the data management network element before sending the first request message to the training network element, where the second request message includes identification information of the analysis type , the second request message is used to request a network element supporting the analysis type; and receive a second response message from the data management network element, where the second response message includes address information of the training network element.
  • When the communication device is a training network element or a module (such as a chip) applied to the training network element: the transceiver unit 620 is configured to receive a first request message from an inference network element, where the first request message includes identification information of the analysis type, the first request message is used to request a model supporting the analysis type, the manufacturer type of the training network element is different from that of the inference network element, and the types of the model deployment platforms of the inference network element and the training network element are the same; to send a first response message to the inference network element, where the first response message includes the encrypted model or the address information of the encrypted model; and to receive the encrypted analysis result from the inference network element, where the encrypted analysis result is obtained according to the encrypted model; the processing unit 610 is configured to decrypt the encrypted analysis result to obtain the decrypted analysis result; the transceiver unit 620 is further configured to send the decrypted analysis result to the inference network element.
  • the first request message also includes the manufacturer type of the inference network element and the model deployment platform type of the inference network element; the processing unit 610 is configured to, before the transceiver unit 620 sends the first response message to the inference network element, determine that the manufacturer type of the training network element is different from that of the inference network element and that the types of the model deployment platforms of the inference network element and the training network element are the same.
  • the first response message further includes first indication information, where the first indication information indicates that the training network element decrypts the encrypted analysis result.
  • the first response message further includes second indication information, where the second indication information is used to indicate the data type of the input data corresponding to the encrypted model.
  • the transceiver unit 620 is configured to send a registration request message to the data management network element before receiving the first request message from the reasoning network element, where the registration request message includes the identification information of the analysis type and the Model information of the training network element, where the model information includes the manufacturer type of the training network element and the type of the model deployment platform of the training network element.
  • the transceiver unit 620 is configured to receive the encrypted analysis result and the association identifier from the inference network element; the processing unit 610 is configured to determine, according to the association identifier, the encryption algorithm corresponding to the encrypted model; determine a decryption algorithm according to the encryption algorithm; and decrypt the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
  • In another implementation, when the communication device is an inference network element or a module (such as a chip) applied to the inference network element: the transceiver unit 620 is configured to send a request message to the training network element, where the request message includes identification information of the analysis type, the request message is used to request a model supporting the analysis type, the manufacturer type of the training network element is different from that of the inference network element, and the types of the model deployment platforms of the inference network element and the training network element are different; and to receive a response message from the training network element, where the response message includes first indication information and the address information of the second network element, the first indication information indicates that the request for the model supporting the analysis type is rejected, and the types of model deployment platforms supported by the second network element include the type of the model deployment platform of the training network element;
  • the processing unit 610 is configured to send the data to be analyzed to the second network element through the transceiver unit 620 according to the address information of the second network element, where the data to be analyzed is used by the second network element to generate an encrypted analysis result according to the encrypted model.
  • the response message further includes a rejection reason value, where the rejection reason value indicates that the manufacturer types of the training network element and the inference network element are different and that the types of the model deployment platforms of the inference network element and the training network element are different.
  • the request message also includes the manufacturer type of the inference network element and the model deployment platform type of the inference network element.
  • the response message further includes second indication information, where the second indication information is used to indicate the data type of the input data corresponding to the encrypted model.
  • the processing unit 610 is configured to send the data to be analyzed and the association identifier to the second network element through the transceiver unit 620 according to the address information of the second network element, where the association identifier is used by the first network element or the training network element to determine the encryption algorithm corresponding to the encrypted model.
  • In another implementation, when the communication device is a training network element or a module (such as a chip) applied to the training network element: the transceiver unit 620 is configured to receive a request message from an inference network element, where the request message includes identification information of the analysis type and is used to request a model supporting the analysis type, the manufacturer type of the training network element is different from that of the inference network element, and the types of the model deployment platforms of the inference network element and the training network element are different; to send a response message to the inference network element, where the response message includes first indication information and the address information of the second network element, the first indication information indicates that the request for the model supporting the analysis type is rejected, and the types of model deployment platforms supported by the second network element include the type of the model deployment platform of the training network element; and to receive the encrypted analysis result from the second network element, where the encrypted analysis result is obtained by the second network element according to the data to be analyzed of the inference network element and the encrypted model corresponding to the analysis type; the processing unit 610 is configured to decrypt the encrypted analysis result to obtain the decrypted analysis result.
  • the request message also includes the manufacturer type of the inference network element and the model deployment platform type of the inference network element; the processing unit 610 is configured to, before the transceiver unit 620 sends the response message to the inference network element, determine that the manufacturer types of the training network element and the inference network element are different and that the types of the model deployment platforms of the inference network element and the training network element are different.
  • the response message further includes a rejection reason value, where the rejection reason value indicates that the manufacturer types of the training network element and the inference network element are different and that the types of the model deployment platforms of the inference network element and the training network element are different.
  • the transceiving unit 620 is configured to send the identification information of the analysis type and the encrypted model corresponding to the analysis type to the second network element before receiving the request message from the reasoning network element.
  • the response message further includes second indication information, where the second indication information is used to indicate the data type of the input data corresponding to the encrypted model.
  • the transceiver unit 620 is configured to receive the encrypted analysis result and the association identifier from the second network element; the processing unit 610 is configured to determine, according to the association identifier, the encryption algorithm corresponding to the encrypted model; determine a decryption algorithm according to the encryption algorithm; and decrypt the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
  • When the communication device is the first network element or a module (such as a chip) applied to the first network element: the transceiver unit 620 is configured to receive the encrypted analysis result; the processing unit 610 is configured to decrypt the encrypted analysis result to obtain the decrypted analysis result; the transceiver unit 620 is further configured to send the decrypted analysis result to the inference network element.
  • the transceiving unit 620 is configured to receive the encrypted analysis result from the reasoning network element.
  • the transceiver unit 620 is configured to receive the encrypted analysis result and the address information of the inference network element from the second network element; the processing unit 610 is configured to send the decrypted analysis result to the inference network element through the transceiver unit 620 according to the address information of the inference network element.
  • the transceiver unit 620 is configured to receive, before receiving the encrypted analysis result, the association identifier from the training network element and the identifier of the decryption algorithm corresponding to the association identifier, and to receive the encrypted analysis result and the association identifier; the processing unit 610 is configured to determine the decryption algorithm according to the association identifier, and to decrypt the encrypted analysis result according to the decryption algorithm to obtain the decrypted analysis result.
  • When the communication device is the second network element or a module (such as a chip) applied to the second network element: the transceiver unit 620 is configured to receive, from the training network element, identification information of the analysis type and the encrypted model supporting the analysis type, where the types of model deployment platforms supported by the second network element include the type of the model deployment platform of the training network element, and to receive the data to be analyzed from the inference network element; the processing unit 610 is configured to obtain the encrypted analysis result according to the encrypted model and the data to be analyzed; the transceiver unit 620 is further configured to send the encrypted analysis result and the address information of the inference network element to the training network element or the first network element, where the address information of the inference network element is used for sending the decrypted analysis result to the inference network element, and the decrypted analysis result is obtained by the training network element or the first network element according to the encrypted analysis result.
  • In yet another implementation, when the communication device is an inference network element or a module (such as a chip) applied to the inference network element: the transceiver unit 620 is configured to send a request message to the data management network element, where the request message includes the identification information of the analysis type and is used to request a network element supporting the analysis type, and to receive a response message from the data management network element, where the response message includes at least one set of information, each set of information includes the address information of a candidate training network element and the model information of the candidate training network element, the candidate training network element supports the analysis type, and the model information of the candidate training network element includes the manufacturer type of the candidate training network element and the model deployment platform type of the candidate training network element;
  • the processing unit 610 is configured to, when the at least one candidate training network element corresponding to the at least one set of information includes one or more candidate training network elements whose manufacturer type is different from that of the inference network element and whose model deployment platform type is the same, select a candidate training network element from the one or more candidate training network elements as the training network element.
  • the processing unit 610 is configured to, when the at least one candidate training network element corresponding to the at least one set of information does not include a candidate training network element whose manufacturer type is different from that of the inference network element and whose model deployment platform type is the same, determine the address information of the second network element according to the at least one set of information.
  • the model information of the candidate training network element includes the address information of the second network element; the processing unit 610 is configured to obtain the address information of the second network element from the model information of the candidate training network element.
  • More detailed descriptions of the processing unit 610 and the transceiver unit 620 can be obtained directly by referring to the related descriptions in the above method embodiments, and details are not repeated here.
  • the communication device 700 includes a processor 710 , and as a possible implementation method, the communication device 700 further includes an interface circuit 720 .
  • the processor 710 and the interface circuit 720 are coupled to each other. It can be understood that the interface circuit 720 may be a transceiver or an input-output interface.
  • the communication device 700 may further include a memory 730 for storing instructions executed by the processor 710 or storing input data required by the processor 710 to execute the instructions or storing data generated after the processor 710 executes the instructions.
  • the processor 710 is used to implement the functions of the processing unit 610
  • the interface circuit 720 is used to implement the functions of the transceiver unit 620 .
  • processor in the embodiments of the present application may be a central processing unit (central processing unit, CPU), and may also be other general processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), field programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, transistor logic devices, hardware components or any combination thereof.
  • a general-purpose processor can be a microprocessor, or any conventional processor.
  • the method steps in the embodiments of the present application may be implemented by means of hardware, or may be implemented by means of a processor executing software instructions.
  • Software instructions can be composed of corresponding software modules, and software modules can be stored in random access memory, flash memory, read-only memory, programmable read-only memory, erasable programmable read-only memory, electrically erasable programmable read-only Memory, registers, hard disk, removable hard disk, CD-ROM or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may also be a component of the processor.
  • the processor and storage medium can be located in the ASIC.
  • the ASIC may be located in the access network device or the terminal device.
  • the processor and the storage medium may also exist in the access network device or the terminal device as discrete components.
  • in the above embodiments, the described functions may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • when software is used for implementation, the functions may be implemented in whole or in part in the form of a computer program product.
  • the computer program product comprises one or more computer programs or instructions. When the computer program or instructions are loaded and executed on the computer, the processes or functions described in the embodiments of the present application are executed in whole or in part.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, an access network device, a terminal device or other programmable devices.
  • the computer program or instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer program or instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center in a wired or wireless manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media.
  • the available medium may be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; it may also be an optical medium, such as a digital video disk; and it may also be a semiconductor medium, such as a solid state disk.
  • the computer readable storage medium may be a volatile or a nonvolatile storage medium, or may include both volatile and nonvolatile types of storage media.
  • “at least one” means one or more, and “multiple” means two or more.
  • “And/or” describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B can mean: A exists alone, both A and B exist, or B exists alone, where A and B can be singular or plural.
  • in the text of this application, the character “/” generally indicates that the associated objects before and after it are in an “or” relationship; in the formulas of this application, the character “/” indicates that the associated objects before and after it are in a “division” relationship.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present application provide a communication method, a communication apparatus and a communication system. The method includes: an inference network element sends a first request message to a training network element, where the first request message includes identification information of an analysis type, the vendor types of the training network element and the inference network element are different, and the types of their model deployment platforms are the same; the inference network element receives a first response message from the training network element, where the first response message includes an encrypted model or address information of the encrypted model; obtains an encrypted analysis result according to the encrypted model; and obtains a decrypted analysis result according to the encrypted analysis result. With this solution, the inference network element and the training network element can be deployed by different vendors, which breaks the restriction in existing solutions that a model can only be deployed by the same vendor.

Description

一种通信方法、通信装置及通信***
相关申请的交叉引用
本申请要求在2021年09月03日提交中国专利局、申请号为202111030657.9、申请名称为“一种通信方法、通信装置及通信***”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及通信技术领域,尤其涉及一种通信方法、通信装置及通信***。
背景技术
训练网元可以训练模型,并将训练好的模型提供给推理网元,推理网元将待分析的数据输入到模型中进行推理,得到分析结果。
为支持推理网元选择合适的训练网元,目前一般是在推理网元的本地配置一个或多个训练网元的地址信息以及每个训练网元支持的分析类型的标识信息,推理网元可以根据待分析的数据对应的分析类型,从该一个或多个训练网元中选择一个可以提供模型的训练网元。并且,推理网元与每个训练网元的厂商相同,且使用的模型部署平台也相同。
然而,将推理网元和训练网元限定在同厂商,无法实现模型跨厂商共享。
发明内容
本申请实施例提供一种通信方法、通信装置及通信***,用以实现模型跨厂商共享。
第一方面,本申请实施例提供一种通信方法,该方法可以由推理网元或应用于推理网元中的模块(如芯片)来执行。以推理网元执行该通信方法为例,该方法包括:推理网元向训练网元发送第一请求消息,该第一请求消息包括分析类型的标识信息,该第一请求消息用于请求支持该分析类型的模型,该训练网元与该推理网元的厂商类型不同,该推理网元和该训练网元的模型部署平台的类型相同;该推理网元接收来自该训练网元的第一响应消息,该第一响应消息包括加密的模型或者该加密的模型的地址信息,该加密的模型支持该分析类型;该推理网元根据该加密的模型,得到加密的分析结果;该推理网元根据该加密的分析结果,获取解密的分析结果。
该方案,推理网元和训练网元由不同厂商部署,但二者使用的模型部署平台相同,打破了现有解决方案中模型只能同厂商共享的限制。该方案提供了模型跨厂商加密分发的流程,增强了训练网元将模型加密分发的能力,规避了推理网元的部署厂商窃取模型的框架和参数等信息的风险。
在一种可能的实现方法中,该推理网元向该训练网元发送该加密的分析结果;该推理网元接收来自该训练网元的该解密的分析结果。
该方案,由于训练网元是模型的加密网元,因此由训练网元对加密的分析结果进行解密,可以实现加密的分析结果的精确解密。
在一种可能的实现方法中,该第一响应消息中还包括第一指示信息,该第一指示信息指示由该训练网元对该加密的分析结果进行解密。
该方案,推理网元根据该第一指示信息,可以准确获知对加密的分析结果进行加密的网元是训练网元。
在一种可能的实现方法中,该推理网元向该训练网元发送该加密的分析结果和关联标识,该关联标识用于该训练网元确定该加密的模型对应的加密算法。
该方案,通过关联标识,使得训练网元可以准确获取加密的模型对应的加密算法,进而准确获知对加密的分析进行解密所要使用的解密算法,可以提升解密的效率。
在一种可能的实现方法中,该第一响应消息中还包括第一网元的地址信息;该推理网元根据该第一网元的地址信息,向该第一网元发送该加密的分析结果;该推理网元接收来自该第一网元的该解密的分析结果。
该方案,当训练网元无法对加密的分析结果进行解密时,可以由第一网元对加密的分析结果进行解密,从而可以保证推理网元可以获得解密的分析结果。
在一种可能的实现方法中,该推理网元根据该第一网元的地址信息,向该第一网元发送该加密的分析结果和关联标识,该关联标识用于该第一网元确定该加密的模型对应的加密算法。
该方案,通过关联标识,使得第一网元可以准确获取加密的模型对应的加密算法,进而准确获知对加密的分析进行解密所要使用的解密算法,可以提升解密的效率。
在一种可能的实现方法中,该第一响应消息中还包括第二指示信息,该第二指示信息用于指示该加密的模型对应的输入数据的数据类型。
该方案,通过该第二指示信息,使得推理网元对输入数据进行相应预处理,得到符合要求的待分析的数据,可以提升数据推理的效率。
在一种可能的实现方法中,该第一请求消息中还包含该推理网元的厂商类型和该推理网元的模型部署平台的类型。
该方案,通过在第一请求消息中携带推理网元的厂商类型和推理网元的模型部署平台的类型,使得训练网元可以判断训练网元与推理网元的厂商类型是否相同,以及判断训练网元与推理网元的模型部署平台的类型是否相同,从而便于训练网元选择合适的方法为推理网元提供数据推理功能,可以提升数据推理的效率。
在一种可能的实现方法中,该推理网元向训练网元发送第一请求消息之前,向数据管理网元发送第二请求消息,该第二请求消息包括该分析类型的标识信息,该第二请求消息用于请求支持该分析类型的网元;该推理网元接收来自该数据管理网元的第二响应消息,该第二响应消息包括该训练网元的地址信息。
该方案,推理网元可以从数据管理网元请求发现训练网元,可以实现准确发现能够提供模型的训练网元。
第二方面,本申请实施例提供一种通信方法,该方法可以由训练网元或应用于训练网元中的模块(如芯片)来执行。以训练网元执行该通信方法为例,该方法包括:训练网元接收来自推理网元的第一请求消息,该第一请求消息包括分析类型的标识信息,该第一请求消息用于请求支持该分析类型的模型,该训练网元与该推理网元的厂商类型不同,该推理网元和该训练网元的模型部署平台的类型相同;该训练网元向该推理网元发送第一响应消息,该第一响应消息包括加密的模型或者该加密的模型的地址信息;该训练网元接收来 自该推理网元的加密的分析结果,该加密的分析结果是根据该加密的模型得到的;该训练网元对该加密的分析结果进行解密,得到解密的分析结果;该训练网元向该推理网元发送该解密的分析结果。
该方案,推理网元和训练网元由不同厂商部署,但二者使用的模型部署平台相同,打破了现有解决方案中模型只能同厂商共享的限制。该方案提供了模型跨厂商加密分发的流程,增强了训练网元将模型加密分发的能力,规避了推理网元的部署厂商窃取模型的框架和参数等信息的风险。
在一种可能的实现方法中,该第一请求消息中还包括该推理网元的厂商类型和该推理网元的模型部署平台的类型;该训练网元向该推理网元发送第一响应消息之前,确定该训练网元与该推理网元的厂商类型不同,且该推理网元和该训练网元的模型部署平台的类型相同。
该方案,通过在第一请求消息中携带推理网元的厂商类型和推理网元的模型部署平台的类型,训练网元可以判断训练网元与推理网元的厂商类型是否相同,以及判断训练网元与推理网元的模型部署平台的类型是否相同,从而便于训练网元选择合适的方法为推理网元提供数据推理功能,可以提升数据推理的效率。
在一种可能的实现方法中,该第一响应消息中还包括第一指示信息,该第一指示信息指示由该训练网元对该加密的分析结果进行解密。
该方案,推理网元根据该第一指示信息,可以准确获知对加密的分析结果进行加密的网元是训练网元。
在一种可能的实现方法中,该第一响应消息中还包括第二指示信息,该第二指示信息用于指示该加密的模型对应的输入数据的数据类型。
该方案,通过该第二指示信息,使得推理网元对输入数据进行相应预处理,得到符合要求的待分析的数据,可以提升数据推理的效率。
在一种可能的实现方法中,该训练网元接收来自推理网元的第一请求消息之前,向数据管理网元发送注册请求消息,该注册请求消息包括该分析类型的标识信息和该训练网元的模型信息,该模型信息包括该训练网元的厂商类型和该训练网元的模型部署平台的类型。
在一种可能的实现方法中,该注册请求消息中的模型信息还包括上述第二指示信息。
在一种可能的实现方法中,该注册请求消息中的模型信息还包括第二网元的标识信息。
在一种可能的实现方法中,该注册请求消息中的模型信息还包括第一网元的标识信息。
在一种可能的实现方法中,该训练网元接收来自该推理网元的该加密的分析结果和关联标识;该训练网元根据该关联标识,确定该加密的模型对应的加密算法;该训练网元根据该加密算法,确定解密算法;该训练网元根据该解密算法对该加密的分析结果进行解密,得到该解密的分析结果。
该方案,通过关联标识,训练网元可以准确获取加密的模型对应的加密算法,进而准确获知对加密的分析进行解密所要使用的解密算法,可以提升解密的效率。
第三方面,本申请实施例提供一种通信方法,该方法可以由推理网元或应用于推理网元中的模块(如芯片)来执行。以推理网元执行该通信方法为例,该方法包括:推理网元向训练网元发送请求消息,该请求消息包括分析类型的标识信息,该请求消息用于请求支持该分析类型的模型,该训练网元与该推理网元的厂商类型不同,该推理网元和该训练网元的模型部署平台的类型不同;该推理网元接收来自该训练网元的响应消息,该响应消息 包括第一指示信息和第二网元的地址信息,该第一指示信息指示拒绝请求支持该分析类型的模型,该第二网元支持的模型部署平台的类型包括该训练网元的模型部署平台的类型;该推理网元根据该第二网元的地址信息,向该第二网元发送待分析的数据,该待分析的数据用于该第二网元根据该分析类型对应的加密的模型生成加密的分析结果;该推理网元接收来自该训练网元或者第一网元的解密的分析结果,该解密的分析结果是该训练网元或该第一网元根据该加密的分析结果得到的。
该方案,推理网元和训练网元由不同厂商部署,且二者使用的模型部署平台不同,打破了现有解决方案中模型只能同厂商共享的限制。该方案提供了模型跨厂商加密分发的流程,增强了训练网元将模型加密分发的能力,规避了推理网元的部署厂商窃取模型的框架和参数等信息的风险。
在一种可能的实现方法中,该响应消息还包括拒绝原因值,该拒绝原因值为该训练网元与该推理网元的厂商类型不同,且该推理网元和该训练网元的模型部署平台的类型不同。
该方案,通过拒绝原因值,可以告知推理网元被拒绝的原因,从而推理网元不再向训练网元发送用于请求支持分析类型的模型,可以减少推理的开销。
在一种可能的实现方法中,该请求消息中还包含该推理网元的厂商类型和该推理网元的模型部署平台的类型。
该方案,通过在请求消息中携带推理网元的厂商类型和推理网元的模型部署平台的类型,使得训练网元可以判断训练网元与推理网元的厂商类型是否相同,以及判断训练网元与推理网元的模型部署平台的类型是否相同,从而便于训练网元选择合适的方法为推理网元提供数据推理功能,可以提升数据推理的效率。
在一种可能的实现方法中,该响应消息中还包括第二指示信息,该第二指示信息用于指示该加密的模型对应的输入数据的数据类型。
该方案,通过该第二指示信息,使得推理网元对输入数据进行相应预处理,得到符合要求的待分析的数据,可以提升数据推理的效率。
在一种可能的实现方法中,该推理网元根据该第二网元的地址信息,向该第二网元发送待分析的数据和关联标识,该关联标识用于该第一网元或该训练网元确定该加密的模型对应的加密算法。
该方案,通过关联标识,使得训练网元或第一网元可以准确获取加密的模型对应的加密算法,进而准确获知对加密的分析进行解密所要使用的解密算法,可以提升解密的效率。
第四方面,本申请实施例提供一种通信方法,该方法可以由训练网元或应用于训练网元中的模块(如芯片)来执行。以训练网元执行该通信方法为例,该方法包括:训练网元接收来自推理网元的请求消息,该请求消息包括分析类型的标识信息,该请求消息用于请求支持该分析类型的模型,该训练网元与该推理网元的厂商类型不同,该推理网元和该训练网元的模型部署平台的类型不同;该训练网元向该推理网元发送响应消息,该响应消息包括第一指示信息和第二网元的地址信息,该第一指示信息指示拒绝请求支持该分析类型的模型,该第二网元支持的模型部署平台的类型包括该训练网元的模型部署平台的类型;该训练网元接收来自该第二网元的加密的分析结果,该加密的分析结果是该第二网元根据该推理网元的待分析的数据和该分析类型对应的加密的模型得到的;该训练网元对该加密的分析结果进行解密,得到解密的分析结果;该训练网元向该推理网元发送该解密的分析结果。
该方案,推理网元和训练网元由不同厂商部署,且二者使用的模型部署平台不同,打破了现有解决方案中模型只能同厂商共享的限制。该方案提供了模型跨厂商加密分发的流程,增强了训练网元将模型加密分发的能力,规避了推理网元的部署厂商窃取模型的框架和参数等信息的风险。
在一种可能的实现方法中,该请求消息中还包含该推理网元的厂商类型和该推理网元的模型部署平台的类型;该训练网元向该推理网元发送响应消息之前,确定该训练网元与该推理网元的厂商类型不同,且该推理网元和该训练网元的模型部署平台的类型不同。
该方案,通过在请求消息中携带推理网元的厂商类型和推理网元的模型部署平台的类型,训练网元可以判断训练网元与推理网元的厂商类型是否相同,以及判断训练网元与推理网元的模型部署平台的类型是否相同,从而便于训练网元选择合适的方法为推理网元提供数据推理功能,可以提升数据推理的效率。
在一种可能的实现方法中,该响应消息还包括拒绝原因值,该拒绝原因值为该训练网元与该推理网元的厂商类型不同,且该推理网元和该训练网元的模型部署平台的类型不同。
该方案,通过拒绝原因值,可以告知推理网元被拒绝的原因,从而推理网元不再向训练网元发送用于请求支持分析类型的模型,可以减少推理的开销。
在一种可能的实现方法中,该训练网元接收来自推理网元的请求消息之前,向该第二网元发送该分析类型的标识信息和该分析类型对应的该加密的模型。
在一种可能的实现方法中,该响应消息中还包括第二指示信息,该第二指示信息用于指示该加密的模型对应的输入数据的数据类型。
该方案,通过该第二指示信息,使得推理网元对输入数据进行相应预处理,得到符合要求的待分析的数据,可以提升数据推理的效率。
在一种可能的实现方法中,该训练网元接收来自该第二网元的该加密的分析结果和关联标识;该训练网元根据该关联标识,确定该加密的模型对应的加密算法;该训练网元根据该加密算法,确定解密算法;该训练网元根据该解密算法对该加密的分析结果进行解密,得到该解密的分析结果。
该方案,通过关联标识,训练网元可以准确获取加密的模型对应的加密算法,进而准确获知对加密的分析进行解密所要使用的解密算法,可以提升解密的效率。
第五方面,本申请实施例提供一种通信方法,该方法可以由第一网元或应用于第一网元中的模块(如芯片)来执行。以第一网元执行该通信方法为例,该方法包括:第一网元接收加密的分析结果;该第一网元对该加密的分析结果进行解密,得到解密的分析结果;该第一网元向推理网元发送该解密的分析结果。
在一种可能的实现方法中,该第一网元接收来自该推理网元的该加密的分析结果。
在一种可能的实现方法中,该第一网元接收来自第二网元的该加密的分析结果和该推理网元的地址信息;该第一网元根据该推理网元的地址信息,向该推理网元发送该解密的分析结果。
在一种可能的实现方法中,该第一网元接收加密的分析结果之前,该第一网元接收来自训练网元的关联标识和该关联标识对应的解密算法的标识;该第一网元接收该加密的分析结果和该关联标识;该第一网元根据该关联标识,确定该解密算法;该第一网元根据该解密算法对该加密的分析结果进行解密,得到该解密的分析结果。
第六方面,本申请实施例提供一种通信方法,该方法可以由第二网元或应用于第二网 元中的模块(如芯片)来执行。以第二网元执行该通信方法为例,该方法包括:第二网元接收来自训练网元的分析类型的标识信息和支持该分析类型的加密的模型,该第二网元支持的模型部署平台的类型包括该训练网元的模型部署平台的类型;第二网元接收来自推理网元的待分析的数据;该第二网元根据该加密的模型和该待分析的数据,得到加密的分析结果;该第二网元向训练网元或第一网元发送该加密的分析结果和用于接收解密的分析结果的该推理网元的地址信息,该解密的分析结果是该训练网元或该第一网元根据该加密的分析结果得到的。
第七方面,本申请实施例提供一种通信方法,该方法可以由推理网元或应用于推理网元中的模块(如芯片)来执行。以推理网元执行该通信方法为例,该方法包括:推理网元向数据管理网元发送请求消息,该请求消息包括分析类型的标识信息,该请求消息用于请求支持该分析类型的网元;该推理网元接收来自该数据管理网元的响应消息,该响应消息包括至少一组信息,每组信息中包括一个候选训练网元的地址信息和该候选训练网元的模型信息,该候选训练网元支持该分析类型,该候选训练网元的模型信息包括该候选训练网元的厂商类型和该候选训练网元的模型部署平台的类型;当该至少一组信息对应的至少一个候选训练网元中,存在与该推理网元的厂商类型不同且模型部署平台的类型相同的一个或多个候选训练网元,该推理网元从一个或多个候选训练网元中选择一个候选训练网元,作为训练网元。
该方案,增强了数据管理网元的功能,训练网元首先将支持的分析类型的标识信息及对应的模型信息注册/更新到数据管理网元,然后推理网元向数据管理网元发现可用的训练网元或第三方网元。推理网元和训练网元由不同厂商部署,二者使用的模型部署平台的类型相同或不同,该方案提供了模型跨厂商加密分发的流程,增强了训练网元将模型加密分发的能力,规避了推理网元的部署厂商窃取模型的框架和参数等信息的风险,打破了现有解决方案中模型只能同厂商共享的限制。
在一种可能的实现方法中,当该至少一组信息对应的至少一个候选训练网元中,不存在与该推理网元的厂商类型不同且模型部署平台的类型相同的候选训练网元,该推理网元根据该至少一组信息确定第二网元的地址信息。
在一种可能的实现方法中,该候选训练网元的模型信息中包括该第二网元的地址信息;该推理网元从该候选训练网元的模型信息中,获取该第二网元的地址信息。
在一种可能的实现方法中,上述任意实现方法中的加密的模型是使用全同态加密算法、随机安全平均算法或差分隐私算法中的一个或多个进行加密的。
在一种可能的实现方法中,上述任意实现方法中的推理网元可以是独立的核心网网元或者是核心网网元中的一个功能模块。
在一种可能的实现方法中,上述任意实现方法中的训练网元可以是独立的核心网网元或者是核心网网元中的一个功能模块。
在一种可能的实现方法中,上述任意实现方法中的第一网元可以是分析结果解密网元,可用于对加密的分析结果进行解密。
在一种可能的实现方法中,上述任意实现方法中的第二网元可以是模型部署和推理网元,可用于根据模型对待分析的数据进行推理,得到分析结果。如果使用的模型是加密的模型,则可以根据加密的模型对待分析的数据进行推理,得到加密的分析结果。
第八方面,本申请实施例提供一种通信装置,该装置可以是推理网元或应用于推理网 元中的模块(如芯片)。该装置具有实现上述第一方面的任意实现方法、第二方面的任意实现方法或第七方面的任意实现方法的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块。
第九方面,本申请实施例提供一种通信装置,该装置可以是推理网元或应用于推理网元中的模块(如芯片)。该装置具有实现上述第二方面的任意实现方法或第四方面的任意实现方法的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块。
第十方面,本申请实施例提供一种通信装置,该装置可以是第一网元或应用于第一网元中的模块(如芯片)。该装置具有实现上述第五方面的任意实现方法的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块。
第十一方面,本申请实施例提供一种通信装置,该装置可以是第二网元或应用于第二网元中的模块(如芯片)。该装置具有实现上述第六方面的任意实现方法的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块。
第十二方面,本申请实施例提供一种通信装置,包括处理器和存储器;该存储器用于存储计算机指令,当该装置运行时,该处理器执行该存储器存储的计算机指令,以使该装置执行上述第一方面至第七方面中的任意实现方法。
第十三方面,本申请实施例提供一种通信装置,包括用于执行上述第一方面至第七方面中的任意实现方法的各个步骤的单元或手段(means)。
第十四方面,本申请实施例提供一种通信装置,包括处理器和接口电路,所述处理器用于通过接口电路与其它装置通信,并执行上述第一方面至第七方面中的任意实现方法。该处理器包括一个或多个。
第十五方面,本申请实施例提供一种通信装置,包括与存储器耦合的处理器,该处理器用于调用所述存储器中存储的程序,以执行上述第一方面至第七方面中的任意实现方法。该存储器可以位于该装置之内,也可以位于该装置之外。且该处理器可以是一个或多个。
第十六方面,本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在通信装置上运行时,使得上述第一方面至第七方面中的任意实现方法被执行。
第十七方面,本申请实施例还提供一种计算机程序产品,该计算机程序产品包括计算机程序或指令,当计算机程序或指令被通信装置运行时,使得上述第一方面至第七方面中的任意实现方法被执行。
第十八方面,本申请实施例还提供一种芯片***,包括:处理器,用于执行上述第一方面至第三方面中的任意实现方法。
第十九方面,本申请实施例还提供一种通信***,包括用于实现上述第一方面的任意实现方法的推理网元和用于实现上述第二方面的任意实现方法的训练网元。
第二十方面,本申请实施例还提供一种通信***,包括用于实现上述第三方面的任意实现方法的推理网元和用于实现上述第四方面的任意实现方法的训练网元。
附图说明
图1为基于服务化架构的5G网络架构示意图;
图2为基于点对点接口的5G网络架构示意图;
图3为本申请实施例提供的一种通信方法的流程示意图;
图4为本申请实施例提供的一种通信方法的流程示意图;
图5为本申请实施例提供的一种通信方法的流程示意图;
图6为本申请实施例提供的一种通信装置示意图;
图7为本申请实施例提供的一种通信装置示意图。
具体实施方式
图1为基于服务化架构的第五代(the 5th generation,5G)网络架构示意图。图1所示的5G网络架构中可包括终端设备、接入网设备以及核心网设备。终端设备通过接入网设备和核心网涉及接入数据网络(data network,DN)。其中,核心网设备包括以下网元中的部分或者全部:统一数据管理(unified data management,UDM)网元、统一数据库(unified data repository,UDR)、网络开放功能(network exposure function,NEF)网元(图中未示出)、应用功能(application function,AF)网元、策略控制功能(policy control function,PCF)网元、接入与移动性管理功能(access and mobility management function,AMF)网元、会话管理功能(session management function,SMF)网元、用户面功能(user plane function,UPF)网元、网络数据分析功能(Network Data Analytics Function,NWDAF)网元、网络存储功能(Network Repository Function,NRF)网元(图中未示出)。
接入网设备可以是无线接入网(radio access network,RAN)设备。例如:基站(base station)、演进型基站(evolved NodeB,eNodeB)、发送接收点(transmission reception point,TRP)、5G移动通信***中的下一代基站(next generation NodeB,gNB)、第六代(the 6th generation,6G)移动通信***中的下一代基站、未来移动通信***中的基站或无线保真(wireless fidelity,WiFi)***中的接入节点等;也可以是完成基站部分功能的模块或单元,例如,可以是集中式单元(central unit,CU),也可以是分布式单元(distributed unit,DU)。无线接入网设备可以是宏基站,也可以是微基站或室内站,还可以是中继节点或施主节点等。本申请的实施例对无线接入网设备所采用的具体技术和具体设备形态不做限定。
终端设备可以是用户设备(user equipment,UE)、移动台、移动终端等。终端设备可以广泛应用于各种场景,例如,设备到设备(device-to-device,D2D)、车物(vehicle to everything,V2X)通信、机器类通信(machine-type communication,MTC)、物联网(internet of things,IOT)、虚拟现实、增强现实、工业控制、自动驾驶、远程医疗、智能电网、智能家具、智能办公、智能穿戴、智能交通、智慧城市等。终端设备可以是手机、平板电脑、带无线收发功能的电脑、可穿戴设备、车辆、城市空中交通工具(如无人驾驶机、直升机等)、轮船、机器人、机械臂、智能家居设备等。
接入网设备和终端设备可以是固定位置的,也可以是可移动的。接入网设备和终端设备可以部署在陆地上,包括室内或室外、手持或车载;也可以部署在水面上;还可以部署在空中的飞机、气球和人造卫星上。本申请的实施例对接入网设备和终端设备的应用场景不做限定。
AMF网元,包含执行移动性管理、接入鉴权/授权等功能。此外,还负责在终端设备与PCF间传递用户策略。
SMF网元,包含执行会话管理、PCF下发控制策略的执行、UPF的选择、终端设备的互联网协议(internet protocol,IP)地址分配等功能。
UPF网元,作为和数据网络的接口,包含完成用户面数据转发、基于会话/流级的计费统计,带宽限制等功能。
UDM网元,包含执行管理签约数据、用户接入授权等功能。
UDR,包含执行签约数据、策略数据、应用数据等类型数据的存取功能。
NEF网元,用于支持能力和事件的开放。
AF网元,传递应用侧对网络侧的需求,例如,QoS需求或用户状态事件订阅等。AF可以是第三方功能实体,也可以是运营商部署的应用服务器。
PCF网元,包含负责针对会话、业务流级别进行计费、QoS带宽保障及移动性管理、终端设备策略决策等策略控制功能。
NRF网元,可用于提供网元发现功能,基于其他网元的请求,提供网元类型对应的网元信息。NRF还提供网元管理服务,如网元注册、更新、去注册以及网元状态订阅和推送等。
NWDAF网元,主要用于收集数据(包括终端设备数据、接入网设备数据、核心网网元数据以及第三方应用数据中的一种或者多种),并提供数据分析服务,可以输出数据分析结果,供网络、网管及应用执行策略决策使用。NWDAF可以利用机器学习模型进行数据分析。第三代合作伙伴计划(3rd generation partnership project,3GPP)Release 17中将NWDAF的训练功能和推理功能进行拆分,一个NWDAF可以仅支持模型训练功能,或仅支持数据推理功能,或同时支持模型训练功能和数据推理功能。其中,支持模型训练功能的NWDAF也可以称为训练NWDAF,或称为支持模型训练逻辑功能(model training logical function,MTLF)的NWDAF(简称为NWDAF(MTLF))。训练NWDAF可以根据获取的数据进行模型训练,得到训练后的模型。支持数据推理功能的NWDAF也可以称为推理NWDAF,或称为支持分析逻辑功能(analytics logical function,AnLF)的NWDAF(简称为NWDAF(AnLF))。推理NWDAF可以将输入数据输入到训练后的模型,得到分析结果或推理数据。本申请实施例中,训练NWDAF指的是至少支持模型训练功能的NWDAF。作为一种可能的实现方法,训练NWDAF也可以支持数据推理功能。推理NWDAF指的是至少支持数据推理功能的NWDAF。作为一种可能的实现方法,推理NWDAF也可以支持模型训练功能。如果一个NWDAF同时支持模型训练功能和数据推理功能,则该NWDAF可以称为训练NWDAF、推理NWDAF或训练推理NWDAF或NWDAF。本申请实施例中,一个NWDAF可以是一个单独的网元,也可以与其他网元合设,例如将NWDAF设置到PCF网元或者AMF网元中。
DN,是位于运营商网络之外的网络,运营商网络可以接入多个DN,DN上可部署多种业务,可为终端设备提供数据和/或语音等服务。例如,DN是某智能工厂的私有网络,智能工厂安装在车间的传感器可为终端设备,DN中部署了传感器的控制服务器,控制服务器可为传感器提供服务。传感器可与控制服务器通信,获取控制服务器的指令,根据指令将采集的传感器数据传送给控制服务器等。又例如,DN是某公司的内部办公网络,该公司员工的手机或者电脑可为终端设备,员工的手机或者电脑可以访问公司内部办公网络 上的信息、数据资源等。
图1中Npcf、Nudr、Nudm、Naf、Namf、Nsmf、Nnwdaf分别为上述PCF、UDR、UDM、AF、AMF、SMF、NWDAF提供的服务化接口,用于调用相应的服务化操作。N1、N2、N3、N4,以及N6为接口序列号,这些接口序列号的含义可参见图2中的描述。
图2为基于点对点接口的5G网络架构示意图,其中的网元的功能的介绍可以参考图1中对应的网元的功能的介绍,不再赘述。图2与图1的主要区别在于:图1中的各个控制面网元之间的接口是服务化的接口,图2中的各个控制面网元之间的接口是点对点的接口。
在图2所示的架构中,各个网元之间的接口名称及功能如下:
1)、N1:AMF与终端设备之间的接口,可以用于向终端设备传递NAS信令(如包括来自AMF的QoS规则)等。
2)、N2:AMF与RAN之间的接口,可以用于传递核心网侧至RAN的无线承载控制信息等。
3)、N3:RAN与UPF之间的接口,主要用于传递RAN与UPF间的上下行用户面数据。
4)、N4:SMF与UPF之间的接口,可以用于控制面与用户面之间传递信息,包括控制面向用户面的转发规则、QoS控制规则、流量统计规则等的下发以及用户面的信息上报。
5)、N5:AF与PCF之间的接口,可以用于应用业务请求下发以及网络事件上报。
6)、N6:UPF与DN的接口,用于传递UPF与DN之间的上下行用户数据流。
7)、N7:PCF与SMF之间的接口,可以用于下发协议数据单元(protocol data unit,PDU)会话粒度以及业务数据流粒度控制策略。
8)、N8:AMF与UDM间的接口,可以用于AMF向UDM获取接入与移动性管理相关签约数据与鉴权数据,以及AMF向UDM注册终端设备当前移动性管理相关信息等。
9)、N9:UPF和UPF之间的用户面接口,用于传递UPF间的上下行用户数据流。
10)、N10:SMF与UDM间的接口,可以用于SMF向UDM获取会话管理相关签约数据,以及SMF向UDM注册终端设备当前会话相关信息等。
11)、N11:SMF与AMF之间的接口,可以用于传递RAN和UPF之间的PDU会话隧道信息、传递发送给终端设备的控制消息、传递发送给RAN的无线资源控制信息等。
12)、N15:PCF与AMF之间的接口,可以用于下发终端设备策略及接入控制相关策略。
13)、N23:PCF与NWDAF之间的接口,NWDAF可以通过该接口收集PCF上的数据。需要说明的是,NWDAF还可以与其它设备(如AMF、UPF、接入网设备、终端设备等)之间有接口,图中并未完全示出。
14)、N35:UDM与UDR间的接口,可以用于UDM从UDR中获取用户签约数据信息。
15)、N36:PCF与UDR间的接口,可以用于PCF从UDR中获取策略相关签约数据以及应用数据相关信息。
可以理解的是,上述网元或者功能既可以是硬件设备中的网络元件,也可以是在专用硬件上运行软件功能,或者是平台(例如,云平台)上实例化的虚拟化功能。作为一种可能的实现方法,上述网元或者功能可以由一个设备实现,也可以由多个设备共同实现,还可 以是一个设备内的一个功能模块,本申请实施例对此不作具体限定。
作为一种实现方法,本申请实施例中的数据管理网元可以是上述NRF、UDM或UDR,也可以是未来通信如6G网络中具有上述NRF、UDM或UDR的功能的网元。推理网元可以是上述推理NWDAF或未来通信如6G网络中具有上述推理NWDAF的功能的网元。训练网元可以是上述训练NWDAF或未来通信如6G网络中具有上述训练NWDAF的功能的网元。
作为一种实现方法,本申请实施例中的数据管理网元可以是网管侧模型管理设备、网管侧模型管理网元或网管侧模型管理服务。推理网元可以是接入网设备侧推理设备。训练网元可以是网管侧训练设备、网管侧训练网元或网管侧训练服务。
作为一种实现方法,本申请实施例中的数据管理网元可以是接入网设备侧模型管理设备。推理网元可以是接入网设备侧推理设备。训练网元可以是接入网设备侧训练设备。
为实现模型跨厂商共享,本申请实施例提供一种通信方法。该方法中,训练网元的厂商类型与推理网元的厂商类型不同,且训练网元的模型部署平台的类型与推理网元的模型部署平台的类型相同。其中,模型部署平台是模型运行所依赖的一种框架,不同模型部署平台的动态计算图、静态计算图、调试方式、可视化或并行特性可能不同,模型部署平台的类型用于区分不同的模型部署平台。示例性的,厂商类型可以通过Vendor ID来表示,比如Vendor ID=1表示厂商A,Vendor ID=2表示厂商B。示例性的,模型部署平台的类型可以通过AI Platform ID(或Platform ID)来表示,比如AI Platform ID=1标识模型部署平台A,AI Platform ID=2表示模型部署平台B。其中,AI是人工智能(artificial intelligence)的简称。这里对厂商类型和模型部署平台的类型做了统一说明,后面不再赘述。
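The sketch below (a non-normative Python illustration) shows how an inference network element and a training network element could compare the vendor type and the model deployment platform type; only the notions of Vendor ID and AI Platform ID come from the paragraph above, the concrete identifier values and the profile layout are assumptions.

```python
from dataclasses import dataclass

# Illustrative identifier values only; the numbering is an assumption.
VENDOR_A, VENDOR_B = 1, 2
PLATFORM_A, PLATFORM_B = 1, 2

@dataclass
class NfModelProfile:
    vendor_id: int        # 厂商类型 (vendor type)
    ai_platform_id: int   # 模型部署平台的类型 (model deployment platform type)
    platform_version: str = "V1.0"

def cross_vendor_same_platform(inference: NfModelProfile, training: NfModelProfile) -> bool:
    """True exactly when the encrypted-model flow of Fig. 3 applies."""
    return (inference.vendor_id != training.vendor_id
            and inference.ai_platform_id == training.ai_platform_id)

# Inference NF from vendor A and training NF from vendor B, both on platform A:
print(cross_vendor_same_platform(NfModelProfile(VENDOR_A, PLATFORM_A),
                                 NfModelProfile(VENDOR_B, PLATFORM_A)))  # True
```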
参考图3,该方法包括以下步骤:
步骤301,推理网元向训练网元发送请求消息。相应的,训练网元接收请求消息。
该请求消息中包括分析类型的标识信息(analytics ID),该请求消息用于请求支持该分析类型的标识信息指示的分析类型的模型。分析类型的标识信息用于指示分析类型,分析类型的标识信息比如可以业务体验(service experience)或网元负载信息(NF load information)等。
作为一种可能的实现方法,该请求消息中还包括厂商类型和模型部署平台的类型。其中,请求消息中的厂商类型、模型部署平台的类型指的是推理网元的厂商类型、模型部署平台的类型。厂商类型比如可以是华为、爱立信或诺基亚等。模型部署平台的类型比如可以是Mindspore、Tensorflow或PyTorch等。
作为一种可能的实现方法,该请求消息中还可以包含推理网元的模型部署平台的版本。模型部署平台的版本比如可以是V1.0或V2.1等。
作为一种可能的实现方法,该请求消息中还可以包含关联标识。
作为一种实现方法,推理网元可以通过调用Nnwdaf_MLModelProvision_Subscribe服务操作向训练网元请求模型信息。即,该步骤301中的请求消息可以是Nnwdaf_MLModelProvision_Subscribe服务操作。
本申请实施例中,也将该步骤301中的请求消息称为第一请求消息。
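As an illustration of the first request message of step 301, the sketch below assembles a possible payload; only the carried information elements and the service operation name come from the description above, while the JSON attribute names are assumptions made for this sketch.

```python
import json
import uuid

# Possible payload of the first request message (step 301); attribute names are assumptions.
first_request = {
    "serviceOperation": "Nnwdaf_MLModelProvision_Subscribe",
    "analyticsId": "SERVICE_EXPERIENCE",   # 分析类型的标识信息
    "vendorId": 1,                         # optional: 推理网元的厂商类型
    "aiPlatformId": 2,                     # optional: 推理网元的模型部署平台的类型
    "platformVersion": "V2.1",             # optional: 模型部署平台的版本
    "correlationId": str(uuid.uuid4()),    # optional: 关联标识
}
print(json.dumps(first_request, indent=2))
```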
步骤302,训练网元确定训练网元与推理网元的厂商类型不同且模型部署平台的类型相同。
作为一种实现方法,上述步骤301的请求消息中携带推理网元的厂商类型和推理网元的模型部署平台的类型,训练网元判断训练网元的厂商类型与推理网元的厂商类型是否相同,以及判断训练网元的模型部署平台的类型与推理网元的模型部署平台的类型。如果训练网元与推理网元的厂商类型不同且训练网元与推理网元的模型部署平台的类型相同,则执行以下步骤303及后续步骤,否则流程结束。
作为另一种实现方法,推理网元可以知晓在该推理网元上部署的各个训练网元的厂商类型和模型部署平台的类型,则上述步骤301的请求消息中可以不需要携带推理网元的厂商类型和推理网元的模型部署平台的类型,而是携带一个指示信息,该指示信息指示训练网元的厂商类型与推理网元的厂商类型是否相同,以及指示训练网元的模型部署平台的类型与推理网元的模型部署平台的类型是否相同,从而训练网元可以根据该指示信息判断训练网元的厂商类型与推理网元的厂商类型是否相同,以及判断训练网元的模型部署平台的类型与推理网元的模型部署平台的类型是否相同。如果训练网元与推理网元的厂商类型不同且训练网元与推理网元的模型部署平台的类型相同,则执行以下步骤303及后续步骤,否则流程结束。
作为另一种实现方法,可以在训练网元上提前配置各个推理网元的厂商类型和模型部署平台的类型,则上述步骤301的请求消息中可以不需要携带推理网元的厂商类型和推理网元的模型部署平台的类型,也不需要携带上述指示信息,训练网元可以根据本地配置信息判断训练网元的厂商类型与推理网元的厂商类型是否相同,以及判断训练网元的模型部署平台的类型与推理网元的模型部署平台的类型是否相同。如果训练网元与推理网元的厂商类型不同且训练网元与推理网元的模型部署平台的类型相同,则执行以下步骤303及后续步骤,否则流程结束。
需要说明的是,上述所说的流程结束指的是在该图3的实施例中流程结束,在流程结束之后,还可以执行其它操作。比如,如果训练网元与推理网元的厂商类型相同且训练网元与推理网元的模型部署平台的类型相同,则训练网元可以向推理网元提供未加密的模型或未加密的模型的地址信息,然后推理网元根据未加密的模型得到未加密的分析结果。再比如,如果训练网元与推理网元的厂商类型不同且训练网元与推理网元的模型部署平台的类型不同,则可以采用以下图4的实施例的方案,使得推理网元可以获得分析结果。再比如,如果训练网元与推理网元的厂商类型相同且训练网元与推理网元的模型部署平台的类型不同,则推理网元可以向第三方网元(如第二网元)提供待分析的数据,由第二网元使用加密的模型和待分析的数据得到加密的分析结果,然后由第一网元或训练网元对待分析的数据进行解密得到解密的分析结果,然后将解密的分析结果发送给推理网元。
当然,还可以预先配置训练网元的功能,比如预先配置训练网元1至训练网元10只为厂商类型相同且模型部署的平台的类型相同的推理网元提供模型。以训练网元1为例,如果训练网元1收到来自推理网元的上述步骤301的请求消息,该训练网元1默认该训练网元1的厂商类型与推理网元的厂商类型不同,且训练网元1的模型部署的平台的类型与推理网元的模型部署的平台的类型相同。该实现方法下,不需要执行该步骤302。
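The decision in step 302 and the surrounding discussion can be summarised by a small dispatch function; the returned strings are informal labels for this sketch, not message names defined by this application.

```python
def handle_model_request(same_vendor: bool, same_platform: bool) -> str:
    """Sketch of the training network element's decision around step 302."""
    if same_platform:
        if same_vendor:
            # 同厂商且同平台: an unencrypted model (or its address) can be returned
            return "provide plaintext model or its address"
        # 不同厂商但同平台: encrypted-model flow of Fig. 3 (steps 303 to 310)
        return "provide encrypted model or its address"
    # 平台不同 (无论厂商是否相同): delegate inference to the second network element (Fig. 4)
    return "reject and return the address of the second network element"
```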
步骤303,训练网元向推理网元发送响应消息。相应的,推理网元接收响应消息。
该响应消息中包含加密的模型或加密的模型的地址信息,其中加密的模型的地址信息比如可以是统一资源定位符(uniform resource locator,URL)或完全限定域名(fully qualified domain name,FQDN)。
以该模型是神经网络模型为例,该模型包括模型架构信息和模型参数。其中,模型架构信息包括模型中的神经网络层数、层与层之间的连接关系以及每一层使用的激活函数等信息。模型参数包括神经网络的每一层的参数值。
作为一种实现方法,响应消息中的加密的模型包括未加密的模型架构信息和加密的模型参数。作为另一种实现方法,响应消息中的加密的模型包括加密的模型架构信息和加密的模型参数。
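A minimal sketch of the two packaging options just described; the field names and byte placeholders are illustrative assumptions, since the concrete structure of the encrypted model is not fixed here.

```python
# Option 1: plaintext architecture, encrypted parameters.
encrypted_model_v1 = {
    "modelArchitecture": {"layers": 3, "activation": "relu"},        # 未加密的模型架构信息
    "modelParameters": b"<ciphertext of the per-layer parameters>",  # 加密的模型参数
}
# Option 2: both architecture and parameters encrypted.
encrypted_model_v2 = {
    "modelArchitecture": b"<ciphertext of the architecture description>",  # 加密的模型架构信息
    "modelParameters": b"<ciphertext of the per-layer parameters>",        # 加密的模型参数
}
```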
作为一种可能的实现方法,该响应消息中还包含第一网元的地址信息或用于指示由该训练网元对加密的分析结果进行解密的指示信息(该实施例中,该指示信息也可以称为第一指示信息),该第一网元是具备分析结果解密功能的第三方网元,比如可以是一个NWDAF网元,该第一网元也可以称为分析结果解密网元。
作为一种可能的实现方法,该响应消息中还包含用于指示加密的模型对应的输入数据的数据类型的指示信息(该实施例中,该指示信息也可以称为第二指示信息)。比如该指示信息可以是一个事件标识(event ID)。示例性的,数据类型可以是UE位置或QoS Flow参数中的一个或多个。
作为一种可能的实现方法,该响应消息中还包括每种数据类型对应的数据格式和/或处理参数,推理网元根据每种数据类型对应的该数据格式和/或处理参数,对该数据类型对应的输入数据做相应的预处理,得到待分析的数据。示例性的,数据格式包括数据上报的时间窗(也就是数据在何时上报)、数据缓存的大小(也就是数据缓存到多大时才上报)中的一个或者多个,处理参数包括最大值、最小值、平均值或者方差值中的一个或者多个。这里对数据类型、数据格式以及处理参数的解释也适用于后续其它实施例,后面不再赘述。
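A minimal preprocessing sketch under the assumption that the processing parameters are delivered as simple keywords; the keyword strings and the sample values are assumptions, while the parameter set (maximum, minimum, average, variance) is taken from the paragraph above.

```python
from statistics import mean, pvariance

def preprocess(samples: list[float], processing_params: list[str]) -> dict:
    """Apply the requested processing parameters to data of one data type."""
    ops = {"max": max, "min": min, "average": mean, "variance": pvariance}
    return {name: ops[name](samples) for name in processing_params if name in ops}

# e.g. QoS-flow measurements collected within one reporting time window
print(preprocess([12.0, 15.5, 11.2, 13.3], ["average", "variance"]))
```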
作为一种可能的实现方法,该响应消息中还可以包含上述关联标识。
作为一种实现方法,训练网元可以通过调用Nnwdaf_MLModelProvision_Notify服务操作向推理网元发送上述信息。即,该步骤303中的响应消息可以是Nnwdaf_MLModelProvision_Notify服务操作。
本申请实施例中,也将该步骤303中的响应消息称为第一响应消息。
步骤304,推理网元根据加密的模型,得到加密的分析结果。
如果上述步骤303的响应消息中携带的是加密的模型的地址信息,则推理网元还需要根据该加密的模型的地址信息,获取加密的模型。比如,推理网元可以根据文件传输协议(file transfer protocol,FTP)从该加密的模型的地址信息所指示的地址下载该加密的模型。
推理网元根据待分析的数据和加密的模型得到加密的分析结果,也即将待分析的数据输入到加密的模型中,得到加密的分析结果。待分析的数据是加密的模型对应的输入数据,待分析的数据是推理网元从其它网元(如UE、SMF、AMF、接入网设备、PCF、UPF或者AF等中的一个或多个)收集到的。
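To make step 304 concrete, the sketch below uses a toy additively homomorphic (Paillier-style) scheme and an integer-quantised linear model; the scheme choice, the key size and every number are assumptions for illustration only, since the description merely requires that the model be encrypted and that the result be decryptable by the key holder.

```python
from math import gcd
from random import randrange

# Toy Paillier sketch: the inference NF evaluates an encrypted linear model on
# plaintext input data and obtains an encrypted analytics result that only the
# key holder (here the training NF) can decrypt.
P, Q = 2**31 - 1, 2**61 - 1        # well-known Mersenne primes; far too small for real use
N, N2 = P * Q, (P * Q) ** 2
LAM = (P - 1) * (Q - 1) // gcd(P - 1, Q - 1)   # lcm(p-1, q-1)
MU = pow(LAM, -1, N)                            # valid because g = n + 1 is used

def encrypt(m: int) -> int:
    r = randrange(2, N)
    while gcd(r, N) != 1:
        r = randrange(2, N)
    return (pow(1 + N, m, N2) * pow(r, N, N2)) % N2

def decrypt(c: int) -> int:
    return ((pow(c, LAM, N2) - 1) // N * MU) % N

def add_enc(c1: int, c2: int) -> int:   # Enc(a) * Enc(b) mod n^2 = Enc(a + b)
    return (c1 * c2) % N2

def mul_plain(c: int, k: int) -> int:   # Enc(a) ^ k mod n^2 = Enc(a * k)
    return pow(c, k, N2)

# Training NF: encrypt non-negative integer model weights and bias.
weights, bias = [3, 7, 2], 5
enc_w, enc_b = [encrypt(w) for w in weights], encrypt(bias)

# Inference NF (step 304): plaintext input data, encrypted score.
x = [10, 4, 6]
enc_score = enc_b
for ew, xi in zip(enc_w, x):
    enc_score = add_enc(enc_score, mul_plain(ew, xi))

# Training NF (step 306): decrypt the analytics result.
assert decrypt(enc_score) == sum(w * xi for w, xi in zip(weights, x)) + bias   # 75
```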
在步骤304之后,推理网元根据加密的分析结果,获取解密的分析结果,下面介绍推理网元获取解密的分析结果的两种不同实现方法。
作为第一种实现方法,如果上述步骤303的响应消息中携带用于指示由该训练网元对加密的分析结果进行解密的指示信息(即第一指示信息),则在步骤304之后执行以下步骤305至步骤307。
作为第二种实现方法,如果上述步骤303的响应消息中携带第一网元的地址信息,则在步骤304之后执行步骤308至步骤310。
步骤305,推理网元向训练网元发送请求消息。相应的,训练网元接收该请求消息。
该请求消息中包含分析类型的标识信息和加密的分析结果,该请求消息用于请求解密的分析结果,该分析类型的标识信息与上述步骤301的分析类型的标识信息相同。
作为一种可能的实现方法,该请求消息中还可以包含上述关联标识。
作为一种实现方法,推理网元可以通过调用Nnwdaf_AnalyticsDecryption_Request服务操作向训练网元发送分析类型的标识信息和加密的分析结果。即,该步骤305中的请求消息可以是Nnwdaf_AnalyticsDecryption_Request服务操作。
步骤306,训练网元对加密的分析结果进行解密,得到解密的分析结果。
其中,加密的模型可以是使用全同态加密(fully homomorphicencryption)算法、随机安全平均(stochastic safety average)算法或差分隐私(differential privacy)算法中的一个或多个进行加密的,则训练网元采用与加密的模型使用的加密算法相应的解密算法,对加密的分析结果进行解密,得到解密的分析结果。
如果上述步骤301的请求消息和步骤305的请求消息中均携带上述关联标识,则在上述步骤303之前或之后,训练网元将加密的模型所使用的加密算法与该关联标识进行绑定,进而在该步骤306中,训练网元可以先根据该步骤305的请求消息中的该关联标识确定加密的模型对应的加密算法,然后根据加密算法,确定解密算法,从而根据解密算法对加密的分析结果进行解密,得到解密的分析结果。
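The binding between the correlation identifier and the encryption algorithm could be kept in a simple registry on the training network element, as in the sketch below; the registry layout and scheme names are assumptions.

```python
from typing import Callable, Dict, Tuple

DecryptFn = Callable[[int], int]
# correlation ID -> (scheme used for the model handed out in step 303, decryption routine)
decryption_registry: Dict[str, Tuple[str, DecryptFn]] = {}

def bind_correlation_id(correlation_id: str, scheme: str, decrypt_fn: DecryptFn) -> None:
    decryption_registry[correlation_id] = (scheme, decrypt_fn)

def decrypt_analytics(correlation_id: str, encrypted_result: int) -> int:
    _scheme, decrypt_fn = decryption_registry[correlation_id]   # lookup of step 306
    return decrypt_fn(encrypted_result)

# e.g. binding the toy Paillier decrypt() from the sketch above:
#   bind_correlation_id("corr-123", "toy-paillier", decrypt)
#   decrypt_analytics("corr-123", enc_score)
```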
步骤307,训练网元向推理网元发送响应消息。相应的,推理网元接收响应消息。
该响应消息中包含解密的分析结果。
作为一种实现方法,训练网元可以通过调用Nnwdaf_AnalyticsDecryption_Request Response服务操作向推理网元发送解密的分析结果。即,该步骤307中的响应消息可以是Nnwdaf_AnalyticsDecryption_Request Response服务操作。
步骤308,推理网元向第一网元发送请求消息。相应的,第一网元接收该请求消息。
该请求消息中包含分析类型的标识信息和加密的分析结果,该请求消息用于请求解密的分析结果,该分析类型的标识信息与上述步骤301的分析类型的标识信息相同。
作为一种可能的实现方法,该请求消息中还可以包含上述关联标识。
作为一种实现方法,推理网元可以通过调用Nnf_AnalyticsDecryption_Request服务操作向第一网元发送分析类型的标识信息和加密的分析结果。即,该步骤308中的请求消息可以是Nnf_AnalyticsDecryption_Request服务操作。
步骤309,第一网元对加密的分析结果进行解密,得到解密的分析结果。
其中,加密的模型是使用全同态加密算法、随机安全平均算法或差分隐私算法中的一个或多个进行加密的,则训练网元采用与加密的模型使用的加密算法相应的解密算法,对加密的分析结果进行解密,得到解密的分析结果。
作为一种可能的实现方法,如果上述步骤303的响应消息中包含第一网元的地址信息,则该步骤303之前或之后,训练网元还向第一网元发送上述关联标识和加密的模型对应的解密算法,并且该上述步骤308的请求消息中也携带上述关联标识,从而在该步骤309中,第一网元可以先根据步骤308的请求消息中的该关联标识确定加密的模型对应的解密算法,从而根据解密算法对加密的分析结果进行解密,得到解密的分析结果。
步骤310,第一网元向推理网元发送响应消息。相应的,推理网元接收响应消息。
该响应消息中包含解密的分析结果。
作为一种实现方法,第一网元可以通过调用Nnf_AnalyticsDecryption_Response服务操作向推理网元发送解密的分析结果。即,该步骤310中的响应消息可以是Nnf_AnalyticsDecryption_Response服务操作。
上述方案中,推理网元和训练网元由不同厂商部署,但二者使用的模型部署平台相同,该方案提供了模型跨厂商加密分发的流程,增强了训练网元将模型加密分发的能力,规避了推理网元的部署厂商窃取模型的框架和参数等信息的风险,保障了模型信息的安全,且打破了现有解决方案中模型只能同厂商共享的限制。
作为一种实现方法,各个训练网元还可以将各自的模型信息注册至数据管理网元,从而当推理网元本地没有配置训练网元的地址信息时,推理网元可以从数据管理网元请求发现合适的训练网元。比如,训练网元可以向数据管理网元发送注册请求消息,该注册请求消息包括该训练网元能够提供的分析类型的标识信息和训练网元的模型信息,该模型信息包括训练网元的厂商类型和训练网元的模型部署平台的类型。作为一种可能的实现方法,该模型信息中还包含用于指示加密的模型对应的输入数据的数据类型的指示信息。作为一种可能的实现方法,该响应消息中还包括每种数据类型对应的数据格式和/或处理参数。作为一种可能的实现方法,该模型信息中还包含第一网元的地址信息或用于指示由该训练网元对加密的分析结果进行解密的指示信息。其中,该第一网元的含义可以参考前述描述。作为一种可能的实现方法,该模型信息中还包含第二网元的地址信息。该第二网元可以是一个可信的第三方网元,具体的,可以是模型部署和推理网元。第二网元可根据模型对待分析的数据进行推理,得到分析结果。如果使用的模型是加密的模型,则第二网元可以根据加密的模型对待分析的数据进行推理,得到加密的分析结果。
需要说明的是,不同的训练网元在向数据管理网元注册模型信息时,不同的训练网元的模型信息中的第一网元可以是同一个网元,也可以是不同的网元。同样的,不同的训练网元的模型信息中的第二网元可以是同一个网元,也可以是不同的网元。
作为一种实现方法,推理网元在上述步骤301之前,可以向数据管理网元发送请求消息(该实施例中,该请求消息也称为第二请求消息),该请求消息包括上述步骤301中的分析类型的标识信息,该请求消息用于请求支持该分析类型的网元,然后数据管理网元向推理网元发送响应消息(该实施例中,该响应消息也称为第二响应消息),该响应消息包括上述步骤301中的描述的训练网元的地址信息。其中,如果数据管理网元确定有多个训练网元支持上述分析类型,则数据管理网元可以将该多个训练网元的地址信息以及模型信息提供给推理网元,由推理网元从中选择一个训练网元。
本申请实施例提供一种通信方法。该方法中,训练网元的厂商类型与推理网元的厂商类型不同,且训练网元的模型部署平台的类型与推理网元的模型部署平台的类型不同。
参考图4,该方法包括以下步骤:
步骤401,训练网元将本地已有的模型进行加密,向第二网元发送加密的模型和该加密的模型对应的分析类型的标识信息。
其中,第二网元支持的模型部署平台的类型比较丰富,本申请实施例中的第二网元支持的模型部署平台的类型至少包括训练网元的模型部署平台的类型。该第二网元的含义可以参考前述描述。
作为一种可能的实现方法,训练网元还向第二网元发送第一网元的地址信息,该第一 网元具有解密分析结果功能。
可以理解的是,训练网元本地已有的模型可以是该训练网元训练所得的模型,也可以是该训练网元从其他训练网元获取的模型。
该步骤401为可选步骤。当不执行该步骤401时,可以由其它网元或者是运营商,向第二网元预配置上述信息,如加密的模型、加密的模型对应的分析类型的标识信息、第一网元的地址信息中的一个或多个。
步骤402,推理网元向训练网元发送请求消息。相应的,训练网元接收请求消息。
该步骤402与上述步骤301相同,可参考前述描述。
步骤403,训练网元向第二网元发送加密的更新模型和该加密的更新模型对应的分析类型的标识信息。
该步骤为可选步骤。训练网元在接收到推理网元的上述请求消息后,如果确认本地模型需要进行进一步训练,则训练网元触发到其它网元进行数据收集以及后续的模型训练过程,并将训练得到的更新模型进行加密后重新发送给第二网元。
步骤404,训练网元确定训练网元与推理网元的厂商类型不同且模型部署平台的类型不同。
该步骤404为可选步骤。该步骤404的实现方法以及多种不同的可替代实现方法,与前述步骤302的描述类似,可以参考前述描述。
步骤405,训练网元向推理网元发送响应消息。相应的,推理网元接收响应消息。
该响应消息中包含第二网元的地址信息和用于指示拒绝请求支持上述分析类型的模型的指示信息(该实施例中,也将该指示信息称为第一指示信息)。
作为一种可能的实现方法,该响应消息中还包含用于指示加密的模型对应的输入数据的数据类型的指示信息(该实施例中,该指示信息也可以称为第二指示信息)。比如该指示信息可以是一个事件标识(event ID)。作为一种可能的实现方法,该响应消息中还包括每种数据类型对应的数据格式和/或处理参数。
作为一种可能的实现方法,该响应消息中还包含拒绝原因值,该拒绝原因值为训练网元与推理网元的厂商类型不同,且推理网元和训练网元的模型部署平台的类型不同。
作为一种可能的实现方法,如果上述步骤402的请求消息中包含关联标识,则该响应消息中包含该关联标识。
作为一种实现方法,训练网元可以通过调用Nnwdaf_MLModelProvision_Notify服务操作向推理网元发送上述信息。即,该步骤405中的响应消息可以是Nnwdaf_MLModelProvision_Notify服务操作。
步骤406,推理网元根据第二网元的地址信息,向第二网元发送请求消息。相应的,第二网元接收该请求消息。
该请求消息中包含待分析的数据和分析类型的标识信息,该请求消息用于请求对待分析的数据进行分析。该分析类型的标识信息与上述步骤402的分析类型的标识信息相同。
其中,待分析的数据是加密的模型对应的输入数据,待分析的数据是推理网元从其它网元(如UE、SMF、AMF、接入网设备、PCF、UPF或者AF等中的一个或多个)收集到的。
作为一种可能的实现方法,该请求消息中还可以包含上述关联标识。
作为一种实现方法,推理网元可以通过调用Nnf_AnalyticsInfo_Request服务操作向第 二网元发送上述信息。即,该步骤406中的请求消息可以是Nnf_AnalyticsInfo_Request服务操作。
步骤407,第二网元根据加密的模型,得到加密的分析结果。
具体的,第二网元使用本地部署的加密的模型以及从推理网元接收的待分析的数据计算得到加密的分析结果。其中,第二网元上本地部署的加密的模型是来自训练网元、其它网元或云运营商配置。
在第二网元得到加密的分析结果之后,可以执行以下步骤408至步骤410,或者执行以下步骤411至步骤413。
步骤408,第二网元向训练网元发送请求消息。相应的,训练网元接收该请求消息。
该请求消息中包含分析类型的标识信息、加密的分析结果和推理网元的地址信息,该请求消息用于请求解密的分析结果以及将解密的分析结果发送给推理网元,该分析类型的标识信息与上述步骤402的分析类型的标识信息相同。
作为一种可能的实现方法,该请求消息中还可以包含上述关联标识。
作为一种实现方法,第二网元可以通过调用Nnwdaf_AnalyticsDecryption_Request服务操作向训练网元发送分析类型的标识信息、加密的分析结果和推理网元的地址信息。即,该步骤408中的请求消息可以是Nnwdaf_AnalyticsDecryption_Request服务操作。
步骤409,训练网元对加密的分析结果进行解密,得到解密的分析结果。
该步骤409同上述步骤306,可参考前述描述。
步骤410,训练网元向推理网元发送解密的分析结果。相应的,推理网元接收解密的分析结果。
作为一种实现方法,训练网元可以通过调用Nnwdaf_AnalyticsDecryption_Request Response服务操作向推理网元发送解密的分析结果。
步骤411,第二网元向第一网元发送请求消息。相应的,第一网元接收该请求消息。
该请求消息中包含分析类型的标识信息、加密的分析结果和推理网元的地址信息,该请求消息用于请求解密的分析结果以及将解密的分析结果发送给推理网元,该分析类型的标识信息与上述步骤402的分析类型的标识信息相同。
其中,第二网元可以通过上述步骤401,获得第一网元的地址信息。
作为一种可能的实现方法,该请求消息中还可以包含上述关联标识。
步骤412,第一网元对加密的分析结果进行解密,得到解密的分析结果。
该步骤412同上述步骤309,可参考前述描述。
步骤413,第一网元向推理网元发送解密的分析结果。相应的,推理网元接收解密的分析结果。
作为一种实现方法,第一网元可以通过调用Nnwdaf_AnalyticsDecryption_Request Response服务操作向推理网元发送解密的分析结果。
上述方案中,推理网元和训练网元由不同厂商部署,且二者使用的模型部署平台也不同,该方案提供了模型跨厂商加密分发的流程,增强了训练网元将模型加密分发的能力,规避了推理网元的部署厂商窃取模型的框架和参数等信息的风险,保障了模型信息的安全,且打破了现有解决方案中模型只能同厂商共享的限制。
参考图5,为本申请实施例提供的一种通信方法。该方法包括以下步骤:
步骤501,训练网元向数据管理网元发送注册请求消息。相应的,数据管理网元接收该注册请求消息。
该注册请求消息中包含分析类型的标识信息和模型信息,其中,模型信息包括厂商类型、模型部署平台的类型、第二网元的地址信息,以及还包括第一网元的地址信息或用于指示由该训练网元对加密的分析结果进行解密的指示信息。
作为一种可能的实现方法,该注册请求消息中还可以包含模型部署平台的版本。
作为一种可能的实现方法,该注册请求消息中还包含用于指示加密的模型对应的输入数据的数据类型的指示信息。比如该指示信息可以是一个事件标识(event ID)。作为一种可能的实现方法,该注册请求消息中还包括每种数据类型对应的数据格式和/或处理参数。
其中,关于分析类型的标识信息、厂商类型、模型部署平台的类型、模型部署平台的版本、第一网元、第二网元的含义可以参考前述描述,不再赘述。
作为一种实现方法,训练网元可以通过调用Nnrf_NFManagement_NFRegister Request服务操作向数据管理网元请求注册。即,该步骤501中的注册请求消息可以是Nnrf_NFManagement_NFRegister Request服务操作。
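An illustrative fragment of the registration request of step 501 is sketched below; Nnrf_NFManagement_NFRegister Request is the service operation named above, while the model-related attribute names and values are assumptions made for this sketch.

```python
registration_request = {
    "nfType": "NWDAF",
    "supportedAnalytics": [
        {
            "analyticsId": "NF_LOAD",                           # 分析类型的标识信息
            "modelInfo": {
                "vendorId": 2,                                  # 厂商类型
                "aiPlatformId": 1,                              # 模型部署平台的类型
                "platformVersion": "V1.0",                      # optional
                "inputDataTypes": ["UE_LOCATION"],              # optional event IDs
                "secondNfAddress": "https://model-host.example.invalid",   # 第二网元地址
                "firstNfAddress": "https://decrypt-nf.example.invalid",    # 第一网元地址
            },
        }
    ],
}
```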
步骤502,数据管理网元向训练网元发送注册响应消息。相应的,训练网元接收该注册响应消息。
作为一种实现方法,数据管理网元可以通过调用Nnrf_NFManagement_NFRegister Response服务操作向训练网元返回针对注册请求消息的响应。即,该步骤502中的注册响应消息可以是Nnrf_NFManagement_NFRegister Response服务操作。
步骤503,训练网元向数据管理网元发送更新请求消息。相应的,数据管理网元接收该更新请求消息。
如果训练网元的模型信息发生更新,比如模型部署平台的版本发生更新,则训练网元可以向数据管理网元发送更新请求消息,以将更新的模型信息重新注册至数据管理网元。
其中,更新请求消息中携带的信息与上述步骤501的注册请求消息中携带的信息类似,可以参考前述描述。
作为一种实现方法,训练网元可以通过调用Nnrf_NFManagement_NFUpdateRequest服务操作向数据管理网元请求注册更新。
步骤504,数据管理网元向训练网元发送更新响应消息。相应的,训练网元接收该更新响应消息。
作为一种实现方法,数据管理网元可以通过调用Nnrf_NFManagement_NFUpdate Response服务操作向训练网元返回针对更新请求消息的响应。
上述步骤503至步骤504为可选步骤。
步骤505,推理网元向数据管理网元发送请求消息。相应的,数据管理网元接收该请求消息。
该请求消息包括分析类型的标识信息。作为一种可能的实现方法,该请求消息还包括推理网元的厂商类型和模型部署平台的类型。
该请求消息用于请求获取支持该分析类型的网元,具体的,用于请求获取支持该分析类型的训练网元或第三方网元。
作为一种实现方法,推理网元可以通过调用Nnrf_NFDiscovery_Request服务操作向数据管理网元请求发现可用的训练网元或第三方网元。即,该步骤505中的请求消息可以是 Nnrf_NFDiscovery_Request服务操作。
步骤506,数据管理网元向推理网元发送响应消息。相应的,推理网元接收该响应消息。
该响应消息中包含至少一组信息,每组信息中包括至少一个候选训练网元的地址信息以及该候选训练网元的模型信息,该模型信息与上述步骤505的请求消息中的分析类型的标识信息对应,该模型信息中包含的内容可以参考前述步骤501的描述。
需要说明的是,不同的候选训练网元的模型信息中的第一网元的地址信息可以相同也可以不同,不同的候选训练网元的模型信息中的第二网元的地址信息可以相同也可以不同。
作为一种实现方法,数据管理网元可以通过调用Nnrf_NFDiscovery_Request Response服务操作响应推理网元的网元发现请求。即,该步骤506中的响应消息可以是Nnrf_NFDiscovery_Request Response服务操作。
步骤507,推理网元选择训练网元或第二网元。
如果上述步骤506的响应消息中包含多组信息,则推理网元根据下列顺序选择训练网元或第二网元。
如果该多组信息对应的至少一个候选训练网元中存在与推理网元厂商类型相同且模型部署平台的类型相同的一个或多个候选训练网元,则推理网元从该一个或多个候选训练网元中选择一个作为训练网元,比如随机选择一个或根据预定的规则选择一个。
如果该多组信息对应的至少一个候选训练网元中不存在与推理网元厂商类型相同且模型部署平台的类型相同的候选训练网元,但该多组信息对应的至少一个候选训练网元中存在与推理网元厂商类型不同且模型部署平台的类型相同的一个或多个候选训练网元,则推理网元从该一个或多个候选训练网元中选择一个作为训练网元,比如随机选择一个或根据预定的规则选择一个。
如果该多组信息对应的至少一个候选训练网元中不存在与推理网元厂商类型相同且模型部署平台的类型相同的候选训练网元,并且该多组信息对应的至少一个候选训练网元中也不存在与推理网元厂商类型不同且模型部署平台的类型相同的候选训练网元,则推理网元根据该多组信息对应的至少一个候选训练网元的模型信息,选择一个第二网元。比如,如果该至少一个候选训练网元的模型信息中的第二网元的地址均相同,则随机选择一个第二网元的地址。再比如,如果该至少一个候选训练网元的模型信息中的第二网元的地址不完全相同,则可以从中随机选择一个或者根据预定的规则选择一个。
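The selection order of step 507 can be written as a small helper on the inference network element; the per-candidate attribute names are assumptions, and taking the first match stands in for "pick one at random or according to a predefined rule".

```python
from typing import Sequence

def select_target(inf_vendor: int, inf_platform: int, candidates: Sequence[dict]) -> dict:
    """Return the selected training NF, or a second NF when no platform matches."""
    same_vendor = [c for c in candidates
                   if c["vendorId"] == inf_vendor and c["aiPlatformId"] == inf_platform]
    if same_vendor:
        return {"role": "training_nf", "address": same_vendor[0]["address"]}

    cross_vendor = [c for c in candidates
                    if c["vendorId"] != inf_vendor and c["aiPlatformId"] == inf_platform]
    if cross_vendor:
        return {"role": "training_nf", "address": cross_vendor[0]["address"]}

    # No platform match at all: fall back to a second NF address from the model info.
    return {"role": "second_nf", "address": candidates[0]["modelInfo"]["secondNfAddress"]}
```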
如果推理网元选择的是一个训练网元,则该步骤507之后可以执行上述步骤301至步骤307,或者执行上述步骤301至步骤304以及步骤308至步骤310。
如果推理网元选择的是一个第二网元,则该步骤507之后可以执行上述步骤406至步骤410,或者执行上述步骤406至步骤407以及步骤411至步骤413。
上述方案中,增强了数据管理网元的功能,训练网元首先将支持的分析类型的标识信息及对应的模型信息注册/更新到数据管理网元,然后推理网元向数据管理网元发现可用的训练网元或第三方网元。推理网元和训练网元由不同厂商部署,二者使用的模型部署平台的类型相同或不同,该方案提供了模型跨厂商加密分发的流程,增强了训练网元将模型加密分发的能力,规避了推理网元的部署厂商窃取模型的框架和参数等信息的风险,保障了模型信息的安全,且打破了现有解决方案中模型只能同厂商共享的限制。
可以理解的是,本发明实施例中数据管理网元仅是示例,作为一种可能的实现方法, 本发明实施例中数据管理网元所起到的作用可以由其他网元(如模型管理网元)执行。
可以理解的是,为了实现上述实施例中功能,推理网元、训练网元、第一网元和第二网元包括了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本申请中所公开的实施例描述的各示例的单元及方法步骤,本申请能够以硬件或硬件和计算机软件相结合的形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用场景和设计约束条件。
图6和图7为本申请的实施例提供的可能的通信装置的结构示意图。这些通信装置可以用于实现上述方法实施例中推理网元、训练网元、第一网元或第二网元的功能,因此也能实现上述方法实施例所具备的有益效果。在本申请的实施例中,该通信装置可以是推理网元、训练网元、第一网元或第二网元,也可以是应用于推理网元、训练网元、第一网元或第二网元的模块(如芯片)。
如图6所示,通信装置600包括处理单元610和收发单元620。通信装置600用于实现上述方法实施例中推理网元、训练网元、第一网元或第二网元的功能。
在第一个实施例中,当该通信装置是推理网元或用于推理网元的模型(如芯片),收发单元620,用于向训练网元发送第一请求消息,该第一请求消息包括分析类型的标识信息,该第一请求消息用于请求支持该分析类型的模型,该训练网元与该推理网元的厂商类型不同,该推理网元和该训练网元的模型部署平台的类型相同;接收来自该训练网元的第一响应消息,该第一响应消息包括加密的模型或者该加密的模型的地址信息,该加密的模型支持该分析类型;处理单元610,用于根据该加密的模型,得到加密的分析结果;根据该加密的分析结果,获取解密的分析结果。
在一种可能的实现方法中,收发单元620,用于向该训练网元发送该加密的分析结果;接收来自该训练网元的该解密的分析结果。
在一种可能的实现方法中,该第一响应消息中还包括第一指示信息,该第一指示信息指示由该训练网元对该加密的分析结果进行解密。
在一种可能的实现方法中,收发单元620,用于向该训练网元发送该加密的分析结果和关联标识,该关联标识用于该训练网元确定该加密的模型对应的加密算法。
在一种可能的实现方法中,该第一响应消息中还包括第一网元的地址信息;处理单元610,用于根据该第一网元的地址信息,通过收发单元620向该第一网元发送该加密的分析结果;接收来自该第一网元的该解密的分析结果。
在一种可能的实现方法中,处理单元610,用于根据该第一网元的地址信息,通过收发单元620向该第一网元发送该加密的分析结果和关联标识,该关联标识用于该第一网元确定该加密的模型对应的加密算法。
在一种可能的实现方法中,该第一响应消息中还包括第二指示信息,该第二指示信息用于指示该加密的模型对应的输入数据的数据类型。
在一种可能的实现方法中,该第一请求消息中还包含该推理网元的厂商类型和该推理网元的模型部署平台的类型。
在一种可能的实现方法中,收发单元620,用于在向训练网元发送第一请求消息之前,向数据管理网元发送第二请求消息,该第二请求消息包括该分析类型的标识信息,该第二请求消息用于请求支持该分析类型的网元;接收来自该数据管理网元的第二响应消息,该 第二响应消息包括该训练网元的地址信息。
在第二个实施例中,当该通信装置是训练网元或用于训练网元的模型(如芯片),收发单元620,用于接收来自推理网元的第一请求消息,该第一请求消息包括分析类型的标识信息,该第一请求消息用于请求支持该分析类型的模型,该训练网元与该推理网元的厂商类型不同,该推理网元和该训练网元的模型部署平台的类型相同;向该推理网元发送第一响应消息,该第一响应消息包括加密的模型或者该加密的模型的地址信息;接收来自该推理网元的加密的分析结果,该加密的分析结果是根据该加密的模型得到的;处理单元610,用于对该加密的分析结果进行解密,得到解密的分析结果;收发单元620,用于向该推理网元发送该解密的分析结果。
在一种可能的实现方法中,该第一请求消息中还包括该推理网元的厂商类型和该推理网元的模型部署平台的类型;处理单元610,用于在收发单元620向该推理网元发送第一响应消息之前,确定该训练网元与该推理网元的厂商类型不同,且该推理网元和该训练网元的模型部署平台的类型相同。
在一种可能的实现方法中,该第一响应消息中还包括第一指示信息,该第一指示信息指示由该训练网元对该加密的分析结果进行解密。
在一种可能的实现方法中,该第一响应消息中还包括第二指示信息,该第二指示信息用于指示该加密的模型对应的输入数据的数据类型。
在一种可能的实现方法中,收发单元620,用于接收来自推理网元的第一请求消息之前,向数据管理网元发送注册请求消息,该注册请求消息包括该分析类型的标识信息和该训练网元的模型信息,该模型信息包括该训练网元的厂商类型和该训练网元的模型部署平台的类型。
在一种可能的实现方法中,收发单元620,用于接收来自该推理网元的该加密的分析结果和关联标识;处理单元610,用于根据该关联标识,确定该加密的模型对应的加密算法;根据该加密算法,确定解密算法;根据该解密算法对该加密的分析结果进行解密,得到该解密的分析结果。
在第三个实施例中,当该通信装置是推理网元或用于推理网元的模型(如芯片),收发单元620,用于向训练网元发送请求消息,该请求消息包括分析类型的标识信息,该请求消息用于请求支持该分析类型的模型,该训练网元与该推理网元的厂商类型不同,该推理网元和该训练网元的模型部署平台的类型不同;接收来自该训练网元的响应消息,该响应消息包括第一指示信息和第二网元的地址信息,该第一指示信息指示拒绝请求支持该分析类型的模型,该第二网元支持的模型部署平台的类型包括该训练网元的模型部署平台的类型;处理单元610,用于根据该第二网元的地址信息,通过收发单元620向该第二网元发送待分析的数据,该待分析的数据用于该第二网元根据该分析类型对应的加密的模型生成加密的分析结果;收发单元620,用于接收来自该训练网元或者第一网元的解密的分析结果,该解密的分析结果是该训练网元或该第一网元根据该加密的分析结果得到的。
在一种可能的实现方法中,该响应消息还包括拒绝原因值,该拒绝原因值为该训练网元与该推理网元的厂商类型不同,且该推理网元和该训练网元的模型部署平台的类型不同。
在一种可能的实现方法中,该请求消息中还包含该推理网元的厂商类型和该推理网元的模型部署平台的类型。
在一种可能的实现方法中,该响应消息中还包括第二指示信息,该第二指示信息用于 指示该加密的模型对应的输入数据的数据类型。
在一种可能的实现方法中,处理单元610,用于根据该第二网元的地址信息,通过收发单元620向该第二网元发送待分析的数据和关联标识,该关联标识用于该第一网元或该训练网元确定该加密的模型对应的加密算法。
在第四个实施例中,当该通信装置是训练网元或用于训练网元的模型(如芯片),收发单元620,用于接收来自推理网元的请求消息,该请求消息包括分析类型的标识信息,该请求消息用于请求支持该分析类型的模型,该训练网元与该推理网元的厂商类型不同,该推理网元和该训练网元的模型部署平台的类型不同;向该推理网元发送响应消息,该响应消息包括第一指示信息和第二网元的地址信息,该第一指示信息指示拒绝请求支持该分析类型的模型,该第二网元支持的模型部署平台的类型包括该训练网元的模型部署平台的类型;接收来自该第二网元的加密的分析结果,该加密的分析结果是该第二网元根据该推理网元的待分析的数据和该分析类型对应的加密的模型得到的;处理单元610,用于对该加密的分析结果进行解密,得到解密的分析结果;收发单元620,用于向该推理网元发送该解密的分析结果。
在一种可能的实现方法中,该请求消息中还包含该推理网元的厂商类型和该推理网元的模型部署平台的类型;处理单元610,用于在收发单元620向该推理网元发送响应消息之前,确定该训练网元与该推理网元的厂商类型不同,且该推理网元和该训练网元的模型部署平台的类型不同。
在一种可能的实现方法中,该响应消息还包括拒绝原因值,该拒绝原因值为该训练网元与该推理网元的厂商类型不同,且该推理网元和该训练网元的模型部署平台的类型不同。
在一种可能的实现方法中,收发单元620,用于接收来自推理网元的请求消息之前,向该第二网元发送该分析类型的标识信息和该分析类型对应的该加密的模型。
在一种可能的实现方法中,该响应消息中还包括第二指示信息,该第二指示信息用于指示该加密的模型对应的输入数据的数据类型。
在一种可能的实现方法中,收发单元620,用于接收来自该第二网元的该加密的分析结果和关联标识;处理单元610,用于根据该关联标识,确定该加密的模型对应的加密算法;根据该加密算法,确定解密算法;根据该解密算法对该加密的分析结果进行解密,得到该解密的分析结果。
在第五个实施例中,当该通信装置是第一网元或用于第一网元的模型(如芯片),收发单元620,用于接收加密的分析结果;处理单元610,用于对该加密的分析结果进行解密,得到解密的分析结果;收发单元620,用于向推理网元发送该解密的分析结果。
在一种可能的实现方法中,收发单元620,用于接收来自该推理网元的该加密的分析结果。
在一种可能的实现方法中,收发单元620,用于接收来自第二网元的该加密的分析结果和该推理网元的地址信息;处理单元610,用于根据该推理网元的地址信息,通过收发单元620向该推理网元发送该解密的分析结果。
在一种可能的实现方法中,收发单元620,用于接收加密的分析结果之前,接收来自训练网元的关联标识和该关联标识对应的解密算法的标识;接收该加密的分析结果和该关联标识;处理单元610,用于根据该关联标识,确定该解密算法;根据该解密算法对该加密的分析结果进行解密,得到该解密的分析结果。
在第六个实施例中,当该通信装置是第二网元或用于第二网元的模型(如芯片),收发单元620,用于接收来自训练网元的分析类型的标识信息和支持该分析类型的加密的模型,该第二网元支持的模型部署平台的类型包括该训练网元的模型部署平台的类型;接收来自推理网元的待分析的数据;处理单元610,用于根据该加密的模型和该待分析的数据,得到加密的分析结果;收发单元620,用于向训练网元或第一网元发送该加密的分析结果和用于接收解密的分析结果的该推理网元的地址信息,该解密的分析结果是该训练网元或该第一网元根据该加密的分析结果得到的。
在第七个实施例中,当该通信装置是推理网元或用于推理网元的模型(如芯片),收发单元620,用于向数据管理网元发送请求消息,该请求消息包括分析类型的标识信息,该请求消息用于请求支持该分析类型的网元;接收来自该数据管理网元的响应消息,该响应消息包括至少一组信息,每组信息中包括一个候选训练网元的地址信息和该候选训练网元的模型信息,该候选训练网元支持该分析类型,该候选训练网元的模型信息包括该候选训练网元的厂商类型和该候选训练网元的模型部署平台的类型;处理单元610,用于当该至少一组信息对应的至少一个候选训练网元中,存在与该推理网元的厂商类型不同且模型部署平台的类型相同的一个或多个候选训练网元,从一个或多个候选训练网元中选择一个候选训练网元,作为训练网元。
在一种可能的实现方法中,处理单元610,用于当该至少一组信息对应的至少一个候选训练网元中,不存在与该推理网元的厂商类型不同且模型部署平台的类型相同的候选训练网元,根据该至少一组信息确定第二网元的地址信息。
在一种可能的实现方法中,该候选训练网元的模型信息中包括该第二网元的地址信息;处理单元610,用于从该候选训练网元的模型信息中,获取该第二网元的地址信息。
有关上述处理单元610和收发单元620更详细的描述可以直接参考上述方法实施例中相关描述直接得到,这里不加赘述。
如图7所示,通信装置700包括处理器710,作为一种可能的实现方法,该通信装置700还包括接口电路720。处理器710和接口电路720之间相互耦合。可以理解的是,接口电路720可以为收发器或输入输出接口。作为一种可能的实现方法,通信装置700还可以包括存储器730,用于存储处理器710执行的指令或存储处理器710运行指令所需要的输入数据或存储处理器710运行指令后产生的数据。
当通信装置700用于实现上述方法实施例时,处理器710用于实现上述处理单元610的功能,接口电路720用于实现上述收发单元620的功能。
可以理解的是,本申请的实施例中的处理器可以是中央处理单元(central processing unit,CPU),还可以是其它通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field programmable gate array,FPGA)或者其它可编程逻辑器件、晶体管逻辑器件,硬件部件或者其任意组合。通用处理器可以是微处理器,也可以是任何常规的处理器。
本申请的实施例中的方法步骤可以通过硬件的方式来实现,也可以由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成,软件模块可以被存放于随机存取存储器、闪存、只读存储器、可编程只读存储器、可擦除可编程只读存储器、电可擦除可编程只读存储器、寄存器、硬盘、移动硬盘、CD-ROM或者本领域熟知的任何其它形式 的存储介质中。一种示例性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。另外,该ASIC可以位于接入网设备或终端设备中。当然,处理器和存储介质也可以作为分立组件存在于接入网设备或终端设备中。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机程序或指令。在计算机上加载和执行所述计算机程序或指令时,全部或部分地执行本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、接入网设备、终端设备或者其它可编程装置。所述计算机程序或指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机程序或指令可以从一个网站站点、计算机、服务器或数据中心通过有线或无线方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是集成一个或多个可用介质的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质,例如,软盘、硬盘、磁带;也可以是光介质,例如,数字视频光盘;还可以是半导体介质,例如,固态硬盘。该计算机可读存储介质可以是易失性或非易失性存储介质,或可包括易失性和非易失性两种类型的存储介质。
在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,不同的实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。
本申请中,“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。在本申请的文字描述中,字符“/”,一般表示前后关联对象是一种“或”的关系;在本申请的公式中,字符“/”,表示前后关联对象是一种“相除”的关系。
可以理解的是,在本申请的实施例中涉及的各种数字编号仅为描述方便进行的区分,并不用来限制本申请的实施例的范围。上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定。

Claims (33)

  1. 一种通信方法,其特征在于,包括:
    推理网元向训练网元发送第一请求消息,所述第一请求消息包括分析类型的标识信息,所述第一请求消息用于请求支持所述分析类型的模型,所述训练网元与所述推理网元的厂商类型不同,所述推理网元和所述训练网元的模型部署平台的类型相同;
    所述推理网元接收来自所述训练网元的第一响应消息,所述第一响应消息包括加密的模型或者所述加密的模型的地址信息,所述加密的模型支持所述分析类型;
    所述推理网元根据所述加密的模型,得到加密的分析结果;
    所述推理网元根据所述加密的分析结果,获取解密的分析结果。
  2. 如权利要求1所述的方法,其特征在于,所述推理网元根据所述加密的分析结果,获取解密的分析结果,包括:
    所述推理网元向所述训练网元发送所述加密的分析结果;
    所述推理网元接收来自所述训练网元的所述解密的分析结果。
  3. 如权利要求2所述的方法,其特征在于,所述第一响应消息中还包括第一指示信息,所述第一指示信息指示由所述训练网元对所述加密的分析结果进行解密。
  4. 如权利要求2或3所述的方法,其特征在于,所述推理网元向所述训练网元发送所述加密的分析结果,包括:
    所述推理网元向所述训练网元发送所述加密的分析结果和关联标识,所述关联标识用于所述训练网元确定所述加密的模型对应的加密算法。
  5. 如权利要求1所述的方法,其特征在于,所述第一响应消息中还包括第一网元的地址信息;
    所述推理网元根据所述加密的分析结果,获取解密的分析结果,包括:
    所述推理网元根据所述第一网元的地址信息,向所述第一网元发送所述加密的分析结果;
    所述推理网元接收来自所述第一网元的所述解密的分析结果。
  6. 如权利要求5所述的方法,其特征在于,所述推理网元根据所述第一网元的地址信息,向所述第一网元发送所述加密的分析结果,包括:
    所述推理网元根据所述第一网元的地址信息,向所述第一网元发送所述加密的分析结果和关联标识,所述关联标识用于所述第一网元确定所述加密的模型对应的加密算法。
  7. 如权利要求1至6中任一项所述的方法,其特征在于,所述第一响应消息中还包括第二指示信息,所述第二指示信息用于指示所述加密的模型对应的输入数据的数据类型。
  8. 如权利要求1至7中任一项所述的方法,其特征在于,所述第一请求消息中还包含所述推理网元的厂商类型和所述推理网元的模型部署平台的类型。
  9. 如权利要求1至8中任一项所述的方法,其特征在于,所述推理网元向训练网元发送第一请求消息之前,还包括:
    所述推理网元向数据管理网元发送第二请求消息,所述第二请求消息包括所述分析类型的标识信息,所述第二请求消息用于请求支持所述分析类型的网元;
    所述推理网元接收来自所述数据管理网元的第二响应消息,所述第二响应消息包括所 述训练网元的地址信息。
  10. 一种通信方法,其特征在于,包括:
    训练网元接收来自推理网元的第一请求消息,所述第一请求消息包括分析类型的标识信息,所述第一请求消息用于请求支持所述分析类型的模型,所述训练网元与所述推理网元的厂商类型不同,所述推理网元和所述训练网元的模型部署平台的类型相同;
    所述训练网元向所述推理网元发送第一响应消息,所述第一响应消息包括加密的模型或者所述加密的模型的地址信息;
    所述训练网元接收来自所述推理网元的加密的分析结果,所述加密的分析结果是根据所述加密的模型得到的;
    所述训练网元对所述加密的分析结果进行解密,得到解密的分析结果;
    所述训练网元向所述推理网元发送所述解密的分析结果。
  11. 如权利要求10所述的方法,其特征在于,所述第一请求消息中还包括所述推理网元的厂商类型和所述推理网元的模型部署平台的类型;
    所述训练网元向所述推理网元发送第一响应消息之前,还包括:
    所述训练网元确定所述训练网元与所述推理网元的厂商类型不同,且所述推理网元和所述训练网元的模型部署平台的类型相同。
  12. 如权利要求10或11所述的方法,其特征在于,所述第一响应消息中还包括第一指示信息,所述第一指示信息指示由所述训练网元对所述加密的分析结果进行解密。
  13. 如权利要求10至12中任一项所述的方法,其特征在于,所述第一响应消息中还包括第二指示信息,所述第二指示信息用于指示所述加密的模型对应的输入数据的数据类型。
  14. 如权利要求10至13中任一项所述的方法,其特征在于,所述训练网元接收来自推理网元的第一请求消息之前,还包括:
    所述训练网元向数据管理网元发送注册请求消息,所述注册请求消息包括所述分析类型的标识信息和所述训练网元的模型信息,所述模型信息包括所述训练网元的厂商类型和所述训练网元的模型部署平台的类型。
  15. 如权利要求10至14中任一项所述的方法,其特征在于,所述训练网元接收来自所述推理网元的加密的分析结果,包括:
    所述训练网元接收来自所述推理网元的所述加密的分析结果和关联标识;
    所述训练网元对所述加密的分析结果进行解密,得到解密的分析结果,包括:
    所述训练网元根据所述关联标识,确定所述加密的模型对应的加密算法;
    所述训练网元根据所述加密算法,确定解密算法;
    所述训练网元根据所述解密算法对所述加密的分析结果进行解密,得到所述解密的分析结果。
  16. 一种通信方法,其特征在于,包括:
    推理网元向训练网元发送请求消息,所述请求消息包括分析类型的标识信息,所述请求消息用于请求支持所述分析类型的模型,所述训练网元与所述推理网元的厂商类型不同,所述推理网元和所述训练网元的模型部署平台的类型不同;
    所述推理网元接收来自所述训练网元的响应消息,所述响应消息包括第一指示信息和第二网元的地址信息,所述第一指示信息指示拒绝请求支持所述分析类型的模型,所述第二网元支持的模型部署平台的类型包括所述训练网元的模型部署平台的类型;
    所述推理网元根据所述第二网元的地址信息,向所述第二网元发送待分析的数据,所述待分析的数据用于所述第二网元根据所述分析类型对应的加密的模型生成加密的分析结果;
    所述推理网元接收来自所述训练网元或者第一网元的解密的分析结果,所述解密的分析结果是所述训练网元或所述第一网元根据所述加密的分析结果得到的。
  17. 如权利要求16所述的方法,其特征在于,所述响应消息还包括拒绝原因值,所述拒绝原因值为所述训练网元与所述推理网元的厂商类型不同,且所述推理网元和所述训练网元的模型部署平台的类型不同。
  18. 如权利要求16或17所述的方法,其特征在于,所述请求消息中还包含所述推理网元的厂商类型和所述推理网元的模型部署平台的类型。
  19. 如权利要求16至18中任一项所述的方法,其特征在于,所述响应消息中还包括第二指示信息,所述第二指示信息用于指示所述加密的模型对应的输入数据的数据类型。
  20. 如权利要求16至19中任一项所述的方法,其特征在于,所述加密的模型是使用全同态加密算法、随机安全平均算法或差分隐私算法中的一个或多个进行加密的。
  21. 如权利要求16至20中任一项所述的方法,其特征在于,所述推理网元根据所述第二网元的地址信息,向所述第二网元发送待分析的数据,包括:
    所述推理网元根据所述第二网元的地址信息,向所述第二网元发送待分析的数据和关联标识,所述关联标识用于所述第一网元或所述训练网元确定所述加密的模型对应的加密算法。
  22. 一种通信方法,其特征在于,包括:
    训练网元接收来自推理网元的请求消息,所述请求消息包括分析类型的标识信息,所述请求消息用于请求支持所述分析类型的模型,所述训练网元与所述推理网元的厂商类型不同,所述推理网元和所述训练网元的模型部署平台的类型不同;
    所述训练网元向所述推理网元发送响应消息,所述响应消息包括第一指示信息和第二网元的地址信息,所述第一指示信息指示拒绝请求支持所述分析类型的模型,所述第二网元支持的模型部署平台的类型包括所述训练网元的模型部署平台的类型;
    所述训练网元接收来自所述第二网元的加密的分析结果,所述加密的分析结果是所述第二网元根据所述推理网元的待分析的数据和所述分析类型对应的加密的模型得到的;
    所述训练网元对所述加密的分析结果进行解密,得到解密的分析结果;
    所述训练网元向所述推理网元发送所述解密的分析结果。
  23. 如权利要求22所述的方法,其特征在于,所述请求消息中还包含所述推理网元的厂商类型和所述推理网元的模型部署平台的类型;
    所述训练网元向所述推理网元发送响应消息之前,还包括:
    所述训练网元确定所述训练网元与所述推理网元的厂商类型不同,且所述推理网元和所述训练网元的模型部署平台的类型不同。
  24. 如权利要求22或23所述的方法,其特征在于,所述响应消息还包括拒绝原因值,所述拒绝原因值为所述训练网元与所述推理网元的厂商类型不同,且所述推理网元和所述训练网元的模型部署平台的类型不同。
  25. 如权利要求22至24中任一项所述的方法,其特征在于,所述训练网元接收来自推理网元的请求消息之前,还包括:
    所述训练网元向所述第二网元发送所述分析类型的标识信息和所述分析类型对应的所述加密的模型。
  26. 如权利要求22至25中任一项所述的方法,其特征在于,所述响应消息中还包括第二指示信息,所述第二指示信息用于指示所述加密的模型对应的输入数据的数据类型。
  27. 如权利要求22至26中任一项所述的方法,其特征在于所述训练网元接收来自所述第二网元的加密的分析结果,包括:
    所述训练网元接收来自所述第二网元的所述加密的分析结果和关联标识;
    所述训练网元对所述加密的分析结果进行解密,得到解密的分析结果,包括:
    所述训练网元根据所述关联标识,确定所述加密的模型对应的加密算法;
    所述训练网元根据所述加密算法,确定解密算法;
    所述训练网元根据所述解密算法对所述加密的分析结果进行解密,得到所述解密的分析结果。
  28. 一种通信装置,其特征在于,包括处理器和存储器;所述存储器用于存储计算机指令,当所述装置运行时,所述处理器执行所述存储器存储的所述计算机指令,以使所述装置执行上述权利要求1至9、16至21中任一项所述方法,或执行上述权利要求10至15、22至27中任一项所述方法。
  29. 一种计算机可读存储介质,其特征在于,所述存储介质中存储有计算机程序或指令,当所述计算机程序或指令被通信装置执行时,实现如权利要求1至27中任一项所述的方法。
  30. 一种通信***,其特征在于,包括:
    推理网元,用于执行如权利要求1至9中任一项所述的方法;以及
    训练网元,用于向所述推理网元发送加密的模型或者所述加密的模型的地址信息。
  31. 一种通信***,其特征在于,包括:
    推理网元,用于向训练网元发送第一请求消息,所述第一请求消息包括分析类型的标识信息,所述第一请求消息用于请求支持所述分析类型的模型;以及
    所述训练网元,用于执行如权利要求10至15中任一项所述的方法。
  32. 一种通信***,其特征在于,包括:
    推理网元,用于执行如权利要求16至21中任一项所述的方法;以及
    训练网元,用于向所述推理网元发送第二网元的地址信息,所述第二网元支持的模型部署平台的类型包括所述训练网元的模型部署平台的类型。
  33. 一种通信***,其特征在于,包括:
    推理网元,用于向训练网元发送请求消息,所述请求消息包括分析类型的标识信息,所述请求消息用于请求支持所述分析类型的模型;以及
    所述训练网元,用于执行如权利要求22至27中任一项所述的方法。
PCT/CN2022/114043 2021-09-03 2022-08-22 一种通信方法、通信装置及通信*** WO2023030077A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111030657.9 2021-09-03
CN202111030657.9A CN115767514A (zh) 2021-09-03 2021-09-03 一种通信方法、通信装置及通信***

Publications (1)

Publication Number Publication Date
WO2023030077A1 true WO2023030077A1 (zh) 2023-03-09

Family

ID=85332899

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/114043 WO2023030077A1 (zh) 2021-09-03 2022-08-22 一种通信方法、通信装置及通信***

Country Status (2)

Country Link
CN (1) CN115767514A (zh)
WO (1) WO2023030077A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111083722A (zh) * 2019-04-15 2020-04-28 中兴通讯股份有限公司 模型的推送、模型的请求方法及装置、存储介质
CN112311564A (zh) * 2019-07-23 2021-02-02 华为技术有限公司 应用mos模型的训练方法、设备及***
CN110569288A (zh) * 2019-09-11 2019-12-13 中兴通讯股份有限公司 一种数据分析方法、装置、设备和存储介质
CN112784992A (zh) * 2019-11-08 2021-05-11 ***通信有限公司研究院 一种网络数据分析方法、功能实体及电子设备
WO2021155579A1 (zh) * 2020-02-07 2021-08-12 华为技术有限公司 一种数据分析方法、装置及***

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NTT DOCOMO, SAMSUNG: "NWDAF decomposition", 3GPP DRAFT; S2-2100411, 3RD GENERATION PARTNERSHIP PROJECT (3GPP), MOBILE COMPETENCE CENTRE ; 650, ROUTE DES LUCIOLES ; F-06921 SOPHIA-ANTIPOLIS CEDEX ; FRANCE, vol. SA WG2, no. eMeeting; 20210222 - 20210305, 18 February 2021 (2021-02-18), Mobile Competence Centre ; 650, route des Lucioles ; F-06921 Sophia-Antipolis Cedex ; France, XP052172749 *

Also Published As

Publication number Publication date
CN115767514A (zh) 2023-03-07

Similar Documents

Publication Publication Date Title
US11903048B2 (en) Connecting to virtualized mobile core networks
CN111901135B (zh) 一种数据分析方法及装置
EP3836577B1 (en) Session management method and device for user groups
WO2020007202A1 (zh) 一种数据传输方法、装置及***
CN109314839A (zh) 服务层的业务导向
EP4016961A1 (en) Information obtaining method and device
WO2020015634A1 (zh) 一种mec信息获取方法及装置
US11558813B2 (en) Apparatus and method for network automation in wireless communication system
KR20240060722A (ko) 논리적 tsn 브리지를 위한 방법 및 장치
US11889568B2 (en) Systems and methods for paging over WiFi for mobile terminating calls
US20220263879A1 (en) Multicast session establishment method and network device
CN115226103A (zh) 一种通信方法及装置
WO2023213177A1 (zh) 一种通信方法及装置
US20220225463A1 (en) Communications method, apparatus, and system
WO2022267652A1 (zh) 一种通信方法、通信装置及通信***
WO2023030077A1 (zh) 一种通信方法、通信装置及通信***
WO2021218244A1 (zh) 通信方法、装置及***
WO2021138784A1 (zh) 一种接入网络的方法、装置及***
CN115915196A (zh) 一种链路状态检测方法、通信装置及通信***
WO2023016298A1 (zh) 一种业务感知方法、通信装置及通信***
WO2023056784A1 (zh) 数据收集方法、通信装置及通信***
WO2023061207A1 (zh) 一种通信方法、通信装置及通信***
WO2023082858A1 (zh) 确定移动性管理策略的方法、通信装置及通信***
WO2023050781A1 (zh) 一种通信方法及通信装置
WO2023231450A1 (zh) 一种时间同步方法及通信装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22863216

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE