CN117979310A - Model enhancement training method and device, electronic equipment and storage medium - Google Patents


Publication number: CN117979310A
Application number: CN202311848311.9A
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: machine learning, model, network data, data analysis, learning model
Legal status: Pending
Inventors: 常洁, 林黛娣, 严黎明, 毕家瑜, 黄海昆, 项春林, 陈正文, 曾祥宇
Current/Original Assignee: Tianyi IoT Technology Co Ltd
Application filed by Tianyi IoT Technology Co Ltd on 2023-12-28 (priority date 2023-12-28); published as CN117979310A, legal status pending.

Abstract

The invention discloses a model enhancement training method and device, an electronic device, and a storage medium. The method includes: acquiring an analysis information request from a service consumer and transmitting the analysis information request to a multi-node cluster; acquiring a plurality of network data analysis function groups according to the analysis information request and obtaining the corresponding machine learning models; when the model training targets non-common data, generating a key based on the acquired network data analysis function groups and the corresponding machine learning models, and obtaining encrypted samples based on the key; obtaining a fusion machine learning model from the machine learning models combined with the encrypted samples; training the fusion machine learning model on all local samples and transmitting the trained fusion machine learning model to each network data analysis function group; and updating and iterating each machine learning model until a preset condition is reached. The invention enables efficient model enhancement training and can be widely applied in the technical field of data processing.

Description

Model enhancement training method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular to a model enhancement training method and device, an electronic device, and a storage medium.
Background
In the 5G network architecture, the NWDAF network element is introduced into the core network as the bearer entity for customized data collection and intelligent analysis. It can collect data from the NFs, AFs, and OAM of the 5G core network (5GC), and it also has intelligent-analysis capabilities (such as computation, training, inference, and prediction), outputting analysis results to an NF, AF, or OAM to support its decision-making. As the standards evolve, the network analysis and data collection flows of the NWDAF network element have been enhanced: the R16 and R17 standards define 14 and 7 use cases (UCs) respectively, mainly oriented toward network self-optimization, such as assisted UPF selection, assisted QoS parameter adjustment, assisted AMF mobility management, assisted network element/slice load balancing, UE-related data collection and analysis, and assisted anomalous event detection and localization (for example, congestion feedback). At present, NWDAF deployments are basically implemented around individual UCs, with model training performed per UC in combination with user portraits, terminal behaviors, network experience portraits, connection portraits, and the like.
Disclosure of Invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art. To this end, the invention provides a model enhancement training method and device, an electronic device, and a storage medium, which can perform model enhancement training efficiently.
In one aspect, an embodiment of the present invention provides a model enhancement training method, including:
acquiring an analysis information request from a service consumer, and transmitting the analysis information request to a multi-node cluster; the multi-node cluster comprises a plurality of network data analysis function groups;
acquiring a plurality of network data analysis function groups from the multi-node cluster according to the analysis information request, and obtaining the machine learning model in each network data analysis function group;
when the machine learning models trained in the acquired network data analysis function groups are trained on non-common data, generating a key based on the acquired network data analysis function groups and the corresponding machine learning models, and obtaining encrypted samples based on the key;
obtaining a fusion machine learning model based on the machine learning models corresponding to the acquired network data analysis function groups in combination with the encrypted samples; training the fusion machine learning model on all local samples, and transmitting the trained fusion machine learning model to each network data analysis function group of the multi-node cluster;
and updating and iterating the machine learning model corresponding to each network data analysis function group of the multi-node cluster based on the trained fusion machine learning model until a preset condition is reached.
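For illustration only (this sketch is not part of the claimed method, and all names in it are assumptions), the steps above can be rendered as the following Python loop, with each machine learning model reduced to a flat list of weights, fusion assumed to be simple parameter averaging, and the preset condition taken to be a precision threshold; `local_train` and `evaluate` stand in for group-local training and precision measurement, which the method leaves open.

```python
from statistics import mean


def fuse_models(models):
    # Assumed fusion rule: element-wise average of the group model parameters.
    return [mean(weights) for weights in zip(*models)]


def enhancement_training(group_models, local_train, evaluate,
                         max_rounds=10, target_precision=0.95):
    fused = fuse_models(group_models)          # form the fusion model
    for _ in range(max_rounds):                # update and iterate
        updated = [local_train(i, fused) for i in range(len(group_models))]
        fused = fuse_models(updated)
        if evaluate(fused) >= target_precision:
            break                              # preset condition reached
    return fused
```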
Optionally, obtaining the analysis information request of the service consumer includes at least one of:
acquiring a subscription message for the machine learning model sent by the service consumer;
acquiring a request message for the machine learning model sent by the service consumer.
Optionally, the analysis information request includes an analysis ID; obtaining a plurality of network data analysis function groups from the multi-node cluster according to the analysis information request includes:
determining a preselected model type based on the analysis ID, and dynamically selecting cluster members through the multi-node cluster to obtain the plurality of network data analysis function groups.
Optionally, before the step of obtaining a plurality of network data analysis function groups from the multi-node cluster, the method further comprises:
each network data analysis function group training its machine learning model with local data.
Optionally, obtaining the encrypted samples based on the key includes:
encrypting the acquired sample data of the plurality of network data analysis function groups with the key to obtain the encrypted samples.
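As a purely illustrative sketch of this step (the patent names no particular cipher; Fernet symmetric encryption from the third-party Python `cryptography` package is an assumption made here), one group's sample records could be encrypted under a freshly generated key as follows.

```python
import json

from cryptography.fernet import Fernet


def encrypt_group_samples(samples):
    # Generate a key for this group, then encrypt each sample record with it.
    key = Fernet.generate_key()
    cipher = Fernet(key)
    encrypted = [cipher.encrypt(json.dumps(record).encode("utf-8"))
                 for record in samples]
    return key, encrypted


# Hypothetical sample records of the kind described later in the embodiments.
key, blobs = encrypt_group_samples([{"ue_id": "ue-1", "rate": 12.5},
                                    {"ue_id": "ue-2", "rate": 3.1}])
```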
Optionally, updating and iterating the machine learning model corresponding to each network data analysis function group of the multi-node cluster based on the trained fusion machine learning model until a preset condition is reached includes:
performing new-sample training on each network data analysis function group of the multi-node cluster based on the trained fusion machine learning model, and correspondingly updating the machine learning model of each network data analysis function group until model training reaches a preset precision.
Optionally, the method further comprises:
returning the training-completion result of the machine learning model corresponding to each network data analysis function group to the multi-node cluster;
pushing the training-completion result to the service consumer through the multi-node cluster.
In another aspect, an embodiment of the present invention provides a model enhancement training apparatus, including:
a first module, configured to acquire an analysis information request from a service consumer and transmit the analysis information request to a multi-node cluster; the multi-node cluster comprises a plurality of network data analysis function groups;
a second module, configured to acquire a plurality of network data analysis function groups from the multi-node cluster according to the analysis information request, and obtain the machine learning model in each network data analysis function group;
a third module, configured to generate a key based on the acquired network data analysis function groups and the corresponding machine learning models when the machine learning models trained in the acquired network data analysis function groups are trained on non-common data, and obtain encrypted samples based on the key;
a fourth module, configured to obtain a fusion machine learning model based on the machine learning models corresponding to the acquired network data analysis function groups in combination with the encrypted samples, train the fusion machine learning model on all local samples, and transmit the trained fusion machine learning model to each network data analysis function group of the multi-node cluster;
and a fifth module, configured to update and iterate the machine learning model corresponding to each network data analysis function group of the multi-node cluster based on the trained fusion machine learning model until a preset condition is reached.
Optionally, the apparatus further comprises:
a sixth module, configured to return the training-completion result of the machine learning model corresponding to each network data analysis function group to the multi-node cluster;
and a seventh module, configured to push the training-completion result to the service consumer through the multi-node cluster.
In another aspect, an embodiment of the present invention provides an electronic device, including: a processor and a memory; the memory is used for storing programs; the processor executes the program to implement the model enhancement training method.
In another aspect, an embodiment of the present invention provides a computer storage medium in which a processor-executable program is stored; the processor-executable program, when executed by a processor, is configured to implement the model enhancement training method described above.
In the embodiment of the invention, an analysis information request from a service consumer is acquired and transmitted to a multi-node cluster comprising a plurality of network data analysis function groups; a plurality of network data analysis function groups are acquired from the multi-node cluster according to the analysis information request, and the machine learning model in each network data analysis function group is obtained; when the machine learning models trained in the acquired network data analysis function groups are trained on non-common data, a key is generated based on the acquired network data analysis function groups and the corresponding machine learning models, and encrypted samples are obtained based on the key; a fusion machine learning model is obtained based on the machine learning models corresponding to the acquired network data analysis function groups in combination with the encrypted samples; the fusion machine learning model is trained on all local samples, and the trained fusion machine learning model is transmitted to each network data analysis function group of the multi-node cluster; and the machine learning model corresponding to each network data analysis function group of the multi-node cluster is updated and iterated based on the trained fusion machine learning model until a preset condition is reached. Through collaborative training across different network data analysis function groups and model training targeted at non-common data, the embodiment of the invention can perform model enhancement training efficiently.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they serve to explain the invention and do not limit it.
FIG. 1 is a schematic diagram of an implementation environment for model enhancement training provided by an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a model enhancement training method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an expanded flow for obtaining an analysis information request according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an expanded flow of a model enhancement training method based on local data training according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an expanded flow of training-result pushing provided in an embodiment of the present invention;
FIG. 6 is a schematic diagram of the overall business flow of a model enhancement training method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a business flow of NWDAF subscription to services provided by an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a model-enhanced training device according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
Fig. 10 is a block diagram of a computer system suitable for implementing an electronic device according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It should be noted that although functional blocks are depicted in the block diagrams and a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that shown. The terms "first"/S100, "second"/S200, and the like in the description, in the claims, and in the above drawings are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
It should be noted that, to facilitate understanding of the technical solution of the present invention, the technical terms that may appear in the embodiments of the present invention are first explained:
NWDAF (Network Data Analytics Function);
OAM (Operation, Administration and Maintenance);
in the 5G core network (5GC), each node is called a Network Function (NF), and a node of an external application server is called an AF (Application Function);
QoS (Quality of Service) refers to a network's ability to use various underlying technologies to provide better service for specified network communications; it is a security mechanism of the network and a technology for solving network delay and congestion problems;
UC (Use Case).
It can be understood that the model enhancement training method provided by the embodiments of the present invention can be applied to any computer device with data processing and computing capabilities, and the computer device can be any of various terminals or servers. When the computer device in an embodiment is a server, the server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data, and artificial intelligence platforms. Alternatively, the terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like, but is not limited thereto.
FIG. 1 is a schematic diagram of an implementation environment according to an embodiment of the present invention. Referring to FIG. 1, the implementation environment includes at least one terminal 102 and a server 101. The terminal 102 and the server 101 can be connected through a network, wirelessly or by wire, to complete data transmission and exchange.
The server 101 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs (Content Delivery Networks), big data, and artificial intelligence platforms.
In addition, server 101 may also be a node server in a blockchain network. The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like.
The terminal 102 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, or the like. The terminal 102 and the server 101 may be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present invention.
Based on the implementation environment shown in FIG. 1, an embodiment of the present invention provides a model enhancement training method. The method is described below as applied to the server 101 by way of example; it is understood that it may also be applied to the terminal 102.
Referring to FIG. 2, FIG. 2 is a flowchart of a model enhancement training method according to an embodiment of the present invention; the execution body of the method may be any of the aforementioned computer devices (a server or a terminal). The method includes the following steps:
s100, acquiring an analysis information request of a service consumer, and transmitting the analysis information request to a multi-node cluster;
It should be noted that the multi-node cluster includes a plurality of network data analysis function groups. As shown in FIG. 3, in some embodiments, obtaining the analysis information request of the service consumer may include at least one of: S101, acquiring a subscription message for the machine learning model sent by the service consumer; S102, acquiring a request message for the machine learning model sent by the service consumer.
Illustratively, in some embodiments, the NWDAF service subscription works as follows:
1. An NWDAF service consumer subscribes to an ML (machine learning) model with the NWDAF (including the Model Training Logical Function, MTLF): ① parameters the NWDAF service consumer can provide include an analytics ID list and "allowed ML model" selection criteria, such as selecting the ML model according to the S-NSSAI and region to which it applies; ② the ML model address, the ML model application time, the application area, and the like.
2. An NWDAF service consumer requests an ML model from the NWDAF (including the MTLF): ① it requests the ML model associated with an analytics ID in order to perform analysis inference; ② when the NWDAF (including the MTLF) receives the ML model information request, it determines whether the existing ML model requires further training; if so, an NWDAF data collection operation needs to be initiated and used for ML model training, with the ML model address provided after training is completed, and so on.
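For readability, the following schematic sketch shows the kind of parameters such a subscription or request might carry; the field names are illustrative assumptions and are not the normative 3GPP service definitions.

```python
from dataclasses import dataclass, field


@dataclass
class MlModelSubscription:
    # Parameters an NWDAF service consumer might supply (illustrative).
    analytics_ids: list                               # requested analysis IDs
    model_filter: dict = field(default_factory=dict)  # e.g. S-NSSAI, region


@dataclass
class MlModelInfo:
    # Information associated with a provisioned ML model (illustrative).
    model_address: str       # where the trained ML model can be fetched
    application_time: str    # ML model application time
    application_area: str    # ML model application area


subscription = MlModelSubscription(
    analytics_ids=["qos_parameter_adjustment"],
    model_filter={"s_nssai": "010203", "region": "TAC-42"})
```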
S200, acquiring a plurality of network data analysis function groups from the multi-node cluster according to the analysis information request; obtaining machine learning models in each network data analysis function group;
It should be noted that, the analysis information request includes an analysis ID; in some embodiments, according to the analysis information request, obtaining a plurality of network data analysis function groups from the multi-node cluster may include: and determining the type of the preselected model based on the analysis ID, and further dynamically selecting cluster members through the multi-node cluster to obtain a plurality of network data analysis functional groups.
Illustratively, in some embodiments, for non-common data, the NWDAF instances are selected dynamically according to parameters such as the user type, the requested analysis ID, the ML model to be inferred, and the model training time.
It should be further noted that, as shown in FIG. 4, in some embodiments, before the step of obtaining a plurality of network data analysis function groups from the multi-node cluster, the method may further include: each network data analysis function group trains its machine learning model with local data.
Illustratively, in some embodiments, the multi-node cluster NWDAF selects the corresponding NWDAF groups according to the requested analysis ID and the allowed ML model selection, and the groups train their models with local data; the local data includes terminal information, network information, user information, application information, and the like, and the multi-node cluster dynamically selects its cluster members.
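A minimal sketch of this selection step, assuming each candidate group advertises its supported analysis IDs and attributes such as region (the matching rule and field names are assumptions):

```python
def select_cluster_members(cluster, analysis_id, model_filter):
    """Pick the NWDAF groups matching the requested analysis ID and filter."""
    selected = []
    for group in cluster:  # each entry describes one NWDAF group
        if analysis_id not in group["supported_analysis_ids"]:
            continue
        # Honour "allowed ML model" criteria such as S-NSSAI or region.
        if all(group.get(key) == value for key, value in model_filter.items()):
            selected.append(group)
    return selected


members = select_cluster_members(
    cluster=[{"supported_analysis_ids": ["qos_parameter_adjustment"],
              "region": "TAC-42"}],
    analysis_id="qos_parameter_adjustment",
    model_filter={"region": "TAC-42"})
```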
S300, when the machine learning models trained in the acquired plurality of network data analysis function groups are trained for non-common members, generating a key based on the acquired network data analysis function groups and the corresponding machine learning models; obtaining encrypted samples based on the key;
It should be noted that, in some embodiments, obtaining the encrypted samples based on the key may include: encrypting the acquired sample data of the plurality of network data analysis function groups with the key to obtain the encrypted samples.
Illustratively, in some embodiments, for non-common members, the NWDAF instances are selected dynamically according to parameters such as the user type, the requested analysis ID, the ML model to be inferred, and the model training time, and a one-time key is generated dynamically from the data held on them.
In some embodiments, the common data (associated with the common members) is trained with public keys. Specifically, public keys are distributed to the plurality of NWDAF groups, and the NWDAF groups align and concatenate their encrypted samples of the common users with the central NWDAF for encrypted training.
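The embodiments do not fix how the encrypted samples of common users are aligned across groups; the following sketch assumes salted-hash matching of user IDs followed by feature concatenation, which is one plausible realization.

```python
import hashlib


def align_and_concatenate(group_samples, salt="demo-salt"):
    """Align per-group samples on common users and concatenate their features."""
    def hashed_id(user_id):
        # Salted hash so raw user IDs need not be exchanged between groups.
        return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()

    hashed = [{hashed_id(uid): features for uid, features in group.items()}
              for group in group_samples]
    common = set(hashed[0]).intersection(*hashed[1:])
    # Concatenate the feature vectors each group holds for a common user.
    return {h: sum((group[h] for group in hashed), []) for h in common}


joint = align_and_concatenate([{"ue-1": [0.2, 0.7], "ue-2": [0.1, 0.9]},
                               {"ue-1": [1.5], "ue-3": [2.0]}])
# Only "ue-1" is common to both groups, so `joint` has one aligned record.
```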
It should be further noted that, according to the request or subscription information of the service consumer, the multi-node cluster NWDAF selects the relevant NWDAF instances NWDAF1, …, NWDAFN according to the user and subscription type; these are respectively responsible for the different training models ML1, …, MLQ and hold different samples (such as UE ID, slice ID, group ID, DNN, application ID, priority, expected UE behavior parameters, UE communication behavior [rate, start time], UE location, timestamp, TAC, base station software and hardware resource group, user ID, used network element ID, and network element load). Because the participants hold different data, the finally trained models have different model parameters; the cost factor of the algorithm is also considered.
S400, obtaining a fusion machine learning model based on the machine learning models corresponding to the acquired network data analysis function groups in combination with the encrypted samples; training the fusion machine learning model on all local samples, and transmitting the trained fusion machine learning model to each network data analysis function group of the multi-node cluster;
Illustratively, in some embodiments, after NWDAF1, …, NWDAFN each complete training, the multi-node cluster NWDAF forms a fused ML model based on the encrypted samples and the models ML1, …, MLQ, and performs fused ML model training based on all existing samples.
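As one plausible instantiation of forming the fused ML model (the embodiments do not prescribe a fusion algorithm), the group models could be averaged element-wise, weighted by each group's sample count, in the style of federated averaging:

```python
def fuse_weighted(models, sample_counts):
    """Weighted element-wise average of group model parameters (assumed rule)."""
    total = sum(sample_counts)
    fused = [0.0] * len(models[0])
    for model, count in zip(models, sample_counts):
        for i, weight in enumerate(model):
            fused[i] += weight * count / total
    return fused


# A group holding 300 samples outweighs one holding 100 in the fused model.
fused = fuse_weighted([[0.2, 0.4], [0.6, 0.0]], sample_counts=[300, 100])
# -> [0.3, 0.3]
```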
S500, updating and iterating the machine learning model corresponding to each network data analysis function group of the multi-node cluster based on the trained fusion machine learning model until a preset condition is reached.
It should be noted that, in some embodiments, step S500 may include: performing new-sample training on each network data analysis function group of the multi-node cluster based on the trained fusion machine learning model, and correspondingly updating the machine learning model of each network data analysis function group until model training reaches the preset precision.
Illustratively, in some embodiments, the fused ML model is sent to NWDAF1, …, NWDAFN for new-sample training on each; the original ML models are updated to ML model 1*, …, ML model N*; and the update iteration is completed once the precision meets the required result, with the result returned to the multi-node cluster NWDAF for model preservation.
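A minimal sketch of one group's new-sample update in this step, assuming a linear model and a single least-squares SGD pass (both assumptions; the embodiments leave the local training algorithm open):

```python
def local_update(model, xs, ys, lr=0.01):
    """One pass of least-squares SGD over a group's new local samples."""
    weights = list(model)
    for x, y in zip(xs, ys):
        prediction = sum(w * xi for w, xi in zip(weights, x))
        error = prediction - y
        weights = [w - lr * error * xi for w, xi in zip(weights, x)]
    return weights


# Each group refines the distributed fused model on its own new samples;
# the cluster keeps the updated models once precision meets the requirement.
updated = local_update([0.0, 0.0], xs=[[1.0, 0.0], [0.0, 1.0]], ys=[1.0, 2.0])
```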
In some embodiments, as shown in FIG. 5, the method may further include: T100, returning the training-completion result of the machine learning model corresponding to each network data analysis function group to the multi-node cluster; and T200, pushing the training-completion result to the service consumer through the multi-node cluster.
Illustratively, in some embodiments, the multi-node cluster NWDAF saves the model and the inference output, and pushes the result to the service consumer via a response or a notification.
To illustrate the principles of the present invention in detail, the general flow of the present invention is described below in connection with certain specific embodiments. It should be understood that the following illustrates the principles of the present invention and is not to be construed as limiting it.
The embodiment of the invention realizes NWDAF-based enhanced model training, including the following. According to the subscribed/requested ML model, the multi-node cluster NWDAF selects the corresponding NWDAF groups based on the requested analysis ID and the allowed ML model selection; the multi-node cluster NWDAF dynamically selects its members according to a related algorithm and generates a dynamic key. If the existing ML model needs further training, relevant data are collected from NF/OAM/AF to train the ML model; after each NWDAF completes training, a fused ML model is formed based on the encrypted full set of samples, the UCs, and the different ML models; fused ML model training is performed based on all existing samples; the fused ML model is sent to each NWDAF for new-sample training; the original ML model 1*, …, ML model Q* are updated; and the update iteration is completed, with the results returned to the multi-node cluster NWDAF to save model 1*, …, model Q* and the fused model, completing the aggregation and updating of the training models. The approach applies broadly to different users, different UCs, and different training ML models; the data on the multi-node cluster NWDAF are all encrypted; it therefore has strong applicability to NF service consumers and a degree of practicability.
The method is mainly applied to enhanced NWDAF model training. An NWDAF service consumer sends a subscription/request for an ML model; the multi-node cluster NWDAF selects the corresponding NWDAF groups according to the requested analysis ID and the allowed ML model selection, and trains models with local data; the local data includes terminal information, network information, user information, application information, and the like, and the multi-node cluster dynamically selects its cluster members. If the existing ML model needs further training, relevant data are collected from NF/OAM/AF to train the ML model. Common data are trained through public keys; for non-common data, a fused ML model is formed by dynamically selecting the UC and the different ML models according to the selected data types, training difficulty, and the like, using dynamically generated keys; the fused ML model is sent to each NWDAF for new-sample training; the original ML model 1*, …, ML model N* are updated; the update iteration is completed, and the result is returned to the multi-node cluster NWDAF to save model 1*, …, model N* and the fused model, completing aggregation and updating of the training models. The next iteration is started as required, and a timestamp is saved at the same time, so that the multi-node cluster NWDAF can be called by NF service consumers or can actively push results after subscription. The approach applies broadly to different users, different UCs, and different training ML models, and can be used in scenarios fusing different UCs; since the multi-node cluster data are all encrypted, it has strong applicability to NF service consumers and a degree of practicability.
The technical principle of the embodiment of the present invention is described below with reference to the accompanying drawings. As shown in FIG. 6, the flow is as follows:
Steps 1-2 are existing procedures: the service consumer may send an analysis information request to the NRF or the multi-node cluster NWDAF;
Specifically, as shown in FIG. 7, the NWDAF service subscription works as follows:
1. An NWDAF service consumer subscribes to an ML model with the NWDAF (including the MTLF): ① parameters the NWDAF service consumer can provide include an analytics ID list and "allowed ML model" selection criteria, such as selecting the ML model according to the S-NSSAI and region to which it applies; ② the ML model address, the ML model application time, the application area, and the like.
2. An NWDAF service consumer requests an ML model from the NWDAF (including the MTLF): ① it requests the ML model associated with an analytics ID in order to perform analysis inference; ② when the NWDAF (including the MTLF) receives the ML model information request, it determines whether the existing ML model requires further training; if so, an NWDAF data collection operation needs to be initiated and used for ML model training, with the ML model address provided after training is completed, and so on.
Steps 3-5: in the analysis information request, the NF service consumer can provide one or more requested analysis IDs for ML model inference (for example, UE mobility analysis for analysis ID 4, or QoS parameter adjustment) and one or more requested areas of interest. After the multi-node cluster NWDAF resolves the request, the cluster members are composed dynamically. For common members, specifically, public keys are distributed to the multiple NWDAF groups, and the NWDAF groups align and concatenate their encrypted samples of the common users with the central NWDAF for encrypted training. For non-common members, the NWDAF instances are selected dynamically according to parameters such as the user type, the requested analysis ID, the ML model to be inferred, and the model training time, and a one-time key is generated dynamically from the data held on them.
Step 6: according to the request or subscription information of the service consumer, the multi-node cluster NWDAF selects the relevant NWDAF instances NWDAF1, …, NWDAFN according to the user and subscription type; these are respectively responsible for the different training models ML1, …, MLQ and hold different samples (such as UE ID, slice ID, group ID, DNN, application ID, priority, expected UE behavior parameters, UE communication behavior [rate, start time], UE location, timestamp, TAC, base station software and hardware resource group, user ID, used network element ID, and network element load). Because the participants hold different data, the finally trained models have different model parameters; the cost factor of the algorithm is also considered.
Steps 7-8: after NWDAF1, …, NWDAFN each complete training, the multi-node cluster NWDAF forms a fused ML model based on the encrypted samples and the models ML1, …, MLQ, and trains the fused ML model based on all existing samples; the fused ML model is then sent to NWDAF1, …, NWDAFN for new-sample training on each; the original ML models are updated to ML model 1*, …, ML model N*; and the update iteration is completed once the precision meets the required result, with the result returned to the multi-node cluster NWDAF for model preservation.
In summary, in the embodiment of the present invention, an NWDAF service consumer sends a subscription/request for an ML model; the multi-node cluster NWDAF selects the corresponding NWDAF groups according to the requested analysis ID and the allowed ML model selection, and trains models with local data, where the local data includes terminal information, network information, user information, application information, and the like, and the multi-node cluster dynamically selects its cluster members. If the existing ML model needs further training, relevant data are collected from NF/OAM/AF to train the ML model. Common data are trained through public keys; for non-common data, a fused ML model is formed by dynamically selecting the UC and the different ML models according to the selected data types, training difficulty, and the like, using dynamically generated keys; the fused ML model is sent to each NWDAF for new-sample training; the original ML model 1*, …, ML model N* are updated; the update iteration is completed, and the result is returned to the multi-node cluster NWDAF to save model 1*, …, model N* and the fused model, completing aggregation and updating of the training models. The next iteration is started as required, and a timestamp is saved, so that the multi-node cluster NWDAF can be called by NF service consumers or can actively push results after subscription. The approach applies broadly to different users, different UCs, and different training ML models, and can be used in scenarios fusing different UCs; it has strong applicability to NF service consumers. A fused ML model is formed based on the encrypted samples and the original training ML models, fused ML model training is performed based on all existing samples, and the fused ML model is sent to the original NWDAF instances for new-sample training, forming ML model 1*, …, ML model N* and completing the update iteration. Compared with the prior art, the method is better suited to fusion-scenario training across different UCs and has stronger adaptability.
In another aspect, as shown in FIG. 8, an embodiment of the present invention provides a model enhancement training apparatus 800, including: a first module 810, configured to acquire an analysis information request from a service consumer and transmit it to a multi-node cluster, the multi-node cluster comprising a plurality of network data analysis function groups; a second module 820, configured to acquire a plurality of network data analysis function groups from the multi-node cluster according to the analysis information request and obtain the machine learning model in each network data analysis function group; a third module 830, configured to generate a key based on the acquired network data analysis function groups and the corresponding machine learning models when the machine learning models trained in the acquired network data analysis function groups are trained on non-common data, and obtain encrypted samples based on the key; a fourth module 840, configured to obtain a fusion machine learning model based on the machine learning models corresponding to the acquired network data analysis function groups in combination with the encrypted samples, train the fusion machine learning model on all local samples, and transmit the trained fusion machine learning model to each network data analysis function group of the multi-node cluster; and a fifth module 850, configured to update and iterate the machine learning model corresponding to each network data analysis function group of the multi-node cluster based on the trained fusion machine learning model until a preset condition is reached.
In some embodiments, the apparatus may further include: a sixth module, configured to return the training-completion result of the machine learning model corresponding to each network data analysis function group to the multi-node cluster; and a seventh module, configured to push the training-completion result to the service consumer through the multi-node cluster.
The content of the method embodiments of the present invention applies to the apparatus embodiments; the specific functions of the apparatus embodiments are the same as those of the method embodiments, and the beneficial effects achieved are the same as those achieved by the method.
On the other hand, as shown in FIG. 9, an embodiment of the present invention further provides an electronic device 900, which includes at least one processor 910 and at least one memory 920 for storing at least one program; one processor 910 and one memory 920 are taken as an example.
The processor 910 and the memory 920 may be connected by a bus or other means.
Memory 920 acts as a non-transitory computer readable storage medium that may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, memory 920 may include high-speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some implementations, the memory 920 may optionally include memory located remotely from the processor, which may be connected to the device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The above-described embodiments of the electronic device are merely illustrative; the units described as separate components may or may not be physically separate, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In particular, FIG. 10 schematically shows a block diagram of a computer system for implementing an electronic device of an embodiment of the invention.
It should be noted that, the computer system 1000 of the electronic device shown in fig. 10 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present invention.
As shown in FIG. 10, the computer system 1000 includes a central processing unit (CPU) 1001, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage portion 1008 into a random access memory (RAM) 1003. The RAM 1003 also stores various programs and data necessary for system operation. The CPU 1001, the ROM 1002, and the RAM 1003 are connected to one another via a bus 1004. An input/output interface 1005 (I/O interface) is also connected to the bus 1004.
The following components are connected to the input/output interface 1005: an input portion 1006 including a keyboard, a mouse, and the like; an output portion 1007 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 1008 including a hard disk and the like; and a communication portion 1009 including a network interface card such as a local area network card or a modem. The communication portion 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the input/output interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is installed on the drive 1010 as needed, so that a computer program read from it is installed into the storage portion 1008 as needed.
In particular, the processes described in the various method flowcharts may be implemented as computer software programs according to embodiments of the invention. For example, embodiments of the present invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 1009 and/or installed from the removable medium 1011. The computer program, when executed by the central processing unit 1001, performs the various functions defined in the system of the present invention.
It should be noted that the computer-readable medium shown in the embodiments of the present invention may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, and the like, or any suitable combination of the foregoing.
Another aspect of the embodiments of the present invention also provides a computer-readable storage medium storing a program that is executed by a processor to implement the foregoing method.
The content of the method embodiment of the invention is applicable to the computer readable storage medium embodiment, the functions of the computer readable storage medium embodiment are the same as those of the method embodiment, and the achieved beneficial effects are the same as those of the method.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the foregoing method.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that although in the above detailed description several modules of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the invention. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present invention may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a touch terminal, or a network device, etc.) to perform the method according to the embodiments of the present invention.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the functions and/or features may be integrated in a single physical device and/or software module or may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disk, or other various media capable of storing program code.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example, an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device). For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be captured electronically, for instance by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the embodiments, and those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and the equivalent modifications or substitutions are intended to be included in the scope of the present invention as defined in the appended claims.

Claims (10)

1. A model-enhanced training method, comprising:
acquiring an analysis information request from a service consumer, and transmitting the analysis information request to a multi-node cluster; the multi-node cluster comprises a plurality of network data analysis function groups;
acquiring a plurality of network data analysis function groups from the multi-node cluster according to the analysis information request; obtaining the machine learning model in each network data analysis function group;
when the machine learning models trained in the acquired plurality of network data analysis function groups are trained on non-common data, generating a key based on the acquired network data analysis function groups and the corresponding machine learning models; obtaining encrypted samples based on the key;
obtaining a fusion machine learning model based on the machine learning models corresponding to the acquired network data analysis function groups in combination with the encrypted samples; training the fusion machine learning model on all local samples, and transmitting the trained fusion machine learning model to each network data analysis function group of the multi-node cluster;
and updating and iterating the machine learning model corresponding to each network data analysis function group of the multi-node cluster based on the trained fusion machine learning model until a preset condition is reached.
2. The model enhanced training method of claim 1, wherein the obtaining an analysis information request of a service consumer comprises at least one of:
acquiring a subscription message for the machine learning model sent by the service consumer;
acquiring a request message for the machine learning model sent by the service consumer.
3. The model enhancement training method according to claim 1, wherein the analysis information request includes an analysis ID; the obtaining, according to the analysis information request, a plurality of network data analysis function groups from the multi-node cluster includes:
And determining a preselected model type based on the analysis ID, and further dynamically selecting cluster members through the multi-node cluster to obtain a plurality of network data analysis function groups.
4. The model enhancement training method according to claim 1, wherein before the step of obtaining a plurality of the network data analysis function groups from the multi-node cluster, the method further comprises:
each of the network data analysis function groups trains the machine learning model of the network data analysis function group with local data.
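Claim 4's local pre-training step, sketched under the same assumptions as the example after claim 1 (a linear model fitted by gradient descent on one group's data only; nothing leaves the group at this stage):

```python
import numpy as np

def local_pretrain(samples, targets, lr=0.01, steps=200):
    """Pre-train one group's model on its own local data only."""
    w = np.zeros(samples.shape[1])
    for _ in range(steps):
        grad = 2 * samples.T @ (samples @ w - targets) / len(targets)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
local_weights = local_pretrain(rng.normal(size=(64, 4)), rng.normal(size=64))
```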
5. The model enhancement training method of claim 1, wherein the obtaining an encrypted sample based on the key comprises:
encrypting the acquired sample data of the plurality of network data analysis function groups with the key to obtain the encrypted sample.
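Claim 5 does not name a cipher, so the sketch below uses symmetric Fernet encryption from the `cryptography` package purely for illustration; how the key of claim 1 is actually derived from the function groups and their models is left open by the claims.

```python
from cryptography.fernet import Fernet
import json

key = Fernet.generate_key()  # stands in for the key generated per claim 1
cipher = Fernet(key)

# Illustrative sample records; field names are assumptions.
samples = [{"cell_id": 101, "load": 0.73}, {"cell_id": 102, "load": 0.41}]
encrypted = [cipher.encrypt(json.dumps(s).encode()) for s in samples]

# Any holder of the key can recover the samples for fused training.
decrypted = [json.loads(cipher.decrypt(token)) for token in encrypted]
assert decrypted == samples
```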
6. The model enhancement training method according to claim 1, wherein the updating and iterating of the machine learning model corresponding to each network data analysis function group of the multi-node cluster based on the trained fused machine learning model until a preset condition is reached comprises:
performing, based on the trained fused machine learning model, new-sample training for each network data analysis function group of the multi-node cluster, and correspondingly updating the machine learning model of each network data analysis function group, until model training reaches a preset precision.
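One way to realize claim 6's stopping rule: keep running training rounds over each group's samples and stop once the mean error falls below a preset precision. The linear model, the learning rate, and the MSE threshold are all assumptions for illustration.

```python
import numpy as np

def update_until_precision(groups, lr=0.01, target_mse=0.05, max_rounds=50):
    """groups: list of (samples, targets) pairs, one per function group.
    Repeat per-group training rounds until mean MSE reaches the preset precision."""
    w = np.zeros(groups[0][0].shape[1])
    for round_no in range(1, max_rounds + 1):
        for X, y in groups:              # new-sample training for each group
            for _ in range(20):
                w -= lr * 2 * X.T @ (X @ w - y) / len(y)
        mse = np.mean([np.mean((X @ w - y) ** 2) for X, y in groups])
        if mse <= target_mse:            # preset condition: model precision reached
            return w, round_no
    return w, max_rounds

rng = np.random.default_rng(2)
true_w = np.array([1.0, -2.0, 0.5, 3.0])
groups = []
for _ in range(3):
    X = rng.normal(size=(64, 4))
    groups.append((X, X @ true_w))
w, rounds = update_until_precision(groups)
print(f"preset precision reached after {rounds} round(s)")
```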
7. The model enhancement training method of claim 1, further comprising:
returning a training-completion result of the machine learning model corresponding to each network data analysis function group to the multi-node cluster; and
pushing the training-completion results to the service consumer through the multi-node cluster.
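Claim 7's result flow, sketched with assumed names: each group reports a training-completion result to the cluster, which pushes a summary to the service consumer over an unspecified notification interface (`notify` below is a stand-in).

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TrainingResult:       # illustrative payload; field names are assumptions
    analysis_id: str
    group_id: str
    precision: float

def push_to_consumer(results: List[TrainingResult], notify: Callable[[dict], None]):
    """Cluster-side aggregation of per-group results, pushed to the consumer."""
    summary = {r.group_id: r.precision for r in results}
    notify({"event": "training_complete", "groups": summary})

push_to_consumer(
    [TrainingResult("load_analytics", "nwdaf-1", 0.04)],
    notify=print,  # stand-in for the real notification transport
)
```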
8. A model enhancement training device, comprising:
a first module, configured to acquire an analysis information request from a service consumer and transmit the analysis information request to a multi-node cluster, the multi-node cluster comprising a plurality of network data analysis function groups;
a second module, configured to acquire a plurality of network data analysis function groups from the multi-node cluster according to the analysis information request, and obtain the machine learning model in each network data analysis function group;
a third module, configured to, when the training of the machine learning models in the acquired plurality of network data analysis function groups targets non-common characteristics, generate a key based on the acquired network data analysis function groups and the corresponding machine learning models, and obtain an encrypted sample based on the key;
a fourth module, configured to obtain a fused machine learning model based on the machine learning models corresponding to the acquired network data analysis function groups in combination with the encrypted sample, perform model training on the fused machine learning model with all local samples, and transmit the trained fused machine learning model to each network data analysis function group of the multi-node cluster; and
a fifth module, configured to update and iterate the machine learning model corresponding to each network data analysis function group of the multi-node cluster based on the trained fused machine learning model until a preset condition is reached.
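Purely as a structural aid, the skeleton below maps the five claimed modules onto methods of one class; the names and the empty bodies are placeholders, not the claimed device.

```python
class ModelEnhancementTrainingDevice:
    """Skeleton mirroring the five claimed modules; all bodies are placeholders."""

    def acquire_request(self, consumer):                  # first module
        ...

    def select_groups(self, request, cluster):            # second module
        ...

    def generate_key_and_encrypt(self, groups, models):   # third module
        ...

    def fuse_and_train(self, models, encrypted_samples):  # fourth module
        ...

    def iterate_until_condition(self, cluster, fused):    # fifth module
        ...
```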
9. An electronic device, comprising a processor and a memory, wherein:
the memory is configured to store a program; and
the processor, when executing the program, implements the method of any one of claims 1 to 7.
10. A computer storage medium storing a processor-executable program, wherein the processor-executable program, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202311848311.9A (filed 2023-12-28, priority 2023-12-28): Model enhancement training method and device, electronic equipment and storage medium. Status: Pending. Publication: CN117979310A.

Priority Applications (1)

Application Number: CN202311848311.9A · Priority Date: 2023-12-28 · Filing Date: 2023-12-28 · Title: Model enhancement training method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number: CN202311848311.9A · Priority Date: 2023-12-28 · Filing Date: 2023-12-28 · Title: Model enhancement training method and device, electronic equipment and storage medium

Publications (1)

Publication Number: CN117979310A · Publication Date: 2024-05-03

Family

ID=90846758

Family Applications (1)

Application Number: CN202311848311.9A (Pending, CN117979310A) · Title: Model enhancement training method and device, electronic equipment and storage medium · Priority Date: 2023-12-28 · Filing Date: 2023-12-28

Country Status (1)

Country: CN · Publication: CN117979310A


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination