CN112183619A - Digital model fusion method and device - Google Patents


Info

Publication number
CN112183619A
CN112183619A (application CN202011030377.3A)
Authority
CN
China
Prior art keywords
fusion
model
feature information
type
digital
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011030377.3A
Other languages
Chinese (zh)
Inventor
汪利鹏
陈卓
李侃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Three Eye Spirit Information Technology Co ltd
Original Assignee
Nanjing Three Eye Spirit Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Three Eye Spirit Information Technology Co ltd filed Critical Nanjing Three Eye Spirit Information Technology Co ltd
Priority to CN202011030377.3A priority Critical patent/CN112183619A/en
Publication of CN112183619A publication Critical patent/CN112183619A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

An embodiment of the application provides a digital model fusion method and device. The method comprises: determining a model fusion type, wherein the models comprise HI models and AI models, and the model fusion types comprise HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion; and determining a corresponding model fusion mode according to the model fusion type and performing model fusion. The method and device can realize fusion between digital models effectively, accurately and conveniently.

Description

Digital model fusion method and device
Technical Field
The application relates to the field of model fusion, in particular to a digital model fusion method and device.
Background
HI Model (Human Intelligence Model): a knowledge-driven digital model that encodes human cognition and experience is called an HI model. Its advantages are that its logic and parameters have clear physical meaning, its parameters are easy to adjust, and the resulting model adapts well. However, an HI model often requires many key parameters, and if these cannot be obtained accurately, the model's effectiveness may suffer. The HI model is essentially a solidification of human experience, knowledge and methods; it rests on business logic, principles and ideas, and emphasizes causal relationships.
AI Model (Artificial Intelligence Model): a model formed mainly in a data-driven way using algorithms such as machine learning is called an AI model. Widely used AI models include basic data-analysis models (e.g., algorithms for regression, clustering, classification and dimensionality reduction), machine learning models (e.g., neural networks for further recognition and prediction), and intelligent control structure models. The AI model starts more from the data itself, pays less attention to the underlying mechanism, and emphasizes end-to-end correlation.
The inventors find that current work on model fusion, in both academia and industry, focuses mainly on fusing AI models, and more specifically on fusing the results of AI models, i.e., model integration (ensembling). Integration methods operate mainly on model results, combining several small models with techniques such as Voting, Bagging and Boosting, but they give insufficient consideration to the models themselves and ignore the case of multiple distinct owners. When the models to be fused belong to different owners, the data each model uses for inference cannot participate in the fusion directly because of data privacy. At present there is no specific method for fusing HI models, or for fusing HI models with AI models.
Disclosure of Invention
To address the problems in the prior art, the application provides a digital model fusion method and device that can realize fusion between digital models effectively, accurately and conveniently.
In order to solve at least one of the above problems, the present application provides the following technical solutions:
in a first aspect, the present application provides a method for fusing digital models, including:
determining a model fusion type, wherein the model comprises an HI model and an AI model, and the model fusion type comprises HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion;
and determining a corresponding model fusion mode according to the model fusion type and carrying out model fusion.
Further, the determining a corresponding model fusion mode and performing model fusion according to the model fusion type includes:
extracting feature information of the model, wherein the feature information comprises: model characteristics, model structures, model parameters and model evaluation indexes;
and if the model fusion type is any one of HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion, performing feature fusion according to a preset fusion target of the model fusion and the model features of the model to obtain the model subjected to the feature fusion.
Further, the determining a corresponding model fusion mode and performing model fusion according to the model fusion type includes:
extracting feature information of the model, wherein the feature information comprises: model characteristics, model structures, model parameters and model evaluation indexes;
and if the model fusion type is any one of AI fusion and AI co-fusion, determining the same-layer parameters with the similarity between the two models exceeding a threshold value according to the model structure of the model, and performing parameter fusion according to the same-layer parameters to obtain the model subjected to parameter fusion.
Further, the determining a corresponding model fusion mode and performing model fusion according to the model fusion type includes:
extracting feature information of the model, wherein the feature information comprises: model characteristics, model structures, model parameters and model evaluation indexes;
and if the model fusion type is any one of HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion, performing result fusion according to a preset fusion target of the model fusion and the model evaluation index of the model to obtain the model subjected to the result fusion.
In a second aspect, the present application provides a digital model fusion apparatus, including:
the model fusion type determining module is used for determining a model fusion type, wherein the model comprises an HI model and an AI model, and the model fusion type comprises HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion;
and the model fusion mode determining module is used for determining a corresponding model fusion mode according to the model fusion type and carrying out model fusion.
Further, the model fusion mode determination module includes:
a feature information extraction unit, configured to extract feature information of the model, where the feature information includes: model characteristics, model structures, model parameters and model evaluation indexes;
and the characteristic fusion unit is used for carrying out characteristic fusion according to a preset fusion target of the model fusion and the model characteristics of the model to obtain the model after the characteristic fusion if the model fusion type is any one of HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion.
Further, the model fusion mode determination module includes:
a feature information extraction unit, configured to extract feature information of the model, where the feature information includes: model characteristics, model structures, model parameters and model evaluation indexes;
and the parameter fusion unit is used for determining the same-layer parameters with the similarity between the two models exceeding a threshold value according to the model structure of the model if the model fusion type is any one of AI fusion and AI co-fusion, and performing parameter fusion according to the same-layer parameters to obtain the model subjected to parameter fusion.
Further, the model fusion mode determination module includes:
a feature information extraction unit, configured to extract feature information of the model, where the feature information includes: model characteristics, model structures, model parameters and model evaluation indexes;
and the result fusion unit is used for carrying out result fusion according to a preset fusion target of the model fusion and the model evaluation index of the model to obtain the model after the result fusion if the model fusion type is any one of HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion.
In a third aspect, the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the digital model fusion method when executing the program.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the digital model fusion method described above.
According to the above technical scheme, a model fusion type is determined, wherein the models comprise HI models and AI models, and the model fusion types comprise HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion; a corresponding model fusion mode is then determined according to the model fusion type and model fusion is performed, thereby realizing fusion between digital models effectively, accurately and conveniently.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description are briefly introduced below. The following drawings show some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a digital model fusion method according to an embodiment of the present application;
FIG. 2 is a second flowchart illustrating a digital model fusion method according to an embodiment of the present application;
FIG. 3 is a third flowchart illustrating a digital model fusion method according to an embodiment of the present application;
FIG. 4 is a fourth flowchart illustrating a digital model fusion method according to an embodiment of the present application;
FIG. 5 is a block diagram of a digital model fusion apparatus according to an embodiment of the present invention;
FIG. 6 is a second block diagram of a digital model fusion apparatus according to an embodiment of the present application;
FIG. 7 is a third block diagram of a digital model fusion apparatus according to an embodiment of the present invention;
FIG. 8 is a fourth block diagram of a digital model fusion apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions are described below completely with reference to the drawings. The described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present application.
Considering that current work on model fusion, in both academia and industry, focuses mainly on fusing AI models, and more specifically on fusing AI model results, i.e., model integration (whose common methods such as Voting, Bagging and Boosting operate on model results while giving insufficient consideration to the models themselves and to multiple distinct owners); that when the models to be fused belong to different owners, the data used for inference cannot participate directly because of data privacy; and that no specific method exists for HI model fusion or for HI and AI model fusion, the application provides a digital model fusion method and device. The models comprise HI models and AI models, and the model fusion types comprise HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion; a corresponding model fusion mode is determined according to the model fusion type and model fusion is performed, realizing fusion between digital models effectively, accurately and conveniently.
In order to effectively, accurately and conveniently realize the fusion between digital models, the present application provides an embodiment of a digital model fusion method, and referring to fig. 1, the digital model fusion method specifically includes the following contents:
step S101: determining a model fusion type, wherein the model comprises an HI model and an AI model, and the model fusion type comprises HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion.
Optionally, the models may belong to different owners, and the data involved in model inference is divided into local data and remote data. The fusion methods are accordingly divided into six types: HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion, where "co-fusion" denotes fusion types that require multi-party cooperation.
Wherein, HI fusion refers to fusion of a group of HI models in the same owner.
AI fusion refers to the fusion of a set of AI models in the same owner.
HI + AI fusion refers to the fusion of a set of HI models and a set of AI models in the same owner.
HI co-fusion refers to the fusion of a set of HI models of different owners.
AI co-fusion refers to the fusion of a set of AI models of different owners.
The HI + AI co-fusion is the fusion of a set of HI models and a set of AI models of different owners.
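The six definitions above reduce to two questions: which kinds of model participate, and whether they belong to more than one owner. A minimal sketch of that classification, assuming each model is represented as a (kind, owner) pair (the enum and function names are illustrative, not from the patent):

```python
from enum import Enum

class ModelKind(Enum):
    HI = "HI"
    AI = "AI"

class FusionType(Enum):
    HI_FUSION = "HI fusion"
    AI_FUSION = "AI fusion"
    HI_AI_FUSION = "HI + AI fusion"
    HI_CO_FUSION = "HI co-fusion"
    AI_CO_FUSION = "AI co-fusion"
    HI_AI_CO_FUSION = "HI + AI co-fusion"

def classify_fusion(models):
    """models: list of (ModelKind, owner) pairs.
    Co-fusion applies when the models belong to more than one owner."""
    kinds = {kind for kind, _ in models}
    multi_owner = len({owner for _, owner in models}) > 1
    if kinds == {ModelKind.HI}:
        return FusionType.HI_CO_FUSION if multi_owner else FusionType.HI_FUSION
    if kinds == {ModelKind.AI}:
        return FusionType.AI_CO_FUSION if multi_owner else FusionType.AI_FUSION
    # mixed HI and AI participants
    return FusionType.HI_AI_CO_FUSION if multi_owner else FusionType.HI_AI_FUSION
```

A single-owner pair of HI models classifies as HI fusion; the same pair split across two owners classifies as HI co-fusion.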
Step S102: and determining a corresponding model fusion mode according to the model fusion type and carrying out model fusion.
Optionally, at least three model fusion modes may be provided according to the different model fusion types: feature fusion (fusion at the feature level), parameter fusion (fusion at the parameter level), and result fusion (fusion at the model level).
Optionally, feature-level fusion applies to all six cases. It extracts the features of each model, fuses them at the feature level, and builds a new model. It also includes feature-level integration, i.e., using the features of several participating models together, and feature-level compression, i.e., reducing to key features by means such as contrastive learning.
Optionally, parameter-level fusion applies only to AI fusion and AI co-fusion. Parameters are fused between AI models with similar structures, and the parameters of some shared layers are used as initial parameters of the new model.
Optionally, model-level fusion applies to all six cases and covers the integration of model results and processes. It also includes model compression, i.e., compressing a complex model into a simpler one, and model integration, i.e., integrating several models into a new model.
Optionally, a target of the model fusion may be preset before fusing, for example target A: better performance, or target B: faster speed.
Optionally, the information of the models to be fused is extracted, including model features, model structure, model parameters and model evaluation indexes.
For an AI model, the model features are the inputs fed to the AI model's input layer after data processing; the model structure is the algorithm and network structure the AI model uses; the model parameters are the per-layer parameters obtained by training.
For an HI model, the model features are the data items the expert uses for decision making after data processing; the model structure is the network structure of the expert's reasoning; the model parameters are the thresholds the expert sets for each decision layer.
The model evaluation index is an evaluation value of the model, such as accuracy.
Optionally, a suitable fusion mode is selected for each fusion classification: parameter-level fusion applies only to AI fusion and AI co-fusion, while feature-level and model-level fusion apply to all six fusion classifications.
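The applicability rules just stated can be captured in a small lookup, a sketch under the assumption that fusion types and modes are identified by plain strings (the names are illustrative):

```python
# Per the text: feature-level and model(result)-level fusion apply to all six
# fusion types; parameter-level fusion applies only to AI fusion and AI co-fusion.
FEATURE, PARAMETER, RESULT = "feature", "parameter", "result"

ALL_TYPES = {"HI fusion", "AI fusion", "HI + AI fusion",
             "HI co-fusion", "AI co-fusion", "HI + AI co-fusion"}
PARAMETER_TYPES = {"AI fusion", "AI co-fusion"}

def applicable_modes(fusion_type):
    """Return the fusion modes applicable to a given fusion type."""
    if fusion_type not in ALL_TYPES:
        raise ValueError(f"unknown fusion type: {fusion_type}")
    modes = [FEATURE, RESULT]
    if fusion_type in PARAMETER_TYPES:
        modes.insert(1, PARAMETER)  # parameter fusion needs structurally similar AI models
    return modes
```

So an AI co-fusion can use all three modes, while an HI + AI fusion is limited to feature-level and result-level fusion.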
As can be seen from the above description, the digital model fusion method provided in the embodiment of the present application can determine the model fusion type, where the model includes an HI model and an AI model, and the model fusion type includes HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion, and HI + AI co-fusion; and determining a corresponding model fusion mode according to the model fusion type, performing model fusion, and effectively, accurately and conveniently realizing the fusion between the digital models.
In an embodiment of the digital model fusion method of the present application, referring to fig. 2, the step S102 may further include the following steps:
step S201: extracting feature information of the model, wherein the feature information comprises: model characteristics, model structures, model parameters and model evaluation indexes;
step S202: and if the model fusion type is any one of HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion, performing feature fusion according to a preset fusion target of the model fusion and the model features of the model to obtain the model subjected to the feature fusion.
The method specifically comprises the following steps:
1. Let the feature sets of the N models participating in the fusion be Features1, Features2, …, FeaturesN.
2. For fusion target A (better performance), take the union of the models' features as the fused feature set: FeatureSet = Features1 ∪ Features2 ∪ … ∪ FeaturesN.
3. For fusion target B (faster speed), take the intersection of the models' features as the fused feature set: FeatureSet = Features1 ∩ Features2 ∩ … ∩ FeaturesN. A new fusion model is then built on this fused feature set.
In addition, in another embodiment, for target B the most critical features of each model may instead be extracted to form the feature set.
Specifically, the obtained feature set is used as the fusion model's features, and the AI model with the best evaluation index among the participating models is adapted to it to obtain the fusion model.
Optionally, if all participating models are HI models, a reference classification model is used, and the fusion model is built on the fused features.
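The union/intersection step above is a direct set computation. A minimal sketch, assuming each model's features arrive as an iterable of feature names and the goal labels "A" and "B" follow the text:

```python
from functools import reduce

def fuse_features(feature_sets, goal):
    """Fuse model feature sets per the stated targets:
    goal "A" (better performance): union of all features;
    goal "B" (faster speed): intersection of all features."""
    sets = [set(s) for s in feature_sets]
    if not sets:
        raise ValueError("no feature sets to fuse")
    if goal == "A":
        return reduce(set.union, sets, set())
    if goal == "B":
        return reduce(set.intersection, sets)
    raise ValueError(f"unknown goal: {goal}")
```

With overlapping feature sets, goal A keeps every feature once, while goal B keeps only the features all models share.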
In an embodiment of the digital model fusion method of the present application, referring to fig. 3, the step S102 may further include the following steps:
step S301: extracting feature information of the model, wherein the feature information comprises: model characteristics, model structures, model parameters and model evaluation indexes;
step S302: and if the model fusion type is any one of AI fusion and AI co-fusion, determining the same-layer parameters with the similarity between the two models exceeding a threshold value according to the model structure of the model, and performing parameter fusion according to the same-layer parameters to obtain the model subjected to parameter fusion.
The method specifically comprises the following steps:
1. Compare the model structures of the participating models S1 and S2 and find the identical parts.
2. For shallow neural network models, find the continuous identical layers in the two models' layer structures.
3. Take the parameters of those identical layers from the model with the higher accuracy and send them to each participant as the parameters of the fusion model.
4. Each participant obtains the fusion model.
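Step 2 amounts to finding the longest run of consecutive identical layers in the two structures. A minimal sketch, assuming each structure is a list of layer descriptors (strings here for illustration; the descriptor format is an assumption, not from the patent):

```python
def common_layer_run(layers1, layers2):
    """Return the longest run of consecutive identical layer descriptors
    shared by the two model structures (empty list if none match)."""
    best = (0, 0)  # (run length, start index in layers1)
    for i in range(len(layers1)):
        for j in range(len(layers2)):
            k = 0
            while (i + k < len(layers1) and j + k < len(layers2)
                   and layers1[i + k] == layers2[j + k]):
                k += 1
            if k > best[0]:
                best = (k, i)
    length, start = best
    return layers1[start:start + length]
```

The parameters of the returned layers, taken from the more accurate model, would then seed the fusion model.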
In an embodiment of the digital model fusion method of the present application, referring to fig. 4, the step S102 may further include the following steps:
step S401: extracting feature information of the model, wherein the feature information comprises: model characteristics, model structures, model parameters and model evaluation indexes;
step S402: and if the model fusion type is any one of HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion, performing result fusion according to a preset fusion target of the model fusion and the model evaluation index of the model to obtain the model subjected to the result fusion.
The method specifically comprises the following steps:
1. For fusion target A (better performance), model integration is chosen: the result layers of the several HI and AI models are fused.
2. From the participating models, select the N models with the best evaluation indexes (N is a configuration parameter).
3. Integrate the selected Model1, Model2, …, ModelN into a fusion Model S.
4. For a given Input, let the outputs of the selected models be Output1, Output2, …, OutputN. The output of Model S is defined as Output = Output1 ∪ Output2 ∪ … ∪ OutputN.
5. For fusion target B (faster speed), model compression is chosen.
6. From the participating models, select the Model1 with the best evaluation index.
7. Set a compression ratio R and compress Model1 to obtain the fusion Model S.
Optionally, taking a shallow neural network model as an example, the compression steps are:
(1) Select all intermediate layers except the input and output layers: Layer1, Layer2, …, LayerK.
(2) For each LayerI, compress the number of neurons, retaining R × N of them to obtain LayerIS, where N is the original neuron count of LayerI.
(3) Combine the input layer, Layer1S, Layer2S, …, LayerKS and the output layer to obtain the fusion model.
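Both branches of the result-fusion procedure reduce to simple arithmetic: target A unions the model outputs, target B scales the intermediate layer widths by R. A minimal sketch under the assumption that outputs are sets and a network is described by its per-layer neuron counts (both representations are illustrative):

```python
def fuse_outputs(outputs):
    """Target A: the fusion model's output is the union of the
    selected models' outputs (step 4 above)."""
    return set().union(*outputs)

def compress_widths(widths, r):
    """Target B: keep round(r * n) neurons in each intermediate layer;
    the input and output layers are left unchanged (steps (1)-(3) above)."""
    if not 0 < r <= 1:
        raise ValueError("compression ratio R must be in (0, 1]")
    middle = [max(1, round(r * n)) for n in widths[1:-1]]
    return [widths[0]] + middle + [widths[-1]]
```

For instance, a network with widths [10, 100, 50, 3] compressed at R = 0.5 keeps its 10-unit input and 3-unit output layers and halves the hidden layers.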
In a specific embodiment, taking the fusion of models of persons suspected of prostitution-related offenses as an example, the Nanjing side establishes an HI model A whose features are: 1. age; 2. whether there are hotel check-in records with more than four men within half a year; 3. whether there is a check-in record shared with a suspect.
The Baotou side establishes an AI model B whose features are: 1. age; 2. the number of hotel check-ins within half a year; 3. education level.
Step 1: determine the fusion type. The two models belong to different owners, so the fusion method is HI + AI co-fusion.
Step 2: extracting feature information of the model, wherein the feature information comprises: model characteristics, model structure, model parameters and model evaluation indexes.
Model A: the model features are the three items above; the model structure is an HI model; there are no model parameters; there is no model evaluation index.
Model B: the model features are its three items above; the model structure is an AI model (e.g., a shallow neural network); it has model parameters; its model evaluation index is an accuracy of 90%.
And step 3:
we chose feature level fusion and chose better performance.
Taking the union of the two models' features yields:
1. age; 2. whether there are hotel check-in records with more than four men within half a year; 3. whether there is a check-in record shared with a suspect; 4. the number of hotel check-ins within half a year; 5. education level.
And 4, step 4:
and taking the obtained feature set as fusion model features, and using the AI model with the best model evaluation index in the models participating in fusion to perform adaptation to obtain a fusion model.
Here, model B is used for adaptation, modifying the input layer of model B.
A fusion model C was obtained.
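The worked example above is exactly the goal-A feature union. A sketch with the two feature lists written out (the feature strings paraphrase the example; they are labels, not actual data fields):

```python
# Features of Nanjing's HI model A
features_a = {
    "age",
    "hotel check-in with more than 4 men within half a year",
    "check-in record shared with a suspect",
}
# Features of Baotou's AI model B
features_b = {
    "age",
    "number of hotel check-ins within half a year",
    "education level",
}

# Goal A (better performance): union the feature sets; "age" appears once.
fused_features = features_a | features_b
```

The fused set has five features, matching the list in step 3 of the example, and becomes the input layer of the adapted model B.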
In order to effectively, accurately and conveniently realize the fusion between digital models, the present application provides an embodiment of a digital model fusion device for realizing all or part of the contents of the digital model fusion method, and referring to fig. 5, the digital model fusion device specifically includes the following contents:
the model fusion type determining module 10 is configured to determine a model fusion type, where the model includes an HI model and an AI model, and the model fusion type includes HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion, and HI + AI co-fusion;
and the model fusion mode determining module 20 is configured to determine a corresponding model fusion mode according to the model fusion type and perform model fusion.
As can be seen from the above description, the digital model fusion device provided in the embodiment of the present application can determine the model fusion type, where the model includes an HI model and an AI model, and the model fusion type includes HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion, and HI + AI co-fusion; and determining a corresponding model fusion mode according to the model fusion type, performing model fusion, and effectively, accurately and conveniently realizing the fusion between the digital models.
In an embodiment of the digital model fusion apparatus of the present application, referring to fig. 6, the model fusion mode determining module 20 includes:
a feature information extracting unit 21, configured to extract feature information of the model, where the feature information includes: model characteristics, model structures, model parameters and model evaluation indexes;
and the feature fusion unit 22 is configured to, if the model fusion type is any one of HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion, perform feature fusion according to a preset fusion target of the model fusion and the model features of the models, to obtain the feature-fused model.
In an embodiment of the digital model fusion apparatus of the present application, referring to fig. 7, the model fusion mode determining module 20 includes:
a feature information extracting unit 23, configured to extract feature information of the model, where the feature information includes: model characteristics, model structures, model parameters and model evaluation indexes;
and the parameter fusion unit 24 is configured to, if the model fusion type is either AI fusion or AI co-fusion, determine according to the models' structures the same-layer parameters whose similarity between the two models exceeds a threshold, and perform parameter fusion on those same-layer parameters to obtain the parameter-fused model.
In an embodiment of the digital model fusion apparatus of the present application, referring to fig. 8, the model fusion mode determining module 20 includes:
a feature information extracting unit 25, configured to extract feature information of the model, where the feature information includes: model characteristics, model structures, model parameters and model evaluation indexes;
and a result fusion unit 26, configured to perform result fusion according to a preset fusion target of the model fusion and a model evaluation index of the model if the model fusion type is any one of HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion, and HI + AI co-fusion, to obtain the model after the result fusion.
In terms of hardware, in order to effectively, accurately and conveniently implement fusion between digital models, the present application provides an embodiment of an electronic device for implementing all or part of contents in the digital model fusion method, where the electronic device specifically includes the following contents:
a processor (processor), a memory (memory), a communication interface (Communications Interface), and a bus; the processor, the memory and the communication interface communicate with one another through the bus; the communication interface is used for information transmission between the digital model fusion device and related equipment such as a core service system, user terminals and related databases; the electronic device may be a desktop computer, a tablet computer, a mobile terminal, and the like, but the embodiment is not limited thereto. In this embodiment, the electronic device may be implemented with reference to the embodiments of the digital model fusion method and the digital model fusion device above, whose contents are incorporated herein and not repeated.
It is understood that the user terminal may include a smart phone, a tablet electronic device, a network set-top box, a portable computer, a desktop computer, a Personal Digital Assistant (PDA), an in-vehicle device, a smart wearable device, and the like. Wherein, intelligence wearing equipment can include intelligent glasses, intelligent wrist-watch, intelligent bracelet etc..
In practical applications, part of the digital model fusion method may be performed on the electronic device side as described above, or all operations may be completed in the client device. The choice may be made according to the processing capability of the client device, restrictions of the user's usage scenario, and the like; this is not limited in the present application. If all operations are completed in the client device, the client device may further include a processor.
The client device may have a communication module (i.e., a communication unit) and may be communicatively connected to a remote server to implement data transmission with the server. The server may include a server on the task scheduling center side; in other implementation scenarios, the server may also include a server on an intermediate platform, for example, a server on a third-party server platform communicatively linked to the task scheduling center server. The server may be a single computer device, a server cluster formed by a plurality of servers, or a distributed server architecture.
Fig. 9 is a schematic block diagram of the system configuration of an electronic device 9600 according to an embodiment of the present application. As shown in Fig. 9, the electronic device 9600 may include a central processor 9100 and a memory 9140, the memory 9140 being coupled to the central processor 9100. Notably, Fig. 9 is exemplary; other types of structures may be used in addition to, or in place of, this structure to implement telecommunications or other functions.
In one embodiment, the digital model fusion method functionality may be integrated into the central processor 9100. The central processor 9100 may be configured to control as follows:
step S101: determining a model fusion type, wherein the model comprises an HI model and an AI model, and the model fusion type comprises HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion.
Step S102: and determining a corresponding model fusion mode according to the model fusion type and carrying out model fusion.
As can be seen from the above description, the electronic device provided in the embodiment of the present application determines a model fusion type, where the model includes an HI model and an AI model, and the model fusion type includes HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion, and HI + AI co-fusion; it then determines the corresponding model fusion mode according to the model fusion type and performs model fusion, thereby effectively, accurately, and conveniently realizing fusion between digital models.
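Steps S101 and S102 amount to a dispatch from a fusion type to the fusion modes it admits: per the description and claims, feature fusion and result fusion apply to all six types, while parameter fusion applies only to AI fusion and AI co-fusion. A minimal sketch, with hypothetical type labels and mode names:

```python
ALL_TYPES = {"HI", "AI", "HI+AI", "HI-co", "AI-co", "HI+AI-co"}
PARAMETER_FUSION_TYPES = {"AI", "AI-co"}  # only AI fusion and AI co-fusion

def select_fusion_modes(fusion_type):
    """Step S102: map a model fusion type (determined in step S101)
    to the fusion modes that may be applied to it."""
    if fusion_type not in ALL_TYPES:
        raise ValueError("unknown fusion type: %s" % fusion_type)
    modes = ["feature_fusion", "result_fusion"]   # available for all six types
    if fusion_type in PARAMETER_FUSION_TYPES:
        modes.append("parameter_fusion")          # AI-only fusion mode
    return modes
```

For example, an AI-type fusion admits all three modes, whereas an HI-type fusion admits only feature and result fusion.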
In another embodiment, the digital model fusion apparatus may be configured separately from the central processor 9100, for example, the digital model fusion apparatus may be configured as a chip connected to the central processor 9100, and the function of the digital model fusion method may be realized by the control of the central processor.
As shown in Fig. 9, the electronic device 9600 may further include: a communication module 9110, an input unit 9120, an audio processor 9130, a display 9160, and a power supply 9170. It is noted that the electronic device 9600 does not necessarily include all of the components shown in Fig. 9; in addition, the electronic device 9600 may further include components not shown in Fig. 9, for which reference may be made to the prior art.
As shown in Fig. 9, the central processor 9100, sometimes referred to as a controller or operational controller, may include a microprocessor or other processor device and/or logic device. The central processor 9100 receives input and controls the operation of the various components of the electronic device 9600.
The memory 9140 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or another suitable device. It may store information relating to failures as well as programs for processing that information, and the central processor 9100 may execute the programs stored in the memory 9140 to realize information storage, processing, and the like.
The input unit 9120 provides input to the central processor 9100. The input unit 9120 is, for example, a key or a touch input device. Power supply 9170 is used to provide power to electronic device 9600. The display 9160 is used for displaying display objects such as images and characters. The display may be, for example, an LCD display, but is not limited thereto.
The memory 9140 may be a solid-state memory, e.g., a read-only memory (ROM), a random access memory (RAM), a SIM card, or the like. It may also be a memory that retains information even when power is off, that can be selectively erased and supplied with additional data, an example of which is sometimes referred to as an EPROM. The memory 9140 may also be some other type of device. The memory 9140 includes a buffer memory 9141 (sometimes referred to as a buffer) and may include an application/function storage portion 9142 for storing application programs and function programs or for executing the operation flow of the electronic device 9600 through the central processor 9100.
The memory 9140 can also include a data store 9143, the data store 9143 being used to store data, such as contacts, digital data, pictures, sounds, and/or any other data used by an electronic device. The driver storage portion 9144 of the memory 9140 may include various drivers for the electronic device for communication functions and/or for performing other functions of the electronic device (e.g., messaging applications, contact book applications, etc.).
The communication module 9110 is a transmitter/receiver 9110 that transmits and receives signals via an antenna 9111. The communication module (transmitter/receiver) 9110 is coupled to the central processor 9100 to provide input signals and receive output signals, which may be the same as in the case of a conventional mobile communication terminal.
Based on different communication technologies, a plurality of communication modules 9110, such as a cellular network module, a bluetooth module, and/or a wireless local area network module, may be provided in the same electronic device. The communication module (transmitter/receiver) 9110 is also coupled to a speaker 9131 and a microphone 9132 via an audio processor 9130 to provide audio output via the speaker 9131 and receive audio input from the microphone 9132, thereby implementing ordinary telecommunications functions. The audio processor 9130 may include any suitable buffers, decoders, amplifiers and so forth. In addition, the audio processor 9130 is also coupled to the central processor 9100, thereby enabling recording locally through the microphone 9132 and enabling locally stored sounds to be played through the speaker 9131.
Embodiments of the present application further provide a computer-readable storage medium capable of implementing all steps of the digital model fusion method in which a server or a client is the execution subject in the above embodiments. The computer-readable storage medium stores a computer program which, when executed by a processor, implements all steps of that method; for example, when the processor executes the computer program, the following steps are implemented:
step S101: determining a model fusion type, wherein the model comprises an HI model and an AI model, and the model fusion type comprises HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion.
Step S102: and determining a corresponding model fusion mode according to the model fusion type and carrying out model fusion.
As can be seen from the foregoing description, the computer-readable storage medium provided in this embodiment of the present application determines a model fusion type, where the model includes an HI model and an AI model, and the model fusion type includes HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion, and HI + AI co-fusion; it then determines the corresponding model fusion mode according to the model fusion type and performs model fusion, thereby effectively, accurately, and conveniently realizing fusion between digital models.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principles and implementations of the present invention are explained herein through specific embodiments; the description of the embodiments is only intended to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A method for digital model fusion, the method comprising:
determining a model fusion type, wherein the model comprises an HI model and an AI model, and the model fusion type comprises HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion;
and determining a corresponding model fusion mode according to the model fusion type and carrying out model fusion.
2. The method for fusing digital models according to claim 1, wherein determining the corresponding model fusion mode and performing model fusion according to the model fusion type comprises:
extracting feature information of the model, wherein the feature information comprises: model characteristics, model structures, model parameters and model evaluation indexes;
and if the model fusion type is any one of HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion, performing feature fusion according to a preset fusion target of the model fusion and the model features of the model to obtain the model subjected to the feature fusion.
3. The method for fusing digital models according to claim 1, wherein determining the corresponding model fusion mode and performing model fusion according to the model fusion type comprises:
extracting feature information of the model, wherein the feature information comprises: model characteristics, model structures, model parameters and model evaluation indexes;
and if the model fusion type is any one of AI fusion and AI co-fusion, determining the same-layer parameters with the similarity between the two models exceeding a threshold value according to the model structure of the model, and performing parameter fusion according to the same-layer parameters to obtain the model subjected to parameter fusion.
4. The method for fusing digital models according to claim 1, wherein determining the corresponding model fusion mode and performing model fusion according to the model fusion type comprises:
extracting feature information of the model, wherein the feature information comprises: model characteristics, model structures, model parameters and model evaluation indexes;
and if the model fusion type is any one of HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion, performing result fusion according to a preset fusion target of the model fusion and the model evaluation index of the model to obtain the model subjected to the result fusion.
5. A digital model fusion apparatus, comprising:
the model fusion type determining module is used for determining a model fusion type, wherein the model comprises an HI model and an AI model, and the model fusion type comprises HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion;
and the model fusion mode determining module is used for determining a corresponding model fusion mode according to the model fusion type and carrying out model fusion.
6. The digital model fusion device of claim 5, wherein the model fusion mode determining module comprises:
a feature information extraction unit, configured to extract feature information of the model, where the feature information includes: model characteristics, model structures, model parameters and model evaluation indexes;
and the characteristic fusion unit is used for carrying out characteristic fusion according to a preset fusion target of the model fusion and the model characteristics of the model to obtain the model after the characteristic fusion if the model fusion type is any one of HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion.
7. The digital model fusion device of claim 5, wherein the model fusion mode determining module comprises:
a feature information extraction unit, configured to extract feature information of the model, where the feature information includes: model characteristics, model structures, model parameters and model evaluation indexes;
and the parameter fusion unit is used for determining the same-layer parameters with the similarity between the two models exceeding a threshold value according to the model structure of the model if the model fusion type is any one of AI fusion and AI co-fusion, and performing parameter fusion according to the same-layer parameters to obtain the model subjected to parameter fusion.
8. The digital model fusion device of claim 5, wherein the model fusion mode determining module comprises:
a feature information extraction unit, configured to extract feature information of the model, where the feature information includes: model characteristics, model structures, model parameters and model evaluation indexes;
and the result fusion unit is used for carrying out result fusion according to a preset fusion target of the model fusion and the model evaluation index of the model to obtain the model after the result fusion if the model fusion type is any one of HI fusion, AI fusion, HI + AI fusion, HI co-fusion, AI co-fusion and HI + AI co-fusion.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the steps of the digital model fusion method according to any one of claims 1 to 4 are implemented when the program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method for digital model fusion according to any one of claims 1 to 4.
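The parameter fusion recited in claims 3 and 7 — determining same-layer parameters whose similarity between the two models exceeds a threshold, then fusing them — can be sketched as follows. This is a minimal illustration, not the claimed implementation: each model is assumed to be a list of per-layer parameter vectors, and cosine similarity with element-wise averaging are hypothetical choices for the similarity measure and the fusion rule.

```python
def cosine(a, b):
    """Cosine similarity between two parameter vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def parameter_fusion(model_a, model_b, threshold=0.9):
    """For layers at the same depth whose parameter vectors are more
    similar than `threshold`, replace them with their element-wise mean;
    otherwise keep model A's layer unchanged."""
    fused = []
    for layer_a, layer_b in zip(model_a, model_b):
        if cosine(layer_a, layer_b) > threshold:
            fused.append([(x + y) / 2 for x, y in zip(layer_a, layer_b)])
        else:
            fused.append(layer_a)  # dissimilar layers are not fused
    return fused
```

Retaining model A's layer when similarity is below the threshold is one possible policy; the claims leave the treatment of dissimilar layers open.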
CN202011030377.3A 2020-09-27 2020-09-27 Digital model fusion method and device Pending CN112183619A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011030377.3A CN112183619A (en) 2020-09-27 2020-09-27 Digital model fusion method and device


Publications (1)

Publication Number Publication Date
CN112183619A true CN112183619A (en) 2021-01-05

Family

ID=73943561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011030377.3A Pending CN112183619A (en) 2020-09-27 2020-09-27 Digital model fusion method and device

Country Status (1)

Country Link
CN (1) CN112183619A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598708A (en) * 2013-10-31 2015-05-06 大连智友软件科技有限公司 Detection data fusion method based on Dasarathy model
US20180218238A1 (en) * 2017-01-30 2018-08-02 James Peter Tagg Human-Artificial Intelligence Hybrid System
CN108734210A (en) * 2018-05-17 2018-11-02 浙江工业大学 A kind of method for checking object based on cross-module state multi-scale feature fusion
CN109598285A (en) * 2018-10-24 2019-04-09 阿里巴巴集团控股有限公司 A kind of processing method of model, device and equipment
CN109815329A (en) * 2018-12-13 2019-05-28 平安科技(深圳)有限公司 The model integrated and prediction technique, electronic device, computer equipment of text quality inspection
CN110688528A (en) * 2019-09-26 2020-01-14 北京字节跳动网络技术有限公司 Method, apparatus, electronic device, and medium for generating classification information of video
CN111126607A (en) * 2020-04-01 2020-05-08 阿尔法云计算(深圳)有限公司 Data processing method, device and system for model training



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination