CN116226204A - Scene determination method, device, equipment and storage medium based on joint learning platform - Google Patents


Info

Publication number
CN116226204A
CN116226204A (application CN202111433565.5A)
Authority
CN
China
Prior art keywords
scene
application
information
client
training data
Prior art date
Legal status
Pending
Application number
CN202111433565.5A
Other languages
Chinese (zh)
Inventor
张敏 (Zhang Min)
Current Assignee
Xinzhi I Lai Network Technology Co ltd
Original Assignee
Xinzhi I Lai Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xinzhi I Lai Network Technology Co ltd filed Critical Xinzhi I Lai Network Technology Co ltd
Priority to CN202111433565.5A
Publication of CN116226204A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2457 Query processing with adaptation to user needs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The disclosure relates to the technical field of machine learning, and provides a scene determination method, device, equipment and storage medium based on a joint learning platform. The method comprises the following steps: loading scene information corresponding to a preset application scene; acquiring scene demand information sent by a client and judging whether scene information matching the scene demand information exists; if matching scene information exists, retrieving a plurality of functional programs corresponding to the application scene; when a combined application by the client for the plurality of functional programs is received, establishing a joint learning application community corresponding to the application scene, together with a corresponding scene strategy; when training data fed back by the client is received, verifying the training data with the scene strategy to obtain a training data verification result; and determining the scene of the client in the joint learning application community according to the verification result. The method can effectively assist each participant in quickly and accurately finding the joint learning scene adapted to its business requirements.

Description

Scene determination method, device, equipment and storage medium based on joint learning platform
Technical Field
The disclosure relates to the technical field of machine learning, and in particular to a scene determination method, device, equipment and storage medium based on a joint learning platform.
Background
In general, different business requirements correspond to different joint learning scenes. For example, if the business requirement is predicting annual gas consumption, the corresponding joint learning scene is a gas load prediction scene; if the business requirement is predicting annual power consumption, the corresponding joint learning scene is a power load prediction scene. The joint learning algorithm, joint mode, required training data, and so on designed to address different business needs will likewise vary.
Therefore, a participant who wants to take part in joint learning must first do preparatory work: finding the joint learning scene adapted to its own business requirements and joining it, so as to obtain a joint learning model that meets its expectations and solves its business requirements.
However, the prior art provides no method for assisting each participant in quickly and accurately finding a joint learning scene adapted to its business requirements.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide a scene determination method, apparatus, device and storage medium based on a joint learning platform, so as to provide a method capable of assisting each participant in quickly and accurately finding a joint learning scene adapted to its service requirements.
In a first aspect of an embodiment of the present disclosure, a scene determining method based on a joint learning platform is provided, including:
loading scene information corresponding to a preset application scene;
acquiring scene demand information sent by a client, and judging whether scene information matched with the scene demand information exists or not;
if scene information matched with the scene demand information exists, a plurality of functional programs corresponding to the application scene are called;
when a combined application of a client to a plurality of functional programs is received, establishing a joint learning application community corresponding to an application scene and a corresponding scene strategy;
when training data fed back by a client is received, checking the training data by utilizing a scene strategy to obtain a training data checking result;
and determining the scene of the client in the joint learning application community according to the training data verification result.
In a second aspect of the embodiments of the present disclosure, a scene determining device based on a joint learning platform is provided, including:
The loading module is configured to load scene information corresponding to a preset application scene;
the judging module is configured to acquire scene demand information sent by the client and judge whether scene information matched with the scene demand information exists or not;
the calling module is configured to call a plurality of functional programs corresponding to the application scene if scene information matched with the scene demand information exists;
the establishing module is configured to establish a joint learning application community and a corresponding scene strategy corresponding to an application scene when receiving a combined application of the client to the plurality of functional programs;
the verification module is configured to verify the training data by utilizing a scene strategy when receiving the training data fed back by the client so as to obtain a training data verification result;
and the scene determining module is configured to determine the scene of the client in the joint learning application community according to the training data verification result.
In a third aspect of the disclosed embodiments, an electronic device is provided, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
In a fourth aspect of the disclosed embodiments, a computer-readable storage medium is provided, which stores a computer program which, when executed by a processor, implements the steps of the above-described method.
Compared with the prior art, the beneficial effects of the embodiments of the disclosure at least include: loading scene information corresponding to a preset application scene; acquiring scene demand information sent by a client and judging whether matching scene information exists; if matching scene information exists, retrieving a plurality of functional programs corresponding to the application scene; when a combined application by the client for the plurality of functional programs is received, establishing a joint learning application community corresponding to the application scene and a corresponding scene strategy; when training data fed back by the client is received, verifying the training data with the scene strategy to obtain a verification result; and determining the scene of the client in the joint learning application community according to the verification result. This effectively assists each participant in quickly and accurately finding the joint learning scene adapted to its service requirements.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings that are required for the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a schematic diagram of a joint learning architecture provided by an embodiment of the present disclosure;
fig. 2 is a flow chart of a scenario determination method based on a joint learning platform according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a scene determining device based on a joint learning platform according to an embodiment of the disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
Joint learning comprehensively utilizes multiple AI (Artificial Intelligence) technologies, jointly mines data value through multiparty cooperation, and promotes new intelligent business states and modes based on joint modeling. Joint learning has at least the following characteristics:
(1) Under different application scenes, a plurality of model aggregation optimization strategies are established by utilizing screening and/or combination of an AI algorithm and privacy protection calculation so as to obtain a high-level and high-quality model.
(2) Based on multiple model aggregation optimization strategies, methods for improving the performance of the joint learning engine are obtained, which improve the overall performance of the joint learning engine by solving problems such as information interaction, intelligent perception and exception handling mechanisms under parallel computing architectures and large-scale cross-domain networks.
(3) The requirements of multiparty users in each scene are acquired, the real contribution degree of each joint participant is determined and reasonably evaluated through a mutual trust mechanism, and distribution incentives are applied.
Based on this mode, an AI technical ecosystem based on joint learning can be established, the value of industry data can be fully exploited, and applications in vertical fields can be promoted.
A modular joint learning service platform and system according to embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a joint learning architecture according to an embodiment of the present disclosure. As shown in fig. 1, the framework of joint learning may include a service platform 100, as well as participants 102, 103, and 104. Wherein the service platform 100 may comprise a server (central node) 101.
In the joint learning process, a basic model may be established by the server 101, and the server 101 sends the model to the participants 102, 103 and 104 with which it has established communication connections. The basic model may also be uploaded to the server 101 after being established by any participant, after which the server 101 sends it to the other participants with which it has established communication connections. The server 101 coordinates training: the participants 102, 103 and 104 train the model with their training data and obtain updated model parameters; the server 101 aggregates the model parameters sent by the participants to obtain global model parameters and returns them to the participants; and the participants iterate their respective models with the received global model parameters until the models converge, thereby completing training. It should be noted that the number of participants is not limited to three; it may be set as needed, and the embodiments of the present disclosure are not limited in this respect.
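The aggregation flow described above can be sketched as follows. All function names and the simple parameter-averaging strategy are illustrative assumptions; the patent does not specify a concrete aggregation algorithm.

```python
# Illustrative sketch of the server-side aggregation loop described above.
# local_update and aggregate are hypothetical names; the averaging rule is
# an assumption, not the patent's concrete implementation.

def local_update(global_params, local_data):
    # Placeholder for one participant's local training step: nudge each
    # parameter toward the mean of the participant's local data.
    mean = sum(local_data) / len(local_data)
    return [p + 0.1 * (mean - p) for p in global_params]

def aggregate(param_sets):
    # Average the updated parameters returned by all participants.
    n = len(param_sets)
    return [sum(ps[i] for ps in param_sets) / n for i in range(len(param_sets[0]))]

def training_round(global_params, participants_data):
    updates = [local_update(global_params, d) for d in participants_data]
    return aggregate(updates)

params = [0.0, 0.0]
data = [[1.0, 3.0], [2.0, 4.0], [0.0, 2.0]]  # three participants, cf. Fig. 1
for _ in range(5):
    params = training_round(params, data)
```

Each round corresponds to one pass of the loop in Fig. 1: local updates at the participants, aggregation at server 101, and redistribution of the global parameters.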
Fig. 2 is a flowchart of a scenario determination method based on a joint learning platform according to an embodiment of the present disclosure. The joint learning platform based scene determination method of fig. 2 may be performed by the server 101 of fig. 1. As shown in fig. 2, the scene determining method based on the joint learning platform includes:
step S201, loading scene information corresponding to a preset application scene.
The preset application scene generally refers to a service scene classified according to service type. In general, one business scene corresponds to solving one class (or one type) of business problems. For example, a civil-building gas-use business scene may involve the business problem of gas load prediction; a commercial electricity-use business scene may involve the business problem of power load prediction. Different services can therefore be classified into different service scenes (application scenes) according to such criteria.
It may be appreciated that application scenes (or "joint learning scenes") may also be divided according to other criteria, for example by enterprise type, giving an internet enterprise joint learning scene, a financial enterprise joint learning scene, a medical service enterprise joint learning scene, and so on.
The scene information includes, for each application scene, information such as the scene name (e.g. XX load prediction, soft measurement), scene profile content (e.g. "using joint learning technology and joint third-party data to extend the dimensions of employee data, such as employees' financial credit and consumption data, thereby greatly improving the accuracy and credibility of credit assessment"), joint mode (e.g. "joint mode: horizontal/vertical"), algorithm (e.g. "algorithm: XGBOOST"), number of participants (e.g. "participants: X", i.e. the number of active participants in the scene), and encryption mode (e.g. no encryption, homomorphic encryption).
The name of the application scenario may be named according to its corresponding service requirement. For example, for a business requirement to be a predicted annual electricity load, the name of its federated scenario may be named "electricity load prediction". For another example, for a business need to predict annual gas load, the name of its federated scenario may be named "gas load prediction".
As an example, a correspondence table of application scenes and scene information may be established in advance, as shown in table 1 below.
Table 1 correspondence table of application scenes and scene information
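Table 1 is reproduced in the original publication only as an image. A minimal in-memory sketch of such a correspondence table, with field names and sample values assumed from the scene-information fields listed above:

```python
# Hypothetical version of Table 1: application scene -> scene information.
# All field names and values are illustrative, drawn from the examples in
# the surrounding text; the actual table contents are not reproduced.
SCENE_TABLE = {
    "gas load prediction": {
        "profile": "joint third-party data to extend data dimensions",
        "joint_mode": "horizontal",
        "algorithm": "XGBOOST",
        "participants": 97,
        "encryption": "homomorphic",
    },
    "power load prediction": {
        "profile": "predict annual power consumption",
        "joint_mode": "vertical",
        "algorithm": "XGBOOST",
        "participants": 42,
        "encryption": "none",
    },
}

def load_scene_info(scene_name):
    # Step S201: load the scene information for a preset application scene.
    return SCENE_TABLE.get(scene_name)
```

A lookup such as `load_scene_info("gas load prediction")` then yields the scene information to be displayed to the client.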
Step S202, obtaining scene demand information sent by a client, and judging whether scene information matched with the scene demand information exists or not.
The scene requirement information generally refers to application information of a certain application scene that a participant wants to apply for joining, for example, application information that a participant a wants to apply for joining in an XX application scene.
As an example, when participant A wants to apply to join application scene 1, the scene requirement information may be sent to the service platform by clicking/touching a preset application-scene selection icon presented on the client interface. For example, if participant A clicks the selection icon of "application scene 1" shown on the client interface, the requirement information of "application scene 1" is sent to the service platform via the client. The service platform then obtains the scene demand information sent by the client.
After receiving the scene demand information sent by the client, the service platform can query and judge whether scene information corresponding to the application scene 1 exists according to the table 1.
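The matching check of step S202 can be sketched as a table lookup; the table contents and function name are hypothetical:

```python
# Step S202 sketch: judge whether scene information matching the client's
# scene demand information exists. Table contents are hypothetical.
SCENE_INFO = {
    "application scene 1": {"name": "gas load prediction"},
    "application scene 2": {"name": "power load prediction"},
}

def has_matching_scene(demand_info):
    # Exact match on the applied-for scene identifier (cf. the Table 1
    # lookup described in the text).
    return demand_info in SCENE_INFO

matched = has_matching_scene("application scene 1")
```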
In step S203, if there is scene information matching the scene requirement information, a plurality of function programs corresponding to the application scene are retrieved.
In connection with the above example, if there is scene information corresponding to "application scene 1", a plurality of function programs corresponding to "application scene 1" are called.
Wherein, the plurality of functional programs include, but are not limited to, a scene name display functional program, a scene introduction display functional program, a joint mode display functional program, an algorithm display functional program, a scene liveness display functional program, and the like.
Step S204, when a combined application of the client to a plurality of functional programs is received, a joint learning application community corresponding to the application scene and a corresponding scene strategy are established.
In combination with the above example, assume that when a combined application for the above scene name display function program, scene profile display function program, joint mode display function program, algorithm display function program and scene liveness display function program is received from a participant through the client, a joint learning application community corresponding to "application scene 1" is established. In the joint learning application community, the scene name, scene profile, joint mode, algorithm used and scene liveness (i.e. the number of participants in the application community) of application scene 1 can be displayed together, so that a user can understand which service scene the community is adapted to. This better assists the user in finding the joint learning scene adapted to its own service requirements and in participating in joint learning under that scene, so as to obtain the expected joint learning model and thereby address the service requirement.
As an example, when receiving a combined application of the above-mentioned multiple scenario name display function programs, scenario profile display function programs, joint mode display function programs, algorithm display function programs, and scenario activity display function programs by a participant through a client, a joint learning application community corresponding to each application scenario may be established, and all or part of each joint learning application community is displayed on an interface, so that the participant can know and select the joint learning application community that the participant wants to join.
Typically, a joint learning application community (corresponding to an application scenario) may correspondingly set a data verification rule (i.e., a scenario policy) for verifying whether training data fed back by a client matches an application scenario of the joint learning application community.
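The scene policy described here is essentially a data-verification predicate. A minimal sketch, assuming a field-presence rule for a gas load prediction scene (the patent leaves the concrete rule unspecified):

```python
# Hypothetical scene policy for a gas-load-prediction community: a data
# verification rule checking whether training data fed back by a client
# matches the application scene. The required fields are an assumption.
def gas_load_scene_policy(training_data):
    required = {"date", "temperature", "gas_consumption"}
    # Every record must carry the fields a gas-load model would need.
    return all(required <= set(record) for record in training_data)

ok = gas_load_scene_policy(
    [{"date": "2021-01-01", "temperature": 3.5, "gas_consumption": 120.0}]
)
bad = gas_load_scene_policy([{"date": "2021-01-01"}])
```

The boolean result corresponds to the "training data verification result" used in step S206.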
Step S205, when receiving training data fed back by the client, checking the training data by using a scene strategy to obtain a training data checking result.
The training data refers to the data a participant contributes to training the joint model corresponding to a joint learning task in the joint learning application community.
The scene policy (i.e. preset data verification rule) generally refers to a policy (rule) for judging whether training data provided by a participant matches an application scene applied by the participant.
And step S206, determining the scene of the client in the joint learning application community according to the training data verification result.
As an example, when training data provided by a participant passes the preset data verification rule, that is, when the training data matches the joint learning application community the participant applied to join, the participant's client may be allowed to join the corresponding application scene in that community (i.e. the application scene of the client in the joint learning application community is determined). The participant can then train a joint learning task together with the other participants (clients) in the community, obtaining an application model able to solve its service requirements.
For example, if the training data a provided by participant A passes verification by the preset gas load prediction scene policy, that is, the training data a matches the gas load prediction application scene in the joint learning application community, then participant A may be allowed to join the gas load prediction scene in the community; that is, the scene of A in the joint learning application community is determined to be gas load prediction.
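Step S206 can be sketched as admitting the client into the applied-for scene when the verification result is positive; the community data structure and names are assumptions:

```python
# Step S206 sketch: determine the client's scene in the joint learning
# application community from the training data verification result.
# The community structure (scene name -> member set) is hypothetical.
def determine_scene(client_id, scene_name, check_passed, community):
    if check_passed:
        community.setdefault(scene_name, set()).add(client_id)
        return scene_name
    return None  # verification failed: no scene is assigned

community = {}
result = determine_scene("participant_a", "gas load prediction", True, community)
```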
According to the technical scheme provided by the embodiments of the disclosure: scene information corresponding to a preset application scene is loaded; scene demand information sent by a client is acquired, and whether matching scene information exists is judged; if matching scene information exists, a plurality of functional programs corresponding to the application scene are retrieved; when a combined application by the client for the plurality of functional programs is received, a joint learning application community and a scene strategy corresponding to the application scene are established; when training data fed back by the client is received, the training data is verified with the scene strategy to obtain a verification result; and the scene of the client in the joint learning application community is determined according to the verification result. This effectively assists each participant in quickly and accurately finding the joint learning scene adapted to its service requirements.
In some embodiments, the step S201 specifically includes:
acquiring a scene access request sent by a client, wherein the scene access request comprises a scene information loading instruction;
executing the scene information loading instruction, and loading to obtain the scene information of the application scene corresponding to the scene information loading instruction.
As an example, a participant may send a scene access request to the service platform by clicking/touching a preset "application scene access" icon (which may be a static icon or a dynamic icon, such as a floating icon) presented on its client interface. When the service platform detects the click/touch operation on the client, it receives the scene access request sent by the participant via the client. The preset application scene access icon may be a plain-text icon or an icon with a graphic design; the specific shape, text description and so on of the icon can be designed flexibly according to the actual situation, and the present disclosure is not limited in this respect.
After receiving a scene access request sent by a client, the service platform can load and obtain scene information of an application scene corresponding to a scene information loading instruction by executing the scene information loading instruction carried by the request. For example, the application scenario corresponding to the scenario information loading instruction is application scenario 1, so that the scenario information corresponding to the application scenario 1 can be obtained according to the corresponding relation between the application scenario and the scenario information in table 1, and the scenario information of the application scenario 1 is displayed on the current page of the client, so that the participant (user) can conveniently view and know the related scenario information, and the participant can be helped to quickly and accurately find the joint learning scenario adapted to the service requirement of the participant.
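The request handling above can be sketched as follows; the request shape and the `load_instruction` key are assumptions, since the patent only states that the request comprises a scene information loading instruction:

```python
# Sketch of handling a scene access request that carries a scene
# information loading instruction (step S201 refinement). The request
# dictionary shape is hypothetical.
SCENES = {"application scene 1": {"name": "gas load prediction", "algorithm": "XGBOOST"}}

def handle_scene_access(request):
    instruction = request.get("load_instruction")
    # Execute the loading instruction: look up and return the scene
    # information to be displayed on the client's current page.
    return SCENES.get(instruction)

info = handle_scene_access({"load_instruction": "application scene 1"})
```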
In some embodiments, the step S202 specifically includes:
extracting at least one key information of scene information;
it is determined whether the at least one key information contains information identical or similar to the scene demand information.
As an example, assume that the scene information of application scene 1 includes: scene name "gas load prediction"; scene profile information "using joint learning technology and joint third-party data to extend the dimensions of employee data, for example employees' financial credit and consumption data, thereby greatly improving accuracy and credibility of assessment"; joint mode "horizontal joint"; algorithm used "XGBOOST"; and number of participants "97". The key information can be extracted by text analysis of the scene information, for example by text recognition and analysis using existing OCR technology.
For example, assume the following key information is extracted by text analysis of the scene information of application scene 1: gas load prediction, extended employee data, financial credit, consumption data, horizontal joint, XGBOOST. Participant A clicks the selection icon of "application scene 1" displayed on the client interface, i.e. scene demand information for "application scene 1" (gas load prediction) is sent to the service platform through the client. The service platform then determines whether the key information "gas load prediction, extended employee data, financial credit, consumption data, horizontal joint, XGBOOST" contains information that is the same as or similar to "application scene 1" (gas load prediction).
Information similar to "application scene 1" (gas load prediction) may be description information of an application scene in the same category as "application scene 1" (e.g. load prediction), or of the same scene (e.g. gas load prediction) described in different wording.
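The key-information extraction and same-or-similar check can be sketched as below. Simple keyword splitting stands in for the OCR/text-analysis step, and substring overlap approximates "similar"; both are assumptions, not the patent's concrete matching rule:

```python
# Sketch of the step S202 refinement: extract key information from scene
# information, then test whether it contains information the same as or
# similar to the scene demand information.
def extract_key_info(scene_info):
    keys = []
    for field in scene_info.values():
        # Split comma-separated field values into individual key terms.
        keys.extend(str(field).lower().split(", "))
    return keys

def contains_same_or_similar(key_info, demand):
    demand = demand.lower()
    # "Similar" approximated by substring overlap in either direction.
    return any(demand in k or k in demand for k in key_info)

info = {
    "name": "gas load prediction",
    "keywords": "extended employee data, financial credit, XGBOOST",
}
matched = contains_same_or_similar(extract_key_info(info), "gas load prediction")
```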
In some embodiments, the step S203 specifically includes:
If the at least one piece of key information contains information the same as or similar to the scene demand information, at least three of the following function programs corresponding to that information are retrieved: the application scene name display function program, the scene profile display function program, the joint mode display function program, the joint algorithm display function program, or the scene liveness display function program.
As an example, a correspondence between each application scenario and its corresponding function program may be pre-established, and each application scenario may be associated with and stored in its function program according to the correspondence. Exemplary, the correspondence between the application scenario and the function program is shown in table 2 below.
Table 2 correspondence table of application scenario and function program
(Table 2 is reproduced as an image in the original publication.)
Continuing the above example, detection finds that the key information includes the key information "gas load prediction", which is identical to "application scenario 1" (gas load prediction); at least three of the application scenario name display function program, the scenario profile display function program, the joint mode display function program, the used joint algorithm display function program and the scenario activity display function program corresponding to "application scenario 1" (gas load prediction) may then be retrieved according to Table 2 above.
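The Table 2 lookup could be sketched as a plain dictionary mapping each application scenario to its display function programs; all program names below are illustrative assumptions, since the table itself is only reproduced as an image:

```python
# Illustrative stand-in for Table 2: each application scenario maps to the
# function programs that can be retrieved for it.
SCENARIO_PROGRAMS = {
    "application scenario 1": [
        "scene_name_display",
        "scene_profile_display",
        "joint_mode_display",
        "joint_algorithm_display",
        "scene_activity_display",
    ],
}

def retrieve_programs(scenario, minimum=3):
    """Retrieve at least `minimum` function programs for a matched scenario."""
    programs = SCENARIO_PROGRAMS.get(scenario, [])
    if len(programs) < minimum:
        raise LookupError(f"fewer than {minimum} function programs registered for {scenario!r}")
    return programs

print(len(retrieve_programs("application scenario 1")))  # → 5
```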
In some embodiments, the step of establishing a joint learning application community and a corresponding scene policy corresponding to an application scene when receiving a combined application of the client to the plurality of functional programs includes:
When a combined application by the client for at least two of the application scene name display function program, the scene introduction display function program, the joint mode display function program, the used joint algorithm display function program and the scene liveness display function program is received, a joint learning application community containing the at least two function programs of the combined application and a corresponding scene strategy are established.
Continuing the above example, when a combined application by the participant through the client for the application scene name display function program, the scene profile display function program, the joint mode display function program, the used joint algorithm display function program and the scene activity display function program corresponding to application scenario 1 is received, a joint learning application community including the function programs of the combined application is established; at the same time, the scene policy of the joint learning application community (that is, the data verification policy/data verification rule used to verify whether the training data provided by a participant matches the joint learning application community) can be set correspondingly. In the joint learning application community, an integrated page of the scene name, scene profile, joint mode, joint algorithm used and scene activity corresponding to application scenario 1 can be displayed to the client, so that a participant (user) can conveniently view the relevant scene information, helping the participant quickly and accurately find a joint learning scene matching its service requirements.
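A minimal sketch of community establishment under these rules (the class name, attribute names and the example policy are assumptions for illustration):

```python
class JointLearningCommunity:
    """Community created from a combined application of at least two function
    programs, together with its scene policy (the data-verification rule)."""

    def __init__(self, scenario, programs, scene_policy):
        if len(programs) < 2:
            raise ValueError("a combined application requires at least two function programs")
        self.scenario = scenario
        self.programs = list(programs)
        # The scene policy is modeled as a callable: training data -> "pass"/"fail".
        self.scene_policy = scene_policy

community = JointLearningCommunity(
    "application scenario 1",
    ["scene_name_display", "joint_mode_display"],
    scene_policy=lambda data: "pass" if len(data) >= 96 else "fail",
)
print(community.scene_policy([0.0] * 96))  # → pass
```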
In some embodiments, the step S205 specifically includes:
providing a data entry template corresponding to the application scene for the client, wherein the data entry template comprises a data entry requirement description and a reference data entry table;
receiving training data fed back by a client based on the data entry requirement description and a reference data entry table;
verifying the training data by utilizing a scene strategy to obtain a training data verification result;
according to the training data verification result, determining the scene of the client in the joint learning application community, wherein the method comprises the following steps:
and when the verification result is passed, determining the scene of the client in the joint learning application community, and storing the training data in association with the application scene.
As an example, assuming that the scenario demand information sent by the participant to the service platform via its client is gas load prediction, and the service platform has established the joint learning application community corresponding to gas load prediction according to the above steps, a data entry template corresponding to gas load prediction may be provided to the client. The data entry template may carry a data entry requirement description for the training data of the application scenario, covering the data format, numerical range and other requirements, as well as a reference data entry table. For example, participant A wants to predict the annual gas load of year 20xx and applies for the first time to join the joint learning scenario of gas load prediction; at this time, a data entry template may first be provided to participant A. The data entry template includes the following data entry requirement description: (1) monthly gas loads for 12 consecutive months over at least 8 years must be provided, with no upper limit; the monthly gas loads should be continuous over each whole year, and preferably no month is missing; (2) if the annual gas load of year 20XX is to be predicted, at least the gas loads of the whole year preceding year 20XX must be provided; (3) the monthly gas load is the total gas consumption of that month (note: an accumulated value, not an instantaneous value); the unit must be consistent throughout, and the gas load needs to be provided as required; (4) the data is mainly used for joint training (as training data); (5) after the relevant data is prepared as required, please upload it in CSV (comma-separated values) file format through the "select data" channel.
The CSV file may be named according to the pattern "data.csv" before uploading, that is, "data name + file format suffix". For example, it may be named "training data 1.csv". Illustratively, the CSV file includes a list-type data entry requirement description with the three columns "column name", "type" and "description" (shown in Table 3 below), and the data entry table may be as shown in Table 4 below.
Table 3 data entry requirement specification table
(Table 3 is reproduced as images in the original publication.)
Table 4 data entry table
(Table 4 is reproduced as an image in the original publication.)
As an example, participant A may prepare the data according to the data entry requirement description of Table 3 above, enter the corresponding gas load data into Table 4, save the data as the file "training data 1.csv", and upload it to the service platform through the "select data" channel; the service platform thereby obtains the training data 1 provided by participant A.
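The preparation step could be sketched as follows; the file name comes from the example above, but the column names are assumptions, since Tables 3 and 4 are reproduced only as images:

```python
import csv
import io

def write_training_csv(rows):
    """Serialize monthly gas-load rows in the CSV format of the data entry
    template; returns the content that would be saved as 'training data 1.csv'
    and uploaded through the 'select data' channel."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["month", "monthly_gas_load"])  # assumed column names
    writer.writerows(rows)
    return buffer.getvalue()

# Hypothetical figures for two months of accumulated gas consumption.
content = write_training_csv([("2013-01", 812.5), ("2013-02", 790.1)])
print(content.splitlines()[0])  # → month,monthly_gas_load
```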
Then, a data verification rule (i.e., the scene policy corresponding to the application scene in the joint learning application community) may be generated according to the above requirement description, and verification may be performed on the training data 1 fed back by participant A. Generation of the data verification rule from requirement (1) above is described in detail below. First, the key information "8 years", "12 consecutive months" and "monthly gas load" can be extracted from requirement (1); from this key information, the data verification rule "the training data are monthly gas loads, and the number of training data items is greater than or equal to 96" can be generated.
Continuing the above example, assuming that the training data a uploaded by user A consists of 100 monthly gas load values, the training data uploaded by user A is verified against the data verification rule "the training data are monthly gas loads, and the number of training data items is greater than or equal to 96", and a verification result of "verification passed" (or "verification failed") is obtained. When the verification result is "passed", the client of participant A is allowed to join the joint learning application community, and the training data a of user A is stored in a preset storage space of the gas load prediction application scene. Meanwhile, "user A", "gas load prediction" and "training data a" can be entered in the corresponding columns of Table 5, so that the corresponding training data can be looked up and retrieved when joint learning is started later.
Table 5 association table of training data and application scenario
(Table 5 is reproduced as an image in the original publication.)
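The verification rule generated above — monthly gas loads, at least 8 × 12 = 96 values — can be sketched as a simple check; the non-negativity test is an added assumption, not stated in the patent:

```python
MIN_MONTHS = 8 * 12  # at least 8 consecutive years of monthly gas loads = 96 values

def verify_training_data(monthly_loads):
    """Apply the generated data-verification rule and return the result."""
    if len(monthly_loads) >= MIN_MONTHS and all(v >= 0 for v in monthly_loads):
        return "verification passed"
    return "verification failed"

print(verify_training_data([100.0] * 100))  # → verification passed
print(verify_training_data([100.0] * 50))   # → verification failed
```

With 100 monthly values, as in the example of user A above, the rule is satisfied and the client would be allowed to join the community.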
In subsequent joint learning, a participant can select one of its training data sets in a given joint learning scene to train a given joint learning task. For example, participant A has joined the application scenario of gas load prediction and uploaded training data 1 and training data 2 in that scenario; when participant A wants to join joint learning task a in the scenario, participant A can choose to use either training data 1 or training data 2 as the training data.
According to the technical scheme provided by the embodiments of the disclosure, through data verification of the training data uploaded by the participant through the client, the relevance between each item of training data and the application scene in the joint learning application community can be ensured, avoiding the problem that subsequent joint learning cannot be performed, or that the joint model obtained by training performs poorly, because the training data does not fit the application scene.
In some embodiments, obtaining the scene requirement information sent by the client, and after judging whether there is scene information matched with the scene requirement information, further includes:
loading a preset extended application function program when no scene information matched with the scene demand information exists;
sending a scene data acquisition instruction to a client, and receiving scene data fed back by the client according to the data acquisition instruction;
and processing the scene data by using the extended application function program to generate an extended application scene.
As an example, assuming that participant A clicks the selection icon of "more scenes to be developed" shown on the client interface, the demand information "more scenes to be developed" is sent to the service platform via the client. At this time, the service platform obtains the scene demand information sent by the client. If detection shows that no scene information currently matches the scene demand information "more scenes to be developed" (that is, no corresponding scene information can be queried from Table 1), a preset extended application function program can be loaded and a scene data acquisition instruction sent to the client. The scene data acquisition instruction generally refers to an instruction for collecting data such as the scene name, scene profile, joint mode, training model and training data. When the client receives the scene data acquisition instruction, the relevant data can be collected according to the instruction and uploaded to the service platform. After receiving the scene data fed back by the client, the service platform analyzes and integrates the scene data using the preset extended application function program, trains the training model with the training data to obtain an application model, and further determines whether the service platform supports the extended application scene by testing the performance of the application model; if the test shows that the application model meets the required standard, the extended application scene can be generated.
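The extended-application flow just described might be sketched as below; the required field names and the acceptance gate are assumptions for illustration, not details from the patent:

```python
# Fields the scene data acquisition instruction asks the client to collect.
REQUIRED_FIELDS = {"scene name", "scene profile", "joint mode",
                   "training model", "training data"}

def generate_extended_scene(scene_data, model_meets_standard):
    """Process collected scene data and, if the trained application model
    passes its performance test, generate an extended application scene."""
    missing = REQUIRED_FIELDS - scene_data.keys()
    if missing:
        raise ValueError(f"scene data incomplete, missing: {sorted(missing)}")
    if not model_meets_standard:  # application-model performance gate
        return None               # platform cannot support this scene yet
    return {"type": "extended application scene", **scene_data}

data = {field: "collected value" for field in REQUIRED_FIELDS}
print(generate_extended_scene(data, True)["type"])  # → extended application scene
```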
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein in detail.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the method of the present disclosure.
Fig. 3 is a schematic structural diagram of a scene determining device based on a joint learning platform according to an embodiment of the disclosure. As shown in fig. 3, the scene determining device based on the joint learning platform includes:
the loading module 301 is configured to load scene information corresponding to a preset application scene;
a judging module 302, configured to obtain the scene requirement information sent by the client, and judge whether there is scene information matched with the scene requirement information;
a retrieving module 303 configured to retrieve a plurality of functional programs corresponding to the application scenario if there is scenario information matching the scenario requirement information;
the establishing module 304 is configured to establish a joint learning application community and a corresponding scene policy corresponding to an application scene when a combined application of the client to the plurality of functional programs is received;
the verification module 305 is configured to, when receiving training data fed back by the client, verify the training data by using a scene policy to obtain a training data verification result;
The scene determination module 306 is configured to determine a scene of the client in the joint learning application community according to the training data verification result.
According to the technical scheme provided by the embodiment of the disclosure, the loading module 301 is configured to load scene information corresponding to a preset application scene; the judging module 302 is configured to acquire scene demand information sent by the client and judge whether scene information matched with the scene demand information exists or not; the retrieving module 303 is configured to retrieve a plurality of functional programs corresponding to the application scenario if there is scenario information matching the scenario requirement information; the establishing module 304 is configured to establish a joint learning application community corresponding to an application scene when a combined application of the client to the plurality of functional programs is received; the verification module 305 is configured to, when receiving training data fed back by the client, verify the training data by using a scene policy to obtain a training data verification result; the scene determining module 306 is configured to determine a scene of the client in the joint learning application community according to the training data verification result, so that each participant can be well assisted to quickly and accurately find a joint learning scene adapted to the service requirement.
In some embodiments, the loading module 301 includes:
the acquisition unit is configured to acquire a scene access request sent by the client, wherein the scene access request comprises a scene information loading instruction;
the execution unit is configured to execute the scene information loading instruction and load the scene information of the application scene corresponding to the scene information loading instruction.
In some embodiments, the determining module 302 includes:
an extraction unit configured to extract at least one key information of the scene information;
and a judging unit configured to judge whether the at least one piece of key information contains information identical or similar to the scene demand information.
In some embodiments, the foregoing invoking module 303 includes:
and the calling unit is configured to call at least three function programs of an application scene name display function program, a scene introduction display function program, a joint mode display function program, a used joint algorithm display function program or a scene activity display function program corresponding to the information which is the same as or similar to the scene demand information if the at least one piece of key information contains the information which is the same as or similar to the scene demand information.
In some embodiments, the establishing module 304 includes:
The establishing unit is configured to establish a joint learning application community containing at least two function programs of the combined application and a corresponding scene strategy when a combined application of the at least two function programs of the application scene name display function program, the scene introduction display function program, the combined mode display function program, the used combined algorithm display function program or the scene activity display function program is received by the client.
In some embodiments, the verification module 305 includes:
the data input system comprises a providing unit, a data input unit and a data input unit, wherein the providing unit is configured to provide a data input template corresponding to an application scene for a client, and the data input template comprises a data input requirement description and a reference data input table;
the receiving unit is configured to receive training data fed back by the client based on the data entry requirement specification and the reference data entry table;
the verification unit is configured to verify the training data by utilizing a scene strategy so as to obtain a training data verification result;
the scene determination module 306 includes:
and the scene determining unit is configured to determine the scene of the client in the joint learning application community when the verification result is passed, and store the training data in association with the application scene.
In some embodiments, the apparatus further comprises:
the expansion loading module is configured to load a preset expansion application function program when scene information matched with the scene demand information does not exist;
the acquisition module is configured to send a scene data acquisition instruction to the client and receive scene data fed back by the client according to the data acquisition instruction;
and the scene expansion module is configured to process the scene data by utilizing the expansion application function program to generate an expansion application scene.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the disclosure.
Fig. 4 is a schematic diagram of an electronic device 400 provided by an embodiment of the present disclosure. As shown in fig. 4, the electronic apparatus 400 of this embodiment includes: a processor 401, a memory 402 and a computer program 403 stored in the memory 402 and executable on the processor 401. The steps of the various method embodiments described above are implemented by processor 401 when executing computer program 403. Alternatively, the processor 401, when executing the computer program 403, performs the functions of the modules/units in the above-described apparatus embodiments.
Illustratively, the computer program 403 may be partitioned into one or more modules/units, which are stored in the memory 402 and executed by the processor 401 to complete the present disclosure. One or more of the modules/units may be a series of computer program instruction segments capable of performing particular functions for describing the execution of the computer program 403 in the electronic device 400.
The electronic device 400 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. Electronic device 400 may include, but is not limited to, a processor 401 and a memory 402. It will be appreciated by those skilled in the art that fig. 4 is merely an example of an electronic device 400 and is not intended to limit the electronic device 400, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., an electronic device may also include an input-output device, a network access device, a bus, etc.
The processor 401 may be a central processing unit (Central Processing Unit, CPU) or other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 402 may be an internal storage unit of the electronic device 400, for example, a hard disk or a memory of the electronic device 400. The memory 402 may also be an external storage device of the electronic device 400, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the electronic device 400. Further, the memory 402 may also include both internal storage units and external storage devices of the electronic device 400. The memory 402 is used to store computer programs and other programs and data required by the electronic device. The memory 402 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of the functional units and modules is illustrated by example; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other, and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not described here again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other manners. For example, the apparatus/electronic device embodiments described above are merely illustrative, e.g., the division of modules or elements is merely a logical functional division, and there may be additional divisions of actual implementations, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present disclosure may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program may implement the steps of the method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, an executable file or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunications signals under local legislation and patent practice.
The above embodiments are merely for illustrating the technical solution of the present disclosure, and are not limiting thereof; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and are intended to be included in the scope of the present disclosure.

Claims (10)

1. The scene determination method based on the joint learning platform is characterized by comprising the following steps of:
loading scene information corresponding to a preset application scene;
acquiring scene demand information sent by a client, and judging whether scene information matched with the scene demand information exists or not;
if scene information matched with the scene demand information exists, a plurality of functional programs corresponding to the application scene are called;
when a combined application of the client to the plurality of functional programs is received, establishing a joint learning application community and a corresponding scene strategy corresponding to the application scene;
When training data fed back by the client is received, checking the training data by utilizing the scene strategy to obtain a training data checking result;
and determining the scene of the client in the joint learning application community according to the training data verification result.
2. The method according to claim 1, wherein loading scene information corresponding to a preset application scene includes:
acquiring a scene access request sent by a client, wherein the scene access request comprises a scene information loading instruction;
executing the scene information loading instruction, and loading to obtain scene information of an application scene corresponding to the scene information loading instruction.
3. The method according to claim 1, wherein the obtaining the scene requirement information sent by the client, and determining whether there is scene information matching the scene requirement information, includes:
extracting at least one key information of the scene information;
and judging whether the at least one piece of key information contains information identical or similar to the scene requirement information.
4. The method of claim 3, wherein retrieving the plurality of function programs corresponding to the application scenario if there is scenario information matching the scenario requirement information, comprises:
And if the at least one piece of key information comprises the same or similar information as the scene demand information, invoking at least three function programs of an application scene name display function program, a scene brief introduction display function program, a joint mode display function program, a used joint algorithm display function program or a scene activity display function program corresponding to the same or similar information as the scene demand information.
5. The method of claim 4, wherein when the combined application of the plurality of functional programs by the client is received, establishing a joint learning application community and a corresponding scene policy corresponding to the application scene comprises:
when a combined application of the client to at least two function programs of the application scene name display function program, the scene introduction display function program, the combined mode display function program, the used combined algorithm display function program or the scene liveness display function program is received, a combined learning application community containing at least two function programs of the combined application and a corresponding scene strategy are established.
6. The method according to claim 1, wherein when receiving the training data fed back by the client, the checking the training data by using the scene policy to obtain a training data checking result includes:
Providing a data entry template corresponding to the application scene to the client, wherein the data entry template comprises a data entry requirement description and a reference data entry table;
receiving training data fed back by the client based on the data entry requirement description and a reference data entry table;
checking the training data by utilizing the scene strategy to obtain a training data checking result;
the determining the scene of the client in the joint learning application community according to the training data verification result comprises the following steps:
and when the verification result is that the verification result is passed, determining the scene of the client in the joint learning application community, and storing the training data in association with the application scene.
7. The method according to claim 1, wherein the obtaining the scene requirement information sent by the client, and after determining whether there is scene information matching the scene requirement information, further comprises:
loading a preset extended application function program when no scene information matched with the scene demand information exists;
sending a scene data acquisition instruction to the client, and receiving scene data fed back by the client according to the data acquisition instruction;
And processing the scene data by using the extended application function program to generate an extended application scene.
8. A scene determination device based on a joint learning platform, comprising:
a loading module configured to load scene information corresponding to a preset application scene;
a judging module configured to obtain scene requirement information sent by a client and determine whether scene information matching the scene requirement information exists;
a calling module configured to call, if scene information matching the scene requirement information exists, a plurality of function programs corresponding to the application scene;
an establishing module configured to, when a combined application by the client for the plurality of function programs is received, establish a joint learning application community corresponding to the application scene and a corresponding scene policy;
a verification module configured to, when training data fed back by the client is received, check the training data using the scene policy to obtain a training data verification result; and
a scene determination module configured to determine the scene of the client in the joint learning application community according to the training data verification result.
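The modular apparatus of claim 8 can be sketched as one class whose methods mirror the loading, judging, calling, establishing, and verification modules. This is a minimal illustrative structure under assumed data shapes, not the patented device; the policy here is simply the set of selected program names:

```python
# Hypothetical sketch of the claim-8 apparatus. Each method corresponds to
# one module; bodies are simplified placeholders.

class SceneDeterminationDevice:
    def __init__(self, preset_scenes: dict):
        # Loading module input: preset application scenes and their info.
        self.scenes = preset_scenes
        self.communities = {}

    def judge(self, requirement: str) -> bool:
        # Judging module: does any preset scene match the requirement?
        return requirement in self.scenes

    def call_programs(self, requirement: str) -> list:
        # Calling module: function programs registered for the scene.
        return self.scenes[requirement].get("programs", [])

    def establish(self, requirement: str, selected: list) -> dict:
        # Establishing module: community plus its scene policy.
        community = {"programs": selected, "policy": set(selected)}
        self.communities[requirement] = community
        return community

    def verify(self, requirement: str, training_data: dict) -> bool:
        # Verification module: check data keys against the scene policy.
        policy = self.communities[requirement]["policy"]
        return policy.issubset(training_data.keys())
```

A scene determination module would then act on the boolean returned by `verify`, as in the claim-6 flow.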
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 7.
CN202111433565.5A 2021-11-29 2021-11-29 Scene determination method, device, equipment and storage medium based on joint learning platform Pending CN116226204A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111433565.5A CN116226204A (en) 2021-11-29 2021-11-29 Scene determination method, device, equipment and storage medium based on joint learning platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111433565.5A CN116226204A (en) 2021-11-29 2021-11-29 Scene determination method, device, equipment and storage medium based on joint learning platform

Publications (1)

Publication Number Publication Date
CN116226204A true CN116226204A (en) 2023-06-06

Family

ID=86568286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111433565.5A Pending CN116226204A (en) 2021-11-29 2021-11-29 Scene determination method, device, equipment and storage medium based on joint learning platform

Country Status (1)

Country Link
CN (1) CN116226204A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116828288A (en) * 2023-08-28 2023-09-29 广州信邦智能装备股份有限公司 Composite intelligent inspection robot capable of being applied to multiple scenes and related system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116828288A (en) * 2023-08-28 2023-09-29 广州信邦智能装备股份有限公司 Composite intelligent inspection robot capable of being applied to multiple scenes and related system
CN116828288B (en) * 2023-08-28 2024-01-02 广州信邦智能装备股份有限公司 Composite intelligent inspection robot capable of being applied to multiple scenes and related system

Similar Documents

Publication Publication Date Title
US11050690B2 (en) Method for providing recording and verification service for data received and transmitted by messenger service, and server using method
CN110378749B (en) Client similarity evaluation method and device, terminal equipment and storage medium
CN108833458B (en) Application recommendation method, device, medium and equipment
CN107886414B (en) Order combination method and equipment and computer storage medium
CN107918618B (en) Data processing method and device
CN108650289B (en) Method and device for managing data based on block chain
CN115510249A (en) Knowledge graph construction method and device, electronic equipment and storage medium
CN116226204A (en) Scene determination method, device, equipment and storage medium based on joint learning platform
CN111598700A (en) Financial wind control system and method
CN107368407B (en) Information processing method and device
Simmons et al. Designing and implementing cloud-based digital forensics hands-on labs
CN114202018A (en) Modular joint learning method and system
CN114154166A (en) Abnormal data identification method, device, equipment and storage medium
CN109857748B (en) Contract data processing method and device and electronic equipment
CN109636627B (en) Insurance product management method, device, medium and electronic equipment based on block chain
CN110909072B (en) Data table establishment method, device and equipment
CN115858322A (en) Log data processing method and device and computer equipment
CN113032817B (en) Data alignment method, device, equipment and medium based on block chain
CN112258009B (en) Intelligent government affair request processing method
CN113743692B (en) Business risk assessment method, device, computer equipment and storage medium
Finn et al. Exploring big'crisis' data in action: potential positive and negative externalities.
Naoum et al. An Enhanced Model for e-Government (A Comparative Study between Jordanian and Iraqi Citizens)
US20190304040A1 (en) System and Method for Vetting Potential Jurors
KR20200141571A (en) Integrated History Management System of Police Manpower based on Block Chain
CN111813694B (en) Test method, test device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination