CN111241745B - Gradual model selection method, equipment and readable storage medium - Google Patents


Info

Publication number
CN111241745B
Authority
CN
China
Prior art keywords
model
feature
training
feature set
target
Prior art date
Legal status
Active
Application number
CN202010024336.7A
Other languages
Chinese (zh)
Other versions
CN111241745A (en)
Inventor
唐兴兴
黄启军
陈瑞钦
林冰垠
李诗琦
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202010024336.7A priority Critical patent/CN111241745B/en
Publication of CN111241745A publication Critical patent/CN111241745A/en
Priority to PCT/CN2020/134035 priority patent/WO2021139462A1/en
Application granted granted Critical
Publication of CN111241745B publication Critical patent/CN111241745B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a stepwise model selection method, a device, and a readable storage medium. The stepwise model selection method comprises the following steps: receiving configuration parameters sent by a client associated with a server, and acquiring a feature set to be trained; training a preset model to be trained based on the feature set and the configuration parameters to obtain a first initial training model; respectively calculating the first-type significance and second-type significance values corresponding to the feature set to be trained; cyclically training the first initial training model based on the first-type and second-type significance values to obtain a cyclic training model set; selecting a target training model from the first initial training model and the cyclic training model set based on the configuration parameters; and generating visualization data corresponding to the target training model and feeding the visualization data back to the client. The method solves the technical problems of the high modeling threshold and low efficiency of stepwise model selection in the prior art.

Description

Gradual model selection method, equipment and readable storage medium
Technical Field
The present application relates to the field of machine learning in financial technology (Fintech), and in particular to a progressive model selection method, a device, and a readable storage medium.
Background
With the continuous development of financial technology, and of internet finance in particular, more and more technologies (such as distributed computing, blockchain, and artificial intelligence) are being applied in the finance field. At the same time, the finance industry places higher demands on these technologies, for example on the distribution of pending work corresponding to the finance industry.
With the continuous development of computer software and artificial intelligence, machine learning modeling is applied ever more widely. In the prior art, a logistic regression model is usually used for modeling in scenarios such as financial risk control and medical modeling, and within logistic regression modeling, stepwise selection is an important model selection strategy: compared with adding all features into the model for training, it effectively prevents the model from overfitting. However, current stepwise selection generally requires modeling staff to have strong code development skills and can only be run on a single machine. That is, the current implementation of stepwise selection places a high threshold on modeling staff, and because it runs on a single machine, its modeling time is long and its modeling efficiency is low. The prior art therefore suffers from the technical problems of a high modeling threshold and low modeling efficiency for stepwise selection.
Disclosure of Invention
The application mainly aims to provide a gradual model selection method, equipment and a readable storage medium, which aim to solve the technical problems of high gradual model selection modeling threshold and low efficiency in the prior art.
In order to achieve the above object, the present application provides a progressive model selection method, which is applied to a server, the progressive model selection method comprising:
Receiving configuration parameters sent by a client associated with the server, acquiring a feature set to be trained, and training a preset model to be trained based on the feature set to be trained and the configuration parameters to obtain a first initial training model;
respectively calculating the first-type significance and the second-type significance values corresponding to the feature set to be trained;
cyclically training the first initial training model based on the first-type and second-type significance values, respectively, to obtain a cyclic training model set;
selecting a target training model from the first initial training model and the cyclic training model set based on the configuration parameters;
generating visualization data corresponding to the target training model, and feeding the visualization data back to the client.
Optionally, the feature set to be trained comprises a first model feature set and a second model feature set, the cyclic training model set comprises a first cyclic training model set and a second cyclic training model set,
The step of performing a cyclic training on the first initial training model based on each of the first type of salience and each of the second type of salience, respectively, to obtain a cyclic training model set includes:
based on each first-type significance, eliminating from the first model feature set each feature to be eliminated that meets a preset elimination significance requirement;
cyclically training and updating the first initial training model based on the first model feature set after elimination, until no feature to be eliminated remains in the first model feature set, to obtain the first cyclic training model set;
selecting, based on each second-type significance, a target feature in the second model feature set that meets a preset significance requirement;
adding the target feature to the first model feature set, and cyclically training the updated first initial training model based on the feature set after the addition, until no feature to be eliminated remains in the first model feature set and no target feature remains in the second model feature set, to obtain the second cyclic training model set.
Optionally, the step of eliminating, based on each first-type significance, the features to be eliminated that meet the elimination significance requirement in the first model feature set includes:
comparing the first-type significance values to select the feature with the lowest significance from the first model feature set as the feature to be selected;
comparing the significance of the feature to be selected with a preset elimination significance threshold;
if its significance is smaller than the preset elimination significance threshold, determining that the feature to be selected meets the preset elimination significance requirement, and taking the feature to be selected as the feature to be eliminated.
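The rejection check above can be sketched in a few lines (an illustrative reading, not the patent's code; the function name, the dict-based interface, and the higher-value-means-more-significant convention are assumptions drawn from the surrounding steps):

```python
def pick_feature_to_remove(significance, removal_threshold):
    """significance: first-type significance of each feature currently in the
    model (higher = more significant, per the steps above).
    Returns the feature to eliminate, or None if all features are significant."""
    if not significance:
        return None
    # The least significant feature is the candidate "feature to be selected".
    candidate = min(significance, key=significance.get)
    # Eliminate it only if it falls below the rejection significance threshold.
    return candidate if significance[candidate] < removal_threshold else None
```
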
Optionally, the configuration parameters include iterative training completion decision conditions, the first set of cyclic training models includes one or more first model elements,
The step of cyclically training and updating the first initial training model based on the first model feature set after elimination, until no feature to be eliminated remains in the first model feature set, to obtain the first cyclic training model set includes:
iteratively training and updating the first initial training model based on the first model feature set after elimination until the model meets the iterative-training completion criterion, thereby obtaining one of the first model elements;
recalculating the first-type significance of each element in the first model feature set after elimination, so as to repeatedly perform elimination of features to be eliminated and iterative training updates of the updated first initial training model, until no feature to be eliminated remains in the first model feature set, thereby obtaining the first cyclic training model set.
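One possible reading of this retrain-and-reject cycle, as a self-contained sketch (the `train` and `compute_significance` callables are hypothetical stand-ins for the iterative model training and the first-type-significance computation described in the text):

```python
def backward_elimination(features, train, compute_significance, stay_threshold):
    models = []                    # becomes the "first cyclic training model set"
    in_model = list(features)      # the first model feature set
    while in_model:
        model = train(in_model)    # iterative training until the stop condition
        models.append(model)
        sig = compute_significance(model, in_model)
        worst = min(in_model, key=lambda f: sig[f])
        if sig[worst] >= stay_threshold:
            break                  # no feature left to eliminate: cycle ends
        in_model.remove(worst)     # eliminate the least significant feature
    return in_model, models
```
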
Optionally, the step of selecting, based on each second-type significance, a target feature in the second model feature set that meets a preset significance requirement includes:
comparing the second-type significance values to select the most significant feature, that is, the one with the highest significance, from the second model feature set;
comparing the target significance corresponding to the most significant feature with a preset significance threshold;
if the target significance is greater than or equal to the preset significance threshold, determining that the most significant feature meets the preset significance requirement, and taking the most significant feature as the target feature.
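The forward (entry) check mirrors the rejection check; a minimal sketch under the same assumed conventions (function name and interface are illustrative, not the patent's):

```python
def pick_feature_to_add(significance, entry_threshold):
    """significance: second-type significance of each feature not yet in the
    model (higher = more significant, per the steps above).
    Returns the most significant feature if it clears the entry threshold,
    else None."""
    if not significance:
        return None
    # The most significant outside feature is the candidate target feature.
    best = max(significance, key=significance.get)
    return best if significance[best] >= entry_threshold else None
```
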
Optionally, the second set of cyclic training models includes one or more second model elements,
The step of adding the target feature to the first model feature set and cyclically training the updated first initial training model based on the feature set after the addition, until no feature to be eliminated remains in the first model feature set and no target feature remains in the second model feature set, to obtain the second cyclic training model set includes:
adding the target feature to the first model feature set so as to update both the first model feature set and the second model feature set, obtaining an updated first model feature set and an updated second model feature set;
iteratively training and updating the first initial training model based on the updated first model feature set, thereby obtaining one of the second model elements;
recalculating the first-type significance of each element in the updated first model feature set, so as to repeatedly perform feature elimination and iterative training updates of the updated first initial training model, obtaining one or more second model elements, until no feature to be eliminated remains in the first model feature set, and then exiting the first cyclic flow corresponding to the first model feature set;
recalculating the second-type significance of each element in the updated second model feature set, so as to repeatedly select a target feature in the second model feature set, add it to the first model feature set, and re-execute the first cyclic flow, obtaining one or more second model elements, until no target feature remains in the second model feature set, and then exiting the second cyclic flow corresponding to the second model feature set.
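Putting the two cycles together (the second cyclic flow adds the most significant outside feature, and the first cyclic flow then rejects any features that have stopped being significant) can be sketched as follows; the callables are hypothetical stand-ins, not the patent's implementation:

```python
def stepwise_selection(candidates, train, sig_in, sig_out, entry, stay):
    """candidates: features initially outside the model (second model feature set).
    Returns the final in-model feature list and every model trained along the way
    (together playing the role of the cyclic training model set)."""
    in_model, out_model, models = [], list(candidates), []
    while out_model:
        # Second cyclic flow: pick the most significant candidate still outside.
        scores = sig_out(in_model, out_model)
        best = max(out_model, key=lambda f: scores[f])
        if scores[best] < entry:
            break                  # no candidate clears the entry threshold
        out_model.remove(best)
        in_model.append(best)
        # First cyclic flow: retrain, then reject until all kept features stay.
        while in_model:
            model = train(in_model)
            models.append(model)
            kept = sig_in(in_model)
            worst = min(in_model, key=lambda f: kept[f])
            if kept[worst] >= stay:
                break
            in_model.remove(worst)
    return in_model, models
```
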
Optionally, the feature set to be trained comprises a first model feature set and a second model feature set,
The step of respectively calculating the first-type significance and the second-type significance corresponding to the feature set to be trained comprises the following steps:
calculating the Wald chi-square value corresponding to each element in the first model feature set;
calculating the first-type significance of each element in the first model feature set based on each Wald chi-square value and the degrees of freedom of each element;
calculating the score chi-square value corresponding to each element in the second model feature set;
calculating the second-type significance of each element in the second model feature set based on each score chi-square value and the degrees of freedom of each element.
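Both statistics are judged against a chi-square distribution with the feature's degrees of freedom. As a hedged illustration for the common one-degree-of-freedom case, where the chi-square CDF has a closed form (for general degrees of freedom a regularized incomplete gamma function, e.g. `scipy.stats.chi2`, would be used; expressing significance so that higher means more significant is an assumption matching the steps above):

```python
import math

def significance_chisq_1dof(chisq_value):
    """Significance of a chi-square statistic with one degree of freedom,
    expressed so that higher means more significant (i.e. 1 - p-value).
    For dof = 1 the chi-square CDF is P(X <= x) = erf(sqrt(x / 2))."""
    return math.erf(math.sqrt(chisq_value / 2.0))
```

For example, a Wald chi-square of about 3.84 with one degree of freedom corresponds to the familiar 0.95 significance level.
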
Optionally, the step of selecting a target training model from the first initial training model and the set of cyclic training models based on the configuration parameters includes:
obtaining a model selection strategy from the configuration parameters, wherein the model selection strategy comprises an AUC (Area Under the receiver operating characteristic Curve) criterion and an AIC (Akaike Information Criterion) criterion;
If the model selection strategy is the AUC value, comparing the AUC values of all elements in the circulating training model set to select the element corresponding to the largest AUC value as the target training model;
and if the model selection strategy is the AIC value, comparing the AIC values of all elements in the cyclic training model set to select the element corresponding to the minimum AIC value as the target training model.
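The two strategies reduce to an argmax over AUC or an argmin over AIC across the cyclic training model set; a minimal sketch (the dict shape of a model element is an assumption for illustration):

```python
def select_target_model(models, strategy):
    """models: list of dicts with "auc" and "aic" entries (hypothetical shape).
    Per the text: the highest AUC wins under the AUC strategy, and the
    lowest AIC wins under the AIC strategy."""
    if strategy == "AUC":
        return max(models, key=lambda m: m["auc"])
    if strategy == "AIC":
        return min(models, key=lambda m: m["aic"])
    raise ValueError("unknown model selection strategy: " + strategy)
```
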
Optionally, the client comprises a visual interface,
The step of generating the visual data corresponding to the target training model and feeding back the visual data to the client comprises the following steps:
Obtaining alternative characteristic data, selection summary data and training process data corresponding to a model selection process of the target training model;
And generating visual data which corresponds to the alternative characteristic data, the selection summary data and the training process data together, and feeding back the visual data to the visual interface in real time.
In order to achieve the above object, the present application further provides a progressive model selection method applied to a client, the progressive model selection method comprising:
Receiving a model selection task, sending configuration parameters corresponding to the model selection task to a server associated with the client, so that the server performs model selection based on the configuration parameters and the acquired characteristics to be trained, acquires a target training model, and acquires visual data corresponding to the target training model to send the visual data to the client;
and receiving the visual data fed back by the server side, and displaying the visual data on a preset visual interface.
The application also provides a gradual model selection device, which is applied to a server, and comprises:
The first training module is used for receiving configuration parameters sent by a client associated with the server, acquiring a feature set to be trained, and training a preset model to be trained based on the feature set to be trained and the configuration parameters to obtain a first initial training model;
The computing module is used for respectively computing the first type saliency and the second type saliency corresponding to the feature set to be trained;
The second training module is used for carrying out cyclic training on the first initial training model based on the first type saliency and the second type saliency respectively to obtain a cyclic training model set;
The selecting module is used for selecting a target training model from the first initial training model and the circulating training model set based on the configuration parameters;
And the feedback module is used for generating the visual data corresponding to the target training model and feeding the visual data back to the client.
Optionally, the second training module includes:
The eliminating sub-module is used for eliminating the to-be-eliminated characteristics which meet the preset eliminating significance requirement in the first model characteristic set based on the first type significance;
The first cyclic training sub-module is used for carrying out cyclic training update on the first initial training model based on the first model feature set after the elimination until the feature to be eliminated does not exist in the first model feature set, and obtaining the first cyclic training model set;
A selecting sub-module, configured to select, based on each of the second type saliences, a target feature that meets a preset saliency requirement in the second model feature set;
and the second cyclic training sub-module is used for adding the target feature into the first model feature set, and carrying out cyclic training on the updated first initial training model based on the first model feature set added with the target feature until the feature to be eliminated does not exist in the first model feature set added with the target feature and the target feature does not exist in the second model feature set, so as to obtain the second cyclic training model set.
Optionally, the rejection submodule includes:
the first selecting unit is used for comparing the saliency of each first type so as to select the feature with the lowest saliency in the first model feature set as the feature to be selected;
The first comparison unit is used for comparing the significance to be selected of the feature to be selected with a preset eliminating significance threshold value;
The first judging unit is configured to judge that the feature to be selected meets the preset rejection significance requirement if the significance to be selected is smaller than the preset rejection significance threshold, and take the feature to be selected as the feature to be rejected.
Optionally, the first cyclic training submodule includes:
the first iterative training unit is used for carrying out iterative training update on the first initial training model based on the first model feature set after the elimination until the first initial training model meets the iterative training completion judging condition, and obtaining one of the first model elements;
And the second iterative training unit is used for recalculating the first type significance of each element in the characteristic set of the first model after being removed so as to repeatedly remove the characteristic to be removed and update the updated first initial training model until the characteristic to be removed does not exist in the characteristic set of the first model, and the first cyclic training model set is obtained.
Optionally, the selecting submodule includes:
The second selecting unit is used for comparing the saliency of each second type so as to select the most salient feature with the highest saliency in the second model feature set;
the second comparison unit is used for comparing the target significance corresponding to the most significant feature with the preset significance threshold value;
and the second judging unit is used for judging that the most significant feature meets the preset significance requirement if the target significance is larger than or equal to the preset significance threshold value, and taking the most significant feature as the target feature.
Optionally, the cyclic training submodule includes:
the updating unit is used for adding the target feature into the first model feature set to update the first model feature set and the second model feature set, and obtaining an updated first model feature set and an updated second model feature set;
A third iterative training unit, configured to recalculate the first type significance of each element in the updated first model feature set, so as to repeatedly perform rejection of the feature to be rejected and iterative training update on the updated first initial training model, obtain one or more second model elements, and skip a first circulation flow corresponding to the first model feature set until the feature to be rejected does not exist in the first model feature set;
And the circulation unit is used for recalculating the second type significance of each element in the updated second model feature set so as to repeatedly select the target feature in the second model feature set, adding the target feature into the first model feature set, repeatedly executing the first circulation flow to obtain one or more second model elements until the target feature does not exist in the second model feature set, and jumping out of the second circulation flow corresponding to the second model feature set.
Optionally, the computing module includes:
The first computing sub-module is used for computing wald chi-square values corresponding to elements in the first model feature set;
A second computing sub-module, configured to calculate a first type significance of each element in the first model feature set based on each wald chi-square value and a degree of freedom of each element in the first model feature set;
a third calculation sub-module, configured to calculate a score chi-square value corresponding to each element in the second model feature set;
and a fourth computing sub-module, configured to compute a second type saliency of each element in the second model feature set based on each scoring square value and a degree of freedom of each element in the second model feature set.
Optionally, the selecting module includes:
an acquisition sub-module, configured to acquire a model selection policy in the parameter configuration, where the model selection policy includes an AUC value and an AIC value;
A first selecting sub-module, configured to compare the AUC values of the elements in the set of the cyclic training models if the model selection policy is the AUC value, so as to select an element corresponding to the AUC value that is the largest as the target training model;
And the second selecting sub-module is used for comparing the AIC values of all elements in the circulating training model set if the model selection strategy is the AIC value so as to select the element corresponding to the minimum AIC value as the target training model.
Optionally, the feedback module includes:
the acquisition sub-module is used for acquiring alternative characteristic data, selection summary data and training process data corresponding to a model selection process of the target training model;
And the feedback sub-module is used for generating the visual data which corresponds to the alternative characteristic data, the selection summary data and the training process data together and feeding the visual data back to the visual interface in real time.
To achieve the above object, the present application also provides a progressive model selection apparatus applied to a client, the progressive model selection apparatus comprising:
The sending module is used for receiving a model selection task, sending configuration parameters corresponding to the model selection task to a server associated with the client, enabling the server to perform model selection based on the configuration parameters and the acquired characteristics to be trained, acquiring a target training model, and acquiring visual data corresponding to the target training model so as to send the visual data to the client;
And the receiving module is used for receiving the visual data fed back by the server and displaying the visual data on a preset visual interface.
The present application also provides a progressive model selection apparatus comprising: the system comprises a memory, a processor and a program of the progressive model selection method stored on the memory and executable on the processor, wherein the program of the progressive model selection method can realize the steps of the progressive model selection method when being executed by the processor.
The present application also provides a readable storage medium having stored thereon a program for implementing a stepwise model selection method, which when executed by a processor implements the steps of the stepwise model selection method as described above.
According to the method, configuration parameters sent by a client associated with the server are received and a feature set to be trained is acquired; a preset model to be trained is trained based on the feature set and the configuration parameters to obtain a first initial training model; the first-type significance and second-type significance values corresponding to the feature set are calculated respectively; the first initial training model is cyclically trained based on these significance values to obtain a cyclic training model set; a target training model is selected from the first initial training model and the cyclic training model set based on the configuration parameters; and visualization data corresponding to the target training model is generated and fed back to the client.
The application thus provides a code-free, distributed, and visual way of modeling with stepwise selection. A user only needs to set the necessary configuration parameters and send them through the client to the stepwise model selection server; the server then feeds back the stepwise model selection result and the visualization data corresponding to the selection process. Because the client and the server cooperate over a communication connection, distributed modeling is realized, and the modeling efficiency of stepwise selection is improved compared with a single machine. Moreover, since the stepwise model selection result is converted into visualization data and fed back to the client, the user only needs to enter the necessary model parameters on the client's visual interface. No code development skill is required of the user, code-free and visual modeling are realized, the capability requirements on modeling staff are greatly reduced, and the technical problems of the high modeling threshold and low efficiency of stepwise selection in the prior art are solved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of a first embodiment of a progressive model selection method of the present application;
FIG. 2 is a schematic diagram of a visual interface for performing the parameter configuration in the progressive model selection method of the present application;
FIG. 3 is a flow chart of a second embodiment of the progressive model selection method of the present application;
FIG. 4 is a schematic diagram of the first cycle flow in the progressive model selection method of the present application;
FIG. 5 is a schematic diagram of a model selection process corresponding to the second cyclic process and the first cyclic process in the stepwise model selection method of the present application;
FIG. 6 is a flow chart of a third embodiment of a progressive model selection method of the present application;
fig. 7 is a schematic device structure diagram of a hardware running environment according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In a first embodiment of the present application, referring to fig. 1, the progressive model selection method includes:
Step S10, receiving configuration parameters sent by a client associated with the server, acquiring a feature set to be trained, and training a preset model to be trained based on the feature set to be trained and the configuration parameters to obtain a first initial training model;
In this embodiment, it should be noted that the client includes a visual interface on which a user may configure the parameters of a preset model to be trained to obtain the configuration parameters. Fig. 2 is a schematic diagram of the visual interface for this parameter configuration: parameters such as the maximum number of iterations, the minimum convergence error, the stepwise model selection mode, and the class weights all need to be set before model training, and the stepwise model selection mode includes a forward selection mode, a backward selection mode, a stepwise selection mode, and the like. The stepwise model selection method is applied to a stepwise model selection server. The feature set to be trained includes one or more features, and each feature includes one or more feature data items. The preset model to be trained includes a logistic regression model. The feature set to be trained includes a first model feature set and a second model feature set, where the first model feature set is the set of features that have been added to the preset model to be trained and the second model feature set is the set of features that have not yet been added; each feature set includes one or more features.
Configuration parameters sent by the client associated with the server are received, the feature set to be trained is acquired, and the preset model to be trained is trained based on the feature set to be trained and the configuration parameters to obtain the first initial training model. Specifically, the configuration parameters sent by the client associated with the server are received, the feature set to be trained is extracted from a local database of the preset server, the preset model to be trained is iteratively trained based on the feature data of each feature in the first model feature set, and training is stopped when a training completion determination condition in the configuration parameters is reached, so as to obtain the first initial training model. The training completion determination condition includes reaching the maximum number of iterations, reaching the minimum convergence error, and the like. When no feature has yet been added to the preset model to be trained, the preset model to be trained has only an intercept term.
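The iterative training with the two stopping conditions described above (maximum number of iterations and minimum convergence error) can be sketched as follows; the function names, the learning rate, and the use of plain gradient descent are illustrative assumptions, not the patent's prescribed implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, max_iter=1000, min_delta=1e-6, lr=0.1):
    """Iteratively train a logistic regression model, stopping when either
    the maximum number of iterations or the minimum convergence error
    (largest change in any parameter between iterations) is reached."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend intercept column
    theta = np.zeros(Xb.shape[1])                  # intercept-only start
    for _ in range(max_iter):
        p = sigmoid(Xb @ theta)
        grad = Xb.T @ (p - y) / len(y)
        new_theta = theta - lr * grad
        if np.max(np.abs(new_theta - theta)) < min_delta:
            theta = new_theta
            break
        theta = new_theta
    return theta
```

With no features selected, `Xb` would reduce to the intercept column alone, matching the intercept-only starting model described above.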
Step S20, respectively calculating the first type saliency and the second type saliency corresponding to the feature set to be trained;
In this embodiment, it should be noted that both the first type significance and the second type significance may be determined based on a Pearson correlation value: when the Pearson correlation value is less than or equal to a preset Pearson correlation threshold, the feature corresponding to the first type significance or the second type significance is determined to meet a preset significance requirement, that is, the feature appears significant; when the Pearson correlation value is greater than the preset Pearson correlation threshold, the feature corresponding to the first type significance or the second type significance is determined not to meet the preset significance requirement, that is, the feature appears non-significant. The feature set to be trained includes a first model feature set and a second model feature set.
And respectively calculating the first type salience and the second type salience corresponding to the feature set to be trained, specifically, calculating the first type salience corresponding to each feature in the first model feature set, and calculating the second type salience corresponding to each feature in the second model feature set.
Wherein the feature set to be trained comprises a first model feature set and a second model feature set,
The step of calculating the first type saliency and the second type saliency corresponding to the feature set to be trained respectively comprises the following steps:
step S21, wald chi-square values corresponding to all elements in the first model feature set are calculated;
in this embodiment, the Wald chi-square value corresponding to each element in the first model feature set is calculated, specifically based on a preset Wald chi-square value calculation formula, where the preset Wald chi-square value calculation formula (in its standard form for a fitted logistic regression) is as follows:

S1 = θt' [Cov(θt)]^(-1) θt

wherein S1 is the Wald chi-square value, θt is the vector of the t fitted model parameters under test, and Cov(θt) is the corresponding block of the inverse Fisher information matrix X'WX with W = diag(pi(1 - pi)). X is the feature data representation matrix corresponding to the feature set to be trained, where X includes n pieces of data, each piece of data includes k numerical values, so X can be represented by an n x k matrix, and the model parameters obtained by training the preset model to be trained based on X are θ, a k-dimensional vector (θ1, θ2, …, θk-1, θk). The feature set to be trained can be divided into a first model feature set and a second model feature set; the feature data representation matrix corresponding to the first model feature set is X0 and that corresponding to the second model feature set is X1, where X0 includes n pieces of data, each piece of data includes (k-t) numerical values, the model parameters obtained by training the preset model to be trained on X0 are θ0, a (k-t)-dimensional vector (θ1, θ2, …, θk-t), and X1 includes n pieces of data, each piece of data includes t numerical values. The output data corresponding to the feature set to be trained is Y, the model hypothesis function is h(x) = 1/(1 + e^(-θ'x)), and p = h(x) is the probability that the model predicts the target value.
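Under the standard reading of the Wald statistic for logistic regression (a sketch under that assumption; the patent's exact formula is whatever its preset calculation formula specifies), the chi-square value for a block of fitted coefficients can be computed from the inverse Fisher information:

```python
import numpy as np

def wald_chi_square(X, theta, idx):
    """Wald chi-square S1 = theta_t' Cov(theta_t)^-1 theta_t for the fitted
    coefficients theta[idx], where Cov is the corresponding block of the
    inverse Fisher information (X' W X)^-1 and W = diag(p_i (1 - p_i))."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))      # model probabilities h(x)
    W = p * (1.0 - p)
    info = X.T @ (X * W[:, None])             # Fisher information X' W X
    cov = np.linalg.inv(info)                 # asymptotic covariance of theta
    block = cov[np.ix_(idx, idx)]             # covariance of the tested block
    t = theta[idx]
    return float(t @ np.linalg.inv(block) @ t)
```

A coefficient that is exactly zero yields a Wald chi-square of zero, consistent with the statistic testing whether the tested parameters differ from zero.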
Step S22, calculating the first type significance of each element in the first model feature set based on each Wald chi-square value and the degrees of freedom of each element in the first model feature set;
In this embodiment, the degree of freedom is related to the number of feature data corresponding to a feature, and for example, the degree of freedom is 99 assuming that 100 pieces of different data exist in the feature data.
The first type significance of each element in the first model feature set is calculated based on each Wald chi-square value and the degrees of freedom of each element in the first model feature set. Specifically, based on each Wald chi-square value and the degrees of freedom of each element in the first model feature set, the Pearson correlation value of each element is calculated by a preset Pearson correlation value calculation formula, and the first type significance of each element in the first model feature set is then determined from each Pearson correlation value. For example, assuming the Pearson correlation values are 0.001, 0.01 and 0.05 respectively, the corresponding first type significance measures are determined to be 10, 1 and 0.2, where a greater measure value indicates a more significant feature.
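The mapping from a chi-square value and its degrees of freedom to a significance value, and the ratio-style measure suggested by the example above (0.001 → 10, 0.01 → 1, 0.05 → 0.2, i.e. measure = 0.01/value), could be sketched as follows; treating the significance value as the chi-square tail probability and using 0.01 as the reference level are assumptions on my part:

```python
import math

def chi2_sf(x, df):
    """Survival function P(Chi2_df > x): the upper tail probability of a
    chi-square statistic with df degrees of freedom (closed forms: a finite
    Poisson sum for even df, an erfc-based series for odd df)."""
    if df % 2 == 0:
        m = df // 2
        term, total = 1.0, 1.0
        for k in range(1, m):
            term *= (x / 2) / k
            total += term
        return math.exp(-x / 2) * total
    m = (df - 1) // 2
    total = math.erfc(math.sqrt(x / 2))
    for k in range(1, m + 1):
        total += math.exp(-x / 2) * (x / 2) ** (k - 0.5) / math.gamma(k + 0.5)
    return total

def significance_measure(value, reference=0.01):
    """Measure consistent with the example values: 0.001 -> 10, 0.01 -> 1,
    0.05 -> 0.2; a greater measure means a more significant feature."""
    return reference / value
```

A smaller tail probability therefore corresponds to a larger significance measure, matching the "greater is more significant" reading above.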
Step S23, calculating the scoring chi-square value corresponding to each element in the second model feature set.
In this embodiment, the scoring chi-square value corresponding to each element in the second model feature set is calculated, specifically, the scoring chi-square value corresponding to each element in the second model feature set is calculated based on a preset scoring chi-square value calculation formula, where the preset scoring chi-square value calculation formula is as follows:
S2 = U(θ0)' [I(θ0)]^(-1) U(θ0)

wherein S2 is the scoring chi-square value, U(θ0) = X'(Y - p) is the score vector and I(θ0) = X'WX is the Fisher information matrix with W = diag(pi(1 - pi)), both evaluated at the restricted parameter estimate θ0 (with the coefficients of the features not yet added to the model set to zero). X is the feature data representation matrix corresponding to the feature set to be trained, where X includes n pieces of data, each piece of data includes k numerical values, so X can be represented by an n x k matrix, and the model parameters obtained by training the preset model to be trained based on X are θ, a k-dimensional vector (θ1, θ2, …, θk-1, θk). The feature set to be trained can be divided into the first model feature set and the second model feature set; the feature data representation matrix corresponding to the first model feature set is X0 and that corresponding to the second model feature set is X1, where X0 includes n pieces of data, each piece of data includes (k-t) numerical values, the model parameters obtained by training the preset model to be trained on X0 are θ0, a (k-t)-dimensional vector (θ1, θ2, …, θk-t), and X1 includes n pieces of data, each piece of data includes t numerical values. The output data corresponding to the feature set to be trained is Y, and p is the probability that the model predicts the target value.
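Read as the standard Rao score (chi-square) statistic evaluated at the restricted estimate θ0 — a conventional but assumed reading, since the patent only names its preset calculation formula — the computation can be sketched as:

```python
import numpy as np

def score_chi_square(X_full, y, theta0_padded):
    """Rao score chi-square S2 = U' I^-1 U for features whose coefficients
    are zero in theta0_padded (the model fitted without them): the score
    U = X'(y - p) and the Fisher information I = X' W X are both evaluated
    at the restricted estimate, W = diag(p_i (1 - p_i))."""
    p = 1.0 / (1.0 + np.exp(-X_full @ theta0_padded))
    U = X_full.T @ (y - p)                    # score vector at theta0
    W = p * (1.0 - p)
    I = X_full.T @ (X_full * W[:, None])      # Fisher information
    return float(U @ np.linalg.inv(I) @ U)
```

Because the score components of the already-fitted parameters vanish at the restricted maximum, this quadratic form reduces to the test statistic for the candidate features alone.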
Step S24, calculating the second type salience of each element in the second model feature set based on each scoring chi-square value and the degree of freedom of each element in the second model feature set.
In this embodiment, the second type salience of each element in the second model feature set is calculated based on each scoring chi-square value and the degree of freedom of each element in the second model feature set, specifically, the pearson correlation value of each element in the second model feature set is calculated by a preset pearson correlation value calculation formula based on each scoring chi-square value and the degree of freedom of each element in the second model feature set, and then the second type salience of each element in the second model feature set is determined by each pearson correlation value.
Step S30, carrying out cyclic training on the first initial training model based on the first type salience and the second type salience respectively to obtain a cyclic training model set;
in this embodiment, the cyclic training model set includes a first cyclic training model set and a second cyclic training model set, where the first cyclic training model set includes one or more first model elements, and the second cyclic training model set includes one or more second model elements.
The first initial training model is cyclically trained based on each first type significance and each second type significance respectively to obtain the cyclic training model set. Specifically, based on each first type significance, the features to be rejected that meet a preset rejection significance requirement are rejected from the first model feature set step by step, and the first initial training model is iteratively trained and updated based on the first model feature set after each rejection of one feature to be rejected, so as to obtain one first model element, until a preset first circulation flow termination condition is reached, thereby obtaining the first cyclic training model set. Further, based on each second type significance, a target feature meeting a preset significance requirement is selected from the second model feature set and added to the first model feature set, and the updated first initial training model is cyclically trained once based on the first model feature set after each addition of one target feature, so as to obtain one second model element, until a preset second circulation flow termination condition is reached, thereby obtaining the second cyclic training model set.
Step S40, selecting a target training model from the first initial training model and the cyclic training model set based on the configuration parameters;
in this embodiment, it should be noted that the configuration parameters include a model selection policy.
A target training model is selected from the first initial training model and the cyclic training model set based on the configuration parameters. Specifically, based on the model selection strategy, the model that best accords with the model selection strategy is selected as the target training model from among the first initial training model and the elements of the cyclic training model set.
Wherein the step of selecting a target training model from the first initial training model and the set of cyclic training models based on the configuration parameters comprises:
step S41, obtaining a model selection strategy in the parameter configuration, wherein the model selection strategy comprises an AUC value and an AIC value;
in this embodiment, it should be noted that the AUC value is a criterion for evaluating a training model: the greater the AUC value, the better the training model. The AUC value is the area enclosed between the ROC (receiver operating characteristic) curve and the coordinate axis, and this area is not greater than 1, where the ROC curve is drawn according to a series of different classification thresholds (demarcation values or decision thresholds), with the true positive rate (sensitivity) as the ordinate and the false positive rate (1 - specificity) as the abscissa. The AIC value is a value calculated based on the AIC criterion, which is a standard for measuring the goodness of fit of a statistical model.
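For concreteness, the AIC mentioned above is conventionally 2k - 2·ln(L); for a binary classifier such as logistic regression, ln(L) is the Bernoulli log-likelihood of the predicted probabilities. A minimal sketch (the helper name is illustrative, not from the patent):

```python
import math

def logistic_aic(y, p, n_params):
    """AIC = 2k - 2 ln(L), where ln(L) is the Bernoulli log-likelihood of
    predicted probabilities p against binary labels y and k = n_params."""
    ll = sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
             for yi, pi in zip(y, p))
    return 2 * n_params - 2 * ll
```

A model with more parameters must improve the likelihood enough to offset the 2k penalty, which is why the smallest AIC is preferred in step S43 below.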
Step S42, if the model selection strategy is the AUC value, comparing the AUC values of the elements in the circulating training model set to select the element corresponding to the largest AUC value as the target training model;
In this embodiment, if the model selection policy is the AUC value, the AUC values of the elements in the cyclic training model set are compared to select the element corresponding to the largest AUC value as the target training model. Specifically, if the model selection policy is the AUC value, the AUC values are compared to obtain the largest AUC value, and the training model corresponding to the largest AUC value is used as the target training model, where the candidate training models include the first initial training model and the elements in the cyclic training model set.
And step S43, if the model selection strategy is the AIC value, comparing the AIC values of all elements in the cyclic training model set to select the element corresponding to the smallest AIC value as the target training model.
In this embodiment, if the model selection policy is the AIC value, the AIC values of the elements in the cyclic training model set are compared to select an element corresponding to the minimum AIC value as the target training model, and specifically, if the model selection policy is the AIC value, the AIC values are compared to obtain the minimum AIC value, and a training model corresponding to the minimum AIC value is used as the target training model, where the training model includes a first initial training model and the elements in the cyclic training model set.
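The two selection branches (largest AUC in step S42, smallest AIC in step S43) can be sketched as a single dispatch; the dictionary layout and function name are illustrative assumptions, not the patent's interface:

```python
def select_model(models, strategy):
    """models: list of dicts such as {"name": ..., "auc": ..., "aic": ...},
    covering the first initial training model and the cyclic training model
    set. Pick the largest AUC or the smallest AIC per the configured model
    selection strategy."""
    if strategy == "AUC":
        return max(models, key=lambda m: m["auc"])
    if strategy == "AIC":
        return min(models, key=lambda m: m["aic"])
    raise ValueError("unknown model selection strategy: " + strategy)
```

Note the opposite directions of the two comparisons: AUC is a benefit (bigger is better) while AIC is a penalized cost (smaller is better).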
And step S50, generating visual data corresponding to the target training model, and feeding back the visual data to the client.
In this embodiment, it should be noted that, the visual data includes alternative feature visual data, model selection summary visual data and training process visual data, where the alternative feature is a feature in the feature set to be trained, and the model selection summary data includes summary data for performing model selection on model elements in the first initial training model and the cyclic training model set.
Visual data corresponding to the target training model is generated and fed back to the client. Specifically, visual data corresponding to the acquisition process of the target training model is generated, where the acquisition process includes a feature selection process, a model training process, a model selection process, and the like, and the visual data is then fed back to the visual interface of the client for display to the user. The feature selection process is the process of selecting features from the feature set to be trained; the model training process is the process of training the target model, where the target model includes the preset model to be trained, the first initial training model, the model elements, and the like; and the model selection process is the process of selecting the target training model based on a preset model selection strategy.
Wherein the client comprises a visual interface,
The step of generating the visual data corresponding to the target training model and feeding back the visual data to the client comprises the following steps:
step S51, obtaining alternative characteristic data, selection summary data and training process data corresponding to a model selection process of the target training model;
In this embodiment, the model selection process of the target training model includes a model iterative training process, a feature selection process, a model selection process, and the like, where the feature selection process is a process of rejecting the feature to be rejected, and the model selection process is a process of selecting the target training model based on a preset model selection policy.
And acquiring alternative characteristic data, selection summary data and training process data corresponding to a model selection process of the target training model, and specifically acquiring the alternative characteristic data of the characteristic selection process, the selection summary data of the model selection process and the training process data of the model iterative training process in real time.
And step S52, generating visual data which corresponds to the alternative characteristic data, the selection summary data and the training process data together, and feeding back the visual data to the visual interface in real time.
In this embodiment, the visual data includes graphic data, table data, and the like.
The visual data jointly corresponding to the alternative feature data, the selection summary data and the training process data is generated and fed back to the visual interface in real time. Specifically, the visual data jointly corresponding to the alternative feature data, the selection summary data and the training process data is generated in real time and fed back to the visual interface in real time, where the time interval for the real-time feedback can be set by a user of the stepwise model selection server, and the client user can query the visual data on the client in real time.
According to the method of this embodiment, configuration parameters sent by the client associated with the server are received and the feature set to be trained is acquired; the preset model to be trained is trained based on the feature set to be trained and the configuration parameters to obtain the first initial training model; the first type significance and the second type significance corresponding to the feature set to be trained are calculated respectively; the first initial training model is cyclically trained based on each first type significance and each second type significance respectively to obtain the cyclic training model set; the target training model is selected from the first initial training model and the cyclic training model set based on the configuration parameters; and visual data corresponding to the target training model is generated and fed back to the client.
That is, the present embodiment provides a code-free, distributed and visual modeling method for the stepwise selection mode. A user only needs to set the necessary configuration parameters and send them to the stepwise model selection server through the client, and the stepwise model selection server feeds back the visual data and the stepwise model selection result corresponding to the stepwise model selection process. In other words, the client and the stepwise model selection server are communicatively connected to perform the modeling, so that distributed modeling is realized. Compared with performing the stepwise selection mode on a single machine, the modeling efficiency of the stepwise selection mode is therefore improved, and the stepwise model selection result corresponding to the acquired modeling parameters is converted into visual data and fed back to the client.
Further, referring to fig. 3, in another embodiment of the stepwise model selection method according to the first embodiment of the present application, the feature set to be trained comprises a first model feature set and a second model feature set, the cyclic training model set comprises a first cyclic training model set and a second cyclic training model set,
The step of performing a cyclic training on the first initial training model based on each of the first type of salience and each of the second type of salience, respectively, to obtain a cyclic training model set includes:
Step S31, eliminating the features to be eliminated, which meet the requirement of eliminating the saliency in the first model feature set, based on the saliency of each first type;
In this embodiment, based on each first type significance, the features to be rejected that meet a preset rejection significance requirement are rejected from the first model feature set. Specifically, based on each first type significance, the feature with the lowest significance in the first model feature set is selected as the feature to be selected, and whether the feature to be selected meets the preset rejection significance requirement is determined. If the feature to be selected meets the preset rejection significance requirement, the feature to be selected is taken as the feature to be rejected; if not, the first circulation flow corresponding to the first model feature set is exited. A schematic diagram of the first circulation flow is shown in fig. 4, where the data is the training data corresponding to each feature in the feature set to be trained, the training model is the preset model to be trained, the features added to the model are the features in the first model feature set, and the threshold is the preset rejection significance threshold.
The step of eliminating the features to be eliminated, which meet the requirement of eliminating the preset significance, in the first model feature set based on the first type significance comprises the following steps:
Step S311, comparing the saliency of each first type, so as to select the feature with the lowest saliency in the first model feature set as the feature to be selected;
in this embodiment, the saliency of each first type is compared to select a feature with the lowest saliency in the first model feature set as a feature to be selected, and specifically, the pearson correlation value corresponding to each saliency of the first type is compared to select a feature with the largest pearson correlation value as the feature to be selected.
Step S312, comparing the significance to be selected of the feature to be selected with a preset eliminating significance threshold;
In this embodiment, it should be noted that the preset threshold of eliminating salience may be set by a user, and the salience to be selected is the salience of the first type of the feature to be selected.
Comparing the significance to be selected of the feature to be selected with a preset rejection significance threshold, and specifically comparing the pearson correlation value corresponding to the significance to be selected of the feature to be selected with the rejection pearson correlation threshold corresponding to the preset rejection significance threshold.
Step S313, if the significance to be selected is smaller than the preset rejection significance threshold, determining that the feature to be selected meets the preset rejection significance requirement, and taking the feature to be selected as the feature to be rejected.
In this embodiment, if the significance to be selected is smaller than the preset rejection significance threshold, the feature to be selected is determined to meet the preset rejection significance requirement and is taken as the feature to be rejected. Specifically, if the significance to be selected is smaller than the preset rejection significance threshold, the Pearson correlation value corresponding to the significance to be selected is greater than the rejection Pearson correlation threshold, so the feature to be selected is not significant; the feature to be selected is therefore determined to meet the preset rejection significance requirement and is taken as the feature to be rejected. If the significance to be selected is greater than or equal to the preset rejection significance threshold, the Pearson correlation value corresponding to the significance to be selected is less than or equal to the rejection Pearson correlation threshold; the feature to be selected is therefore determined not to meet the preset rejection significance requirement, and the first circulation flow is exited.
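The check just described — pick the least significant included feature, then test it against the rejection threshold — can be sketched in tail-probability terms (where a larger value means less significant, mirroring the inverted threshold comparison in the text); the function name and default threshold are assumptions:

```python
def pick_feature_to_remove(p_values, removal_threshold=0.1):
    """p_values: {feature: tail probability} for features currently in the
    model; the least significant feature is the one with the LARGEST value.
    Return it if it fails the threshold, or None to signal that the
    elimination loop should stop."""
    feat = max(p_values, key=p_values.get)
    if p_values[feat] > removal_threshold:
        return feat
    return None
```

Returning None corresponds to exiting the first circulation flow when even the weakest feature is still significant.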
Step S32, based on the first model feature set after rejection, carrying out cyclic training update on the first initial training model until the feature to be rejected does not exist in the first model feature set, and obtaining the first cyclic training model set;
In this embodiment, the first initial training model is cyclically trained and updated based on the first model feature set after rejection until no feature to be rejected exists in the first model feature set, so as to obtain the first cyclic training model set. Specifically, the training data corresponding to each feature of the first model feature set after rejection is input into the first initial training model to iteratively train and update it, and the updated first initial training model is taken as one first model element. Further, the first circulation flow is executed repeatedly, that is, another feature to be rejected is rejected from the first model feature set after rejection, and the updated first initial training model is iteratively trained and updated again based on the newly rejected first model feature set to obtain another first model element. The above process is repeated to obtain a plurality of first model elements, until the first circulation flow reaches the first circulation flow termination condition.
Wherein the configuration parameters include iterative training completion decision conditions, the first set of cyclic training models includes one or more first model elements,
The step of circularly training and updating the first initial training model based on the first model feature set after the elimination until the feature to be eliminated does not exist in the first model feature set, and the step of obtaining the first circularly training model set comprises the following steps:
step S321, performing iterative training update on the first initial training model based on the eliminated first model feature set until the first initial training model meets the iterative training completion judgment condition, so as to obtain one of the first model elements;
in this embodiment, it should be noted that the conditions for determining completion of iterative training include reaching the maximum number of iterations, reaching the minimum convergence error, and the like.
Step S322, recalculating the first type significance of each element in the first model feature set after rejection, so as to repeatedly perform rejection of the feature to be rejected and iterative training update of the updated first initial training model, until the feature to be rejected does not exist in the first model feature set, and obtaining the first cyclic training model set.
In this embodiment, it should be noted that each time one feature to be rejected is rejected from the first model feature set, the previously updated first initial training model is iteratively trained and updated based on the first model feature set after rejection until it meets the iterative training completion determination condition, thereby obtaining an updated first initial training model, that is, one first model element. This continues until all features to be rejected in the first model feature set have been rejected, whereupon one or more first model elements, that is, the first cyclic training model set, are obtained.
Step S33, selecting target features meeting preset significance requirements from the second model feature set based on the second type significance;
in this embodiment, it should be noted that, the second type of salience may be determined based on a pearson correlation value, and when the pearson correlation value is less than or equal to a preset pearson correlation threshold, it is determined that a feature corresponding to the second type of salience meets a preset salience requirement, that is, the feature corresponding to the second type of salience appears to be significant, and when the pearson correlation value is greater than the preset pearson correlation threshold, it is determined that the feature corresponding to the second type of salience does not meet the preset salience requirement, that is, the feature corresponding to the second salience appears to be non-significant.
A target feature meeting a preset significance requirement is selected from the second model feature set based on each second type significance. Specifically, based on each second type significance, the most significant feature with the highest significance in the second model feature set is selected, and whether the most significant feature meets the preset significance requirement is determined. If the most significant feature meets the preset significance requirement, it is taken as the target feature; if not, the second circulation flow corresponding to the second model feature set is exited. In the model selection flow schematic diagram formed by the second circulation flow and the first circulation flow, the data is the training data corresponding to each feature in the feature set to be trained, the training model is the preset model to be trained, the features added to the model are the features in the first model feature set, the threshold is the preset rejection significance threshold, and the significance corresponds to the preset significance requirement.
The step of selecting the target feature meeting the preset significance requirement in the second model feature set based on each second type significance comprises the following steps:
step S331, comparing the saliency of each second type, so as to select the most significant feature with the highest saliency in the second model feature set;
In this embodiment, the saliency of each second type is compared to select the most significant feature with the highest saliency in the second model feature set, and specifically, the pearson correlation value corresponding to each second type of saliency is compared to select the feature with the smallest pearson correlation value as the most significant feature.
Step S332, comparing the target significance corresponding to the most significant feature with the preset significance threshold;
In this embodiment, it should be noted that the preset saliency threshold may be set by a user, and the target saliency is the second type saliency of the most significant feature.
And comparing the target significance corresponding to the most significant feature with the preset significance threshold, and particularly comparing the pearson correlation value corresponding to the target significance of the most significant feature with the pearson correlation threshold corresponding to the preset significance threshold.
Step S333, if the target saliency is greater than or equal to the preset saliency threshold, determining that the most salient feature meets the preset saliency requirement, and taking the most salient feature as the target feature.
In this embodiment, if the target significance is greater than or equal to the preset significance threshold, it is determined that the most significant feature meets the preset significance requirement, and the most significant feature is taken as the target feature. Specifically, if the target significance is greater than or equal to the preset significance threshold, it indicates that the Pearson correlation value corresponding to the target significance is less than or equal to the Pearson correlation threshold, that is, the most significant feature is significant; it is then determined that the most significant feature meets the preset significance requirement, and the most significant feature is taken as the target feature. If the target significance is less than the preset significance threshold, it indicates that the Pearson correlation value corresponding to the target significance is greater than the Pearson correlation threshold; it is then determined that the most significant feature does not meet the preset significance requirement, and the second circulation flow is jumped out of.
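The decision in steps S332–S333 can be sketched as a single comparison. This is a hedged illustration: the 0.05 threshold is an assumed user-set value, not one fixed by the text, and "significance meets the requirement" is expressed in its equivalent Pearson-correlation-value form:

```python
# Hypothetical sketch of steps S332-S333: the most significant feature
# becomes the target feature only when its Pearson correlation value
# does not exceed the threshold (i.e. its significance is at least the
# preset significance threshold).
PEARSON_THRESHOLD = 0.05  # assumed user-set value

def is_target_feature(pearson_value, threshold=PEARSON_THRESHOLD):
    # significance >= preset threshold  <=>  Pearson value <= threshold
    return pearson_value <= threshold
```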
Step S34, adding the target feature to the first model feature set, and performing cyclic training on the updated first initial training model based on the first model feature set after adding the target feature, until the feature to be eliminated does not exist in the first model feature set after adding the target feature and the target feature does not exist in the second model feature set, so as to obtain the second cyclic training model set.
In this embodiment, it should be noted that the target feature is added to the first model feature set, and cyclic training is performed on the updated first initial training model based on the first model feature set after adding the target feature, until the feature to be eliminated does not exist in the first model feature set after adding the target feature and the target feature does not exist in the second model feature set, to obtain the second cyclic training model set. Specifically, the target feature is added to the first model feature set, the first model feature set after adding the target feature is added to the first initial training model after the last iterative training update, and iterative training update is performed on that model to obtain the updated first initial training model, which is taken as one of the second model elements. Further, the first circulation flow is repeatedly executed on the first model feature set after adding the target feature until the feature to be eliminated does not exist in it, and then the second circulation flow is repeatedly executed to select a new target feature. This alternation of eliminating features and selecting target features continues until the first model feature set added with the target feature reaches the preset first circulation flow termination condition and the second model feature set reaches the preset second circulation flow termination condition, that is, until the feature to be eliminated does not exist in the first model feature set and the target feature does not exist in the second model feature set.
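The alternation of the two circulation flows in step S34 can be sketched as a standard stepwise loop. This is a hedged illustration, not the patent's implementation: `p_in` and `p_out` stand in for the Wald and score chi-square significance tests, model training is abstracted away (each accepted change simply records the current feature set as a "model element"), and the text's significance comparisons (higher is better) are written in the equivalent p-value form (lower is better):

```python
# Sketch of step S34's combined loops, under the assumptions above.
# Assumes entry_p < removal_p, the usual guard against cycling.
def stepwise_select(features, entry_p, removal_p, p_in, p_out):
    selected, remaining = [], list(features)
    model_elements = []  # feature set after each accepted change
    while True:
        added = False
        # Second circulation flow: add the most significant candidate.
        if remaining:
            scores = {f: p_out(f, selected) for f in remaining}
            best = min(scores, key=scores.get)
            if scores[best] <= entry_p:
                remaining.remove(best)
                selected.append(best)
                model_elements.append(list(selected))
                added = True
        # First circulation flow: eliminate features that no longer meet
        # the retention requirement, one at a time.
        while selected:
            scores = {f: p_in(f, selected) for f in selected}
            worst = max(scores, key=scores.get)
            if scores[worst] >= removal_p:
                selected.remove(worst)
                remaining.append(worst)
                model_elements.append(list(selected))
            else:
                break
        if not added:
            break  # neither flow can make progress: both terminate
    return selected, model_elements
```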
Wherein the second set of cyclic training models comprises one or more second model elements,
The step of adding the target feature to the first model feature set, and performing cyclic training on the updated first initial training model based on the first model feature set after adding the target feature until the feature to be eliminated does not exist in the first model feature set after adding the target feature and the target feature does not exist in the second model feature set, and the step of obtaining the second cyclic training model set includes:
Step S341, adding the target feature into the first model feature set to update the first model feature set and the second model feature set, and obtaining an updated first model feature set and an updated second model feature set;
In this embodiment, the target feature is added to the first model feature set to update the first model feature set and the second model feature set, so as to obtain an updated first model feature set and an updated second model feature set. Specifically, the target feature is moved from the second model feature set into the first model feature set, so that the number of features and the information included in each set are updated, and the updated first model feature set and the updated second model feature set are obtained. For example, assuming that the first model feature set includes a feature X 2, the second model feature set includes a feature X 1, a feature X 3 and a feature X 4, and the target feature is X 1, then the updated first model feature set includes the feature X 1 and the feature X 2, and the updated second model feature set includes the feature X 3 and the feature X 4.
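Step S341's update is a pair of set operations: the target feature moves from the second (candidate) set into the first (in-model) set. A minimal sketch with illustrative feature names:

```python
# Sketch of step S341: moving the target feature between the two sets.
first_set = {"X2"}                # features already in the model
second_set = {"X1", "X3", "X4"}   # candidate features
target = "X1"

second_set.discard(target)
first_set.add(target)
```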
Step S342, performing iterative training update on the first initial training model based on the updated first model feature set, to obtain one of the second model elements;
In this embodiment, iterative training update is performed on the first initial training model based on the updated first model feature set to obtain one of the second model elements. Specifically, the updated first model feature set is added to the first initial training model updated last time, and iterative training update is performed on it until the iterative training completion determination condition is reached, thereby obtaining the first initial training model updated this time, that is, one of the second model elements.
Step S343, recalculating the first type significance of each element in the updated first model feature set, so as to repeatedly perform the elimination of the feature to be eliminated and the iterative training update of the updated first initial training model, obtain one or more second model elements, and jump out the first circulation flow corresponding to the first model feature set until the feature to be eliminated does not exist in the first model feature set;
In this embodiment, it should be noted that, after adding one target feature to the first model feature set, the first circulation process is re-executed until the first model feature set does not have the feature to be removed, so as to obtain one or more second model elements.
Step S344, recalculate the second type significance of each element in the updated second model feature set, so as to repeatedly select the target feature in the second model feature set, add the target feature into the first model feature set, and repeatedly execute the first circulation flow to obtain one or more second model elements, until the target feature does not exist in the second model feature set, and jump out of the second circulation flow corresponding to the second model feature set.
In this embodiment, the second type significance of each element in the updated second model feature set is recalculated, so as to repeatedly select the target feature in the second model feature set, add the target feature to the first model feature set, and repeatedly execute the first circulation flow to obtain one or more second model elements, until the target feature does not exist in the second model feature set, at which point the second circulation flow corresponding to the second model feature set is jumped out of. Specifically, the second type significance of each element in the updated second model feature set is recalculated, a target feature is reselected based on each recalculated second type significance and added to the first model feature set, and the first circulation flow is re-executed until the feature to be removed does not exist in the first model feature set, obtaining one or more second model elements. Further, a target feature continues to be reselected in the second model feature set so as to re-execute the first circulation flow, until the target feature does not exist in the second model feature set, thereby obtaining the second cyclic training model set.
According to this embodiment, the features to be eliminated, which meet the preset elimination significance requirement, are eliminated from the first model feature set based on each first type significance, and the first initial training model is cyclically trained and updated based on the first model feature set after elimination until the feature to be eliminated does not exist in the first model feature set, obtaining the first cyclic training model set. Further, the target features meeting the preset significance requirement are selected from the second model feature set based on each second type significance and added to the first model feature set, and the updated first initial training model is cyclically trained based on the first model feature set after adding the target features, until the feature to be eliminated does not exist in the first model feature set after adding the target features and the target feature does not exist in the second model feature set, obtaining the second cyclic training model set.
That is, this embodiment gradually eliminates the features to be eliminated in the first model feature set based on each first type significance and iteratively trains and updates the first initial training model based on the first model feature set after elimination to obtain the first model elements, and gradually selects the target features in the second model feature set based on each second type significance to add them to the first model feature set, so as to iteratively train and update the first initial training model to obtain the second model elements, until the feature to be eliminated does not exist in the first model feature set and the target feature does not exist in the second model feature set, obtaining the cyclic training model set. Model selection in a stepwise selection mode is thereby realized, laying a foundation for stepwise-selection model selection with code-free distributed modeling and visual modeling, and thus for solving the technical problems of the high threshold and low efficiency of stepwise-selection modeling in the prior art.
Further, referring to fig. 6, in another embodiment of the progressive model selection method according to the first embodiment of the present application, the progressive model selection method is applied to a client, and the progressive model selection method includes:
Step A10, receiving a model selection task, and sending configuration parameters corresponding to the model selection task to a server associated with the client, so that the server performs model selection based on the configuration parameters to obtain a target training model, and obtains visual data corresponding to the target training model to send the visual data to the client;
In this embodiment, it should be noted that the model selection task includes a target model requirement, where the target model requirement is determined by the configuration parameters, and the configuration parameters include parameters such as a large iteration coefficient (i.e., a maximum number of iterations), a minimum convergence error, and a model selection mode.
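As an illustration only, the configuration payload might look like the dictionary below; every key name is an assumption, since the text names only the three kinds of parameters:

```python
# Hypothetical configuration parameters for a model selection task; only
# the parameter kinds named in the text are represented, and every key
# name here is an assumption.
config_params = {
    "max_iterations": 100,          # the "large iteration coefficient"
    "min_convergence_error": 1e-6,  # minimum convergence error
    "selection_mode": "stepwise",   # model selection mode
}
```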
Receiving a model selection task and sending configuration parameters corresponding to the model selection task to a server associated with the client, so that the server performs model selection based on the configuration parameters, obtains a target training model, obtains visual data corresponding to the target training model, and sends the visual data to the client. Specifically, the client receives the model selection task and matches the configuration parameters corresponding to the model selection task in a preset local database, or the user sets the configuration parameters based on the model selection task; the configuration parameters are then sent to the server associated with the client. The server performs training update on the preset model to be trained based on the configuration parameters and a locally obtained feature set to be trained to obtain a first initial training model, performs cyclic training update on the first initial training model to obtain a cyclic training model set, selects a model conforming to a preset model selection strategy from the first initial training model and the cyclic training model set as the target training model, converts the process data corresponding to the target training model into the visual data, and feeds the visual data back to the client, wherein the process data includes alternative feature data, selection summary data and training process data corresponding to the model selection process of the target training model.
And step A20, receiving the visual data fed back by the server side, and displaying the visual data on a preset visual interface.
In this embodiment, it should be noted that, the client may query, in real time, the visual data corresponding to the process data of the server on the preset visual interface, and may query the process data in a process of performing model selection or after model selection is completed, where the client is in communication connection with the server.
In this embodiment, a model selection task is received, and configuration parameters corresponding to the model selection task are sent to a server associated with the client, so that the server performs model selection based on the configuration parameters, obtains a target training model, obtains visual data corresponding to the target training model, and sends the visual data to the client; the visual data fed back by the server are then received and displayed on a preset visual interface. That is, the present embodiment provides a model selection method for code-free distributed modeling and visual modeling: the user only needs to set and send the necessary configuration parameters to the server through the client, and the server feeds back the corresponding visual data. The embodiment thus implements distributed modeling and improves modeling efficiency when performing model selection, and the model selection process places no requirement on the code development capability of the user, reducing the capability threshold for modeling personnel. Because the server converts the process data corresponding to the target training model into visual data and feeds it back to the client, the capability threshold for modeling personnel is further reduced; the visual data is also convenient for modeling personnel to understand and read, which can further improve their modeling efficiency, so that the technical problems of the high threshold and low efficiency of stepwise-selection modeling in the prior art are solved.
Referring to fig. 7, fig. 7 is a schematic device structure diagram of a hardware running environment according to an embodiment of the present application.
As shown in fig. 7, the progressive model selection apparatus may include: a processor 1001, such as a CPU, memory 1005, and a communication bus 1002. Wherein a communication bus 1002 is used to enable connected communication between the processor 1001 and a memory 1005. The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Optionally, the progressive model selection device may further include a user interface, a network interface, a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and the like. The user interface may include a display screen (Display) and an input sub-module such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface).
It will be appreciated by those skilled in the art that the progressive model selection device structure shown in fig. 7 does not constitute a limitation of the progressive model selection device, and may include more or fewer components than shown, or may combine certain components, or may be a different arrangement of components.
As shown in fig. 7, the memory 1005, which is a computer-readable storage medium, may include an operating system, a network communication module, and a progressive model selection program. The operating system is a program that manages and controls the hardware and software resources of the progressive model selection device, supporting the execution of the progressive model selection program and other software and/or programs. The network communication module is used to enable communication between components within the memory 1005 and with other hardware and software in the progressive model selection system.
In the progressive model selection apparatus shown in fig. 7, a processor 1001 is configured to execute a progressive model selection program stored in a memory 1005, implementing the steps of the progressive model selection method described in any one of the above.
The specific implementation manner of the step-by-step model selecting device of the present application is substantially the same as that of each embodiment of the step-by-step model selecting method described above, and will not be described herein.
The embodiment of the application also provides a gradual model selection device, which is applied to the server and comprises:
The first training module is used for receiving configuration parameters sent by a client associated with the server, acquiring a feature set to be trained, and training a preset model to be trained based on the feature set to be trained and the configuration parameters to obtain a first initial training model;
The computing module is used for respectively computing the first type significance and the second type significance corresponding to the feature set to be trained;
The second training module is used for carrying out cyclic training on the first initial training model based on the first type significance and the second type significance respectively to obtain a cyclic training model set;
The selecting module is used for selecting a target training model from the first initial training model and the circulating training model set based on the configuration parameters;
And the feedback module is used for generating the visual data corresponding to the target training model and feeding the visual data back to the client.
Optionally, the second training module includes:
The eliminating sub-module is used for eliminating the to-be-eliminated characteristics which meet the preset eliminating significance requirement in the first model characteristic set based on the first type significance;
The first cyclic training sub-module is used for carrying out cyclic training update on the first initial training model based on the first model feature set after the elimination until the feature to be eliminated does not exist in the first model feature set, and obtaining the first cyclic training model set;
A selecting sub-module, configured to select, based on each of the second type significances, a target feature that meets a preset significance requirement in the second model feature set;
and the second cyclic training sub-module is used for adding the target feature into the first model feature set, and carrying out cyclic training on the updated first initial training model based on the first model feature set added with the target feature until the feature to be eliminated does not exist in the first model feature set added with the target feature and the target feature does not exist in the second model feature set, so as to obtain the second cyclic training model set.
Optionally, the eliminating sub-module includes:
The first selecting unit is used for comparing each first type significance, so as to select the feature with the lowest significance in the first model feature set as the feature to be selected;
The first comparison unit is used for comparing the significance to be selected of the feature to be selected with a preset eliminating significance threshold value;
The first judging unit is configured to judge that the feature to be selected meets the preset rejection significance requirement if the significance to be selected is smaller than the preset rejection significance threshold, and take the feature to be selected as the feature to be rejected.
Optionally, the first cyclic training submodule includes:
the first iterative training unit is used for carrying out iterative training update on the first initial training model based on the first model feature set after the elimination until the first initial training model meets the iterative training completion judging condition, and obtaining one of the first model elements;
And the second iterative training unit is used for recalculating the first type significance of each element in the characteristic set of the first model after being removed so as to repeatedly remove the characteristic to be removed and update the updated first initial training model until the characteristic to be removed does not exist in the characteristic set of the first model, and the first cyclic training model set is obtained.
Optionally, the selecting submodule includes:
The second selecting unit is used for comparing each second type significance, so as to select the most significant feature with the highest significance in the second model feature set;
the second comparison unit is used for comparing the target significance corresponding to the most significant feature with the preset significance threshold value;
and the second judging unit is used for judging that the most significant feature meets the preset significance requirement if the target significance is larger than or equal to the preset significance threshold value, and taking the most significant feature as the target feature.
Optionally, the second cyclic training sub-module includes:
the updating unit is used for adding the target feature into the first model feature set to update the first model feature set and the second model feature set, and obtaining an updated first model feature set and an updated second model feature set;
A third iterative training unit, configured to recalculate the first type significance of each element in the updated first model feature set, so as to repeatedly perform rejection of the feature to be rejected and iterative training update on the updated first initial training model, obtain one or more second model elements, and skip a first circulation flow corresponding to the first model feature set until the feature to be rejected does not exist in the first model feature set;
And the circulation unit is used for recalculating the second type significance of each element in the updated second model feature set so as to repeatedly select the target feature in the second model feature set, adding the target feature into the first model feature set, repeatedly executing the first circulation flow to obtain one or more second model elements until the target feature does not exist in the second model feature set, and jumping out of the second circulation flow corresponding to the second model feature set.
Optionally, the computing module includes:
The first computing sub-module is used for computing the Wald chi-square values corresponding to the elements in the first model feature set;
A second computing sub-module, configured to calculate the first type significance of each element in the first model feature set based on each Wald chi-square value and the degree of freedom of each element in the first model feature set;
a third calculation sub-module, configured to calculate a score chi-square value corresponding to each element in the second model feature set;
and a fourth computing sub-module, configured to compute the second type significance of each element in the second model feature set based on each score chi-square value and the degree of freedom of each element in the second model feature set.
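Both computing paths reduce a chi-square statistic and its degrees of freedom to a significance level. A hedged sketch for the one-degree-of-freedom case follows; a general-df version would instead use a chi-square survival function such as `scipy.stats.chi2.sf(x, df)`:

```python
import math

# Wald chi-square for a single coefficient: (coefficient / std_error)^2.
def wald_chi2(coef, std_err):
    return (coef / std_err) ** 2

# For df == 1 the chi-square survival function (the p-value) reduces to
# erfc(sqrt(x / 2)); a smaller p-value means higher significance.
def chi2_pvalue_df1(statistic):
    return math.erfc(math.sqrt(statistic / 2.0))
```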
Optionally, the selecting module includes:
An acquisition sub-module, configured to acquire a model selection policy in the configuration parameters, where the model selection policy includes an AUC value and an AIC value;
A first selecting sub-module, configured to compare the AUC values of the elements in the cyclic training model set if the model selection policy is the AUC value, so as to select the element corresponding to the largest AUC value as the target training model;
And the second selecting sub-module is used for comparing the AIC values of all elements in the circulating training model set if the model selection strategy is the AIC value so as to select the element corresponding to the minimum AIC value as the target training model.
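The two selection strategies amount to a max/min over per-model metrics. A hedged sketch, in which the dict-based model representation and the `auc`/`aic` key names are assumptions for illustration:

```python
# Hypothetical sketch of the selecting module: the largest AUC wins
# under the "AUC" strategy, the smallest AIC under the "AIC" strategy.
def pick_target_model(models, strategy):
    """models: list of dicts with illustrative 'auc' and 'aic' keys."""
    if strategy == "AUC":
        return max(models, key=lambda m: m["auc"])
    if strategy == "AIC":
        return min(models, key=lambda m: m["aic"])
    raise ValueError(f"unknown model selection strategy: {strategy}")

models = [
    {"name": "m1", "auc": 0.81, "aic": 120.4},
    {"name": "m2", "auc": 0.86, "aic": 131.0},
]
```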
Optionally, the feedback module includes:
the acquisition sub-module is used for acquiring alternative characteristic data, selection summary data and training process data corresponding to a model selection process of the target training model;
And the feedback sub-module is used for generating the visual data which corresponds to the alternative characteristic data, the selection summary data and the training process data together and feeding the visual data back to the visual interface in real time.
The specific implementation manner of the step-by-step model selecting device of the present application is substantially the same as that of each embodiment of the step-by-step model selecting method described above, and will not be described herein.
To achieve the above object, an embodiment of the present application further provides a progressive model selection apparatus, which is applied to a client, the progressive model selection apparatus including:
The sending module is used for receiving a model selection task and sending configuration parameters corresponding to the model selection task to a server associated with the client, so that the server performs model selection based on the configuration parameters and the acquired feature set to be trained, obtains a target training model, and obtains visual data corresponding to the target training model so as to send the visual data to the client;
And the receiving module is used for receiving the visual data fed back by the server and displaying the visual data on a preset visual interface.
The specific implementation manner of the step-by-step model selecting device of the present application is substantially the same as that of each embodiment of the step-by-step model selecting method described above, and will not be described herein.
Embodiments of the present application provide a readable storage medium storing one or more programs, which can further be executed by one or more processors to implement the steps of the progressive model selection method described in any one of the above.
The specific implementation of the readable storage medium of the present application is substantially the same as the above embodiments of the progressive model selection method, and will not be described herein.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the application, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein, or any application, directly or indirectly, within the scope of the application.

Claims (11)

1. The gradual model selection method is characterized by being applied to a server, and comprises the following steps:
Receiving configuration parameters sent by a client associated with the server, acquiring a feature set to be trained, and training a preset model to be trained based on the feature set to be trained and the configuration parameters to obtain a first initial training model, wherein the preset model to be trained only comprises intercept items;
respectively calculating the first type significance and the second type significance corresponding to the feature set to be trained;
Based on the first type significance and the second type significance respectively, carrying out cyclic training on the first initial training model to obtain a cyclic training model set;
Selecting a target training model from the first initial training model and the set of cyclic training models based on the configuration parameters;
Generating visual data corresponding to the acquisition process of the target training model, and feeding the visual data back to the client;
The feature set to be trained comprises a first model feature set and a second model feature set, the cyclic training model set comprises a first cyclic training model set and a second cyclic training model set,
The step of performing cyclic training on the first initial training model based on each of the first type significance and each of the second type significance, respectively, to obtain a cyclic training model set includes:
based on each first type significance, eliminating the features to be eliminated which meet a preset elimination significance requirement in the first model feature set;
Based on the first model feature set after elimination, carrying out cyclic training update on the first initial training model until the feature to be eliminated does not exist in the first model feature set, and obtaining the first cyclic training model set;
selecting target features meeting preset significance requirements from the second model feature set based on the second type significance;
Adding the target feature into the first model feature set, and performing cyclic training on the updated first initial training model based on the first model feature set added with the target feature until the feature to be eliminated does not exist in the first model feature set added with the target feature and the target feature does not exist in the second model feature set, so as to obtain the second cyclic training model set.
2. The progressive model selection method of claim 1 wherein the step of eliminating features to be eliminated that meet a preset elimination significance requirement in the first model feature set based on each of the first type significance comprises:
Comparing the salience of each first type to select the feature with the lowest salience from the first model feature set as the feature to be selected;
Comparing the significance to be selected of the feature to be selected with a preset eliminating significance threshold value;
If the significance to be selected is smaller than the preset eliminating significance threshold value, judging that the feature to be selected meets the preset eliminating significance requirement, and taking the feature to be selected as the feature to be eliminated.
3. The stepwise model selection method of claim 1, wherein the configuration parameters comprise an iterative-training completion judging condition, and the first cyclic training model set comprises one or more first model elements,
and the step of cyclically training and updating the first initial training model based on the first model feature set after elimination, until no feature to be eliminated exists in the first model feature set, to obtain the first cyclic training model set comprises:
iteratively training and updating the first initial training model based on the first model feature set after elimination, until the first initial training model meets the iterative-training completion judging condition, to obtain one of the first model elements;
and recalculating the first-type significance of each element in the first model feature set after elimination, so as to repeatedly perform the elimination of the feature to be eliminated and the iterative training and updating of the updated first initial training model, until no feature to be eliminated exists in the first model feature set, thereby obtaining the first cyclic training model set.
4. The stepwise model selection method of claim 1, wherein the step of selecting, based on each second-type significance, a target feature that meets a preset selection significance requirement in the second model feature set comprises:
comparing the second-type significances to select the most significant feature, namely the feature with the highest significance, from the second model feature set;
comparing the target significance corresponding to the most significant feature with a preset selection significance threshold;
and if the target significance is greater than or equal to the preset selection significance threshold, determining that the most significant feature meets the preset selection significance requirement, and taking the most significant feature as the target feature.
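Mirroring the sketch for claim 2, the selection test of claim 4 can be illustrated as follows (all names hypothetical, significance again taken as a value in [0, 1] where higher means more significant):

```python
def pick_feature_to_add(significances, threshold):
    """significances: dict mapping each candidate feature in the second model
    feature set to its second-type significance. Returns the target feature,
    or None when no candidate reaches the selection threshold."""
    if not significances:
        return None
    # The most significant candidate is the only one considered for entry.
    candidate = max(significances, key=significances.get)
    # It is selected only when its significance meets or exceeds the threshold.
    return candidate if significances[candidate] >= threshold else None
```

Note the asymmetry with elimination: entry requires significance at or above the threshold, while removal requires significance strictly below it.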
5. The stepwise model selection method of claim 1, wherein the second cyclic training model set comprises one or more second model elements,
and the step of adding the target feature to the first model feature set and cyclically training the updated first initial training model based on the first model feature set with the target feature added, until no feature to be eliminated exists in the first model feature set with the target feature added and no target feature exists in the second model feature set, to obtain the second cyclic training model set comprises:
adding the target feature to the first model feature set so as to update the first model feature set and the second model feature set, and obtaining an updated first model feature set and an updated second model feature set;
iteratively training and updating the first initial training model based on the updated first model feature set to obtain one of the second model elements;
recalculating the first-type significance of each element in the updated first model feature set, so as to repeatedly perform the elimination of the feature to be eliminated and the iterative training and updating of the updated first initial training model, obtaining one or more second model elements, and jumping out of a first cyclic flow corresponding to the first model feature set when no feature to be eliminated exists in the first model feature set;
and recalculating the second-type significance of each element in the updated second model feature set, so as to repeatedly select the target feature in the second model feature set and add the target feature to the first model feature set to repeatedly execute the first cyclic flow, obtaining one or more second model elements, and jumping out of a second cyclic flow corresponding to the second model feature set when no target feature exists in the second model feature set.
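The nested structure of claims 1 and 5, with an inner elimination loop and an outer selection loop, can be sketched as below. This is an interpretive illustration only: the significance callables, the fitting function, and all names are hypothetical stand-ins for the claimed training and significance-calculation steps.

```python
def stepwise_select(in_model, candidates, elim_sig, add_sig,
                    elim_thr, add_thr, fit):
    """in_model / candidates: sets of feature names (first / second model
    feature sets). elim_sig / add_sig: callables returning a dict of
    {feature: significance} for the given feature set. fit: trains a model
    on a feature set and returns it. Returns the models produced along the way
    (the cyclic training model set)."""
    models = []
    while True:
        # First cyclic flow (claim 5): eliminate the least significant feature
        # while it falls below the elimination threshold, refitting each time.
        while True:
            sig = elim_sig(in_model)
            if not sig:
                break
            worst = min(sig, key=sig.get)
            if sig[worst] >= elim_thr:
                break  # no feature to be eliminated remains
            in_model.remove(worst)
            models.append(fit(in_model))
        # Second cyclic flow: add the most significant candidate if it
        # reaches the selection threshold; otherwise the procedure ends.
        sig = add_sig(candidates)
        if not sig:
            break
        best = max(sig, key=sig.get)
        if sig[best] < add_thr:
            break  # no target feature remains
        candidates.remove(best)
        in_model.add(best)
        models.append(fit(in_model))
    return models
```

Each refit appends one model element, so the returned list corresponds to the first and second cyclic training model sets from which the target training model is later chosen.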
6. The stepwise model selection method of claim 1, wherein the feature set to be trained comprises the first model feature set and the second model feature set,
and the step of respectively calculating the first-type significance and the second-type significance corresponding to the feature set to be trained comprises:
calculating a Wald chi-square value corresponding to each element in the first model feature set;
calculating the first-type significance of each element in the first model feature set based on each Wald chi-square value and the degrees of freedom of each element in the first model feature set;
calculating a score chi-square value corresponding to each element in the second model feature set;
and calculating the second-type significance of each element in the second model feature set based on each score chi-square value and the degrees of freedom of each element in the second model feature set.
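Claim 6 derives a significance from a chi-square statistic and its degrees of freedom. As an illustrative sketch restricted to one degree of freedom (the claims cover general degrees of freedom; the closed form below holds only for df = 1, and reading "significance" as 1 − p is this sketch's own assumption):

```python
import math

def chi2_sf_df1(x):
    """Tail probability P(X > x) of a chi-square variable with one degree of
    freedom, via the complementary error function (valid for df = 1 only)."""
    return math.erfc(math.sqrt(x / 2.0))

def significance_from_chi2(chi2_value):
    """One possible reading of the claimed 'significance': 1 - p, where p is
    the tail probability of the Wald (or score) chi-square statistic."""
    return 1.0 - chi2_sf_df1(chi2_value)
```

For example, a chi-square value near 3.84 at df = 1 corresponds to p ≈ 0.05, i.e. a significance of about 0.95 under this reading; larger chi-square values yield higher significance.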
7. The stepwise model selection method of claim 1, wherein the step of selecting a target training model from the first initial training model and the cyclic training model set based on the configuration parameters comprises:
obtaining a model selection strategy from the configuration parameters, wherein the model selection strategy is based on either an AUC value or an AIC value;
if the model selection strategy is based on the AUC value, comparing the AUC values of the elements in the cyclic training model set to select the element corresponding to the largest AUC value as the target training model;
and if the model selection strategy is based on the AIC value, comparing the AIC values of the elements in the cyclic training model set to select the element corresponding to the smallest AIC value as the target training model.
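The two branches of claim 7 amount to a max-by-AUC or min-by-AIC choice over the candidate models. A minimal hypothetical sketch (field and strategy names are assumptions, not the patent's identifiers):

```python
def select_target_model(models, strategy):
    """models: list of dicts like {"name": ..., "auc": ..., "aic": ...}.
    strategy "AUC" keeps the model with the largest AUC; strategy "AIC"
    keeps the one with the smallest AIC."""
    if strategy == "AUC":
        return max(models, key=lambda m: m["auc"])
    if strategy == "AIC":
        return min(models, key=lambda m: m["aic"])
    raise ValueError("unknown model selection strategy: %r" % (strategy,))
```

The opposite directions reflect the metrics themselves: a higher AUC indicates better discrimination, while a lower AIC indicates a better fit-complexity trade-off.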
8. The stepwise model selection method of claim 1, wherein the client comprises a visualization interface,
and the step of generating visualization data corresponding to the target training model and feeding the visualization data back to the client comprises:
obtaining candidate feature data, selection summary data, and training process data corresponding to the model selection process of the target training model;
and generating visualization data jointly corresponding to the candidate feature data, the selection summary data, and the training process data, and feeding the visualization data back to the visualization interface in real time.
9. A stepwise model selection method, wherein the stepwise model selection method is applied to a client, the stepwise model selection method comprising:
receiving a model selection task, and sending configuration parameters corresponding to the model selection task to a server associated with the client, so that the server: trains a preset model to be trained comprising only an intercept term, based on the configuration parameters and an acquired feature set to be trained, to obtain a first initial training model; respectively calculates a first-type significance and a second-type significance corresponding to the feature set to be trained, wherein the feature set to be trained comprises a first model feature set and a second model feature set; cyclically trains the first initial training model based on the first-type significance and the second-type significance respectively to obtain a cyclic training model set, wherein the cyclic training model set comprises a first cyclic training model set and a second cyclic training model set; eliminates, based on each first-type significance, a feature to be eliminated that meets a preset elimination significance requirement in the first model feature set; cyclically trains and updates the first initial training model based on the first model feature set after elimination, until no feature to be eliminated exists in the first model feature set, to obtain the first cyclic training model set; selects, based on each second-type significance, a target feature that meets a preset selection significance requirement in the second model feature set; adds the target feature to the first model feature set, and cyclically trains the updated first initial training model based on the first model feature set with the target feature added, until no feature to be eliminated exists in the first model feature set with the target feature added and no target feature exists in the second model feature set, to obtain the second cyclic training model set; selects a target training model from the first initial training model and the cyclic training model set based on the configuration parameters; and acquires visualization data corresponding to the acquisition process of the target training model, so as to send the visualization data to the client;
and receiving the visualization data fed back by the server, and displaying the visualization data on a preset visualization interface.
10. A stepwise model selection device, wherein the stepwise model selection device comprises: a memory, a processor, and a program stored on the memory for implementing the stepwise model selection method,
wherein the memory is used for storing the program for implementing the stepwise model selection method;
and the processor is configured to execute the program for implementing the stepwise model selection method, so as to implement the steps of the stepwise model selection method according to any one of claims 1 to 8 or 9.
11. A readable storage medium, wherein the readable storage medium stores a program for implementing a stepwise model selection method, and the program is executed by a processor to implement the steps of the stepwise model selection method according to any one of claims 1 to 8 or 9.
CN202010024336.7A 2020-01-09 2020-01-09 Gradual model selection method, equipment and readable storage medium Active CN111241745B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010024336.7A CN111241745B (en) 2020-01-09 2020-01-09 Gradual model selection method, equipment and readable storage medium
PCT/CN2020/134035 WO2021139462A1 (en) 2020-01-09 2020-12-04 Stepwise model selection method and device, and readable storage medium


Publications (2)

Publication Number Publication Date
CN111241745A CN111241745A (en) 2020-06-05
CN111241745B true CN111241745B (en) 2024-05-24

Family

ID=70864097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010024336.7A Active CN111241745B (en) 2020-01-09 2020-01-09 Gradual model selection method, equipment and readable storage medium

Country Status (2)

Country Link
CN (1) CN111241745B (en)
WO (1) WO2021139462A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111241746B (en) * 2020-01-09 2024-01-26 深圳前海微众银行股份有限公司 Forward model selection method, apparatus, and readable storage medium
CN111241745B (en) * 2020-01-09 2024-05-24 深圳前海微众银行股份有限公司 Gradual model selection method, equipment and readable storage medium
CN113407680B (en) * 2021-06-30 2023-06-02 竹间智能科技(上海)有限公司 Heterogeneous integrated model screening method and electronic equipment
CN113746899B (en) * 2021-07-29 2023-04-07 济南浪潮数据技术有限公司 Cloud platform access method and device
CN113780582B (en) * 2021-09-15 2023-04-07 杭银消费金融股份有限公司 Wind control feature screening method and system based on machine learning model
CN116704427B (en) * 2023-04-19 2024-01-26 广东建设职业技术学院 3D CNN-based cyclic construction process monitoring method

Citations (7)

Publication number Priority date Publication date Assignee Title
CN105550746A (en) * 2015-12-08 2016-05-04 北京旷视科技有限公司 Training method and training device of machine learning model
WO2017219991A1 (en) * 2016-06-23 2017-12-28 华为技术有限公司 Optimization method and apparatus suitable for model of pattern recognition, and terminal device
CN108875289A (en) * 2017-05-08 2018-11-23 腾讯科技(深圳)有限公司 A kind of algorithm adjustment method, client, background server and system
CN109409528A (en) * 2018-09-10 2019-03-01 平安科技(深圳)有限公司 Model generating method, device, computer equipment and storage medium
CN110298389A (en) * 2019-06-11 2019-10-01 上海冰鉴信息科技有限公司 More wheels circulation feature selection approach and device when training pattern
CN110400215A (en) * 2019-07-31 2019-11-01 浪潮软件集团有限公司 Small micro- Enterprise Credit Rating Model construction method and system towards family, enterprise
CN110543946A (en) * 2018-05-29 2019-12-06 百度在线网络技术(北京)有限公司 method and apparatus for training a model

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US20180314971A1 (en) * 2017-04-26 2018-11-01 Midea Group Co., Ltd. Training Machine Learning Models On A Large-Scale Distributed System Using A Job Server
CN111241745B (en) * 2020-01-09 2024-05-24 深圳前海微众银行股份有限公司 Gradual model selection method, equipment and readable storage medium


Also Published As

Publication number Publication date
CN111241745A (en) 2020-06-05
WO2021139462A1 (en) 2021-07-15

Similar Documents

Publication Publication Date Title
CN111241745B (en) Gradual model selection method, equipment and readable storage medium
US20190034516A1 (en) Method and apparatus for acquiring an evaluation index
CN111210022B (en) Backward model selecting method, apparatus and readable storage medium
US11657612B2 (en) Method and apparatus for identifying video
US10313746B2 (en) Server, client and video processing method
CN113095512A (en) Federal learning modeling optimization method, apparatus, medium, and computer program product
CN114298322B (en) Federal learning method and apparatus, system, electronic device, and computer readable medium
US11367284B2 (en) Method and apparatus for commenting video
EP4239491A1 (en) Method and system for processing data tables and automatically training machine learning model
CN110428114B (en) Fruit tree yield prediction method, device, equipment and computer readable storage medium
CN111428883A (en) Federal modeling method, device and readable storage medium based on backward law
CN111241746B (en) Forward model selection method, apparatus, and readable storage medium
US20230114293A1 (en) Method for training a font generation model, method for establishing a font library, and device
CN111428884A (en) Federal modeling method, device and readable storage medium based on forward law
KR20210091057A (en) Method and apparatus for detecting temporal action of video, electronic device and stroage medium
CN113743607A (en) Training method of anomaly detection model, anomaly detection method and device
CN112561081B (en) Conversion method and device of deep learning model, electronic equipment and storage medium
CN110782340B (en) Interactive modeling method, device and equipment of decision tree model and storage medium
CN112418046B (en) Exercise guiding method, storage medium and system based on cloud robot
CN113505896A (en) Longitudinal federated learning modeling optimization method, apparatus, medium, and program product
CN111767923B (en) Image data detection method, device and computer readable storage medium
CN106202222B (en) Method and device for determining hot event
CN111461971B (en) Image processing method, device, equipment and computer readable storage medium
CN113793298A (en) Pulmonary nodule detection model construction optimization method, equipment, storage medium and product
CN117196957B (en) Image resolution conversion method and device based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant