CN113052328B - Deep learning model production system, electronic device, and storage medium - Google Patents


Publication number
CN113052328B
CN113052328B
Authority
CN
China
Prior art keywords
production line
deep learning
learning model
training
model
Prior art date
Legal status
Active
Application number
CN202110363166.XA
Other languages
Chinese (zh)
Other versions
CN113052328A (en)
Inventor
林达华
曹阳
李兆松
张行程
陈恺
杨冠姝
Current Assignee
Shanghai Sensetime Technology Development Co Ltd
Original Assignee
Shanghai Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Technology Development Co Ltd
Priority to CN202110363166.XA
Publication of CN113052328A
Priority to PCT/CN2021/124453 (WO2022205835A1)
Application granted
Publication of CN113052328B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Feedback Control In General (AREA)

Abstract

The present disclosure relates to a deep learning model production system, an electronic device, and a storage medium. The system includes a user development platform comprising: a production line selection module for displaying a setting list of a selected model production line in response to a selection operation for a model production line in a production line list; a setting module for determining, in response to a setting operation for a first setting item in the setting list, a target deep learning model for implementing a target task, a training mode of the target deep learning model, and a data set for training the target deep learning model; and a training module for training the target deep learning model according to the data set and the training mode, in response to a training trigger operation for the target deep learning model, to obtain the target deep learning model to be deployed. The system enables efficient, flow-based setting operations and automatic generation of the target deep learning model to be deployed.

Description

Deep learning model production system, electronic device, and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a deep learning model production system, an electronic device, and a storage medium.
Background
Deep learning technology is widely applied in computer vision, natural language processing, speech recognition, recommendation systems, and other fields. When applying deep learning techniques, it is usually necessary to rely on specialized technicians to customize the deep learning model required for each application scenario, for example by customizing the data acquisition mode, the network type, and the network parameter configuration.
Disclosure of Invention
The present disclosure proposes a deep learning model production technique.
According to an aspect of the present disclosure, there is provided a deep learning model production system including: a production line selection module, configured to display a setting list of a selected model production line in response to a selection operation for a model production line in a production line list, where the selected model production line is used to generate a target deep learning model to be deployed, the setting list includes a first setting item of the selected model production line, the first setting item is used to set the target deep learning model, a training mode of the target deep learning model, and a data set for training the target deep learning model, and the production line list is used to display production line information of model production lines to be selected; a setting module, configured to determine, in response to a setting operation for the first setting item in the setting list, a target deep learning model for implementing a target task, a training mode of the target deep learning model, and a data set for training the target deep learning model; and a training module, configured to train the target deep learning model according to the data set and the training mode in response to a training trigger operation for the target deep learning model, to obtain the target deep learning model to be deployed.
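The three modules above can be sketched, purely as an illustration (this is not the patented implementation, and every class, method, and field name here is invented), as plain Python objects wired together:

```python
# Hypothetical sketch of the user development platform's three modules.

class ProductionLineSelectionModule:
    """Returns the setting list of a model production line chosen from the list."""
    def __init__(self, production_lines):
        # production_lines maps a production line name to its setting list
        self.production_lines = production_lines

    def select(self, line_name):
        return self.production_lines[line_name]["setting_list"]

class SettingModule:
    """Records the user's choices for the first setting item."""
    def apply(self, settings, model, dataset, training_mode):
        settings.update(model=model, dataset=dataset, training_mode=training_mode)
        return settings

class TrainingModule:
    """Stand-in for the training step; the real system would launch training here."""
    def train(self, settings):
        return f"deployable:{settings['model']}"

lines = {"vehicle-detection": {"setting_list": {}}}
setting_list = ProductionLineSelectionModule(lines).select("vehicle-detection")
settings = SettingModule().apply(setting_list, "det-net-v1", "vehicles-v2",
                                 {"device": "GPU"})
deployed = TrainingModule().train(settings)
```

Calling `select`, `apply`, and `train` in order mirrors the select-set-train flow described in this aspect.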
In one possible implementation, the setting operation for the first setting item includes: a data set setting operation, a network type setting operation, and a training mode setting operation. The determining, in response to a setting operation for the first setting item in the setting list, of a target deep learning model for implementing the target task, a training mode of the target deep learning model, and a data set for training the target deep learning model includes: determining the target deep learning model for implementing the target task in response to the network type setting operation for the target deep learning model; determining the data set for training the target deep learning model in response to the data set setting operation for the data set of the target task; and determining the training mode of the target deep learning model in response to the training mode setting operation for the target deep learning model.
In one possible implementation, the training mode includes a training device for performing training, a training end index, and a specified size of sample data. Training the target deep learning model according to the data set and the training mode to obtain the target deep learning model to be deployed includes: adjusting the size of the sample data in the data set according to the specified size to obtain an adjusted data set; and training the target deep learning model on the training device according to the adjusted data set and the training end index, to obtain the target deep learning model to be deployed.
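As a hedged sketch of this flow (the function names, the (width, height) tuples standing in for sample data, and the round-threshold end index are all assumptions made for illustration):

```python
# Resize every sample to the specified size, then iterate until the
# training-end index is reached (here: an iteration-round threshold).

def adjust_dataset(dataset, specified_size):
    """Return a copy of the data set with every sample resized to specified_size."""
    return [specified_size for _sample in dataset]

def train_until_end_index(dataset, max_rounds):
    """Stub training loop that stops once the round threshold is met."""
    rounds = 0
    while rounds < max_rounds:          # training-end index: round threshold
        for _sample in dataset:
            pass                        # one optimisation step would go here
        rounds += 1
    return {"rounds": rounds, "status": "to_be_deployed"}

adjusted = adjust_dataset([(640, 480), (1280, 720)], specified_size=(224, 224))
result = train_until_end_index(adjusted, max_rounds=3)
```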
In one possible implementation, the setting list further includes a second setting item, where the second setting item is used to set packaging parameters of the target deep learning model to be deployed, the packaging parameters including an encapsulation mode and/or a deployment device, and the user development platform further includes: a deployment module, configured to package the target deep learning model to be deployed in response to a setting operation for the second setting item in the setting list, to obtain an encapsulation file of the target deep learning model to be deployed, where the setting operation for the second setting item includes setting the encapsulation mode and/or the deployment device, and the encapsulation file is used to deploy the target deep learning model to be deployed on the deployment device.
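A hypothetical sketch of this packaging step follows; the encapsulation modes, file names, and device identifiers shown are invented examples, not formats specified by the disclosure:

```python
# The (encapsulation mode, deployment device) pair selects the artefact produced.
SUPPORTED = {("onnx", "gpu"): "model.onnx", ("script", "cpu"): "model.pt"}

def package_model(model_name, encapsulation_mode, deployment_device):
    """Return an encapsulation-file record for a supported packaging combination."""
    key = (encapsulation_mode, deployment_device)
    if key not in SUPPORTED:
        raise ValueError(f"unsupported packaging: {key}")
    return {"model": model_name,
            "file": SUPPORTED[key],
            "device": deployment_device}

bundle = package_model("det-net-v1", "onnx", "gpu")
```

Rejecting unsupported combinations up front matches the idea that the deployment module only packages for devices the production line actually supports.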
In one possible implementation, the setting list further includes a third setting item for importing a data set, a fourth setting item for labeling the data set, and a fifth setting item for evaluating the target deep learning model to be deployed, and the user development platform further includes: an importing module, configured to obtain an original data set of the target task in response to an importing operation for the third setting item in the setting list, where the sample data in the original data set are data that meet a preset data collection standard, the preset data collection standard being used to guide collection of the sample data in the original data set; a labeling module, configured to label the sample data in the original data set according to a preset data labeling standard in response to a labeling operation for the fourth setting item in the setting list, to obtain the data set for training the target deep learning model; and an evaluation module, configured to display, in response to a setting operation for the fifth setting item in the setting list, a performance evaluation result of the target deep learning model to be deployed according to a set network evaluation index, where the network evaluation index is used to evaluate the performance of the deep learning model to be deployed.
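The evaluation module's behaviour might be sketched as follows, with accuracy standing in for whichever network evaluation index was configured (the index name, function signature, and report layout are assumptions):

```python
# Compute the configured network evaluation index over held-out predictions
# and return a displayable performance-evaluation result.

def evaluate(predictions, labels, index_name="accuracy"):
    """Return the value of the chosen evaluation index over a labelled test set."""
    if index_name != "accuracy":
        raise NotImplementedError(index_name)   # other indices omitted in this sketch
    correct = sum(p == y for p, y in zip(predictions, labels))
    return {"index": index_name, "value": correct / len(labels)}

report = evaluate([1, 0, 1, 1], [1, 0, 0, 1])
```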
In one possible implementation, the user development platform further includes a production line purchase item, which is used to enter the production line store platform in response to a click operation on the production line purchase item.
In one possible implementation, the system further includes a production line store platform, where information is transmitted between the production line store platform and the user development platform through a communication interface, and the production line store platform is used to sell model production lines.
In one possible implementation, in a case where a model production line is successfully purchased through the production line store platform, production line information of the successfully purchased model production line is added to the production line list.
In one possible implementation, the system further includes an expert development platform, where the expert development platform and the production line store platform transmit information through a communication interface, and the expert development platform includes: a production line building module, configured to obtain a pre-built model production line in response to a building operation for a model production line, where the pre-built model production line is used to generate a target deep learning model to be deployed; and a production line release module, configured to release the pre-built model production line to the production line store platform in response to a release operation for the pre-built model production line, so that the pre-built model production line can be displayed and purchased on the production line store platform.
In one possible implementation, the building operation includes a network configuration operation for the model production line. Obtaining a pre-built model production line in response to the building operation for the model production line includes: obtaining at least one deep learning model in response to the network configuration operation for the model production line, where the network configuration operation includes an operation of configuring at least one of a network structure, a network algorithm, and algorithm parameters; and pre-training the at least one deep learning model to obtain a pre-trained deep learning model in the pre-built model production line, where the pre-trained deep learning model corresponds to a network type to be set in the setting module of the user development platform.
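The network configuration and pre-training steps can be illustrated roughly as below; the structures, algorithms, and parameters are placeholder values, and the pre-training pass is stubbed:

```python
# Each network-configuration operation fixes a structure, an algorithm, and
# algorithm parameters; a (stubbed) pre-training pass turns the configuration
# into a network type selectable in the user platform's setting module.

def configure_network(structure, algorithm, params):
    """Record one configured network in the model production line."""
    return {"structure": structure, "algorithm": algorithm, "params": params}

def pretrain(config):
    """Stub pre-training: mark the configured network as pre-trained."""
    return dict(config, pretrained=True)

configs = [configure_network("resnet-like", "sgd", {"lr": 0.01}),
           configure_network("vit-like", "adamw", {"lr": 3e-4})]
network_types = [pretrain(c) for c in configs]   # offered in the setting module
```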
In one possible implementation, the building operation further includes: an information editing operation, a training configuration operation, a standard configuration operation, and an index configuration operation for the model production line.
Obtaining a pre-built model production line in response to the building operation for the model production line further includes: obtaining production line information of the pre-built model production line in response to the information editing operation for the model production line, where the production line information is used to identify the pre-built model production line in the production line store platform and the user development platform; obtaining a training mode to be set in the pre-built model production line in response to the training configuration operation for the model production line, where the training mode to be set is used for setting in the setting module of the user development platform; obtaining a preset data collection standard and a preset data labeling standard of the pre-built model production line in response to the standard configuration operation for the model production line, where the preset data collection standard is displayed in an interface of the importing module of the user development platform, and the preset data labeling standard is displayed in an interface of the labeling module of the user development platform; and obtaining a network evaluation index to be set in the pre-built model production line in response to the index configuration operation for the model production line, where the network evaluation index to be set is used for setting in the evaluation module of the user development platform.
In one possible implementation, the target task includes an image processing task, the image processing task including: at least one of image recognition, image segmentation, image classification, and keypoint detection.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to execute the system described above.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described system.
In the embodiments of the present disclosure, the user development platform enables flow-based setting operations for generating a target deep learning model to be deployed, based on a pre-built model production line, so that the target deep learning model to be deployed can be generated automatically and efficiently in response to a training trigger operation for the target deep learning model. This addresses the inability of the prior art to meet the industry's requirements for efficient, streamlined, and automated generation of neural networks.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 illustrates a block diagram of a deep learning model production system according to an embodiment of the present disclosure.
FIG. 2 illustrates a block diagram of a user development platform in accordance with an embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of a production line management interface, according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of a production line list according to an embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of a setting interface of a first setting item according to an embodiment of the present disclosure.
Fig. 6 shows a schematic diagram of a setup interface for a second setup item according to an embodiment of the disclosure.
Fig. 7 shows a schematic diagram of a setting interface of a third setting item according to an embodiment of the present disclosure.
Fig. 8 shows a schematic diagram of a setting interface of a fourth setting item according to an embodiment of the present disclosure.
Fig. 9 shows a schematic diagram of a setting interface of a fifth setting item according to an embodiment of the present disclosure.
Fig. 10 illustrates a block diagram of a production line store platform according to an embodiment of the present disclosure.
Fig. 11 shows a schematic diagram of a production line presentation interface according to an embodiment of the present disclosure.
Fig. 12 shows a block diagram of an expert development platform in accordance with an embodiment of the present disclosure.
FIG. 13 illustrates a schematic diagram of a management interface for a model production line, according to an embodiment of the present disclosure.
Fig. 14 shows a schematic diagram of a network configuration interface according to an embodiment of the present disclosure.
Fig. 15 shows a schematic diagram of an information editing interface according to an embodiment of the present disclosure.
FIG. 16 illustrates a schematic diagram of a training configuration interface, according to an embodiment of the present disclosure.
Fig. 17 shows a schematic diagram of a standard configuration interface according to an embodiment of the present disclosure.
Fig. 18 shows a block diagram of an electronic device, according to an embodiment of the disclosure.
Fig. 19 shows a block diagram of an electronic device, according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A exists alone, both A and B exist, and B exists alone. In addition, the term "at least one" herein means any one of a plurality, or any combination of at least two of a plurality; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C.
It should be understood that the terms "first," "second," and "third," etc. in the claims, specification, and drawings of this disclosure are used for distinguishing between different objects and not for describing a particular sequential order. The terms "comprises" and "comprising" when used in the specification and claims of this disclosure are taken to specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
Fig. 1 illustrates a block diagram of a deep learning model production system according to an embodiment of the present disclosure. As shown in fig. 1, the deep learning model production system includes:
an expert development platform 11, configured to build model production lines and publish the pre-built model production lines to a production line store platform 12, where a pre-built model production line is used to generate a target deep learning model to be deployed;
a production line store platform 12, configured to display and sell the pre-built model production lines, so that purchased model production lines are added to a user development platform 13;
a user development platform 13, configured to generate a target deep learning model to be deployed based on a purchased model production line.
In one possible implementation, the deep learning model production system may be implemented in an electronic device such as a terminal device or a server. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. The system may be implemented by a processor in the terminal device invoking computer-readable instructions stored in a memory, or its steps may be performed on a server.
In one possible implementation, the information is transmitted between the production line store platform 12 and the user development platform 13 through a communication interface, and the information is transmitted between the expert development platform 11 and the production line store platform 12 through a communication interface. That is, the expert development platform 11, the production line shop platform 12 and the user development platform 13 can realize information interaction by calling the communication interface.
It should be understood that the expert development platform 11, the production line shop platform 12, and the user development platform 13 may be independently developed applications or integrated applications, and the embodiments of the present disclosure are not limited thereto. The development of expert development platform 11, line store platform 12, user development platform 13 may be accomplished by any known technique by those skilled in the art, and no limitation is placed on the embodiments of the present disclosure.
According to the embodiments of the present disclosure, three platforms can be provided: an expert development platform on which professional technicians build model production lines, a production line store platform on which pre-built model production lines are purchased, and a user development platform on which ordinary users generate target deep learning models to be deployed based on the purchased model production lines. In this way, model production lines built by professional technicians can be purchased and used by ordinary users to generate deep learning models to be deployed on devices, so that the efficiency and precision with which ordinary users generate deep learning models based on pre-built model production lines can approach the level of professional technicians, giving the system high practicability and universality.
As described above, the deep learning model production system includes a user development platform, fig. 2 shows a block diagram of the user development platform according to an embodiment of the present disclosure, as shown in fig. 2, including:
a production line selection module 131 for displaying a set list of the selected model production lines in response to a selection operation for the model production lines in the production line list.
The selected model production line is used to generate the target deep learning model to be deployed; for example, the model production line may be a vehicle detection model production line used to generate a vehicle detection model, which is not limited by the embodiments of the present disclosure. The setting list includes a first setting item of the selected model production line; the first setting item is used to set the target deep learning model, a training mode of the target deep learning model, and a data set for training the target deep learning model; and the production line list is used to display production line information of the model production lines to be selected.
In one possible implementation, a production line list may be provided in the production line selection module, in which production line information of the model production lines purchased by the user in the production line store may be displayed, so that the setting list of a selected model production line is displayed in response to a selection operation for that model production line. It should be understood that the setting list of the selected model production line may be presented in a display interface of the terminal device.
In one possible implementation, the line list, such as the line name, the line identifier, etc., may be presented in a display interface of the terminal device, so as to facilitate the user in performing the selection operation on the model line. It should be appreciated that a trigger control responsive to the selection operation may be included in the line list, e.g., a trigger control responsive to a click operation, a touch operation, to determine the model line selected by the user. Embodiments of the present disclosure are not limited with respect to implementations of trigger controls.
Fig. 3 shows a schematic diagram of a production line management interface according to an embodiment of the present disclosure. Fig. 4 shows a schematic diagram of a production line list according to an embodiment of the present disclosure. The production line list shown in fig. 4 may be a list expanded by clicking the production line information "AAA" of the model production line shown in fig. 3, which makes it convenient for the user to select any model production line and display the setting list of the selected model production line in response to a selection operation for that model production line. It should be appreciated that the production line list may include a trigger control responsive to a click operation for a model production line, as well as a control (e.g., a slider bar) for scrolling the list up and down.
The display under "AAA" in fig. 3 may be a setting list of the selected model production line, where names, or identifiers, etc. of the respective setting items of the model production line may be displayed, so as to display a setting interface of the selected setting item in response to a clicking operation of the user on any setting item, thereby facilitating the user to implement a setting operation for generating the target deep learning model to be deployed in the displayed setting interface. It should be appreciated that a trigger control responsive to a click operation for a setting item may be included in the settings list.
It should be noted that the first, second, third, etc. in the present disclosure may be used to distinguish between different setting items, and do not limit the positions of the setting items in the setting list. It should be understood that, according to actual needs, a person skilled in the art may set the position of the first setting item in the setting list, for example, the setting item c in fig. 3 may be the first setting item.
The setting module 132 is configured to determine a target deep learning model for implementing the target task, a training manner of the target deep learning model, and a data set for training the target deep learning model in response to a setting operation for a first setting item in the setting list.
In one possible implementation, the target task may include an image processing task. The image processing tasks include: at least one of image recognition, image segmentation, image classification, and keypoint detection. It should be appreciated that the target tasks may also include speech processing tasks such as speech recognition tasks, semantic recognition tasks, voiceprint recognition tasks, natural language processing tasks, and the like. The disclosed embodiments are not limited with respect to the task type of the target task.
In one possible implementation, the target deep learning model, the training mode, and the data set may be set in the setting interface of the first setting item. Fig. 5 shows a schematic diagram of the setting interface of the first setting item according to an embodiment of the present disclosure. As shown in fig. 5, the setting interface of the first setting item may provide a drop-down box for setting the data set, a drop-down box for setting the network type, a drop-down box for setting the training end index, and a drop-down box for setting the training device for performing training.
The network type can be used for indicating the deep learning model under different network structures, network algorithms and algorithm parameters, the deep learning model indicated by the network type can be a pre-trained deep learning model, and a target deep learning model for realizing a target task can be determined by setting the network type. The data set in the first setting item may be a data set for training the target deep learning model, which may be a labeled data set, it being understood that a plurality of labeled data sets may be provided for setting.
The training mode may include a training end index and a training device for performing training (e.g., a graphics processing unit (GPU), an X86 central processing unit (CPU), etc.). The training end index is used to indicate the condition for ending training of the target deep learning model, for example, the number of iteration rounds reaching a set threshold, or the iteration time reaching a set threshold. The training mode of the target deep learning model can be determined by setting the training end index and the training device.
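A minimal sketch of the two end-of-training conditions mentioned here, stopping when either the iteration-round count or the elapsed time reaches its configured threshold (the parameter names are assumptions):

```python
# Training-end index check: either condition ending training when its
# threshold is reached; a None threshold disables that condition.

def should_stop(rounds_done, elapsed_s, max_rounds=None, max_seconds=None):
    """Return True once any configured training-end threshold is reached."""
    if max_rounds is not None and rounds_done >= max_rounds:
        return True
    if max_seconds is not None and elapsed_s >= max_seconds:
        return True
    return False
```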
It should be understood that the user may display the corresponding drop-down list by clicking each drop-down box in the setting interface, so that the user may perform setting operations on the target deep learning model, training mode, and data set according to actual requirements. The target deep learning model, training mode, and data set set by the user are the determined target deep learning model, training mode, and data set.
A related drop-down box control may be provided in the setting interface of the first setting item to implement the setting operation for the first setting item, which is not limited in the embodiments of the present disclosure. Of course, the setting operation for the first setting item may also be implemented through other types of controls, such as check boxes, which are not limited in the embodiments of the present disclosure.
It should be noted that, the setting content displayed in the setting interface of the first setting item in fig. 5 is an implementation manner provided by the embodiment of the present disclosure. It should be understood that the present disclosure should not be limited thereto, and those skilled in the art may refine the settings for the target deep learning model, training patterns, and data sets according to actual needs.
For example, the data set may be divided into a training set and a testing set, with the ratio of the training set to the testing set being set; and in the case where the training end index is set to stop according to iteration rounds, an iteration-round threshold may be set so that training of the target deep learning model ends according to the set iteration-round threshold. The setting content in the first setting item for generating the target deep learning model to be deployed may be determined according to actual requirements, and the embodiments of the present disclosure are not limited.
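The train/test division by a set ratio can be sketched as follows; the function name, the fixed shuffle seed, and the 80/20 ratio are illustrative assumptions rather than settings prescribed by the disclosure.

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=0):
    """Divide a data set into a training set and a testing set
    according to a set ratio (illustrative sketch)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)          # avoid ordering bias before splitting
    n_train = int(len(shuffled) * train_ratio)
    return shuffled[:n_train], shuffled[n_train:]

train_set, test_set = split_dataset(list(range(100)), train_ratio=0.8)
print(len(train_set), len(test_set))  # 80 20
```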
The training module 133 is configured to train the target deep learning model according to the data set and the training manner in response to a training trigger operation for the target deep learning model, so as to obtain the target deep learning model to be deployed.
In one possible implementation, the training trigger operation for the target deep learning model may be implemented, for example, by clicking on a trigger control at "start training" in the setting interface of the first setting item shown in fig. 5, and starting training the target deep learning model by responding to the trigger operation for the trigger control. It should be understood that the style, location, implementation, etc. of the trigger control may be determined according to actual requirements, and the embodiments of the present disclosure are not limited thereto.
As described above, the training mode may include a training end index and a training device for performing training. In one possible implementation, training the target deep learning model according to the data set and the training mode to obtain the target deep learning model to be deployed may include: training the target deep learning model in the set training device according to the data set and the training end index to obtain the target deep learning model to be deployed. The embodiments of the present disclosure are not limited with respect to the training process of the target deep learning model.
In the embodiment of the present disclosure, the process setting operations for generating the target deep learning model to be deployed can be realized based on the pre-constructed model production line, so that the target deep learning model to be deployed can be efficiently and automatically generated based on these process setting operations in response to the training trigger operation for the target deep learning model.
As described above, the setting operations for the target deep learning model, the training mode, and the data set may be performed in the setting interface of the first setting item. In one possible implementation, the setting operations of the first setting item include: a data set setting operation, a network type setting operation, and a training mode setting operation. Determining the target deep learning model for implementing the target task, the training mode of the target deep learning model, and the data set for training the target deep learning model in response to the setting operation for the first setting item in the setting list includes:
Determining a target deep learning model for implementing the target task in response to a network type setting operation for the target deep learning model; determining a dataset for training a target deep learning model in response to a dataset setting operation for the dataset of the target task; and determining a training mode of the target deep learning model in response to the training mode setting operation for the target deep learning model.
As described above, the network type may be used to indicate the deep learning model under different network structures, network algorithms, and algorithm parameters, and the deep learning model indicated by the network type may be a pre-trained deep learning model, and the target deep learning model for implementing the target task may be determined by setting the network type.
As described above, the training regimen may include training equipment for performing training and a training end indicator. The training mode setting operation may include a setting operation for the training device and the training end index.
In one possible implementation, for the case where the sample data is image data, the training mode may further include a specified size of the sample data. Correspondingly, the training mode setting operation may also include a setting operation for the specified size of the sample data, where the specified size may include a specified image resolution. In this way, the sample data in the data set can be adjusted to the specified size, so that the sample data meets the size requirement of the target deep learning model on input data during training, thereby improving the training effect of the target deep learning model.
In one possible implementation, the sample data in the dataset may also be adjusted with a default specified size, in which case the specified size of the sample data may not be set; or in the case where the sample data is non-image data (such as voice), the specified size of the sample data is not set. It should be appreciated that in these cases, the setting content for the specified size of the sample data may not be presented in the setting interface of the first setting item.
In one possible implementation manner, the network type setting operation, the data set setting operation and the training mode setting operation for the target deep learning model can be realized by clicking each drop-down frame control provided in the setting interface of the first setting item and triggering a drop-down list displayed by each drop-down frame control.
The network type setting operation may be, for example, clicking a network type identifier displayed in a corresponding drop-down list; the data set setting operation may be, for example, clicking on the data set identifier shown in the corresponding drop-down list; the training mode setting operation may be, for example, clicking on a training mode identifier in the corresponding drop-down list, or the like. It should be appreciated that trigger controls for each set operation may be provided in the drop-down list to determine the user selected data set, training mode, and target deep learning model corresponding to the network type.
In the embodiment of the present disclosure, the target deep learning model, the data set for training the target deep learning model, and the training mode of the target deep learning model can be conveniently determined through the network type setting operation, the data set setting operation, and the training mode setting operation, so that automatic generation of the target deep learning model to be deployed is conveniently and efficiently realized.
As described above, the training manner may include a training device for performing training, a training end index, and a specified size of sample data, and in one possible implementation, training the target deep learning model according to the data set and the training manner, to obtain a target deep learning model to be deployed, including:
adjusting the size of the sample data in the data set according to the designated size to obtain an adjusted data set;
and training the target deep learning model in training equipment according to the adjusted data set and the training ending index to obtain the target deep learning model to be deployed.
As described above, the sample data may include image data, and the specified size may include a specified image resolution. Adjusting the size of the sample data in the data set according to the specified size to obtain the adjusted data set may include: adjusting the image resolution of the image data in the data set to the specified image resolution to obtain the adjusted data set. Adjusting the image resolution of the image data may be accomplished using any known image processing technique, such as normalization processing or scaling processing, and the embodiments of the present disclosure are not limited in this regard.
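One minimal stand-in for such a resolution adjustment is nearest-neighbor scaling, sketched below on a 2-D list of pixel values. This is only one of many possible image processing techniques and is not the scaling method prescribed by the disclosure.

```python
def resize_nearest(image, target_h, target_w):
    """Adjust an image to the specified resolution by nearest-neighbor
    scaling. `image` is a 2-D list of pixel values (illustrative)."""
    src_h, src_w = len(image), len(image[0])
    return [[image[r * src_h // target_h][c * src_w // target_w]
             for c in range(target_w)]
            for r in range(target_h)]

img = [[1, 2], [3, 4]]              # a 2x2 sample image
resized = resize_nearest(img, 4, 4)  # upscale to the specified 4x4 size
print(len(resized), len(resized[0]))  # 4 4
```

In practice, a library routine (e.g., an interpolating resize from an image processing library) would typically be used instead of this hand-rolled version.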
In one possible implementation, the training device for performing training may be the training device set for the training mode in the first setting item. It will be appreciated that a default training device may also be employed, i.e., no setting may be performed for the training device, and implementation of the present disclosure is not limited. Training the target deep learning model in the training device means performing training of the target deep learning model on the training device.
In one possible implementation, the training end index may be the training end index set for the training mode in the first setting item. It is to be appreciated that a default training end index may also be employed, i.e., no setting may be performed for the training end index, and implementation of the present disclosure is not limited.
It should be understood that the setting operation for the training manner in the first setting item may include a setting operation for at least one of a training device for performing training, a training end index, and a specified size of sample data.
In one possible implementation, the adjusted data set, the set training end index, and the target deep learning model may be transmitted to a training device for training the target deep learning model to perform training of the target deep learning model in the training device.
In one possible implementation, training the target deep learning model in the training device according to the adjusted data set and the training end index to obtain the target deep learning model to be deployed may include: inputting the sample data in the adjusted data set into the target deep learning model to obtain an output result; adjusting network parameters of the target deep learning model according to a loss computed from the output result; and obtaining the target deep learning model to be deployed in the case where the adjusted target deep learning model meets the training end index. It should be appreciated that any known deep learning model training technique may be used to train the target deep learning model, and the embodiments of the present disclosure are not limited in this regard.
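The loop described above can be sketched in miniature as follows: feed sample data to the model, compute a loss from the output, adjust the network parameter, and stop once the training end index (here an iteration-round threshold) is met. The single-parameter "model", the squared-error loss, and the learning rate are illustrative assumptions, not the patented training system.

```python
def train(samples, max_iterations=100, lr=0.1):
    """Minimal training-loop sketch following the described steps."""
    w = 0.0  # a single "network parameter" standing in for the model
    for _ in range(max_iterations):        # training end index
        for x, y in samples:
            out = w * x                    # model output for the sample
            grad = 2 * (out - y) * x       # gradient of squared-error loss
            w -= lr * grad                 # adjust network parameters
    return w

samples = [(1.0, 2.0), (2.0, 4.0)]  # targets follow y = 2x
w = train(samples)
print(round(w, 3))  # converges close to 2.0
```

A real production line would of course use a deep learning framework's optimizer and loss functions rather than this hand-written update.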
In this way, training of the target deep learning model can be performed automatically according to the set data set and training mode, so as to obtain a target deep learning model to be deployed that meets the actual demands of users.
It is contemplated that a trained deep learning model is typically encapsulated, i.e., packaged, to enable deployment of the deep learning model in a relevant device. In one possible implementation, the setting list further includes a second setting item, where the second setting item is used to encapsulate the target deep learning model to be deployed, and the system further includes:
Responding to the setting operation aiming at the second setting item in the setting list, and packaging the target deep learning model to be deployed to obtain a packaging file of the target deep learning model to be deployed, wherein the setting operation of the second setting item comprises the following steps: and setting the encapsulation mode and/or the deployment equipment, wherein the encapsulation file is used for deploying the target deep learning model to be deployed in the deployment equipment.
The encapsulation mode may indicate an inference framework required for encapsulating the target deep learning model to be deployed; the inference framework may enable the target deep learning model to be deployed to adapt to various deployment devices. The inference framework may include, for example, TensorRT (a high-performance deep learning model inference engine for deploying deep learning model applications) and OpenVINO (a tool set developed by Intel for deploying deep learning models).
The deployment device may indicate a processor type of the device to be deployed by the target deep learning model to be deployed, which may include, for example: x86 processors, arm processors, CUDA (Compute Unified Device Architecture, unified computing device architecture) processors, GPUs, and the like.
The encapsulation file may be, for example, an SDK (Software Development Kit). The target deep learning model to be deployed may be encapsulated using any known encapsulation technique, without limitation to the disclosed embodiments.
Fig. 6 shows a schematic diagram of a setting interface of a second setting item according to an embodiment of the present disclosure. As shown in fig. 6, a drop-down box for setting the encapsulation mode, a drop-down box for setting the deployment device, a drop-down box for setting a network version, and radio buttons for setting whether the deployment is quantized may be provided in the setting interface of the second setting item.
The network version is used for selecting target deep learning models to be deployed of different versions, and it is understood that multiple setting operations can be performed based on the first setting item, namely multiple training is performed on the target deep learning models, so that multiple versions of target deep learning models to be deployed are obtained; whether the deployment is quantized or not is used for indicating whether the target deep learning model to be deployed is quantized or not, and the quantized target deep learning model is packaged. Wherein quantization of the target deep learning model to be deployed may be achieved using any known quantization technique, and embodiments of the present disclosure are not limited in this regard.
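As one example of such a quantization technique, the sketch below applies symmetric linear quantization to a list of float weights before packaging. The function name, bit width, and rounding scheme are illustrative assumptions; the disclosure does not prescribe a specific quantization method.

```python
def quantize_weights(weights, num_bits=8):
    """Symmetric linear quantization sketch: map float weights to
    integers in [-qmax, qmax] using a single scale factor."""
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax    # float value per step
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

q, scale = quantize_weights([0.5, -1.0, 0.25])
print(q)  # [64, -127, 32]
```

Quantizing before encapsulation reduces the packaged model's size and can speed up inference on the deployment device, at some cost in numerical precision.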
It should be understood that, the user may display the corresponding drop-down list by clicking each drop-down box in the setting interface, so that the user may perform setting operation on the packaging mode and the deployment device according to the actual requirement. The encapsulation mode, the deployment device and the like set by the user are the determined encapsulation mode, the determined deployment device and the like.
A related drop-down box control may be provided in the setting interface of the second setting item to implement the setting operation for the second setting item, which is not limited in the embodiments of the present disclosure. Of course, the setting operation for the second setting item may also be implemented through other types of controls, such as check boxes, which are not limited in the embodiments of the present disclosure.
In one possible implementation, the setting operation for the second setting item in the setting list may further include a triggering operation that triggers the target deep learning model to be deployed to start being encapsulated, for example, by clicking a trigger control at "confirm" in the setting interface shown in fig. 6 to trigger encapsulation of the target deep learning model to be deployed. It should be understood that the style, location, implementation, etc. of the trigger control may be determined according to actual requirements, and the embodiments of the present disclosure are not limited thereto.
It should be understood that the setting interface of the second setting item illustrated in fig. 6 above is one implementation provided by the embodiments of the present disclosure. Those skilled in the art may adjust the content set in the setting interface of the second setting item according to actual needs, and the embodiments of the present disclosure are not limited thereto. For example, in some cases, the target deep learning model to be deployed may be quantized by default, and the radio buttons for setting whether the deployment is quantized may then be removed from the setting interface of the second setting item.
In the embodiment of the disclosure, various packaging requirements of users on the target deep learning model to be deployed can be met, so that the obtained packaging files are effectively deployed in various deployment devices.
In a possible implementation manner, the setting list further includes a third setting item for importing a data set, a fourth setting item for annotating the data set, and a fifth setting item for evaluating a target deep learning model to be deployed, and the user development platform further includes:
the importing module is used for responding to importing operation for a third setting item in the setting list, obtaining an original data set of the target task, wherein sample data in the original data set meets preset data acquisition standards, and the preset data acquisition standards are used for indicating acquisition of the sample data in the original data set;
the labeling module is used for responding to the labeling operation for the fourth setting item in the setting list, labeling the sample data in the original data set according to the preset data labeling standard, and obtaining a data set for training the target deep learning model;
the evaluation module is used for responding to the setting operation of the fifth setting item in the setting list, displaying the performance evaluation result of the target deep learning model to be deployed according to the set network evaluation index, wherein the network evaluation index is used for evaluating the performance of the deep learning model to be deployed.
In one possible implementation, the setting interface of the third setting item may display a preset data collection standard to guide the user in collecting an original data set that meets the preset data collection standard. For example, the data collection standard for vehicle images may include: a shooting height of 3.5-7 m; the vehicle head and the entire vehicle body should be captured; a resolution of 1080p; a minimum of 100 images; no more than 100 vehicles per image; and the like.
In one possible implementation manner, the setting interface of the third setting item may include a file upload control for importing a data set, for example, a data set is imported by dragging a file, or a data set is imported by searching a local storage path of the data set, and the importing manner of the data set is not limited by the embodiments of the present disclosure.
It should be appreciated that more than one original data set of the target task may be imported. The imported original data sets may be transmitted to other electronic devices and stored to facilitate automated labeling of the original data sets.
Fig. 7 shows a schematic diagram of a setting interface of a third setting item according to an embodiment of the present disclosure. As shown in fig. 7, the local storage path of the original dataset may be determined by clicking on a control at "select"; by clicking the 'import' button, the original data set is triggered to be acquired according to the storage path of the original data set, so that the import operation of the original data set is realized.
As described above, the original data set of the target task may include a plurality of sets, and in one possible implementation, a drop-down frame for selecting the original data set may be displayed in the setting interface of the fourth setting item, so that the user may select any original data set to label, to obtain a data set for training the target deep learning model. It should be appreciated that the labeling operation for the fourth setting item may include a selection operation for the original dataset.
In one possible implementation, the preset data labeling standard may be, for example, filtering out labeling boxes with a size smaller than 30×30. The preset data labeling standard may be displayed in the setting interface of the fourth setting item to inform the user of the preset data labeling standard, or a plurality of preset data labeling standards may be provided for the user to select, which is not limited in the embodiments of the present disclosure.
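A labeling standard like the 30×30 example above can be applied as a simple filter over annotation boxes, as in the sketch below; the `(x, y, width, height)` box format and the function name are illustrative assumptions.

```python
def filter_boxes(boxes, min_w=30, min_h=30):
    """Apply a preset data labeling standard: drop labeling boxes
    smaller than min_w x min_h. Boxes are (x, y, width, height)."""
    return [b for b in boxes if b[2] >= min_w and b[3] >= min_h]

boxes = [(0, 0, 50, 40),    # kept: 50x40
         (10, 10, 20, 25),  # dropped: smaller than 30x30
         (5, 5, 30, 30)]    # kept: exactly at the threshold
print(filter_boxes(boxes))  # [(0, 0, 50, 40), (5, 5, 30, 30)]
```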
In one possible implementation, the labeling of the sample data in the original dataset according to the preset data labeling criteria may be implemented using any known data labeling technique, for example, the original dataset may be automatically labeled by a data labeling algorithm. In this case, the labeling operation for the fourth setting item may further include a triggering operation of triggering the start of labeling of the sample data in the original data set.
In one possible implementation, a labeling tool (e.g., labelMe: a Javascript labeling tool for online image labeling) for manually labeling the original dataset may also be provided in the setting interface of the fourth setting item. In this case, the labeling operation for the fourth setting item may further include a labeling operation of manually labeling the original data set. The labeling mode of the original data set can be set according to actual requirements, and the embodiment of the disclosure is not limited.
Fig. 8 shows a schematic diagram of a setting interface of a fourth setting item in an embodiment of the present disclosure. As shown in fig. 8, a drop-down box control for selecting an original data set is shown in the setting interface of the fourth setting item, and it should be understood that the original data set shown in the drop-down box may be the selected original data set; the user can click a button at the 'automatic labeling' position to realize automatic labeling for the selected original data set; the button at the manual annotation position can be clicked to trigger the display of the annotation tool for manual annotation.
In one possible implementation, the network evaluation index is used to evaluate the performance of the trained target deep learning model. The network evaluation index may include, for example: at least one of accuracy, precision, recall, and F1 Score (F1-Score).
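The listed indices can be computed from a model's counts of true positives, false positives, and false negatives, as in this minimal sketch (names and example counts are illustrative):

```python
def evaluate(tp, fp, fn):
    """Compute precision, recall, and F1 score from detection counts:
    tp = true positives, fp = false positives, fn = false negatives."""
    precision = tp / (tp + fp)   # fraction of predictions that are correct
    recall = tp / (tp + fn)      # fraction of ground truth that is found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = evaluate(tp=80, fp=20, fn=20)
print(p, r, round(f1, 2))  # 0.8 0.8 0.8
```

Accuracy would additionally require the count of true negatives, which is why it is often reported separately for classification tasks.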
It should be appreciated that the setting operation for the fifth setting item may include a setting operation for the network evaluation index, so that the performance evaluation result is displayed in response to the set network evaluation index. Fig. 9 is a schematic diagram illustrating a setting interface of a fifth setting item according to an embodiment of the present disclosure. As shown in fig. 9, setting of the network evaluation index may be implemented in the form of check boxes. The performance evaluation result may be triggered and displayed by clicking the "confirm" button in fig. 9. It should be appreciated that the setting operation for the fifth setting item may also include a triggering operation that triggers display of the performance evaluation result.
In one possible implementation, the network evaluation index setting interface may further include a confidence threshold option, so that the user may view performance evaluation results at different confidence thresholds. The confidence may be the confidence of the output result of the deep learning model; for example, in vehicle detection, the confidence may characterize the likelihood that a detection result is a vehicle. A confidence threshold may be used to indicate the performance evaluation result obtained when only outputs of the target deep learning model reaching that confidence threshold are considered.
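Evaluating at a given confidence threshold amounts to keeping only detections whose confidence meets that threshold before computing the indices, as sketched below; the `(label, confidence)` pair format is an illustrative assumption.

```python
def filter_by_confidence(detections, threshold=0.5):
    """Keep only detections whose confidence meets the threshold,
    so performance can be evaluated at different thresholds."""
    return [d for d in detections if d[1] >= threshold]

dets = [("vehicle", 0.9), ("vehicle", 0.4), ("vehicle", 0.75)]
print(filter_by_confidence(dets, 0.5))  # the 0.4 detection is dropped
```

Raising the threshold typically raises precision and lowers recall, which is why viewing results at several thresholds is useful.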
In one possible implementation, the performance evaluation result may be presented in the form of a list, a chart (e.g., a line graph), or the like. The display form of the performance evaluation result may be set according to actual requirements, and the embodiments of the present disclosure are not limited.
It should be noted that the setting interfaces of the third setting item, the fourth setting item, and the fifth setting item shown in fig. 7, fig. 8, and fig. 9 are one implementation provided by the embodiments of the present disclosure. It should be understood that the disclosure should not be limited thereto; those skilled in the art may design the setting interfaces of the third setting item, the fourth setting item, and the fifth setting item according to actual needs.
In the embodiment of the present disclosure, the original data set can be imported and labeled, so that the data set is conveniently set when training the target deep learning model; and by displaying the performance evaluation result of the target deep learning model to be deployed, the user can know whether the training effect of the target deep learning model to be deployed meets the performance requirement.
In one possible implementation, the user development platform further includes a line purchase item for entering a line store platform in response to a click operation for the line purchase item. By the method, a user can conveniently enter the production line shop platform at any time to purchase a pre-built or built model production line.
The production line purchase item may be implemented in the form of an entry button for the production line store platform; the production line purchase item may be correspondingly configured with a jump address of the production line store platform, so that a user may enter the production line store platform based on the jump address when clicking the production line purchase item. The embodiments of the present disclosure are not limited with respect to the implementation of the production line purchase item.
In one possible implementation, the line purchase item may be disposed at any location in any interface in the user development platform, for example, in a line list, in a setup list of a model line, in a setup interface of each setup item, without limitation to embodiments of the present disclosure.
As described above, the deep learning model production system further comprises a production line store platform, information transmission is carried out between the production line store platform and the user development platform through a communication interface, and the production line store platform is used for selling the model production line. Fig. 10 shows a block diagram of a production line store platform, as shown in fig. 10, according to an embodiment of the present disclosure, including:
the production line display module 121 is configured to display a model production line that is pre-built or built, where the model production line is built through an expert development platform.
In one possible implementation, the pre-built or built model production line may be displayed in a production line display interface of the production line store platform. Fig. 11 illustrates a schematic diagram of a production line display interface according to an embodiment of the present disclosure. As shown in fig. 11, a "buy" button for purchasing a model production line may be provided in the interface, and a user clicking the "buy" button may enter a payment page; the user may click a "detail" button in the production line display interface to check the detailed information of the selected model production line; the "search" control may also be clicked to search for the model production line to purchase.
It should be appreciated that the production line display interface illustrated in fig. 11 is one implementation provided by the embodiments of the present disclosure, and the present disclosure should not be limited thereto; those skilled in the art may design the production line display interface according to actual needs.
The construction process of the pre-constructed or constructed model production line will be described below, and is not described here for brevity.
The line purchase module 122 is configured to determine a purchased model line in response to a purchase operation for the displayed model line.
In one possible implementation, the purchase operation for the displayed model production line may include: a click operation on the "buy" button shown in the production line display interface as shown in fig. 11. After clicking the "buy" button, a payment interface may be entered. It should be appreciated that payment controls for making online payments may be provided in the payment interface; those skilled in the art may implement online payment for the model production line using any known technique, and the embodiments of the present disclosure are not limited with respect to the implementation of online payment.
It should be understood that a model production line for which payment has been completed is a model production line that has been successfully purchased. In one possible implementation, in the case where a model production line is successfully purchased through the production line store platform, the production line information of the successfully purchased model production line is added to the production line list of the user development platform, so that a user can generate a target deep learning model to be deployed using the successfully purchased model production line. The implementation of the production line list may refer to the manner disclosed in the foregoing embodiments of the present disclosure, and is not described herein in detail.
In one possible implementation, the line store platform may further include a line sending module for sending the successfully purchased model line to the user development platform to display the successfully purchased model line in a line display list of the user development platform, wherein the line display list is used to display and select the successfully purchased model line.
In the disclosed embodiments, a user may purchase different model production lines through a production line store platform to generate target deep learning models to be deployed that achieve different target tasks.
As described above, the deep learning model production system may include an expert development platform. FIG. 12 illustrates a block diagram of an expert development platform, as shown in FIG. 12, that includes:
the production line building module 111 is configured to obtain a pre-built or built model production line in response to a building operation for the model production line, where the pre-built or built model production line is used to generate a target deep learning model to be deployed.
FIG. 13 illustrates a schematic diagram of a management interface for a model production line, according to an embodiment of the present disclosure. In one possible implementation, the information of the built model production line may also be displayed in a management interface as shown in fig. 13, and as shown in fig. 13, the responsible person may be the name of the person who builds the model production line. It should be appreciated that controls (e.g., slider bars) for pulling up and down the slide list may be provided in the management interface to expose more model production lines.
As shown in fig. 13, the management interface of the model production line may also provide a search box (the box on the left side of "search" in fig. 13) and a search trigger control (the "search" control in fig. 13) for quickly searching for a model production line. Clicking the trigger control at "detail" corresponding to any model production line in fig. 13 shows more detailed information of that model production line; clicking the trigger control at "edit" in fig. 13 allows the model production line to be edited; and clicking the trigger control at "release" in fig. 13 releases the pre-built or built model production line to the production line store platform.
In one possible implementation, the build interface of the model production line may be entered by clicking on the trigger control at "Create production line" in FIG. 13 to perform a build operation for the model production line. The entering mode of the building interface of the model production line can be determined according to actual requirements, and the embodiment of the disclosure is not limited. It should be appreciated that controls on various types of interfaces may be provided in the build interface, such as edit boxes, drop-down boxes, multiple choice boxes, single choice boxes, and the like, to enable build operations for the model production line.
In one possible implementation, the building operation for the model production line may include: network configuration operations, information editing operations, training configuration operations, standard configuration operations, and index configuration operations for the model production line.

The network configuration operation is used for configuring at least one pre-trained deep learning model in the model production line; the information editing operation is used for editing the production line information of the model production line; the training configuration operation is used for configuring the training mode to be set in the model production line; the standard configuration operation is used for configuring the preset data acquisition standard and the preset data labeling standard of the model production line; and the index configuration operation is used for configuring the network evaluation indexes to be set in the model production line.
It will be appreciated that, after the above-described building operations for the model production line are completed, a pre-built or built model production line may be obtained. The configuration results corresponding to the respective building operations may be connected in series using a known assembly serializing tool, thereby obtaining the pre-built or built model production line, which is not limited in the embodiments of the present disclosure.
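As a concrete illustration of connecting the configuration results in series, the following Python sketch collects each building operation's result into one production line object; the class and field names are assumptions for illustration, not the patented implementation.

```python
# Illustrative sketch: chain the per-operation configuration results into a
# single production line description. Field names are assumptions.

from dataclasses import dataclass, field

@dataclass
class ModelProductionLine:
    line_info: dict = field(default_factory=dict)        # information editing operation
    network_config: dict = field(default_factory=dict)   # network configuration operation
    training_config: dict = field(default_factory=dict)  # training configuration operation
    data_standards: dict = field(default_factory=dict)   # standard configuration operation
    eval_indexes: list = field(default_factory=list)     # index configuration operation

def assemble_line(**configs) -> ModelProductionLine:
    # Stand-in for the "assembly serializing tool": connect all results in series.
    return ModelProductionLine(**configs)

line = assemble_line(
    line_info={"task": "vehicle detection", "scene": "street"},
    network_config={"backbone": "ResNet-50", "algorithm": "Faster R-CNN"},
    training_config={"end_indicator": "max_epochs", "device": "GPU"},
    data_standards={"collection": "daytime street images", "labeling": "2D boxes"},
    eval_indexes=["mAP"],
)
```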
The production line publishing module 112 is configured to publish the pre-built or built model production line to the production line store platform in response to a publishing operation for the pre-built or built model production line, so as to display and purchase the pre-built or built model production line on the production line store platform.
In one possible implementation, the pre-built or built model production line may be presented in a management interface as shown in FIG. 13, which may be triggered to publish the pre-built or built model production line to the production line store platform by clicking on the "publish" button shown in FIG. 13. It should be understood that, a person skilled in the art may set the release manner of the pre-built or built model production line according to actual requirements, which is not limited to the embodiments of the present disclosure.
As described above, the expert development platform and the production line store platform perform information transmission through the communication interface. Releasing the pre-built or built model production line to the production line store platform may be implemented by packing and compressing all configuration contents contained in the pre-built or built model production line, and sending the packed and compressed file to the production line store platform. The embodiments of the present disclosure do not limit how pre-built or built model production lines are released to the production line store platform.
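The packing-and-compressing step described above can be sketched with the Python standard library, assuming each configuration section is serialized to its own file; the file layout and names are illustrative assumptions.

```python
# Minimal sketch, assuming the line's configuration sections are serialized to
# JSON files and bundled into one gzip-compressed tar archive that could then
# be sent to the store platform over the communication interface.

import json
import tarfile
import tempfile
from pathlib import Path

def pack_production_line(configs: dict, archive_path: Path) -> Path:
    # Write each configuration section to its own file, then tar + gzip them.
    with tempfile.TemporaryDirectory() as tmp:
        with tarfile.open(archive_path, "w:gz") as tar:
            for name, content in configs.items():
                section_file = Path(tmp) / f"{name}.json"
                section_file.write_text(json.dumps(content))
                tar.add(section_file, arcname=section_file.name)
    return archive_path

archive = pack_production_line(
    {"line_info": {"task": "vehicle detection"}, "training": {"device": "GPU"}},
    Path(tempfile.gettempdir()) / "line.tar.gz",
)
```

The store platform would unpack the archive to recover the individual configuration sections.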
For the display and purchase of the pre-built or built model production line on the production line store platform, reference may be made to the content in the embodiments of the present disclosure described above, which is not described herein again.
In the embodiment of the disclosure, the model production line can be built by using the expert development platform, so that a common user can use the model production line built by the professional technician to generate the target deep learning model to be deployed, and the production efficiency and the precision of the target deep learning model to be deployed are improved.
As described above, the build operation comprises a network configuration operation for a model production line, and in one possible implementation, in response to the build operation for the model production line, a pre-built or built model production line is obtained, comprising:
responding to network configuration operation aiming at a model production line to obtain at least one deep learning model, wherein the network configuration operation comprises the operation of configuring at least one of a network structure, a network algorithm and algorithm parameters;
pre-training at least one deep learning model to obtain a pre-trained deep learning model in a pre-built or built model production line, wherein the pre-trained deep learning model corresponds to a network type to be set in a setting module of a user development platform.
The network structure may include a backbone network (backbone), a fusion network (neck), and a branch network (head) of the deep learning model. The backbone network is used for extracting features, the fusion network is used for fusing the features extracted by the backbone network, and the branch network is used for performing inference for different tasks based on the features extracted by the backbone network or the fused features.
In one possible implementation, the network structures of the backbone network (backbone), the fusion network (neck), and the branch network (head) may at least employ: the ResNet series, SENet series, MobileNet series, ShuffleNet series, etc. The network algorithm may at least include: the Faster R-CNN algorithm, the Keypoint R-CNN algorithm, the Cascade R-CNN algorithm, a normalization algorithm, etc. The algorithm parameters at least include: batch size, sampling mode, data enhancement mode, network parameters, etc.
The batch size may be understood as the number of sample data used for one training iteration. The sampling mode may at least include: a positive and negative sample sampling mode, a hard sample (Hard Sample) sampling mode, an undersampling mode, an oversampling mode, etc. The data enhancement mode may include: random cropping, distortion, scaling, mirroring, deformation, and other data enhancement modes. The network parameters may include hyperparameters of the network; since there are many kinds of hyperparameters, they may be determined by selecting an existing parameter configuration file.
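The network configuration result described above (structure, algorithm, and algorithm parameters) might be assembled as follows; the dictionary layout is an illustrative assumption, and the option lists merely mirror the examples given in the text.

```python
# Hedged sketch of a network configuration result: one backbone/neck/head
# choice, a network algorithm, and algorithm parameters. The dict layout is
# an assumption; option names follow the series mentioned in the text.

BACKBONES = {"ResNet-50", "SENet-154", "MobileNetV2", "ShuffleNetV2"}
ALGORITHMS = {"Faster R-CNN", "Keypoint R-CNN", "Cascade R-CNN"}

def make_network_config(backbone, neck, head, algorithm,
                        batch_size=16, sampling="hard-sample",
                        augmentations=("random-crop", "mirror")):
    if backbone not in BACKBONES or algorithm not in ALGORITHMS:
        raise ValueError("unsupported backbone or algorithm")
    return {
        "structure": {"backbone": backbone, "neck": neck, "head": head},
        "algorithm": algorithm,
        "params": {
            "batch_size": batch_size,          # samples per training iteration
            "sampling": sampling,              # e.g. positive/negative, hard sample
            "augmentations": list(augmentations),
        },
    }

cfg = make_network_config("ResNet-50", "FPN", "detection-head", "Faster R-CNN")
```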
It should be understood that the network configuration operation for the model production line, that is, the network configuration operation for the deep learning model, may include more than the configuration content disclosed in the embodiments of the present disclosure described above; and the network configuration operation can be realized by man-machine interaction through a display interface, or can be realized by background calling of related interfaces and related configuration files, so long as a deep learning model can be configured, and the embodiment of the disclosure is not limited.
It should be understood that different configurations may result in different deep learning models, and that the network size, precision, and operation performance of the different deep learning models may differ. For example, the network algorithms may differ for the same network structure, and the algorithm parameters may differ for the same network structure and network algorithm. A deep learning model with high precision tends to be large in size, while a deep learning model with a small size tends to have lower precision but better operation performance. In this way, deep learning models with different precision and different operation performance, that is, deep learning models of different network types, can be configured, so that a user can select a target deep learning model according to precision requirements and performance requirements.
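The precision/size/performance trade-off described above can be illustrated with a simple selection routine; the network types and all numbers below are invented for illustration only.

```python
# Illustrative sketch: different configurations yield network types with
# different accuracy/size/speed trade-offs; the user picks the smallest model
# that meets both an accuracy and a performance requirement. Numbers are invented.

NETWORK_TYPES = [
    {"name": "large",  "accuracy": 0.92, "size_mb": 400, "fps": 15},
    {"name": "medium", "accuracy": 0.88, "size_mb": 120, "fps": 45},
    {"name": "small",  "accuracy": 0.83, "size_mb": 30,  "fps": 90},
]

def pick_network(min_accuracy: float, min_fps: float):
    candidates = [n for n in NETWORK_TYPES
                  if n["accuracy"] >= min_accuracy and n["fps"] >= min_fps]
    # Among the candidates, prefer the smallest model.
    return min(candidates, key=lambda n: n["size_mb"]) if candidates else None

choice = pick_network(min_accuracy=0.85, min_fps=30)
```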
Fig. 14 shows a schematic diagram of a network configuration interface according to an embodiment of the present disclosure. As shown in fig. 14, a user may implement a network configuration operation for a model production line by triggering controls such as a drop-down box, an edit box, a multi-selection box, and the like, which correspond to each item. It should be understood that the network configuration interface shown in fig. 14 is one possible implementation, and those skilled in the art may set configurable contents in the network configuration interface, layout, style of the network configuration interface, etc. according to actual needs, which is not limited to the embodiments of the present disclosure.
In one possible implementation, the at least one configured deep learning model may be pre-trained using any known pre-training approach to obtain a pre-trained deep learning model. It should be understood that the training set used in the pre-training may be related to the target task, the application scenario, etc. of the model production line, for example, the training sets corresponding to the human body detection task and the vehicle detection task may be different, and the training to obtain the deep learning model may also be different.
The pre-trained deep learning model corresponds to a network type to be set in a setting module of the user development platform, so that a user can determine a corresponding target deep learning model by setting the network type. The target deep learning model set on the user development platform is a pre-trained deep learning model. By the method, the deep learning model with certain precision can be obtained, so that the training efficiency of training the target deep learning model is improved, and the target deep learning model to be deployed is generated efficiently.
It should be appreciated that the same model production line may include multiple processing tasks, such as vehicle detection, license plate recognition, etc. In that case, at least one corresponding deep learning model may be configured and pre-trained for each of the plurality of processing tasks in the same model production line.
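Mapping each processing task of a line to its own pre-trained model per network type could look like the following sketch; the task names, network-type labels, and checkpoint filenames are all hypothetical.

```python
# Sketch under assumed names: one production line covering several processing
# tasks, each with a pre-trained checkpoint per network type. All identifiers
# are hypothetical.

PRETRAINED = {
    "vehicle-detection":         {"fast": "veh_det_small.pth", "accurate": "veh_det_large.pth"},
    "license-plate-recognition": {"fast": "lpr_small.pth",     "accurate": "lpr_large.pth"},
}

def select_models(network_type: str) -> dict:
    # For every task in the line, resolve the checkpoint matching the chosen type.
    return {task: ckpts[network_type] for task, ckpts in PRETRAINED.items()}

models = select_models("fast")
```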
In the embodiment of the disclosure, at least one deep learning model can be pre-established for a model production line, and the at least one deep learning model is pre-trained to obtain the deep learning model selected by a user, so that the user can obtain the target deep learning model capable of realizing a target task through simple and convenient setting operation, namely selecting a network type, and the generation efficiency of the target deep learning model is improved.
In one possible implementation, in response to a build operation for a model production line, obtaining a model production line for generating a target deep learning model to be deployed, further includes:
responding to information editing operation aiming at the model production line, obtaining production line information of the pre-built or built model production line, wherein the production line information is used for identifying the pre-built or built model production line in a production line store platform and a user development platform;
responding to training configuration operation aiming at a model production line to obtain a training mode to be set in the pre-built or built model production line, wherein the training mode to be set is used for setting in a setting module of a user development platform;
responding to standard configuration operation aiming at a model production line, obtaining a preset data acquisition standard and a preset data labeling standard of the pre-built or built model production line, wherein the preset data acquisition standard is used for being displayed in an interface of an import module of a user development platform, and the preset data labeling standard is used for being displayed in an interface of a labeling module of the user development platform;
And responding to index configuration operation aiming at the model production line to obtain network evaluation indexes to be set in the pre-built or built model production line, wherein the network evaluation indexes to be set are used for setting in an evaluation module of a user development platform.
Fig. 15 shows a schematic diagram of an information editing interface according to an embodiment of the present disclosure. As shown in fig. 15, the information editing interface may display prompt information for information editing, and information such as the task name, processing object, application scenario, and data acquisition device may be input through edit boxes, thereby implementing the information editing operation for the model production line. For example, for a vehicle detection production line, the production line information may include "task name: vehicle detection, processing object: vehicle, application scenario: street, data acquisition device: camera". It should be understood that the information entered by the user in the edit boxes, that is, the resulting production line information of the model production line, may be determined by clicking the "ok" button shown in fig. 15.
FIG. 16 illustrates a schematic diagram of a training configuration interface, according to an embodiment of the present disclosure. As shown in fig. 16, the training end indicators may be configured through multi-selection boxes; it should be understood that the training end indicators configured here are the training end indicators to be set by the user in the setting module of the user development platform. The training devices may likewise be configured through multi-selection boxes; the training devices configured here are the training devices to be set by the user in the setting module of the user development platform. In this way, a plurality of training end indicators and a plurality of training devices can be configured, so that a user can select any training end indicator and training device for training the target deep learning model according to requirements.
In one possible implementation, as shown in fig. 16, an edit box for editing performance prediction information of the training devices may also be provided in the training configuration interface, where the performance prediction information may be, for example, the running time required for a training device to perform the training of the target deep learning model. The edited performance prediction information of the training devices may be displayed in the setting interface of the first setting item, so that the user can select a required training device to execute the training of the target deep learning model based on the displayed running time.
It should be appreciated that the training configuration operations for the model production line may include a multi-selection operation for the training end indicator, a multi-selection operation for the training device, and an editing operation for the performance prediction information. The user may determine the selected training end indicator and training device, and the edited performance prediction information by clicking the "confirm" button shown in fig. 16, which is not limited to the embodiment of the present disclosure.
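The training configuration operations above (multi-selecting end indicators and devices, and editing per-device performance prediction information) might produce a result like the following; the function, field names, and values are illustrative assumptions.

```python
# Hedged sketch of a training configuration result: the expert multi-selects
# the end indicators and devices a user may later choose from, and edits a
# predicted running time per device. All names and values are illustrative.

def make_training_config(end_indicators, devices, predicted_runtime_h):
    # predicted_runtime_h maps each configured device to an estimated
    # training time in hours (the performance prediction information).
    missing = set(devices) - set(predicted_runtime_h)
    if missing:
        raise ValueError(f"no runtime prediction for: {missing}")
    return {
        "end_indicators": list(end_indicators),   # e.g. max epochs, target accuracy
        "devices": list(devices),
        "predicted_runtime_h": dict(predicted_runtime_h),
    }

train_cfg = make_training_config(
    end_indicators=["max_epochs", "target_accuracy"],
    devices=["GPU-A", "GPU-B"],
    predicted_runtime_h={"GPU-A": 2.5, "GPU-B": 6.0},
)
```

The user development platform could then display `predicted_runtime_h` in the first setting item so the user can pick a device by expected running time.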
In one possible implementation, the preset data acquisition standard and the preset data labeling standard may be respectively input through edit boxes on a display interface, thereby implementing the standard configuration operation for the model production line; the preset data acquisition standard and the preset data labeling standard may also be configured by calling related interfaces in the background. The configured preset data acquisition standard and preset data labeling standard may be displayed in the setting interface of the third setting item for importing the data set and the setting interface of the fourth setting item for labeling the data set, respectively.
Fig. 17 is a schematic diagram of a standard configuration interface according to an embodiment of the disclosure, where, as shown in fig. 17, a preset data collection standard and a preset data labeling standard may be input through an edit box, or the data collection standard and the data labeling standard may be configured through a drop-down box. The user may determine the edited preset data collection standard, the preset data labeling standard, and the like by clicking the "confirm" button shown in fig. 17, which is not limited to the embodiment of the present disclosure.
In one possible implementation, the index configuration operation on the network evaluation indexes may be implemented through multi-selection boxes in an index configuration interface, where the index configuration interface may refer to the setting interface of the fifth setting item shown in fig. 9. It should be understood that the network evaluation indexes selected in the index configuration interface are the network evaluation indexes displayed in the setting interface of the fifth setting item; the network evaluation indexes to be set in the setting interface of the fifth setting item may be part or all of the network evaluation indexes displayed in the index configuration interface, which is not limited in the embodiments of the present disclosure.
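The relationship above, where the user's index selection is bounded by what the expert configured, can be sketched as a subset check; the index names are assumptions.

```python
# Minimal sketch: the indexes configured by the expert bound the indexes a
# user can later set in the fifth setting item. Index names are illustrative.

CONFIGURED_INDEXES = {"precision", "recall", "mAP", "false-alarm-rate"}

def set_user_indexes(selected):
    # The user's selection must be part or all of the configured indexes.
    selected = set(selected)
    if not selected <= CONFIGURED_INDEXES:
        raise ValueError(f"unavailable indexes: {selected - CONFIGURED_INDEXES}")
    return sorted(selected)

user_indexes = set_user_indexes(["precision", "mAP"])
```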
It should be noted that the above network configuration, information editing, training configuration, standard configuration, and index configuration may be displayed in the same display interface or in different display interfaces; that is, the network configuration interface, information editing interface, training configuration interface, standard configuration interface, and index configuration interface may be the same interface or different interfaces, which is not limited in the embodiments of the present disclosure. The building interface of the model production line includes: the network configuration interface, the information editing interface, the training configuration interface, the standard configuration interface, and the index configuration interface.
In one possible implementation, after configuration for the deep learning model, the production line information, the training mode, the preset data acquisition standard, the preset data labeling standard, the network evaluation index and the like, each configuration result can be connected in series through the existing assembly serial tool, so that a model production line is built.
In the embodiment of the disclosure, professional technicians can configure the content of each setting item in the model production line to build the model production line suitable for different scene demands and different task demands, so that a common user can obtain a target deep learning model to be deployed, which meets actual demands, efficiently through simple and convenient flow setting operation based on the built model production line.
In the embodiments of the present disclosure, at least one model production line and the recommended configurations contained therein can be established, where the recommended configurations may include, for example, at least the deep learning models to be pre-trained, data acquisition standards, data labeling standards, deep learning model structures, hyperparameters, evaluation indexes, training configurations, inference configurations, and the like. In this way, the user can conveniently perform the flow setting operations based on the built model production line and the recommended configurations it contains, and thereby efficiently generate the target deep learning model to be deployed.
It can be appreciated that the process of generating a deep learning model requires a professional technician to have expertise and experience, such as expertise on the data acquisition mode, deep learning model selection, hyperparameters, pre-trained models, data enhancement modes, iteration counts, iteration logic, and the like. Such knowledge and experience vary with the application scenario and processing task of the deep learning model. Therefore, in practical application scenarios, the deep learning model generation process usually needs to be customized by a professional technician. In addition, the migration and adaptation capabilities of deep learning models are weak; for example, a vehicle recognition network trained to very good accuracy in one city may perform poorly in another city because the distribution of vehicle types changes.
According to the embodiments of the present disclosure, it is possible to provide a configuration environment for model production lines for professional technicians, and a setting environment for general users to generate required deep learning models using the model production lines. Thus, model production lines for generating deep learning models can be defined and generated by expert technicians, and each model production line can support the generation of deep learning models for at least one application scenario. The model production lines built by professional technicians are available for general users to select, and the setting environment of the model production line is used to complete the generation process of a deep learning model, obtaining a deep learning model that can be deployed on a device. In this way, a general user can approach the level of an expert technician in the efficiency and accuracy of generating a deep learning model based on the model production line.
In the related art, the generation flow of the deep learning model cannot meet the precision requirement of training the deep learning model in various scenes; and the migration capability of the deep learning model in the deep learning field is not strong enough, so that a user needs to train and verify the deep learning model by re-acquiring data under different application scenes. And the deep learning model generated by the same generation flow in the related technology cannot automatically adapt to the deep learning model generation requirements under various scenes. According to the embodiment of the disclosure, the application scenes of the model production line are defined, so that the customized model production line under different application scenes is realized, and the generation of the automatic deep learning model under various application scenes can be supported.
In the related art, the single generation flow of the deep learning model enables the provided trained deep learning model to have lower precision in similar scenes. According to the embodiment of the disclosure, the generated deep learning model can be suitable for similar application scenes by aiming at different setting operations of the same model production line, and the deep learning model with higher precision is obtained through optimized setting flow and training configuration. For example, the model production line under the vehicle detection scene can train the same target deep learning model by adopting different training sets, and different target deep learning models to be deployed can be obtained efficiently, so that the model production line can be suitable for similar vehicle detection scenes of different cities and different streets.
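The point above, that the same production line trained with different datasets yields different deployable models adapted to similar scenes, can be sketched as follows; the training function is a stand-in and all identifiers are invented.

```python
# Illustrative sketch: the same production line trained with different city
# datasets yields different deployable models. train_on is a stand-in for the
# actual training run; all identifiers are invented.

def train_on(line_name: str, dataset: str) -> str:
    # The produced model is tied to both the line and the dataset it was
    # trained on, so different training sets give different deployed models.
    return f"{line_name}@{dataset}"

line = "vehicle-detection-line"
model_city_a = train_on(line, "city-a-streets")
model_city_b = train_on(line, "city-b-streets")
```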
According to the embodiment of the disclosure, model production lines of different application scenes can be established by a professional technician, a common user can meet the deep learning model generation requirements of various different application scenes and similar application scenes by providing a generation flow of configuration and related configuration by using a professional technology, and a deep learning model which is close to the level of the professional technician, has high precision and high performance and is suitable for scene requirements can be obtained.
It will be appreciated that the above-mentioned system embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle logic, which, for brevity, are not described in detail in the present disclosure. Those skilled in the art will appreciate that, in the systems of the specific embodiments described above, the particular order of execution of the steps should be determined by their function and possible inherent logic.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described system. The computer readable storage medium may be a non-volatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the method steps in the system described above.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code which, when run on a device, causes a processor in the device to execute instructions for implementing the deep learning model production system provided in any of the embodiments above.
The disclosed embodiments also provide another computer program product for storing computer readable instructions that, when executed, cause a computer to perform the operations of the deep learning model production system provided by any of the above embodiments.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 18 shows a block diagram of an electronic device 800, according to an embodiment of the disclosure. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 18, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the system described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or system operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen between the electronic device 800 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800, a relative positioning of the components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of a user's contact with the electronic device 800, an orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a photosensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the electronic device 800 and other devices, either wired or wireless. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the method steps in the system described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of electronic device 800 to perform the method steps in the system described above.
Fig. 19 shows a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server. Referring to fig. 19, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the method steps in the system described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate an operating system stored in memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical-user-interface-based operating system developed by Apple Inc. (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the method steps of the system described above.
The present disclosure may be a system and/or a computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: a portable computer disk, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), Static Random Access Memory (SRAM), a portable Compact Disk Read-Only Memory (CD-ROM), a Digital Versatile Disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to the respective computing/processing devices, or to an external computer or external storage device over a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of the computer readable program instructions, and the electronic circuitry can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of systems and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be implemented by hardware, software, or a combination thereof. In one alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. A deep learning model production system, characterized by comprising a user development platform, an expert development platform, and a production line shop platform, wherein the expert development platform and the production line shop platform transmit information through a communication interface, and the production line shop platform and the user development platform transmit information through a communication interface;
the expert development platform is used for building a model production line and releasing the pre-built or built model production line to the production line shop platform, wherein the pre-built or built model production line comprises: a pre-trained deep learning model, a training mode to be set, a preset data acquisition standard and a preset data labeling standard;
the production line shop platform is used for displaying and selling pre-built or built model production lines;
the user development platform is used for entering the production line shop platform to purchase a model production line and generating a target deep learning model to be deployed based on the purchased model production line, and the production line list of the user development platform displays the production line information of the purchased model production line;
the user development platform comprises:
a production line selection module, configured to respond to a selection operation for a model production line in a production line list, and display a setting list of a selected model production line, where the selected model production line is used to generate a target deep learning model to be deployed, the setting list includes a first setting item of the selected model production line, the first setting item is used to set a target deep learning model, a training mode of the target deep learning model, and a dataset used to train the target deep learning model, and the production line list is used to display production line information of the model production line to be selected;
the setting module is used for responding to the setting operation of the first setting item in the setting list, determining a target deep learning model for realizing a target task, a training mode of the target deep learning model and a data set for training the target deep learning model;
and the training module is used for responding to a training triggering operation for the target deep learning model, and training the target deep learning model according to the dataset and the training mode to obtain the target deep learning model to be deployed.
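For illustration only, the select-configure-train flow of the three user-development-platform modules recited in claim 1 might be sketched as follows; every class, field, and method name here is an assumption made for exposition, not part of the claimed system:

```python
from dataclasses import dataclass, field

@dataclass
class ModelProductionLine:
    name: str                # production line information shown in the production line list
    network_types: list      # pre-trained deep learning models the line offers
    training_modes: list     # training modes to be set

@dataclass
class PipelineSettings:
    network_type: str = ""   # first setting item: target deep learning model
    training_mode: str = ""  # first setting item: training mode
    dataset: list = field(default_factory=list)  # first setting item: training dataset

class UserDevelopmentPlatform:
    def __init__(self, purchased_lines):
        self.line_list = purchased_lines  # purchased model production lines
        self.current = None
        self.settings = PipelineSettings()

    def select_line(self, name):
        """Production line selection module: select a line and expose its setting list."""
        self.current = next(l for l in self.line_list if l.name == name)
        return self.current

    def configure(self, network_type, training_mode, dataset):
        """Setting module: respond to the setting operations for the first setting item."""
        assert network_type in self.current.network_types
        assert training_mode in self.current.training_modes
        self.settings = PipelineSettings(network_type, training_mode, dataset)

    def train(self):
        """Training module: stand-in for real training, returning a deployable-model tag."""
        s = self.settings
        return f"{s.network_type}-trained-{s.training_mode}-on-{len(s.dataset)}-samples"
```

The point of the sketch is the separation of responsibilities: selection exposes the setting list, the setting module validates choices against the purchased line, and training consumes only the resulting settings.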
2. The system of claim 1, wherein the setting operation of the first setting item comprises: a dataset setting operation, a network type setting operation, and a training mode setting operation;
the determining, in response to a setting operation for a first setting item in the setting list, a target deep learning model for implementing the target task, a training manner of the target deep learning model, and a dataset for training the target deep learning model includes:
determining a target deep learning model for implementing the target task in response to a network type setting operation for the target deep learning model;
determining a dataset for training the target deep learning model in response to a dataset setup operation for the dataset of the target task;
and determining a training mode of the target deep learning model in response to a training mode setting operation for the target deep learning model.
3. The system according to claim 1 or 2, wherein the training mode comprises a training device for performing the training, a training end index, and a designated size of sample data;
the training the target deep learning model according to the dataset and the training mode to obtain a target deep learning model to be deployed comprises:
adjusting the size of the sample data in the dataset according to the designated size to obtain an adjusted dataset;
and training the target deep learning model in the training equipment according to the adjusted data set and the training ending index to obtain a target deep learning model to be deployed.
4. The system according to claim 1 or 2, wherein the setting list further comprises a second setting item, the second setting item is used for setting packaging parameters of the target deep learning model to be deployed, the packaging parameters comprise packaging modes and/or deployment equipment, and the user development platform further comprises:
a deployment module, configured to respond to a setting operation for a second setting item in the setting list, encapsulate the target deep learning model to be deployed to obtain an encapsulation file of the target deep learning model to be deployed,
wherein the setting operation of the second setting item includes: setting the encapsulation mode and/or the deployment equipment, wherein the encapsulation file is used for deploying the target deep learning model to be deployed in the deployment equipment.
5. The system of claim 1 or 2, wherein the setting list further includes a third setting item for importing a dataset, a fourth setting item for annotating the dataset, and a fifth setting item for evaluating the target deep learning model to be deployed, and the user development platform further includes:
an importing module, configured to obtain an original dataset of the target task in response to an importing operation for the third setting item in the setting list, wherein sample data in the original dataset is data meeting a preset data acquisition standard, and the preset data acquisition standard is used for indicating the acquisition of the sample data in the original dataset;
the labeling module is used for responding to the labeling operation for the fourth setting item in the setting list, labeling the sample data in the original data set according to a preset data labeling standard, and obtaining the data set for training the target deep learning model;
the evaluation module is used for displaying, in response to a setting operation for the fifth setting item in the setting list, a performance evaluation result of the target deep learning model to be deployed according to the set network evaluation index, wherein the network evaluation index is used for evaluating the performance of the target deep learning model to be deployed.
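A toy sketch of claim 5's import, annotation, and evaluation modules follows; the length-based acquisition standard, the labeling function, and accuracy as the network evaluation index are all illustrative assumptions, not details from the patent:

```python
def import_dataset(raw_samples, min_length):
    """Import module: keep only samples meeting the preset data acquisition standard."""
    return [s for s in raw_samples if len(s) >= min_length]

def annotate(samples, labeling_fn):
    """Annotation module: label each sample per the preset data labeling standard."""
    return [(s, labeling_fn(s)) for s in samples]

def evaluate(model_fn, labeled):
    """Evaluation module: report accuracy as the set network evaluation index."""
    correct = sum(model_fn(s) == y for s, y in labeled)
    return correct / len(labeled)
```

The three functions mirror the claim's ordering: data is first filtered against the acquisition standard, then labeled, and only the labeled dataset reaches the evaluation step.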
6. The system of claim 1, wherein the user development platform further comprises a production line purchase item for entering the production line shop platform in response to a click operation for the production line purchase item.
7. The system of claim 1 or 6, wherein, in the case that a model production line is successfully purchased through the production line shop platform, production line information of the successfully purchased model production line is added to the production line list.
8. The system of claim 1, wherein the expert development platform comprises:
the production line building module is used for responding to building operation of the model production line to obtain a pre-built model production line, wherein the pre-built model production line is used for generating a target deep learning model to be deployed;
and the production line release module is used for responding to release operation of the pre-built model production line and releasing the pre-built model production line to the production line store platform so as to display and purchase the pre-built model production line on the production line store platform.
9. The system of claim 8, wherein the build operation comprises a network configuration operation for a model production line;
the responding to the building operation aiming at the model production line obtains a pre-built model production line, which comprises the following steps:
responding to the network configuration operation aiming at the model production line to obtain at least one deep learning model, wherein the network configuration operation comprises the operation of configuring at least one of a network structure, a network algorithm and algorithm parameters;
and pre-training the at least one deep learning model to obtain a pre-trained deep learning model in the pre-built model production line, wherein the pre-trained deep learning model corresponds to the network type to be set in the setting module of the user development platform.
10. The system according to claim 8 or 9, wherein the building operation further comprises: an information editing operation, a training configuration operation, a standard configuration operation, and an index configuration operation for the model production line;
the method for obtaining the pre-built model production line in response to the building operation for the model production line further comprises the following steps:
obtaining production line information of the pre-built model production line in response to the information editing operation for the model production line, wherein the production line information is used for identifying the pre-built model production line in the production line store platform and the user development platform;
obtaining a training mode to be set in the pre-built model production line in response to the training configuration operation for the model production line, wherein the training mode to be set is used for setting in the setting module of the user development platform;
obtaining a preset data acquisition standard and a preset data labeling standard of the pre-built model production line in response to the standard configuration operation for the model production line, wherein the preset data acquisition standard is used for display in an interface of the importing module of the user development platform, and the preset data labeling standard is used for display in an interface of the labeling module of the user development platform;
and obtaining a network evaluation index to be set in the pre-built model production line in response to the index configuration operation for the model production line, wherein the network evaluation index to be set is used for setting in the evaluation module of the user development platform.
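Claim 10's build operations amount to filling in a production-line configuration and shelving it on the store platform. A minimal sketch, with all class and field names as illustrative assumptions that merely mirror the claim's wording:

```python
from dataclasses import dataclass, field

@dataclass
class PrebuiltProductionLine:
    line_info: str = ""                                   # information editing operation
    training_modes: list = field(default_factory=list)    # training configuration operation
    acquisition_standard: str = ""                        # standard configuration operation
    labeling_standard: str = ""                           # standard configuration operation
    evaluation_indices: list = field(default_factory=list)  # index configuration operation

class ProductionLineStore:
    def __init__(self):
        self.shelf = {}  # published lines, keyed by their line information

    def publish(self, line: PrebuiltProductionLine):
        """Production line release module: make the pre-built line visible for purchase."""
        self.shelf[line.line_info] = line
        return line.line_info
```

Each build operation writes one field of the configuration, and publication simply transfers the completed configuration to the store, where the user platform of claim 1 can later find it by its line information.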
11. The system of claim 1, wherein the target task comprises an image processing task comprising: at least one of image recognition, image segmentation, image classification, and keypoint detection.
12. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the system of any of claims 1 to 11.
13. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the system of any of claims 1 to 11.
CN202110363166.XA 2021-04-02 2021-04-02 Deep learning model production system, electronic device, and storage medium Active CN113052328B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110363166.XA CN113052328B (en) 2021-04-02 2021-04-02 Deep learning model production system, electronic device, and storage medium
PCT/CN2021/124453 WO2022205835A1 (en) 2021-04-02 2021-10-18 Deep learning model production system, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110363166.XA CN113052328B (en) 2021-04-02 2021-04-02 Deep learning model production system, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN113052328A CN113052328A (en) 2021-06-29
CN113052328B CN113052328B (en) 2023-05-12

Family

ID=76517289

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110363166.XA Active CN113052328B (en) 2021-04-02 2021-04-02 Deep learning model production system, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN113052328B (en)
WO (1) WO2022205835A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052328B (en) * 2021-04-02 2023-05-12 上海商汤科技开发有限公司 Deep learning model production system, electronic device, and storage medium
CN113505895B (en) * 2021-08-05 2023-05-05 上海高德威智能交通***有限公司 Machine learning engine service system, model training method and configuration method
CN114153540A (en) * 2021-11-30 2022-03-08 上海商汤科技开发有限公司 Pre-training model issuing method and device, electronic equipment and storage medium
CN114186697B (en) * 2021-12-10 2023-03-14 北京百度网讯科技有限公司 Method and device for generating and applying deep learning model based on deep learning framework
CN114329201B (en) * 2021-12-27 2023-08-11 北京百度网讯科技有限公司 Training method of deep learning model, content recommendation method and device
CN115796272B (en) * 2022-11-24 2024-03-12 北京百度网讯科技有限公司 Model training method based on deep learning platform, data processing method and device
CN116994609B (en) * 2023-09-28 2023-12-01 苏州芯合半导体材料有限公司 Data analysis method and system applied to intelligent production line

Citations (3)

Publication number Priority date Publication date Assignee Title
CN109934512A (en) * 2019-03-28 2019-06-25 努比亚技术有限公司 A kind of training method and system of prediction model
CN111310934A (en) * 2020-02-14 2020-06-19 北京百度网讯科技有限公司 Model generation method and device, electronic equipment and storage medium
CN112558929A (en) * 2019-09-26 2021-03-26 罗克韦尔自动化技术公司 Artificial intelligence design analysis and recommendation

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN110598868B (en) * 2018-05-25 2023-04-18 腾讯科技(深圳)有限公司 Machine learning model building method and device and related equipment
US11790239B2 (en) * 2018-12-29 2023-10-17 International Business Machines Corporation Deep learning testing
CN110532445A (en) * 2019-04-26 2019-12-03 长佳智能股份有限公司 The cloud transaction system and its method of neural network training pattern are provided
US20210019665A1 (en) * 2019-07-18 2021-01-21 International Business Machines Corporation Machine Learning Model Repository Management and Search Engine
CN112529026B (en) * 2019-09-17 2023-12-19 华为云计算技术有限公司 Method for providing AI model, AI platform, computing device and storage medium
CN110991649A (en) * 2019-10-28 2020-04-10 中国电子产品可靠性与环境试验研究所((工业和信息化部电子第五研究所)(中国赛宝实验室)) Deep learning model building method, device, equipment and storage medium
CN111611239A (en) * 2020-04-17 2020-09-01 第四范式(北京)技术有限公司 Method, device, equipment and storage medium for realizing automatic machine learning
CN112416301A (en) * 2020-10-19 2021-02-26 山东产研鲲云人工智能研究院有限公司 Deep learning model development method and device and computer readable storage medium
CN113052328B (en) * 2021-04-02 2023-05-12 上海商汤科技开发有限公司 Deep learning model production system, electronic device, and storage medium


Also Published As

Publication number Publication date
WO2022205835A1 (en) 2022-10-06
CN113052328A (en) 2021-06-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40050084
Country of ref document: HK

GR01 Patent grant