CN116594608A - Method and device for generating and training visual neural network model - Google Patents

Method and device for generating and training visual neural network model

Info

Publication number
CN116594608A
Authority
CN
China
Prior art keywords
model
document tree
visual
neural network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310357558.4A
Other languages
Chinese (zh)
Inventor
李菲菲
李沁颖
廖名学
吕品
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN202310357558.4A priority Critical patent/CN116594608A/en
Publication of CN116594608A publication Critical patent/CN116594608A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/34Graphical or visual programming
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/10Text processing
    • G06F40/12Use of codes for handling textual entities
    • G06F40/151Transformation
    • G06F40/154Tree transformation for tree-structured or markup documents, e.g. XSLT, XSL-FO or stylesheets
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/098Distributed learning, e.g. federated learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/10Interfaces, programming languages or software development kits, e.g. for simulating neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Document Processing Apparatus (AREA)

Abstract

The application provides a method and a device for generating and training a visual neural network model. The method includes: obtaining a visual model file of a neural network model generated by a visual editor; converting the visual model file into an XML text file, the XML text file being a model file that conforms to a preset unified modeling standard; and generating a target neural network model based on the XML text file. According to the method and the device for generating and training a visual neural network model, a visual model meeting user requirements can be generated through the visual model construction interactive framework, and the visual model is then converted into a model file conforming to the unified modeling standard. The model file enables convenient cross-platform training of the model, reduces the complexity of the model generation and training process, and makes the model generation process more intuitive.

Description

Method and device for generating and training visual neural network model
Technical Field
The application relates to the field of deep learning, in particular to a method and a device for generating and training a visual neural network model.
Background
Artificial intelligence is a branch of computer science that attempts to understand the nature of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence. Research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Since its birth, artificial intelligence has matured in both theory and technology, and its application fields continue to expand.
With the rapid development of artificial intelligence and its penetration into practical applications in important fields, a series of problems arise, such as complex models, numerous tools, large data volumes, and long computation times. These problems not only increase the difficulty of combining intelligent demand with intelligent generation in important fields, but also raise the difficulty and threshold of artificial intelligence research, development, and application there. Moreover, the traditional code-based neural network modeling process requires users both to deeply understand algorithm principles and to study the business domain in depth; users must fully master the specialized knowledge of constructing neural networks under different frameworks such as TensorFlow and PyTorch, and build models abstractly through the coding conventions supported by each deep learning framework. The application threshold for deep learning modeling and training is therefore high, neural network modeling and training technology is tightly coupled with its computer implementation, and the model generation process is complex and insufficiently intuitive while the training threshold remains high.
Disclosure of Invention
The embodiments of the application provide a method and a device for generating and training a visual neural network model, which are used to solve the technical problems in the related art that the model generation process is complex and insufficiently intuitive and that the training threshold is high.
In a first aspect, an embodiment of the present application provides a method for generating and training a visualized neural network model, including:
obtaining a visual model file of the neural network model generated by the visual editor;
converting the visual model file into an XML text file; the XML text file is a model file which accords with a preset unified modeling standard;
generating a target neural network model based on the XML text file; the target neural network model is a neural network model which completes model training under a specified deep learning framework based on a preset unified modeling standard.
In some embodiments, the converting the visualization model file into an XML text file includes:
generating a first model document tree based on the visualization model file; the first model document tree is a visual model document tree;
converting the first model document tree into a second model document tree; the second model document tree is a model document tree which accords with a preset unified modeling standard;
and performing text serialization on the second model document tree to obtain the XML text file.
In some embodiments, the converting the first model document tree to a second model document tree includes:
traversing element nodes in the first model document tree;
determining element nodes and element connection relations in the first model document tree;
converting element nodes in the first model document tree into element nodes in the second model document tree;
and converting the element connection relation in the first model document tree into the element connection relation in the second model document tree.
In some embodiments, the converting the element node in the first model document tree to the element node in the second model document tree includes:
and under the condition that the node number is a first preset value, reading attribute information of the current node number in the first model document tree, and generating a root node of the second model document tree based on the attribute information of the current node number.
In some embodiments, the converting the element node in the first model document tree to the element node in the second model document tree includes:
And under the condition that the node number is larger than a first preset value, reading node type information of the current node number in the first model document tree, and generating a corresponding node of the second model document tree based on the node type information of the current node number.
In some embodiments, the converting element connection relations in the first model document tree into element connection relations in the second model document tree includes:
determining nodes with edge attributes of a second preset value in the first model document tree as connecting nodes;
determining element connection relations of the connection nodes;
and determining the parent-child level relation of the nodes of the second model document tree corresponding to the connecting nodes based on the element connecting relation of the connecting nodes.
In some embodiments, the converting element connection relations in the first model document tree into element connection relations in the second model document tree includes:
and adjusting the parent-child level relation of the nodes of the second model document tree based on the parent-level attribute information of the element nodes in the first model document tree.
In some embodiments, generating the target neural network model based on the XML text file includes:
analyzing the XML text file based on the preset unified modeling standard to generate model training information under a specified deep learning framework;
and based on the model training information, completing training of the neural network model under the specified deep learning framework to generate the target neural network model.
In a second aspect, an embodiment of the present application further provides an apparatus for generating and training a visual neural network model, including:
the first acquisition module is used for acquiring a visual model file of the neural network model generated by the visual editor;
the first conversion module is used for converting the visual model file into an XML text file; the XML text file is a model file which accords with a preset unified modeling standard;
the first generation module is used for generating a target neural network model based on the XML text file; the target neural network model is a neural network model which completes model training under a specified deep learning framework based on a preset unified modeling standard.
In some embodiments, the first conversion module includes a first generation sub-module, a first conversion sub-module, a first execution sub-module, wherein:
the first generation sub-module is used for generating a first model document tree based on the visual model file; the first model document tree is a visual model document tree;
The first conversion sub-module is used for converting the first model document tree into a second model document tree; the second model document tree is a model document tree which accords with a preset unified modeling standard;
and the first execution submodule is used for performing text serialization on the second model document tree to obtain the XML text file.
In some embodiments, the first conversion sub-module includes a first traversal unit, a first determination unit, a first conversion unit, and a second conversion unit, wherein:
the first traversing unit is used for traversing element nodes in the first model document tree;
the first determining unit is used for determining element nodes and element connecting lines in the first model document tree;
the first conversion unit is used for converting element nodes in the first model document tree into element nodes in the second model document tree;
the second conversion unit is used for converting element connection relations in the first model document tree into element connection relations in the second model document tree.
In some embodiments, the first conversion unit comprises:
the first reading subunit is used for reading attribute information of the current node number in the first model document tree under the condition that the node number is a first preset value;
And the first generation subunit is used for generating the root node of the second model document tree based on the attribute information of the current node number.
In some embodiments, the first conversion unit comprises:
the second reading subunit is used for reading node type information of the current node number in the first model document tree under the condition that the node number is larger than a first preset value;
and the second generation subunit is used for generating the corresponding node of the second model document tree based on the node type information of the current node number.
In some embodiments, the second conversion unit includes:
the first determining subunit is used for determining that a node in the first model document tree whose edge attribute is a second preset value is a connection node;
a second determining subunit, configured to determine an element connection relationship of the connection node;
and the third determining subunit is used for determining the parent-child level relation of the nodes of the second model document tree corresponding to the connecting node based on the element connecting relation of the connecting node.
In some embodiments, the second conversion unit includes:
and the first adjusting subunit is used for adjusting the parent-child level relation of the nodes of the second model document tree based on the parent attribute information of the element nodes in the first model document tree.
In some embodiments, the first generation module includes a second generation sub-module, a third generation sub-module, wherein:
the second generation sub-module is used for analyzing the XML text file based on the preset unified modeling standard to generate model training information under a specified deep learning framework;
the third generation sub-module is used for completing training of the neural network model under the specified deep learning framework based on the model training information and generating the target neural network model.
In a third aspect, an embodiment of the present application further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the method for generating and training a visual neural network model as described in any one of the above.
In a fourth aspect, embodiments of the present application also provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of visualized neural network model generation and training as described in any of the above.
In a fifth aspect, embodiments of the present application also provide a computer program product comprising a computer program which, when executed by a processor, implements a method of visualized neural network model generation and training as described in any of the above.
According to the method and the device for generating and training the visual neural network model, the interactive framework is constructed through the visual model, the visual model meeting the user requirements can be generated, then the visual model is converted into the model file meeting the unified modeling standard, the model file can be used for realizing cross-platform convenient training of the model, the complexity of the model generating and training process is reduced, and the model generating process is more visual.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the related art, the drawings required in the description of the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description show some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for generating and training a visual neural network model provided by an embodiment of the application;
FIG. 2 is a logic flow diagram of the principles of visualized neural network model generation and training provided by an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a device for generating and training a visual neural network model according to an embodiment of the present application;
fig. 4 is a schematic entity structure of an electronic device according to an embodiment of the present application.
Detailed Description
For users who focus on application business, it is a very important user requirement to be able to skip the complex process of learning code programming and to realize a conceived neural network model directly through visual modeling in a more intuitive, concise, and efficient manner, thereby lowering the application threshold of deep learning modeling and training and helping users in specific fields quickly customize intelligent models.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 is a flow chart of a method for generating and training a visual neural network model according to an embodiment of the present application, and as shown in fig. 1, the embodiment of the present application provides a method for generating and training a visual neural network model, including:
and step 101, obtaining a visual model file of the neural network model generated by the visual editor.
Specifically, the method for generating and training a visual neural network model provides users with a system entry for quickly creating visual modeling tasks. This entry completes the construction of the system environment, such as loading the visual operator hierarchy and initializing the working area of the visual model construction interactive framework, and supports users in creating a brand-new visual model or importing an existing visual model file for further iteration as needed. If the user imports a visual model file, the system parses its content into a form that supports drag-and-drop display within the visual model construction interactive framework.
In the visual model construction interactive framework, a user can build a supervised learning intelligent model as required through interface operations: by means of drag-and-drop operators of appropriate granularity and scaling interactions, the required model can be developed in a customized manner based on the interfaces of the basic open-source library in the visual editor. If the model generation process is not fully finished, the current visual model file can be saved, and a user can later import it as needed to continue the model generation process.
Step 102, converting the visual model file into an XML text file; the XML text file is a model file which accords with a preset unified modeling standard.
Specifically, when the visual model generation process is fully finished, the generated visual model file can be converted into a model file conforming to the preset unified modeling standard. The unified modeling standard is used to assist and constrain the generation of a cross-platform neural network model, that is, to assist and constrain the generation of a neural network model that can be used for training. Furthermore, under the combined visual and standardized modeling system, converting the visual model file into an XML text file reduces the influence of the syntax of any specific deep learning framework and of any specific network architecture on the joint modeling process.
The unified modeling standard may be used to verify the generated XML text, in particular to check whether the XML text meets the coding requirements and whether the relevant limitations of the unified modeling standard are met according to the unified modeling standard.
The unified modeling standard includes, but is not limited to, one or more of the following: the network organization architecture specification, the contained-element specification, the element length specification, the element type specification, etc., which are not specifically limited in the embodiments of the present application.
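As an illustrative sketch only (the patent does not define its actual schema), a check of this kind might verify that a generated XML file is well-formed and contains a required set of elements; the element names below are assumptions for demonstration:

```python
import xml.etree.ElementTree as ET

# Hypothetical required elements of a unified modeling standard;
# these names are assumptions, not the patent's actual schema.
REQUIRED = {"dataset", "parameters", "layer"}

def validate(xml_text):
    try:
        root = ET.fromstring(xml_text)  # well-formedness check
    except ET.ParseError:
        return False
    # Check that every required element appears under the root.
    present = {child.tag for child in root}
    return REQUIRED <= present

ok = validate("<model><dataset/><parameters/><layer/></model>")
# ok is True for this minimal conforming file
```

A real validator would also check element types, lengths, and the network organization architecture listed above, for example via an XML Schema (XSD).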
Step 103, generating a target neural network model based on the XML text file; the target neural network model is a neural network model which completes model training under a specified deep learning framework based on a preset unified modeling standard.
Specifically, based on the unified modeling standard, the XML text file can be converted into a neural network model adapted to a specified platform, and training can then be performed; the XML text file thus enables convenient cross-platform training. Furthermore, the standardized model file (the XML model file) obtained by converting the visual model already specifies the core training elements such as the dataset, hyperparameters, model structure, and deep learning framework. Taking this XML file as input, the combined visual and standardized modeling system can parse the file and complete training of the neural network model under the specified deep learning framework, thereby generating the required neural network model.
For example, the XML text file may be converted into a corresponding neural network model under the PyTorch deep learning framework according to the generation requirements, and distributed parallel training may then be performed.
For another example, the XML text file may be converted into a corresponding neural network model under the TensorFlow deep learning framework according to the generation requirement, and further distributed parallel training may be performed.
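The mapping from a standardized XML file toward a framework-specific model might be sketched as follows; the XML schema and the mapping to constructor strings are illustrative assumptions (a real implementation would instantiate torch.nn or tf.keras modules rather than emit source text):

```python
import xml.etree.ElementTree as ET

# A hypothetical standardized model file; element and attribute
# names follow the node types described in the text, not the
# patent's actual schema.
MODEL_XML = """
<model name="demo" framework="pytorch">
  <dataset name="MNIST"/>
  <parameters epochs="10" lr="0.001"/>
  <layer type="Linear" in="784" out="128"/>
  <layer type="ReLU"/>
  <layer type="Linear" in="128" out="10"/>
</model>
"""

def xml_to_pytorch_spec(xml_text):
    # Map each standardized layer element to a PyTorch constructor
    # string (illustrative; a real system would build nn modules).
    root = ET.fromstring(xml_text)
    lines = []
    for layer in root.findall("layer"):
        t = layer.get("type")
        if t == "Linear":
            lines.append(f"nn.Linear({layer.get('in')}, {layer.get('out')})")
        elif t == "ReLU":
            lines.append("nn.ReLU()")
    return lines

spec = xml_to_pytorch_spec(MODEL_XML)
# spec -> ["nn.Linear(784, 128)", "nn.ReLU()", "nn.Linear(128, 10)"]
```

Because only the parser differs per framework, the same XML file could equally drive a TensorFlow backend, which is what enables the cross-platform training described above.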
According to the method for generating and training the visual neural network model, provided by the embodiment of the application, the interactive framework is constructed through the visual model, the visual model meeting the user requirements can be generated, then the visual model is converted into the model file meeting the unified modeling standard, the model file can be used for realizing cross-platform convenient training of the model, the complexity of the model generating and training process is reduced, and the model generating process is more visual.
In some embodiments, the converting the visualization model file into an XML text file includes:
generating a first model document tree based on the visualization model file; the first model document tree is a visual model document tree;
converting the first model document tree into a second model document tree; the second model document tree is a model document tree which accords with a preset unified modeling standard;
and performing text serialization on the second model document tree to obtain the XML text file.
Specifically, a visual model document tree can be obtained based on the generated visual model file and the visual editor: calling the getGraphXml interface of the visual editor returns the XML Document instance corresponding to the visual model drawing, that is, the visual model document tree. The visual model document tree is then converted into a model document tree conforming to the preset unified modeling standard, including but not limited to converting all element nodes and element connection relations of the visual model document tree according to the unified modeling standard. Finally, text serialization is performed on the standard-conforming model document tree, which is saved as an XML text file, namely the standardized model XML file.
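The final serialization step can be sketched with the Python standard library, assuming (purely for illustration) that the standard-conforming document tree is held as an ElementTree structure:

```python
import xml.etree.ElementTree as ET

# Build a minimal second (standardized) model document tree; the
# element and attribute names are illustrative, not the patent's
# actual unified-modeling schema.
std_root = ET.Element("model", name="demo")
ET.SubElement(std_root, "dataset", name="CIFAR10")
ET.SubElement(std_root, "layer", type="Conv2d")

# Text serialization of the document tree into the XML model file.
xml_text = ET.tostring(std_root, encoding="unicode")
# xml_text now holds the standardized model XML ready to be saved
```

In a browser-based visual editor, the equivalent step would typically use an XMLSerializer on the DOM Document returned by getGraphXml.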
According to the method for generating and training the visual neural network model, provided by the embodiment of the application, the visual model is converted into the XML text file according to the unified modeling standard, so that the cross-platform convenient training of the model can be realized, the complexity of the model generating and training process is reduced, and the model generating process is more visual.
In some embodiments, the converting the first model document tree to a second model document tree includes:
traversing element nodes in the first model document tree;
determining element nodes and element connection relations in the first model document tree;
converting element nodes in the first model document tree into element nodes in the second model document tree;
and converting the element connection relation in the first model document tree into the element connection relation in the second model document tree.
Specifically, the conversion of the model document tree needs to traverse all element nodes in the visual model document tree, determine all element nodes and element connection relations of the visual model document tree, and convert the element nodes and element connection relations into element nodes and element connection relations of the model document tree which meet preset unified modeling standards.
Further, the object instance of the model document under the unified modeling standard may be initialized first. All element nodes in the visual model document tree are then traversed by a pre-order traversal algorithm for multi-way trees, where the traversal may ignore the virtual node whose node number (id) is 0.
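A pre-order traversal that skips the virtual node with id 0 can be sketched as follows; the dict-based node structure is an illustrative stand-in for the visual editor's document tree:

```python
# Pre-order traversal of a multi-way document tree, skipping the
# virtual node with id 0 as described above.
def preorder(node, visit):
    if node["id"] != 0:          # ignore the virtual node (id 0)
        visit(node)
    for child in node.get("children", []):
        preorder(child, visit)

# Illustrative tree: a virtual root (id 0) above the real nodes.
tree = {"id": 0, "children": [
    {"id": 1, "children": [
        {"id": 2, "children": []},
        {"id": 3, "children": []},
    ]},
]}

visited = []
preorder(tree, lambda n: visited.append(n["id"]))
# visited -> [1, 2, 3]
```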
According to the method for generating and training the visual neural network model, provided by the embodiment of the application, the visual model document tree is traversed, and the visual model is converted into the XML text file according to the unified modeling standard, so that the stability of model conversion can be improved, and the accuracy of the generated XML text file is further improved.
In some embodiments, the converting the element node in the first model document tree to the element node in the second model document tree includes:
and under the condition that the node number is a first preset value, reading attribute information of the current node number in the first model document tree, and generating a root node of the second model document tree based on the attribute information of the current node number.
Specifically, when traversing the visual model document tree, if the number of the accessed node is the first preset value, the attribute information of the accessed node is read, an element node carrying the corresponding attribute information is constructed from that attribute information, and the element node is inserted into the unified-modeling-standard model document tree to form the model document root node.
For example, if the accessed node is numbered 1, the attribute information of the node numbered 1 is read, element nodes with corresponding attribute information are constructed according to the attribute information, and then the element nodes are inserted into a model document tree of a unified modeling standard to form a model document root node.
According to the method for generating and training the visual neural network model, provided by the embodiment of the application, the visual model document tree is traversed, and the visual model is converted into the XML text file according to the unified modeling standard, so that the stability of model conversion can be improved, and the accuracy of the generated XML text file is further improved.
In some embodiments, the converting the element node in the first model document tree to the element node in the second model document tree includes:
and under the condition that the node number is larger than a first preset value, reading node type (type) information of the current node number in the first model document tree, and generating a corresponding node of the second model document tree based on the node type information of the current node number.
Specifically, when traversing the visual model document tree, if the number of the accessed node is larger than the first preset value, the node type information of the current node number in the visual model document tree is read, and a model node corresponding to that node type information is created and inserted into the unified-modeling-standard model document tree. The created model nodes may include dataset nodes (datasets), parameter nodes (parameters), block nodes (blocks), layer nodes (layers), and vertex nodes (vertexes), together with the attribute information attached to each model node.
For example, if the accessed node number is greater than 1, the type information of the node with the node number greater than 1 in the visualized model document tree is read, and then the model node corresponding to the type information is created and inserted into the model document tree of the unified modeling standard.
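The type-based node creation step might be sketched as a dispatch table; the type names follow the node kinds listed above, while the target element tags and dict layout are illustrative assumptions:

```python
import xml.etree.ElementTree as ET

# Illustrative mapping from a visual node's type attribute to the
# element tag used in the standardized model document tree; both
# sides are assumptions following the node kinds listed in the text.
TYPE_TO_TAG = {
    "dataset": "dataset",
    "parameter": "parameters",
    "block": "block",
    "layer": "layer",
    "vertex": "vertex",
}

def create_model_node(std_parent, visual_node):
    # Create the standardized node and carry over the visual node's id.
    tag = TYPE_TO_TAG[visual_node["type"]]
    return ET.SubElement(std_parent, tag, id=str(visual_node["id"]))

root = ET.Element("model")
node = create_model_node(root, {"id": 2, "type": "layer"})
# root now contains <layer id="2"/>
```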
In some embodiments, the converting element wiring relationships in the first model document tree to element wiring relationships in the second model document tree includes:
determining nodes with edge attributes of a second preset value in the first model document tree as connecting nodes;
determining element connection relations of the connection nodes;
and determining the parent-child level relation of the nodes of the second model document tree corresponding to the connecting nodes based on the element connecting relation of the connecting nodes.
Specifically, if an element node whose edge attribute is the second preset value is accessed in the visual model document tree, the element node is judged to be a connection node, and the element connection relationship of the connection node is then calculated from its source attribute and target attribute. Finally, the parent attribute information of the node corresponding to the connection node in the model document tree of the unified modeling standard is set according to the element connection relationship of the connection node. The second preset value may be 1.
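The connection-node handling described above can be sketched as follows: a hedged Python sketch, not the embodiment's mxGraph implementation, in which the `edge`, `source`, `target` and `parent` attribute names follow the text and everything else is assumed.

```python
import xml.etree.ElementTree as ET

def apply_connection(edge_cell, model_root, second_preset_value="1"):
    """If the cell's edge attribute equals the second preset value, treat it
    as a connection node: derive the wiring from its source/target attributes
    and set parent attribute information on the corresponding model node."""
    if edge_cell.get("edge") != second_preset_value:
        return None  # not a connection node
    source = edge_cell.get("source")
    target = edge_cell.get("target")
    # Find the model node created for the target and record its parent.
    for node in model_root.iter():
        if node.get("id") == target:
            node.set("parent", source)
            return node
    return None
```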
In some embodiments, the converting element wiring relationships in the first model document tree to element wiring relationships in the second model document tree includes:
and adjusting the parent-child level relation of the nodes of the second model document tree based on the parent-level attribute information of the element nodes in the first model document tree.
Specifically, all element nodes in the visual model document tree are traversed, the parent attribute information of each traversed element node is determined, and the parent-child relationships of the nodes in the model document tree of the unified modeling standard are adjusted according to that parent attribute information to form a correct tree structure.
In some embodiments, generating the target neural network model based on the XML text file includes:
analyzing the XML text file based on the preset unified modeling standard to generate model training information under a specified deep learning framework;
and completing, based on the model training information, training of the neural network model under the specified deep learning framework to generate the target neural network model.
Specifically, based on the unified modeling standard, the XML text file can be converted into a neural network model adapted to the specified platform, on which training is then performed, so the XML text file enables convenient cross-platform training. Further, the training core elements such as the data set, hyperparameters, model structure and deep learning framework are already specified in the standardized model file (the XML model file) obtained by converting the visual model; taking this XML file as input, the visualization and standardization joint modeling system can parse the file and complete training of the neural network model under the specified deep learning framework, thereby generating the required neural network model.
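The parsing step can be sketched as follows: an illustrative Python sketch that extracts the training core elements named above (data set, hyperparameters, model structure, deep learning framework) from a hypothetical unified-modeling-standard XML file. The element and attribute names here are assumptions for illustration only; the actual schema is defined by the unified modeling standard.

```python
import xml.etree.ElementTree as ET

def parse_training_info(xml_text):
    """Extract the training core elements from an assumed unified-standard
    model file: framework, data set, hyperparameters and layer structure."""
    root = ET.fromstring(xml_text)
    return {
        "framework": root.get("framework"),          # e.g. "pytorch"
        "dataset": root.find("dataset").get("name"),
        "hyperparameters": {p.get("name"): p.get("value")
                            for p in root.findall("parameters/param")},
        # Layer nodes may be nested inside block nodes, so search the tree.
        "layers": [layer.get("type") for layer in root.findall(".//layer")],
    }
```

The resulting dictionary is the kind of model training information that a framework-specific builder could then consume to instantiate and train the network.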
According to the method for generating and training a visual neural network model provided by the embodiment of the application, the visual model is converted into a model file that conforms to the unified modeling standard, and that model file is used to realize convenient cross-platform training of the model, thereby reducing the complexity of the model generation and training process.
The method in the above embodiment will be further described below with specific examples.
FIG. 2 is a logic flow diagram of the principle of visual neural network model generation and training provided by an embodiment of the present application. As shown in FIG. 2, the system provides a portal for a user to quickly create a visual modeling task, which is the beginning of the execution flow of the present application. The portal completes the construction of the system environment, such as loading the visual operator structure system and initializing the operation area of the interactive framework for visual model construction, and allows the user either to build a brand-new visual model or to import an existing visual model file for iteration as needed. If the user imports a visual model file, the system parses its content into a form that supports drag-operation display within the visual model construction interactive framework.
Within the visual model construction interactive framework, the user can build a supervised learning intelligent model through interface operations according to requirements. Specifically, using the built-in visual operators and the general template for user-defined operators, the user can construct a supervised learning intelligent model as needed through visual drag operations on operators and a scaling interaction mode of appropriate granularity.
If the user's model construction process is not yet complete, saving the current visual model file is supported, and the user can later import that file to continue the modeling process as needed. If the model construction process is complete, the visual model file can be saved for subsequent iteration, and the visualization and standardization joint modeling system planned according to the embodiment of the application can generate a model file oriented to the unified modeling standard, thereby decoupling the user's model from the deep learning framework so that subsequent training tasks can be executed. The user can submit a standardized model training task based on the generated model file of the unified modeling standard; relying on this mode of visualization and standardization joint modeling and training, the user can focus on the training data set, the training hyperparameters and the customization of the intelligent model application, while the system automatically allocates computing resources to execute the training task on the basis of the existing distributed parallel training framework.
Finally, during or after the training process, the user can judge from the training results and logs whether the expected requirements are met, and iterate the training of the neural network model as needed. If the training result of the current neural network model does not meet the user's requirements, the visual model file corresponding to the current unified-modeling-standard model file is imported into the system, and the user's requirements are iterated through the visual model construction interactive framework as many times as necessary. If the training result meets the expected requirements, it is saved as a model file of the required deep learning framework type, which completes the visual supervised learning distributed parallel training task.
The invention provides a method for generating and training a visual neural network model, which establishes a visual model construction interactive framework for the user, supports visualization and standardization joint modeling through visual drag operations on operators and a scaling interaction mode of appropriate granularity, generates a model file of a unified modeling standard, supports bottom-up extensible representation of customized algorithms in the application field, and, combined with the invented cross-platform neural network standardized definition technology, supports distributed parallel training of the unified-modeling-standard model file. This alleviates the tight coupling between neural network modeling technology and its computer implementation, reduces the development impact caused by differing internal structures of intelligent application models across deep learning frameworks, and lowers the professional threshold for constructing and training deep learning models in the application field.
A prototype system developed based on the method of the invention comprises a visual model construction interactive framework loading module, a visual supervised learning model interactive construction module, a model file generation module oriented to the unified modeling standard, a visual supervised learning distributed parallel training module and a visual supervised learning iterative modeling support module, wherein:
The visual model construction interactive framework loading module is the entry point through which a user constructs and trains a visual supervised learning model with the prototype system. Before the user actually creates a visual modeling task, this module initializes the basic visual modeling capabilities and supporting services, completing tasks such as loading the visual operator structure system and initializing the operation area of the interactive framework for visual model construction. For the basic visual modeling capabilities, it mainly loads the basic elements and standard specifications of the visualization and standardization joint modeling process, covering the visual building components, the visual modeling operator library, the basic mapping and transformation logic, and so on. For the supporting services, it mainly initializes the key support capabilities and processing frameworks required during visual modeling, covering the visual drag interaction sub-framework, the built-in visual model sub-framework, the standard schema verification sub-framework, the conversion exception handling sub-framework, and so on.
The visual supervised learning model interactive construction module is the basic and core module supporting the user in constructing a visual supervised learning model with the prototype system. It is mainly responsible for providing the selection of visual operator components, is a customized interface development based on the open-source graph editor library mxGraph, and supports constructing supervised learning models through visual drag operations on operators and a scaling interaction mode of appropriate granularity. The construction of the visual operator components is supported by a visual operator library that integrates modeling and training. By functional purpose the operators are divided into data processing operators, training configuration operators and model structure operators, each refined downward into three levels of operators according to their membership relations. The model structure operators are further divided, according to application requirements, into layer-level operators (first, second and third level) and block-level operators (first, second and third level), where the layer-level operators are built-in operators for visual modeling and the block-level operators provide a general template for user-defined operators, supporting the user in building composite operator structures from layer-level operators as needed.
For example, a first-level operator among the training configuration operators may be weight, which sets the weight initialization mode of the convolution kernels and fully connected layers. The corresponding second-level operator is weight type, used to specify the type of the weights. The third-level operator is random normal distribution initialization, used to initialize the weights by sampling from a normal distribution.
For another example, a first-level operator among the layer-level operators may be a convolutional layer, which performs feature extraction on the input data, the size of the convolutional layer's output feature map being determined by its parameters. The second-level operator is the activation function, used to specify the convolutional layer's activation function. The third-level operator is the activation function Softmax, used for the output of multi-class neural networks.
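The three-level operator taxonomy illustrated by the two examples above can be represented as nested data, for instance as follows. The operator names are taken from the examples in the text; the data layout and the flattening helper are purely illustrative assumptions.

```python
# Hypothetical sketch of the three-level operator taxonomy described above:
# category -> first-level operator -> second-level operator -> third-level operators.
OPERATOR_LIBRARY = {
    "training-configuration": {
        "weight": {                       # first-level operator
            "weight-type": [              # second-level operator
                "random-normal-init",     # third-level operator
            ],
        },
    },
    "model-structure": {
        "convolutional-layer": {
            "activation-function": [
                "softmax",
            ],
        },
    },
}

def third_level_operators(library):
    """Flatten the taxonomy down to its leaf (third-level) operators."""
    leaves = []
    for first_level in library.values():
        for second_level in first_level.values():
            for third_level in second_level.values():
                leaves.extend(third_level)
    return leaves
```

A structure of this shape is what an operator palette could iterate over when populating the drag-and-drop component list.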
The model file generation module oriented to the unified modeling standard is the core module that supports seamless training after the visual supervised learning model has been constructed. Under the planned visualization and standardization joint modeling system, it reduces the influence of the syntax of specific deep learning frameworks and of specific network frameworks, and converts the model structure built visually into a standardized model file that can subsequently be trained conveniently across platforms. This module is implemented by a model file generation algorithm of the unified modeling standard. The algorithm is as follows:
Model file generation algorithm of the unified modeling standard.
Input: a visual model edited by a visual editor (editor).
Output: a model XML text file (file) of the unified modeling standard.
Procedure:
(1) Obtain the visual model document tree from the visual editor.
(1.1) Call the getGraphXml interface of the visual editor to obtain the XML DOM Document instance corresponding to the current drawing, named graph.
(2) Traverse the visual model document tree graph, and convert the element nodes and element connections in the document into a model document tree of the unified modeling standard.
(2.1) Call the parseFromString method of the window.DOMParser class to initialize an XML DOM Document object instance for the unified-standard model document, named model.
(2.2) Traverse all nodes in the graph document (ignoring the virtual node with id 0) using a multi-way tree preorder traversal algorithm.
(2.2.1) If a node with id 1 is encountered, read only its attribute information, construct a model node, set the corresponding attribute information, and insert the model node into the model document tree to form the model document root node.
(2.2.2) If a node with id greater than 1 is encountered, read its type attribute information, create the corresponding model node (dataset, parameters, block, layer, vertex, together with its attached attribute information) and insert it into the model document tree.
(2.2.3) If the edge attribute is 1, judge the node to be a connection node, calculate the connection relationship from its source and target attribute information, and set the parent attribute information on the corresponding node in the model.
(2.2.4) Based on the parent attribute information of all traversed drawing nodes, adjust the parent-child relationships of the nodes in the model to form a correct tree structure.
(3) Text-serialize the model document tree and save it as an XML text file.
(3.1) Call the getPrettyXml interface of the mxUtils class in the mxGraph library to text-serialize the model document tree instance.
(3.2) Create a Blob instance for the serialized text content via the Blob class, and call the window.URL.createObjectURL interface to convert the Blob instance into an object URL.
(3.3) Create an HTML hyperlink element a, set its href attribute to the object URL, and call its click method to download the model XML content as a text attachment to a local file.
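Steps (1) through (3) above can be condensed into the following Python sketch. It mirrors the traversal logic (skip the virtual node with id 0, treat id 1 as the model document root, create typed nodes for id greater than 1, and record connection nodes with edge = 1 as parent attributes), but it uses the standard library's ElementTree instead of the browser-side DOMParser/mxUtils implementation, assumes edge cells appear after the cells they connect, and omits the re-nesting of step (2.2.4) and the Blob-based download of steps (3.2)-(3.3).

```python
import xml.etree.ElementTree as ET

def generate_model_xml(graph_xml):
    """Sketch of the generation algorithm: traverse the drawing document
    tree, build a unified-standard model tree, and text-serialize it."""
    graph = ET.fromstring(graph_xml)
    model_root = None
    nodes_by_id = {}
    for cell in graph.iter("mxCell"):
        cell_id = cell.get("id")
        if cell_id == "0":              # (2.2): ignore the virtual node
            continue
        if cell.get("edge") == "1":     # (2.2.3): connection node
            nodes_by_id[cell.get("target")].set("parent", cell.get("source"))
        elif cell_id == "1":            # (2.2.1): model document root
            attrs = {k: v for k, v in cell.attrib.items() if k != "id"}
            model_root = ET.Element("model", attrs)
            nodes_by_id[cell_id] = model_root
        else:                           # (2.2.2): typed model node
            node = ET.SubElement(model_root, cell.get("type"))
            node.set("id", cell_id)
            nodes_by_id[cell_id] = node
    # (3): text serialization (pretty-printing omitted).
    return ET.tostring(model_root, encoding="unicode")
```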
The visual supervised learning distributed parallel training module is the execution module that supports the user in performing distributed training with the unified-modeling-standard model file generated by the previous module. In this module, the model file oriented to the unified modeling standard, as the output of the visualization and standardization joint modeling, serves as the input of a cross-platform neural network standardized model training task on the basis of the existing distributed parallel training framework; through the cross-platform neural network standardized definition technology presented above, it can be converted into a trainable PyTorch or TensorFlow neural network model, on which training is then performed.
The visual supervised learning iterative modeling support module provides the user with a convenient way to construct and train the visual supervised learning model iteratively. The module supports saving the visual model file for subsequent iteration, provides the user with a visual model file import interface, and supports parsing the content of the visual model file into a form that supports drag-operation display within the visual model construction interactive framework. The parsing process is implemented by a model file visualization algorithm. The algorithm is as follows:
Model file visualization algorithm.
Input: a model file XML instance text.
Output: the visual model displayed in the visual editor.
Procedure:
(1) Load the XML text content of the model file and parse it into an XML document tree model.
(1.1) Read the text content of the local model XML document through the browser's file input form control.
(1.2) Call the parseFromString method of the window.DOMParser class to parse the XML text content into an XML DOM Document object instance, named model.
(2) Initialize the visual model mxGraph document tree graph.
(2.1) Construct and initialize the mxGraph XML DOM Document object instance via the parseFromString method of the window.DOMParser class, named graph.
(2.2) Set the basic attribute information of the drawing document, and insert the drawing root node and the first element for subsequent node addition and management.
(3) Traverse all nodes and node attributes in the model, and generate the corresponding drawing nodes and connections according to the node names.
(3.1) Traverse all nodes in the model using a multi-way tree preorder traversal algorithm.
(3.1.1) If the node is named "model", record its basic attribute information and continue the traversal.
(3.1.2) If the node is named "dataset" or "parameters", insert a parameter configuration information drawing node into the graph document.
(3.1.3) If the node is named "block", "layer", "vertex", etc., insert the corresponding drawing node information into the graph document. At the same time, read the detailed attribute information of the node through the getAttribute method and insert it into the corresponding drawing node in the graph document in the form of basic text nodes.
(3.1.3.1) If the read node detail attribute information contains a parent attribute field:
(3.1.3.1.1) Traverse all drawing nodes in the current graph, find the matching node, and construct a connection relationship drawing node.
(3.1.3.1.2) Insert the connection drawing node into the graph document.
(3.1.3.2) If the read node detail attribute information does not contain a parent attribute field:
(3.1.3.2.1) Construct a connection relationship drawing node with the previous drawing node by default.
(3.1.3.2.2) Insert the connection drawing node into the graph document.
(4) Transmit the graph document tree content to the mxGraph visual editor for visual display.
(4.1) Create a drawing editor instance, named editor, through the Editor class of the mxGraph library.
(4.2) Transmit the graph document instance to the editor for drawing display.
(4.3) Call the editor's layout interface to lay out the drawing nodes automatically.
(4.4) Call the editor's node expansion interface to expand all expandable nodes in the graph.
(4.5) Automatically resize the drawing nodes through the editor's automatic node size adjustment interface.
(4.6) Call the editor's layout interface again to perform a second automatic layout on the expanded and resized nodes.
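Steps (1) through (3) of this algorithm can likewise be sketched in Python. The sketch keeps the rule of steps (3.1.3.1)-(3.1.3.2): a node whose attribute information contains a parent field connects to the matching drawing node, while a node without one connects to the previous drawing node by default. The element and attribute names are assumptions for illustration, and the editor-side display of step (4) is omitted.

```python
import xml.etree.ElementTree as ET

def model_to_graph(model_xml):
    """Sketch of the visualization algorithm: parse the model file and emit
    the drawing nodes plus the connection edges between them."""
    model = ET.fromstring(model_xml)
    nodes, edges = [], []
    previous_id = None
    for child in model:
        if child.tag in ("dataset", "parameters"):
            # (3.1.2): parameter configuration information drawing node.
            nodes.append((child.tag, dict(child.attrib)))
        elif child.tag in ("block", "layer", "vertex"):
            # (3.1.3): drawing node carrying the detailed attributes.
            nodes.append((child.tag, dict(child.attrib)))
            # (3.1.3.1)/(3.1.3.2): explicit parent, or previous node by default.
            parent_id = child.get("parent", previous_id)
            if parent_id is not None:
                edges.append((parent_id, child.get("id")))
            previous_id = child.get("id")
    return nodes, edges
```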
Aiming at the difficulty and threshold problems of artificial intelligence research, development and application in key fields, the method provided by the embodiment of the application, under the constraints of the visualization and standardization joint modeling system, builds the service calls provided by the visual model construction interactive framework. The user can drag and scale operators as needed to build a supervised learning intelligent model, and the generated model file of the unified modeling standard serves as the input of a standardized model training task, so that model training is completed in linkage and the professional threshold for building and training deep learning models in the application field is lowered.
Fig. 3 is a schematic structural diagram of a device for generating and training a visualized neural network model according to an embodiment of the present application, where, as shown in fig. 3, the device for generating and training a visualized neural network model according to an embodiment of the present application includes a first obtaining module 301, a first converting module 302, and a first generating module 303, where:
the first obtaining module 301 obtains a visual model file of the neural network model generated by the visual editor;
a first conversion module 302, configured to convert the visualization model file into an XML text file; the XML text file is a model file which accords with a preset unified modeling standard;
a first generation module 303, configured to generate a target neural network model based on the XML text file; the target neural network model is a neural network model which completes model training under a specified deep learning framework based on a preset unified modeling standard.
In some embodiments, the first conversion module includes a first generation sub-module, a first conversion sub-module, a first execution sub-module, wherein:
the first generation sub-module is used for generating a first model document tree based on the visual model file; the first model document tree is a visual model document tree;
The first conversion sub-module is used for converting the first model document tree into a second model document tree; the second model document tree is a model document tree which accords with a preset unified modeling standard;
and the first execution submodule is used for serializing the text of the second model document tree to obtain the XML text file.
In some embodiments, the first conversion sub-module includes a first traversal unit, a first determination unit, a first conversion unit, and a second conversion unit, wherein:
the first traversing unit is used for traversing element nodes in the first model document tree;
the first determining unit is used for determining element nodes and element connecting lines in the first model document tree;
the first conversion unit is used for converting element nodes in the first model document tree into element nodes in the second model document tree;
the second conversion unit is used for converting element connection relations in the first model document tree into element connection relations in the second model document tree.
In some embodiments, the first conversion unit comprises:
the first reading subunit is used for reading attribute information of the current node number in the first model document tree under the condition that the node number is a first preset value;
And the first generation subunit is used for generating the root node of the second model document tree based on the attribute information of the current node number.
In some embodiments, the first conversion unit comprises:
the second reading subunit is used for reading node type information of the current node number in the first model document tree under the condition that the node number is larger than a first preset value;
and the second generation subunit is used for generating the corresponding node of the second model document tree based on the node type information of the current node number.
In some embodiments, the second conversion unit includes:
the first determining subunit is used for determining that a node with the edge attribute of the first model document tree being a second preset value is a connecting node;
a second determining subunit, configured to determine an element connection relationship of the connection node;
and the third determining subunit is used for determining the parent-child level relation of the nodes of the second model document tree corresponding to the connecting node based on the element connecting relation of the connecting node.
In some embodiments, the second conversion unit includes:
and the first adjusting subunit is used for adjusting the parent-child relationships of the nodes of the second model document tree based on the parent attribute information of the element nodes in the first model document tree.
In some embodiments, the first generation module includes a second generation sub-module, a third generation sub-module, wherein:
the second generation sub-module is used for analyzing the XML text file based on the preset unified modeling standard to generate model training information under a specified deep learning framework;
and the third generation sub-module is used for completing, based on the model training information, training of the neural network model under the specified deep learning framework and generating the target neural network model.
Specifically, the device for generating and training a visual neural network model provided by the embodiment of the application can implement all the method steps implemented by the method embodiment for generating and training a visual neural network model and achieve the same technical effects; the parts and beneficial effects that are the same as in the method embodiment are not described in detail here.
Fig. 4 is a schematic physical structure of an electronic device according to an embodiment of the present application, as shown in fig. 4, where the electronic device may include: processor 410, communication interface (Communications Interface) 420, memory 430 and communication bus 440, wherein processor 410, communication interface 420 and memory 430 communicate with each other via communication bus 440. The processor 410 may invoke logic instructions in the memory 430 to perform a method of visual neural network model generation and training, the method comprising:
Obtaining a visual model file of the neural network model generated by the visual editor;
converting the visual model file into an XML text file; the XML text file is a model file which accords with a preset unified modeling standard;
generating a target neural network model based on the XML text file; the target neural network model is a neural network model which completes model training under a specified deep learning framework based on a preset unified modeling standard.
Further, the logic instructions in the memory 430 described above may be implemented in the form of software functional units and may be stored in a computer-readable storage medium when sold or used as a stand-alone product. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
In some embodiments, the converting the visualization model file into an XML text file includes:
generating a first model document tree based on the visualization model file; the first model document tree is a visual model document tree;
converting the first model document tree into a second model document tree; the second model document tree is a model document tree which accords with a preset unified modeling standard;
and serializing the text of the second model document tree to obtain the XML text file.
In some embodiments, the converting the first model document tree to a second model document tree includes:
traversing element nodes in the first model document tree;
determining element nodes and element links in the first model document tree;
converting element nodes in the first model document tree into element nodes in the second model document tree;
and converting the element connection relation in the first model document tree into the element connection relation in the second model document tree.
In some embodiments, the converting the element node in the first model document tree to the element node in the second model document tree includes:
And under the condition that the node number is a first preset value, reading attribute information of the current node number in the first model document tree, and generating a root node of the second model document tree based on the attribute information of the current node number.
In some embodiments, the converting the element node in the first model document tree to the element node in the second model document tree includes:
and, when the node number is greater than a first preset value, reading node type information of the current node number in the first model document tree, and generating the corresponding node of the second model document tree based on the node type information of the current node number.
In some embodiments, the converting element wiring relationships in the first model document tree to element wiring relationships in the second model document tree includes:
determining nodes with edge attributes of a second preset value in the first model document tree as connecting nodes;
determining element connection relations of the connection nodes;
and determining the parent-child level relation of the nodes of the second model document tree corresponding to the connecting nodes based on the element connecting relation of the connecting nodes.
In some embodiments, the converting the element connection relationships in the first model document tree into element connection relationships in the second model document tree includes:
adjusting the parent-child relationships of the nodes of the second model document tree based on the parent attribute information of the element nodes in the first model document tree.
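This adjustment can be sketched as re-attaching nodes of the second tree under the parents named by a `parent` attribute in the first tree; the attribute name and the dict schema are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

def adjust_hierarchy(second_nodes: dict, first_tree: dict) -> None:
    """Re-parent nodes of the second document tree according to the
    parent attribute information of the first tree's element nodes."""
    for node in first_tree["nodes"]:
        parent_id = node.get("parent")
        if parent_id is not None:
            # Attach the node's counterpart under its declared parent.
            second_nodes[parent_id].append(second_nodes[node["id"]])
```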
In some embodiments, the generating a target neural network model based on the XML text file includes:
parsing the XML text file according to the preset unified modeling standard to generate model training information for a specified deep learning framework;
and completing, based on the model training information, training of the neural network model under the specified deep learning framework, thereby generating the target neural network model.
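The parsing step can be sketched as below, with illustrative tag and attribute names rather than the patent's actual unified modeling schema; the returned dict stands in for the framework-level training information handed to the training step.

```python
import xml.etree.ElementTree as ET

def parse_training_info(xml_text: str) -> dict:
    """Parse a unified-modeling XML file into training information for
    a specified deep learning framework (illustrative schema)."""
    root = ET.fromstring(xml_text)
    return {
        "framework": root.get("framework", "pytorch"),
        "layers": [dict(layer.attrib) for layer in root.iter("layer")],
        "hyperparams": {p.get("name"): p.get("value")
                        for p in root.iter("param")},
    }
```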
Specifically, the electronic device provided by this embodiment of the present application can implement all the method steps of the foregoing method embodiments in which the electronic device is the execution subject, and can achieve the same technical effects; the parts and beneficial effects identical to those of the method embodiments are not described again here.
In another aspect, the present application also provides a computer program product comprising a computer program. The computer program may be stored on a non-transitory computer-readable storage medium, and when executed by a processor, performs the method for generating and training a visual neural network model provided above, the method comprising:
obtaining a visual model file of a neural network model generated by a visual editor;
converting the visual model file into an XML text file, the XML text file being a model file conforming to a preset unified modeling standard;
and generating a target neural network model based on the XML text file, the target neural network model being a neural network model that has completed model training under a specified deep learning framework based on the preset unified modeling standard.
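The claimed three-step pipeline can be sketched end to end. The helpers below are trivial placeholders (a real generation step would build and train a network under the specified framework); only the control flow mirrors the method.

```python
import xml.etree.ElementTree as ET

def visual_to_xml(visual: dict) -> str:
    # Step 2: convert the visual model file into XML text.
    root = ET.Element("model", name=visual["name"])
    for layer_type in visual["layers"]:
        ET.SubElement(root, "layer", type=layer_type)
    return ET.tostring(root, encoding="unicode")

def xml_to_model(xml_text: str) -> list:
    # Step 3 placeholder: parse the XML and "build" the model as a
    # simple list of layer types instead of a trained network.
    return [l.get("type") for l in ET.fromstring(xml_text).iter("layer")]

def generate_and_train(visual: dict) -> list:
    # Step 1 is assumed done by the visual editor, yielding `visual`.
    return xml_to_model(visual_to_xml(visual))
```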
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for visual neural network model generation and training provided above, the method comprising:
obtaining a visual model file of a neural network model generated by a visual editor;
converting the visual model file into an XML text file, the XML text file being a model file conforming to a preset unified modeling standard;
and generating a target neural network model based on the XML text file, the target neural network model being a neural network model that has completed model training under a specified deep learning framework based on the preset unified modeling standard.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware alone. Based on this understanding, the technical solution above, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disc, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or in parts of the embodiments.
In addition, it should be noted that the terms "first," "second," and the like in the embodiments of the present application are used to distinguish between similar objects and do not necessarily describe a particular order or sequence. It is to be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. Moreover, objects distinguished by "first" and "second" are generally of one type, and their number is not limited; for example, the first object may be one or more.
In the embodiments of the present application, the term "and/or" describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, both A and B exist, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
The term "plurality" in the embodiments of the present application means two or more, and similar quantifiers are construed in the same way.
In the present application, "determining B based on A" means that A is a factor considered in determining B. It is not limited to determining B based on A alone, and also covers cases such as determining B based on A and C; determining B based on A, C, and E; or determining C based on A and then determining B based on C. In addition, A may serve as a condition for determining B, for example: when A satisfies a first condition, B is determined using a first method; when A satisfies a second condition, B is determined; or when A satisfies a third condition, B is determined based on a first parameter. Of course, A may also be both a condition and a factor for determining B, for example: when A satisfies the first condition, C is determined using the first method, and B is further determined based on C.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for visual neural network model generation and training, comprising:
obtaining a visual model file of the neural network model generated by the visual editor;
converting the visual model file into an XML text file, wherein the XML text file is a model file conforming to a preset unified modeling standard;
and generating a target neural network model based on the XML text file, wherein the target neural network model is a neural network model that has completed model training under a specified deep learning framework based on the preset unified modeling standard.
2. The method for visual neural network model generation and training of claim 1, wherein the converting the visual model file into an XML text file comprises:
generating a first model document tree based on the visual model file, wherein the first model document tree is a visual model document tree;
converting the first model document tree into a second model document tree, wherein the second model document tree is a model document tree conforming to the preset unified modeling standard;
and serializing the second model document tree into text to obtain the XML text file.
3. The method for visual neural network model generation and training of claim 2, wherein the converting the first model document tree into a second model document tree comprises:
traversing the element nodes in the first model document tree;
determining the element nodes and element connection relationships in the first model document tree;
converting the element nodes in the first model document tree into element nodes in the second model document tree;
and converting the element connection relationships in the first model document tree into element connection relationships in the second model document tree.
4. The method for visual neural network model generation and training of claim 3, wherein the converting the element nodes in the first model document tree into element nodes in the second model document tree comprises:
reading, when the node number equals a first preset value, the attribute information of the current node in the first model document tree;
and generating the root node of the second model document tree based on that attribute information.
5. The method for visual neural network model generation and training of claim 3, wherein the converting the element nodes in the first model document tree into element nodes in the second model document tree comprises:
reading, when the node number is greater than the first preset value, the node type information of the current node in the first model document tree;
and generating the corresponding node of the second model document tree based on that node type information.
6. The method for visual neural network model generation and training of claim 3, wherein the converting the element connection relationships in the first model document tree into element connection relationships in the second model document tree comprises:
determining nodes in the first model document tree whose edge attribute is a second preset value as connecting nodes;
determining the element connection relationships of the connecting nodes;
and determining, based on the element connection relationships of the connecting nodes, the parent-child relationships of the nodes of the second model document tree corresponding to the connecting nodes.
7. The method for visual neural network model generation and training of claim 3, wherein the converting the element connection relationships in the first model document tree into element connection relationships in the second model document tree comprises:
adjusting the parent-child relationships of the nodes of the second model document tree based on the parent attribute information of the element nodes in the first model document tree.
8. The method for visual neural network model generation and training of claim 1, wherein the generating a target neural network model based on the XML text file comprises:
parsing the XML text file according to the preset unified modeling standard to generate model training information for a specified deep learning framework;
and completing, based on the model training information, training of the neural network model under the specified deep learning framework, thereby generating the target neural network model.
9. An apparatus for visual neural network model generation and training, comprising:
a first acquisition module, configured to acquire a visual model file of a neural network model generated by a visual editor;
a first conversion module, configured to convert the visual model file into an XML text file, wherein the XML text file is a model file conforming to a preset unified modeling standard;
and a first generation module, configured to generate a target neural network model based on the XML text file, wherein the target neural network model is a neural network model that has completed model training under a specified deep learning framework based on the preset unified modeling standard.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the method for visual neural network model generation and training according to any one of claims 1 to 8.
CN202310357558.4A 2023-04-04 2023-04-04 Method and device for generating and training visual neural network model Pending CN116594608A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310357558.4A CN116594608A (en) 2023-04-04 2023-04-04 Method and device for generating and training visual neural network model

Publications (1)

Publication Number Publication Date
CN116594608A 2023-08-15

Family

ID=87594468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310357558.4A Pending CN116594608A (en) 2023-04-04 2023-04-04 Method and device for generating and training visual neural network model

Country Status (1)

Country Link
CN (1) CN116594608A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117591104A (en) * 2023-11-29 2024-02-23 摩尔线程智能科技(北京)有限责任公司 Model generation method and device, electronic equipment and storage medium
CN117591104B (en) * 2023-11-29 2024-04-12 摩尔线程智能科技(北京)有限责任公司 Model generation method and device, electronic equipment and storage medium
CN117876840A (en) * 2023-11-30 2024-04-12 中国科学院空天信息创新研究院 Remote sensing basic model rapid training method and system based on template editing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination