CN115374915A - Operator operation method, device and related product - Google Patents

Operator operation method, device and related product

Info

Publication number
CN115374915A
CN115374915A
Authority
CN
China
Prior art keywords
operation mode
operator
target operator
fusion
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110545402.XA
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cambrian Kunshan Information Technology Co ltd
Original Assignee
Shanghai Cambricon Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Cambricon Information Technology Co Ltd filed Critical Shanghai Cambricon Information Technology Co Ltd
Priority to CN202110545402.XA priority Critical patent/CN115374915A/en
Publication of CN115374915A publication Critical patent/CN115374915A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Stored Programmes (AREA)

Abstract

Embodiments of the disclosure relate to an operator operation method, an apparatus, and a related product. A target operator is generated in a deep learning framework according to an instruction, and the operation mode of the target operator is then determined based on a preset detection strategy: if the operation mode of the target operator is a first operation mode, the target operator is executed in the first operation mode; if the operation mode of the target operator is a second operation mode, the target operator is executed in the second operation mode. With this method, an operator supporting multiple operation modes simultaneously can be realized by encapsulating only one set of operators at the deep learning framework level, which greatly reduces maintenance cost and debugging difficulty at the framework level while guaranteeing correct operation results.

Description

Operator operation method, device and related product
Technical Field
The disclosed embodiments relate to the field of neural network technologies, and in particular, to a method and an apparatus for operator operation, and a related product.
Background
With the development of artificial intelligence technology, neural networks have been widely applied in various fields; for example, convolutional neural networks are used for image recognition in image processing tasks.
A neural network involves various operators, e.g., convolution operators, pooling operators, and sampling operators. Switching between operators and mutual calls among them generate considerable I/O overhead, so to reduce I/O consumption in the neural network and lower latency, a corresponding low-level mechanism and interface are provided for some hardware.
However, such hardware often provides separate interfaces for operators in different modes, resulting in significant maintenance cost and debugging difficulty at the framework level.
Disclosure of Invention
In view of the foregoing, there is a need to provide a method, an apparatus, and a related product for operator operation, which can reduce maintenance cost and debugging difficulty at a framework level.
In a first aspect, embodiments of the present disclosure provide a method for operating an operator, the method including:
generating a target operator in a deep learning framework according to the instruction;
determining an operation mode of a target operator based on a preset detection strategy;
if the operation mode of the target operator is the first operation mode, executing the target operator in the first operation mode; and if the operation mode of the target operator is the second operation mode, executing the target operator in the second operation mode.
In a second aspect, an embodiment of the present disclosure provides an apparatus for operator operation, the apparatus including:
the generation module is used for generating a target operator in the deep learning framework according to the instruction;
the determining module is used for determining the operation mode of the target operator based on a preset detection strategy;
the operation module is used for executing the target operator in a first operation mode if the operation mode of the target operator is the first operation mode; and if the operation mode of the target operator is the second operation mode, executing the target operator in the second operation mode.
In a third aspect, an embodiment of the present disclosure provides a data processing apparatus, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps in the foregoing first aspect when executing the computer program.
In a fourth aspect, the present disclosure provides a combined processing device, which includes the data processing device in the third aspect, a universal interconnection interface, and other processing devices except for the data processing device; the data processing device interacts with other processing devices.
In a fifth aspect, an embodiment of the present disclosure provides a chip, where the chip includes the combined processing device in the embodiment of the fourth aspect.
In a sixth aspect, an embodiment of the present disclosure provides a board card, where the board card includes the chip in the embodiment of the fifth aspect.
In a seventh aspect, an embodiment of the present disclosure provides an electronic device, where the electronic device includes the board in the above sixth aspect.
In an eighth aspect, embodiments of the present disclosure provide an electronic device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the computer program, the steps of the method in the first aspect described above are implemented.
In a ninth aspect, embodiments of the present disclosure provide a storage medium having stored thereon a computer program, which when executed by a processor, implements the steps of the method in the above-described first aspect.
According to the operator operation method, apparatus, and related product, a target operator is generated in the deep learning framework according to an instruction, the operation mode of the target operator is then determined based on a preset detection strategy, and the target operator is executed in the first operation mode if its operation mode is the first operation mode, or in the second operation mode if its operation mode is the second operation mode. Because the operation mode is determined before the operator is executed, the method in effect inserts an operation-mode detection link: the detection link determines the operation mode of the target operator, and the operator is then run in the corresponding mode. As a result, operators in different modes no longer need to be encapsulated separately in the deep learning framework; a single set of encapsulated operators can support multiple operation modes at the same time, greatly reducing maintenance cost and debugging difficulty at the framework level while guaranteeing correct operation results.
Drawings
FIG. 1a is a schematic flowchart of different operation modes of an operator in the PyTorch framework according to an embodiment;
FIG. 1b is a diagram of an application environment for a method of operating an operator in one embodiment;
FIG. 2 is a schematic flow chart diagram illustrating a method for operating an operator in one embodiment;
FIG. 2a is a schematic flowchart illustrating the different operator operation modes in the PyTorch framework in another embodiment;
FIG. 3 is a schematic flow chart diagram illustrating a method for operating an operator in accordance with another embodiment;
FIG. 4 is a schematic flow chart diagram of a method for operating an operator in another embodiment;
FIG. 5 is a schematic flow chart diagram of a method for operating an operator in another embodiment;
FIG. 6 is a schematic flow chart diagram illustrating a method for operating an operator in accordance with another embodiment;
FIG. 7 is a block diagram of an apparatus for operating operators in one embodiment;
FIG. 8 is a block diagram of an apparatus for operating operators in another embodiment;
FIG. 9 is a block diagram showing an apparatus for operating an operator according to another embodiment;
FIG. 10 is a block diagram of a combination processing device in one embodiment;
FIG. 11 is a schematic structural diagram of a board card in an embodiment.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings in the embodiments of the present disclosure, and it is to be understood that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be derived by one skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the scope of protection of the present disclosure.
It should be understood that the terms "first," "second," and the like in the claims, the description, and the drawings of the present disclosure are used for distinguishing between different objects and not for describing a particular order. The term "comprises/comprising" when used in the specification and claims of this disclosure is taken to specify the presence of stated features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the disclosure is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this disclosure refers to any and all possible combinations of one or more of the associated listed items and includes such combinations. As used in this specification and claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [described condition or event]", or "in response to detecting [described condition or event]".
First, before describing the technical solutions of the embodiments of the present disclosure in detail, the technical background or technical evolution on which they are based is introduced. Taking an artificial intelligence processor as an example, it supports operator APIs (application programming interfaces) in two operation modes: layer-by-layer and fusion. In the layer-by-layer mode, each operator is compiled independently after it is generated (i.e., each operator is compiled once) and then executed; in the fusion mode, after the operators are generated, multiple generated operators are compiled together (i.e., all operators are compiled in one pass) and then executed. Accordingly, operators running in the layer-by-layer mode may be called layer-by-layer operators, and operators running in the fusion mode may be called fusion operators. Because the mechanisms of the fusion mode and the layer-by-layer mode differ, the usage of layer-by-layer operators and fusion operators also differs; their usage flows in the PyTorch neural network framework are shown in FIG. 1a. In FIG. 1a, a layer-by-layer operator is initiated by the dispatch system (dispatch), while a fusion operator is initiated from just-in-time compilation (JIT) through fusion operator dispatch (FusedKernel). As can be seen from FIG. 1a, to support both the layer-by-layer and fusion modes, two separate sets of operators, one layer-by-layer and one fused, must be encapsulated at the framework level, which greatly increases maintenance cost and debugging difficulty.
In addition, it should be noted that identifying the technical problems above and arriving at the technical solutions described in the following embodiments required considerable creative effort by the applicant. To address this defect, the operator operation method provided by the disclosure can support both the layer-by-layer and fusion modes with a single set of operators.
The operator operation method provided by the present disclosure can be applied to an application environment as shown in FIG. 1b. The application environment includes a computer device 01, which can be of any type, for example, a terminal such as a personal computer, notebook computer, smartphone, tablet computer, or portable wearable device, or an independent server or a server cluster composed of multiple servers. The internal structure of the computer device includes a processor 011, a nonvolatile storage medium 012, an internal memory 013, and a network interface 014. The processor 011 provides computing and control capability when executing the operator operation method, and can be any type of processor, including but not limited to a machine learning processor, an artificial intelligence processor (IPU), a central processing unit (CPU), a graphics processing unit (GPU), or a combination thereof. The nonvolatile storage medium 012 stores an operating system 0121, a computer program 0122, and a database 0123; the internal memory provides an environment for running the operating system 0121 and the computer program 0122 stored on the nonvolatile storage medium 012, and the database 0123 stores data related to the operator operation method. The network interface is used to communicate with other external devices through a network connection.
In order to make the objects, technical solutions and advantages of the present disclosure more apparent, the present disclosure is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the disclosure and are not intended to limit the disclosure. The following detailed description will specifically explain the technical solutions of the present disclosure and how the technical solutions of the present disclosure solve the above technical problems by using embodiments and with reference to the accompanying drawings. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. It should be noted that, an execution subject of the method for operating an operator provided by the present disclosure is a computer device or an artificial intelligence processor, where the execution subject of the method may also be an apparatus for operating an operator, and the apparatus may be implemented as part or all of the computer device or the artificial intelligence processor by software, hardware, or a combination of software and hardware.
Fig. 2 provides a method for operating an operator, and the method relates to a specific process of detecting a corresponding operation mode of a target operator after the target operator is generated, and operating the target operator according to the corresponding operation mode. As shown in fig. 2, the method includes:
and S101, generating a target operator in the deep learning frame according to the instruction.
The instruction is an instruction for generating a target operator. Illustratively, the computer device receives the instruction for generating the target operator and generates the target operator according to the instruction. The computer device may receive the instruction by directly receiving input from a user, including but not limited to voice input and input through a peripheral device; alternatively, when a trigger condition is met, the computer device may actively acquire a generation file from the database and parse the instruction from that file, which is not limited in this disclosure. For example, referring to FIG. 2a, once the computer device receives the operator call instruction dispatched by the distribution system or the fusion operator distribution system, it is equivalent to having received the instruction; that is, the computer device starts to execute the operation of generating the target operator.
Upon receiving the instruction, the computer device begins generating a target operator in a deep learning framework. The deep learning framework may be PyTorch, a Python-based successor to Torch, which is a neural network framework oriented to GPU-accelerated deep neural network (DNN) programming. The deep learning framework is the first layer in the whole deep learning ecosystem; it needs to map the deep learning task, expressed as the computation graph structure of a neural network, into instructions and data executable on a CPU or an artificial intelligence processor. In this process, the deep learning framework adopts operators as the concrete elements implementing a computation task, provides a kernel function (Kernel) executed on the CPU or artificial intelligence processor for each operator, and, according to the computation graph, schedules and executes the kernel function corresponding to each operator in the graph to complete the computation of the whole neural network. In effect, the neural network computation is divided into common operators over tensor data, and the functions of the neural network are realized by executing the kernel functions corresponding to those operators.
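As a hedged illustration of this scheduling, the sketch below keeps one kernel function per operator and walks a computation graph, invoking each operator's kernel in order. All names and the toy kernels are invented for exposition and are not the framework's actual API.

```python
# Illustrative only: one kernel function per operator; the "framework"
# walks the computation graph and executes each operator's kernel in
# execution order. The kernels below are trivial stand-ins.

KERNELS = {
    "conv": lambda x: x * 2,      # stand-in for a convolution kernel
    "relu": lambda x: max(x, 0),  # stand-in for an activation kernel
}

def run_graph(graph, x):
    """graph: operator names in execution order; each output feeds the next."""
    for op_name in graph:
        x = KERNELS[op_name](x)
    return x
```

Scheduling here is just sequential iteration; a real framework would order the kernels from the graph's dependency structure.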
Operators in the deep learning framework must be generated in advance before they can be executed normally when the neural network is used. Here, a target operator is generated in the deep learning framework, where the target operator generally refers to whichever operator is currently to be generated. In practical applications, the computer device may execute a source code file of the target operator so that the generated target operator is obtained.
S102, determining an operation mode of a target operator based on a preset detection strategy.
As shown in FIG. 2a, the operator call instruction sent by the distribution system or the fusion operator distribution system is triggered by the user when the deep learning network needs to be invoked, and the user has already decided the operation mode of the operators in the deep learning network when triggering that invocation. For the computer device, however, the operation mode of the target operator is not yet known at this point, so after the target operator is generated, the computer device needs to determine its operation mode before the target operator is formally executed. The operation mode can be understood as the program operation mode of the operator; the operator is executed differently in different operation modes. In practical applications, operator operation modes include, but are not limited to, a delayed operation mode, a triggered operation mode, a composite operation mode, a layer-by-layer operation mode, a fusion operation mode, and so on.
To further explain operator operation modes, the layer-by-layer and fusion operation modes are taken as examples. The mechanisms of the layer-by-layer mode and the fusion mode differ, and so does the way operators are used. The usage flow of a layer-by-layer operator is: CreateOp (generate the operator) -> CompileOp (compile the operator) -> ComputeOp (execute each compiled operator). The usage flow of a fusion operator is: CreateOp (generate the operator) -> FuseOp (fuse and compile the operators) -> ComputeOp (execute the fusion-compiled operator). That is, in the layer-by-layer operation mode each operator is compiled independently after it is generated (i.e., each operator is compiled once) and then executed, whereas in the fusion operation mode multiple generated operators are compiled together (i.e., all operators are compiled in one pass) and then executed.
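The two usage flows can be sketched as follows. CreateOp, CompileOp, FuseOp, and ComputeOp are the stage names from the text; the Python stand-ins below are purely illustrative and assume each operator spec is simply a callable.

```python
# Hypothetical stand-ins for the patent's stage names.
def create_op(spec):
    return spec  # CreateOp: here the spec is already a callable

def compile_op(ops):
    # CompileOp / FuseOp: "compiling" a list of operators yields one
    # callable that applies them in sequence.
    def compiled(x):
        for f in ops:
            x = f(x)
        return x
    return compiled

def compute(binary, data):
    return binary(data)  # ComputeOp

def layer_by_layer(op_specs, data):
    # Each operator is compiled on its own, then executed immediately
    # (one compile per operator).
    for spec in op_specs:
        op = create_op(spec)
        binary = compile_op([op])
        data = compute(binary, data)
    return data

def fusion(op_specs, data):
    # All generated operators are compiled together in a single pass
    # (one compile total), then executed.
    ops = [create_op(s) for s in op_specs]
    binary = compile_op(ops)
    return compute(binary, data)
```

Both paths must produce the same result; the difference is how many compile steps occur, which is the source of the I/O and latency savings the fusion mode targets.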
The computer device may determine the operation mode of the target operator based on a preset detection strategy. Optionally, the preset detection strategy is a strategy determined based on the call stack rules of operators in different operation modes; that is, the detection strategy is a pre-established strategy for determining the operation mode of the target operator. Specifically, in a deep learning framework such as PyTorch, the call stacks of operators in the layer-by-layer and fusion modes are completely different: a fusion-mode operator call must be initiated from the fusion operator distribution system (FusedKernel), whereas calls initiated from the distribution system (dispatch) include not only layer-by-layer operators but also operators in other modes. A detection strategy is set on this basis. For example, a global variable (RunningMode) is defined, set to fusion on entering FusedKernel and reset to layer-by-layer when FusedKernel ends. The computer device can then determine the operation mode of the target operator from this variable, ensuring that the currently correct RunningMode is visible inside the underlying standalone kernel and thus that the operation mode is determined accurately. Besides this detection strategy, other detection approaches may be used to determine the operation mode of the target operator, which is not limited in the embodiments of the disclosure.
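A minimal sketch of this detection strategy, assuming a single global RunningMode variable that the fused-kernel dispatch path sets on entry and restores on exit. The context manager is an illustrative device, not the patent's actual implementation.

```python
from contextlib import contextmanager

# Global RunningMode flag; layer-by-layer is the default outside the
# fused-kernel dispatch path. Names are invented for exposition.
RUNNING_MODE = "layer_by_layer"

@contextmanager
def fused_kernel_dispatch():
    """Models entering/leaving FusedKernel: set the mode to fusion on
    entry and restore layer-by-layer on exit."""
    global RUNNING_MODE
    RUNNING_MODE = "fusion"
    try:
        yield
    finally:
        RUNNING_MODE = "layer_by_layer"

def kernel_mode():
    # A single operator kernel reads the flag to pick its behavior,
    # so one kernel can serve both modes.
    return RUNNING_MODE
```

The try/finally restore is what guarantees the "currently correct RunningMode" inside the underlying standalone kernel even if the fused path raises.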
S103, if the operation mode of the target operator is the first operation mode, executing the target operator in the first operation mode; and if the operation mode of the target operator is the second operation mode, executing the target operator in the second operation mode.
After the computer device generates the target operator and determines its operation mode, it executes the target operator in the determined operation mode. Specifically, if the operation mode of the target operator is the first operation mode, the target operator is executed in the first operation mode; if the operation mode of the target operator is the second operation mode, the target operator is executed in the second operation mode.
Optionally, the first operation mode is a layer-by-layer operation mode, and the second operation mode is a fusion operation mode, so that if the operation mode of the target operator is the first operation mode, the target operator is executed in the layer-by-layer operation mode, specifically, the target operator is compiled separately. And if the operation mode of the target operator is the second operation mode, executing the target operator in a fusion operation mode, and specifically, fusing and compiling the target operator and other generated operators.
According to the operator operation method, apparatus, and related product, a target operator is generated in the deep learning framework according to the generation instruction, the operation mode of the target operator is then determined based on a preset detection strategy, and the target operator is executed in the first operation mode if its operation mode is the first operation mode, or in the second operation mode if its operation mode is the second operation mode. Because the operation mode is determined before the operator is executed, the method in effect inserts an operation-mode detection link: the detection link determines the operation mode of the target operator, and the operator is then run in the corresponding mode. As a result, operators in different modes no longer need to be encapsulated separately in the deep learning framework; a single set of encapsulated operators can support multiple operation modes at the same time, greatly reducing maintenance cost and debugging difficulty at the framework level while guaranteeing correct operation results.
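Putting the steps together, the method reduces to one detection link in front of a single operator implementation. The sketch below is an assumption-laden illustration: the mode names, the detection function, and the string results are all invented for exposition, not the patent's API.

```python
LAYER_BY_LAYER = "layer_by_layer"
FUSION = "fusion"

def detect_running_mode(in_fused_dispatch):
    """Preset detection strategy (hypothetical stand-in): the mode
    follows from where the operator call was initiated."""
    return FUSION if in_fused_dispatch else LAYER_BY_LAYER

def run_operator(op_name, in_fused_dispatch=False):
    """One encapsulated operator that supports both modes: the
    detection link runs first, then dispatches to the right path."""
    mode = detect_running_mode(in_fused_dispatch)
    if mode == FUSION:
        return f"{op_name}: queued for fusion compilation"
    return f"{op_name}: compiled alone and executed"
```

Because the branch lives inside the one operator, the framework level never needs a second, parallel set of fusion-mode operators.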
Based on the above embodiment, the following describes in detail a process of determining an operation mode of a target operator based on a preset detection strategy, and as shown in fig. 3, the embodiment includes the following steps:
S201, detecting whether a preset global flag bit exists in the fusion distribution system.
In this embodiment, the global flag bit refers to a flag that can be recognized by every subsystem in the deep learning network, through which the operation mode of the target operator can be identified. Optionally, the preset global flag bit is set at a specific time based on the PyTorch mechanism. The specific time is determined in the kernel by means of the PyTorch mechanism; that is, the preset global flag bit is set when RunningMode is set through the global variable. Specifically, because the layer-by-layer mode and the fusion mode operate differently, a fusion-mode operator call is necessarily initiated from the fused operator; based on the PyTorch mechanism, the global flag RunningMode is therefore set to fusion on entering the fused operator and reset to layer-by-layer when the fused operator ends, so that the currently correct RunningMode is visible inside the underlying standalone operator, and one operator kernel can support both the layer-by-layer and fusion modes. As shown in FIG. 2a, with the global flag bit preset, any call instruction sent from the fusion operator distribution system in FIG. 2a uses only operators that need to run in the fusion mode, i.e., fusion operators. If the call instruction is not dispatched from the fusion operator distribution system in FIG. 2a, the operators it uses all need to run in the layer-by-layer mode, i.e., they are layer-by-layer operators. The global flag bit may be a specific numerical value; of course, it may also be set as a letter, a combination of a numerical value and a letter, and the like, which is not limited in the embodiments of the disclosure.
S202, if the global flag bit exists in the fusion distribution system, determining that the operation mode of the target operator is the first operation mode; and if the global flag bit does not exist in the fusion distribution system, determining that the operation mode of the target operator is the second operation mode.
Optionally, taking the first operation mode as the fusion operation mode and the second operation mode as the layer-by-layer operation mode for illustration: after the computer device generates the target operator, it detects whether the global flag bit exists in the fusion distribution system (i.e., the fusion operator distribution system). If the global flag bit exists, the operation mode of the target operator is determined to be the first operation mode (the fusion operation mode); if it does not exist, the operation mode of the target operator is determined to be the second operation mode (the layer-by-layer operation mode).
In the embodiments of the disclosure, the operation modes of operators are distinguished by setting the global flag bit: the operation mode of a target operator can be determined merely by checking whether the preset global flag bit exists in the fusion distribution system, and the target operator is then executed in the corresponding operation mode. Operators in different modes therefore do not need to be encapsulated separately in the deep learning framework; encapsulating a single set of operators in the deep learning framework suffices to support operators in multiple operation modes simultaneously, greatly reducing maintenance cost and debugging difficulty at the framework level.
After the operation mode of the target operator is determined, the target operator is executed in that mode. From the usage of layer-by-layer mode operators and fusion mode operators described above, it can be seen that a layer-by-layer operator follows the normal flow of generating an operator, compiling it independently, and then executing it, whereas a fusion operator fuses and compiles all the generated operators together after they are generated. The execution of the target operator is explained below, again taking the first operation mode as the fusion operation mode and the second operation mode as the layer-by-layer operation mode. In one embodiment, as shown in fig. 4, if the operation mode is the layer-by-layer operation mode, the step S103 includes the following steps:
S301, compiling the target operator to obtain an executable file of the target operator.
S302, the executable file of the target operator is operated on the artificial intelligence processor.
When the target operator is executed, it first needs to be compiled into an executable file, and the resulting executable file is then run on the artificial intelligence processor, completing the operator operation. In this embodiment the target operator is in the layer-by-layer operation mode, so it can be compiled and run directly. By way of example, assume the target operator includes three operators: operator A, operator B, and operator C. The operation logic among the three is that the output of operator A serves as the entire input of operator B, the outputs of operator A and operator B together serve as the input of operator C, and the output of operator C is the final operation result. Executing the three operators in the layer-by-layer operation mode means: execute operator A, obtain its output, and store it in memory; when executing operator B, fetch operator A's output from memory as operator B's input; similarly, store operator B's output in memory, and when executing operator C, fetch the outputs of operator A and operator B from memory as operator C's input.
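The layer-by-layer execution of operators A, B, and C can be sketched as follows. The arithmetic assigned to each operator is an assumption chosen only to make the data flow concrete (the patent does not specify the operators' computations), and the `memory` dictionary stands in for the memory stores described above.

```python
def op_a(x):
    return x + 1          # assumed computation for operator A

def op_b(a_out):
    return a_out * 2      # assumed computation for operator B

def op_c(a_out, b_out):
    return a_out + b_out  # assumed computation for operator C

def run_layer_by_layer(x):
    # Each operator is compiled and executed independently; every intermediate
    # result is written back to memory before the next operator reads it.
    memory = {}
    memory["A"] = op_a(x)                         # store A's output
    memory["B"] = op_b(memory["A"])               # B reads A's output from memory
    memory["C"] = op_c(memory["A"], memory["B"])  # C reads A and B from memory
    return memory["C"]
```

With the assumed arithmetic, `run_layer_by_layer(1)` stores 2 for A and 4 for B before C combines them into 6, illustrating the memory round-trip between every pair of operators.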
In another embodiment, as shown in fig. 5, if the operation mode is the fusion operation mode, the step S103 includes the following steps:
S401, sending the target operator to the fusion unit.
S402, in the fusion unit, the target operator and other operators in the fusion unit are compiled simultaneously, and the compiled executable file is operated on the artificial intelligence processor.
An operator in the fusion operation mode requires the target operator to be sent to a fusion unit (or fusion container); the fusion unit is then compiled as a whole, and the compiled executable file is run on the artificial intelligence processor to complete the operation of the target operator. Taking operator A, operator B, and operator C as an example again, running these three operators in the fusion mode places them in the same container. When the three operators run in this container, the output of operator A does not need to be stored in memory and can be passed directly to operator B and operator C as input; likewise, the output of operator B can be passed directly to operator C as input. When operators are executed in the fusion operation mode, the memory-store step is unnecessary, which saves operating resources.
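For contrast, a sketch of the fusion-mode execution of the same three operators follows. The operators' arithmetic is again an assumption; the point illustrated is that intermediate outputs flow directly from producer to consumer inside one fused region, with no memory round-trip.

```python
def run_fused(x):
    # All three operators execute inside one fused region: the outputs of
    # operator A and operator B are handed directly to their consumers
    # instead of being stored to and fetched from memory.
    a_out = x + 1          # operator A (assumed computation)
    b_out = a_out * 2      # operator B consumes a_out directly
    return a_out + b_out   # operator C consumes a_out and b_out directly
```

Given the same input, the fused sketch produces the same result as the layer-by-layer one; only the handling of intermediate values differs, which is what saves operating resources.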
In the embodiment of the disclosure, the target operator is executed in the operation mode corresponding to the target operator, and the execution accuracy of the target operator is ensured due to different operation modes.
As shown in fig. 6, the present disclosure also provides a multi-mode operation method, including the steps of:
s1, generating a target operator in a deep learning framework according to an instruction.
And S2, detecting whether a preset global flag bit exists in the fusion distribution system.
And S3, if the global flag bit exists in the fusion distribution system, determining that the target operator is in a fusion operation mode.
And S4, sending the target operator to the fusion unit.
And S5, simultaneously compiling the target operator in the fusion unit and other operators in the fusion unit, and running the compiled executable file on the artificial intelligence processor.
And S6, if the global flag bit does not exist in the fusion distribution system, determining that the target operator is in a layer-by-layer operation mode.
And S7, compiling the target operator to obtain an executable file of the target operator.
And S8, running the executable file of the target operator on the artificial intelligence processor.
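Steps S1 to S8 above can be sketched end to end as follows. The dispatch system, fusion unit, compiler, and processor in this sketch are stand-ins (assumptions) so the control flow is runnable; they are not the disclosed components.

```python
def run_multi_mode(target_op, dispatch_system, fusion_unit):
    """Sketch of S1-S8: pick the operation mode from the global flag, then
    compile and run accordingly. `compile_ops` and `run_on_processor` are
    stand-ins for the real compiler and artificial intelligence processor."""
    if dispatch_system.get("global_flag"):  # S2/S3: flag present -> fusion mode
        fusion_unit.append(target_op)       # S4: send target operator to unit
        ops_to_compile = list(fusion_unit)  # S5: compile the whole unit together
    else:                                   # S6: no flag -> layer-by-layer mode
        ops_to_compile = [target_op]        # S7: compile the operator alone
    executable = compile_ops(ops_to_compile)
    return run_on_processor(executable)     # S5/S8: run the executable

def compile_ops(ops):
    # Stand-in "compiler": the executable is just the tuple of operator names.
    return tuple(ops)

def run_on_processor(executable):
    # Stand-in "processor": reports which operators ran together.
    return executable
```

Calling the sketch with the flag set compiles the target operator together with everything already in the fusion unit; without the flag, the target operator is compiled and run on its own.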
Specifically, in this embodiment, the organization and call flow of the framework-level operator is as follows:
[Figure: organization and call flow of the framework-level operator]
In order for one set of operators to simultaneously support the layer-by-layer and fusion operation modes, a CheckFuse link (detecting the operation mode of an operator) is added to the organization and calling form of the framework-level operator. The CheckFuse link detects whether the operation mode of the target operator is the fusion operation mode: if it is the fusion mode, FuseOp (executing the fused operator after compilation) is executed; if it is the layer-by-layer mode, CompileOp (independently compiling the operator) followed by ComputeOp (executing the independently compiled operator) is executed. Once operators are organized with the strategy of the present disclosure, a single set of operators can support both the layer-by-layer and fusion modes, and maintenance cost and debugging difficulty are greatly reduced while the operation result is guaranteed.
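The organization described, in which a single operator carries a CheckFuse hook that selects between the FuseOp path and the CompileOp/ComputeOp path, might be sketched as one operator class. The method names follow the text, but the method bodies are placeholder assumptions.

```python
class Operator:
    """One set of operator code supporting both modes via a CheckFuse hook."""

    def __init__(self, name, fusion_dispatch):
        self.name = name
        self.fusion_dispatch = fusion_dispatch  # holds the global flag bit

    def check_fuse(self):
        # CheckFuse: detect whether the current operation mode is fusion.
        return self.fusion_dispatch.get("global_flag", False)

    def fuse_op(self):
        # FuseOp: hand the operator over for joint (fused) compilation.
        return f"{self.name}: fused"

    def compile_op(self):
        # CompileOp: compile this operator independently.
        return f"{self.name}: compiled"

    def compute_op(self):
        # ComputeOp: execute the independently compiled operator.
        return f"{self.name}: executed"

    def __call__(self):
        if self.check_fuse():
            return self.fuse_op()
        self.compile_op()
        return self.compute_op()
```

Because the branch lives inside the operator itself, the framework registers the operator once and both modes are reached through the same call site.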
It should be understood that, although the steps in the above flowcharts are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least a part of the steps in the above flowcharts may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and are not necessarily performed sequentially but may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 7, there is provided an apparatus for operator operation, the apparatus comprising: a generation module 10, a determination module 11 and an operation module 12, wherein:
a generating module 10, configured to generate a target operator in the neural network framework according to the instruction;
the determining module 11 is configured to determine an operation mode of a target operator based on a preset detection strategy;
the operation module 12 is configured to execute the target operator in a first operation mode if the operation mode of the target operator is the first operation mode; and if the operation mode of the target operator is the second operation mode, executing the target operator in the second operation mode.
In one embodiment, the detection policy is determined based on call stack rules of different run mode operators.
In one embodiment, as shown in fig. 8, the determining module 11 includes: a detection unit 111 and a determination unit 112, wherein,
the detecting unit 111 is configured to detect whether a preset global flag exists in the fusion distribution system;
a determining unit 112, configured to determine that the operation mode of the target operator is the first operation mode if the global flag bit exists in the fusion distribution system, and to determine that the operation mode of the target operator is the second operation mode if the global flag bit does not exist in the fusion distribution system.
In one embodiment, the preset global flag is set at a specific time based on the pytorch mechanism. Further, the specific time is determined inside the kernel by means of the pytorch mechanism; that is, the preset global flag is set when RunningMode is set through the global variable. Specifically, because the layer-by-layer mode and the fusion mode run differently, a call to an operator in the fusion mode is necessarily initiated from the fused kernel. Based on the pytorch mechanism, the global flag RunningMode is set to "fusion" when entering the fused kernel and restored to "layer-by-layer" when the fused kernel ends, so that the bottom-layer independent kernel can access the current, correct RunningMode, and one set of operator kernels can support both the layer-by-layer and fusion modes.
In one embodiment, as shown in fig. 9, the operation module 12 includes: a compiling unit 121 and an executing unit 122, wherein,
the compiling unit 121 is configured to compile a target operator to obtain an executable file of the target operator;
and the execution unit 122 is used for running the executable file of the target operator on the artificial intelligence processor.
In an embodiment, the compiling unit 121 is further configured to send the target operator to the fusing unit; the execution unit 122 is further configured to compile the target operator and other operators in the fusion unit simultaneously in the fusion unit, and run the compiled executable file on the artificial intelligence processor.
For specific limitations on the apparatus for operator operation, reference may be made to the above limitations on the operator operation method, which are not repeated here. The modules in the apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In an embodiment, the disclosed embodiment further provides a data processing apparatus, which includes a processor and a memory, where the memory stores a computer program, and the processor implements the steps in any embodiment of the operator running method when executing the computer program.
The implementation principle and technical effect of the data processing apparatus provided in this embodiment are similar to those of the embodiment of the operator operation method, and are not described herein again.
Fig. 10 is a block diagram illustrating a combined processing device 1000 according to an embodiment of the present disclosure. As shown in fig. 10, the combined processing device 1000 includes a computing processing device 1002, an interface device 1004, other processing devices 1006, and a storage device 1008. Depending on the application scenario, one or more computing devices 1010 may be included in the computing processing device and may be configured to perform the multi-mode operator operations described herein in conjunction with the figures.
In various embodiments, the computing processing device of the present disclosure may be configured to perform user-specified operations. In an exemplary application, the computing processing device may be implemented as a single-core artificial intelligence processor or a multi-core artificial intelligence processor. Similarly, one or more computing devices included within a computing processing device may be implemented as an artificial intelligence processor core or as part of a hardware architecture of an artificial intelligence processor core. When multiple computing devices are implemented as an artificial intelligence processor core or as part of a hardware structure of an artificial intelligence processor core, the computing processing devices of the present disclosure may be viewed as having a single core structure or a homogeneous multi-core structure.
In an exemplary operation, the computing processing device of the present disclosure may interact with other processing devices through the interface device to jointly complete a user-specified operation. Depending on the implementation, other processing devices of the present disclosure may include one or more types of general-purpose and/or special-purpose processors, such as central processing units (CPUs), graphics processing units (GPUs), and artificial intelligence processors. These processors may include, but are not limited to, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic, discrete hardware components, etc., and their number may be determined according to actual needs. As mentioned above, the computing processing device of the present disclosure alone may be considered to have a single-core structure or a homogeneous multi-core structure. However, when considered together, the computing processing device and the other processing devices may be regarded as forming a heterogeneous multi-core structure.
In one or more embodiments, the other processing devices may serve as an interface between the computing processing device of the present disclosure (which may be embodied as an artificial intelligence computing device, e.g., one related to neural network operations) and external data and control, performing basic control including, but not limited to, data handling and starting and/or stopping the computing device. In further embodiments, other processing devices may also cooperate with the computing processing device to jointly complete computational tasks.
In one or more embodiments, the interface device may be used to transfer data and control instructions between the computing processing device and other processing devices. For example, the computing processing device may obtain input data from other processing devices via the interface device and write the input data into an on-chip storage device (or memory) of the computing processing device. Further, the computing processing device may obtain control instructions from the other processing devices via the interface device and write them into an on-chip control cache of the computing processing device. Alternatively or additionally, the interface device may also read data from the storage device of the computing processing device and transmit the data to the other processing devices.
Additionally or alternatively, the combined processing device of the present disclosure may further include a storage device. As shown in the figure, the storage device is connected to the computing processing device and the other processing devices, respectively. In one or more embodiments, the storage device may be used to store data of the computing processing device and/or the other processing devices, for example data that cannot be fully retained in the internal or on-chip storage of the computing processing device or other processing devices.
In some embodiments, the present disclosure also discloses a chip (e.g., chip 1102 shown in fig. 11). In one implementation, the chip is a system on chip (SoC) integrated with one or more combined processing devices as shown in fig. 10. The chip may be connected to other associated components through an external interface device (e.g., external interface device 1106 shown in fig. 11). The associated component may be, for example, a camera, a display, a mouse, a keyboard, a network card, or a Wi-Fi interface. In some application scenarios, other processing units (e.g., video codecs) and/or interface modules (e.g., a DRAM interface) may be integrated on the chip. In some embodiments, the present disclosure also discloses a chip packaging structure including the chip. In some embodiments, the present disclosure further discloses a board card including the chip packaging structure. The board card will be described in detail with reference to fig. 11.
Fig. 11 is a schematic diagram illustrating a structure of a board 1100 according to an embodiment of the disclosure. As shown in FIG. 11, the card includes a memory device 1104 for storing data, which includes one or more memory cells 1110. The memory device may be coupled to and communicate data with control device 1108 and chip 1102 described above via, for example, a bus. Further, the board also includes an external interface device 1106 configured for data relay or transfer functions between the chip (or chips in a chip package) and an external device 1112 (e.g., a server or computer, etc.). For example, the data to be processed may be transferred to the chip by an external device through an external interface. For another example, the calculation result of the chip may be transmitted back to an external device via the external interface device. According to different application scenarios, the external interface device may have different interface forms, for example, it may adopt a standard PCIE interface or the like.
In one or more embodiments, the control device in the disclosed card may be configured to regulate the state of the chip. Therefore, in an application scenario, the control device may include a single chip Microcomputer (MCU) for controlling the operating state of the chip.
From the above description in conjunction with fig. 10 and 11, it will be understood by those skilled in the art that the present disclosure also discloses an electronic device or apparatus, which may include one or more of the above boards, one or more of the above chips and/or one or more of the above combination processing devices.
According to different application scenarios, the electronic device or apparatus of the present disclosure may include a server, a cloud server, a server cluster, a data processing apparatus, a robot, a computer, a printer, a scanner, a tablet computer, a smart terminal, a PC device, a terminal of the internet of things, a mobile terminal, a mobile phone, a vehicle recorder, a navigator, a sensor, a camera, a video camera, a projector, a watch, an earphone, a mobile storage, a wearable device, a visual terminal, an autopilot terminal, a vehicle, a household appliance, and/or a medical device. The vehicle comprises an airplane, a ship and/or a vehicle; the household appliances comprise a television, an air conditioner, a microwave oven, a refrigerator, an electric cooker, a humidifier, a washing machine, an electric lamp, a gas stove and a range hood; the medical equipment comprises a nuclear magnetic resonance instrument, a B ultrasonic instrument and/or an electrocardiograph. The electronic device or apparatus of the present disclosure may also be applied to the fields of the internet, the internet of things, data centers, energy, transportation, public management, manufacturing, education, power grid, telecommunications, finance, retail, construction sites, medical, and the like. Further, the electronic device or apparatus disclosed herein may also be used in application scenarios related to artificial intelligence, big data, and/or cloud computing, such as a cloud end, an edge end, and a terminal. In one or more embodiments, the computationally-powerful electronic device or apparatus according to the present disclosure may be applied to a cloud device (e.g., a cloud server), while the less-power electronic device or apparatus may be applied to a terminal device and/or an edge device (e.g., a smartphone or a camera). 
In one or more embodiments, the hardware information of the cloud device and the hardware information of the terminal device and/or the edge device are compatible with each other, so that appropriate hardware resources can be matched from the hardware resources of the cloud device to simulate the hardware resources of the terminal device and/or the edge device according to the hardware information of the terminal device and/or the edge device, and uniform management, scheduling and cooperative work of end-cloud integration or cloud-edge-end integration can be completed.
It is noted that for the sake of brevity, this disclosure presents some methods and embodiments thereof as a series of acts or combinations thereof, but those skilled in the art will appreciate that the disclosed aspects are not limited by the order of acts described. Accordingly, one of ordinary skill in the art will appreciate that certain steps may be performed in other sequences or simultaneously, in accordance with the disclosure or teachings of the present disclosure. Further, those skilled in the art will appreciate that the embodiments described in this disclosure are capable of alternative embodiments, in which acts or modules are involved, which are not necessarily required to practice one or more aspects of the disclosure. In addition, the present disclosure also focuses on the description of some embodiments, depending on the solution. In view of the above, those skilled in the art will understand that portions of the disclosure that are not described in detail in one embodiment may also be referred to in the description of other embodiments.
In particular implementation, based on the disclosure and teachings of the present disclosure, one of ordinary skill in the art will appreciate that the several embodiments disclosed in the present disclosure may be implemented in other ways not disclosed herein. For example, as for each unit in the foregoing embodiments of the electronic device or apparatus, the units are divided based on the logic function, and there may be another division manner in the actual implementation. Also for example, multiple units or components may be combined or integrated with another system or some features or functions in a unit or component may be selectively disabled. The connections discussed above in connection with the figures may be direct or indirect couplings between the units or components in terms of connectivity between the different units or components. In some scenarios, the aforementioned direct or indirect coupling involves a communication connection utilizing an interface, where the communication interface may support electrical, optical, acoustic, magnetic, or other forms of signal transmission.
In the present disclosure, units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units. The aforementioned components or units may be co-located or distributed across multiple network elements. In addition, according to actual needs, part or all of the units can be selected to achieve the purpose of the solution of the embodiment of the present disclosure. In addition, in some scenarios, multiple units in embodiments of the present disclosure may be integrated into one unit or each unit may exist physically separately.
In some implementation scenarios, the integrated units may be implemented in the form of software program modules. If implemented in the form of software program modules and sold or used as a stand-alone product, the integrated units may be stored in a computer-readable memory. On this basis, when aspects of the present disclosure are embodied in the form of a software product (e.g., a computer-readable storage medium), the software product may be stored in a memory and may include instructions for causing a computer device (e.g., a personal computer, a server, or a network device) to perform some or all of the steps of the methods described in the embodiments of the present disclosure. The memory may include, but is not limited to, a USB flash drive, a flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
In other implementation scenarios, the integrated unit may also be implemented in hardware, that is, as a specific hardware circuit, which may include digital circuits and/or analog circuits, etc. The physical implementation of the hardware structure of the circuit may include, but is not limited to, physical devices, which may include, but are not limited to, devices such as transistors or memristors. In view of this, the various devices described herein (e.g., computing devices or other processing devices) may be implemented by suitable hardware processors, such as CPUs, GPUs, FPGAs, DSPs, and ASICs. Further, the aforementioned storage unit or storage device may be any suitable storage medium (including a magnetic storage medium or a magneto-optical storage medium, etc.), and may be, for example, a resistive random access memory (RRAM), a dynamic random access memory (DRAM), a static random access memory (SRAM), an enhanced dynamic random access memory (EDRAM), a high-bandwidth memory (HBM), a hybrid memory cube (HMC), a ROM, or a RAM.
While various embodiments of the present disclosure have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous modifications, changes, and substitutions will occur to those skilled in the art without departing from the spirit and scope of the present disclosure. It should be understood that various alternatives to the embodiments of the disclosure described herein may be employed in practicing the disclosure. It is intended that the following claims define the scope of the disclosure and that equivalents or alternatives within the scope of these claims be covered thereby.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. All possible combinations of the technical features in the above embodiments may not be described for the sake of brevity, but should be construed as being within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The foregoing may be better understood in light of the following clauses:
clause A1, a method of operator operation, the method comprising:
generating a target operator in a deep learning framework according to the instruction;
determining an operation mode of the target operator based on a preset detection strategy;
if the operation mode of the target operator is a first operation mode, executing the target operator in the first operation mode; and if the operation mode of the target operator is a second operation mode, executing the target operator in the second operation mode.
Clause A2, according to the method of clause A1, the detection policy is determined based on call stack rules of different operation mode operators.
Clause A3, according to the method of clause A1 or A2, when the first operation mode is a fusion operation mode and the second operation mode is a layer-by-layer operation mode, the determining the operation mode of the target operator based on a preset detection strategy includes:
detecting whether a preset global flag bit exists in the fusion distribution system;
if the global flag bit exists in the fusion distribution system, determining that the operation mode of the target operator is the first operation mode; and if the global flag bit does not exist in the fusion distribution system, determining that the operation mode of the target operator is the second operation mode.
Clause A4, according to the method of clause A3, the preset global flag is set at a specific time based on the pytorch mechanism.
Clause A5, the method of clause A4, wherein executing the target operator in the first operation mode comprises:
sending the target operator to a fusion unit;
and in the fusion unit, compiling the target operator and other operators in the fusion unit at the same time, and running the compiled executable file on an artificial intelligence processor.
Clause A6, the method of clause A4, wherein executing the target operator in the second operation mode comprises:
compiling the target operator to obtain an executable file of the target operator;
and running the executable file of the target operator on an artificial intelligence processor.
Clause A7, an apparatus for operator operation, the apparatus comprising:
the generation module is used for generating a target operator in the deep learning framework according to the instruction;
the determining module is used for determining the operation mode of the target operator based on a preset detection strategy;
the operation module is used for executing the target operator in a first operation mode if the operation mode of the target operator is the first operation mode; and if the operation mode of the target operator is a second operation mode, executing the target operator in the second operation mode.
Clause A8, the apparatus of clause A7, wherein the detection policy is determined based on call stack rules for different operating mode operators.
Clause A9, the apparatus of clause A7 or A8, the determining module comprising: a detection unit and a determination unit, wherein,
the detection unit is used for detecting whether a preset global flag bit exists in the fusion distribution system;
the determining unit is configured to determine that the operation mode of the target operator is the first operation mode if the global flag bit exists in the fusion distribution system; and to determine that the operation mode of the target operator is the second operation mode if the global flag bit does not exist in the fusion distribution system.
Clause a10, the apparatus according to clause A7 or A8, wherein the predetermined global flag is set at a specific time based on the pytorch mechanism.
Clause a11, the apparatus of clause a10, the execution module comprising: a compiling unit and an executing unit; the compiling unit is used for compiling the target operator to obtain an executable file of the target operator; and the execution unit is used for running the executable file of the target operator on the artificial intelligence processor.
Clause a12, the apparatus according to clause a10, the compiling unit further configured to send the target operator to a fusion unit; the execution unit is further configured to compile the target operator and other operators in the fusion unit simultaneously in the fusion unit, and run the compiled executable file on an artificial intelligence processor.
Clause a13, a data processing apparatus comprising a memory storing a computer program and a processor, the processor implementing the steps of the method of any one of clauses A1 to A6 when executing the computer program.
Clause a14, a combined processing device comprising the data processing device of clause a13, a universal interconnection interface, and other processing devices other than the data processing device; the data processing device interacts with the other processing devices.
Clause a15, a chip comprising the combined processing device of clause a 14.
Clause a16, a card comprising the chip of clause a 15.
Clause a17, an electronic device comprising the card of clause a 16.
Clause a18, an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, when executing the computer program, implementing the steps of the method of any of clauses A1-A6.
Clause a19, a storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any of clauses A1-A6.
The embodiments of the present disclosure have been described in detail above, and specific examples are used herein to explain the principles and implementations of the present disclosure; the above description of the embodiments is intended only to help in understanding the method of the present disclosure and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present disclosure, make changes to the specific implementations and the scope of application. In view of the foregoing, the contents of this specification should not be construed as limiting the present disclosure.

Claims (10)

1. A method of operator operation, the method comprising:
generating a target operator in a deep learning framework according to the instruction;
determining an operation mode of the target operator based on a preset detection strategy;
if the operation mode of the target operator is a first operation mode, executing the target operator in the first operation mode; and if the operation mode of the target operator is a second operation mode, executing the target operator in the second operation mode.
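As a rough illustration only, the three claimed steps of claim 1 could be sketched as below; none of these names appear in the claims, and `FusionDispatcher`, `determine_mode`, and the mode labels are hypothetical stand-ins:

```python
# Hypothetical sketch of the claimed flow: generate a target operator,
# determine its operation mode, then execute it in that mode.
FIRST_MODE = "fusion"            # illustrative label for the first operation mode
SECOND_MODE = "layer_by_layer"   # illustrative label for the second operation mode

class FusionDispatcher:
    """Stand-in for the 'fusion distribution system' of the claims."""
    def __init__(self):
        self.global_flag = False  # the preset global flag bit

def determine_mode(dispatcher):
    # Preset detection strategy: presence of the global flag bit
    # selects the first (fusion) operation mode.
    return FIRST_MODE if dispatcher.global_flag else SECOND_MODE

def run_operator(op_name, dispatcher):
    mode = determine_mode(dispatcher)
    if mode == FIRST_MODE:
        return f"{op_name}: executed in {FIRST_MODE} mode"
    return f"{op_name}: executed in {SECOND_MODE} mode"
```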
2. The method of claim 1, wherein the detection strategy is determined based on call stack rules of operators in different operation modes.
3. The method according to claim 1 or 2, wherein when the first operation mode is a fusion operation mode and the second operation mode is a layer-by-layer operation mode, the determining the operation mode of the target operator based on a preset detection strategy includes:
detecting whether a preset global flag bit exists in the fusion distribution system;
if the global flag bit exists in the fusion distribution system, determining that the operation mode of the target operator is the first operation mode; and if the global flag bit does not exist in the fusion distribution system, determining that the operation mode of the target operator is the second operation mode.
4. The method of claim 3, wherein the preset global flag bit is set at a specific time based on a PyTorch mechanism.
5. The method of claim 4, wherein said executing said target operator in said first mode of operation comprises:
compiling the target operator to obtain an executable file of the target operator;
and running the executable file of the target operator on an artificial intelligence processor.
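The two steps of claim 5 amount to compiling a single operator in isolation and handing the result to the accelerator runtime. In this toy sketch both `compile_operator` and `run_on_ai_processor` are hypothetical placeholders; a real system would invoke the device toolchain and driver:

```python
def compile_operator(op_name):
    """Placeholder compiler: lower one operator to an 'executable file'."""
    return {"binary": f"compiled({op_name})"}

def run_on_ai_processor(executable):
    """Placeholder runtime: a real implementation would submit the
    binary to the artificial intelligence processor."""
    return f"ran {executable['binary']}"
```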
6. The method of claim 4, wherein executing the target operator in the second mode of operation comprises:
sending the target operator to a fusion unit;
and in the fusion unit, compiling the target operator and other operators in the fusion unit at the same time, and running the compiled executable file on an artificial intelligence processor.
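A fusion unit of the kind described in claim 6 can be thought of as a buffer that accumulates operators and then compiles them "at the same time" into one executable, letting the backend fuse kernels and avoid per-operator launch overhead. The class below is a hypothetical sketch, not the claimed implementation:

```python
class FusionUnit:
    """Hypothetical buffer that collects operators for joint compilation."""
    def __init__(self):
        self.ops = []

    def add(self, op_name):
        # Operators are sent to the fusion unit instead of being
        # compiled one by one.
        self.ops.append(op_name)

    def compile_all(self):
        # Compiling all buffered operators together lets the backend
        # emit a single fused executable.
        return "fused(" + "+".join(self.ops) + ")"
```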
7. An apparatus for operator operations, the apparatus comprising:
the generation module is used for generating a target operator in the deep learning framework according to the instruction;
the determining module is used for determining the operation mode of the target operator based on a preset detection strategy;
the operation module is used for executing the target operator in a first operation mode if the operation mode of the target operator is the first operation mode; and if the operation mode of the target operator is a second operation mode, executing the target operator in the second operation mode.
8. The apparatus of claim 7, wherein the determining module comprises:
the detection unit is used for detecting whether a preset global flag bit exists in the fusion distribution system;
a determining unit, configured to determine that the operation mode of the target operator is the first operation mode if the global flag bit exists in the fusion distribution system; and to determine that the operation mode of the target operator is the second operation mode if the global flag bit does not exist in the fusion distribution system.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 5 are implemented when the computer program is executed by the processor.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 5.
CN202110545402.XA 2021-05-19 2021-05-19 Operator operation method, device and related product Pending CN115374915A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110545402.XA CN115374915A (en) 2021-05-19 2021-05-19 Operator operation method, device and related product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110545402.XA CN115374915A (en) 2021-05-19 2021-05-19 Operator operation method, device and related product

Publications (1)

Publication Number Publication Date
CN115374915A true CN115374915A (en) 2022-11-22

Family

ID=84059885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110545402.XA Pending CN115374915A (en) 2021-05-19 2021-05-19 Operator operation method, device and related product

Country Status (1)

Country Link
CN (1) CN115374915A (en)

Similar Documents

Publication Publication Date Title
CN109543825B (en) Neural network model algorithm compiling method and device and related products
CN110119807B (en) Operation method, operation device, computer equipment and storage medium
CN111079909B (en) Operation method, system and related product
CN111353124A (en) Operation method, operation device, computer equipment and storage medium
CN115373646A (en) Information expansion method, device and related product
CN115374915A (en) Operator operation method, device and related product
CN111078291B (en) Operation method, system and related product
CN111079925B (en) Operation method, device and related product
CN111061507A (en) Operation method, operation device, computer equipment and storage medium
CN111026440B (en) Operation method, operation device, computer equipment and storage medium
CN111275197B (en) Operation method, device, computer equipment and storage medium
CN111339060B (en) Operation method, device, computer equipment and storage medium
CN111078283B (en) Operation method, device and related product
CN111079915B (en) Operation method, device and related product
CN111078125B (en) Operation method, device and related product
CN111079924B (en) Operation method, system and related product
CN111079914B (en) Operation method, system and related product
CN111353125B (en) Operation method, operation device, computer equipment and storage medium
CN111078280B (en) Operation method, device and related product
CN111078285B (en) Operation method, system and related product
CN111079907B (en) Operation method, device and related product
CN111078281B (en) Operation method, system and related product
CN111325331B (en) Operation method, device and related product
CN111062483A (en) Operation method, operation device, computer equipment and storage medium
CN114443259A (en) Data processing method, data processing device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221214

Address after: 215300 room 5, 232 Yuanfeng Road, Yushan Town, Kunshan City, Suzhou City, Jiangsu Province

Applicant after: Cambrian (Kunshan) Information Technology Co.,Ltd.

Address before: 6 / F, block B, 168 Tonghui Road, Pudong New Area, Shanghai 201306

Applicant before: SHANGHAI CAMBRICON INFORMATION TECHNOLOGY Co.,Ltd.