CN110751272A - Method, device and storage medium for positioning data in convolutional neural network model - Google Patents


Info

Publication number
CN110751272A
Authority
CN
China
Prior art keywords: neuron, layer, mother, data, neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911047979.7A
Other languages
Chinese (zh)
Other versions
CN110751272B (en)
Inventor
杨飏
Current Assignee
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai
Priority to CN201911047979.7A
Publication of CN110751272A
Application granted
Publication of CN110751272B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to the technical field of data processing, and in particular to a method, a device and a storage medium for positioning data in a convolutional neural network model. They address the technical problem in the related art that, when a convolutional neural network model outputs erroneous data, it is difficult to determine which layer the error occurred in. The method for positioning data in the convolutional neural network model comprises the following steps: determining that a target child neuron whose data does not fit the expected data appears in the fifth neuron set; calculating the position, in the fourth neuron set, of the fourth mother neuron corresponding to the target child neuron; acquiring the data of the fourth mother neuron according to the position of the fourth mother neuron; judging, according to the data of the fourth mother neuron and the data of the target child neuron, whether the setting parameters of the pooling layer are correct; and, when the setting parameters of the pooling layer are incorrect, outputting first prompt information so that a user can modify the setting parameters of the pooling layer to obtain a modified convolutional neural network model.

Description

Method, device and storage medium for positioning data in convolutional neural network model
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method and an apparatus for positioning data in a convolutional neural network model, and a storage medium.
Background
A Convolutional Neural Network (CNN) is a type of feed-forward neural network that involves convolution computations and has a deep structure, and is one of the representative algorithms of deep learning. Research on convolutional neural networks began in the 1980s and 1990s; in the twenty-first century, with the development of deep learning theory and improvements in numerical computing equipment, convolutional neural networks developed rapidly and were applied to computer vision, natural language processing and other fields, for tasks such as image recognition, image localization, object detection and image segmentation. The convolutional neural network is constructed by imitating the biological mechanism of visual perception and can perform both supervised and unsupervised learning. Because the convolution kernels within its hidden layers share parameters and the connections between layers are sparse, a convolutional neural network can learn grid-like features (such as pixels and audio) with a small amount of computation, produces stable results, and imposes no additional feature engineering requirements on the data.
The structure of a convolutional neural network model is essentially a mapping from input to output, usually formed by stacking a convolutional layer, a BN layer (Batch Normalization layer), an activation layer, a pooling layer and so on. The output of each layer can be understood as the neurons of that layer, obtained by operating on the previous layer's data with a specific formula and specific parameters; these specific parameters are determined during training of the convolutional neural network. All the neurons (data) in each layer become, after the operation of that layer, new neurons in the next layer. Once training is finished, forward derivation can be carried out layer by layer to meet application (prediction) requirements.
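As an illustration only (this sketch is not from the patent, and the layer implementations are minimal stand-ins; in particular the BN stand-in is a plain affine transform rather than full batch normalization), the forward derivation through such a stack can be written as:

```python
def conv2d(x, k):
    """Valid convolution, stride 1, on a 2-D list-of-lists."""
    kh, kw = len(k), len(k[0])
    out = []
    for r in range(len(x) - kh + 1):
        row = []
        for c in range(len(x[0]) - kw + 1):
            row.append(sum(x[r + i][c + j] * k[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def bn(x, gamma=1.0, beta=0.0):
    """Stand-in for a BN layer: a per-element affine transform."""
    return [[gamma * v + beta for v in row] for row in x]

def relu(x):
    """Activation layer."""
    return [[max(0.0, v) for v in row] for row in x]

def maxpool(x, size=2, stride=2):
    """Pooling layer."""
    return [[max(x[r + i][c + j] for i in range(size) for j in range(size))
             for c in range(0, len(x[0]) - size + 1, stride)]
            for r in range(0, len(x) - size + 1, stride)]

# Each layer's output set becomes the next layer's input set.
set1 = [[1, 2, 0, 1],
        [0, 1, 3, 1],
        [2, 0, 1, 0],
        [1, 1, 0, 2]]                  # first neuron set (input)
set2 = conv2d(set1, [[1, 0], [0, 1]])  # convolutional layer -> 3x3
set3 = bn(set2, gamma=2.0)             # BN layer
set4 = relu(set3)                      # activation layer
set5 = maxpool(set4)                   # pooling layer -> 1x1
print(set5)                            # [[10.0]]
```

Forward derivation of this kind yields only the per-layer outputs; the link from each neuron back to the mother neurons that produced it is not retained, which is the gap the present disclosure addresses.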
Artificial Intelligence (AI) software based on convolutional neural networks requires a corresponding trained convolutional neural network model. The command sequence of the trained model is then executed by an accelerated computing unit such as a Graphics Processing Unit (GPU) or an embedded Neural-network Processing Unit (NPU), or run on a CPU. When a GPU, NPU or CPU is designed in-house, it is necessary to debug whether executing the commands of the convolutional neural network yields correct results.
Disclosure of Invention
The present disclosure provides a method, an apparatus, and a storage medium for data positioning in a convolutional neural network model, so as to solve the technical problem in the related art that it is difficult to determine which layer has an error when the convolutional neural network model outputs erroneous data.
To achieve the above object, in a first aspect of the embodiments of the present disclosure, there is provided a method for data localization in a convolutional neural network model, the convolutional neural network model including a pooling layer and an activation layer, the activation layer outputting a fourth set of neurons, the fourth set of neurons being input to the pooling layer so that the pooling layer outputs a fifth set of neurons; the method comprises the following steps:
determining that a target sub-neuron of the fifth set of neurons is present that does not fit expected data;
calculating the position of a fourth mother neuron corresponding to the target child neuron in the fourth neuron set according to the position of the target child neuron in the fifth neuron set, the kernel size of the pooling layer, and the width and height of the fourth neuron set;
acquiring data of the fourth mother neuron according to the position of the fourth mother neuron;
judging whether the setting parameters of the pooling layer are correct or not according to the data of the fourth mother neuron and the data of the target child neuron;
when the setting parameters of the pooling layer are incorrect, outputting first prompt information to enable a user to modify the setting parameters of the pooling layer to obtain a modified convolutional neural network model.
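The steps above, together with the optional steps that follow, amount to a layer-by-layer backtracking loop. A schematic sketch (the function names and the way each layer is represented here are invented for illustration, not taken from the patent):

```python
def locate_faulty_layer(layers, child_index, child_value):
    """Walk from the last layer toward the first. `layers` lists the layers
    in backtracking order (last layer first) as tuples of
    (name, locate_mother, recompute), where locate_mother maps a child
    position to (mother_position, mother_value) and recompute applies the
    layer's setting parameters to the mother data. Returns the name of the
    first layer whose recomputation disagrees with the observed data, or
    None when every layer reproduces it."""
    idx, val = child_index, child_value
    for name, locate_mother, recompute in layers:
        mother_idx, mother_val = locate_mother(idx)
        if recompute(mother_val) != val:
            return name  # this layer's setting parameters are suspect
        idx, val = mother_idx, mother_val
    return None

# Toy usage: the pooling step reproduces the output, the activation step
# does not, so the fault is reported at the activation layer.
layers = [
    ("pooling",    lambda i: (2 * i, 7.0), lambda v: v),
    ("activation", lambda i: (i, -7.0),    lambda v: max(0.0, v)),
]
print(locate_faulty_layer(layers, child_index=3, child_value=7.0))  # activation
```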
Optionally, the convolutional neural network model further comprises a BN layer, the BN layer outputting a third set of neurons that are input to the activation layer to cause the activation layer to output the fourth set of neurons; the method further comprises the following steps:
when the setting parameters of the pooling layer are correct, calculating the position of a third mother neuron corresponding to the fourth mother neuron in the third neuron set according to the position of the fourth mother neuron in the fourth neuron set and the width and height of the third neuron set;
acquiring data of the third maternal neuron according to the position of the third maternal neuron;
judging whether the setting parameters of the activation layer are correct or not according to the data of the third maternal neuron and the data of the fourth maternal neuron;
and when the setting parameters of the activation layer are incorrect, outputting second prompt information to enable a user to modify the setting parameters of the activation layer to obtain a modified convolutional neural network model.
Optionally, the position of the third mother neuron is obtained by calculation according to the following equation:
P3(x) = Set4Ry*Wx + Set4Cy + x*Wx*Hx
wherein P3(x) represents the position of the third mother neuron in the third neuron set, x represents the channel in which the fourth mother neuron lies, Wx represents the width of the third neuron set, Hx represents the height of the third neuron set, Set4Ry represents the row position y of the fourth mother neuron in the fourth neuron set, and Set4Cy represents the column position y of the fourth mother neuron in the fourth neuron set.
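Reading the equation above as a row-major linear index plus a whole-plane offset per channel (an interpretation, since the patent text is garbled at this point), it can be coded as:

```python
def mother_position(row, col, width, height, channel):
    """Linear position of the mother neuron at (row, col) in a set of the
    given width and height, offset by `channel` full planes, i.e.
    row*Wx + col + x*Wx*Hx in the notation of the equation above."""
    return row * width + col + channel * width * height

# A neuron in row 1, column 2 of a 4-wide, 3-high set, channel 0:
print(mother_position(1, 2, 4, 3, 0))  # 6
# The same coordinates one channel plane later (offset by 4*3 = 12):
print(mother_position(1, 2, 4, 3, 1))  # 18
```

The same helper covers the BN-layer equation below, which differs only in which sets the row, column, width and height refer to.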
Optionally, the convolutional neural network model further comprises a convolutional layer, the convolutional layer outputs a second set of neurons, the second set of neurons inputs the BN layer to cause the BN layer to output the third set of neurons; the method further comprises the following steps:
when the setting parameters of the activation layer are correct, calculating the position of a second mother neuron corresponding to the third mother neuron in the second neuron set according to the position of the third mother neuron in the third neuron set and the width and height of the second neuron set;
acquiring data of the second maternal neuron according to the position of the second maternal neuron;
judging whether the setting parameters of the BN layer are correct or not according to the data of the second mother neuron and the data of the third mother neuron;
and when the setting parameters of the BN layer are incorrect, outputting third prompt information to enable a user to modify the setting parameters of the BN layer so as to obtain a modified convolutional neural network model.
Optionally, the position of the second mother neuron is obtained by calculation according to the following equation:
P2(x) = Set3Ry*Wx + Set3Cy + x*Wx*Hx
wherein P2(x) represents the position of the second mother neuron in the second neuron set, x represents the channel in which the third mother neuron lies, Wx represents the width of the second neuron set, Hx represents the height of the second neuron set, Set3Ry represents the row position y of the third mother neuron in the third neuron set, and Set3Cy represents the column position y of the third mother neuron in the third neuron set.
Optionally, the convolutional layer outputs the second set of neurons after inputting the first set of neurons; the method further comprises the following steps:
when the setting parameters of the BN layer are correct, calculating the position of a first mother neuron corresponding to the second mother neuron in the first neuron set according to the position of the second mother neuron in the second neuron set, the step value of the movement of the convolution kernel in the convolution layer and the width and height of the first neuron set;
acquiring data of the first mother neuron according to the position of the first mother neuron;
judging whether the setting parameters of the convolutional layer are correct or not according to the data of the second maternal neuron and the data of the first maternal neuron;
and when the setting parameters of the convolutional layer are incorrect, outputting fourth prompt information to enable a user to modify the setting parameters of the convolutional layer so as to obtain a modified convolutional neural network model.
Optionally, the method further comprises:
after the modified convolutional neural network model is obtained, verified target input data and target output data are obtained;
inputting the target input data into the modified convolutional neural network model to obtain output data to be verified, which is output by the modified convolutional neural network model;
and, when the output data to be verified matches the target output data, confirming that the modified convolutional neural network model is a network model which meets expectations.
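The verification steps above can be sketched as follows (the model here is an arbitrary callable; the names are illustrative, not from the patent):

```python
def model_meets_expectation(model, target_input, target_output):
    """Feed verified target input into the modified model and compare the
    output data to be verified against the verified target output data."""
    return model(target_input) == target_output

# Stand-in for a repaired model whose last layer is a ReLU:
fixed_model = lambda xs: [max(0, v) for v in xs]
print(model_meets_expectation(fixed_model, [-1, 2], [0, 2]))   # True
print(model_meets_expectation(fixed_model, [-1, 2], [-1, 2]))  # False
```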
In a second aspect of the embodiments of the present disclosure, there is provided an apparatus for data localization in a convolutional neural network model, the convolutional neural network model including a pooling layer and an activation layer, the activation layer outputting a fourth set of neurons, the fourth set of neurons inputting into the pooling layer to cause the pooling layer to output a fifth set of neurons; the device comprises:
a determination module configured to determine that a target sub-neuron of the fifth set of neurons is present that does not fit the expected data;
a calculation module configured to calculate the position of a fourth mother neuron corresponding to the target child neuron in the fourth neuron set according to the position of the target child neuron in the fifth neuron set, the kernel size of the pooling layer, and the width and height of the fourth neuron set;
the acquisition module is configured to acquire data of the fourth mother neuron according to the position of the fourth mother neuron;
a judging module configured to judge whether the setting parameters of the pooling layer are correct according to the data of the fourth mother neuron and the data of the target child neuron;
an output module configured to output first prompt information to enable a user to modify the setting parameters of the pooling layer to obtain a modified convolutional neural network model when the setting parameters of the pooling layer are incorrect.
In a third aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the steps of the method of any one of the above first aspects.
In a fourth aspect of the embodiments of the present disclosure, an apparatus for locating data in a convolutional neural network model is provided, including:
a memory having a computer program stored thereon; and
a processor for executing the computer program in the memory to implement the steps of the method of any of the first aspects above.
By adopting the technical scheme, the following technical effects can be at least achieved:
according to the method, the positions of the target sub-neurons in the neuron set after the convolutional neural network model outputs and the data among all layers can be traced back one layer by one layer, which layers in the convolutional neural network model are in problem can be positioned, then a user can modify the setting parameters of the layers, the convolutional neural network model is repaired, and the technical problem that which layer is in error is difficult to determine when the convolutional neural network model outputs error data in the related technology is solved.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
fig. 1 is a flowchart illustrating a method for locating data in a convolutional neural network model according to an exemplary embodiment of the present disclosure.
Fig. 2 is a diagram illustrating the generation of forward derived neurons according to an exemplary embodiment of the present disclosure.
Fig. 3 is a schematic diagram of a neuron set shown in an exemplary embodiment of the present disclosure.
Fig. 4 is a schematic diagram illustrating the localization of pooled maternal neurons according to an exemplary embodiment of the present disclosure.
Fig. 5 is a schematic diagram illustrating the location of an activation layer mother neuron according to an exemplary embodiment of the present disclosure.
Fig. 6 is a schematic diagram illustrating a BN layer mother neuron localization according to an exemplary embodiment of the present disclosure.
Fig. 7 is a schematic diagram illustrating a convolutional layer mother neuron localization according to an exemplary embodiment of the present disclosure.
Fig. 8 is a schematic diagram of a mother neuron localization under a multi-layer overlay, shown in an exemplary embodiment of the present disclosure.
Fig. 9 is another schematic diagram of a mother neuron location under a multi-layer overlay, shown in an exemplary embodiment of the present disclosure.
Fig. 10 is a diagram of a convolutional neural network model structure shown in an exemplary embodiment of the present disclosure.
Fig. 11 is a table of the operation scale and the binary data size of each layer of the convolutional neural network model shown in fig. 10.
FIG. 12 is a binary sequence diagram of the convolutional neural network model output shown in FIG. 10.
Fig. 13 is a schematic diagram of the binary data shown in fig. 12 spread by layers.
Fig. 14 is another schematic diagram of the binary data shown in fig. 12 spread by layers.
Fig. 15 is a schematic diagram of the output source corresponding to each layer of channel in the convolutional neural network model shown in fig. 10.
Fig. 16 is a schematic diagram of a neuron trace of the convolutional neural network model shown in fig. 10.
Fig. 17 is a block diagram of an apparatus for locating data in a convolutional neural network model according to an exemplary embodiment of the present disclosure.
FIG. 18 is a block diagram of an apparatus for data localization in another convolutional neural network model, according to an exemplary embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in detail with reference to the accompanying drawings and examples, so that how to apply technical means to solve technical problems and achieve the corresponding technical effects can be fully understood and implemented. The embodiments and various features in the embodiments of the present application can be combined with each other without conflict, and the formed technical solutions are all within the protection scope of the present disclosure.
The inventor of the present disclosure has found that, in the related art, a convolutional neural network operation framework can only obtain the neuron values output by each layer through forward derivation, losing the connection between a neuron and the neurons in the layer preceding it. When a neuron in a certain layer is found during operation not to match the expected value, the source of that neuron needs to be traced back, that is, it must be determined from which mother neurons in the previous layer, or even in the previous several layers, the neuron was obtained and by which operations, so that a possibly erroneous mapping operation can be corrected. In order to reduce instruction time, most typical applications process several traditional network layers of a convolutional neural network model (including convolutional layers, BN layers and the like) together as one hardware layer and obtain only the processing result of that hardware layer. When the result does not match the expectation, conventional techniques cannot directly determine, from the processing result of the hardware layer, in which of the traditional network layers composing it the error point lies.
Example one
Fig. 1 is a flowchart illustrating a method for locating data in a convolutional neural network model according to an exemplary embodiment of the present disclosure, which solves the technical problem in the related art that it is difficult to determine in which layer the erroneous neurons lie when the convolutional neural network model outputs erroneous data. The convolutional neural network model includes a convolutional layer, a BN layer, an activation layer and a pooling layer: the convolutional layer outputs a second neuron set after the first neuron set is input, the second neuron set is input to the BN layer so that the BN layer outputs a third neuron set, the third neuron set is input to the activation layer so that the activation layer outputs a fourth neuron set, and the fourth neuron set is input to the pooling layer so that the pooling layer outputs a fifth neuron set. As shown in fig. 1, the method may include the following steps:
s101, determining that target sub-neurons which do not accord with expected data appear in the fifth neuron set.
And S102, calculating the position of a fourth mother neuron corresponding to the target child neuron in the fourth neuron set according to the position of the target child neuron in the fifth neuron set, the kernel size of the pooling layer, and the width and height of the fourth neuron set, so as to acquire data of the fourth mother neuron.
S103, judging whether the setting parameters of the pooling layer are correct or not according to the data of the fourth mother neuron and the data of the target child neuron.
And S104, when the setting parameters of the pooling layer are incorrect, outputting first prompt information to enable a user to modify the setting parameters of the pooling layer so as to obtain a modified convolutional neural network model.
And S105, when the setting parameters of the pooling layer are correct, calculating the position of a third mother neuron corresponding to the fourth mother neuron in the third neuron set according to the position of the fourth mother neuron in the fourth neuron set and the width and height of the third neuron set, so as to acquire data of the third mother neuron.
And S106, judging whether the setting parameters of the activation layer are correct or not according to the data of the third mother neuron and the data of the fourth mother neuron.
And S107, when the setting parameters of the activation layer are incorrect, outputting second prompt information to enable a user to modify the setting parameters of the activation layer so as to obtain a modified convolutional neural network model.
And S108, when the setting parameters of the activation layer are correct, calculating the position of a second mother neuron corresponding to the third mother neuron in the second neuron set according to the position of the third mother neuron in the third neuron set and the width and height of the second neuron set, so as to acquire data of the second mother neuron.
And S109, judging whether the setting parameters of the BN layer are correct or not according to the data of the second mother neuron and the data of the third mother neuron.
S110, when the setting parameters of the BN layer are incorrect, outputting third prompt information to enable a user to modify the setting parameters of the BN layer so as to obtain a modified convolutional neural network model.
And S111, when the setting parameters of the BN layer are correct, calculating the position of a first mother neuron corresponding to a second mother neuron in the first neuron set according to the position of the second mother neuron in the second neuron set, the step value of the movement of a convolution kernel in the convolution layer and the width and height of the first neuron set so as to acquire the data of the first mother neuron.
S112, judging whether the setting parameters of the convolutional layer are correct or not according to the data of the second mother neuron and the data of the first mother neuron.
S113, when the setting parameters of the convolutional layer are incorrect, outputting fourth prompt information to enable a user to modify the setting parameters of the convolutional layer so as to obtain a modified convolutional neural network model.
The forward-derivation neuron generation diagram shown in fig. 2 depicts part of a convolutional neural network model: the first neuron set 1 generates the second neuron set 2 through the convolutional layer (CONV), the second neuron set 2 outputs the third neuron set 3 through the BN layer (BN), the third neuron set 3 generates the fourth neuron set 4 through the activation layer, and the fourth neuron set 4 generates the fifth neuron set 5 through the pooling layer (POOL). The neuron sets 1, 2, 3, 4 and 5 are composed of neurons a, b, c, d, e, f, g, h, i and so on, as shown in fig. 3.
According to the characteristics of the convolutional neural network and the requirement of backtracking the parent neuron in practical application, the method can be divided into single-layer backtracking and multi-layer backtracking. The single-layer backtracking comprises pooling layer backtracking, activation layer backtracking, BN layer backtracking and convolutional layer backtracking. The multi-layer backtracking comprises backtracking any multi-layer mother neuron by taking the current layer as a base point, wherein the multi-layer backtracking may comprise multi-layer superposition of pooling, activation, BN and convolution.
In step S101, by detecting data in the fifth neuron set5 finally output by the convolutional neural network model, if data that does not conform to expectation exists in the fifth neuron set5, the neuron that does not conform to the expectation data is taken as a target sub-neuron.
Next, backtracking needs to be performed layer by layer. Since the fifth neuron set 5 is generated from the fourth neuron set 4 through the pooling layer, the pooling layer is traced back first, and the positions of the fourth mother neurons corresponding to the target child neuron are found in the fourth neuron set 4. Finding the fourth mother neurons requires a calculation based on the position of the target child neuron in the fifth neuron set, the kernel size of the pooling layer, and the width and height of the fourth neuron set.
Referring to fig. 4, take for example a pooling layer whose pooling size (PoolSize) is 2x2 and whose step value is 2. To trace back the 'd' neuron in the fifth neuron set 5 (located in channel x at Set5C1R1), its mother neurons d1, d2, d3 and d4 in the fourth neuron set 4, at positions Set4C2R2, Set4C3R2, Set4C2R3 and Set4C3R3, are calculated according to the input size of the fourth neuron set 4 (width W4, height H4). The corresponding positions are calculated as follows:
the pooling layer positioning algorithm illustrated by equation 1 below exemplifies a certain neuron:
where Set5R1 represents a row 1 position of the fifth neuron Set5 of FIG. 4, Set5C1 represents a column 1 position of the fifth neuron Set5 of FIG. 4, PoolSize represents a kernel size of a pooling layer for pooling operations, W4 represents a width of the fourth neuron Set4 of FIG. 4, H4 represents a height of the fourth neuron Set4 of FIG. 4, x represents a position of a target child neuron in the fifth neuron Set5, P (d1) represents a position of a fourth parent neuron d1 of the fifth neuron Set5 corresponding to the target child neuron d of the fifth neuron Set5 in the fourth neuron Set4, P (d2) represents a position of a fourth parent neuron d2 of the fifth neuron Set5 corresponding to the target child neuron d in the fourth neuron Set4, P (d3) represents a position of a fourth parent neuron d3 of the target child neuron Set5 in the fourth neuron Set4, P (d3) represents a position of the fourth parent neuron d3 of the fifth neuron Set5 corresponding to the target child neuron d 4834, P (d 585) represents a position of the fourth parent neuron Set4 corresponding to the target child neuron Set5 d The position of set 4.
After the position of the fourth mother neuron is obtained, the data of the fourth mother neuron in the corresponding position can be obtained, and whether the setting parameters of the pooling layer are correct or not can be judged according to the data of the fourth mother neuron and the data of the target child neuron. The judgment method is as follows: according to the setting parameters of the pooling layer, after the data of the fourth mother neuron is input into the pooling layer, the output expected data can be calculated; then, comparing whether the expected data is matched with the data of the target sub-neuron, if so, setting parameters of the pooling layer are correct; if not, the setup parameters of the pooling layer are incorrect.
If the setting parameters of the pooling layer are incorrect, the erroneous layer has been found, and first prompt information is output. The first prompt information may include the data of the fourth mother neuron and the data of the target child neuron, so that the user can conveniently modify the setting parameters of the pooling layer according to them. The setting parameters of the pooling layer may include parameters in hardware and parameters in software, for example the algorithm, or the parameters used by the algorithm, may be reset; after the user modifies the setting parameters, the modified convolutional neural network model is obtained.
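Under the reading of equation 1 given above (2x2 pooling with step value 2, a row-major layout, and a per-channel plane offset, all of which are assumptions where the original text is garbled), the four mother positions and the pooling-layer check can be sketched as:

```python
def pooling_mothers(row, col, pool_size, step, width, height, channel):
    """Linear positions, in the mother neuron set, of the pooling window
    that produced the child neuron at (row, col)."""
    base_r, base_c = row * step, col * step   # top-left corner of the window
    plane = channel * width * height          # per-channel plane offset
    return [plane + (base_r + i) * width + (base_c + j)
            for i in range(pool_size) for j in range(pool_size)]

def pooling_params_correct(mother_values, child_value):
    """Max pooling assumed: the layer is judged correct when recomputing
    the expected output from the mother data reproduces the child data."""
    return max(mother_values) == child_value

# Child neuron at row 1, column 1 of set 5; mother set 4 is 6 wide, 6 high:
print(pooling_mothers(1, 1, 2, 2, 6, 6, 0))  # [14, 15, 20, 21]
```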
If the setting parameters of the pooling layer are correct, step S105 is performed to calculate a position of a third mother neuron corresponding to a fourth mother neuron in the third neuron set according to the position of the fourth mother neuron in the fourth neuron set, and the width and height of the third neuron set.
Referring to fig. 5, given the activation-layer input size (width W3, height H3), the fourth mother neuron d4 at the C3R3 position of the fourth neuron Set4 (channel x, Set4C3R3) can be traced back to the C3R3 position in the third neuron Set3. The corresponding position is calculated as follows:
Formula 2 below illustrates the activation layer localization algorithm, taking a certain neuron as an example:
P(d4)=Set4R3*W3+Set4C3+x*W3*H3;
where Set4R3 represents the row 3 position and Set4C3 the column 3 position of the fourth neuron Set4 in fig. 5, W3 represents the width of the third neuron Set3 in fig. 5, H3 represents the height of the third neuron Set3 in fig. 5, x represents the channel position in the fourth neuron Set4 of the fourth mother neuron d4 being traced back, and P(d4) represents the position in the third neuron Set3 of the third mother neuron corresponding to the fourth mother neuron d4.
After the position of the third mother neuron is obtained, the data of the third mother neuron at that position can be read, and whether the setting parameters of the activation layer are correct can be judged from the data of the third mother neuron and the data of the fourth mother neuron. The judgment proceeds as follows: according to the setting parameters of the activation layer, the expected output is calculated as if the data of the third mother neuron were input into the activation layer; the expected data is then compared with the data of the fourth mother neuron. If they match, the setting parameters of the activation layer are correct; if not, the setting parameters of the activation layer are incorrect.
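A corresponding sketch for the activation layer (hypothetical names; ReLU is assumed purely for illustration, since the text does not fix the activation function):

```python
# Formula 2 pattern: an activation layer is elementwise, so a neuron
# at (row, col) of channel x maps back to a single mother neuron in
# the flattened third neuron set of width w3 and height h3.
def activation_parent_position(row, col, x, w3, h3):
    return row * w3 + col + x * w3 * h3

# Check the activation-layer setting parameters against one
# mother/child pair, assuming ReLU is the configured activation.
def activation_params_correct(mother_value, child_value):
    return max(mother_value, 0.0) == child_value
```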
If the setting parameters of the activation layer are incorrect, the erroneous layer has been found, and second prompt information is output. The second prompt information may include the data of the third mother neuron and the data of the fourth mother neuron, so that the user can conveniently modify the setting parameters of the activation layer according to these data. The setting parameters of the activation layer may include hardware parameters and software parameters; for example, the algorithm or the parameters used in the algorithm may be reset. After the user modifies the setting parameters, the modified convolutional neural network model is obtained.
If the setting parameters of the activation layer are correct, step S108 is performed to calculate the position of the second mother neuron corresponding to the third mother neuron in the second neuron set according to the position of the third mother neuron in the third neuron set, and the width and height of the second neuron set.
Referring to fig. 6, given the BN-layer input size (width W2, height H2), the third mother neuron d4 at the C3R3 position of the third neuron Set3 (channel x, Set3C3R3) can be traced back to the C3R3 position in the second neuron Set2. The corresponding position is calculated as follows:
Formula 3 below illustrates the BN layer localization algorithm, taking a certain neuron as an example:
P(d4)=Set3R3*W2+Set3C3+x*W2*H2;
where Set3R3 represents the row 3 position and Set3C3 the column 3 position of the third neuron Set3 in fig. 6, W2 represents the width of the second neuron Set2 in fig. 6, H2 represents the height of the second neuron Set2 in fig. 6, x represents the channel position in the third neuron Set3 of the third mother neuron d4 being traced back, and P(d4) represents the position in the second neuron Set2 of the second mother neuron corresponding to the third mother neuron d4.
After the position of the second mother neuron is obtained, the data of the second mother neuron at that position can be read, and whether the setting parameters of the BN layer are correct can be judged from the data of the second mother neuron and the data of the third mother neuron. The judgment proceeds as follows: according to the setting parameters of the BN layer, the expected output is calculated as if the data of the second mother neuron were input into the BN layer; the expected data is then compared with the data of the third mother neuron. If they match, the setting parameters of the BN layer are correct; if not, the setting parameters of the BN layer are incorrect.
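The BN-layer check can be sketched the same way. The gamma/beta/mean/var parameters and the tolerance below are illustrative assumptions; the text only says the expected output is computed from the layer's setting parameters:

```python
import math

# Formula 3 pattern: the BN layer is also elementwise, so the index
# mapping has the same shape as for the activation layer.
def bn_parent_position(row, col, x, w2, h2):
    return row * w2 + col + x * w2 * h2

# Inference-time batch norm: y = gamma*(v - mean)/sqrt(var + eps) + beta.
# Compare the expected output with the third mother neuron's data.
def bn_params_correct(mother_value, child_value,
                      gamma, beta, mean, var, eps=1e-5, tol=1e-6):
    expected = gamma * (mother_value - mean) / math.sqrt(var + eps) + beta
    return abs(expected - child_value) <= tol
```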
If the setting parameters of the BN layer are incorrect, the erroneous layer has been found, and third prompt information is output. The third prompt information may include the data of the third mother neuron and the data of the second mother neuron, so that the user can conveniently modify the setting parameters of the BN layer according to these data. The setting parameters of the BN layer may include hardware parameters and software parameters; for example, the algorithm or the parameters used in the algorithm may be reset. After the user modifies the setting parameters, the modified convolutional neural network model is obtained.
If the setting parameters of the BN layer are correct, step S111 is performed to calculate a position of a first mother neuron in the first neuron set corresponding to the second mother neuron according to the position of the second mother neuron in the second neuron set, a step value of the convolution kernel movement in the convolution layer, and the width and height of the first neuron set.
Referring to fig. 7, taking a convolutional-layer kernel size of 3x3 and a step value (ConvStride) of 1 as an example, the neurons a and d in the second neuron set (positions in channel x, Set2C1R1) are traced back to find their corresponding mother neurons a1 to a9 and d1 to d9, respectively. The corresponding positions are calculated as follows:
Formula 4 below illustrates the convolutional layer localization algorithm, taking a certain neuron as an example:
[Formula 4 is reproduced only as an image (Figure BDA0002254595100000121) in the original publication.]
After the position of the first mother neuron is obtained, the data of the first mother neuron at that position can be read, and whether the setting parameters of the convolutional layer are correct can be judged from the data of the second mother neuron and the data of the first mother neuron. The judgment proceeds as follows: according to the setting parameters of the convolutional layer, the expected output is calculated as if the data of the first mother neuron were input into the convolutional layer; the expected data is then compared with the data of the second mother neuron. If they match, the setting parameters of the convolutional layer are correct; if not, the setting parameters of the convolutional layer are incorrect.
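Since formula 4 survives only as an image, the following is an assumed reconstruction of the convolutional-layer index pattern (no padding; names hypothetical), extending the pooling pattern to a KxK kernel moved with step value ConvStride:

```python
# Locate the k x k first mother neurons of a second-set neuron at
# (row r, col c), reading from input channel x_in of a flattened
# first neuron set of width w1 and height h1; stride = ConvStride.
def conv_parent_positions(r, c, x_in, k, stride, w1, h1):
    base = r * stride * w1 + c * stride + x_in * w1 * h1
    return [base + i * w1 + j for i in range(k) for j in range(k)]
```

For a 3x3 kernel with stride 1 on a 5x5 channel, the neuron at row 0, column 0 traces back to flat positions 0, 1, 2, 5, 6, 7, 10, 11 and 12.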
If the setting parameters of the convolutional layer are incorrect, the erroneous layer has been found, and fourth prompt information is output. The fourth prompt information may include the data of the first mother neuron and the data of the second mother neuron, so that the user can conveniently modify the setting parameters of the convolutional layer according to these data. The setting parameters of the convolutional layer may include hardware parameters and software parameters; for example, the algorithm or the parameters used in the algorithm may be reset. After the user modifies the setting parameters, the modified convolutional neural network model is obtained. If the setting parameters of the convolutional layer are correct, the process may return to step S101.
In step S104, step S107, step S110 or step S113, after the modified convolutional neural network model is obtained, verified target input data and target output data are acquired. The target input data is input into the modified convolutional neural network model, which then outputs the data to be verified. The target output data is compared with the output data to be verified: if they are consistent, the modified convolutional neural network model can be confirmed as a network model that meets expectations; if they are not consistent, the process may return to step S101.
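The post-repair verification described above amounts to a single comparison; a minimal sketch follows, in which the model is represented as an arbitrary callable (an assumption for illustration):

```python
# Feed verified target input into the modified model and compare the
# output data to be verified with the verified target output.
# Returning False corresponds to going back to step S101.
def model_meets_expectation(modified_model, target_input, target_output):
    output_to_verify = modified_model(target_input)
    return output_to_verify == target_output
```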
Next, multi-layer mother-neuron localization is illustrated by way of example with reference to figs. 8 and 9, building on the pooling-layer, activation-layer, BN-layer and convolutional-layer mother-neuron localization described above. As shown in figs. 8 and 9, the neuron x in the last neuron set output by the convolutional neural network is to be located. According to the localization algorithm, formula 1, of the pooling layer (POOL) above x, the position of x is substituted into formula 1, and the positions of the four mother neurons p1, p2, p3 and p4 are calculated.
Tracing back further, the mother neurons in the activation layer (ACTIVATE) corresponding to p1, p2, p3 and p4 are located next. The activation layer localization algorithm, formula 2, is applied to each of the four points p1, p2, p3 and p4 in turn, substituting each position for x in formula 2, which yields the four positions a1, a2, a3 and a4.
Continuing to trace back, the mother neurons in the BN layer corresponding to a1, a2, a3 and a4 are located by applying the BN layer localization algorithm, formula 3, to each of the four points a1, a2, a3 and a4, substituting each position for x in formula 3, which yields the four positions b1, b2, b3 and b4.
Finally, the mother neurons in the convolutional layer (CONV) corresponding to b1, b2, b3 and b4 are located by applying the convolutional layer localization algorithm, formula 4, to each of the four points, substituting each position for x in formula 4. For b1, the nine convolutional-layer mother-neuron positions c1, c2, c3, c5, c6, c7, c9, ca and cb are obtained. For b2, the nine positions c2, c3, c4, c6, c7, c8, ca, cb and cc are obtained. For b3, the nine positions c5, c6, c7, c9, ca, cb, cd, ce and cf are obtained. For b4, the nine positions c6, c7, c8, ca, cb, cc, ce, cf and cg are obtained.
In this way, the mother-neuron positions of x at any layer of the multi-layer stacked network can be determined: the mother neurons of x in the pooling layer one level up are p1, p2, p3 and p4; in the activation layer two levels up, a1, a2, a3 and a4; in the BN layer three levels up, b1, b2, b3 and b4; and in the convolutional layer four levels up, c1, c2, c3, c4, c5, c6, c7, c8, c9, ca, cb, cc, cd, ce, cf and cg.
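The multi-layer backtrace just walked through (x to p1-p4, to a1-a4, to b1-b4, to c1-cg) can be sketched as a chain of per-layer localization functions; the concrete formulas are passed in as callables here, since their parameters differ per layer, and the function names are illustrative:

```python
# Chain formulas 1-4: each stage applies its layer's localization
# formula to every position produced by the stage above. Duplicate
# convolutional-layer mother positions are merged, which is how the
# four 3x3 windows of b1..b4 collapse to the 16 points c1..cg.
def backtrace(x_pos, pool_fn, act_fn, bn_fn, conv_fn):
    p = pool_fn(x_pos)                    # pooling-layer mothers (4 points)
    a = [act_fn(q) for q in p]            # activation-layer mothers
    b = [bn_fn(q) for q in a]             # BN-layer mothers
    c = sorted({q for pt in b for q in conv_fn(pt)})  # merged conv mothers
    return p, a, b, c
```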
With this method, starting from the position of a target child neuron in the neuron set output by the convolutional neural network model and the data between the layers, the error can be traced back layer by layer to locate which layers of the model are at fault. The user can then modify the setting parameters of those layers and repair the convolutional neural network model. This solves the technical problem in the related art that, when a convolutional neural network model outputs erroneous data, it is difficult to determine which layer produced the error.
It should be noted that the method embodiment shown in fig. 1 is described as a series of acts or combinations for simplicity of description, but it should be understood by those skilled in the art that the present disclosure is not limited by the order of acts or steps described. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required in order to implement the disclosure.
Example two
In this embodiment, a specific model is used to explain how the present disclosure is applied to locate the mother neurons of multi-channel data. Referring to fig. 10, the convolutional neural network model shown there is composed of an input layer, a convolutional layer, a BN layer, an activation layer and a pooling layer. Each layer of the model produces outputs of different sizes according to its dimensions and operation rules; fig. 11 shows the operation scale of the convolutional neural network model and the size of the binary data at each layer.
The output of each layer of the convolutional neural network model constitutes a series of binary data, as shown in fig. 12. Expanded layer by layer, the binary data of fig. 12 appears as shown in figs. 13 and 14 (where the numeric label indicates the channel number of the neuron-set data channel in that layer, and the dashed curve indicates the continuity of the binary data).
As shown in fig. 15, the output of each layer is obtained by operating on one or more corresponding channels of the previous layer (INPUT -> CONV). Tracing the mother neurons backwards gives fig. 16, a schematic diagram of multi-channel data neuron tracing. That is, the data of any mother neuron can be found by applying formula 1, formula 2, formula 3 and formula 4 of the first embodiment to the binary data shown in fig. 12.
EXAMPLE III
Fig. 17 is an apparatus for data localization in a convolutional neural network model according to an exemplary embodiment of the present disclosure. The convolutional neural network model comprises a pooling layer and an activation layer, the activation layer outputs a fourth neuron set, the fourth neuron set is input into the pooling layer to enable the pooling layer to output a fifth neuron set, as shown in fig. 17, and the apparatus 300 for data positioning in the convolutional neural network model comprises:
a determining module 310 configured to determine that a target sub-neuron of the fifth set of neurons is present that does not fit the expected data;
a calculating module 320 configured to calculate the position of a fourth mother neuron corresponding to the target child neuron in the fourth neuron set according to the position of the target child neuron in the fifth neuron set, the kernel size of the pooling layer, and the width and height of the fourth neuron set;
an obtaining module 330, configured to obtain data of the fourth mother neuron according to a position of the fourth mother neuron;
a determining module 340 configured to determine whether the setting parameters of the pooling layer are correct according to the data of the fourth mother neuron and the data of the target child neuron;
an output module 350 configured to output first prompt information to enable a user to modify the setup parameters of the pooling layer to obtain a modified convolutional neural network model when the setup parameters of the pooling layer are incorrect.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Example four
The present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the method steps of any of the alternative embodiments described above.
For example, the data positioning method in the convolutional neural network model according to the present disclosure may be implemented by referring to a specific embodiment of the data positioning method in the convolutional neural network model, which is not described herein again.
The processor may be an integrated circuit chip having information processing capabilities. The Processor may be a general-purpose Processor including a Central Processing Unit (CPU), a Network Processor (NP), and the like.
EXAMPLE five
The present disclosure also provides a device for data localization in a convolutional neural network model, including:
a memory having a computer program stored thereon; and
a processor for executing the computer program in the memory to implement the method steps of any of the alternative embodiments described above.
FIG. 18 is a block diagram illustrating an apparatus 400 for data localization in a convolutional neural network model, according to an example embodiment. As shown in fig. 18, the apparatus 400 may include: a processor 401, a memory 402, a multimedia component 403, an input/output (I/O) interface 404, and a communication component 405.
The processor 401 is configured to control the overall operation of the apparatus 400, so as to complete all or part of the steps in the above-mentioned method for positioning data in the convolutional neural network model. The memory 402 is used to store various types of data to support operation of the apparatus 400, and such data may include, for example, instructions for any application or method operating on the apparatus 400, as well as application-related data. The Memory 402 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk. The multimedia components 403 may include a screen and an audio component. Wherein the screen may be, for example, a touch screen and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 402 or transmitted through the communication component 405. The audio assembly also includes at least one speaker for outputting audio signals. The I/O interface 404 provides an interface between the processor 401 and other interface modules, such as a keyboard, mouse, buttons, etc. These buttons may be virtual buttons or physical buttons. The communication component 405 is used for wired or wireless communication between the apparatus 400 and other devices. Wireless Communication, such as Wi-Fi, bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them, so that the corresponding Communication component 405 may include: Wi-Fi module, bluetooth module, NFC module.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described method of data localization in convolutional neural network models.
In another exemplary embodiment, a computer readable storage medium, such as a memory 402, comprising program instructions executable by a processor 401 of the apparatus 400 to perform the method of data localization in a convolutional neural network model described above is also provided.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (10)

1. A method of data localization in a convolutional neural network model, the convolutional neural network model comprising a pooling layer and an activation layer, the activation layer outputting a fourth set of neurons that is input to the pooling layer such that the pooling layer outputs a fifth set of neurons; the method comprises the following steps:
determining that a target sub-neuron of the fifth set of neurons is present that does not fit expected data;
calculating the position of a fourth mother neuron corresponding to the target child neuron in the fourth neuron set according to the position of the target child neuron in the fifth neuron set, the kernel size of the pooling layer, and the width and height of the fourth neuron set;
acquiring data of the fourth mother neuron according to the position of the fourth mother neuron;
judging whether the setting parameters of the pooling layer are correct or not according to the data of the fourth mother neuron and the data of the target child neuron;
when the setting parameters of the pooling layer are incorrect, outputting first prompt information to enable a user to modify the setting parameters of the pooling layer to obtain a modified convolutional neural network model.
2. The method of claim 1, wherein the convolutional neural network model further comprises a BN layer, the BN layer outputting a third set of neurons that are input to the activation layer to cause the activation layer to output the fourth set of neurons; the method further comprises the following steps:
when the setting parameters of the pooling layer are correct, calculating the position of a third mother neuron corresponding to the fourth mother neuron in the third neuron set according to the position of the fourth mother neuron in the fourth neuron set and the width and height of the third neuron set;
acquiring data of the third maternal neuron according to the position of the third maternal neuron;
judging whether the setting parameters of the activation layer are correct or not according to the data of the third maternal neuron and the data of the fourth maternal neuron;
and when the setting parameters of the activation layer are incorrect, outputting second prompt information to enable a user to modify the setting parameters of the activation layer to obtain a modified convolutional neural network model.
3. The method of claim 2, wherein the location of the third mother neuron is obtained according to the following equation:
P3(x)=Set4Ry*Wx+Set4Cy+P4(x)*Wx*Hx
wherein P3(x) denotes the position of the third mother neuron, P4(x) denotes the position of the fourth mother neuron in the fourth neuron set, Wx denotes the width of the third neuron set, Hx denotes the height of the third neuron set, Set4Ry denotes the row-y position of the fourth mother neuron in the fourth neuron set, and Set4Cy denotes the column-y position of the fourth mother neuron in the fourth neuron set.
4. The method of claim 2, wherein the convolutional neural network model further comprises a convolutional layer that outputs a second set of neurons that inputs to the BN layer to cause the BN layer to output the third set of neurons; the method further comprises the following steps:
when the setting parameters of the activation layer are correct, calculating the position of a second mother neuron corresponding to the third mother neuron in the second neuron set according to the position of the third mother neuron in the third neuron set and the width and height of the second neuron set;
acquiring data of the second maternal neuron according to the position of the second maternal neuron;
judging whether the setting parameters of the BN layer are correct or not according to the data of the second maternal neuron and the data of the third maternal neuron;
and when the setting parameters of the BN layer are incorrect, outputting third prompt information to enable a user to modify the setting parameters of the BN layer so as to obtain a modified convolutional neural network model.
5. The method of claim 4, wherein the location of the second mother neuron is obtained according to the following equation:
P2(x)=Set3Ry*Wx+Set3Cy+P3(x)*Wx*Hx
wherein P2(x) denotes the position of the second mother neuron, P3(x) denotes the position of the third mother neuron in the third neuron set, Wx denotes the width of the second neuron set, Hx denotes the height of the second neuron set, Set3Ry denotes the row-y position of the third mother neuron in the third neuron set, and Set3Cy denotes the column-y position of the third mother neuron in the third neuron set.
6. The method of claim 4, wherein the convolutional layer outputs the second set of neurons after inputting the first set of neurons; the method further comprises the following steps:
when the setting parameters of the BN layer are correct, calculating the position of a first mother neuron corresponding to the second mother neuron in the first neuron set according to the position of the second mother neuron in the second neuron set, the step value of the movement of the convolution kernel in the convolution layer and the width and height of the first neuron set;
acquiring data of the first mother neuron according to the position of the first mother neuron;
judging whether the setting parameters of the convolutional layer are correct or not according to the data of the second maternal neuron and the data of the first maternal neuron;
and when the setting parameters of the convolutional layer are incorrect, outputting fourth prompt information to enable a user to modify the setting parameters of the convolutional layer so as to obtain a modified convolutional neural network model.
7. The method of any one of claims 1 to 6, further comprising:
after the modified convolutional neural network model is obtained, verified target input data and target output data are obtained;
inputting the target input data into the modified convolutional neural network model to obtain output data to be verified, which is output by the modified convolutional neural network model;
and when the target output data is consistent with the output data to be verified, confirming that the modified convolutional neural network model is a network model meeting expectations.
8. An apparatus for data localization in a convolutional neural network model, the convolutional neural network model comprising a pooling layer and an activation layer, the activation layer outputting a fourth set of neurons, the fourth set of neurons inputting into the pooling layer such that the pooling layer outputs a fifth set of neurons; the device comprises:
a determination module configured to determine that a target sub-neuron of the fifth set of neurons is present that does not fit the expected data;
a calculation module configured to calculate the position of a fourth mother neuron corresponding to the target child neuron in the fourth neuron set according to the position of the target child neuron in the fifth neuron set, the kernel size of the pooling layer, and the width and height of the fourth neuron set;
the acquisition module is configured to acquire data of the fourth mother neuron according to the position of the fourth mother neuron;
a judging module configured to judge whether the setting parameters of the pooling layer are correct according to the data of the fourth mother neuron and the data of the target child neuron;
an output module configured to output first prompt information to enable a user to modify the setting parameters of the pooling layer to obtain a modified convolutional neural network model when the setting parameters of the pooling layer are incorrect.
9. An apparatus for data localization in a convolutional neural network model, comprising:
a memory having a computer program stored thereon; and
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
10. A storage medium on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN201911047979.7A 2019-10-30 2019-10-30 Method, device and storage medium for positioning data in convolutional neural network model Active CN110751272B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911047979.7A CN110751272B (en) 2019-10-30 2019-10-30 Method, device and storage medium for positioning data in convolutional neural network model

Publications (2)

Publication Number Publication Date
CN110751272A true CN110751272A (en) 2020-02-04
CN110751272B CN110751272B (en) 2021-02-23

Family

ID=69281320

Country Status (1)

Country Link
CN (1) CN110751272B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170083792A1 (en) * 2015-09-22 2017-03-23 Xerox Corporation Similarity-based detection of prominent objects using deep CNN pooling layers as features
CN108197699A (en) * 2018-01-05 2018-06-22 National University of Defense Technology Debugging module for convolutional neural network hardware accelerator
US20180267858A1 (en) * 2017-03-20 2018-09-20 Hewlett Packard Enterprise Development Lp Baseboard Management Controller To Deconfigure Field Replaceable Units According To Deep Learning Model
CN109086889A (en) * 2018-09-30 2018-12-25 Guangdong Power Grid Co Ltd Neural-network-based terminal fault diagnosis method, device and system
CN109189671A (en) * 2018-08-10 2019-01-11 Pax Computer Technology (Shenzhen) Co Ltd Layer-by-layer squeeze-type variable positioning method, system and terminal device
US20190064389A1 (en) * 2017-08-25 2019-02-28 Huseyin Denli Geophysical Inversion with Convolutional Neural Networks
CN109557460A (en) * 2019-02-18 2019-04-02 DeepBlue AI Chips Research Institute (Jiangsu) Co Ltd FPGA-based test method and device for a convolutional neural network algorithm
CN109636786A (en) * 2018-12-11 2019-04-16 Hangzhou Canaan Creative Information Technology Co Ltd Verification method and device for an image recognition module
CN109739703A (en) * 2018-12-28 2019-05-10 Beijing Zhongke Cambricon Technology Co Ltd Debugging method and related product
CN109859204A (en) * 2019-02-22 2019-06-07 Xiamen Meitu Technology Co Ltd Convolutional neural network model verification method and device
CN109858621A (en) * 2019-01-09 2019-06-07 DeepBlue Technology (Shanghai) Co Ltd Debugging device, method and storage medium for a convolutional neural network accelerator
CN109947898A (en) * 2018-11-09 2019-06-28 The 28th Research Institute of China Electronics Technology Group Corporation Intelligence-based equipment fault testing method
CN109948788A (en) * 2019-03-07 2019-06-28 Tsinghua University FPGA-based neural network accelerator
CN110083532A (en) * 2019-04-12 2019-08-02 Beijing Zhongke Cambricon Technology Co Ltd Method and device for locating runtime errors under fusion mode based on a deep learning framework
CN110377472A (en) * 2019-07-25 2019-10-25 Beijing Vimicro Electronics Co Ltd Method and device for locating chip runtime errors
CN110515754A (en) * 2018-05-22 2019-11-29 Shenzhen Intellifusion Technologies Co Ltd Debugging system and method for a neural network processor
Also Published As

Publication number Publication date
CN110751272B (en) 2021-02-23

Similar Documents

Publication Publication Date Title
CN112040858B (en) Systems and methods for identifying biological structures associated with neuromuscular source signals
EP3493120A1 (en) Training a neural network model
US11657265B2 (en) Training first and second neural network models
KR20210040301A (en) Image questioning and answering method, apparatus, device, storage medium, and computer program
CN111344800A (en) Training model
CN106294533A Distributed workflow using database replication
US20190179734A1 (en) User assisted automated test case generation
US11514315B2 (en) Deep neural network training method and apparatus, and computer device
US11335025B2 (en) Method and device for joint point detection
CN112528634A (en) Text error correction model training and recognition method, device, equipment and storage medium
US20210390731A1 (en) Method and apparatus for positioning key point, device, and storage medium
JP2021108191A (en) Calibration method, device, system, and storage medium for external parameter of on-vehicle camera
WO2017132545A1 (en) Systems and methods for generative learning
US20180268097A1 (en) Interactive Routing of Connections in Circuit Using Auto Welding and Auto Cloning
CN115526641A (en) Flexible board product production quality tracing method, system, device and storage medium
CN110751272B (en) Method, device and storage medium for positioning data in convolutional neural network model
CN109492540B (en) Face exchange method and device in image and electronic equipment
CN111507219A (en) Action recognition method and device, electronic equipment and storage medium
CN113935277A (en) System, method and non-transitory computer readable medium for design rule checking
CN111859933A Training method, recognition method, device and equipment for a Malay recognition model
CN114399628B Efficient insulator detection system for complex spatial environments
CN116052176A Text extraction method based on cascade multitask learning
CN114996076A Traversal-based test case verification method and system for chip simulation, and electronic device
CN106802970B (en) Printed circuit board layout method and system
CN115510782B (en) Method for locating verification errors, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant