CN110674934B - Neural network pooling layer and operation method thereof


Info

Publication number
CN110674934B
CN110674934B
Authority
CN
China
Prior art keywords
characteristic data
pooling
padding
memory
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910792663.4A
Other languages
Chinese (zh)
Other versions
CN110674934A (en)
Inventor
陈小柏
赖青松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201910792663.4A
Publication of CN110674934A
Application granted
Publication of CN110674934B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14 Handling requests for interconnection or transfer
    • G06F13/20 Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28 Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a neural network pooling layer and an operation method thereof. The pooling layer comprises a direct memory access (DMA) module for reading feature data from a memory (DDR3 in the described embodiment); a pooling operation module for performing pooling operations on the feature data read from the memory and returning the operation result to the DMA; and a controller module for controlling the DMA to transfer the feature data from the memory to the pooling operation module and for monitoring the state of the pooling operation module. The invention reduces the operation amount, improves the operation speed, and keeps the read control simple.

Description

Neural network pooling layer and operation method thereof
Technical Field
The invention relates to the technical field of integrated circuits, in particular to a neural network pooling layer and an operation method thereof.
Background
Convolutional Neural Networks (CNNs) are an important class of deep learning algorithms with very wide application in computer vision, especially image recognition. At present, convolutional neural networks are the method of first choice for almost all recognition and detection problems, and major IT companies worldwide are pursuing related research.
From a computer's perspective, an image is simply a two-dimensional matrix; a convolutional neural network extracts features from such a matrix through convolution, pooling and other operations, and recognizes the image from them. In principle, any data that can be converted into a two-dimensional matrix can be recognized and detected with a convolutional neural network. For example, a sound file can be cut into short segments and the pitch level of each segment converted into numbers, turning the whole file into a two-dimensional matrix on which recognition and detection can be carried out with a convolutional neural network. Text features in natural language, chemical features in medical experiments, and the like can be handled similarly.
Convolution extracts local features of an image, and pooling compresses the extracted local features. Pooling, also called downsampling, divides the feature matrix into several small blocks and selects one value from each sub-matrix to replace it, compressing the feature matrix so as to simplify subsequent computation. There are two pooling modes: Max Pooling, which takes the maximum of each sub-matrix, and Average Pooling, which takes the average; the side length of the sub-matrix is denoted size. Fig. 1 shows Max Pooling and Average Pooling for size = 2, i.e. 2x2 sub-matrices.
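For illustration only (this sketch is not part of the patent), the two pooling modes can be written in a few lines of Python, assuming both matrix dimensions are exact multiples of size:

```python
import numpy as np

def pool2d(feature, size=2, mode="max"):
    """Divide a 2D feature matrix into size x size sub-matrices and
    replace each sub-matrix by its maximum or its average.
    Assumes both dimensions are exact multiples of size."""
    h, w = feature.shape
    blocks = feature.reshape(h // size, size, w // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))      # Max Pooling
    return blocks.mean(axis=(1, 3))         # Average Pooling

x = np.array([[1, 3, 2, 9],
              [5, 6, 1, 1],
              [7, 2, 0, 4],
              [8, 4, 3, 5]])
print(pool2d(x, 2, "max"))   # [[6 9] [8 5]]
print(pool2d(x, 2, "avg"))   # [[3.75 3.25] [5.25 3.  ]]
```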
Feature data are usually stored in an external DDR3 memory. Conventionally, pooling reads the features out of DDR3 channel by channel; for size = 3, nine feature values must be fetched from DDR3 for every Max or Average operation. This approach is slow, the DDR3 read control is complex, and the operation amount is large.
Chinese patent application No. 201810284999.5 discloses a method and circuit for accelerating the operation of a neural network pooling layer that can reduce the operation amount. However, it proposes neither a multi-channel processing method nor a padding processing method, which limits its practical applicability, and it requires partitioning along the width (W) direction, which makes the DDR3 control complicated.
Disclosure of Invention
The invention provides a neural network pooling layer and an operation method thereof, which reduce the operation amount, improve the operation speed and simplify the read control.
To achieve the above purpose, the invention adopts the following technical scheme: the neural network pooling layer comprises a direct memory access module (DMA) for reading feature data from a memory;
a pooling operation module for performing the pooling operation on the feature data read from the memory and returning the operation result to the DMA;
and a controller module for controlling the DMA to transfer the feature data from the memory to the pooling operation module for operation processing and for monitoring the state of the pooling operation module.
Because the DMA inside the pooling layer reads the feature data from the memory directly, controlling the pooling layer's reads of the feature data in the memory is simpler.
Preferably, the DMA and the memory communicate over an AXI interface.
Based on this neural network pooling layer, the invention further provides an operation method of the neural network pooling layer: in each clock cycle the DMA reads N feature values and transmits them to the pooling operation module; the pooling operation module first performs width-direction pooling on the feature data, with N width-direction operation units operating on N channels in parallel;
then performs height-direction pooling on the feature data, with N height-direction operation units operating on N channels in parallel;
the pooling operation is then complete, and the operation result is finally written into the memory through the DMA.
By performing the width-direction operation first and the height-direction operation second, the invention reduces the operation amount, while the multi-channel parallel operation accelerates the algorithm and improves the operation speed.
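As an illustration of this decomposition, the following minimal Python sketch (not the hardware implementation) shows that width-direction pooling followed by height-direction pooling reproduces direct two-dimensional max pooling, while each stage only ever handles size values at a time:

```python
import numpy as np

def pool_width(feature, size):
    # One output point per size consecutive values in each row.
    h, w = feature.shape
    return feature.reshape(h, w // size, size).max(axis=2)

def pool_height(feature, size):
    # One output row per size consecutive width-pooled rows; in hardware
    # this stage works row by row with a FIFO holding partial results.
    h, w = feature.shape
    return feature.reshape(h // size, size, w).max(axis=1)

x = np.arange(36).reshape(6, 6)
separable = pool_height(pool_width(x, 3), 3)
direct = x.reshape(2, 3, 2, 3).max(axis=(1, 3))   # direct 2D pooling
assert (separable == direct).all()
```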
Further, the controller module transmits to the DMA the start address and byte count of the feature data in the memory; the DMA reads the corresponding number of bytes of feature data from the memory starting at that address and passes them to the pooling operation module.
Still further, the feature data are stored in the memory in an N-channel arrangement. The feature data form a three-dimensional matrix of width Wi, height Hi and channel count C, arranged in groups of N channels, with the feature data of every N channels stored at consecutive addresses in the memory; the sum of all the N-channel groups equals C.
Still further, N is a power of 2.
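For illustration, a hypothetical address calculation under this layout; the function name, the interleaving of the N channel values of one position, and the 16-bit element width are assumptions (the patent specifies only that each group of N channels occupies consecutive addresses):

```python
def feature_address(base, w, h, c, Wi, Hi, N, bytes_per_elem=2):
    """Byte address of feature point (w, h, c): channels are split into
    groups of N; each group occupies one contiguous block, and within a
    group the N channel values of a given (w, h) position are assumed to
    sit at consecutive addresses, so the DMA can fetch N values per cycle."""
    group, lane = divmod(c, N)
    group_bytes = Wi * Hi * N * bytes_per_elem
    offset_in_group = ((h * Wi + w) * N + lane) * bytes_per_elem
    return base + group * group_bytes + offset_in_group

# e.g. channel 5 at (w=3, h=0) of a 224x224x64 tensor with N = 16:
print(hex(feature_address(0x8000_0000, 3, 0, 5, 224, 224, 16)))  # 0x8000006a
```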
Still further, the width-direction feature data are processed in units of one point.
Still further, before the width-direction pooling operation, the relation Wi = Wo × size is used to determine whether the feature data need filling: when Wi is not an integer multiple of size, a zero-padding operation is required. Here Wi is the width of the input feature data, Wo is the width of the output feature data, and size is the sub-matrix size, a positive integer.
The zero-padding operation fills zero data around the feature; the number of zeros filled is padding, a positive integer, chosen such that Wi + 2 × padding = Wo × size. When zero padding is required, the pooling operation computes the 1st or the last output of each row in the width direction from only (size - padding) input data.
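A sketch of one reading of this padding rule (the helper name is hypothetical; following the worked examples of figs. 4 to 6, padding is taken here as the total number of zeros added, split between the two ends when it equals 2):

```python
def width_padding(Wi, size):
    """Return (Wo, padding) with padding the total number of zeros needed
    so that Wi + padding = Wo * size; padding == 0 means Wi is already an
    integer multiple of size and no zero-padding operation is required."""
    Wo = -(-Wi // size)            # ceil(Wi / size)
    padding = Wo * size - Wi
    return Wo, padding

print(width_padding(6, 3))   # (2, 0): no padding (fig. 4)
print(width_padding(8, 3))   # (3, 1): 1 zero on the right (fig. 5)
print(width_padding(7, 3))   # (3, 2): 1 zero at each end (fig. 6)
```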
Still further, the height-direction feature data are processed in units of one row, and one FIFO caches one row of feature data for operation with the next row.
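A behavioural sketch of this row-at-a-time scheme, with a Python deque standing in for the hardware FIFO (an illustration assuming max pooling, not the circuit itself):

```python
from collections import deque

def pool_height_stream(rows, size):
    """Consume width-pooled rows one at a time; the FIFO holds the running
    per-column maximum, so only one row of storage is ever needed."""
    fifo = deque()
    out = []
    for i, row in enumerate(rows):
        if i % size == 0:
            fifo = deque(row)                # first row of a new window
        else:                                # merge with the cached row
            fifo = deque(max(a, b) for a, b in zip(fifo, row))
        if i % size == size - 1:             # window complete: emit one row
            out.append(list(fifo))
    return out

rows = [[1, 5], [7, 2], [3, 8],              # max over rows 0-2
        [4, 4], [9, 0], [2, 6]]              # max over rows 3-5
print(pool_height_stream(rows, 3))           # [[7, 8], [9, 6]]
```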
Still further, before the height-direction pooling operation, the relation Hi = Ho × size is used to determine whether the feature data need filling: when Hi is not an integer multiple of size, a zero-padding operation is required. Here Hi is the height of the input feature data, Ho is the height of the output feature data, and size is the sub-matrix size, a positive integer.
The zero-padding operation fills zero data around the feature; the number of zeros filled is padding, a positive integer, chosen such that Hi + 2 × padding = Ho × size. When zero padding is required, the pooling operation computes the 1st or the last output of each column in the height direction from only (size - padding) input data.
The beneficial effects of the invention are as follows:
1. The invention reads the memory directly with the DMA to acquire the feature data; compared with the traditional approach of having a central processing unit (CPU) control the DMA, the control and read scheme is simpler, faster and more efficient.
2. The invention accelerates the algorithm with multi-channel parallel operation: it first performs width-direction pooling on the feature data and then height-direction pooling, reducing the operation amount. Meanwhile, the feature data stored in the memory need no partitioning and are simply read out once from beginning to end, which effectively increases the operation speed.
Drawings
Fig. 1 is a schematic diagram of maximum pooling and average pooling.
Fig. 2 is a schematic structural diagram of the neural network pooling layer of this embodiment.
Fig. 3 is a schematic diagram of the feature data of this embodiment.
Fig. 4 is a schematic diagram of the width direction with no zero padding in this embodiment.
Fig. 5 is a schematic diagram of the width direction padded with 1 zero in this embodiment.
Fig. 6 is a schematic diagram of the width direction padded with 2 zeros in this embodiment.
Fig. 7 is a schematic diagram of the width-direction calculation result of this embodiment.
Fig. 8 is a schematic diagram of the height direction with no zero padding in this embodiment.
Fig. 9 is a schematic diagram of the height direction padded with 1 zero in this embodiment.
Fig. 10 is a schematic diagram of the height direction padded with 2 zeros in this embodiment.
Fig. 11 is a schematic diagram of the height-direction calculation result of this embodiment.
Detailed Description
The invention is described in detail below with reference to the drawings and specific embodiments.
Example 1
As shown in fig. 2, a neural network pooling layer comprises a direct memory access module (DMA) for reading feature data from a memory; the memory in this embodiment is DDR3;
a pooling operation module for performing the pooling operation on the feature data read from the memory and returning the operation result to the DMA;
and a controller module for controlling the DMA to transfer the feature data from the memory to the pooling operation module for operation processing and for monitoring the state of the pooling operation module.
With this arrangement, controlling the pooling layer to read the feature data from the memory is simpler.
The DMA and the memory in this embodiment communicate over an AXI interface. The controller module controls the DMA to carry the feature data from the DDR3 to the pooling operation module. The control is very simple: only the start address and byte count of the feature data in the DDR3 need to be passed to the DMA, after which the DMA reads the corresponding number of bytes from the DDR3 starting at that address; no segmentation operation is needed.
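For example, the two parameters handed to the DMA might be computed as follows (a sketch; the function name, the base address and the 16-bit feature width are assumptions for illustration):

```python
def dma_descriptor(base_addr, Wi, Hi, C, bytes_per_elem=2):
    """Start address and byte count for reading the whole feature tensor
    in one linear transfer: no tiling or segmentation is needed."""
    byte_count = Wi * Hi * C * bytes_per_elem
    return base_addr, byte_count

addr, nbytes = dma_descriptor(0x8000_0000, Wi=56, Hi=56, C=64)
print(hex(addr), nbytes)   # 0x80000000 401408
```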
Based on the neural network pooling layer, this embodiment further provides an operation method of the neural network pooling layer: in each clock cycle the DMA reads N feature values and transmits them to the pooling operation module; the pooling operation module first performs width-direction pooling on the feature data, with N width-direction operation units operating on N channels in parallel;
then performs height-direction pooling on the feature data, with N height-direction operation units operating on N channels in parallel;
the pooling operation is then complete, and the operation result is finally written into the memory through the DMA.
By performing the width-direction operation first and then the height-direction operation, this embodiment reduces the operation amount, while the multi-channel parallel operation accelerates the algorithm and improves the operation speed.
The controller module in this embodiment transmits the start address and byte count of the feature data in the memory to the DMA, and the DMA reads the corresponding number of bytes of feature data from the memory starting at that address and passes them to the pooling operation module.
The Feature data are stored in the DDR3 in an N-channel arrangement. This layout is determined by the neural network as a whole: the preceding convolution operation already produces it, so the pooling layer needs no extra time to rearrange the data. As shown in fig. 3, the total number of Feature channels is C, arranged in groups of N channels, and the Feature data of every N channels are stored at consecutive addresses in the DDR3; the sum of all the N-channel groups equals C. N is typically a power of 2, e.g. 2, 4, 8, 16 or 32. The N-channel arrangement has two benefits. First, DDR3 reads and writes are burst transfers and must be byte-aligned, typically to 8, 16 or 32 bytes; the data volume of a single Feature is sometimes not byte-aligned, but the combined data of N channels can always be made byte-aligned. Second, the pooling layer can operate on N feature values in parallel, which helps accelerate the algorithm.
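A small numerical illustration of the alignment argument, assuming 8-bit feature values and a 16-byte burst alignment:

```python
def is_burst_aligned(nbytes, burst=16):
    return nbytes % burst == 0

Wi, Hi = 7, 7                    # one 7x7 feature map, 1 byte per value
single = Wi * Hi                 # 49 bytes: not burst-aligned on its own
group = Wi * Hi * 16             # 16 channels together: 784 bytes
print(is_burst_aligned(single), is_burst_aligned(group))   # False True
```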
In this embodiment the DDR3 memory stores the Feature data as a three-dimensional matrix of width Wi, height Hi and channel count C, arranged in groups of N channels: the Features of the 1st group of N channels are stored in a first stretch of consecutive addresses, the Features of the 2nd group of N channels in a second stretch of consecutive addresses, and so on.
The pooling operation in this embodiment divides the feature data matrix into several small blocks, each a sub-matrix, and selects one value from each sub-matrix to replace it, compressing the feature matrix and simplifying subsequent computation. There are two pooling modes: Max Pooling, which takes the maximum of each sub-matrix, and Average Pooling, which takes the average; the side length of the sub-matrix is size.
The width-direction feature data in this embodiment are processed in units of one point.
Before the width-direction pooling operation on the feature data, this embodiment uses the relation Wi = Wo × size to determine that the feature data need to be filled, i.e. a zero-padding operation, whenever Wi is not an integer multiple of size; here Wi is the width of the input feature data, Wo the width of the output feature data, and size the sub-matrix size, a positive integer.
The zero-padding operation fills zero data around the feature; the number of zeros filled is padding, a positive integer, chosen such that Wi + 2 × padding = Wo × size. When zero padding is required, the pooling operation computes the 1st or the last output of each row in the width direction from only (size - padding) input data.
This embodiment takes size = 3 and max pooling as an example, covering three cases: no zero padding, 1 zero padded and 2 zeros padded, as shown in figs. 4, 5 and 6. In the figures, comp_start marks the start of a pooling operation on the data and comp_end marks its end; the operation result is available one clock cycle after comp_end. With no padding, the extra data at the far right are simply discarded. With 1 zero padded, the zero is added only on the right, and no zero is actually inserted: the rightmost 2 data simply undergo one pooling operation. With 2 zeros padded, likewise, the leftmost pooling operation uses the first 2 data and the rightmost one uses the remaining 2 data, achieving the effect of zero padding. With the operation method of this embodiment, the result shown in fig. 7 is obtained, where Wo is the width of the output Feature.
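A worked sketch of the three width-direction cases for size = 3 and max pooling (the function name is illustrative; as described above, the edge windows use fewer real inputs instead of materialising zeros):

```python
def pool_row_max(row, size, padding):
    """Max-pool one row with padding in {0, 1, 2}, as in figs. 4 to 6.
    Edge outputs use fewer inputs rather than inserting real zeros."""
    out, i, n = [], 0, len(row)
    if padding == 2:                       # leftmost output: first 2 inputs
        out.append(max(row[:size - 1]))
        i = size - 1
    while i + size <= n:                   # full windows of `size` inputs
        out.append(max(row[i:i + size]))
        i += size
    if padding >= 1 and i < n:             # rightmost output: leftover inputs
        out.append(max(row[i:]))
    return out                             # padding == 0: leftovers discarded

print(pool_row_max([4, 1, 7, 2, 9, 3, 5, 8], 3, 0))   # [7, 9]    (5, 8 dropped)
print(pool_row_max([4, 1, 7, 2, 9, 3, 5, 8], 3, 1))   # [7, 9, 8] (Wo = 3)
print(pool_row_max([4, 1, 7, 2, 9, 3, 5],    3, 2))   # [4, 9, 5] (Wo = 3)
```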
The height-direction feature data in this embodiment are processed in units of one row, and one FIFO caches one row of feature data for operation with the next row.
Before the height-direction pooling operation on the feature data, this embodiment uses the relation Hi = Ho × size to determine that the feature data need to be filled, i.e. a zero-padding operation, whenever Hi is not an integer multiple of size; here Hi is the height of the input feature data, Ho the height of the output feature data, and size the sub-matrix size, a positive integer.
The zero-padding operation fills zero data around the feature; the number of zeros filled is padding, a positive integer, chosen such that Hi + 2 × padding = Ho × size. When zero padding is required, the pooling operation computes the 1st or the last output of each column in the height direction from only (size - padding) input data.
This embodiment again takes size = 3 and max pooling as an example, covering three cases: no zero padding, 1 zero padded and 2 zeros padded, as shown in figs. 8, 9 and 10. In the figures, comp_start marks the start of the pooling operation on a row of data, comp_end marks its end, fifo_wr is the write signal of the FIFO, and one row of operation results is available one clock cycle after comp_end. With no padding, the extra rows at the bottom are simply discarded. With 1 zero padded, the zero row is added only at the bottom, and no zero is actually inserted: the remaining 2 rows at the bottom simply undergo one pooling operation. With 2 zeros padded, likewise, the topmost pooling operation uses the first 2 rows and the bottommost one uses the remaining 2 rows, achieving the effect of zero padding. The calculation yields the result shown in fig. 11, where Ho is the height of the output Feature.
After the height-direction operation, the whole pooling operation is finished; the result is written into the DDR3 through the DMA, and once the write completes, a completion message is sent to the central processing unit (CPU) through an interrupt.
It should be understood that the above examples of the present invention are provided by way of illustration only and do not limit the embodiments of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the invention is intended to be covered by the following claims.

Claims (8)

1. A neural network pooling layer operation system, characterized in that it comprises: a direct memory access module (DMA) for reading feature data from a memory;
a pooling operation module for performing the pooling operation on the feature data read from the memory and returning the operation result to the DMA;
and a controller module for controlling the DMA to transfer the feature data from the memory to the pooling operation module for operation processing and for monitoring the state of the pooling operation module;
the operation method of the neural network pooling layer comprising the following steps: in each clock cycle the DMA reads N feature values and transmits them to the pooling operation module; the pooling operation module first performs width-direction pooling on the feature data, with N width-direction operation units operating on N channels in parallel;
then performs height-direction pooling on the feature data, with N height-direction operation units operating on N channels in parallel;
the pooling operation then being complete, the operation result is finally written into the memory through the DMA;
before the width-direction pooling operation on the feature data, the relation Wi = Wo × size is used to determine that the feature data need to be filled, i.e. a zero-padding operation, when Wi is not an integer multiple of size; wherein Wi represents the width of the input feature data, Wo represents the width of the output feature data, and size represents the size of the sub-matrix and is a positive integer;
the zero-padding operation fills zero data around the feature, the number filled being padding, a positive integer, such that Wi + 2 × padding = Wo × size; when zero padding is needed, the pooling operation computes the 1st or the last output of each row in the width direction with only (size - padding) input data.
2. The neural network pooling layer operation system according to claim 1, characterized in that: the DMA and the memory communicate over an AXI interface.
3. The neural network pooling layer operation system according to claim 1, characterized in that: the controller module transmits to the DMA the start address and byte count of the feature data in the memory, and the DMA reads the corresponding number of bytes of feature data from the memory starting at that address and transmits them to the pooling operation module.
4. The neural network pooling layer operation system according to claim 3, characterized in that: the feature data are stored in the memory in an N-channel arrangement; the feature data form a three-dimensional matrix of width Wi, height Hi and channel count C, arranged in groups of N channels, with the feature data of every N channels stored at consecutive addresses in the memory; the sum of all the N-channel groups equals C.
5. The neural network pooling layer operation system according to claim 3, characterized in that: N is a power of 2.
6. The neural network pooling layer operation system according to claim 3, characterized in that: the width-direction feature data are processed in units of one point.
7. The neural network pooling layer operation system according to claim 3, characterized in that: the height-direction feature data are processed in units of one row, and one FIFO caches one row of feature data for operation with the next row.
8. The neural network pooling layer operation system according to claim 6, characterized in that: before the height-direction pooling operation on the feature data, the relation Hi = Ho × size is used to determine that the feature data need to be filled, i.e. a zero-padding operation, when Hi is not an integer multiple of size; wherein Hi represents the height of the input feature data, Ho represents the height of the output feature data, and size represents the size of the sub-matrix and is a positive integer;
the zero-padding operation fills zero data around the feature, the number filled being padding, a positive integer, such that Hi + 2 × padding = Ho × size; when zero padding is needed, the pooling operation computes the 1st or the last output of each column in the height direction with only (size - padding) input data.
CN201910792663.4A 2019-08-26 2019-08-26 Neural network pooling layer and operation method thereof Active CN110674934B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910792663.4A CN110674934B (en) 2019-08-26 2019-08-26 Neural network pooling layer and operation method thereof


Publications (2)

Publication Number Publication Date
CN110674934A CN110674934A (en) 2020-01-10
CN110674934B (en) 2023-05-09

Family

ID=69075719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910792663.4A Active CN110674934B (en) 2019-08-26 2019-08-26 Neural network pooling layer and operation method thereof

Country Status (1)

Country Link
CN (1) CN110674934B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114372012B (en) * 2021-12-21 2024-02-20 中国科学院深圳先进技术研究院 Universal and configurable high-energy-efficiency pooling calculation single-row output system and method
CN115935888A (en) * 2022-12-05 2023-04-07 北京航天自动控制研究所 Neural network accelerating system


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704923A (en) * 2017-10-19 2018-02-16 珠海格力电器股份有限公司 Convolutional neural networks computing circuit
CN109034373A (en) * 2018-07-02 2018-12-18 鼎视智慧(北京)科技有限公司 The parallel processor and processing method of convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
aLIGO CDS Real-time Sequencer Software; LIGO Laboratory; LASER INTERFEROMETER GRAVITATIONAL WAVE OBSERVATORY; 2010-12-31; pp. 1-21 *

Also Published As

Publication number Publication date
CN110674934A (en) 2020-01-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant