TW201633181A - Event-driven temporal convolution for asynchronous pulse-modulated sampled signals - Google Patents


Info

Publication number
TW201633181A
TW201633181A TW104128169A
Authority
TW
Taiwan
Prior art keywords
event
output
convolution
driven
calculating
Prior art date
Application number
TW104128169A
Other languages
Chinese (zh)
Inventor
汪新
庸永春
拉斯妥濟馬努
Original Assignee
Qualcomm Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Incorporated
Publication of TW201633181A publication Critical patent/TW201633181A/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/15Correlation function computation including computation of convolution operations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Image Analysis (AREA)

Abstract

A method of processing asynchronous event-driven input samples of a continuous time signal includes calculating a convolutional output directly from the event-driven input samples. The convolutional output is based on an asynchronous pulse-modulated (APM) encoding pulse. The method further includes interpolating output between events.

Description

Event-Driven Temporal Convolution for Asynchronous Pulse-Modulated Sampled Signals

CROSS-REFERENCE TO RELATED APPLICATIONS

This patent application claims the benefit of U.S. Provisional Patent Application No. 62/046,757, filed on September 5, 2014 and entitled "EVENT-DRIVEN TEMPORAL CONVOLUTION FOR ASYNCHRONOUS PULSE-MODULATED SAMPLED SIGNALS," the disclosure of which is expressly incorporated herein by reference in its entirety.

Certain aspects of the present disclosure generally relate to machine learning and, more particularly, to systems and methods for improving event-driven temporal convolution of asynchronous pulse-modulated sampled signals in neural networks.

An artificial neural network, which may comprise an interconnected group of artificial neurons (e.g., neuron models), is a computational device or represents a method to be performed by a computational device.

Convolutional neural networks are a type of feed-forward artificial neural network. Convolutional neural networks may include collections of neurons in which each neuron has a receptive field and the neurons collectively tile an input space. Convolutional neural networks (CNNs) have numerous applications. In particular, CNNs have been widely used in the areas of pattern recognition and classification.

Deep learning architectures, such as deep belief networks and deep convolutional networks, are layered neural network architectures in which the output of a first layer of neurons becomes the input to a second layer of neurons, the output of the second layer of neurons becomes the input to a third layer of neurons, and so on. Deep neural networks may be trained to recognize a hierarchy of features, and so they have increasingly been used in object recognition applications. Like convolutional neural networks, computation in these deep learning architectures may be distributed over a population of processing nodes, which may be configured in one or more computational chains. These multi-layered architectures may be trained one layer at a time and may be fine-tuned using back propagation.

Other models are also available for object recognition. For example, support vector machines (SVMs) are learning tools that can be applied to classification. Support vector machines include a separating hyperplane (e.g., a decision boundary) that categorizes data. The hyperplane is defined by supervised learning. A desired hyperplane increases the margin of the training data. In other words, the hyperplane should have the greatest minimum distance to the training examples.

Although these solutions achieve excellent results on a number of classification benchmarks, their computational complexity can be prohibitively high. Additionally, training of the models may be challenging.

In one aspect of the present disclosure, a method of processing asynchronous event-driven input samples of a continuous time signal is presented. The method includes calculating a convolutional output directly from the event-driven input samples. The convolutional output is based on an asynchronous pulse-modulated (APM) encoding pulse.

In another aspect of the present disclosure, an apparatus for processing asynchronous event-driven input samples of a continuous time signal is presented. The apparatus includes a memory and at least one processor coupled to the memory. The one or more processors are configured to calculate a convolutional output directly from the event-driven input samples.

In yet another aspect of the present disclosure, an apparatus for processing asynchronous event-driven input samples of a continuous time signal is presented. The apparatus includes means for calculating a convolutional output directly from the event-driven input samples. The apparatus further includes means for interpolating output between events.

According to a further aspect of the present disclosure, a non-transitory computer-readable medium is presented. The non-transitory computer-readable medium has program code recorded thereon which, when executed by a processor, causes the processor to process asynchronous event-driven input samples of a continuous time signal. The program code includes program code to calculate a convolutional output directly from the event-driven input samples.

Additional features and advantages of the disclosure will be described below. It should be appreciated by those skilled in the art that this disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the disclosure as set forth in the appended claims. The novel features, which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.

100‧‧‧System on a chip (SOC)
102‧‧‧Multi-core general-purpose processor (CPU)
104‧‧‧Graphics processing unit (GPU)
106‧‧‧Digital signal processor (DSP)
108‧‧‧Neural processing unit (NPU)
110‧‧‧Connectivity block
112‧‧‧Multimedia processor
114‧‧‧Sensor processor
116‧‧‧Image signal processor (ISP)
118‧‧‧Memory block
120‧‧‧Navigation
200‧‧‧System
202‧‧‧Local processing unit
204‧‧‧Local state memory
206‧‧‧Local parameter memory
208‧‧‧Local (neuron) model program (LMP) memory
210‧‧‧Local learning program (LLP) memory
212‧‧‧Local connection memory
214‧‧‧Configuration processor unit
216‧‧‧Connection processing unit
300‧‧‧Network
302‧‧‧Fully connected network
304‧‧‧Locally connected network
306‧‧‧Convolutional network
308‧‧‧Shared
310‧‧‧Strength value
312‧‧‧Strength value
314‧‧‧Strength value
316‧‧‧Strength value
318‧‧‧Subsequent layer
320‧‧‧Subsequent layer
322‧‧‧Output
326‧‧‧Image
350‧‧‧Deep convolutional network
400‧‧‧Software architecture
402‧‧‧AI application
404‧‧‧User space
406‧‧‧Application programming interface (API)
408‧‧‧Run-time engine
410‧‧‧Operating system
412‧‧‧Linux kernel
414‧‧‧Driver
416‧‧‧Driver
418‧‧‧Driver
420‧‧‧SOC
422‧‧‧CPU
424‧‧‧DSP
426‧‧‧GPU
428‧‧‧NPU
500‧‧‧Run-time operation
502‧‧‧Smartphone
504‧‧‧Pre-processing module
506‧‧‧Converted image
508‧‧‧Cropping and/or resizing
510‧‧‧Classification application
512‧‧‧Scene detection back-end engine
514‧‧‧Pre-processing
516‧‧‧Scaling
518‧‧‧Cropping
520‧‧‧Deep neural network block
522‧‧‧Thresholding
524‧‧‧Exponential smoothing block
602‧‧‧Memory bank
604‧‧‧Processing unit
700‧‧‧Method
702‧‧‧Block
704‧‧‧Block
802‧‧‧Block
804‧‧‧Block
806‧‧‧Block
808‧‧‧Block
810‧‧‧Block

The features, nature, and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify correspondingly throughout.

FIGURE 1 illustrates an example implementation of designing a neural network using a system-on-a-chip (SOC), including a general-purpose processor, in accordance with certain aspects of the present disclosure.

FIGURE 2 illustrates an example implementation of a system in accordance with aspects of the present disclosure.

FIGURE 3A is a diagram illustrating a neural network in accordance with aspects of the present disclosure.

FIGURE 3B is a block diagram illustrating an exemplary deep convolutional network (DCN) in accordance with aspects of the present disclosure.

FIGURE 4 is a block diagram illustrating an exemplary software architecture that may modularize artificial intelligence (AI) functions in accordance with aspects of the present disclosure.

FIGURE 5 is a block diagram illustrating the run-time operation of an AI application on a smartphone in accordance with aspects of the present disclosure.

FIGURES 6A and 6B illustrate example implementations of asynchronous, event-driven processing for the temporal convolution of sampled signals.

FIGURE 7 illustrates a method for processing asynchronous event-driven samples of a continuous time signal in accordance with aspects of the present disclosure.

FIGURE 8 is a block diagram illustrating a method for event-driven temporal convolution of sampled signals in accordance with aspects of the present disclosure.

The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

Based on the teachings, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth. In addition, the scope of the disclosure is intended to cover such an apparatus or method practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth. It should be understood that any aspect of the disclosure disclosed may be embodied by one or more elements of a claim.

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

Although particular aspects are described herein, many variations and permutations of these aspects fall within the scope of the disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to different technologies, system configurations, networks, and protocols, some of which are illustrated by way of example in the figures and in the following description of the preferred aspects. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.

Event-driven temporal convolution for asynchronous pulse-modulated sampled signals

Uniform sampling is used in conventional data acquisition and signal processing techniques. The sampling frequency may be determined according to the maximum expected spectral frequency. However, sampling at this maximum frequency may waste power for input signals with relaxed properties (e.g., reduced frequency content, increased periods of silence). Using maximum-frequency sampling is problematic for emerging applications that rely on scarce energy resources.

Event-based sampling is a promising alternative to uniform sampling. In event-based sampling, a sample is output only when something significant (e.g., an event) occurs in the signal. One aspect of the present disclosure involves event-driven signal processing techniques to support future energy-efficient neuromorphic systems that rely on scarce energy resources.

One aspect of the present disclosure is directed to event-driven processing of the temporal convolution of sampled signals (e.g., Lebesgue and/or asynchronous pulse-modulated (APM) sampled signals). In particular, one aspect involves event-driven processing of the temporal convolution of APM sampled signals, in which both the encoding pulse and the kernel (e.g., impulse response) function are expressed as complex-weighted sums of causal complex exponentials. Accordingly, convolution of sampled signals according to asynchronous event-driven signal processing techniques may support highly energy-efficient future neuromorphic systems.

According to aspects of the present disclosure, a continuous analog signal may be received as an input. An event-based sampling process may be applied to produce a pulse train. In some aspects, the sampling process may comprise an APM sampling process or the like. The pulse train may consist of positive pulses, negative pulses, or both (bipolar). Because the input is sampled only when events occur, the sampling rate is much lower than would be observed, for example, with uniform time sampling. As such, the present systems and methods make it easier to satisfy the Nyquist sampling rate for reconstructing the input signal.

The pulse train may in turn be processed by directly applying the convolution of the pulse train with a kernel function to approximate the time-domain output (e.g., a continuous signal). That is, the pulse train may be used to produce an output signal without converting the pulses back to analog form to process and construct the output signal. In some aspects, the output signal, which may have an event-based form, may be maintained in this form for further processing. Additional post-processing, such as passing the output to a second-stage filter, may be performed directly without the overhead of re-conversion from an analog signal.

Event-based asynchronous sampling

Consider an integrable continuous-time real signal $x(t): [t_0, \infty) \rightarrow \mathbb{R}$ starting from time $t_0$, and let $X$ be the set containing all such signals $x(t)$.

A polarized event train $\xi(t): [t_0, \infty) \rightarrow \mathbb{R}$ is the response function of an ordered set of polarized events $\{(t_k, p_k) \mid t_k \in [t_0, \infty), p_k \in \{-1, 1\}\}$ $(k = 1, \ldots, K)$ such that

$t_k < t_{k'}$ if $k < k'$, and (1)

$\xi(t; \{(t_k, p_k)\}) = \sum_k p_k\, \delta(t - t_k)$, (2)

where $\delta$ is the Dirac delta function. Let $\Xi$ denote the set containing all such event trains $\xi(t)$. The direct reconstruction of a polarized event train $\xi(t; \{(t_k, p_k)\})$ under a kernel function $h(t)$ is given by its convolution with $h(t)$:

$y(t) = [\xi * h](t)$ (3)

$= \int h(\tau)\, \xi(t - \tau)\, d\tau$ (4)

$= \sum_k p_k\, h(t - t_k)$. (5)
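For illustration only (this sketch is ours, not part of the disclosure; the exponential kernel and the event values are arbitrary choices), the direct reconstruction of Equation 5 can be evaluated as:

```python
import math

def direct_reconstruction(events, h, t):
    # Direct reconstruction of Equation 5: y(t) = sum_k p_k * h(t - t_k).
    return sum(p * h(t - tk) for tk, p in events)

def h(t):
    # A causal exponential kernel: h(t) = e^{-t} for t >= 0, else 0.
    return math.exp(-t) if t >= 0.0 else 0.0

# A small bipolar event train {(t_k, p_k)}.
events = [(0.0, +1), (0.5, -1), (1.0, +1)]
y_at_1 = direct_reconstruction(events, h, 1.0)
```

Because the kernel is causal, events later than the query time contribute nothing, so the reconstruction at any time depends only on past events.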

In some aspects, the kernel function may be causal (e.g., $h(t) = 0$ for $t < 0$). Let $\mathcal{R}: \Xi \rightarrow X$ denote the direct reconstructor (e.g., $y = \mathcal{R}[\xi]$).

An asynchronous pulse modulation (APM) sampler $\mathcal{A}: X \rightarrow \Xi$ transforms $x(t)$ into a polarized event train $\xi(t)$ under a kernel function (e.g., pulse) $h(t)$ such that, for all $k \in \{1, \ldots, N\}$:

$\mathrm{sign}\big(x(t_k - 0) - y(t_k - 0) - x(t_0)\big) = p_k$, (7)

where $0$ denotes an infinitesimally small positive quantity and $y(t) = [\xi * h](t)$ is the direct reconstruction of $\xi(t)$ under $h(t)$.

A level-crossing sampler (e.g., a Lebesgue sampler) $\mathcal{L}: X \rightarrow \Xi$ is an APM sampler whose kernel function is the Heaviside step function. The Heaviside step function may be defined as:

$H(t) = \begin{cases} 1, & t \geq 0, \\ 0, & t < 0. \end{cases}$
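As a minimal sketch of level-crossing (Lebesgue) sampling (our illustration; the dense time grid, step size, and threshold are arbitrary choices, and a real sampler would operate on the analog signal directly rather than on a grid):

```python
def level_crossing_sample(x, t_end, dt, delta):
    # Lebesgue-style sampler: walk a dense time grid and emit a +1 or -1
    # event whenever the signal has moved delta away from the last
    # emitted reference level.
    events = []
    ref = x(0.0)
    n = 0
    t = 0.0
    while t <= t_end:
        v = x(t)
        while v - ref >= delta:
            ref += delta
            events.append((t, +1))
        while ref - v >= delta:
            ref -= delta
            events.append((t, -1))
        n += 1
        t = n * dt
    return events

# A unit-slope ramp crosses eight 0.25-wide levels on [0, 2].
ramp_events = level_crossing_sample(lambda t: t, t_end=2.05, dt=0.01, delta=0.25)
```

A monotonically rising input yields only positive events, and a quiet (constant) input yields none, which is the power advantage over uniform sampling discussed above.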

Event-driven signal processing

Given a continuous signal $x(t)$, the goal of processing may generally be defined as computing a transformation $\mathcal{T}$ of the continuous-time signal $x(t)$. For example, in some aspects, the transformation may be given by:

$y(t, s) = \mathcal{T}[x](t, s)$, (9)

where $s$ denotes optional additional arguments.

In practice, $x(t)$ may first be sampled by a sampler $\mathcal{S}$:

$\xi(t) = \mathcal{S}[x](t)$, (10)

which yields a time-ordered set of time–value pairs $\{(t_k, q_k)\}$. In some aspects, the $t_k$'s may be chosen to have regular, equal spacing, thereby configuring the sampler as a Riemann sampler. In another configuration, the $q_k$'s may be binary, thereby configuring the sampler as an APM sampler. Furthermore, the sampler may be configured as a level-crossing sampler (e.g., a Lebesgue sampler).

A technique $\mathcal{P}$ is then applied to the sampled signal $\xi(t)$:

$\hat{y}(t_k, s) = \mathcal{P}[\xi](t_k, s)$, (11)

to compute the approximate target transformation $y(t, s)$ defined at the time points $\{t_k\}$. Thus, in general, a signal processing paradigm may be fully defined by a target transformation, a sampler, and a technique operating on the sampled signal $(\mathcal{T}, \mathcal{S}, \mathcal{P})$. As a special case, if $\mathcal{T}$ is the identity transformation (e.g., $\mathcal{T} = I$, where $I$ is the identity function), the signal processing paradigm reduces to the propagation paradigm:

$y(t, s) = I[x](t) = x(t)$, (13)

whose goal is to reconstruct the signal $x(t)$ from its sampled version $\xi(t)$.

In some aspects, an event-driven technique may be used to transform $\xi(t; \{(t_k, q_k)\})$ into $\hat{y}(t_k, s)$. The technique $\mathcal{P}$ may be considered event-driven if it can be expressed as a recurrence relation of the form:

$\hat{y}(t_k, s) = \mathcal{P}\big(t_k, q_k, \ldots, t_{k-n}, q_{k-n};\ \hat{y}(t_{k-1}, s), \ldots, \hat{y}(t_{k-n}, s);\ s\big)$, (14)

where $n$ is a finite non-negative integer. That is, as soon as the sample or (synonymously) event $(t_k, q_k)$ arrives, the output value $\hat{y}(t_k, s)$ may be computed from the current sample plus a finite history.

Furthermore, if $\mathcal{P}$ can be expressed in terms of the sampling time intervals rather than absolute times,

$\hat{y}(t_k, s) = \mathcal{P}\big(t_k - t_{k-1}, q_k, \ldots, t_{k-n} - t_{k-n-1}, q_{k-n};\ \hat{y}(t_{k-1}, s), \ldots, \hat{y}(t_{k-n}, s);\ s\big)$, (15)

the event-driven technique may also be called time-invariant.

In addition, in some aspects, $\mathcal{P}$ may be expressed in the following generalized linear form:

$\hat{y}(t_k, s) = \sum_{i=0}^{n} A_i(t_k, s)\, \varphi(q_{k-i}) + \sum_{j=1}^{n} B_j(t_k, s)\, \hat{y}(t_{k-j}, s)$, (16)

where $\varphi$ is a vector function of the sample values $q$'s, and $A$ and $B$ are the feed-forward and feedback kernel functions, respectively. Such an event-driven technique may be characterized as generalized linear. Moreover, in some aspects, a time-invariant event-driven signal processing technique $\mathcal{P}$ may be expressed in the following generalized linear form:

$\hat{y}(t_k, s) = \sum_{i=0}^{n} A_i(t_k - t_{k-i}, s)\, \varphi(q_{k-i}) + \sum_{j=1}^{n} B_j(t_k - t_{k-j}, s)\, \hat{y}(t_{k-j}, s)$. (18)

As such, the event-driven signal processing technique $\mathcal{P}$ may be considered linear time-invariant (LTI).

In one example, in which the Riemann samples of a single signal are separated by regular intervals (e.g., the $t_k$'s), $q_k = [x(t_k)]$, and $\varphi$ is the identity function of $q_k$, the time-invariant generalized linear event-driven signal processing technique reduces to an infinite impulse response (IIR) filter.
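A sketch of this special case (ours, with arbitrary coefficients): with regularly spaced samples and $\varphi$ the identity, the generalized linear recurrence collapses to a first-order IIR update $y_k = a\,q_k + b\,y_{k-1}$:

```python
def iir_first_order(samples, a, b):
    # First-order IIR filter y_k = a*q_k + b*y_{k-1}: the special case of
    # the generalized linear event-driven recurrence when samples arrive
    # on a regular grid and phi is the identity.
    y, out = 0.0, []
    for q in samples:
        y = a * q + b * y
        out.append(y)
    return out

# The impulse response of the filter decays geometrically: a * b^k.
impulse = [1.0] + [0.0] * 4
resp = iir_first_order(impulse, a=2.0, b=0.5)
```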

Event-driven convolution

The convolution of a signal $x(t)$ with a kernel function $k(t)$ may be defined as:

$y(t) = [k * x](t)$ (19)

$= \int k(\tau)\, x(t - \tau)\, d\tau$ (20)

$= \int x(\tau)\, k(t - \tau)\, d\tau$. (21)

Convolution with the direct reconstruction of an APM sampled signal

x(t)首先由取樣(例如,APM取樣)(例如,)並且隨後由直接重構(例如,y(t) )時,y(t)與k(t)的迴旋可提供對的近似: k(τ)y(t-τ)dr (23) =ʃy(τ)k(t-τ)dr。 (24) When x ( t ) is first Sampling (eg, APM sampling) (for example, And then by Direct reconstruction (for example, y(t) When the y ( t ) and k ( t ) maneuvers can provide Approximate: k ( τ ) y ( t - τ ) dr (23) =ʃ y ( τ ) k ( t - τ ) dr . (twenty four)

Denoting the event train by $\{(t_k, p_k)\}$ and applying Equation 5:

$\hat{y}(t) = \sum_k p_k \int h(\tau - t_k)\, k(t - \tau)\, d\tau$ (26)

$= \sum_k c_k(t)$, (27)

where

$c_k(t) = p_k \int h(\tau - t_k)\, k(t - \tau)\, d\tau$ (28)

is the elementary contribution of event $t_k$ to $\hat{y}(t)$.

Complex-weighted complex exponential expansion of the encoding pulse and kernel function

To derive a purely event-driven technique for convolution, the APM encoding pulse and the convolution kernel function may be expressed as complex-weighted sums of causal complex exponentials.

Given an arbitrary causal function $f(t) = g(t)\, H(t)$, if $g(t)$ can be expressed as a sum of complex exponentials, for example,

$g(t) = \sum_j a_j e^{b_j t}$,

where $a_j, b_j \in \mathbb{C}$, then $f(t)$ can be expressed as a complex-weighted sum of causal complex exponentials:

$f(t) = \sum_j a_j e^{b_j t}\, H(t)$.

Furthermore, a real function expressed as a weighted sum of causal damped oscillations may be converted into the form of a complex-weighted sum of causal complex exponentials. First, a real function expressed as a weighted sum of $N$ causal damped oscillations may be written as:

$f(t) = \sum_{n=1}^{N} \rho_n e^{-\sigma_n t} \cos(\omega_n t + \phi_n)\, H(t)$, (33)

where the $\rho_n$'s, $\sigma_n$'s, $\omega_n$'s, and $\phi_n$'s are the amplitudes, damping constants, angular frequencies, and phases of the components.

Constructing $(a_j, b_j)$ $(j = 1, \ldots, 2N)$ from conjugate pairs of complex exponentials,

$a_{2n-1} = \tfrac{1}{2}\rho_n e^{i\phi_n}, \quad b_{2n-1} = -\sigma_n + i\omega_n, \quad a_{2n} = \tfrac{1}{2}\rho_n e^{-i\phi_n}, \quad b_{2n} = -\sigma_n - i\omega_n$,

produces

$f(t) = \sum_{j=1}^{2N} a_j e^{b_j t}\, H(t)$.
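This conjugate-pair identity can be checked numerically; in the sketch below (ours; parameter values are arbitrary), a causal damped oscillation is compared against its complex-exponential form $a\,e^{bt} + \bar{a}\,e^{\bar{b}t}$:

```python
import cmath
import math

def damped_cos(t, rho, sigma, omega, phi):
    # One causal damped oscillation: rho * e^{-sigma t} * cos(omega t + phi).
    return rho * math.exp(-sigma * t) * math.cos(omega * t + phi) if t >= 0 else 0.0

def conjugate_pair(t, rho, sigma, omega, phi):
    # The same component written as a * e^{b t} + conj(a) * e^{conj(b) t}
    # with a = (rho/2) * e^{i phi} and b = -sigma + i omega.
    if t < 0:
        return 0.0
    a = 0.5 * rho * cmath.exp(1j * phi)
    b = complex(-sigma, omega)
    return (a * cmath.exp(b * t) + a.conjugate() * cmath.exp(b.conjugate() * t)).real
```

The two forms agree because $a e^{bt} + \bar{a} e^{\bar{b}t} = 2\,\mathrm{Re}(a e^{bt})$.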

This means that a real causal function of the form of Equation 33 can be expressed as a complex-weighted sum of causal complex exponentials. In some aspects, both the APM encoding pulse $h(t)$ and the convolution kernel function $k(t)$ may be expressed in this form, for example:

$h(t) = \sum_n a_n^h e^{b_n^h t}\, H(t), \qquad k(t) = \sum_m a_m^k e^{b_m^k t}\, H(t)$.

Inserting these functions into Equation 28, the contribution from event $t_k$ becomes

$c_k(t) = \sum_n \sum_m c_{k,n,m}(t)$,

where

$c_{k,n,m}(t) = p_k\, a_n^h a_m^k \int e^{b_n^h (\tau - t_k)} H(\tau - t_k)\, e^{b_m^k (t - \tau)} H(t - \tau)\, d\tau$

is the elementary contribution of event $t_k$ to $\hat{y}(t)$ via the $n$-th component of the APM pulse and the $m$-th component of the kernel function.

Causal complex exponential encoding pulse and kernel function

Consider the elementary contribution of an event $t_k$ under an encoding pulse and a kernel function that are each a single causal complex exponential of the same form, for example,

$h(t) = a_h e^{b_h t}\, H(t), \qquad k(t) = a_k e^{b_k t}\, H(t)$.

In this way, the contribution $c_k(t)$ may be decomposed into two components, for example,

$c_k(t) = u_k(t) - v_k(t)$,

where

$u_k(t) = \frac{p_k a_h a_k}{b_h - b_k}\, e^{b_h (t - t_k)} H(t - t_k), \qquad v_k(t) = \frac{p_k a_h a_k}{b_h - b_k}\, e^{b_k (t - t_k)} H(t - t_k)$.

Further defining the aggregate states

$u(t) = \frac{a_h a_k}{b_h - b_k} \sum_{t_k \le t} p_k\, e^{b_h (t - t_k)}, \qquad v(t) = \frac{a_h a_k}{b_h - b_k} \sum_{t_k \le t} p_k\, e^{b_k (t - t_k)}$,

one obtains

$\hat{y}(t) = \sum_k c_k(t) = u(t) - v(t)$.
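For a single-exponential pulse and kernel, the elementary contribution of one event has a closed form that can be checked against direct numerical integration of Equation 28. The sketch below is our illustration (function names and test values are hypothetical; $b_h \neq b_k$ is assumed):

```python
import cmath

def contribution_closed_form(T, ah, bh, ak, bk):
    # Elementary contribution of one event evaluated a time T = t - t_k > 0
    # after the event, for h(t) = ah*e^{bh t} and k(t) = ak*e^{bk t}
    # (both causal); assumes bh != bk.
    return ah * ak * (cmath.exp(bh * T) - cmath.exp(bk * T)) / (bh - bk)

def contribution_numeric(T, ah, bh, ak, bk, n=20000):
    # Trapezoidal approximation of the integral in Equation 28:
    # integral over u in [0, T] of ah*e^{bh u} * ak*e^{bk (T - u)} du.
    du = T / n
    total = 0j
    for i in range(n + 1):
        u = i * du
        w = 0.5 if i in (0, n) else 1.0
        total += w * ah * cmath.exp(bh * u) * ak * cmath.exp(bk * (T - u))
    return total * du
```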

Event-driven technique for convolving an APM sampled signal having a causal exponential pulse with a causal exponential kernel

First, consider the evolution of the aggregate states from event $t_{k-1}$ to event $t_k$. Starting at $t = t_{k-1} + 0$ (here $0$ may denote an infinitesimally small positive value), the states decay freely until just before the next event:

$u(t_k - 0) = u(t_{k-1} + 0)\, e^{b_h (t_k - t_{k-1})}$, (76)

$v(t_k - 0) = v(t_{k-1} + 0)\, e^{b_k (t_k - t_{k-1})}$. (77)

Finally, at $t = t_k + 0$, the arrival of event $(t_k, p_k)$ increments both states by the same amount:

$u(t_k + 0) = u(t_k - 0) + \frac{p_k a_h a_k}{b_h - b_k}, \qquad v(t_k + 0) = v(t_k - 0) + \frac{p_k a_h a_k}{b_h - b_k}$. (78)

Therefore, the change in the output from event $t_{k-1}$ to event $t_k$ follows from reading out the difference of the states:

$\hat{y}(t_k + 0) = u(t_k + 0) - v(t_k + 0)$. (79)

Combining Equations 76 and 77 with the state increments, the increments cancel in the difference, producing

$\hat{y}(t_k + 0) = u(t_{k-1} + 0)\, e^{b_h (t_k - t_{k-1})} - v(t_{k-1} + 0)\, e^{b_k (t_k - t_{k-1})}$.

Combining Equations 76, 77, and 79, this particular recursive technique is a generalized linear time-invariant (LTI) infinite impulse response (IIR) filter (Equation 18); for example, with state vector $\mathbf{z}_k = [u(t_k + 0), v(t_k + 0)]^{\mathsf{T}}$ it may be given by:

$\mathbf{z}_k = B(t_k - t_{k-1})\, \mathbf{z}_{k-1} + A\, \varphi(p_k)$, (80)

where

$\varphi(p) = p$, (81)

$A = \frac{a_h a_k}{b_h - b_k} \begin{bmatrix} 1 \\ 1 \end{bmatrix}$, (82)

$B(\Delta t) = \begin{bmatrix} e^{b_h \Delta t} & 0 \\ 0 & e^{b_k \Delta t} \end{bmatrix}$, (83)

and the output is read out as $\hat{y}(t_k) = u(t_k + 0) - v(t_k + 0)$.
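The event-driven recursion can be sketched as follows (our illustration, not the patent's pseudocode; names and test values are hypothetical, and $b_h \neq b_k$ is assumed). Two auxiliary states decay between events and are incremented by each event's polarity, and the output is read out from their weighted difference; the result is checked against a direct per-event sum:

```python
import cmath

def event_driven_convolution(events, ah, bh, ak, bk, t_query):
    # Event-driven evaluation of the convolution for a single-exponential
    # pulse h(t) = ah*e^{bh t} and kernel k(t) = ak*e^{bk t} (both causal,
    # bh != bk). States decay between events and jump at each event.
    u = v = 0j
    t_prev = events[0][0]
    for tk, p in events:
        u = u * cmath.exp(bh * (tk - t_prev)) + p
        v = v * cmath.exp(bk * (tk - t_prev)) + p
        t_prev = tk
    dt = t_query - t_prev
    u *= cmath.exp(bh * dt)
    v *= cmath.exp(bk * dt)
    return ah * ak * (u - v) / (bh - bk)

def brute_force(events, ah, bh, ak, bk, t_query):
    # Non-recursive check: direct sum of closed-form elementary contributions.
    total = 0j
    for tk, p in events:
        T = t_query - tk
        if T > 0:
            total += p * ah * ak * (cmath.exp(bh * T) - cmath.exp(bk * T)) / (bh - bk)
    return total
```

Only two state variables are carried between events, so the cost per event is constant regardless of how many events occurred in the past.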

Event-driven technique for the convolution of APM sampled signals

Accordingly, the full convolution technique for encoding pulses and kernel functions that are generalized complex-weighted sums of complex exponentials may be derived.

Based on the recursion described in Equations 80, 81, 82, and 83, the elementary contribution of event $t_k$ to $\hat{y}(t)$ via the $n$-th component of the APM pulse and the $m$-th component of the kernel function may be computed in a purely event-driven manner using the following equations:

$u_n(t_k + 0) = u_n(t_{k-1} + 0)\, e^{b_n^h (t_k - t_{k-1})} + p_k$,

$v_m(t_k + 0) = v_m(t_{k-1} + 0)\, e^{b_m^k (t_k - t_{k-1})} + p_k$,

$\sum_{k' \le k} c_{k',n,m}(t_k) = \frac{a_n^h a_m^k}{b_n^h - b_m^k}\, \big(u_n(t_k + 0) - v_m(t_k + 0)\big)$,

where $u_n$ and $v_m$ are two auxiliary state variables.

考慮 全遞迴式可表達如下: consider The full recursive expression can be expressed as follows:

作為總結,用於APM取樣信號的迴旋的完全純事件驅動型技術在表1中描述。表1包括用於計算APM取樣信號的迴旋的示例性偽代碼,其中APM核與迴旋核函數是表達為複指數的複加權和的任意因果函數。 As a summary, a completely pure event-driven technique for the convolution of APM sampled signals is described in Table 1. Table 1 includes exemplary pseudocode for calculating the whirling of an APM sampled signal, where the APM kernel and the whirling kernel function are arbitrary causal functions expressed as a complex weighted sum of complex exponents.

Referring to Table 1, an event time vector, an event polarity vector, and a kernel function h(t) may be used to represent the continuous-time input signal x(t). The kernel function h(t) may further be represented using complex-weight constant coefficients a_h and exponents b_h. The kernel function k(t) may be used to represent the impulse response (or second input), and may be expressed using complex-weight constant coefficients a_k and exponents b_k. The input signal may be sampled when an event occurs. For each event, the state variables may be updated and used to compute the output directly as the convolution of the input signal with the impulse response. The goal is to compute the output signal, which is the convolution of the input signal with the system impulse response (or second input signal).
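The pseudocode of Table 1 is not reproduced in this extraction, but the single-component case (one pulse exponent b_h and one kernel exponent b_k) can be sketched as follows. This is a hedged illustration rather than the patent's own listing: the update rule assumed here is the standard exponential-decay recursion for a pair of auxiliary state variables, and the function names are invented for the example.

```python
import math

def event_driven_conv(events, a_h, b_h, a_k, b_k):
    """Event-driven convolution of an APM-sampled signal with a causal
    exponential kernel.  `events` is a list of (t_k, p_k) pairs in
    increasing time order; returns y(t_k + 0) at each event time."""
    u = v = 0.0          # auxiliary state variables for one (n, m) pair
    t_prev = None
    outputs = []
    for t_k, p_k in events:
        dt = 0.0 if t_prev is None else t_k - t_prev
        u = u * math.exp(b_h * dt) + p_k   # decay, then absorb the new event
        v = v * math.exp(b_k * dt) + p_k
        outputs.append(a_h * a_k * (u - v) / (b_h - b_k))
        t_prev = t_k
    return outputs

def direct_conv(events, a_h, b_h, a_k, b_k, t):
    """Closed-form reference: sum the elemental contributions of all past
    events (convolution of two causal exponentials, b_h != b_k)."""
    y = 0.0
    for t_j, p_j in events:
        if t_j < t:
            d = t - t_j
            y += p_j * a_h * a_k * (math.exp(b_h * d)
                                    - math.exp(b_k * d)) / (b_h - b_k)
    return y
```

Note that the event-driven pass touches each event once, regardless of how far in the past earlier events lie, while the reference re-sums all elemental contributions at every query time.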

FIG. 1 illustrates an example implementation of the foregoing processing of asynchronous, event-driven sampling of continuous-time signals using a system-on-a-chip (SOC) 100, which may include a general-purpose processor (CPU) or a multi-core general-purpose processor (CPU) 102. Variables (e.g., neural signals and synaptic weights), system parameters associated with a computational device (e.g., a neural network with weights), delays, frequency-bin information, and task information may be stored in a memory block associated with a neural processing unit (NPU) 108, a memory block associated with the CPU 102, a memory block associated with a graphics processing unit (GPU) 104, a memory block associated with a digital signal processor (DSP) 106, a dedicated memory block 118, or may be distributed across multiple blocks. Instructions executed at the general-purpose processor 102 may be loaded from a program memory associated with the CPU 102 or may be loaded from the dedicated memory block 118.

The SOC 100 may also include additional processing blocks customized for specific functions, such as the GPU 104, the DSP 106, and a connectivity block 110, which may include fourth-generation long-term evolution (4G LTE) connectivity, unlicensed Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, as well as a multimedia processor 112 that may, for example, detect and recognize gestures. In one implementation, the NPU is implemented in the CPU, DSP, and/or GPU. The SOC 100 may also include a sensor processor 114, an image signal processor (ISP), and/or navigation 120, which may include a global positioning system.

The SOC may be based on the ARM instruction set. In an aspect of the present disclosure, the instructions loaded into the general-purpose processor 102 may include code for computing a convolution output directly from event-driven input samples.

FIG. 2 illustrates an example implementation of a system 200 in accordance with certain aspects of the present disclosure. As illustrated in FIG. 2, the system 200 may have multiple local processing units 202 that may perform various operations of the methods described herein. Each local processing unit 202 may include a local state memory 204 and a local parameter memory 206 that may store parameters of a neural network. In addition, the local processing unit 202 may have a local (neuron) model program (LMP) memory 208 for storing a local model program, a local learning program (LLP) memory 210 for storing a local learning program, and a local connection memory 212. Furthermore, as illustrated in FIG. 2, each local processing unit 202 may interface with a configuration processor unit 214 for providing configurations for the local memories of the local processing unit, and with a routing connection processing unit 216 that provides routing between the local processing units 202.

A deep learning architecture may perform an object recognition task by learning to represent inputs at successively higher levels of abstraction in each layer, thereby building up a useful feature representation of the input data. In this way, deep learning addresses a major bottleneck of traditional machine learning. Prior to the advent of deep learning, a machine learning approach to an object recognition problem may have relied heavily on human-engineered features, perhaps in combination with a shallow classifier. A shallow classifier may be a two-class linear classifier, for example, in which a weighted sum of feature vector components may be compared with a threshold to predict to which class the input belongs. Human-engineered features may be templates or kernels tailored to a specific problem domain by engineers with domain expertise. A deep learning architecture, in contrast, may learn to represent features that are similar to what a human engineer might design, but through training. Furthermore, a deep network may learn to represent and recognize new types of features that a human might not have considered.

A deep learning architecture may learn a hierarchy of features. If presented with visual data, for example, the first layer may learn to recognize simple features, such as edges, in the input stream. If presented with auditory data, the first layer may learn to recognize spectral power in specific frequencies. A second layer, taking the output of the first layer as input, may learn to recognize combinations of features, such as simple shapes for visual data or combinations of sounds for auditory data. Higher layers may learn to represent complex shapes in visual data or words in auditory data. Still higher layers may learn to recognize common visual objects or spoken phrases.

Deep learning architectures may perform especially well when applied to problems that have a natural hierarchical structure. For example, the classification of motorized vehicles may benefit from first learning to recognize wheels, windshields, and other features. These features may be combined at higher layers in different ways to recognize cars, trucks, and airplanes.

Neural networks may be designed with a variety of connectivity patterns. In feed-forward networks, information is passed from lower to higher layers, with each neuron in a given layer communicating to neurons in higher layers. A hierarchical representation may be built up in successive layers of a feed-forward network, as described above. Neural networks may also have recurrent or feedback (also called top-down) connections. In a recurrent connection, the output from a neuron in a given layer is communicated to another neuron in the same layer. A recurrent architecture may be helpful in recognizing patterns that span more than one of the input data chunks that are delivered to the neural network in a sequence. A connection from a neuron in a given layer to a neuron in a lower layer is called a feedback (or top-down) connection. A network with many feedback connections may be helpful when the recognition of a high-level concept may aid in discriminating the particular low-level features of an input.

Referring to FIG. 3A, the connections between layers of a neural network may be fully connected (302) or locally connected (304). In a fully connected network 302, a neuron in a first layer may communicate its output to every neuron in a second layer, so that each neuron in the second layer receives input from every neuron in the first layer. Alternatively, in a locally connected network 304, a neuron in a first layer may be connected to a limited number of neurons in the second layer. A convolutional network 306 may be locally connected, and is further configured such that the connection strengths associated with the inputs for each neuron in the second layer are shared (e.g., 308). More generally, a locally connected layer of a network may be configured so that each neuron in a layer has the same or a similar connectivity pattern, but with connection strengths that may have different values (e.g., 310, 312, 314, and 316). The locally connected connectivity pattern may give rise to spatially distinct receptive fields in a higher layer, because the higher-layer neurons in a given region may receive inputs that are tuned through training to the properties of a restricted portion of the total input to the network.
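To make the weight-sharing distinction above concrete, the following sketch counts the free weights of a one-dimensional layer under each of the three connectivity patterns (biases ignored). The function names are illustrative and do not appear in the disclosure.

```python
def fully_connected_weights(n_in, n_out):
    # every output neuron connects to every input neuron
    return n_in * n_out

def locally_connected_weights(n_out, k):
    # each output neuron has its own k-tap receptive field (no sharing)
    return n_out * k

def convolutional_weights(k):
    # one shared k-tap template reused by every output neuron
    return k
```

For a layer with 32 inputs, 32 outputs, and 5-tap receptive fields, the counts are 1024, 160, and 5 weights, respectively, which illustrates why weight sharing makes convolutional layers comparatively cheap to store and train.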

Locally connected neural networks may be well suited to problems in which the spatial location of inputs is meaningful. For instance, a network 300 designed to recognize visual features from a car-mounted camera may develop high-layer neurons with different properties depending on whether they are associated with the lower portion of the image or the upper portion of the image. Neurons associated with the lower portion of the image may learn to recognize lane markings, for example, while neurons associated with the upper portion of the image may learn to recognize traffic lights, traffic signs, and the like.

A DCN may be trained with supervised learning. During training, a DCN may be presented with an image 326, such as a cropped image of a speed limit sign, and a "forward pass" may then be computed to produce an output 322. The output 322 may be a vector of values corresponding to features such as "sign", "60", and "100". The network designer may want the DCN to output a high score for some of the neurons in the output feature vector, for example the ones corresponding to "sign" and "60" as shown in the output 322 of the trained network 300. Before training, the output produced by the DCN is likely to be incorrect, and so an error may be calculated between the actual output and the target output. The weights of the DCN may then be adjusted so that the output scores of the DCN are more closely aligned with the target.

To adjust the weights, a learning algorithm may compute a gradient vector for the weights. The gradient may indicate the amount by which an error would increase or decrease if a weight were adjusted slightly. At the top layer, the gradient may correspond directly to the value of a weight connecting an activated neuron in the penultimate layer and a neuron in the output layer. In lower layers, the gradient may depend on the value of the weights and on the computed error gradients of the higher layers. The weights may then be adjusted so as to reduce the error. This manner of adjusting the weights may be referred to as "back propagation", as it involves a "backward pass" through the neural network.

In practice, the error gradient of the weights may be computed over a small number of examples, so that the computed gradient approximates the true error gradient. This approximation method may be referred to as stochastic gradient descent. Stochastic gradient descent may be repeated until the achievable error rate of the entire system has stopped decreasing or until the error rate has reached a target level.
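The gradient computation and stochastic update described above can be sketched numerically for a single linear unit y = w·x with squared error. The learning rate, batch size, and target function here are illustrative assumptions, not values from the disclosure.

```python
import random

def loss(w, batch):
    # mean squared error of a single linear unit y = w * x
    return sum((w * x - t) ** 2 for x, t in batch) / len(batch)

def grad(w, batch):
    # analytic gradient of the mean squared error with respect to w
    return sum(2 * (w * x - t) * x for x, t in batch) / len(batch)

def sgd(w, data, lr=0.1, steps=100, batch_size=2, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        batch = rng.sample(data, batch_size)  # small sample: approximate gradient
        w -= lr * grad(w, batch)              # step against the gradient
    return w
```

With data drawn from the target relation t = 3x, repeated noisy updates drive w toward 3 even though each batch gradient only approximates the true error gradient.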

After learning, the DCN may be presented with new images 326, and a forward pass through the network may yield an output 322 that may be considered an inference or a prediction of the DCN.

Deep belief networks (DBNs) are probabilistic models comprising multiple layers of hidden nodes. DBNs may be used to extract a hierarchical representation of training data sets. A DBN may be obtained by stacking layers of Restricted Boltzmann Machines (RBMs). An RBM is a type of artificial neural network that can learn a probability distribution over a set of inputs. Because RBMs can learn a probability distribution in the absence of information about the class to which each input should be categorized, RBMs are often used in unsupervised learning. Using a hybrid unsupervised and supervised paradigm, the bottom RBMs of a DBN may be trained in an unsupervised manner and may serve as feature extractors, while the top RBM may be trained in a supervised manner (on a joint distribution of inputs from the previous layer and target classes) and may serve as a classifier.

Deep convolutional networks (DCNs) are networks of convolutional networks, configured with additional pooling and normalization layers. DCNs have achieved state-of-the-art performance on many tasks. DCNs can be trained using supervised learning, in which both the input and output targets are known for many exemplars and are used to modify the weights of the network by use of gradient descent methods.

DCNs may be feed-forward networks. In addition, as described above, the connections from a neuron in a first layer of a DCN to a group of neurons in the next higher layer are shared across the neurons in the first layer. The feed-forward and shared connections of DCNs may be exploited for fast processing. The computational burden of a DCN may be much less than, for example, that of a similarly sized neural network that comprises recurrent or feedback connections.

The processing of each layer of a convolutional network may be considered a spatially invariant template or basis projection. If the input is first decomposed into multiple channels, such as the red, green, and blue channels of a color image, then the convolutional network trained on that input may be considered three-dimensional, with two spatial dimensions along the axes of the image and a third dimension capturing color information. The outputs of the convolutional connections may be considered to form a feature map in the subsequent layers 318 and 320, with each element of the feature map (e.g., 320) receiving input from a range of neurons in the previous layer (e.g., 318) and from each of the multiple channels. The values in the feature map may be further processed with a non-linearity, such as a rectification, max(0, x). Values from adjacent neurons may be further pooled, which corresponds to down-sampling, and may provide additional local invariance and dimensionality reduction. Normalization, which corresponds to whitening, may also be applied through lateral inhibition between neurons in the feature map.
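The per-layer processing chain just described — a spatially invariant template, rectification max(0, x), and pooling — can be sketched in one dimension. This is an illustrative toy pipeline, not the network of FIG. 3A.

```python
def conv1d_valid(signal, template):
    """Slide a shared template over the signal (implemented as correlation,
    which equals convolution with a time-reversed template)."""
    n = len(template)
    return [sum(signal[i + j] * template[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

def relu(xs):
    """Rectification non-linearity max(0, x) applied to a feature map."""
    return [max(0.0, x) for x in xs]

def max_pool(xs, size=2):
    """Non-overlapping max pooling: down-samples and adds local invariance."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]
```

Chaining the three stages on an edge-detecting template [1, -1] turns a raw signal into a short, rectified, down-sampled feature map, mirroring one convolution block of the architecture described below.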

The performance of deep learning architectures may increase as more labeled data points become available or as computational power increases. Modern deep neural networks are routinely trained with computing resources thousands of times greater than what was available to a typical researcher just fifteen years ago. New architectures and training paradigms may further boost the performance of deep learning. Rectified linear units may reduce a training issue known as vanishing gradients. New training techniques may reduce over-fitting and thus enable larger models to achieve better generalization. Encapsulation techniques may abstract the data in a given receptive field and further boost overall performance.

FIG. 3B is a block diagram illustrating an exemplary deep convolutional network 350. The deep convolutional network 350 may include multiple different types of layers based on connectivity and weight sharing. As shown in FIG. 3B, the exemplary deep convolutional network 350 includes multiple convolution blocks (e.g., C1 and C2). Each convolution block may be configured with a convolution layer, a normalization layer (LNorm), and a pooling layer. The convolution layers may include one or more convolutional filters, which may be applied to the input data to generate a feature map. Although only two convolution blocks are shown, the present disclosure is not so limited; rather, any number of convolution blocks may be included in the deep convolutional network 350 according to design preference. The normalization layer may be used to normalize the output of the convolutional filters. For example, the normalization layer may provide whitening or lateral inhibition. The pooling layer may provide down-sampling aggregation over space for local invariance and dimensionality reduction.

For example, the parallel filter banks of a deep convolutional network may optionally be loaded onto the CPU 102 or GPU 104 of the SOC 100, based on an ARM instruction set, to achieve high performance and low power consumption. In alternative embodiments, the parallel filter banks may be loaded onto the DSP 106 or ISP 116 of the SOC 100. In addition, the DCN may access other processing blocks that may be present on the SOC, such as processing blocks dedicated to sensors 114 and navigation 120.

The deep convolutional network 350 may also include one or more fully connected layers (e.g., FC1 and FC2). The deep convolutional network 350 may further include a logistic regression (LR) layer. Between each layer of the deep convolutional network 350 are weights (not shown) that are to be updated. The output of each layer may serve as an input of a succeeding layer in the deep convolutional network 350 to learn hierarchical feature representations from the input data (e.g., images, audio, video, sensor data, and/or other input data) supplied at the first convolution block C1.

FIG. 4 is a block diagram illustrating an exemplary software architecture 400 that may modularize artificial intelligence (AI) functions. Using the architecture, applications 402 may be designed so that various processing blocks of an SOC 420 (for example, a CPU 422, DSP 424, GPU 426, and/or NPU 428) perform supporting computations during run-time operation of the application 402.

The AI application 402 may be configured to call functions defined in a user space 404 that may, for example, provide for the detection and recognition of a scene indicative of the location at which the device currently operates. The AI application 402 may, for example, configure a microphone and a camera differently depending on whether the recognized scene is an office, a lecture hall, a restaurant, or an outdoor setting, such as a lake. The AI application 402 may make a request to compiled program code associated with a library defined in a scene detection application programming interface (API) 406 to provide an estimate of the current scene. This request may ultimately rely on the output of a deep neural network configured to provide scene estimates based on, for example, video and positioning data.

A run-time engine 408, which may be compiled code of a run-time framework, may be further accessible to the AI application 402. The AI application 402 may cause the run-time engine, for example, to request a scene estimate at a particular time interval, or a scene estimate triggered by an event detected by the user interface of the application. When caused to estimate the scene, the run-time engine may in turn send a signal to an operating system 410, such as a Linux kernel 412, running on the SOC 420. The operating system 410, in turn, may cause a computation to be performed on the CPU 422, the DSP 424, the GPU 426, the NPU 428, or some combination thereof. The CPU 422 may be accessed directly by the operating system, while other processing blocks may be accessed through a driver, such as drivers 414-418 for the DSP 424, the GPU 426, or the NPU 428. In an illustrative example, the deep neural network may be configured to run on a combination of processing blocks, such as the CPU 422 and the GPU 426, or may be run on the NPU 428, if present.

FIG. 5 is a block diagram illustrating the run-time operation 500 of an AI application on a smartphone 502. The AI application may include a pre-process module 504 that may be configured (for example, using the JAVA programming language) to convert the format of an image 506 and then crop and/or resize the image (508). The pre-processed image may then be communicated to a classify application 510 containing a scene detection back-end engine 512 that may be configured (for example, using the C programming language) to detect and classify scenes based on visual input. The scene detection back-end engine 512 may be configured to further pre-process (514) the image by scaling (516) and cropping (518). For example, the image may be scaled and cropped so that the resulting image is 224 pixels by 224 pixels. These dimensions may map to the input dimensions of a neural network. The neural network may be configured by a deep neural network block 520 so that various processing blocks of the SOC 100 further process the image pixels with a deep neural network. The results of the deep neural network may then be thresholded (522) and passed through an exponential smoothing block 524 in the classify application 510. The smoothed results may then cause a change in the settings and/or the display of the smartphone 502.

In one configuration, a neuron model is configured for directly computing a convolution output from event-driven input samples and interpolating the output between events. The neuron model includes computing means and/or interpolating means. In one aspect, the computing means and/or interpolating means may be the general-purpose processor 102, program memory associated with the general-purpose processor 102, the memory block 118, the local processing units 202, the routing connection processing units 216, and/or the convolution processing units 604 configured to perform the recited functions. In another configuration, the aforementioned means may be any module or any apparatus configured to perform the functions recited by the aforementioned means.

According to certain aspects of the present disclosure, each local processing unit 202 may be configured to determine parameters of the neural network based upon one or more desired functional features of the neural network, and to develop the one or more functional features toward the desired functional features as the determined parameters are further adapted, tuned, and updated.

FIGS. 6A and 6B illustrate example implementations of the aforementioned asynchronous, event-driven processing for temporal convolution of sampled signals. As illustrated in FIG. 6A, each memory bank 602 stores event-driven samples of a first input signal and a second input signal associated with a corresponding processing unit (convolution processing unit) 604. In this aspect of the present disclosure, the processing units 604 may be configured for temporal convolution of the sampled signals to provide event-driven output samples of the convolution. As illustrated in FIG. 6B, one memory bank 602 stores event-driven samples of an input signal and another memory bank 602 stores event-driven samples of a system impulse response function associated with a corresponding processing unit (convolution processing unit) 604. The processing units 604 are configured for temporal convolution of the sampled signals to provide event-driven output samples of the convolution.

FIG. 7 illustrates a method 700 for processing asynchronous event-driven samples of a continuous-time signal. In block 702, the process directly computes a convolution output from event-driven input samples. In some aspects, the convolution output may be in an event-based format. The convolution output may be based on asynchronous pulse-modulated (APM) encoding pulses.

The convolution output may be computed by expressing the encoding pulses and a convolution kernel function as sums of complex-weighted causal complex exponentials. In some aspects, the convolution output may be computed by approximating the encoding pulses and the convolution kernel function as sums of complex-weighted causal complex exponentials.
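As an illustration of this representation, a damped cosine kernel e^{-t}·cos(5t) is exactly a complex-weighted sum of two causal complex exponentials, with weights a = (1/2, 1/2) and exponents b = (-1 ± 5j). The specific kernel below is an assumption chosen for the sketch, not one named in the disclosure.

```python
import cmath
import math

# complex weights and exponents of the two-term representation
a = [0.5, 0.5]
b = [complex(-1.0, 5.0), complex(-1.0, -5.0)]

def kernel_exact(t):
    """The kernel in its usual real-valued form, e^{-t} * cos(5t)."""
    return math.exp(-t) * math.cos(5.0 * t)

def kernel_as_exponentials(t):
    """The same kernel as a complex-weighted sum of complex exponentials."""
    return sum(ai * cmath.exp(bi * t) for ai, bi in zip(a, b)).real
```

Because the two exponents are complex conjugates with equal real weights, the sum is real for all t and matches the damped cosine exactly; a kernel that is not exactly of this form would instead be approximated by fitting a small number of such terms.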

In some aspects, the event-driven input samples may include event-driven samples of an input signal and event-driven samples of a system impulse response function. As such, the convolution output may be computed by generating the convolution output in an event-driven manner in response to input signal events.

In block 704, the process interpolates the output between events. In some aspects, the process further computes a second convolution output directly from the convolution output, which is in an event-based format.

FIG. 8 is a block diagram illustrating an event-driven temporal convolution method for sampled signals according to aspects of the present disclosure. In block 802, the process initializes state variables (e.g., u^(n,m) and v^(n,m)). In block 804, the process represents a continuous-time input signal as a sequence of events. For example, an event-based sampling process (e.g., APM sampling) may be applied to generate pulse trains. In some aspects, the pulse trains may consist of positive pulses, negative pulses, or both (bipolar).

In block 806, the process determines whether an event has occurred. If an event has occurred, the process updates the state variables (e.g., u^(n,m) and v^(n,m)) in block 808. In block 810, the process computes the output signal at time t_k using the updated state variables. The output signal may include an approximation of the continuous-time output signal y(t). On the other hand, if an event has not occurred, the process may return to block 806 to await the occurrence of an event (e.g., remaining in an idle mode).
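The event-generation step of block 804 can be illustrated with send-on-delta sampling, one common APM scheme that emits a bipolar pulse train whenever the signal drifts by a fixed threshold. The threshold value and the helper names below are assumptions made for this sketch.

```python
def apm_sample(samples, delta=0.25):
    """Send-on-delta sampling: emit a +1/-1 event each time the signal
    drifts by `delta` from the current reference level.  `samples` is a
    list of (t, x) pairs in increasing time order."""
    events = []
    ref = samples[0][1]
    for t, x in samples:
        while x - ref >= delta:
            ref += delta
            events.append((t, +1))   # positive pulse
        while x - ref <= -delta:
            ref -= delta
            events.append((t, -1))   # negative pulse
    return events

def reconstruct(events, x0, delta=0.25):
    """Staircase approximation of the signal recovered from the pulse train."""
    level, trace = x0, []
    for t, p in events:
        level += p * delta
        trace.append((t, level))
    return trace
```

The bipolar event train produced this way is exactly the kind of input consumed by the state-variable updates of blocks 808 and 810, and the staircase reconstruction stays within one `delta` of the sampled signal.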

The various operations of the methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application-specific integrated circuit (ASIC), or a processor. Generally, where there are operations illustrated in the figures, those operations may have corresponding counterpart means-plus-function components with similar numbering.

As used herein, the term "determining" encompasses a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database, or another data structure), ascertaining, and the like. Additionally, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like. Furthermore, "determining" may include resolving, selecting, choosing, establishing, and the like.

As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of a, b, or c" is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.

The various illustrative logical blocks, modules, and circuits described in connection with the present disclosure may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include random access memory (RAM), read-only memory (ROM), flash memory, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.

The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.

The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, an example hardware configuration may comprise a processing system in a device. The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement signal processing functions. For certain aspects, a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits, such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further.

The processor may be responsible for managing the bus and general processing, including the execution of software stored on the machine-readable media. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Machine-readable media may include, by way of example, random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer program product. The computer program product may comprise packaging materials.

In a hardware implementation, the machine-readable media may be part of the processing system separate from the processor. However, as those skilled in the art will readily appreciate, the machine-readable media, or any portion thereof, may be external to the processing system. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer product separate from the device, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, as may be the case with cache and/or general register files. Although the various components discussed may be described as having a specific location, such as a local component, they may also be configured in various ways, such as certain components being configured as part of a distributed computing system.

The processing system may be configured as a general-purpose processing system with one or more microprocessors providing the processor functionality and external memory providing at least a portion of the machine-readable media, all linked together with other supporting circuitry through an external bus architecture. Alternatively, the processing system may comprise one or more neuromorphic processors for implementing the neuron models and models of neural systems described herein. As another alternative, the processing system may be implemented with an application-specific integrated circuit (ASIC) with the processor, the bus interface, the user interface, supporting circuitry, and at least a portion of the machine-readable media integrated into a single chip, or with one or more field-programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits that can perform the various functionality described throughout this disclosure. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system.

The machine-readable media may comprise a number of software modules. The software modules include instructions that, when executed by the processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when it executes instructions from that software module. Furthermore, it should be appreciated that aspects of the present disclosure result in improvements to the functioning of the processor, computer, machine, or other system implementing such aspects.

If implemented in software, the functions may be stored or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects, computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects, computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.

Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain aspects, the computer program product may include packaging material.

Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, the various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.

It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes, and variations may be made in the arrangement, operation, and details of the methods and apparatus described above without departing from the scope of the claims.

700‧‧‧method

702‧‧‧block

704‧‧‧block

Claims (31)

1. A method of processing asynchronous event-driven input samples of a continuous-time signal, comprising: calculating a convolution output directly from the event-driven input samples.

2. The method of claim 1, in which the convolution output is in an event-based format.

3. The method of claim 1, further comprising interpolating the output between events.

4. The method of claim 1, in which the convolution output is based at least in part on asynchronous pulse modulated (APM) encoded pulses.

5. The method of claim 1, in which the calculating comprises expressing an encoded pulse and a convolution kernel function as a sum of complex-weighted causal complex exponentials.

6. The method of claim 1, in which the calculating comprises approximating an encoded pulse and a convolution kernel function as a sum of complex-weighted causal complex exponentials.

7. The method of claim 1, in which the event-driven input samples comprise event-driven samples of an input signal and event-driven samples of a system impulse response function, and in which the calculating further comprises generating the convolution output in an event-driven manner in response to an input signal event.

8. The method of claim 1, further comprising calculating a second convolution output directly from the convolution output in an event-based format.

9. An apparatus for processing asynchronous event-driven input samples of a continuous-time signal, comprising: a memory; and at least one processor coupled to the memory, the at least one processor configured to calculate a convolution output directly from the event-driven input samples.

10. The apparatus of claim 9, in which the convolution output is in an event-based format.

11. The apparatus of claim 9, in which the at least one processor is further configured to interpolate the output between events.

12. The apparatus of claim 9, in which the at least one processor is further configured to calculate the convolution output based at least in part on asynchronous pulse modulated (APM) encoded pulses.

13. The apparatus of claim 9, in which the at least one processor is further configured to calculate the convolution output by expressing an encoded pulse and a convolution kernel function as a sum of complex-weighted causal complex exponentials.

14. The apparatus of claim 9, in which the at least one processor is further configured to calculate the convolution output by approximating an encoded pulse and a convolution kernel function as a sum of complex-weighted causal complex exponentials.

15. The apparatus of claim 9, in which the event-driven input samples comprise event-driven samples of an input signal and event-driven samples of a system impulse response function, and in which the at least one processor is further configured to calculate the convolution output by generating the convolution output in an event-driven manner in response to an input signal event.

16. The apparatus of claim 9, in which the at least one processor is further configured to calculate a second convolution output directly from the convolution output in an event-based format.

17. An apparatus for processing asynchronous event-driven input samples of a continuous-time signal, comprising: means for calculating a convolution output directly from the event-driven input samples; and means for interpolating the output between events.

18. The apparatus of claim 17, in which the convolution output is in an event-based format.

19. The apparatus of claim 17, in which the means for calculating calculates the convolution output based at least in part on asynchronous pulse modulated (APM) encoded pulses.

20. The apparatus of claim 17, in which the means for calculating calculates the convolution output by expressing an encoded pulse and a convolution kernel function as a sum of complex-weighted causal complex exponentials.

21. The apparatus of claim 17, in which the means for calculating calculates the convolution output by approximating an encoded pulse and a convolution kernel function as a sum of complex-weighted causal complex exponentials.

22. The apparatus of claim 17, in which the event-driven input samples comprise event-driven samples of an input signal and event-driven samples of a system impulse response function, and in which the means for calculating calculates the convolution output by generating the convolution output in an event-driven manner in response to an input signal event.

23. The apparatus of claim 17, further comprising means for calculating a second convolution output directly from the convolution output in an event-based format.

24. A non-transitory computer-readable medium having program code encoded thereon, the program code, when executed by a processor, causing the processor to process asynchronous event-driven input samples of a continuous-time signal, the program code comprising: program code to calculate a convolution output directly from the event-driven input samples.

25. The non-transitory computer-readable medium of claim 24, in which the convolution output is in an event-based format.

26. The non-transitory computer-readable medium of claim 24, further comprising program code to interpolate the output between events.

27. The non-transitory computer-readable medium of claim 24, further comprising program code to calculate the convolution output based at least in part on asynchronous pulse modulated (APM) encoded pulses.

28. The non-transitory computer-readable medium of claim 24, further comprising program code to calculate the convolution output by expressing an encoded pulse and a convolution kernel function as a sum of complex-weighted causal complex exponentials.

29. The non-transitory computer-readable medium of claim 24, further comprising program code to calculate the convolution output by approximating an encoded pulse and a convolution kernel function as a sum of complex-weighted causal complex exponentials.

30. The non-transitory computer-readable medium of claim 24, in which the event-driven input samples comprise event-driven samples of an input signal and event-driven samples of a system impulse response function, the non-transitory computer-readable medium further comprising program code to calculate the convolution output by generating the convolution output in an event-driven manner in response to an input signal event.

31. The non-transitory computer-readable medium of claim 24, further comprising program code to calculate a second convolution output directly from the convolution output in an event-based format.
TW104128169A 2014-09-05 2015-08-27 Event-driven temporal convolution for asynchronous pulse-modulated sampled signals TW201633181A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462046757P 2014-09-05 2014-09-05
US14/835,664 US11551076B2 (en) 2014-09-05 2015-08-25 Event-driven temporal convolution for asynchronous pulse-modulated sampled signals

Publications (1)

Publication Number Publication Date
TW201633181A true TW201633181A (en) 2016-09-16

Family

ID=55437803

Family Applications (1)

Application Number Title Priority Date Filing Date
TW104128169A TW201633181A (en) 2014-09-05 2015-08-27 Event-driven temporal convolution for asynchronous pulse-modulated sampled signals

Country Status (3)

Country Link
US (1) US11551076B2 (en)
TW (1) TW201633181A (en)
WO (1) WO2016036565A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9536293B2 (en) * 2014-07-30 2017-01-03 Adobe Systems Incorporated Image assessment using deep convolutional neural networks
US9953425B2 (en) 2014-07-30 2018-04-24 Adobe Systems Incorporated Learning image categorization using related attributes
US11335438B1 (en) * 2016-05-06 2022-05-17 Verily Life Sciences Llc Detecting false positive variant calls in next-generation sequencing
WO2017201627A1 (en) * 2016-05-26 2017-11-30 The Governing Council Of The University Of Toronto Accelerator for deep neural networks
US10303961B1 (en) * 2017-04-13 2019-05-28 Zoox, Inc. Object detection and passenger notification
US10211856B1 (en) * 2017-10-12 2019-02-19 The Boeing Company Hardware scalable channelizer utilizing a neuromorphic approach
KR102561261B1 (en) 2017-11-14 2023-07-28 삼성전자주식회사 Apparatus and method for processing convolution operation using kernel
US10846621B2 (en) * 2017-12-12 2020-11-24 Amazon Technologies, Inc. Fast context switching for computational networks
US10803379B2 (en) 2017-12-12 2020-10-13 Amazon Technologies, Inc. Multi-memory on-chip computational network
EP3976046A4 (en) 2019-06-03 2023-05-31 Sanford Burnham Prebys Medical Discovery Institute Machine-learning system for diagnosing disorders and diseases and determining drug responsiveness
CN113741718B (en) * 2020-05-29 2024-06-04 杭州海康威视数字技术股份有限公司 Control data conversion method and device and main control equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ITRM20060110A1 (en) 2006-03-03 2007-09-04 Cnr Consiglio Naz Delle Ricerche METHOD AND SYSTEM FOR THE AUTOMATIC DETECTION OF EVENTS IN SPORTS ENVIRONMENT
US7358884B1 (en) * 2006-10-05 2008-04-15 Apple Inc. Methods and systems for implementing a Digital-to-Analog Converter
US8229209B2 (en) 2008-12-26 2012-07-24 Five Apes, Inc. Neural network based pattern recognizer
TWI426420B (en) 2010-02-05 2014-02-11 Univ Yuan Ze Hand track identification system and method thereof
US20120155140A1 (en) * 2010-12-21 2012-06-21 Chung-Shan Institute of Science and Technology, Armaments, Bureau, Ministry of National Defense Asynchronous Sigma-Delta Modulation Controller
US8870783B2 (en) * 2011-11-30 2014-10-28 Covidien Lp Pulse rate determination using Gaussian kernel smoothing of multiple inter-fiducial pulse periods
CA2899571A1 (en) * 2013-01-30 2014-08-07 Japan Science And Technology Agency Digital filter for image processing, image processing apparatus, printing medium, recording medium, image processing method, and program
WO2014133506A1 (en) 2013-02-27 2014-09-04 Hrl Laboratories, Llc A mimo-ofdm system for robust and efficient neuromorphic inter-device communication
US9008840B1 (en) 2013-04-19 2015-04-14 Brain Corporation Apparatus and methods for reinforcement-guided supervised learning

Also Published As

Publication number Publication date
US11551076B2 (en) 2023-01-10
WO2016036565A1 (en) 2016-03-10
US20160071005A1 (en) 2016-03-10
