US20190065936A1 - Anti-hebbian and hebbian computing with thermodynamic ram - Google Patents

Anti-hebbian and hebbian computing with thermodynamic ram

Info

Publication number
US20190065936A1
Authority
US
United States
Prior art keywords
circuit
ahah
core
collection
hebbian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/010,439
Inventor
Alex Nugent
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Knowmtech LLC
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US16/010,439 priority Critical patent/US20190065936A1/en
Assigned to KNOWMTECH, LLC reassignment KNOWMTECH, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: NUGENT, ALEX
Publication of US20190065936A1 publication Critical patent/US20190065936A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N99/005

Abstract

A thermodynamic RAM circuit composed of a group of AHaH (Anti-Hebbian and Hebbian) computing circuits that form one or more kT-RAM circuits. The AHaH computing circuits can be configured as an AHaH computing stack. The kT-RAM circuit(s) can include one or more kT-Cores, each partitioned into AHaH nodes of any size via time multiplexing. The kT-Core couples readout electrodes together to form a larger combined kT-Core. AHaH Computing is the theoretical space encompassing the capabilities of AHaH nodes. At this level of development solutions have been found for problems as diverse as classification, prediction, anomaly detection, clustering, feature learning, actuation, combinatorial optimization and universal logic.

Description

    CROSS-REFERENCE TO PROVISIONAL APPLICATION
  • This patent application is a continuation of U.S. patent application Ser. No. 14/674,428 entitled “Anti-Hebbian and Hebbian Computing with Thermodynamic RAM,” which was filed on Mar. 31, 2015, the disclosure of which is incorporated herein by reference in its entirety. U.S. patent application Ser. No. 14/674,428 in turn claims priority under 35 U.S.C. 119(e) to U.S. Provisional Patent Application Ser. No. 61/975,028, entitled “AHaH Computing with Thermodynamic RAM,” which was filed on Apr. 4, 2014, the disclosure of which is also incorporated herein by reference in its entirety.
  • STATEMENT OF GOVERNMENT RIGHTS
  • The United States Government has certain rights in this invention pursuant to Contract No. FA8750-13-C-0031 awarded by the United States Air Force.
  • TECHNICAL FIELD
  • Embodiments are generally related to the field of AHaH (Anti-Hebbian and Hebbian) learning computing-based devices, methods and systems. Embodiments are additionally related to the field of thermodynamic RAM (Random Access Memory). Embodiments also relate to the field of machine learning.
  • BACKGROUND
  • Machine learning (ML) systems are composed of (usually large) numbers of adaptive weights. The goal of ML is to adapt the values of these weights based on exposure to data to optimize a function, for example temporal prediction, spatial classification or reward. This foundational objective of ML creates friction with modern methods of computing, since every adaptation event necessarily reduces to a communication procedure between memory and processing resources separated by a distance. The power required to simulate the adaptive network grows impractically large, owing to the tremendous energy consumed shuttling information back and forth.
  • Nature, on the other hand, does not separate memory and processing. Rather, the act of memory access is the act of computing is the act of adaptation. The memory processing distance goes to zero and power efficiency explodes by factors exceeding a billion.
  • Modern computing allows us to explore the universe of all possible ways to adapt. Creating intrinsically adaptive hardware implies that we give up this flexibility and rely on just one method. After all, neurobiological researchers have unearthed dozens of plasticity methods in a brain, which would seem to imply that they are all important in some way or another. If we take a step back and look at all of Nature, however, we find that a viable solution is literally all around us in both biological and non-biological systems. The solution is remarkably simple, and it is obviously universal.
  • We find the solution around us in rivers, lightning and trees but also deep within us. The air that we breathe is coupled to our blood through thousands of bifurcating channels that form our lungs. Our brain is coupled to our blood through thousands of bifurcating channels that form our circulatory system, and our neurons are coupled to our brain through the thousands of bifurcating channels forming our axons and dendrites. At all scales we see flow systems built of a very simple fractal building block.
  • BRIEF SUMMARY
  • The following summary is provided to facilitate an understanding of some of the innovative features unique to the disclosed embodiments and is not intended to be a full description. A full appreciation of the various aspects of the embodiments disclosed herein can be gained by taking the entire specification, claims, drawings, and abstract as a whole.
  • It is, therefore, one aspect of the disclosed embodiments to provide for a thermodynamic circuit formed of differential pairs of memristors.
  • It is another aspect of the disclosed embodiments to provide for a thermodynamic RAM Core comprising collections of differential pairs of memristors.
  • It is another aspect of the disclosed embodiments to provide a kT-RAM processor composed of one or more kT-Cores.
  • It is another aspect of the disclosed embodiments to provide an instruction set for a kT-RAM processor.
  • It is yet another aspect of the disclosed embodiments to provide for an AHaH technology computing stack.
  • It is yet another aspect of the disclosed embodiments to provide a specification for a general-purpose adaptive computing resource.
  • The aforementioned aspects and other objectives and advantages can now be achieved as described herein. An AHaH (Anti-Hebbian and Hebbian) circuit is disclosed, which includes a collection of differential pairs of memristors. A kT-Core can be implemented, which includes an AHaH Circuit with a RAM interface, and is capable of partitioning via time multiplexing. A kT-RAM processor is composed of a collection of kT-Cores. AHaH Computing is the theoretical space encompassing the capabilities of AHaH nodes, and kT-RAM is a learning processor providing random access to AHaH learning. At this level of development solutions have been found for problems as diverse as classification, prediction, anomaly detection, clustering, feature learning, actuation, combinatorial optimization and universal logic.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying figures, in which like reference numerals refer to identical or functionally-similar elements throughout the separate views and which are incorporated in and form a part of the specification, further illustrate the present invention and, together with the detailed description of the invention, serve to explain the principles of the present invention.
  • FIG. 1 illustrates a graphic depicting how multiple conduction pathways compete to dissipate energy through a plastic (pliable or adaptive) container, and how the container will adapt in a particular way that leads to the maximization of energy dissipation and Anti-Hebbian and Hebbian (AHaH) plasticity;
  • FIG. 2 illustrates a graph depicting a pinched hysteresis loop indicative of a generalized Meta-Stable Switch (MSS) Memristor model;
  • FIG. 3 illustrates a schematic diagram of a memristor as an adaptive energy-dissipating pathway and two competing memristors that form a Thermodynamic Synapse (kT-Synapse), in accordance with a preferred embodiment;
  • FIG. 4 illustrates a schematic diagram of an AHaH circuit that can be formed when a collective of kT-Synapses are coupled to a common readout line, in accordance with a preferred embodiment;
  • FIG. 5 illustrates a schematic diagram of an AHaH circuit with a RAM interface, in accordance with a preferred embodiment;
  • FIG. 6 illustrates a schematic diagram of kT-Cores coming together to form kT-RAM, an adaptive computational resource for any requesting digital process, in accordance with a preferred embodiment;
  • FIG. 7 illustrates a kT-RAM instruction set, in accordance with an alternative embodiment;
  • FIG. 8 illustrates an example spike encoder and related terminology; and
  • FIG. 9 illustrates a variety of network topologies of AHaH nodes possible with kT-RAM.
  • DETAILED DESCRIPTION
  • The particular values and configurations discussed in these non-limiting examples can be varied and are cited merely to illustrate at least one embodiment and are not intended to limit the scope thereof.
  • FIG. 1 illustrates a graphic 10 depicting how multiple conduction pathways compete to dissipate energy through a plastic (pliable or adaptive) container, and how the container will adapt in a particular way that leads to the maximization of energy dissipation. This mechanism is called the Anti-Hebbian and Hebbian (AHaH) plasticity rule, and is computationally universal and leads to general-purpose solutions to machine learning. Because the AHaH rule is a physical process, we can create extremely efficient and dense AHaH synaptic circuits with memristive components. These circuits form a generic adaptive computing resource we call Thermodynamic Random Access Memory or kT-RAM for brevity.
  • The kT-RAM approach offers the unique possibility of providing a specification for a general-purpose adaptive computing resource, since the components that it is built from can be rigorously defined and their function abstracted or “black-boxed” at each level of the technology stack. This allows individuals to specialize at one or more levels of the stack. Improvements at various levels of the stack can propagate throughout the whole technology ecosystem, from materials to markets, without any single technology vendor having to bridge the whole stack, a herculean feat that would be close to impossible. The rest of this disclosure outlines the levels of the technology stack needed to support an AHaH Computing industry.
  • FIG. 2 illustrates a graph 20 depicting a pinched hysteresis plot indicative of a generalized Meta-Stable Switch (MSS) Memristor model. Many memristive materials have been reported, and it is likely that more will be invented. The generalized Meta-Stable Switch (MSS) Memristor model is an attempt to accurately capture the properties of memristors at a level of abstraction sufficient to enable efficient circuit simulations while describing as wide a range of devices as possible. The MSS model provides a common ground from which a diversity of materials can be compared and incorporated into the technology stack. By modeling a device with the MSS model, a material scientist can evaluate its utility through emulation across domains of machine learning and computing and gain valuable insight into what actually is, and is not, computationally useful.
  • A Meta-Stable Switch (MSS) is an idealized two-state element that switches probabilistically between its two states as a function of applied voltage bias and temperature. A memristor is modeled as a collection of MSSs evolving in time. The total current through the device comes from both a memory-dependent current component, I_m, and a Schottky diode current, I_s, in parallel:

  • I = ϕ·I_m(V, t) + (1 − ϕ)·I_s(V),
  • where ϕ ∈ [0, 1]. A value of ϕ=1 represents a device that contains no diode effects. The MSS model can be made more complex to account for failure modes, for example by making the MSS state potentials temporally variable. Multiple MSS models with different variable state potentials can be combined in parallel or in series to model increasingly complex state systems.
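  • For illustration only, the behavior described above can be sketched as a few lines of Python. The class below is a rough emulation of an MSS-based memristor, not a model taken from this disclosure: the population of two-state switches, the logistic transition probabilities, the parameter values and the simple exponential Schottky term are all illustrative assumptions.

```python
import numpy as np

# Rough emulation of a generalized Meta-Stable Switch (MSS) memristor: a
# population of two-state switches hops probabilistically between states A and
# B as a function of applied voltage and temperature. Parameter values and the
# logistic switching probabilities are illustrative assumptions.

class MSSMemristor:
    def __init__(self, n_switches=1000, G_A=1e-6, G_B=1e-4,
                 V_A=0.17, V_B=0.22, phi=1.0, T=300.0):
        self.n = n_switches                  # number of metastable switches
        self.G_A, self.G_B = G_A, G_B        # per-switch conductance in states A and B
        self.V_A, self.V_B = V_A, V_B        # state potentials (switching thresholds)
        self.phi = phi                       # fraction of memory-dependent current
        self.Vt = 1.380649e-23 * T / 1.602176634e-19  # thermal voltage kT/q, in volts
        self.n_B = n_switches // 2           # switches currently in state B

    def conductance(self):
        return (self.n - self.n_B) * self.G_A + self.n_B * self.G_B

    def step(self, V, dt=1e-4, tau=1e-4):
        """Probabilistically move switches between states A and B as a
        function of applied voltage bias and temperature."""
        ratio = min(dt / tau, 1.0)
        p_ab = ratio / (1.0 + np.exp(-(V - self.V_A) / self.Vt))  # A -> B
        p_ba = ratio / (1.0 + np.exp(+(V + self.V_B) / self.Vt))  # B -> A
        n_A = self.n - self.n_B
        self.n_B += np.random.binomial(n_A, p_ab)
        self.n_B -= np.random.binomial(self.n_B, p_ba)

    def current(self, V):
        """Total current I = phi * I_m(V, t) + (1 - phi) * I_s(V)."""
        I_m = self.conductance() * V                # memory-dependent component
        I_s = 1e-9 * (np.exp(V / 0.026) - 1.0)      # illustrative Schottky diode term
        return self.phi * I_m + (1.0 - self.phi) * I_s
```

  • Sweeping V sinusoidally and plotting current(V) against V for such a device traces out, qualitatively, the kind of pinched hysteresis loop shown in FIG. 2.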
  • FIG. 3 illustrates a schematic diagram of a memristor circuit 30, in accordance with a preferred embodiment. A memristor is an adaptive energy-dissipating pathway. This is demonstrated by configuration 32, pathways 34, 36 and respective configurations 38, 39 (respectively, 1-2 and 2-1). FIG. 3 illustrates how two competing memristors form a Thermodynamic Synapse (kT-Synapse). kT-synapses come in two configurations, 1-2 and 2-1, depending on the direction of energy flow. When two adaptive energy-dissipating pathways compete for conduction resources, a kT-Synapse will emerge. We see this building block for self-organized structures throughout nature, for example in arteries, veins, lungs, neurons, leaves, branches, roots, lightning, rivers and mycelium networks of fungus. We observe that in all cases there is a particle that flows through the assembly that is either directly a carrier of free energy dissipation or else it gates access, like a key to a lock, to free energy dissipation of the units in the collective. Some examples of these particles include water in plants, ATP in cells, blood in bodies, neurotrophins in brains, and money in economies. In memristive electronics, the particle is of course the electron.
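  • As a companion sketch, again illustrative rather than part of the disclosure, a kT-Synapse can be emulated as two adaptive conductances in series that compete for the applied read voltage. The simple proportional conductance update below stands in for a full memristor model, and the class and parameter names are assumptions.

```python
# Illustrative kT-Synapse emulation: two adaptive conductances (pathways a and b)
# in series, competing for the voltage applied across the pair.

class KTSynapse:
    def __init__(self, g_a=1e-5, g_b=1e-5, g_min=1e-6, g_max=1e-4, rate=1e-6):
        self.g_a, self.g_b = g_a, g_b          # conductances of the two pathways
        self.g_min, self.g_max = g_min, g_max  # limits on the conduction resources
        self.rate = rate                       # adaptation rate

    def weight(self):
        """Read the synaptic state as the normalized conductance differential."""
        return (self.g_a - self.g_b) / (self.g_a + self.g_b)

    def drive(self, V):
        """Apply a voltage across the series pair; each pathway adapts in
        proportion to the voltage it drops, so the two pathways compete."""
        v_mid = V * self.g_a / (self.g_a + self.g_b)           # divider node voltage
        self.g_a += self.rate * (V - v_mid)                    # drop across pathway a
        self.g_b += self.rate * v_mid                          # drop across pathway b
        self.g_a = min(max(self.g_a, self.g_min), self.g_max)  # stay within resources
        self.g_b = min(max(self.g_b, self.g_min), self.g_max)
```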
  • FIG. 4 illustrates a schematic diagram of an AHaH circuit 40 that can be formed when a collective of kT-Synapses are coupled to a common readout line, in accordance with a preferred embodiment. The AHaH circuit 40 shown in FIG. 4 is formed when a collective of kT-Synapses are coupled to a common readout line. Through spike encoding and temporal multiplexing, an AHaH node is capable of being partitioned into smaller AHaH nodes. An AHaH node circuit provides a simple but universal computational and adaptation resource.
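  • At the next level of the stack, an AHaH node can be emulated functionally rather than at the circuit level. The sketch below assumes a simplified Anti-Hebbian and Hebbian update, a Hebbian term proportional to the sign of the evaluation and an anti-Hebbian term proportional to its magnitude; the constants and the exact functional form are illustrative choices, not values specified in this disclosure.

```python
import numpy as np

# Functional (digital) emulation of an AHaH node: spike inputs select synapses
# coupled to a common readout, and only those weights receive an
# Anti-Hebbian-and-Hebbian update. Constants and form are illustrative.

class AHaHNode:
    def __init__(self, n_synapses, alpha=1e-3, beta=5e-4, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.01, n_synapses)  # differential synaptic weights
        self.alpha = alpha                          # Hebbian gain
        self.beta = beta                            # anti-Hebbian gain

    def read(self, spikes):
        """Evaluate the node: sum the weights of the active (spiking) inputs."""
        return float(np.sum(self.w[spikes]))

    def update(self, spikes, y, supervise=None):
        """Adapt only the active synapses: the Hebbian term reinforces the
        evaluated (or supervised) state, while the anti-Hebbian term pulls the
        output back toward the decision boundary."""
        target = y if supervise is None else supervise
        self.w[spikes] += self.alpha * np.sign(target) - self.beta * y
```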
  • FIG. 5 illustrates a schematic diagram of an AHaH circuit 50 that includes a RAM interface, in accordance with a preferred embodiment. A circuit 42 composed of column decoders and row decoders is combined with an AHaH circuit 40 to form a kT-Core circuit 44 that includes an AHaH controller capable of executing instruction set 46.
  • kT-RAM provides a generic substrate from which any topology can be constructed. AHaH nodes can have as few or as many synapses as the application requires and can be connected in whatever way is desired. This universality is possible because of the RAM interface and temporal partitioning or multiplexing.
  • The kT-Core exposes a simple instruction set describing the direction of applied bias voltage, forward (F) or reverse (R), as well as the applied feedback: float (F), high (H), low (L), unsupervised (U), anti-unsupervised (A) and zero (Z). The kT-Core instruction set allows emulation with alternate or existing technologies, for example with traditional digital processing techniques coupled to Flash memory, a program running on a CPU, or emerging platforms such as Epiphany processors.
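  • One possible way to emulate this instruction set in software, using the functional AHaHNode sketched above, is shown below. The mapping of each feedback mode onto a weight update is an assumption made only for emulation; a hardware kT-Core would realize these instructions as applied voltages.

```python
# Illustrative decoding of kT-Core instructions against the AHaHNode sketch:
# the first letter selects the read-bias direction (F/R), the second selects
# the feedback mode (F/H/L/U/A/Z). The update semantics below are assumptions.

BIAS = {"F": +1.0, "R": -1.0}

def execute(node, instruction, spikes):
    bias, feedback = instruction[0], instruction[1]
    y = BIAS[bias] * node.read(spikes)      # read under forward or reverse bias

    if feedback == "F":                     # Float: read only, no adaptation
        pass
    elif feedback == "H":                   # High: supervise the output positive
        node.update(spikes, y, supervise=+1.0)
    elif feedback == "L":                   # Low: supervise the output negative
        node.update(spikes, y, supervise=-1.0)
    elif feedback == "U":                   # Unsupervised: reinforce own decision
        node.update(spikes, y)
    elif feedback == "A":                   # Anti-unsupervised: oppose own decision
        node.update(spikes, -y)
    elif feedback == "Z":                   # Zero: decay the selected weights
        node.w[spikes] *= 0.99
    else:
        raise ValueError(f"unknown instruction {instruction!r}")
    return y

# Example: y = execute(node, "FU", [3, 17, 42])
# performs a forward read with unsupervised feedback on three active inputs.
```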
  • FIG. 6 illustrates a schematic diagram of a circuit 60 based on kT-Cores coming together to form kT-RAM, an adaptive computational resource for any requesting digital process, in accordance with a preferred embodiment. The number of cores and the way in which they are addressed and accessed vary across implementations so as to be optimized for particular application areas. kT-Cores can be partitioned into AHaH nodes of any size via time multiplexing. Cores can also couple their readout electrodes together to form a larger combined core. Physical AHaH node sizes can vary from just one synapse to the size of the kT-RAM chip, while digital coupling extends the maximal size to “the cloud”, limited only by the cores' intrinsic adaptation rates and chip-to-chip communication.
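  • The partitioning described above can be sketched, again only at the emulator level, by treating a core as one shared synapse array plus a table of address sets, with time multiplexing reduced to serving one virtual AHaH node per call. The class and method names below are assumptions, building on the AHaHNode and execute sketches above.

```python
import numpy as np

# Illustrative emulation of core partitioning via time multiplexing: one shared
# physical synapse array serves many virtual AHaH nodes, each defined only by
# the set of synapse addresses it owns and accessed one at a time.

class KTCore:
    def __init__(self, n_synapses):
        self.array = AHaHNode(n_synapses)   # one shared physical synapse array
        self.partitions = {}                # node_id -> array of synapse addresses

    def allocate(self, node_id, addresses):
        """Carve a virtual AHaH node of any size out of the shared array."""
        self.partitions[node_id] = np.asarray(list(addresses), dtype=int)

    def call(self, node_id, instruction, active_inputs):
        """Serve one virtual node per call; multiplexing is sequential access."""
        addresses = self.partitions[node_id][list(active_inputs)]
        return execute(self.array, instruction, addresses)

# Example usage: two virtual nodes of different sizes on one 64k-synapse core.
# core = KTCore(65536)
# core.allocate("classifier", range(0, 1024))
# core.allocate("predictor", range(1024, 1040))
# y = core.call("classifier", "FU", [3, 17, 42])
```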
  • FIG. 7 illustrates a kT-RAM instruction set 70 for an emulator that allows developers to commence application development while remaining competitive with other machine learning approaches, in accordance with an alternative embodiment.
  • Emulators allow developers to commence application development while remaining competitive with other machine learning approaches. In other words, we can build a market for kT-RAM across all existing computing platforms while we simultaneously build the next generations of kT-RAM hardware.
  • FIG. 8 illustrates a representation of a spike encoding framework 80, in accordance with an alternative embodiment. There are many compelling motivations to use spikes: spikes allow for core partitioning and multiplexing, which enable arbitrary AHaH node sizes, and sparse spike codes are also very energy and bandwidth efficient. A spike framework such as framework 80 requires, for example, Spike Encoders (sensors), Spike Streams (wire bundles), Spike Channels (individual wires), a Spike Space (the number of wires), Spike Sets (the active spike channels) and, finally, Spikes (the state of being active).
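  • A minimal spike encoder along these lines can be sketched as follows: a real-valued sensor reading is mapped into a sparse spike set, i.e. the indices of the active channels within a fixed spike space. The contiguous-bin scheme used here is simply one illustrative choice of encoding, and the class name and parameters are assumptions.

```python
import numpy as np

# Minimal sketch of a spike encoder: a scalar reading becomes a sparse spike
# set (indices of active spike channels) within a fixed spike space.

class ScalarSpikeEncoder:
    def __init__(self, lo, hi, spike_space=64, active=4):
        self.lo, self.hi = lo, hi
        self.spike_space = spike_space   # total number of spike channels (wires)
        self.active = active             # size of every spike set (sparsity)

    def encode(self, value):
        """Return the spike set for one reading: indices of the firing channels."""
        x = (value - self.lo) / (self.hi - self.lo)
        x = min(max(x, 0.0), 1.0)                            # clamp to [0, 1]
        start = int(round(x * (self.spike_space - self.active)))
        return np.arange(start, start + self.active)          # active channel indices

# Example usage: encode a temperature reading and feed it to a core partition.
# encoder = ScalarSpikeEncoder(lo=-20.0, hi=50.0)
# spikes = encoder.encode(21.5)          # a small set of adjacent channel indices
# y = core.call("classifier", "FU", spikes)
```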
  • FIG. 9 illustrates a variety of AHaH node connection topologies. AHaH Computing is the theoretical space encompassing the capabilities of AHaH nodes. At this level of development solutions have been found for problems as diverse as classification, prediction, anomaly detection, clustering, feature learning, actuation, combinatorial optimization and universal logic. Just as modern computing is based on the concept of the ‘bit’ and quantum computing is based on the concept of the ‘qubit’, AHaH computing is built from the ‘ahbit’. AHaH attractor states are a reflection of the underlying statistics of the data stream. It is both the collection of synapses and also the structure of the information that is being processed that together result in an AHaH attractor state. Hence, an ‘ahbit’ is what results when we couple information to energy dissipation.
  • Thus, in a preferred embodiment a thermodynamic RAM circuit can be implemented, which includes a collection of kT-Core circuits. Each kT-Core among the collection of kT-Core circuits can include an AHaH circuit with a RAM interface. In another embodiment, an instruction set for a kT-Core learning circuit among the collection of kT-Core circuits can be implemented, which includes the following instructions: FF, FH, FL, FU, FA, FZ, RF, RH, RL, RU, RA, RZ. In yet another embodiment, at least one kT-RAM circuit can be implemented, which includes at least one kT-Core among the collection of kT-Core circuits partitioned into AHaH nodes of any size via time multiplexing. In another embodiment, at least one kT-Core circuit among the collection of kT-Core circuits couples readout electrodes together to form a larger combined kT-Core among the collection of kT-Core circuits.
  • It will be appreciated that variations of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. It will also be appreciated that various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims.

Claims (28)

1. (canceled)
2. (canceled)
3. (canceled)
4. (canceled)
5. (canceled)
6. (canceled)
7. (canceled)
8. A thermodynamic RAM (Random Access Memory) circuit, comprising:
a collection of kT (Thermodynamic)-Core circuits, wherein each kT-Core circuit among said collection of core kT-Core circuits comprises an AHaH (Anti-Hebbian and Hebbian) circuit comprising an AHaH node circuit that provides a universal computational and adaptation resource, wherein at least one kT-Core circuit among said collection of kT-Core circuits couples readout electrodes together to form a larger combined kT-Core among said collection of kT-Core circuits.
9. The circuit of claim 8 further comprising an instruction set for a kT-Core learning circuit among said collection of kT-Core circuits.
10. The circuit of claim 8 further comprising at least one kT-RAM circuit that comprises at least one kT-Core circuit among said collection of said kT-Core circuits partitioned into AHaH nodes via temporal multiplexing.
11. The circuit of claim 9 further comprising at least one kT-RAM circuit that comprises at least one kT-Core circuit among said collection of said kT-Core circuits partitioned into AHaH nodes via temporal multiplexing.
12. The circuit of claim 8 wherein said AHaH circuit is based on an AHaH node connection topology that provides for classification.
13. The circuit of claim 8 wherein said AHaH circuit is based on an AHaH node connection topology that provides for prediction.
14. The circuit of claim 8 wherein said AHaH circuit is based on an AHaH node connection topology that provides for anomaly detection.
15. The circuit of claim 8 wherein said AHaH circuit is based on an AHaH node connection topology that provides for clustering.
16. The circuit of claim 8 wherein said AHaH circuit is based on an AHaH node connection topology that provides for feature learning.
17. The circuit of claim 8 wherein said AHaH circuit is based on an AHaH node connection topology that provides for actuation.
18. The circuit of claim 8 wherein said AHaH circuit is based on an AHaH node connection topology that provides for combinatorial optimization and universal logic.
19. The circuit of claim 8 wherein said AHaH circuit comprises at least one memristor.
20. A thermodynamic RAM (Random Access Memory) circuit, comprising:
a collection of kT (Thermodynamic)-Core circuits, wherein each kT-Core circuit among said collection of core kT-Core circuits comprises an AHaH (Anti-Hebbian and Hebbian) circuit comprising at least one memristor, wherein at least one kT-Core circuit among said collection of kT-Core circuits couples readout electrodes together to form a larger combined kT-Core among said collection of kT-Core circuits, wherein said AHaH circuit comprises an AHaH node circuit, and wherein said AHaH node circuit provides a universal computational and adaptation resource.
21. The circuit of claim 12 wherein said AHaH circuit is based on an AHaH node connection topology that provides for at least one of the following: classification, prediction, anomaly detection, clustering, feature learning, actuation, combinatorial optimization and universal logic.
22. The circuit of claim 12 further comprising an instruction set for a kT-Core learning circuit among said collection of kT-Core circuits.
23. The circuit of claim 12 further comprising at least one kT-RAM circuit that comprises at least one kT-Core circuit among said collection of said kT-Core circuits partitioned into AHaH nodes via temporal multiplexing.
24. The circuit of claim 12 wherein said AHaH circuit comprises an AHaH node circuit.
25. The circuit of claim 16 wherein said AHaH node circuit provides a universal computational and adaptation resource.
26. A thermodynamic RAM (Random Access Memory) circuit, comprising:
a collection of kT (Thermodynamic)-Core circuits, wherein each kT-Core circuit among said collection of core kT-Core circuits comprises an AHaH (Anti-Hebbian and Hebbian) circuit, wherein at least one kT-Core circuit among said collection of kT-Core circuits couples readout electrodes together to form a larger combined kT-Core among said collection of kT-Core circuits and wherein said AHaH circuit comprises an AHaH node circuit; and
at least one kT-RAM circuit that comprises at least one kT-Core circuit among said collection of said kT-Core circuits partitioned into AHaH nodes via temporal multiplexing.
27. The circuit of claim 26 wherein said AHaH node circuit provides a universal computational and adaptation resource and wherein said AHaH circuit is based on an AHaH node connection topology that provides for at least one of the following: classification, prediction, anomaly detection, clustering, feature learning, actuation, combinatorial optimization and universal logic.
28. The circuit of claim 26 wherein said AHaH circuit comprises at least one memristor.
US16/010,439 2014-04-04 2018-06-16 Anti-hebbian and hebbian computing with thermodynamic ram Abandoned US20190065936A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/010,439 US20190065936A1 (en) 2014-04-04 2018-06-16 Anti-hebbian and hebbian computing with thermodynamic ram

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201461975028P 2014-04-04 2014-04-04
US14/674,428 US10049321B2 (en) 2014-04-04 2015-03-31 Anti-hebbian and hebbian computing with thermodynamic RAM
US16/010,439 US20190065936A1 (en) 2014-04-04 2018-06-16 Anti-hebbian and hebbian computing with thermodynamic ram

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/674,428 Continuation US10049321B2 (en) 2014-04-04 2015-03-31 Anti-hebbian and hebbian computing with thermodynamic RAM

Publications (1)

Publication Number Publication Date
US20190065936A1 true US20190065936A1 (en) 2019-02-28

Family

ID=54210057

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/674,428 Active 2036-12-27 US10049321B2 (en) 2014-04-04 2015-03-31 Anti-hebbian and hebbian computing with thermodynamic RAM
US16/010,439 Abandoned US20190065936A1 (en) 2014-04-04 2018-06-16 Anti-hebbian and hebbian computing with thermodynamic ram

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/674,428 Active 2036-12-27 US10049321B2 (en) 2014-04-04 2015-03-31 Anti-hebbian and hebbian computing with thermodynamic RAM

Country Status (1)

Country Link
US (2) US10049321B2 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9978015B2 (en) * 2014-05-30 2018-05-22 Knowmtech, Llc Cortical processing with thermodynamic RAM
US10311357B2 (en) * 2014-06-19 2019-06-04 Knowmtech, Llc Thermodynamic-RAM technology stack
US11521045B2 (en) 2017-06-14 2022-12-06 Knowm, Inc. Anti-Hebbian and Hebbian (AHAH) computing
CN110619907B (en) * 2019-08-28 2021-06-04 中国科学院上海微***与信息技术研究所 Synapse circuit, synapse array and data processing method based on synapse circuit
CN110619908B (en) * 2019-08-28 2021-05-25 中国科学院上海微***与信息技术研究所 Synapse module, synapse array and weight adjusting method based on synapse array
CN111027619B (en) * 2019-12-09 2022-03-15 华中科技大学 Memristor array-based K-means classifier and classification method thereof
CN111611218A (en) * 2020-04-24 2020-09-01 武汉大学 Distributed abnormal log automatic identification method based on deep learning

Family Cites Families (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7398259B2 (en) 2002-03-12 2008-07-08 Knowmtech, Llc Training of a physical neural network
US7412428B2 (en) 2002-03-12 2008-08-12 Knowmtech, Llc. Application of hebbian and anti-hebbian learning to nanotechnology-based physical neural networks
US20130289902A1 (en) 2012-04-30 2013-10-31 Knowm Tech, Llc Anomaly detection utilizing energy flow networks
US20030236760A1 (en) 2002-06-05 2003-12-25 Alex Nugent Multi-layer training in a physical neural network formed utilizing nanotechnology
US8990136B2 (en) 2012-04-17 2015-03-24 Knowmtech, Llc Methods and systems for fractal flow fabric
US20040039717A1 (en) 2002-08-22 2004-02-26 Alex Nugent High-density synapse chip using nanoparticles
US20110302119A1 (en) 2010-06-08 2011-12-08 Alex Nugent Self-organizing circuits
US20110145179A1 (en) 2009-12-10 2011-06-16 Knowmtech, Llc Framework for the organization of neural assemblies
US9269043B2 (en) 2002-03-12 2016-02-23 Knowm Tech, Llc Memristive neural processor utilizing anti-hebbian and hebbian technology
US8156057B2 (en) 2003-03-27 2012-04-10 Knowm Tech, Llc Adaptive neural network utilizing nanotechnology-based components
US8332339B2 (en) 2008-11-05 2012-12-11 Knowmtech, Llc Watershed memory systems and methods
US8983886B2 (en) * 2012-03-28 2015-03-17 Knowmtech, Llc Self-evolvable logic fabric
US8972316B2 (en) 2012-06-28 2015-03-03 Knowmtech, Llc Extensible adaptive classification framework
US8909580B2 (en) 2011-01-26 2014-12-09 Knowmtech, Llc Methods and systems for thermodynamic evolution
US20040193558A1 (en) 2003-03-27 2004-09-30 Alex Nugent Adaptive neural network utilizing nanotechnology-based components
US9104975B2 (en) 2002-03-12 2015-08-11 Knowmtech, Llc Memristor apparatus
US6889216B2 (en) 2002-03-12 2005-05-03 Knowm Tech, Llc Physical neural network design incorporating nanotechnology
US20120150780A1 (en) 2003-03-27 2012-06-14 Known Tech, LLC Physical neural network
US7392230B2 (en) 2002-03-12 2008-06-24 Knowmtech, Llc Physical neural network liquid state machine utilizing nanotechnology
US8781983B2 (en) 2009-12-29 2014-07-15 Knowmtech, Llc Framework for the evolution of electronic neural assemblies toward directed goals
US7752151B2 (en) 2002-06-05 2010-07-06 Knowmtech, Llc Multilayer training in a physical neural network formed utilizing nanotechnology
US7827131B2 (en) 2002-08-22 2010-11-02 Knowm Tech, Llc High density synapse chip using nanoparticles
US7426501B2 (en) 2003-07-18 2008-09-16 Knowntech, Llc Nanotechnology neural network methods and systems
US7409375B2 (en) 2005-05-23 2008-08-05 Knowmtech, Llc Plasticity-induced self organizing nanotechnology for the extraction of independent components from a data stream
US7502769B2 (en) 2005-01-31 2009-03-10 Knowmtech, Llc Fractal memory and computational methods and systems based on nanotechnology
US7420396B2 (en) 2005-06-17 2008-09-02 Knowmtech, Llc Universal logic gate utilizing nanotechnology
US7599895B2 (en) 2005-07-07 2009-10-06 Knowm Tech, Llc Methodology for the configuration and repair of unreliable switching elements
US7930257B2 (en) 2007-01-05 2011-04-19 Knowm Tech, Llc Hierarchical temporal memory utilizing nanotechnology
GB2457912A (en) * 2008-02-27 2009-09-02 Silicon Basis Ltd An FPGA which is reconfigured between each clock cycle
US8909576B2 (en) * 2011-09-16 2014-12-09 International Business Machines Corporation Neuromorphic event-driven neural computing architecture in a scalable neural network
US8918353B2 (en) 2012-02-22 2014-12-23 Knowmtech, Llc Methods and systems for feature extraction
US9099179B2 (en) 2013-01-04 2015-08-04 Knowmtech, Llc Thermodynamic bit formed of two memristors
US20150019468A1 (en) 2013-07-09 2015-01-15 Knowmtech, Llc Thermodynamic computing
US9679241B2 (en) 2013-09-09 2017-06-13 Knowmtech, Llc Thermodynamic random access memory for neuromorphic computing utilizing AHaH (anti-hebbian and hebbian) and memristor components

Also Published As

Publication number Publication date
US10049321B2 (en) 2018-08-14
US20150286926A1 (en) 2015-10-08

Similar Documents

Publication Publication Date Title
US20190065936A1 (en) Anti-hebbian and hebbian computing with thermodynamic ram
US10614358B2 (en) Memristive nanofiber neural networks
Thakur et al. Large-scale neuromorphic spiking array processors: A quest to mimic the brain
US8712940B2 (en) Structural plasticity in spiking neural networks with symmetric dual of an electronic neuron
US8843425B2 (en) Hierarchical routing for two-way information flow and structural plasticity in neural networks
US11176446B2 (en) Compositional prototypes for scalable neurosynaptic networks
US20200110999A1 (en) Thermodynamic ram technology stack
CN110998486A (en) Ultra-low power neuron morphological artificial intelligence computing accelerator
DE112008003510T5 (en) Micro- and / or nanoscale neuromorphs have integrated hybrid circuitry
WO2008130645A2 (en) Computational nodes and computational-node networks that include dynamical-nanodevice connections
CN104809501A (en) Computer system based on brain-like coprocessor
De Salvo Brain-inspired technologies: Towards chips that think?
Roy et al. Scaling deep spiking neural networks with binary stochastic activations
Velasquez et al. Automated synthesis of crossbars for nanoscale computing using formal methods
Asaka et al. Quantum random access memory via quantum walk
Ogbodo et al. Light-weight spiking neuron processing core for large-scale 3D-NoC based spiking neural network processing systems
US8983886B2 (en) Self-evolvable logic fabric
Tran et al. Hierarchical memcapacitive reservoir computing architecture
Khalil et al. N 2 OC: Neural-network-on-chip architecture
Pyle et al. Hybrid spin‐CMOS stochastic spiking neuron for high‐speed emulation of In vivo neuron dynamics
Qiu et al. A novel ring-based small-world NoC for neuromorphic processor
US11521045B2 (en) Anti-Hebbian and Hebbian (AHAH) computing
Anh et al. Hybrid genetic-bees algorithm in multi-layer perceptron optimization
He et al. Developing all-skyrmion spiking neural network
Katayama et al. An energy-efficient computing approach by filling the connectome gap

Legal Events

Date Code Title Description
AS Assignment

Owner name: KNOWMTECH, LLC, NEW MEXICO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NUGENT, ALEX;REEL/FRAME:046110/0687

Effective date: 20180616

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION