
Introduction

In this Tech Note I introduce the design and structure of a Neural Network which emulates a 1-bit register. Such a design gives Neural Networks the opportunity to exhibit Short Term Memory, allowing our artificial representation of biological neural networks to function more like their inspiration. Outlined below is a proof, followed by a link to a Haskell prototype as a demonstration.

Limitations of the typical ANN

It could be said that a typical ANN (Artificial Neural Network) is more “inspired” by biology than actually based on the neural structure of a brain. There are a handful of problems with the typical (though not ubiquitous) ANN design. A key design goal forces these limitations: the network is expected to receive an input and produce an output. That is, you supply the ANN with a set of inputs, then process the sum of all the interactions between nodes to arrive at an output.

By contrast, a biological neural network is constantly running. It does not simply produce outputs in response to inputs; brains are in fact constantly processing information and constantly producing outputs. An ANN which only sums up the node interactions for a fixed input is immediately limited in how brain-like its behaviour can be.

Biological Neurons

Biological neurons have several interesting emergent characteristics. It is by studying these characteristics that we can make our ANNs more “brain like”. The following characteristics inspired this algorithm.

Recharge Cycle

Each neuron in the brain accepts chemical messenger signals from other neurons. They exchange messengers called neurotransmitters at receptor sites called synapses. Neurotransmitters trigger receptor sites to send an electrical signal down to their parent cell. The parent cell then has a “threshold” of electrical signals that it will tolerate before it discharges.

This discharge sends a notice to its outgoing synapses to transmit neurotransmitters to their “post-synaptic” neurons. The discharge is called an action potential. The rhythmic transfer of action potentials throughout the brain gives rise to all of the brain’s behaviour.

Electrical energy inside the neuron dissipates leaving behind a substantially lowered charge. Over the next several hundred milliseconds that neuron will “recharge” the lost energy to some default level. Consequently, attempting to trigger a recently discharged neuron requires more electrical energy than triggering an idle one.

Metronome Neurons

A select few neurons have unusual recharge cycles, resulting in metronome-like behaviour. (Aside: they are not officially called metronome neurons in the literature.) They perform a function similar to the clock in a modern computer. Unlike a computer’s clock, there are many of them scattered throughout the brain, and they are often involved in a vast array of timing-related neural circuits.

The unusual behaviour of these neurons is that their “default” electrical state is higher than their threshold for releasing an action potential. When these neurons release an action potential, they spend the next few hundred milliseconds recharging just as any other neuron would. However, they continue to charge until they hit their threshold again and release another action potential.

This cyclic behaviour allows the brain to exercise some precise timing mechanisms, similar to the way that computer architecture relies on the computer’s clock cycle for timing. It is worth noting that this is an analogy only.

Inhibitory Synapses

Synapses in the brain have receptor sites for different types of messenger chemicals. An unusual high-level fact about the brain is that it contains more inhibitory receptor sites than excitatory ones. The role of many receptor sites is to prevent other neurons from firing, as opposed to encouraging them to send a signal. These inhibitory messages allow networks to form patterns of activity not possible with only excitatory messages.

Feedback Loops

The final point of interest is how brains are wired in intricate feedback loops. Different brain regions send signals to each other that feed back to the original source. Within each region, different lobes form complex feedback patterns before, during and after sending messages to other regions. Within lobes there are smaller subdivisions, right down to a layered architecture at the level of individual neurons. Often a cluster of neurons requires a consensus achieved only via an “echo chamber” of feedback loops before that cluster passes the message downstream.

Elementary computing: a 1-bit register

The following diagram illustrates how a 1-bit register works in elementary computer architecture:
A DFF feeds its output back into a MUX, which accepts a second piece of data from an external source. The MUX takes a third connection supplying a write instruction. The DFF has an input from the computer’s clock and sends its output to an external reader.

Simply put: a DFF will output a signal from the previous clock cycle. This signal may have come from the DFF itself, allowing it to re-send the same information in subsequent clock cycles. The MUX will either feed back to the DFF its own output, or if the “write” instruction is set it will supply it with a new value. The DFF is able to differentiate between the current clock cycle and the previous one based on the “tick/tock” it receives from the computer’s clock.
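To make this concrete, here is a minimal Haskell sketch of my own (the names mux and register are illustrative, not taken from any circuit library). The DFF becomes a one-cycle delay over a list of per-tick values, and the register ties its own output back through the MUX:

mux :: Bool -> Bool -> Bool -> Bool
mux write newValue current = if write then newValue else current

-- The DFF delays its input by one clock cycle; the register boots as 0 (False).
-- Feeding outs back into the zip recreates the MUX/DFF feedback loop.
register :: [Bool] -> [Bool] -> [Bool]
register writes values = outs
  where
    outs = False : zipWith3 mux writes values outs

For example, register [True, False, False] [True, True, True] evaluates to [False, True, True, True]: the written 1 appears one clock cycle after the write and is then held.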

On a side note, the DFF is actually quite a complicated piece of architecture. It is implemented purely with Boolean Gates and does not “store” any signal in a capacitor or battery. This is the design for one type of DFF implemented using only NAND gates:
A DFF implementation using only NAND gates and several intricate feedback loops
Source: Edge triggered D Flip Flop on Wikipedia

Required Neural Network Architecture

Characteristics

To achieve a similar piece of engineering from an ANN requires a re-think of the typical Neural Network architecture. Key characteristics of the solution design:

  • The network must run continuously in a set of cycles, similar to the clock cycles of a computer
  • It must accept inputs from external sources that are capable of injecting data to its internal state
  • Any generated outputs must be captured by an external source

Composition of the network

In general terms we can describe the sets of components we will need for the new design:

  • M – the Neural Network itself
  • The set of all nodes in the network: N = { n | n is a node of M }
  • The set of input nodes: I ⊂ N
  • The set of output nodes: O ⊂ N
  • An input node representing a new value to write: v ∈ I
  • An input node representing a write instruction: x ∈ I
  • An output node representing data read from the register: s ∈ O
  • The set of edges between nodes, each a triple of “from node” f, “to node” t and integer weight w:
    E = { e | e = (ef, et, ew), ef ∈ N, et ∈ N, ew ∈ ℤ }
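To make these sets concrete, they might be sketched as Haskell types like so. This is my own type-level reading of the list above, not the prototype’s actual definitions; the node state fields anticipate the Phase 2 discussion below:

import qualified Data.Map as Map

type NodeId = Int

-- A node's state: resting charge, current charge, firing threshold
-- and a queue of unread messages.
data Node = Node
  { defaultState   :: Int
  , currentState   :: Int
  , thresholdState :: Int
  , messages       :: [Int]
  }

-- An edge e = (ef, et, ew): from node, to node and integer weight.
data Edge = Edge
  { fromNode :: NodeId
  , toNode   :: NodeId
  , weight   :: Int
  }

-- The network M: N as a map of nodes, E as the edge list, and the
-- input/output subsets I and O held as lists of node ids.
data Network = Network
  { nodes   :: Map.Map NodeId Node
  , edges   :: [Edge]
  , inputs  :: [NodeId]
  , outputs :: [NodeId]
  }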

5-phase approach

We operate the ANN in distinct cycles. Each cycle represents the amount of time it takes to complete a calculation. We represent a cycle as a function (cycle') which transforms one state of the ANN into another:

cycle' :: State M -> State M

Note that our network does not depend entirely on its own activity. Because we require input from the outside world (and output to it), a broader definition (cycle) must include IO:

cycle :: State M -> IO I -> (O -> IO ()) -> IO (State M)

Our function cycle accepts the current state of the network, a set of Input values to inject into that state and a function that sends outputs out of the network. Ultimately cycle returns a transformation of the state of the network including any external effects. We further break down a cycle into 5 distinct phases.

Phase 1 – inject IO data into the network state:

injectIO :: State M -> IO I -> IO (State M)
phase1 :: State M -> IO (State M)

Phase 2 – each node reads from a set of messages. The messages represent a commutative transformation of its internal state. We describe the components of the node and its state as follows:

defaultState :: Num
currentState :: Num
thresholdState :: Num
messages :: [Num]

From here we have a transformation of `[Num] -> Num -> Num` which modifies the currentState based on the message queue. Thus phase 2 is:

readMessages :: State n -> State n
networkRead :: State N -> State N
phase2 :: State M -> State M
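In terms of the Network and Node types sketched earlier (with Network standing in for State M and Node for State n), Phase 2 might look like the following; the fold is the commutative [Num] -> Num -> Num transformation:

-- Fold the message queue into the current state, then clear the queue.
readMessages :: Node -> Node
readMessages node = node
  { currentState = foldr (+) (currentState node) (messages node)
  , messages     = []
  }

-- Lift the per-node read across every node in the network.
networkRead :: Network -> Network
networkRead net = net { nodes = fmap readMessages (nodes net) }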

Phase 3 – check each node’s updated currentState against its thresholdState and determine if it should send a message. We define the functions of phase 3 as:

thresholdTriggered :: n -> Bool
extractEdges :: n -> [(n, Num)]
sendMessages :: n -> State M -> State M
networkSend :: State N -> State N
phase3 :: State M -> State M
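A sketch of Phase 3 under the same assumed types: every node at or above its threshold pushes the weight of each of its outgoing edges onto the message queue of that edge’s target node.

thresholdTriggered :: Node -> Bool
thresholdTriggered n = currentState n >= thresholdState n

-- Deliver the weight of every edge whose source node is firing.
-- Thresholds are checked against the pre-send state, so delivery
-- order within a phase does not matter.
networkSend :: Network -> Network
networkSend net = foldr deliver net firing
  where
    firing = [ e | e <- edges net
                 , maybe False thresholdTriggered
                     (Map.lookup (fromNode e) (nodes net)) ]
    deliver e m = m { nodes = Map.adjust enqueue (toNode e) (nodes m) }
      where enqueue n = n { messages = weight e : messages n }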

Phase 4 – extract from our state any output data and send it via IO:

outputNodes :: State M -> State O
sendOutputs :: State O -> IO ()
phase4 :: State M -> IO (State M)

Phase 5 – run through our network and reset the internal state of each node back to its default value:

resetNode :: State n -> State n
resetNetwork :: State N -> State N
phase5 :: State M -> State M

Using this structure we can run our phases in a cycle, repeating until the network is required to shut down. With this architecture in place we can look at the design of a network that emulates a 1-bit register.
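Gluing the sketches together, the pure portion of a cycle and the IO-aware wrapper might read as follows. Here inject and readOutputs are my stand-ins for the Phase 1 and Phase 4 plumbing: injection follows the rule described later in this note, pushing a message equal to the node’s threshold onto each active input node. (The Prelude’s own cycle must be hidden to reuse the name.)

import Prelude hiding (cycle)

-- Phase 5: every node returns to its resting charge.
resetNetwork :: Network -> Network
resetNetwork net =
  net { nodes = fmap (\n -> n { currentState = defaultState n }) (nodes net) }

-- Phase 1: a True input bit enqueues a message equal to that node's threshold.
inject :: [Bool] -> Network -> Network
inject bits net = foldr fire net (zip (inputs net) bits)
  where
    fire (nid, True) m = m { nodes = Map.adjust
        (\n -> n { messages = thresholdState n : messages n }) nid (nodes m) }
    fire _           m = m

-- Phase 4: report whether each output node fired this cycle.
readOutputs :: Network -> [Bool]
readOutputs net =
  [ maybe False thresholdTriggered (Map.lookup o (nodes net))
  | o <- outputs net ]

-- Phases 2, 3 and 5 as one pure transformation.
cycle' :: Network -> Network
cycle' = resetNetwork . networkSend . networkRead

cycle :: Network -> IO [Bool] -> ([Bool] -> IO ()) -> IO Network
cycle net readIns writeOuts = do
  ins <- readIns                                         -- Phase 1
  let net' = networkSend (networkRead (inject ins net))  -- Phases 2 and 3
  writeOuts (readOutputs net')                           -- Phase 4
  pure (resetNetwork net')                               -- Phase 5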

Neural Network Register

The design

Like most good designs, this one starts on paper:
6 nodes (0 - 5) create a feedback loop pattern for an ANN register design.
The premise is simple and not unlike our register design. For the sake of simplicity, the numeric state of each node is represented by an integer: 0, 10 or 20. The weight of each edge is also an integer: -10, 10 or 20.

Overview

A single node (#1) represents the current value of the register. An input node (#4) represents the “write” bit, telling the register whether or not to write the incoming value to memory. A value node (#5) represents the incoming value via IO. If the incoming value is 1 (or True) then we inject a message into its queue. This message is equal to the node’s required threshold.

The remaining nodes (#0, #2, #3) provide the extra functionality we need to achieve a register. First we have an output node (#2), which is read via IO and sends data out to the outside world.

Next is a “Metronome Node” (#0), whose default state is equal to its threshold state. It provides a constant signal to both our primary node (#1) and our output node (#2), both of which require its message of value 10 to reach their internal threshold of 20. Each round the Metronome Node resets to its default of 10, which automatically triggers its threshold unless otherwise interrupted.

The write node (#4) will send a -10 signal to the Metronome, preventing it from reaching its threshold. Thus if the write value is a 0, the primary node will not have a high enough state to trigger its threshold, resetting the register to 0.

The last remaining node (#3) provides the final push necessary to write to our register. If the write value is 1, the extra 10 signal it supplies lets #3 hit its threshold of 20.

Node #3 overrides the suppression of the Metronome node by pushing a message signal to our primary node (#1). Also note that node #3 is capable of receiving a signal without writing it. In this way the value bit can be connected to a constant stream of data, and whenever that data needs to be written into the register the write bit is triggered.
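Putting numbers to the picture, the schema might be written down with the types sketched earlier as follows. The weights come from the walkthrough below; the encoding itself is my reconstruction, not the prototype’s actual schema format:

registerNet :: Network
registerNet = Network
  { nodes = Map.fromList
      [ (0, Node 10 10 10 [])  -- N0 metronome: default state equals threshold
      , (1, Node 0 0 20 [])    -- N1 primary (register value)
      , (2, Node 0 0 20 [])    -- N2 output
      , (3, Node 0 0 20 [])    -- N3 write gate
      , (4, Node 0 0 10 [])    -- N4 write instruction bit
      , (5, Node 0 0 10 [])    -- N5 write value bit
      ]
  , edges =
      [ Edge 0 1 10, Edge 0 2 10     -- metronome drives N1 and N2
      , Edge 1 1 10, Edge 2 2 10     -- self-loops sustain the feedback loop
      , Edge 5 3 10                  -- value bit feeds the write gate
      , Edge 4 3 10, Edge 4 0 (-10)  -- write bit arms N3 and suppresses N0
      , Edge 3 1 20, Edge 3 2 20     -- N3 overrides the silenced metronome
      ]
  , inputs  = [4, 5]                 -- write instruction, then write value
  , outputs = [2]
  }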

Dot graph representation

An early version of the prototype was used to generate this DOT graph. Here the nodes (N0–N5) are represented as vertices on a graph (V0–V5). The position of the write bit, value bit and output bit correspond to the initial concept:
A DOT graph generated with GraphViz showing the described components of the network design

Detailed Analysis

We can prove fairly simply that the above configuration generates the desired effect. For the sake of brevity we ignore Phase 1 and Phase 4 in this discussion: they deal with IO interactions, so we stick to Phases 2, 3 and 5, which handle only the state of the network.

We start with our graph of six nodes, represented by the set M = {N0, N1, N2, N3, N4, N5}. From here we produce three subsets at the end of each phase of each cycle (they are sketched in code below):

  • X = {The set of nodes with outstanding messages}
  • G = {The set of nodes with an internal state > 0}
  • T = {The set of nodes with an internal state >= their threshold state}
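These three sets fall straight out of the types assumed in the earlier sketches:

-- X: nodes with outstanding messages.
withMessages :: Network -> [NodeId]
withMessages net = [ k | (k, n) <- Map.toList (nodes net), not (null (messages n)) ]

-- G: nodes with an internal state greater than zero.
aboveZero :: Network -> [NodeId]
aboveZero net = [ k | (k, n) <- Map.toList (nodes net), currentState n > 0 ]

-- T: nodes at or above their threshold state.
triggered :: Network -> [NodeId]
triggered net = [ k | (k, n) <- Map.toList (nodes net), thresholdTriggered n ]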

Cycle 0

Phase 2 – Messages read

  • X { empty set }
  • G { N0(s10) }
  • T { N0(10/10) }

Phase 3 – Messages sent

  • X { N1(m10), N2(m10) }
  • G { N0(s10) }
  • T { N0(10/10) }

Phase 5 – Reset state to default

  • X { N1(m10), N2(m10) }
  • G { N0(s10) }
  • T { N0(10/10) }

Cycle 1

In this cycle a write value of 1 is injected into N5. This message is set to the threshold value for N5, meaning it will definitely trigger this round. However, without writing being enabled via N4, we will see that the output of the register does not change.
Phase 2 – Messages read

Though N1 and N2 have an internal state greater than 0, this is not enough to trigger their threshold of 20. Only our node directly injected from IO is triggered (and the metronome node continues to trigger as usual).

  • X { empty set }
  • G { N0(s10), N1(s10), N2(s10), N5(s10) }
  • T { N0(10/10), N5(10/10) }

Phase 3 – Messages sent
Compared with the previous Phase 3, one extra node has ended up with a message – N3.

  • X { N1(m10), N2(m10), N3(m10) }
  • G { N0(s10), N1(s10), N2(s10), N5(s10) }
  • T { N0(10/10), N5(10/10) }

Phase 5 – Reset state to default

  • X { N1(m10), N2(m10), N3(m10) }
  • G { N0(s10) }
  • T { N0(10/10) }

Cycle 2

After the reset, the only difference from the end of Cycle 0 is the extra message waiting on N3.
Phase 2 – Messages read

  • X { empty set }
  • G { N0(s10), N1(s10), N2(s10), N3(s10) }
  • T { N0(10/10) }

We end Cycle 2 here – the only difference is that N3 is greater than 0; however, this is not enough to make a difference without crossing its threshold. We can see that writing a value to the register (activating N5) without also sending a write instruction (activating N4) has no measurable effect on the network.

Cycle 3

Here we activate both N4 and N5, thus signifying a write value of 1 and also letting the network know that the register needs to be updated.
Phase 2 – Messages read

  • X { empty set }
  • G { N0(s10), N1(s10), N2(s10), N5(s10), N4(s10) }
  • T { N0(10/10), N5(10/10), N4(10/10) }

Phase 3 – Messages sent
Now we see a much larger difference that will feed its way through the network. N0 has a message of -10; when read (next cycle) it will prevent the node from sending out its metronome message. Additionally, N3 now has two messages, which will allow it to trigger.

  • X { N0(m-10), N1(m10), N2(m10), N3(m10,m10) }
  • G { N0(s10), N1(s10), N2(s10), N5(s10), N4(s10) }
  • T { N0(10/10), N5(10/10), N4(10/10) }

Phase 5 – Reset state to default
Notice that N0 has a triggered state at the end of Phase 5. The ordering of our phases ensures that it will not actually trigger when it comes time to send messages: the -10 message will be read first.

  • X { N0(m-10), N1(m10), N2(m10), N3(m10,m10) }
  • G { N0(s10) }
  • T { N0(10/10) }

Cycle 4

Phase 2 – Messages read
Because N0 did not trigger (suppressed by -10 from N4), N1 and N2 will not get their regularly scheduled signal.

  • X { empty set }
  • G { N1(s10), N2(s10), N3(s20) }
  • T { N3(20/20) }

Phase 3 – Messages sent
To compensate for switching off the N0 metronome, N3 has sent messages carrying a weight of 20 to both N1 and N2. If N3 had been inactive (as in Cycle 2) then everything would stop here and the register would be zeroed. In this case however, N3 has sent two messages out so we follow the trail.

  • X { N1(m20), N2(m20) }
  • G { N1(s10), N2(s10), N3(s20) }
  • T { N3(20/20) }

Phase 5 – Reset state to default

  • X { N1(m20), N2(m20) }
  • G { N0(s10) }
  • T { N0(10/10) }

Cycle 5

In Cycle 5 we see the first step of the chain of events necessary to create a feedback loop. This feedback loop will allow the network to continue to output a value from the register.
Phase 2 – Messages read
Our N0 metronome node starts up again. Its broadcast provides the catalyst necessary for the feedback loop.

  • X { empty set }
  • G { N0(s10), N1(s20), N2(s20) }
  • T { N0(10/10), N1(20/20), N2(20/20) }

Phase 3 – Messages sent
Because N1 and N2 have cyclical edges, they provide the necessary loop to achieve a feedback signal.

  • X { N1(m10,m10), N2(m10,m10) }
  • G { N0(s10), N1(s20), N2(s20) }
  • T { N0(10/10), N1(20/20), N2(20/20) }

Phase 5 – Reset state to default

  • X { N1(m10,m10), N2(m10,m10) }
  • G { N0(s10) }
  • T { N0(10/10) }

Cycle 6

Phase 2 – Messages read

  • X { empty set }
  • G { N0(s10), N1(s20), N2(s20) }
  • T { N0(10/10), N1(20/20), N2(20/20) }

Cycle 6 has the same Phase 2 as Cycle 5 and will continue the feedback loop. During this time the omitted Phase 4 runs, sending an output via IO from N2. The loop is propped up by repeated signals from the metronome. To write a 0 to the register we simply send a write instruction to the network, which, as we saw earlier, blocks the operation of the metronome and ends the feedback loop.
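Assuming the sketches above (registerNet, cycle and friends), the whole walkthrough can be replayed as a short, hypothetical driver; each print shows the output bit after that cycle:

demo :: IO ()
demo = go registerNet
  [ [False, False]  -- Cycle 0: idle
  , [False, True ]  -- Cycle 1: value bit only, no write instruction
  , [False, False]  -- Cycle 2
  , [True,  True ]  -- Cycle 3: write instruction and value together
  , [False, False]  -- Cycle 4
  , [False, False]  -- Cycle 5: output goes high
  , [False, False]  -- Cycle 6: feedback loop holds the value
  ]
  where
    go _   []         = pure ()
    go net (ins:rest) = do
      net' <- cycle net (pure ins) print
      go net' rest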

Limitations of the design

At first it might seem that the key limitation is the single bit. Extending the design is unlikely to be difficult, however, as even in elementary computing the initial circuit design is easy to extend. And even if a multi-bit register proves difficult to implement, multiple 1-bit registers can be lined up to form memory addresses.

The key limitation of this approach is the difference between electrical signals and the cycles of our new neural net. Current travels down a wire at near the speed of light, allowing a feedback loop to complete its electrical journey within a single clock cycle. Each cycle of the Neural Network, by contrast, requires enough computing power to execute all 5 phases.

It may be possible to speed this up with a hardware implementation. For our software implementation, writing to the network takes exactly 3 cycles, so for 3 cycles after a write instruction is given the data being read from the register is stale. This is an obvious (though not insurmountable) implementation constraint.

Prototype

Prototype in Haskell – MemNet

A prototype of the network called MemNet is available in Haskell. The code is open source and you can find MemNet on Bitbucket.

The network ultimately forms a cyclic graph whose state needs to change often. Because of this, a standard data structure was not suitable in Haskell; the stateful nature of the graph even made “tying the knot” pointless. To accommodate Haskell’s immutability a custom pointer system was created using an STArray. A constraint ensures that the existential type ‘s’ never leaks from the stateful array, while still allowing the extraction of data out of the state. This relies upon the RankNTypes language extension to work.
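The shape of that constraint, as I understand it, is roughly the following (a sketch only; the prototype’s actual function names will differ):

{-# LANGUAGE RankNTypes #-}
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STArray, newListArray)

-- The Rank-2 type forces the caller's action to work for every 's',
-- so no STArray reference can escape, yet a pure result can be extracted.
withNodeArray :: [Node] -> (forall s. STArray s Int Node -> ST s a) -> a
withNodeArray ns step = runST $ do
  arr <- newListArray (0, length ns - 1) ns
  step arr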

There is a (reasonably terrible) GUI made in Gloss. It is used in favour of a command line interface because the “output” value can be constantly streamed to the UI while input values are constantly streamed out of the UI. Toggling the write instruction is done by pressing Enter, while the value of the input bit is toggled with the UP and DOWN arrows.

The decision to use Haskell

My overall AI research is also in Haskell, and this prototype is a reflection of my learning experience to date. However, Haskell is (considered to be) poorly suited to implementing a highly stateful and IO-based application. This is evident from the fact that I had to develop my own pointer system to overcome the lack of mutable state available for cyclic data structures.

Having a pointer system throws out some of the benefits of algebraic data types. There are certain assurances that cannot be given at compile time (such as a node actually existing in the array). Frustratingly, this introduces the pure world of Haskell to the problem of Null Pointer Exceptions.

Further, Haskell forces the separation of stateful code and IO code by strict adherence to the type system. Given that I was living in the world of NPEs anyway, it likely would have been simpler (and quicker) to implement something like this in Python. My lack of experience with Python meant that my only other choices were Java or JavaScript. The code that runs the network took about half a day of planning and writing in Haskell; I’m certain it could have been done in a couple of hours in Java (possibly less than an hour in JavaScript).

Ultimately, Haskell’s type system is so expressive that the theory in this tech note translates incredibly well to code: stateful network data, pure functional transformations within the network and IO interactions outside the network all remain cleanly separated. This separation is more than type-trickery; it allows a code-based proof of the algorithm described above.

All that said, the hardest part of writing this code was not in fact the network itself. The most difficult component was the asynchronous plumbing necessary to get it to work. Thus the extra time taken to build the network is overshadowed by the effort required to solve concurrency problems.

Haskell’s type system makes concurrency far simpler than in other programming languages. The final prototype has the following variations on the above algorithm:

  • Phase 2, Phase 3 and Phase 5 are distinct phases with their own code, as described above
  • Phase 1 and Phase 4, which deal with IO, occur within a single “do” block in an “IO ()” monadic loop. The state of the network is encapsulated in the ST Monad and threaded through the loop in each tail call
  • A Mediator exists as a server-like function that separates the network from the outside world. The rate at which the Mediator operates is throttled by a lock mechanism provided by the UI, while the Mediator in turn throttles the network using a simple Semaphore on the Input values (see the sketch after this list).
  • The network, the Mediator and the UI run in separate threads. The input values from the UI are fed into the Mediator, which translates them for the network. Output values are then translated from the network into the UI
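As a rough sketch of that wiring, building on the cycle function sketched earlier (the MVar here plays the role of the simple Semaphore, and all names are mine):

import Control.Concurrent.MVar (MVar, takeMVar, putMVar)

-- The network blocks on takeMVar until the Mediator supplies fresh input,
-- which throttles the network's cycle rate to the Mediator's.
networkThread :: MVar [Bool] -> MVar [Bool] -> Network -> IO ()
networkThread inBox outBox net = do
  ins  <- takeMVar inBox
  net' <- cycle net (pure ins) (putMVar outBox)
  networkThread inBox outBox net'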

Scalability

There is one additional variation from the algorithm to the prototype. The algorithm specifically describes the network needed for a 1-bit register – two input bits and one output bit. Though the prototype has this exact configuration, the code does not impose these restrictions. The network is built by parsing a schema along with individual IO components. It works in this way:

  • Each Input and Output is a list of Boolean values representing the corresponding bits. The prototype has the primary input as a list of 2 Boolean values, with index 0 representing the write instruction and index 1 representing the write value. The output is a list with a single Boolean value representing the output from the network.
  • The network actually accepts a list of inputs and a list of outputs. At the start of each cycle the inputs are folded into a list of nodes to be triggered. At the end of each cycle the nodes that output data are mapped to their output counterparts and sent onward via the Mediator.
  • The schema is a simple set of instructions describing each node and edge. In addition it describes which node connects to which bit inside which specific input/output.

By providing a network schema and a list of inputs and outputs, the potential design of the network is unbounded. This architecture expresses the algorithm described above but can also scale as required.
