6. Fildes, “Artificial Brain ‘10 Years Away.’”
7. See http://www.humanconnectomeproject.org/.
8. Anders Sandberg and Nick Bostrom, Whole Brain Emulation: A Roadmap, Technical Report #2008-3 (2008), Future of Humanity Institute, Oxford University, www.fhi.ox.ac.uk/reports/2008-3.pdf.
9. Here is the basic schema for a neural net algorithm. Many variations are possible, and the designer of the system needs to provide certain critical parameters and methods, detailed on the following pages.
Creating a neural net solution to a problem involves the following steps:
Define the input.
Define the topology of the neural net (i.e., the layers of neurons and the connections between the neurons).
Train the neural net on examples of the problem.
Run the trained neural net to solve new examples of the problem.
Take your neural net company public.
These steps (except for the last one) are detailed below:
The Problem Input
The problem input to the neural net consists of a series of numbers. This input can be:
In a visual pattern recognition system, a two-dimensional array of numbers representing the pixels of an image; or
In an auditory (e.g., speech) recognition system, a two-dimensional array of numbers representing a sound, in which the first dimension represents parameters of the sound (e.g., frequency components) and the second dimension represents different points in time; or
In an arbitrary pattern recognition system, an n-dimensional array of numbers representing the input pattern.
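The input representations above can be sketched as follows; the arrays and values here are illustrative, not taken from any particular system:

```python
# Hypothetical problem inputs, represented as arrays of numbers.

# Visual pattern recognition: a two-dimensional array of pixel values
# (here, a 4x4 binary "image").
image_input = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
]

# Auditory (speech) recognition: the first dimension is a sound parameter
# (e.g., a frequency band) and the second is successive points in time.
sound_input = [
    [0.2, 0.5, 0.9],   # band 1 at times t0, t1, t2
    [0.8, 0.3, 0.1],   # band 2 at times t0, t1, t2
]

# Either array can be flattened into a single series of numbers
# for the neural net's input connections.
flat_input = [value for row in image_input for value in row]
```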
Defining the Topology
To set up the neural net, specify the architecture of each neuron, which consists of:
Multiple inputs in which each input is “connected” to either the output of another neuron or one of the input numbers.
Generally, a single output, which is connected to either the input of another neuron (which is usually in a higher layer) or the final output.
Set Up the First Layer of Neurons
Create N_0 neurons in the first layer. For each of these neurons, “connect” each of the multiple inputs of the neuron to “points” (i.e., numbers) in the problem input. These connections can be determined randomly or using an evolutionary algorithm (see below).
Assign an initial “synaptic strength” to each connection created. These weights can start out all the same, can be assigned randomly, or can be determined in another way (see below).
Set Up the Additional Layers of Neurons
Set up a total of M layers of neurons. For each layer, set up the neurons in that layer.
For layer_i:
Create N_i neurons in layer_i. For each of these neurons, “connect” each of the multiple inputs of the neuron to the outputs of the neurons in layer_(i-1) (see variations below).
Assign an initial “synaptic strength” to each connection created. These weights can start out all the same, can be assigned randomly, or can be determined in another way (see below).
The outputs of the neurons in layer_M are the outputs of the neural net (see variations below).
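The setup steps above can be sketched minimally in Python. Representing each neuron as a dictionary with "connections", "weights", and "threshold" keys is an implementation choice of this sketch, as are the layer sizes and the fixed firing threshold; the wiring and initial synaptic strengths are assigned randomly, which is only one of the options discussed below:

```python
import random

random.seed(0)  # reproducible random wiring for this sketch

def make_layer(num_neurons, num_inputs, source_size):
    """Create one layer of neurons. Each neuron's inputs are "connected"
    to randomly chosen points among the source_size available sources
    (the problem input for layer 0, the previous layer's outputs after
    that), and each connection gets a random initial synaptic strength."""
    layer = []
    for _ in range(num_neurons):
        layer.append({
            "connections": [random.randrange(source_size)
                            for _ in range(num_inputs)],
            "weights": [random.uniform(-1.0, 1.0)
                        for _ in range(num_inputs)],
            "threshold": 0.5,   # assumed firing threshold
        })
    return layer

input_size = 16                           # e.g., a flattened 4x4 image
net = [
    make_layer(8, 4, input_size),         # layer 0: N_0 = 8 neurons
    make_layer(4, 3, 8),                  # layer 1 reads layer 0's outputs
    make_layer(1, 4, 4),                  # layer M: a single output neuron
]
```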
The Recognition Trials
How Each Neuron Works
Once the neuron is set up, it does the following for each recognition trial:
Each weighted input to the neuron is computed by multiplying the output of the other neuron (or initial input) to which this input is connected by the synaptic strength of that connection.
All of these weighted inputs to the neuron are summed.
If this sum is greater than the firing threshold of this neuron, then this neuron is considered to fire and its output is 1. Otherwise, its output is 0 (see variations below).
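The firing rule just described can be written directly; the dictionary representation of a neuron and the particular weights shown are assumptions of this sketch:

```python
def neuron_output(neuron, source_values):
    """One neuron's recognition step: multiply each connected source value
    by the connection's synaptic strength, sum the weighted inputs, and
    fire (output 1) only if the sum exceeds the firing threshold."""
    total = sum(weight * source_values[conn]
                for conn, weight in zip(neuron["connections"],
                                        neuron["weights"]))
    return 1 if total > neuron["threshold"] else 0

# A neuron connected to inputs 0 and 2, with synaptic strengths 0.6 and 0.9:
neuron = {"connections": [0, 2], "weights": [0.6, 0.9], "threshold": 0.5}
neuron_output(neuron, [1, 0, 1])  # 0.6 + 0.9 = 1.5 > 0.5, so it fires: 1
neuron_output(neuron, [0, 0, 0])  # sum is 0, below threshold: 0
```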
Do the Following for Each Recognition Trial
For each layer, from layer_0 to layer_M:
For each neuron in the layer:
Sum its weighted inputs (each weighted input = the output of the other neuron [or initial input] that the input to this neuron is connected to, multiplied by the synaptic strength of that connection).
If this sum of weighted inputs is greater than the firing threshold for this neuron, set the output of this neuron = 1, otherwise set it to 0.
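A full recognition trial over all the layers, following the steps above, might look like this (same assumed dictionary representation of a neuron; the hand-wired single-neuron net at the end is a toy illustration):

```python
def run_trial(net, problem_input):
    """One recognition trial: evaluate each layer in order, from layer 0
    to layer M. Each neuron's sources are the previous layer's outputs
    (the problem input itself for layer 0)."""
    sources = problem_input
    for layer in net:
        outputs = []
        for neuron in layer:
            # Sum the weighted inputs of this neuron.
            total = sum(weight * sources[conn]
                        for conn, weight in zip(neuron["connections"],
                                                neuron["weights"]))
            # Fire (output 1) if the sum exceeds the firing threshold.
            outputs.append(1 if total > neuron["threshold"] else 0)
        sources = outputs
    return sources  # the outputs of the final layer

# A toy hand-wired net: one neuron that fires only when both inputs are 1.
and_net = [[{"connections": [0, 1], "weights": [1.0, 1.0], "threshold": 1.5}]]
run_trial(and_net, [1, 1])  # fires: [1]
run_trial(and_net, [1, 0])  # does not fire: [0]
```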
To Train the Neural Net
Run repeated recognition trials on sample problems.
After each trial, adjust the synaptic strengths of all the interneuronal connections to improve the performance of the neural net on this trial (see the discussion below on how to do this).
Continue this training until the accuracy rate of the neural net is no longer improving (i.e., reaches an asymptote).
Key Design Decisions
In the simple schema above, the designer of this neural net algorithm needs to determine at the outset:
What the input numbers represent.
The number of layers of neurons.
The number of neurons in each layer. (Each layer does not necessarily need to have the same number of neurons.)
The number of inputs to each neuron in each layer. The number of inputs (i.e., interneuronal connections) can also vary from neuron to neuron and from layer to layer.
The actual “wiring” (i.e., the connections). For each neuron in each layer, this consists of a list of other neurons, the outputs of which constitute the inputs to this neuron. This represents a key design area. There are a number of possible ways to do this:
(1) Wire the neural net randomly; or
(2) Use an evolutionary algorithm (see below) to determine an optimal wiring; or
(3) Use the system designer’s best judgment in determining the wiring.
The initial synaptic strengths (i.e., weights) of each connection. There are a number of possible ways to do this:
(1) Set the synaptic strengths to the same value; or
(2) Set the synaptic strengths to different random values; or
(3) Use an evolutionary algorithm to determine an optimal set of initial values; or
(4) Use the system designer’s best judgment in determining the initial values.
The firing threshold of each neuron.
Determine the output. The output can be:
(1) the outputs of the neurons in layer_M; or
(2) the output of a single output neuron, the inputs of which are the outputs of the neurons in layer_M; or
(3) a function of (e.g., a sum of) the outputs of the neurons in layer_M; or
(4) another function of neuron outputs in multiple layers.
Determine how the synaptic strengths of all the connections are adjusted during the training of this neural net. This is a key design decision and is the subject of a great deal of research and discussion. There are a number of possible ways to do this:
(1) For each recognition trial, increment or decrement each synaptic strength by a (generally small) fixed amount so that the neural net’s output more closely matches the correct answer. One way to do this is to try both incrementing and decrementing and see which has the more desirable effect. This can be time-consuming, so other methods exist for making local decisions on whether to increment or decrement each synaptic strength.
(2) Other statistical methods exist for modifying the synaptic strengths after each recognition trial so that the performance of the neural net on that trial more closely matches the correct answer.
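Method (1), the increment-or-decrement approach, can be sketched as follows. The tie-breaking rule (leave a weight unchanged when neither direction reduces the error), the squared-error measure, and the toy AND-pattern training task are all assumptions of this sketch, not part of the schema:

```python
def run_trial(net, problem_input):
    # Forward pass: a neuron outputs 1 if its weighted input sum
    # exceeds its firing threshold, 0 otherwise.
    sources = problem_input
    for layer in net:
        sources = [1 if sum(w * sources[c]
                            for c, w in zip(n["connections"], n["weights"]))
                        > n["threshold"] else 0
                   for n in layer]
    return sources

def train_step(net, sample_input, correct_output, step=0.2):
    """Try incrementing and decrementing each synaptic strength by a
    fixed amount, keeping whichever direction brings the net's output
    closer to the correct answer (unchanged if neither helps)."""
    def error():
        output = run_trial(net, sample_input)
        return sum((o - t) ** 2 for o, t in zip(output, correct_output))
    for layer in net:
        for neuron in layer:
            for i in range(len(neuron["weights"])):
                base = neuron["weights"][i]
                base_err = error()
                neuron["weights"][i] = base + step
                err_up = error()
                neuron["weights"][i] = base - step
                err_down = error()
                if err_up < base_err and err_up <= err_down:
                    neuron["weights"][i] = base + step
                elif err_down < base_err:
                    neuron["weights"][i] = base - step
                else:
                    neuron["weights"][i] = base   # neither direction helped

# Repeated recognition trials on sample problems: training a single
# neuron until it recognizes the AND pattern.
net = [[{"connections": [0, 1], "weights": [0.2, 0.2], "threshold": 0.5}]]
samples = [([1, 1], [1]), ([1, 0], [0]), ([0, 1], [0]), ([0, 0], [0])]
for _ in range(5):
    for sample_input, correct in samples:
        train_step(net, sample_input, correct)
```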