
Gdansk University of Technology
Faculty of Electrical and Control Engineering
Department of Control Systems Engineering
Artificial Intelligence Methods
Neuron, neural layer, neural networks - surface of the neural
network response
Laboratory exercise no. T1
Auxiliary material for laboratory exercises
Authors:
Anna Kobylarz, mgr inż.
Kazimierz Duzinkiewicz, dr hab. inż.
Michał Grochowski, dr inż.
Gdańsk, 2015
This material contains selected parts of the lecture and additional information needed
to create artificial neural network structures in the MATLAB environment using the
Neural Network Toolbox. In particular, it contains a description of the basic elements
of a neuron and its activation functions, a description of the main instructions available
in the MATLAB toolbox, and an example of creating and training a perceptron neural
network.
Artificial neural networks - a single neuron
A neural network is characterized by:
1. The functions according to which a neuron responds to its inputs, called
excitation (propagation) functions and activation functions;
2. The structure of the connections between neurons, called the network
architecture;
3. The method of determining the weights of those connections, called the
learning algorithm.
Fig. 1. Artificial neuron model scheme: the input signals p1, …, pj, …, pR are multiplied
by the weights w1,1, …, w1,j, …, w1,R and combined by the excitation (propagation)
function g(·) together with the threshold bs, giving the excitation signal ns; the
activation function f(·) then produces the response (output) signal as. Where:
R – number of inputs, S – number of neurons in a layer.
Fig. 2 shows how a neuron is presented in the MATLAB documentation, together with
the notation used for the inputs, weights, threshold and output.
Fig. 2. Scheme of a neuron with a single input. Symbols and notation from MATLAB.
For this case, the dimensions of the matrices describing the neuron are as follows:

p = [p], W = [w], b = [b], a = [a]

In cases where multiple input signals are given, the neuron and its scheme are as
follows:

Fig. 3. Scheme of a neuron with multiple (R) inputs. Where: R – number of inputs.
Fig. 4. Scheme of a neuron with multiple (R) inputs. Symbols and notation from
MATLAB. Where: R – number of inputs.
The dimensions of the matrices describing this neuron are as follows:

p = [p1; p2; …; pR] (an R×1 vector), W = [w1, w2, …, wR] (a 1×R row vector),
b = [b], a = [a]
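To make this concrete, the response of such a neuron can be computed directly. The
sketch below is a minimal illustration with assumed example values (R = 3, arbitrary
weights and input), using the toolbox transfer function hardlim described in the next
section:

% Minimal sketch: response a = f(Wp + b) of a single neuron with R = 3 inputs.
% All numeric values are assumed for demonstration only.
p = [0.5; -1.2; 2.0];   % input vector, R x 1
W = [1.0 0.4 -0.3];     % weight matrix, 1 x R
b = 0.2;                % threshold (bias)
n = W*p + b;            % excitation (net input) signal
a = hardlim(n)          % response signal of the neuron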
Activation (transfer) functions of the neuron
Figs. 5-7 show examples of activation functions: the step (hard limit), linear and
sigmoidal logistic (log-sigmoid) functions, respectively. All instructions available in
the toolbox are given in Table 1.
Fig. 5. Hard limit activation function.
Where: p – input signal to the neuron, n – excitation signal of the neuron, a – output
signal from the neuron, w – weight of the neuron's input and b – threshold (bias) value.
Fig. 6. Linear activation function.
Fig. 7. Log-sigmoid activation function.
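These three functions are available in the toolbox as hardlim, purelin and logsig; the
short sketch below (illustrative only, assuming the toolbox is installed) reproduces the
shapes of Figs. 5-7:

n = -5:0.1:5;            % range of the excitation signal
a1 = hardlim(n);         % step (hard limit) function, Fig. 5
a2 = purelin(n);         % linear function, Fig. 6
a3 = logsig(n);          % log-sigmoid function, Fig. 7
plot(n,a1,n,a2,n,a3), legend('hardlim','purelin','logsig')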
Table 1. Activation (transfer) functions available in MATLAB’s Neural Network
Toolbox.
In Fig. 8 an example of a neuron with a symmetrical hard limit activation function
(perceptron) and two inputs is shown.
Fig. 8. Neuron with symmetrical hard limit activation function and two inputs –
perceptron.
The matrix dimensions for this neuron are as follows:

p = [p1; p2] (2×1), W = [w1,1 w1,2] (1×2), b = [b], a = [a]
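A quick numerical check of such a perceptron can be written as below (a sketch; the
weight, bias and input values are assumed for illustration only):

% Sketch: two-input perceptron with symmetrical hard limit activation.
p = [0.7; -0.4];        % input vector, 2 x 1
W = [1.5 -2.0];         % weight matrix, 1 x 2 (assumed values)
b = 0.1;                % threshold (assumed value)
a = hardlims(W*p + b)   % output is -1 or +1 (symmetrical hard limit)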
Artificial neural networks – neural layer
Figures 9 and 10 show a single neural network layer with a description of its
parameters.
Fig. 9. Neural network layer scheme. Where: R – number of inputs, S – number of
neurons in the layer.
Fig. 10. Neuron with hard limit activation function – perceptron. Symbols and notation
from MATLAB.
The matrix dimensions for a neural layer are as follows:

p = [p1; p2; …; pR] (R×1)

W = [w1,1 w1,2 … w1,R;
     w2,1 w2,2 … w2,R;
     …
     wS,1 wS,2 … wS,R] (S×R)

b = [b1; b2; …; bS] (S×1), a = [a1; a2; …; aS] (S×1)
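The layer response a = f(Wp + b) follows directly from these dimensions; the sketch
below is an illustration with assumed sizes (S = 3 neurons, R = 4 inputs) and random
values:

% Sketch: response of a layer of S = 3 neurons with R = 4 inputs.
R = 4; S = 3;
p = rand(R,1);          % example input vector, R x 1
W = rand(S,R);          % weight matrix, S x R
b = rand(S,1);          % threshold vector, S x 1
a = logsig(W*p + b)     % layer output, S x 1 (log-sigmoid activation assumed)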
Artificial neural networks – multi-layer network
Figures 11 and 12 show a multi-layer (three-layer) feedforward neural network. As can
be seen, the outputs of each layer are the inputs to the next layer.
Fig. 11. Scheme of a multi-layer feedforward neural network. Where: R – number of
inputs, S1 – number of neurons in the first layer, S2 – number of neurons in the second
layer, S3 – number of neurons in the third layer.
Fig. 12. Scheme of a multi-layer feedforward neural network. Symbols and notation
from MATLAB (R, S1, S2 and S3 as in Fig. 11).
Basic commands of MATLAB Neural Network Toolbox ver. 8.2
Table 2 below lists the most important commands of MATLAB's Neural Network
Toolbox. More commands can be listed by typing ‘help nnet’, while the details of a
specific command (syntax, algorithms, application examples, etc.) can be obtained
by typing ‘help’ followed by the command name, e.g. ‘help newp’.
Table 2. MATLAB’s Neural Network Toolbox ver 8.2 commands.
Command – Short description

Creating a network
network – Create a custom neural network.
newc – Create a competitive layer.
newcf – Create a cascade-forward backpropagation network.
newff – Create a feed-forward backpropagation network.
newfftd – Create a feed-forward input time-delay backpropagation network.
newlin – Create a linear layer.
newlind – Design a linear layer.
newp – Create a perceptron.

Net input (excitation) functions
netprod – Product net input function.
netsum – Sum net input function.

Functions initializing network parameters
initlay – Layer-by-layer network initialization function.

Functions describing quality of network's work
mae – Mean absolute error performance function.
mse – Mean squared normalized error performance function.
msereg – Mean squared error with regularization performance function.
sse – Sum squared error performance function.

Learning methods
learncon – Conscience bias learning function.
learngd – Gradient descent weight and bias learning function.
learngdm – Gradient descent with momentum weight and bias learning function.
learnis – Instar weight learning function.
learnlv1 – LVQ1 weight learning function.
learnlv2 – LVQ2 weight learning function.
learnos – Outstar weight learning function.
learnp – Perceptron weight and bias learning function.
learnpn – Normalized perceptron weight and bias learning function.
learnwh – Widrow-Hoff weight/bias learning function.

Processing of input and output data
prestd – Preprocess the data so that the mean is 0 and the standard deviation is 1.
poststd – Postprocess data which has been preprocessed by PRESTD.
trastd – Preprocess data using a precalculated mean and standard deviation.
premnmx – Preprocess data so that the minimum is -1 and the maximum is 1.
postmnmx – Postprocess data which has been preprocessed by PREMNMX.
tramnmx – Transform data using a precalculated minimum and maximum.
prepca – Principal component analysis.
trapca – Principal component transformation.
postreg – Postprocess the trained network response with a linear regression.

Training methods
trainb – Batch training with weight & bias learning rules.
trainbfg – BFGS quasi-Newton backpropagation.
trainbr – Bayesian regularization backpropagation.
trainc – Cyclical order weight/bias training.
traincgb – Conjugate gradient backpropagation with Powell-Beale restarts.
traincgf – Conjugate gradient backpropagation with Fletcher-Reeves updates.
traincgp – Conjugate gradient backpropagation with Polak-Ribiere updates.
traingd – Gradient descent backpropagation.
traingdm – Gradient descent with momentum backpropagation.
traingda – Gradient descent with adaptive learning rate backpropagation.
traingdx – Gradient descent with momentum & adaptive learning rate backpropagation.
trainlm – Levenberg-Marquardt backpropagation.
trainoss – One-step secant backpropagation.
trainr – Random order weight/bias training.
trainrp – RPROP (resilient) backpropagation.
trainscg – Scaled conjugate gradient backpropagation.

Activation functions
compet – Competitive transfer function.
hardlim – Hard-limit transfer function.
hardlims – Symmetric hard-limit transfer function.
logsig – Log-sigmoid transfer function.
poslin – Positive linear transfer function.
purelin – Linear transfer function.
radbas – Radial basis transfer function.
satlin – Saturating linear transfer function.
satlins – Symmetric saturating linear transfer function.
tansig – Hyperbolic tangent sigmoid transfer function.

Functions allowing easier analysis
errsurf – Error surface of a single-input neuron.
maxlinlr – Maximum learning rate for a linear layer.

Functions initializing layer's parameters
initnw – Nguyen-Widrow layer initialization function.
initwb – By-weight-and-bias layer initialization function.

Operations on vectors
cell2mat – Convert cell array to numeric array.
concur – Create concurrent bias vectors.
con2seq – Convert concurrent vectors to sequential vectors.
combvec – Create all combinations of vectors.
mat2cell – Convert array to cell array with potentially different sized cells.
minmax – Ranges of matrix rows.
nncopy – Copy matrix or cell array.
normc – Normalize columns of a matrix.
normr – Normalize rows of a matrix.
seq2con – Convert sequential vectors to concurrent vectors.
sumsqr – Sum of squared elements of a matrix or matrices.

Operations on networks
sim – Simulate a dynamic system (neural network).
init – Initialize a neural network.
adapt – Adapt a neural network to data as it is simulated.
train – Train a neural network.

Plotting
hintonw – Hinton graph of weight matrix.
hintonwb – Hinton graph of weight matrix and bias vector.
plotes – Plot error surface of a single-input neuron.
plotpc – Plot classification line on perceptron vector plot.
plotpv – Plot perceptron input/target vectors.
plotep – Plot weight-bias position on error surface.
plotperf – Plot network performance.
plotv – Plot vectors as lines from the origin.
plotvec – Plot vectors with different colors.

Others
nntool – Neural Network Toolbox graphical user interface.
gensim – Generate a Simulink block to simulate a neural network.
An example of creating and learning a perceptron neural network.
Below are the basic commands used to create an example perceptron neural
network. The process of creating other networks is similar; only the parameters and
some of the instructions differ.
It is possible to create a perceptron network by using the ‘newp’ command.
Syntax:
net = newp(pr,s,tf,lf)
The parameters of this command are:
PR – R×2 matrix of minimum and maximum values of the R input elements.
S – number of neurons.
TF – Transfer (activation) function, default = 'hardlim'.
LF – Learning function, default = 'learnp'.
It is not necessary to give the TF and LF parameters. This command creates a
perceptron network: a structure named ‘net’ in which all the information about the
network is stored.
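For example (an illustration of the full four-argument syntax; the input ranges are
assumed example values), a perceptron with two inputs and one neuron can be
created explicitly as:

net = newp([0 1; -2 2],1,'hardlim','learnp');
% equivalent to net = newp([0 1; -2 2],1), since TF and LF take their defaults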
Network initialization
The ‘newp’ command also initializes the weights and thresholds, giving them their
initial (zero) values.
The weights connecting the inputs to the network are stored in the structure net.IW.
The thresholds are stored in the structure net.b.
It is also possible to set the values of these elements manually, e.g.:
net.IW{1,1}=[2 4]
net.b{1}=[5]
To restore the default weights and thresholds (i.e. to re-initialize the network
parameters), the ‘init’ command should be used.
net = init(net);
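A short check (illustrative only) shows that ‘init’ restores the zero defaults set by
‘newp’:

net.IW{1,1} = [2 4];    % manually modified weights
net.b{1} = 5;           % manually modified threshold
net = init(net);        % re-initialize the network
net.IW{1,1}             % displays [0 0] again
net.b{1}                % displays 0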
Network simulation
To study the network’s response to an input vector, the ‘sim’ command should be
used. The syntax of this command is available after typing ‘help sim’ in the MATLAB
command window. Usually it is sufficient to specify which network is to be used and
to indicate the input vector, in this case P:
Y = sim(net,P);
This simulates the response of network ‘net’ to input P.
Defining the input vector P as:
P = [0 0 1 1; 0 1 0 1];
and the target vector T as:
T = [0 1 1 1];
The result is the network’s response:
Y =
     1     1     1     1
which is not compliant with the expectations (target T). To achieve the desired output
values (in accordance with the target), a proper selection of weight and bias values is
necessary, which means that the network has to learn how to respond properly.
Example
Design a perceptron network consisting of a single neuron with two inputs. The first
input should be within the range [0; 1] and the second input within the range [-2; 2].
Save the proposed structure under the name ‘net’. In response to the predefined
vector P, the network should respond in accordance with the vector T.
net = newp([0 1; -2 2],1);
The response to vector P:
Y = sim(net,P)
Y =
     1     1     1     1
Network learning
As we can see, the network’s response is different from the vector T. In order to
achieve the correct response, the values of the weights and biases need to be
changed, either manually or in the learning process.
The ‘train’ command is used to start the learning process. This command executes
the network’s default learning method (modification of weights and thresholds) in
order to fit the response to the vector T. More options of the ‘train’ command can be
obtained by typing ‘help train’.
net = train(net,P,T);
The response of the network after the learning process is shown below.
Y = sim(net,P)
Y =
     0     1     1     1
As can be seen, the network responded properly. This completes the example.
It is possible to check with what values of the weights and biases the task ended:
net.IW{1,1} = [1 1]
net.b{1} = [-1]
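These values can be verified directly (a quick check reproducing the hard limit
computation with the trained weights): with W = [1 1] and b = -1, the neuron outputs 0
only for the input [0; 0], which matches T:

W = [1 1]; b = -1;
P = [0 0 1 1; 0 1 0 1];
Y = hardlim(W*P + b)    % returns [0 1 1 1], equal to the target T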
Changing the default parameters of the network
Every newly created network has various default parameters, such as the learning
method, the initialization method, the method of testing the quality of the network,
the network learning process parameters and many others. To check them, type the
name of the created network in the MATLAB command window. These parameters
can also be changed using the appropriate commands. Below are some basic ones
(for the perceptron network created by the ‘newp’ command).
Functions:
adaptFcn: 'trains'
initFcn: 'initlay'
performFcn: 'mae'
trainFcn: 'trainc'
Parameters:
adaptParam: .passes
initParam: (none)
performParam: (none)
trainParam: .epochs, .goal, .show, .time
Changing default values of network parameters (example values):
net.trainParam.show = 50;
net.trainParam.lr = 0.05;
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-5;
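Putting the pieces together, a complete session mirroring this example might look as
follows (a sketch; the parameter values are the example values given above):

net = newp([0 1; -2 2],1);    % create a two-input perceptron
net.trainParam.epochs = 300;  % change default training parameters
net.trainParam.goal = 1e-5;
P = [0 0 1 1; 0 1 0 1];       % input vectors
T = [0 1 1 1];                % target vector
net = train(net,P,T);         % train the network
Y = sim(net,P)                % simulated response should match T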