An introduction to neural networks - slides here
This is a bird's-eye view of the history of machine learning, including
its three waves: cybernetics, connectionism, and deep learning. Some
applications of deep learning are also discussed.
Lecture of September 29: level-undergraduate and advanced undergraduate
A few problems leading to machine learning algorithms - slides here
The goal of this lecture was to introduce a few real-life problems
and model them as machine learning problems. This involves identifying
the inputs, setting up the weights and the activation functions,
and choosing an appropriate error function.
The Adaline neuron - slides here
This material describes in detail the Adaline neuron, introduced
by Widrow and Hoff in 1960. The inputs are random variables and
the activation function is just the identity. The learning algorithm
uses the least mean squares rule. In spite of its simplicity, the beauty
is that the solution can first be worked out in closed form, and then the
same formula is recovered using the gradient descent method.
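The agreement between the closed-form solution and gradient descent is easy to check numerically. Below is a minimal NumPy sketch on made-up linear data; the data, learning rate, and iteration count are illustrative choices, not the lecture's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: inputs X and noisy linear targets y
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

# Closed-form least-squares solution: w = (X^T X)^{-1} X^T y
w_closed = np.linalg.solve(X.T @ X, X.T @ y)

# Gradient descent on the mean squared error recovers the same weights
w = np.zeros(3)
lr = 0.05
for _ in range(2000):
    grad = 2.0 / len(y) * X.T @ (X @ w - y)
    w -= lr * grad

print(np.allclose(w, w_closed, atol=1e-4))
```

Both routes minimize the same quadratic cost, so gradient descent converges to the closed-form minimizer.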
Lecture of October 6: level-advanced undergraduate and graduate
Error functions - slides here
The lecture presents different types of error functions used by
neural networks. We consider the case when the training data are
drawn from two random variables. The sum of squared differences,
cross-entropy, and Kullback-Leibler divergence are just a few of the
examples we discussed in detail.
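For two discrete distributions these error functions are one-liners. The sketch below uses made-up distributions p and q, and also checks the identity D_KL(p||q) = H(p,q) - H(p), which is why minimizing cross-entropy in q is the same as minimizing the KL divergence.

```python
import numpy as np

# Two hypothetical discrete distributions: p (target) and q (network output)
p = np.array([0.1, 0.4, 0.5])
q = np.array([0.2, 0.3, 0.5])

sse = np.sum((p - q) ** 2)               # sum of squared differences
cross_entropy = -np.sum(p * np.log(q))   # H(p, q)
entropy = -np.sum(p * np.log(p))         # H(p)
kl = np.sum(p * np.log(p / q))           # D_KL(p || q)

# D_KL(p||q) = H(p,q) - H(p), and the divergence is nonnegative
print(np.isclose(kl, cross_entropy - entropy), kl >= 0)
```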
Lecture of October 13: level-advanced undergraduate and graduate
The sigmoid neuron, learning by logistic regression, and neurons with
continuum inputs - slides here
The sigmoid neuron is the paradigm of the neuron model. Logistic
regression is used to show how a sigmoid neuron can be used as a
classifier. Other neuron models will be discussed, including neurons
with continuum input; in this case the weights are modeled by a measure
that is to be found.
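As an illustration of the sigmoid neuron used as a classifier, here is a minimal logistic-regression sketch on made-up 2-D data (two Gaussian clouds); the data and hyperparameters are illustrative, not from the lecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# Hypothetical 2-D data: two well-separated Gaussian clouds, labels 0 and 1
X0 = rng.normal(loc=[-2, -2], size=(100, 2))
X1 = rng.normal(loc=[2, 2], size=(100, 2))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(100), np.ones(100)])

# A single sigmoid neuron trained by gradient descent on the cross-entropy
w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)
    # gradient of the cross-entropy loss w.r.t. weights and bias
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

preds = (sigmoid(X @ w + b) > 0.5).astype(float)
accuracy = np.mean(preds == y)
print(accuracy)
```

Thresholding the sigmoid output at 0.5 turns the neuron into a linear classifier.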
Lecture of October 20: level-advanced undergraduate
Backpropagation algorithm, part I. The
gradient descent method needs formulae for the gradient of the cost
function. These formulae depend on the delta sensitivities, which
can be obtained using the backpropagation method. See video_part1,
video_part2, video_part3
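The delta sensitivities can be sanity-checked against finite differences. Below is a minimal sketch for one hidden layer of sigmoid units with a linear output and quadratic cost; all sizes and data are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)

# One hidden layer of 4 sigmoid units, one linear output; quadratic cost
x = rng.normal(size=3)
t = 0.7                                  # made-up target
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

# Forward pass
z1 = W1 @ x + b1; a1 = sigmoid(z1)
z2 = W2 @ a1 + b2; a2 = z2               # identity output activation
C = 0.5 * (a2[0] - t) ** 2

# Backward pass: delta sensitivities, delta_l = dC/dz_l
delta2 = a2 - t                          # output layer (linear activation)
delta1 = (W2.T @ delta2) * a1 * (1 - a1) # backpropagated through the sigmoid

# Gradients of the cost follow directly from the deltas
dW2 = np.outer(delta2, a1)
dW1 = np.outer(delta1, x)

# Check dW1[0, 0] against a finite-difference approximation
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
Cp = 0.5 * ((W2 @ sigmoid(W1p @ x + b1) + b2)[0] - t) ** 2
print(np.isclose(dW1[0, 0], (Cp - C) / eps, atol=1e-5))
```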
Lecture of October 28: level-graduate
Neural networks as universal approximators
We have talked about the following results:
A simple two-layer neural network cannot approximate complicated functions.
However, any neural network with one hidden layer can approximate
any continuous function, given that the input space is compact. Similar
results hold for target functions in L^1 and L^2 spaces, as well as
for measurable target functions. See a sample of the discussion at video_part1, video_part2
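The first claim can be illustrated with XOR: no model without a hidden layer fits it, while a tiny network with one hidden layer of threshold (perceptron) units represents it exactly. A sketch (the unit thresholds are one standard hand-picked choice):

```python
import numpy as np

# The XOR problem: not representable by any affine model
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Best affine fit w.x + b in the least-squares sense
A = np.hstack([X, np.ones((4, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
linear_err = np.max(np.abs(A @ coef - y))
print(linear_err)   # the affine model is stuck at error 0.5

# One hidden layer of two threshold units fits XOR exactly:
# h1 = step(x1 + x2 - 0.5), h2 = step(x1 + x2 - 1.5), output = h1 - h2
step = lambda z: (z > 0).astype(float)
h1 = step(X.sum(axis=1) - 0.5)
h2 = step(X.sum(axis=1) - 1.5)
print(np.array_equal(h1 - h2, y))
```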
Lecture of November 4 (this is a Saturday!): level-advanced graduate
Learning with one-dimensional inputs
This lecture deals with the case when the input variable is bounded
and one-dimensional, x in [0,1]. Besides its simplicity, this case
brings a couple of new things: (i) it can be treated elementary, without
the arsenal of functional analysis, and, (ii) due to its constructive
nature, it provides an explicit algorithm for finding the network
weights. Both cases of perceptron and sigmoid neuronal networks with
one hidden layer will be covered.
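In the constructive spirit, steep sigmoids act as approximate step functions, which gives explicit weights for a one-hidden-layer approximation on [0,1]: each hidden unit sits at a grid point and its output weight is the jump of the target there. The target f(x) = sin(2πx), the grid sizes, and the steepness below are illustrative choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Made-up continuous target on the compact interval [0, 1]
f = lambda x: np.sin(2 * np.pi * x)

def one_layer_approx(x, n, steepness=500.0):
    """Explicit one-hidden-layer sigmoid network with n units on a grid."""
    grid = np.linspace(0.0, 1.0, n + 1)
    vals = f(grid)
    out = vals[0] * np.ones_like(x)
    for k in range(1, n + 1):
        # unit k: steep sigmoid centered at grid[k], output weight = jump of f
        out += (vals[k] - vals[k - 1]) * sigmoid(steepness * (x - grid[k]))
    return out

x = np.linspace(0.0, 1.0, 1000)
err25 = np.max(np.abs(one_layer_approx(x, 25) - f(x)))
err100 = np.max(np.abs(one_layer_approx(x, 100) - f(x)))
print(err25, err100)
```

Refining the grid shrinks the uniform error, mirroring the approximation results above.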
Lecture of November 11 - This is an online lecture about
how to build a 2-layer neural network using TensorFlow.
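For readers without TensorFlow installed, the same 2-layer architecture (one hidden layer plus a linear output) can be sketched in plain NumPy. This is a dependency-free stand-in, not the lecture's TensorFlow code; the target function, layer width, and hyperparameters are made up.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical regression data: learn x -> x^2 on [-1, 1]
X = rng.uniform(-1.0, 1.0, size=(256, 1))
y = X ** 2

# One tanh hidden layer of 8 units, linear output
W1, b1 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(1, 8)), np.zeros(1)
lr, n = 0.05, len(X)

def forward():
    H = np.tanh(X @ W1.T + b1)           # hidden activations, (256, 8)
    return H, H @ W2.T + b2              # linear output, (256, 1)

H, out = forward()
loss0 = np.mean((out - y) ** 2)

for _ in range(2000):
    H, out = forward()
    d_out = 2.0 * (out - y) / n          # dL/d(output)
    dW2, db2 = d_out.T @ H, d_out.sum(axis=0)
    dH = (d_out @ W2) * (1.0 - H ** 2)   # backprop through tanh
    dW1, db1 = dH.T @ X, dH.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, out = forward()
loss_final = np.mean((out - y) ** 2)
print(loss0, loss_final)
```

In the Keras API the same architecture would be one hidden `Dense` layer followed by a `Dense(1)` output.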
Lecture of November 25 - This is an online lecture about
how to build and train a deep neural network.
How to build and train a deep neural network with
7 hidden layers. You can experiment with changing the size of the hidden
layers (the default is set at 100 neurons). The video can be found
here.
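The architecture from the video can be sketched in plain NumPy with the hidden-layer width as a parameter, so you can experiment with sizes other than the default 100. This is an illustrative stand-in, not the video's code; the activation choice and initialization below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def build_network(n_inputs, n_outputs, hidden_size=100, n_hidden=7):
    """Weights and biases for n_hidden hidden layers of equal width."""
    sizes = [n_inputs] + [hidden_size] * n_hidden + [n_outputs]
    return [(rng.normal(scale=np.sqrt(2.0 / fan_in), size=(fan_out, fan_in)),
             np.zeros(fan_out))
            for fan_in, fan_out in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:          # ReLU on hidden layers only
            x = np.maximum(x, 0.0)
    return x

# 7 hidden layers of 100 neurons each: 8 weight matrices in total
layers = build_network(n_inputs=20, n_outputs=3, hidden_size=100)
out = forward(layers, rng.normal(size=20))
print(len(layers), out.shape)   # 8 (3,)
```

Changing `hidden_size` (or `n_hidden`) rebuilds the whole stack, which is the experiment the video suggests.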