Barca, Simona (2017) Mathematical background of neural networks and application to image processing. [Master's thesis]
The full text of this document is not available online.
Neural networks are computing systems modelled after the biological neural networks of animal brains; their basic behaviour consists of learning from examples and applying this knowledge to new data. In image recognition, for example, a neural network learns to recognize a particular subject by observing many images containing it, and then, given a new image, it detects the presence of that subject. The aim of this thesis is to present the mathematical background of the two main categories of networks: recurrent and feed-forward neural networks. We will introduce the complex formalism hidden behind the intuitive operation of a network, which lets us rigorously model the typical behaviour of a human brain performing specific tasks. In particular, the concept of learning corresponds to the search for the optimal parameters of a model, namely those that minimize a cost function. We will find that the traditional cost functions, the quadratic cost and the cross entropy, can be inadequate in some situations. To overcome this limitation, we will define a new cost function, not previously used, that improves the performance of the neural network in terms of accuracy, stability, and time of convergence to the optimal parameters.
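The abstract frames learning as minimizing a cost function and names the two traditional choices, the quadratic cost and the cross entropy. As a minimal sketch of those two baselines (the thesis's new cost function is not specified in this record, so it is not reproduced here; the function names and the toy data are illustrative assumptions):

```python
import numpy as np

def quadratic_cost(a, y):
    # Quadratic (mean squared error) cost: C = 1/(2n) * sum_x ||a(x) - y(x)||^2
    return 0.5 * np.mean(np.sum((a - y) ** 2, axis=1))

def cross_entropy_cost(a, y, eps=1e-12):
    # Cross-entropy cost: C = -1/n * sum_x [y ln a + (1 - y) ln(1 - a)]
    # eps-clipping avoids log(0) for saturated outputs.
    a = np.clip(a, eps, 1 - eps)
    return -np.mean(np.sum(y * np.log(a) + (1 - y) * np.log(1 - a), axis=1))

# Toy example: two network outputs vs. one-hot targets (illustrative data)
a = np.array([[0.8, 0.2], [0.3, 0.7]])
y = np.array([[1.0, 0.0], [0.0, 1.0]])
print(quadratic_cost(a, y))      # small when outputs are close to targets
print(cross_entropy_cost(a, y))  # penalizes confident wrong outputs heavily
```

During training, the "search for the optimal parameters" amounts to descending the gradient of one of these functions with respect to the network's weights and biases.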