Saturday, September 08, 2012

NNP1 - Introduction to Neural Networks

What is a Neural Network?

The human brain contains billions of nerve cells called neurons, which are highly interconnected to form the nervous system, the system responsible for coordinating all parts of the body. Artificial intelligence tries to simulate some properties of biological neural networks. Learning in biological systems involves adjustments to the synaptic connections that exist between neurons. Since conventional computers are poorly suited to complex real-world problems such as pattern recognition, we borrow features from the physiology of the brain as the basis for new processing models. An Artificial Neural Network (ANN) can therefore be formally defined as a computing system modeled on the human brain and nervous system.

Background

The original inspiration for ANNs came from neuroscience, the field concerned with how the nervous system is organized and how it functions. The earliest work consisted of simulations using formal logic. McCulloch and Pitts (1943) developed models of neural networks based on their understanding of neurology. These models made several simplifying assumptions about how neurons work: their networks were built from simple neurons treated as binary devices with fixed thresholds, and the resulting models computed simple logic functions such as "a or b" and "a and b". Later attempts used computer simulations. Neuroscience was not the only influence; psychologists and engineers also contributed to the progress of neural network simulation. One such system was the ADALINE (ADAptive LINear Element), developed in 1960 by Widrow and Hoff of Stanford University; the ADALINE was an analogue electronic device built from simple components.
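
To make the McCulloch-Pitts idea concrete, here is a minimal sketch of such a binary threshold neuron in Python. The weight and threshold values are illustrative choices, not taken from the original 1943 paper:

    # A McCulloch-Pitts neuron: binary inputs, fixed weights, and a hard
    # threshold. The weights and thresholds below are illustrative only.
    def mp_neuron(inputs, weights, threshold):
        """Fire (return 1) if the weighted input sum reaches the threshold."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # "a and b": both inputs must be active to reach a threshold of 2.
    def AND(a, b):
        return mp_neuron([a, b], weights=[1, 1], threshold=2)

    # "a or b": a single active input is enough to reach a threshold of 1.
    def OR(a, b):
        return mp_neuron([a, b], weights=[1, 1], threshold=1)

    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b}  a AND b = {AND(a, b)}  a OR b = {OR(a, b)}")

Notice that the same unit switches between computing "and" and "or" simply by changing its threshold, which is exactly the fixed-threshold binary behaviour McCulloch and Pitts assumed.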
Although public interest and available funding were minimal, several researchers continued working to develop neuromorphically based computational methods for problems such as pattern recognition. Several paradigms generated during this period are still being enhanced by modern work. Grossberg's influence founded a school of thought that explores resonating algorithms: Stephen Grossberg and Gail Carpenter (1988) developed the ART (Adaptive Resonance Theory) networks, based on biologically plausible models. Anderson and Kohonen developed associative techniques independently of each other. A. Henry Klopf (1972) developed a basis for learning in artificial neurons built on a biological principle for neuronal learning called heterostasis.
Paul Werbos (1974) developed and applied the back-propagation learning method, although several years passed before the approach was popularized. Back-propagation networks are probably the best-known and most widely applied neural networks today. In essence, a back-propagation network is a perceptron with multiple layers, a different threshold function in the artificial neuron, and a more robust and capable learning rule.
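
As a concrete illustration, here is a minimal sketch of such a network in Python with NumPy, assuming a single hidden layer, sigmoid activations in place of the perceptron's hard threshold, and the classic XOR problem as training data; the layer sizes, learning rate, and iteration count are illustrative choices rather than a definitive implementation:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

    # Two weight layers plus biases, randomly initialized.
    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
    lr = 0.5  # learning rate

    for step in range(10000):
        # Forward pass: the smooth sigmoid replaces the hard threshold.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: propagate the output error back through the layers.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent updates for all weights and biases.
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

    print(out.round(2).ravel())  # should approach [0, 1, 1, 0]

The key step is the backward pass: the output error is multiplied back through the same weights used in the forward pass, giving each hidden unit its share of the blame and hence its own weight update.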
Progress during the late 1970s and early 1980s was important to the re-emergence of interest in the neural network field. Several new commercial applications in industry and at financial institutions are now emerging.

Why Use Neural Networks?

In many real-world applications we want our computers to perform tasks that are too complex to be solved well by either humans or conventional computing techniques. Such tasks include pattern recognition, data classification, image analysis, cognitive modeling, adaptive control, and much more. Beyond this, the characteristics that make neural networks a preferred technology include adaptive learning, self-organization, real-time operation, fault tolerance, data processing, pattern analysis, and more.

References:
  1. Christos Stergiou and Dimitrios Siganos, Neural Networks.
  2. James A. Freeman, Neural Networks: Algorithms, Applications, and Programming Techniques.
  3. Toshinori Munakata, Fundamentals of the New Artificial Intelligence.
  4. Wikipedia: Neuroscience; Neural networks.
Stay Tuned.
