Artificial Neural Networks (ANNs) are abstract simulations of a real nervous system, containing a collection of neuron units that communicate with each other via axon connections. Such a model bears a strong resemblance to the axons and dendrites of a nervous system.
The first fundamental model of neural nets was proposed in 1943 by McCulloch and Pitts as a computational model of "nervous activity". The McCulloch-Pitts neuron is a binary device, and each neuron has a fixed threshold logic. This model influenced the later work of John von Neumann, Marvin Minsky, Frank Rosenblatt, and many others.
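The McCulloch-Pitts neuron described above can be sketched in a few lines: the unit sums its binary inputs and fires only if the sum reaches a fixed threshold. The function name and the gate example below are illustrative choices, not part of the original formulation.

```python
def mcculloch_pitts(inputs, threshold):
    """Binary threshold unit: output 1 iff the sum of binary inputs
    meets or exceeds the fixed threshold, else output 0."""
    return 1 if sum(inputs) >= threshold else 0

# With a threshold of 2 and two inputs, the unit behaves as an AND gate:
# it fires only when both inputs are active.
mcculloch_pitts([1, 1], 2)  # fires -> 1
mcculloch_pitts([1, 0], 2)  # does not fire -> 0
```

Choosing different thresholds yields different logic functions (a threshold of 1 over two inputs gives OR), which is what "fixed threshold logic" refers to.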
Hebb postulated, in his classical book The Organization of Behavior, that neurons are appropriately interconnected by self-organization and that "an existing pathway strengthens the connections between the neurons". He proposed that the connectivity of the brain changes continually as an organism learns different functional tasks, and that cell assemblies are created by such changes. By embedding a vast number of simple neurons in an interactive nervous system, it is possible to provide computational power for very sophisticated information processing. Neural models can be divided into two categories:
The first is the biological type. It encompasses networks that mimic biological neural systems, such as auditory or early vision functions.
The other type is application-driven. It depends less on faithfulness to neurobiology. For these models, the architectures are largely dictated by the application's needs. Many such neural networks are represented by the so-called connectionist models.
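Hebb's postulate of pathway strengthening is commonly formalized as the Hebbian learning rule, in which a connection weight grows in proportion to the correlated activity of the two neurons it joins. The sketch below assumes the standard form Δw_i = η · x_i · y; the function name and the learning rate value are illustrative, not from the original text.

```python
def hebbian_update(weights, x, y, lr=0.1):
    """One Hebbian step: each weight w_i is increased by lr * x_i * y,
    so a pathway strengthens only when input x_i and output y are
    simultaneously active ("cells that fire together wire together")."""
    return [w + lr * xi * y for w, xi in zip(weights, x)]

# Only the weight on the active input is strengthened:
hebbian_update([0.0, 0.0], x=[1, 0], y=1, lr=0.5)  # -> [0.5, 0.0]
```

Repeated co-activation keeps strengthening the same connection, which is one simple way the "cell assemblies" Hebb described can emerge through self-organization.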