Monday, July 23, 2007

Sigmoid functions in neural networks

Sigmoid functions are often used in neural networks to introduce nonlinearity in the model and/or to clamp signals to within a specified range. A popular neural net element computes a linear combination of its input signals, and applies a bounded sigmoid function to the result; this model can be seen as a "smoothed" variant of the classical threshold neuron.

One reason for its popularity in neural networks is that the sigmoid function satisfies the differential equation y' = y(1 − y)

The right-hand side is a low-order polynomial. Furthermore, the polynomial has factors y and (1 − y), both of which are simple to compute. Given y = sig(t) at a particular t, the derivative of the sigmoid function at that t can be obtained simply by multiplying the two factors together. This relationship simplifies the implementation of artificial neurons in artificial neural networks.
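A minimal numerical check of this identity (the sample point t = 0.5 is arbitrary):

```python
import math

def sigmoid(t):
    """Logistic sigmoid: sig(t) = 1 / (1 + exp(-t))."""
    return 1.0 / (1.0 + math.exp(-t))

def derivative_from_output(y):
    """Given y = sig(t), the derivative at t is simply y * (1 - y)."""
    return y * (1.0 - y)

# Compare the product of the two factors against a numerical derivative.
t = 0.5
y = sigmoid(t)
h = 1e-6
numeric = (sigmoid(t + h) - sigmoid(t - h)) / (2 * h)
```

Because the derivative is obtained from the output y alone, an implementation never needs to re-evaluate the exponential during the backward step.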

Tuesday, July 17, 2007

Neural Network and Connectionist Models

The human brain is an incredibly impressive information processor, even though it "works" quite a bit slower than an ordinary computer. Many researchers in artificial intelligence look to the organization of the brain as a model for building intelligent machines.
Think of a sort of "analogy" between the complex webs of interconnected neurons in a brain and the densely interconnected units making up an artificial neural network (ANN), where each unit--just like a biological neuron--is capable of taking in a number of inputs and producing an output. Consider this description: "To develop a feel for this analogy, let us consider a few facts from neurobiology. The human brain is estimated to contain a densely interconnected network of approximately 10^11 neurons, each connected, on average, to 10^4 others. Neuron activity is typically excited or inhibited through connections to other neurons. The fastest neuron switching times are known to be on the order of 10^-3 seconds---quite slow compared to computer switching speeds of 10^-10 seconds.
Yet humans are able to make surprisingly complex decisions, surprisingly quickly. For example, it requires approximately 10^-1 seconds to visually recognize your mother. Notice the sequence of neuron firings that can take place during this 10^-1-second interval cannot possibly be longer than a few hundred steps, given the switching speed of single neurons. This observation has led many to speculate that the information-processing abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons. One motivation for ANN systems is to capture this kind of highly parallel computation based on distributed representations."

[From Machine Learning (Section 4.1.1; page 82) by Tom M. Mitchell, McGraw Hill Companies, Inc. (1997).]

Thursday, July 12, 2007

Blue Brain Project

Inside our head nestles a forest of billions of neurons which weave together to make our thoughts. Man has long wanted to discover the secrets of the brain, and has done so with varying degrees of success.

Recent advancements in this area of science have been limited by the power of computers. But at Switzerland's École Polytechnique Fédérale de Lausanne, the Blue Brain Project aims to change this by simulating the structures and functions of the brain.

The Blue Brain project is the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations.

To learn more about the Blue Brain Project, follow this link:
http://bluebrain.epfl.ch/page17871.html

Friday, July 6, 2007

The network in artificial neural network

The word network in the term 'artificial neural network' arises because the function f(x) is defined as a composition of other functions gi(x), which can further be defined as compositions of other functions. This can be conveniently represented as a network structure, with arrows depicting the dependencies between variables.
A widely used type of composition is the nonlinear weighted sum, f(x) = K( Σi wi gi(x) ), where K is some predefined function, such as the hyperbolic tangent. It will be convenient in what follows to refer to a collection of functions gi as simply a vector g = (g1, g2, ..., gn).
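A sketch of this composition with made-up weights and inputs, using the hyperbolic tangent as K:

```python
import math

def unit(weights, inputs, K=math.tanh):
    """One composition step: K applied to the nonlinear weighted sum."""
    return K(sum(w * x for w, x in zip(weights, inputs)))

x = [1.0, 2.0]                  # raw input
g = [unit([0.5, -1.0], x),      # the vector g = (g1, g2)
     unit([2.0, 0.3], x)]
f = unit([1.0, 1.0], g)         # f is itself a composition over g
```

Drawing an arrow from each gi to f gives exactly the network structure the text describes.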

Thursday, July 5, 2007

Economic uses of ANN


The economic uses of ANNs may be the most exciting.

Large financial institutions have used ANNs to improve performance in such areas as bond rating, credit scoring, target marketing and evaluating loan applications. These systems are typically only a few percentage points more accurate than their predecessors, but because of the amounts of money involved, they are very profitable. ANNs are now used to analyze credit card transactions to detect likely instances of fraud.
ANNs are used to discover other kinds of crime, too. Bomb detectors in many U.S. airports use ANNs to analyze airborne trace elements to sense the presence of explosive chemicals. And the personnel office of the Chicago Police Department uses ANNs to try to root out corruption among police officers.

Architecture of Neural Network

Feed-forward networks
Feed-forward ANNs (figure 4.1) allow signals to travel one way only: from input to output. There is no feedback (loops); i.e. the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organisation is also referred to as bottom-up or top-down.
Feedback networks
Feedback networks (figure 4.2) can have signals travelling in both directions by introducing loops in the network. Feedback networks are very powerful and can get extremely complicated. Feedback networks are dynamic; their 'state' changes continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found. Feedback architectures are also referred to as interactive or recurrent, although the latter term is often used to denote feedback connections in single-layer organisations.
Figure 4.1 An example of a simple feedforward network
Figure 4.2 An example of a complicated network
Network layers
The commonest type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units. (see Figure 4.1)
The activity of the input units represents the raw information that is fed into the network.
The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.
The behaviour of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
This simple type of network is interesting because the hidden units are free to construct their own representations of the input. The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents.
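The three-layer computation described above can be sketched as follows (all weights and input values here are made-up numbers, not taken from any figure):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def layer(inputs, weights):
    """Each row of weights determines the activity of one unit above."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

inputs = [0.5, -1.0, 0.25]        # activity of the input units (raw data)
w_in_hid = [[0.1, 0.4, -0.2],     # weights between input and hidden units
            [-0.3, 0.2, 0.5]]
w_hid_out = [[0.7, -0.6]]         # weights between hidden and output units

hidden = layer(inputs, w_in_hid)
output = layer(hidden, w_hid_out)
```

Modifying w_in_hid changes when each hidden unit is active, which is exactly how a hidden unit "chooses" what it represents.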
We also distinguish single-layer and multi-layer architectures. The single-layer organisation, in which all units are connected to one another, constitutes the most general case and has greater potential computational power than hierarchically structured multi-layer organisations. In multi-layer networks, units are often numbered by layer, instead of following a global numbering.
Perceptrons
The most influential work on neural nets in the 60's went under the heading of 'perceptrons', a term coined by Frank Rosenblatt. The perceptron (figure 4.4) turns out to be an MCP model (neuron with weighted inputs) with some additional, fixed, pre-processing. Units labelled A1, A2, ..., Aj, ..., Ap are called association units and their task is to extract specific, localised features from the input images. Perceptrons mimic the basic idea behind the mammalian visual system. They were mainly used in pattern recognition even though their capabilities extended well beyond that.
In 1969 Minsky and Papert wrote a book in which they described the limitations of single-layer perceptrons. The impact that the book had was tremendous and caused a lot of neural network researchers to lose interest. The book was very well written and showed mathematically that single-layer perceptrons could not do some basic pattern recognition operations like determining the parity of a shape or determining whether a shape is connected or not. What they did not realise, until the 80's, is that given the appropriate training, multilevel perceptrons can do these operations.

What is a Neural Network?

The term neural network has traditionally been used to refer to a network of biological neurons. In more common usage, the term is often used to refer to artificial neural networks, which are composed of artificial neurons or nodes. Thus the term 'Neural Network' has two distinct connotations:
Biological neural networks are made up of real biological neurons that are connected or functionally-related in the peripheral nervous system or the central nervous system. In the field of neuroscience, they are often identified as groups of neurons that perform a specific physiological function in laboratory analysis.
Artificial neural networks are made up of interconnecting artificial neurons (usually simplified neurons) which may share some properties of biological neural networks. Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving traditional artificial intelligence tasks without necessarily attempting to model a real biological system.
Artificial neural network

An artificial neural network (ANN), often just called a "neural network" (NN), is an interconnected group of artificial neurons that uses a mathematical model or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.

Refer to it

Refer to this site, which gives a detailed description of neural networks:

http://www.psi.toronto.edu/~vincent/research/presentations/Neural%20Networks.pdf

Neural network

An artificial neural network (ANN), also called a simulated neural network (SNN) or commonly just neural network (NN) is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation.

In more practical terms neural networks are non-linear statistical data modeling or decision making tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.

Intro to Artificial Neural Network

An artificial neural network (ANN), often just called a "neural network" (NN), is an interconnected group of artificial neurons that uses a mathematical model or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.
(The term "neural network" can also mean biological-type systems.)
In more practical terms neural networks are non-linear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.

A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.

More complex neural networks are often used in Parallel Distributed Processing.
What is a neural network?
Neural Networks are a different paradigm for computing:
von Neumann machines are based on the processing/memory abstraction of human information processing.
neural networks are based on the parallel architecture of animal brains.
Neural networks are a form of multiprocessor computer system, with
simple processing elements
a high degree of interconnection
simple scalar messages
adaptive interaction between elements
A biological neuron may have as many as 10,000 different inputs, and may send its output (the presence or absence of a short-duration spike) to many other neurons. Neurons are wired up in a 3-dimensional pattern.
Real brains, however, are orders of magnitude more complex than any artificial neural network so far considered.

Neural networks

Running the network consists of
Forward pass:
the outputs are calculated and the error at the output units is calculated.
Backward pass:
The output unit error is used to alter weights on the output units. Then the error at the hidden nodes is calculated (by back-propagating the error at the output units through the weights), and the weights on the hidden nodes altered using these values.
For each data pair to be learned, a forward pass and a backward pass are performed. This is repeated over and over again until the error is at a low enough level (or we give up).
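A minimal sketch of this training loop on a toy problem (here logical OR; the network size, learning rate, bias handling, and epoch count are arbitrary choices for illustration, not prescribed by the text above):

```python
import math
import random

random.seed(1)
sig = lambda t: 1.0 / (1.0 + math.exp(-t))

n_in, n_hid = 2, 2
# Each weight row has one extra entry for a bias input fixed at 1.0.
w_h = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w_o = [random.uniform(-1, 1) for _ in range(n_hid + 1)]
rate = 0.5
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # logical OR

def forward(x):
    xb = x + [1.0]                                        # input plus bias
    h = [sig(sum(w * v for w, v in zip(row, xb))) for row in w_h]
    hb = h + [1.0]                                        # hidden plus bias
    y = sig(sum(w * v for w, v in zip(w_o, hb)))
    return xb, hb, y

for epoch in range(5000):
    for x, target in data:
        # Forward pass: the output and the error at the output unit.
        xb, hb, y = forward(x)
        err_o = (target - y) * y * (1 - y)
        # Backward pass: back-propagate the error through the output
        # weights, then alter the weights on both layers.
        err_h = [err_o * w_o[j] * hb[j] * (1 - hb[j]) for j in range(n_hid)]
        for j in range(n_hid + 1):
            w_o[j] += rate * err_o * hb[j]
        for j in range(n_hid):
            for i in range(n_in + 1):
                w_h[j][i] += rate * err_h[j] * xb[i]
```

After training, rounding the network's output reproduces the OR truth table.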

Radial Basis Function Networks
Radial basis function networks are also feedforward, but have only one hidden layer.
Typical RBF architecture:
Like BP, RBF nets can learn arbitrary mappings: the primary difference is in the hidden layer.
RBF hidden layer units have a receptive field which has a centre: that is, a particular input value at which they have a maximal output. Their output tails off as the input moves away from this point.
Generally, the hidden unit function is a Gaussian:
Gaussians with three different standard deviations.
Training RBF Networks.
RBF networks are trained by
deciding on how many hidden units there should be
deciding on their centres and the sharpnesses (standard deviation) of their Gaussians
training up the output layer.
Generally, the centres and SDs are decided on first by examining the vectors in the training data. The output layer weights are then trained using the Delta rule. BP is the most widely applied neural network technique. RBFs are gaining in popularity.
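A sketch of this procedure on made-up data: the Gaussian centres are placed on the training inputs, the standard deviation is fixed by hand, and only the output-layer weights are trained with the Delta rule.

```python
import math

# Five (x, f(x)) points of an "unknown" function (here sin, for testing).
train = [(0.75 * i, math.sin(0.75 * i)) for i in range(5)]
centres = [x for x, _ in train]   # centres chosen from the training inputs
sd = 0.7                          # the "sharpness" of each Gaussian

def hidden(x):
    """Gaussian activations: maximal at the centre, tailing off away from it."""
    return [math.exp(-((x - c) ** 2) / (2 * sd ** 2)) for c in centres]

weights = [0.0] * len(centres)

def predict(x):
    return sum(w * h for w, h in zip(weights, hidden(x)))

rate = 0.2
for epoch in range(2000):
    for x, target in train:
        err = target - predict(x)
        h = hidden(x)
        for j in range(len(weights)):   # Delta rule on the output layer only
            weights[j] += rate * err * h[j]
```

Once trained, predict(x) can be used to interpolate at inputs between the training points.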
Nets can be
trained on classification data (each output represents one class), and then used directly as classifiers of new data.
trained on (x,f(x)) points of an unknown function f, and then used to interpolate.
RBFs have the advantage that one can add extra units with centres near parts of the input which are difficult to classify. Both BP and RBFs can also be used for processing time-varying data: one can consider a window on the data:
Networks of this form (finite-impulse response) have been used in many applications.
There are also networks whose architectures are specialised for processing time-series.
Unsupervised networks:
Simple Perceptrons, BP, and RBF networks need a teacher to tell the network what the desired output should be. These are supervised networks.
In an unsupervised net, the network adapts purely in response to its inputs. Such networks can learn to pick out structure in their input.
Applications for unsupervised nets
clustering data:
exactly one of a small number of output units comes on in response to an input.
reducing the dimensionality of data:
data with high dimension (a large number of input units) is compressed into a lower dimension (small number of output units).
Although learning in these nets can be slow, running the trained net is very fast - even on a computer simulation of a neural net.
Kohonen clustering algorithm:
takes a high-dimensional input and clusters it, while retaining some topological ordering of the output.
After training, an input will cause the output units in some area to become active.
Such clustering (and dimensionality reduction) is very useful as a preprocessing stage, whether for further neural network data processing, or for more traditional techniques.
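A rough sketch of Kohonen-style clustering, with ten output units arranged along a line (the data, learning rate, and neighbourhood radius are all made-up choices for illustration):

```python
import random

random.seed(0)
n_units = 10   # output units arranged along a line
weights = [[random.random(), random.random()] for _ in range(n_units)]
data = [[random.random(), random.random()] for _ in range(200)]

def winner(x):
    """Index of the output unit whose weights are closest to the input x."""
    dists = [sum((w - v) ** 2 for w, v in zip(unit, x)) for unit in weights]
    return dists.index(min(dists))

rate, radius = 0.5, 2
for epoch in range(50):
    for x in data:
        b = winner(x)
        # Move the winning unit and its neighbours on the line toward x;
        # updating neighbours is what preserves the topological ordering.
        for j in range(max(0, b - radius), min(n_units, b + radius + 1)):
            for k in range(len(x)):
                weights[j][k] += rate * (x[k] - weights[j][k])
    rate *= 0.95   # shrink the learning rate as training proceeds
```

After training, winner(x) maps each high-dimensional input to one unit on the line, so nearby inputs tend to activate units in the same area.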
Where are Neural Networks applicable?
..... or are they just a solution in search of a problem?
Neural networks cannot do anything that cannot be done using traditional computing techniques, BUT they can do some things which would otherwise be very difficult.
In particular, they can form a model from their training data (or possibly input data) alone.
This is particularly useful with sensory data, or with data from a complex (e.g. chemical, manufacturing, or commercial) process. There may be an algorithm, but it is not known, or has too many variables. It is easier to let the network learn from examples.
Neural networks are being used:
in investment analysis:
to attempt to predict the movement of stocks, currencies, etc., from previous data. There, they are replacing earlier, simpler linear models.
in signature analysis:
as a mechanism for comparing signatures made (e.g. in a bank) with those stored. This is one of the first large-scale applications of neural networks in the USA, and is also one of the first to use a neural network chip.
in process control:
there are clearly applications to be made here: most processes cannot be determined as computable algorithms. Newcastle University Chemical Engineering Department is working with industrial partners (such as Zeneca and BP) in this area.
in monitoring:
networks have been used to monitor
· the state of aircraft engines. By monitoring vibration levels and sound, early warning of engine problems can be given.
· British Rail have also been testing a similar application monitoring diesel engines.

in marketing:
networks have been used to improve marketing mailshots. One technique is to run a test mailshot, and look at the pattern of returns from this. The idea is to find a predictive mapping from the data known about the clients to how they have responded. This mapping is then used to direct further mailshots.

A Brief history of ANN

Artificial neural networks:

ALSO REFERRED TO AS NEUROMORPHIC systems, artificial intelligence and parallel distributed processing, artificial neural networks (ANNs) are an attempt at mimicking the patterns of the human mind.
A brief history of ANNs
In the early 1940s scientists came up with the hypothesis that neurons—fundamental, active cells in all animal nervous systems—might be regarded as devices for manipulating binary numbers—computers.
Early attempts at building ANNs required a great deal of computer power to replicate a few hundred neurons. Consider that an ant's nervous system is composed of over 20,000 neurons and a human being's nervous system consists of over 100 billion neurons.
More recently, ANNs are being applied to an increasing number of complex real world problems, such as pattern recognition and classification, with the ability to generalize and make decisions about imprecise data. They offer solutions to a variety of classification problems such as speech, character, and signal recognition, as well as prediction and system modeling where physical processes are not well understood or are highly complex (Hassoun, 2000).

Neurons 101
The single cell neuron consists of the cell body, or soma, the dendrites, and the axon. The dendrites receive signals from the axons of other neurons. The small space between the axon of one neuron and the dendrite of another is the synapse. The dendrites conduct impulses toward the soma and the axon conducts impulses away from the soma.
The function of the neuron is to integrate the input it receives through its synapses on its dendrites and either generate an action potential or not (Chicurrel, 1995).

ANNs 101
Neural Networks use a set of processing elements (or nodes) loosely analogous to neurons in the brain (hence the name, neural networks). These nodes are interconnected in a network that can then identify patterns in data as it is exposed to the data. In a sense, the network learns from experience just as people do. This distinguishes neural networks from traditional computing programs, which simply follow instructions in a fixed sequential order.
The basic layout behind artificial neural networks has three layers. The bottom layer represents the input layer, in this case with 5 inputs. In the middle is something called the hidden layer, with a variable number of nodes. It is the hidden layer that performs much of the work of the network. The output layer in this case has two nodes, representing output values we are trying to determine from the inputs (Hassoun, 2000).

Possible futures of ANNs
The secrets of the human mind still elude us no matter how much we boost processing speed and capacity. That said, neural networks have given us great advancements in tasks such as Optical Character Recognition, financial forecasting and even in medical diagnosis.
For any group in which a known interrelationship exists with an unknown outcome there is a possibility that ANNs will be helpful. While the need for computer-based training and e-learning courses grows, the need to develop computer systems that can learn by themselves and improve decision-making will be an ongoing goal of information technology.

Artificial Neural Networks - Their Use and Their Application


Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques.
Other advantages include:

Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
Self-Organisation: An ANN can create its own organisation or representation of the information it receives during learning time.
Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to a corresponding degradation of performance; however, some network capabilities may be retained even with major network damage.
Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurones) working in parallel to solve a specific problem.

An artificial neuron is a device with many inputs and one output. The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong in the taught list of input patterns, the firing rule is used to determine whether to fire or not.

Applications of neural networks:

*sales forecasting

*industrial process control

*customer research

*data validation

*risk management

*target marketing

The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm in order to perform a specific task; i.e. there is no need to understand the internal mechanisms of that task. They are also very well suited for real-time systems because of their fast response and computational times, which are due to their parallel architecture.

Neural networks also contribute to other areas of research such as neurology and psychology. They are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain.

Artificial Neural Networks

Refer to this site:

http://en.wikipedia.org/wiki/Artificial_neural_network

which tells about what Artificial Neural Networks are and their applications.

Artificial neural networks

Introduction

It was often assumed in the early years of neural network research that implementation in special hardware would be required to take advantage of their capabilities. Such hardware, in particular, would probably be analog and involve multiple parallel processing elements and connections between them. However, the tremendous growth in the digital computing power of conventional von Neumann machines has allowed NNW simulations in software to achieve great success in a number of applications. Meanwhile, the development of hardware especially designed for NNWs has been slow, with only modest commercial success. This overview looks at some possible reasons for this slow development and some of the areas where hardware NNWs have in fact been very useful and where future growth will occur.

NNW Applications in General

NNWs, despite all appearances to the contrary, are appearing in ever increasing numbers of real-world applications and are making real money:
OCR (Optical Character Recognition)
· Caere Inc ($3M profit on $55M revenue in 1997) "OmniPage Pro 6.0 significantly
increases accuracy with its exclusive Quadratic Neural Network(TM) (QNN)
technology, an enhancement to its industry-leading OCR engine..."
Data Mining
· HNC ($23M profit on $110M revenue in 1997). Their flagship product is Falcon.
"Falcon is a neural network-based system that examines transaction, cardholder, and
merchant data to detect a wide range of credit card fraud...".

These days a purchase of a new scanner typically includes a commercial OCR program. The algorithms used are proprietary, but most OCR programs are believed to use NNWs. (Calera, started in 1986, did not admit to using NNWs in its OCR programs until 1992, when Caere began advertising the use of them in its OCR products.) Designers of OCR programs may choose NNWs for one or more of the processing steps while using other techniques, such as conventional AI (if-then rules), statistical models, or hidden Markov models, for the rest. The point is that NNWs are becoming commonly used tools but, just like other math techniques such as the FFT and least-squares fits, they are still only tools, not the whole solution. Few real problems of interest can be totally solved by a single NNW.

Introduction to ANN

An Artificial Neural Network is a network of many very simple processors ("units"), each possibly having a (small amount of) local memory. The units are connected by unidirectional communication channels ("connections"), which carry numeric (as opposed to symbolic) data. The units operate only on their local data and on the inputs they receive via the connections. The design motivation is what distinguishes neural networks from other mathematical techniques: a neural network is a processing device, either an algorithm or actual hardware, whose design was motivated by the design and functioning of human brains and components thereof.
There are many different types of Neural Networks, each of which has different strengths particular to their applications. The abilities of different networks can be related to their structure, dynamics and learning methods.
Neural Networks offer improved performance over conventional technologies in areas including: Machine Vision, Robust Pattern Detection, Signal Filtering, Virtual Reality, Data Segmentation, Data Compression, Data Mining, Text Mining, Artificial Life, Adaptive Control, Optimisation and Scheduling, Complex Mapping and more.



Refer to URL:
http://www.statsoft.com/textbook/stneunet.html

Artificial Neural Network

1. Introduction to neural networks
1.1 What is a Neural Network?
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurones. This is true of ANNs as well.
1.2 Historical background
Neural network simulations appear to be a recent development. However, this field was established before the advent of computers, and has survived at least one major setback and several eras.
Many important advances have been boosted by the use of inexpensive computer emulations. Following an initial period of enthusiasm, the field survived a period of frustration and disrepute. During this period, when funding and professional support was minimal, important advances were made by relatively few researchers. These pioneers were able to develop convincing technology which surpassed the limitations identified by Minsky and Papert. In 1969, Minsky and Papert published a book that summed up a general feeling of frustration with neural networks among researchers; it was accepted by most without further analysis. Currently, the neural network field enjoys a resurgence of interest and a corresponding increase in funding.
The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts. But the technology available at that time did not allow them to do much.
1.3 Why use neural networks?
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyse. This expert can then be used to provide projections given new situations of interest and answer "what if" questions.Other advantages include:
Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.
Self-Organisation: An ANN can create its own organisation or representation of the information it receives during learning time.
Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.
1.4 Neural networks versus conventional computers
Neural networks take a different approach to problem solving than that of conventional computers. Conventional computers use an algorithmic approach i.e. the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known the computer cannot solve the problem. That restricts the problem solving capability of conventional computers to problems that we already understand and know how to solve. But computers would be so much more useful if they could do things that we don't exactly know how to do.
Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurones) working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.
On the other hand, conventional computers use a cognitive approach to problem solving: the way the problem is to be solved must be known and stated in small, unambiguous instructions. These instructions are then converted into a high-level-language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.
Neural networks and conventional algorithmic computers are not in competition but complement each other. There are tasks more suited to an algorithmic approach, like arithmetic operations, and tasks that are more suited to neural networks. Moreover, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.

Introduction to ANN

INTRODUCTION


An Artificial Neural Network (ANN) is an abstract simulation of a real nervous system that contains a collection of neuron units communicating with each other via axon connections. Such a model bears a strong resemblance to the axons and dendrites in a nervous system.
The first fundamental model of neural nets was proposed in 1943 by McCulloch and Pitts in terms of a computational model of "nervous activity". The McCulloch-Pitts neuron is a binary device and each neuron has a fixed threshold logic. This model led to the work of John von Neumann, Marvin Minsky, Frank Rosenblatt, and many others.
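The McCulloch-Pitts neuron described above can be sketched in a few lines of Python. This is a minimal illustration; the function name and the AND-gate weights and threshold are choices made for the example, not part of the original model's notation:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Binary threshold unit: fires (1) if and only if the weighted
    sum of its inputs reaches the fixed threshold, otherwise 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and a threshold of 2, the neuron behaves as an AND gate:
# both inputs must be active for the weighted sum to reach the threshold.
and_out = [mcculloch_pitts(x, [1, 1], 2) for x in ([0, 0], [0, 1], [1, 0], [1, 1])]
```

Lowering the threshold to 1 with the same weights turns the unit into an OR gate, which is the sense in which such neurons implement simple threshold logic.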


Hebb postulated, in his classical book The Organization of Behavior, that neurons were appropriately interconnected by self-organization and that "an existing pathway strengthens the connections between the neurons". He proposed that the connectivity of the brain is continually changing as an organism learns different functional tasks, and that cell assemblies are created by such changes. By embedding a vast number of simple neurons in an interactive nervous system, it is possible to provide computational power for very sophisticated information processing. Neural models can be divided into two categories:


The first is the biological type. It encompasses networks mimicking biological neural systems such as audio functions or early vision functions.
The other type is application-driven. It depends less on faithfulness to neurobiology. For these models, the architectures are largely dictated by the application needs. Many such neural networks are represented by the so-called connectionist models.
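Hebb's postulate that an active pathway strengthens its connections can be sketched as a simple weight-update rule: a connection grows in proportion to the joint activity of the units it links. This is a toy illustration in Python; the function name, learning rate and activity values are assumptions made for the example:

```python
def hebbian_update(weights, pre, post, rate=0.1):
    """Hebb's rule: strengthen each connection in proportion to the
    joint activity of its pre-synaptic input and the post-synaptic output."""
    return [w + rate * x * post for w, x in zip(weights, pre)]

# Two input connections; only input 0 is repeatedly co-active with the output.
w = [0.0, 0.0]
for _ in range(5):
    w = hebbian_update(w, pre=[1.0, 0.0], post=1.0)
# The used pathway (w[0]) has strengthened; the idle one (w[1]) is unchanged.
```

Note that this bare form of the rule only ever strengthens connections; practical variants add decay or normalisation to keep the weights bounded.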

ANN-How They Learn

Artificial neural networks typically start out with randomized weights for all their neurons. This means that they don't "know" anything and must be trained to solve the particular problem for which they are intended. Broadly speaking, there are two methods for training an ANN, depending on the problem it must solve.

A self-organizing ANN (often called a Kohonen network, after its inventor) is exposed to large amounts of data and tends to discover patterns and relationships in that data. Researchers often use this type to analyze experimental data.
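As a rough sketch of the self-organizing idea, the toy Python below keeps a handful of one-dimensional units and, for each input, moves the closest unit (the "winner") a little toward it, so the units drift toward clusters in the data without any teacher. The unit positions, learning rate and cluster centres are all illustrative, and a real Kohonen network would also update the winner's neighbours:

```python
import random

def kohonen_step(units, x, rate=0.2):
    """One self-organizing update: find the unit closest to the input
    and move it a fraction of the way toward that input."""
    winner = min(range(len(units)), key=lambda i: abs(units[i] - x))
    units[winner] += rate * (x - units[winner])
    return units

# Two units exposed to data drawn from two clusters (around 0 and 10):
random.seed(0)
units = [4.0, 6.0]
for _ in range(200):
    x = random.gauss(0.0, 0.5) if random.random() < 0.5 else random.gauss(10.0, 0.5)
    kohonen_step(units, x)
# After training, each unit has drifted toward one cluster centre.
```

The network has, in effect, discovered the two-cluster structure of the data on its own, which is the sense in which such networks "find patterns" in experimental data.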

A back-propagation ANN, conversely, is trained by humans to perform specific tasks. During the training period, the teacher evaluates whether the ANN's output is correct. If it's correct, the neural weightings that produced that output are reinforced; if the output is incorrect, those weightings responsible are diminished. This type is most often used for cognitive research and for problem-solving applications.
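The reinforce-or-diminish training described above can be illustrated with a simple error-driven update on a single threshold unit (the classic perceptron rule, a simpler relative of full back-propagation). The learning rate and the OR-function training set are illustrative choices:

```python
def train_step(weights, bias, inputs, target, rate=0.5):
    """Error-driven update: when the output is wrong, shift each weight
    in the direction that reduces the error; when it is right, the
    error is zero and the weights are left unchanged."""
    output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0
    error = target - output
    weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    bias += rate * error
    return weights, bias

# Teach a unit the OR function from labelled examples:
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(10):
    for x, t in data:
        w, b = train_step(w, b, x, t)
```

Here the "teacher" is the target label attached to each example; back-propagation generalises this idea by passing the error backwards through hidden layers as well.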

Artificial Neural Networks

Network layers
The commonest type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units. (see Figure 4.1)
The activity of the input units represents the raw information that is fed into the network.
The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units.
The behaviour of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
This simple type of network is interesting because the hidden units are free to construct their own representations of the input. The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents.
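A forward pass through such a three-layer network can be sketched as follows; the sigmoid activation and the particular weight values are illustrative assumptions for the example:

```python
import math

def sigmoid(t):
    """Squash a real value into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-t))

def forward(x, w_hidden, w_output):
    """One forward pass: each hidden unit applies a sigmoid to the
    weighted sum of the inputs, and each output unit does the same
    with the hidden activities."""
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    return [sigmoid(sum(wi * hi for wi, hi in zip(row, hidden))) for row in w_output]

# Two inputs, two hidden units, one output (weights chosen arbitrarily):
y = forward([1.0, 0.5],
            w_hidden=[[0.4, -0.2], [0.3, 0.9]],
            w_output=[[1.0, -1.5]])
# y is a single activity in (0, 1), determined entirely by the weights.
```

Changing `w_hidden` changes when each hidden unit becomes active, which is exactly the sense in which the hidden units "choose what they represent".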
We also distinguish single-layer and multi-layer architectures. The single-layer organisation, in which all units are connected to one another, constitutes the most general case and has more potential computational power than hierarchically structured multi-layer organisations. In multi-layer networks, units are often numbered by layer, instead of following a global numbering.

What is ANN?

artificial neural network


<artificial intelligence> (ANN, commonly just "neural network" or "neural net") A network of many very simple processors ("units" or "neurons"), each possibly having a (small amount of) local memory. The units are connected by unidirectional communication channels ("connections"), which carry numeric (as opposed to symbolic) data. The units operate only on their local data and on the inputs they receive via the connections.
A neural network is a processing device, either an algorithm, or actual hardware, whose design was inspired by the design and functioning of animal brains and components thereof.
Most neural networks have some sort of "training" rule whereby the weights of connections are adjusted on the basis of presented patterns. In other words, neural networks "learn" from examples, just like children learn to recognise dogs from examples of dogs, and exhibit some structural capability for generalisation.
Neurons are often elementary non-linear signal processors (in the limit they are simple threshold discriminators). Another feature of NNs which distinguishes them from other computing devices is a high degree of interconnection which allows a high degree of parallelism. Further, there is no idle memory containing data and programs, but rather each neuron is pre-programmed and continuously active.
The term "neural net" should logically, but in common usage never does, also include biological neural networks, whose elementary structures are far more complicated than the mathematical models used for ANNs.

what is neural network?

What is a Neural Network?

A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. The motivation for the development of neural network technology stemmed from the desire to develop an artificial system that could perform "intelligent" tasks similar to those performed by the human brain. Neural networks resemble the human brain in the following two ways:

1. A neural network acquires knowledge through learning.
2. A neural network's knowledge is stored within inter-neuron connection strengths known as synaptic weights.
The true power and advantage of neural networks lies in their ability to represent both linear and non-linear relationships and in their ability to learn these relationships directly from the data being modeled. Traditional linear models are simply inadequate when it comes to modeling data that contains non-linear characteristics.

NEURAL NETWORKS

Introduction to neural networks

Like nanotechnology, neural networking is the use of technology to design and manufacture (intelligent) machines, built for specific purposes and programmed to perform specific tasks. However, unlike nanomachines, neural networks are designed to work like a nerve cell system, more similar to the workings of the human or biological brain in its physical form.

With today's complex society there is a growing need for semi-autonomous systems that can do some of the thinking and controlling for us. The logic of a neural network approximates our own thinking structures the closest and gives us the opportunity to endow specific intelligence to designed control systems.

Neural Network applications

What exactly are neural networks used for? Artificial neural networks are powerful tools for classification, empirical modeling and pattern recognition, for example. They are useful in fields as diverse as finance and investing, business, medicine, sports, science and manufacturing.

They are used in "predicting" the rise and fall of stock prices, race course outcomes (horse and dog racing), hospital length of stay, weather forecasting, earthquake prediction, plastics and concrete testing, and gene recognition.

In the field of robotics and artificial intelligence, artificial neural networks are crucial to the development of the robotic brain, its logic, its ability to learn, its processing and analyses of input.

Neural network software and programming

In view of the complexity of designing neural networks, it is not surprising that computers play a major role. No computer is of much use without software, and applications made for working with neural networks, covering design, logic and implementation, are becoming more plentiful and mainstream. However, this is a growth industry, and as such there is always room for writing your own.

Neural network hardware

On the hardware front of neural network systems, great strides have been made. Mimicking or simulating a neural network can be done in different ways. The biological approach requires growing and conditioning, or programming, actual biological nerve cells into specific behavior.

Introduction to ANN

Artificial neural networks are computers whose architecture is modeled after the brain. They typically consist of many hundreds of simple processing units which are wired together in a complex communication network. Each unit or node is a simplified model of a real neuron which fires (sends off a new signal) if it receives a sufficiently strong input signal from the other nodes to which it is connected. The strength of these connections may be varied in order for the network to perform different tasks corresponding to different patterns of node firing activity.
Neural networks are very different from conventional computers - they are composed of many rather feeble processing units which are connected into a network. Their computational power depends on working together on any task - this is sometimes termed parallel processing. There is no central CPU following a logical sequence of rules - indeed there is no set of rules or program. Computation is related to a dynamic process of node firings. This structure is much closer to the physical workings of the brain and leads to a new type of computer that is rather good at a range of complex tasks.

Tuesday, July 3, 2007

Welcome

Dear Students
I extend a warm welcome to all of you to join me and explore one of the most interesting, challenging and highly explored research areas of Computer Science - Artificial Neural Networks. This area provides solutions to varied problems in a very efficient manner. So let's get together to find out what this area is all about and what is in store for us. All the very best for this semester.

Suganya.