
Neural Networks: What They Are and the History Behind Them

Simio Staff

November 3, 2021

The concept of a ‘machine that thinks’, an inanimate entity capable of reasoning, traces back to the scientists of Ancient Greece, who sought to imbue objects with analytical capabilities. However, the Ancient Greeks had little success in actualizing the thinking machine because so little was then understood about how the human brain functioned and how nervous activity occurred. It would take centuries to go from conceptualizing the thinking machine to mapping out how the human brain produces complex patterns through connected brain cells, otherwise known as neurons.

The 1943 research paper ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’ by Warren S. McCulloch and Walter Pitts introduced the world to a formal model of the brain’s neural activity. The collaborators outlined the process by which neurons within the nervous system signal one another to initiate an impulse. The paper postulated that a neuron, connected to thousands of other neurons, receives varying amounts of energy and sends out energy only when its threshold has been reached.

This important paper went on to pave the way for integrating neural networks in the STEM fields of AI, machine learning, and deep learning. The research became the foundation for leveraging neural networks within computer programs to allow them to independently recognize patterns and solve problems.

What Are Neural Networks?

Neural networks reflect the behavioral patterns of the human brain by mimicking the way biological neurons communicate during neural activity. In the fields of machine learning and deep learning, neural networks are referred to as artificial neural networks (ANNs) or simulated neural networks (SNNs). These names highlight the fact that the application of neural networks within these fields is inspired by the human brain.

Artificial neural networks consist of node layers: an input layer, one or more hidden layers, and an output layer. Here, nodes represent artificial neurons, and each node has associated weights and a threshold. When the output of an individual node is above its defined threshold, the node is activated and sends data to the next node layer in the network. Conversely, if the output does not exceed the node’s threshold, no data is passed to the next layer. Thus, a neuron can be modeled as a threshold logic unit (TLU): it takes in multiple weighted inputs, sums them, and produces an output once the sum meets or surpasses its defined threshold (theta).

Labeling quantities such as the inputs, threshold, and output provides a better understanding of how neural networks function. For example, the inputs are labeled X1, X2, …, Xn and their weights W1, W2, …, Wn. Here, X represents an input to a neuron (or node) and W its weight. The weighted sum of the inputs yields the activation level (a) of a node: a = (X1 * W1) + (X2 * W2) + … + (Xn * Wn).

In scenarios where the activation level (a) is equal to or greater than the threshold (theta), an output (y) is produced; if the activation level is less than theta, no output occurs. Thus, theta must be met or surpassed for a neuron to send a weighted output to the next node layer.
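A minimal sketch in Python may make this arithmetic concrete. The function name tlu_output is an illustrative choice, not something from the original article:

```python
def tlu_output(inputs, weights, theta):
    """Threshold logic unit: fire (output 1) only when the
    weighted sum of the inputs reaches the threshold theta."""
    # Activation level: a = (X1 * W1) + (X2 * W2) + ... + (Xn * Wn)
    a = sum(x * w for x, w in zip(inputs, weights))
    return 1 if a >= theta else 0

# A node with three inputs and a threshold of 4:
print(tlu_output([1, 0, 0], [3, 5, 2], theta=4))  # 0, since a = 3 < 4
```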

How Threshold Logic Units Learn

Neural networks must be trained to improve their accuracy at problem-solving and decision-making before they can be put to use in AI applications. Improving the accuracy of a neural network starts with gaining insight into the learning process of threshold logic units.

So, how do threshold logic units learn and how are neural networks trained?

The short answer is that a TLU learns by continuously adjusting its weights and threshold. The longer answer involves the processes of supervised and unsupervised training. For supervised training, neural networks are trained by feeding in a series of examples consisting of:

  • Diverse items to be classified, and
  • The proper classifications assigned to those items

The neural network takes the examples provided and uses them to adjust its weights until it matches the items to their proper classifications. Hence, the neural network analyzes both the input and output data to learn a mapping from inputs to the correct outputs.
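The article does not name a specific weight-adjustment algorithm; the perceptron learning rule sketched below is one classic way such supervised training can work, with the learning rate, epoch count, and function name all illustrative assumptions:

```python
def train_perceptron(examples, n_inputs, lr=0.1, epochs=20):
    """Supervised training sketch: nudge weights toward the correct
    classification whenever the TLU's prediction is wrong.
    `examples` is a list of (inputs, correct_label) pairs."""
    weights = [0.0] * n_inputs
    bias = 0.0  # equivalent to a learned negative threshold
    for _ in range(epochs):
        for inputs, label in examples:
            a = sum(x * w for x, w in zip(inputs, weights)) + bias
            prediction = 1 if a >= 0 else 0
            error = label - prediction  # -1, 0, or +1
            # Adjust each weight in proportion to its input
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias
```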

For unsupervised training, only the series of items to be classified (the input data) is provided. The neural network applies statistical analysis to pinpoint differences or anomalies within the input data set. This means unsupervised training enables the network to work with data and produce results without any labeled outputs steering its analysis.
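As a stand-in for the kind of statistical analysis involved, the sketch below flags anomalies by their distance from the mean of unlabeled data; this z-score approach is an illustrative assumption, not a method named in the article:

```python
def flag_anomalies(data, z_threshold=2.0):
    """Unsupervised sketch: mark values that sit far from the
    mean of the (unlabeled) data set as anomalies."""
    mean = sum(data) / len(data)
    variance = sum((x - mean) ** 2 for x in data) / len(data)
    std = variance ** 0.5
    return [x for x in data if std > 0 and abs(x - mean) / std > z_threshold]

print(flag_anomalies([10, 11, 9, 10, 12, 48]))  # [48]
```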

Using a real-world scenario helps illustrate how neural networks learn. For example, a single node can be trained to decide whether additional equipment is needed on a shop floor using binary values. First, let’s assume that three factors influence its decision-making:

  1. Is there enough accommodating capacity? (Yes = 1, No = 0)
  2. Does demand justify additional equipment? (Yes = 1, No = 0)
  3. Are there enough funds? (Yes = 1, No = 0)

Next, the following input assumptions apply:

  • X1 = 1: yes, enough space is available for new equipment
  • X2 = 0: demand does not justify the need for new equipment
  • X3 = 0: not enough funds are available for new equipment

Next, weights must be assigned to each input. Here, a larger weight signifies greater influence on the final decision:

  • W1 = 3, because space is important to accommodate new equipment
  • W2 = 5, because demand is critical to deciding whether new equipment is necessary
  • W3 = 2, because funds can always be raised if the need for new equipment is urgent

Lastly, a threshold value must be assigned so that the node evaluates the information provided and answers correctly. Here, the threshold value is 4, which translates to a bias of -4: instead of comparing the weighted sum against 4, we add -4 to the sum and compare against 0. Substituting all these values into the formula a = (X1 * W1) + (X2 * W2) + … + (Xn * Wn), with the bias added, we have:

Y = (1 * 3) + (0 * 5) + (0 * 2) - 4 = -1.

The final result is -1, which is less than 0, meaning the weighted sum fell short of the threshold of 4; the node does not activate, so its output is 0. This means new equipment is not required or cannot be purchased at this time.
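Putting the worked example into code ties the pieces together. The numbers match the article’s scenario; the variable names and the printout are illustrative choices:

```python
# Shop-floor decision node from the example above
inputs  = [1, 0, 0]   # space available, demand justified, funds available
weights = [3, 5, 2]   # influence of each factor on the decision
bias    = -4          # threshold of 4, folded in as a bias

a = sum(x * w for x, w in zip(inputs, weights)) + bias
print(a)                   # -1
print(1 if a >= 0 else 0)  # 0: no new equipment for now
```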

The neural network training highlighted in these examples is feedforward because data flows in one direction only: from input to output. Backpropagation is another technique used to train neural networks. In backpropagation, the error attributed to each neuron is propagated backward through the network and used to make the appropriate adjustments to the model’s parameters.
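On the smallest possible network, a single sigmoid neuron, one backpropagation step can be sketched as below; real networks repeat this layer by layer, and the squared-error loss and learning rate shown here are illustrative assumptions rather than the article’s specification:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def backprop_step(inputs, weights, bias, target, lr=0.5):
    """One backpropagation step on a single sigmoid neuron:
    forward pass, then push the output error's gradient back
    onto each weight (chain rule) and update by gradient descent."""
    # Forward pass
    a = sum(x * w for x, w in zip(inputs, weights)) + bias
    y = sigmoid(a)
    # Gradient of the squared-error loss L = 0.5*(y - target)**2
    # with respect to the pre-activation a: (y - target) * y * (1 - y)
    delta = (y - target) * y * (1.0 - y)
    # Each weight's share of the blame is delta times its input
    weights = [w - lr * delta * x for w, x in zip(weights, inputs)]
    bias -= lr * delta
    return weights, bias

# One update on the shop-floor node, pushing it toward outputting 1:
w, b = backprop_step([1, 0, 0], [3.0, 5.0, 2.0], -4.0, target=1)
```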