# What is an activation function?

by Stephen M. Walker II, Co-Founder / CEO

## What is an activation function?

An activation function in the context of an artificial neural network is a mathematical function applied to a node's input to produce the node's output, which then serves as input to the next layer in the network. The primary purpose of an activation function is to introduce non-linearity into the network, enabling it to learn complex patterns and perform tasks beyond mere linear classification or regression.

Non-linear activation functions are essential because they allow neural networks to approximate non-linear mappings from inputs to outputs. Without non-linearity, a neural network, regardless of how many layers it has, would behave just like a single-layer perceptron, which can only solve linearly separable problems.

Common types of activation functions include:

- Linear — A simple function that maintains the input's proportionality (e.g., identity function).
- Sigmoid — Maps input to a value between 0 and 1, useful for binary classification.
- Tanh (Hyperbolic Tangent) — Similar to sigmoid but maps input to values between -1 and 1, with zero-centered outputs.
- ReLU (Rectified Linear Unit) — Outputs the input directly if positive, otherwise outputs zero. It is widely used due to its simplicity and efficiency.
- Leaky ReLU — A variant of ReLU that allows a small, non-zero gradient when the input is negative.
- Softmax — Often used in the output layer of a classifier to represent probabilities across multiple classes.

The choice of activation function can depend on the specific requirements of the task, such as the need for probabilistic outputs or the type of problem (e.g., classification vs. regression). Some activation functions, like ReLU, have become popular due to their effectiveness in deep learning models and their ability to mitigate issues like the vanishing gradient problem.

Activation functions are crucial for the functioning of neural networks, as they provide the necessary non-linearity for handling complex data representations and enabling deep learning models to solve a wide range of problems.
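As a rough sketch of the idea above, a node computes a weighted sum of its inputs plus a bias and passes the result through an activation function. The weights, bias, and input values below are purely illustrative:

```python
import numpy as np

def relu(x):
    # ReLU: pass positive values through, zero out negatives
    return np.maximum(0.0, x)

def node_output(inputs, weights, bias, activation=relu):
    # A node's forward pass: weighted sum + bias, then the activation
    return activation(np.dot(weights, inputs) + bias)

x = np.array([0.5, -1.2, 3.0])   # inputs from the previous layer
w = np.array([0.4, 0.3, -0.2])   # this node's weights
out = node_output(x, w, bias=0.1)
# The weighted sum is -0.66, so ReLU clamps the output to 0.0
```

Swapping `relu` for a different activation changes how that same weighted sum is transformed before it feeds the next layer.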

## What are the common activation functions used in AI?

Activation functions in neural networks are mathematical functions that determine the output of a node or neuron. They introduce non-linearity into the network, allowing it to learn complex patterns and relationships in the data. Here are some common activation functions:

1. Sigmoid or Logistic Activation Function — This function maps any input to a value between 0 and 1, making it useful for models where the output is a probability.

2. Tanh or Hyperbolic Tangent Activation Function — Similar to the sigmoid function, but it maps any input to a value between -1 and 1. It is zero-centered, making it easier for models to learn from negative input values.

3. ReLU (Rectified Linear Unit) Activation Function — This function outputs the input directly if it is positive; otherwise, it outputs zero. It is the most widely used activation function in deep learning due to its computational efficiency and its ability to speed up learning in networks.

4. Leaky ReLU Activation Function — A variant of ReLU, it allows a small, non-zero output for negative input values, addressing the "dying ReLU" problem where neurons can sometimes get stuck in the off state and stop contributing to the learning process.

5. Softmax Activation Function — This function is often used in the output layer of a classifier, where the model needs to make a multi-class prediction. It gives the probability distribution over multiple classes, with all the probabilities summing up to 1.

6. Linear or Identity Activation Function — This function maintains the proportionality of the input, meaning the output is the same as the input. It is often used in problems where the output is a real value, such as regression problems.

7. Exponential Linear Unit (ELU) Activation Function — A smooth variant of ReLU that tends to drive the training cost down faster and can produce more accurate results. Negative inputs are mapped to a value that approaches -1 (for the common setting α = 1) as the input approaches negative infinity.

The choice of activation function depends on the specific requirements of the task and the architecture of the neural network.
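The functions listed above can be sketched in a few lines of NumPy. This is a minimal reference implementation; the `alpha` defaults are common choices rather than the only ones:

```python
import numpy as np

def sigmoid(x):
    # Squashes any input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Zero-centered squashing into (-1, 1)
    return np.tanh(x)

def relu(x):
    # Identity for positive inputs, zero otherwise
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Small non-zero slope for negative inputs
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    # Probability distribution over classes; subtract the max
    # for numerical stability before exponentiating
    e = np.exp(x - np.max(x))
    return e / e.sum()

def linear(x):
    # Identity: output equals input
    return x

def elu(x, alpha=1.0):
    # Smoothly approaches -alpha for very negative inputs
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
```

For example, `softmax(np.array([1.0, 2.0, 3.0]))` returns three probabilities that sum to 1, with the largest weight on the largest input.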

## What is the difference between linear and non-linear activation functions?

The primary difference between linear and non-linear activation functions in the context of neural networks lies in their ability to handle complexity and introduce non-linearity into the network.

A linear activation function, also known as the identity function, maintains the proportionality of the input, meaning the output is the same as the input. It applies no transformation to the weighted sum of the inputs; it simply passes the value through. As a result, a neural network with a linear activation function, regardless of the number of layers, behaves just like a single-layer perceptron or a linear regression model. This is because the composition of multiple linear functions is still a linear function. Therefore, a network with linear activation functions can only solve linearly separable problems and cannot learn complex patterns or relationships in the data.

On the other hand, non-linear activation functions introduce non-linearity into the network, making it capable of learning and performing more complex tasks. They also make gradient-based learning effective: because their derivatives vary with the input, backpropagation can compute meaningful weight updates that improve the network's predictions. Non-linear activation functions can map any real value as input to a specific range, depending on the function. For example, a sigmoid function maps any input to a value between 0 and 1. This non-linearity allows neural networks to develop complex representations and functions based on the input data. Non-linear activation functions are essential for deep learning models as they enable the model to learn from a wide variety of data and differentiate between outputs.

In short, linear activation functions preserve the proportionality of the input but limit a network to what a single linear model can do, while non-linear activation functions give the network the expressive power to learn complex patterns and perform more demanding tasks.
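A small numerical check illustrates why stacking linear layers gains nothing: composing two linear maps is itself a single linear map. The matrix shapes and random seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" with identity (linear) activations
W1 = rng.standard_normal((4, 3))  # first layer: 3 inputs -> 4 units
W2 = rng.standard_normal((2, 4))  # second layer: 4 units -> 2 outputs
x = rng.standard_normal(3)

# Passing x through both layers in sequence ...
deep = W2 @ (W1 @ x)

# ... is identical to one layer whose weight matrix is W2 @ W1
collapsed = (W2 @ W1) @ x
assert np.allclose(deep, collapsed)
```

Inserting a non-linearity such as ReLU between `W1` and `W2` breaks this equivalence, which is exactly what lets depth add representational power.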

## More terms

### What is the computational complexity of common AI algorithms?

The computational complexity of common AI algorithms varies with the specific algorithm. For instance, making a prediction with a simple linear regression model is O(n), where n is the number of features. Deep neural networks are significantly more expensive: a single forward pass through a fully connected layer is roughly O(n^2) in the layer width, and training multiplies that cost by the number of examples and epochs. A higher computational complexity means the algorithm requires more resources and time to train and run, which can affect the efficiency and practicality of the AI model.

### What is selection in a genetic algorithm?

Selection is the process of choosing individuals from a population to serve as parents for producing offspring in a genetic algorithm. The goal of selection is to increase the fitness of the population by favoring individuals with higher fitness values. Common selection methods include:

- Tournament selection — a small number of individuals are randomly chosen from the population, and the one with the highest fitness value is selected as the winner.
- Roulette wheel selection — each individual is assigned a selection probability proportional to its fitness value, and an individual is chosen by spinning a roulette wheel whose sections correspond to those probabilities.
- Rank-based selection — individuals are ranked by their fitness values, and a certain proportion of the highest-ranked individuals are selected for reproduction.
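As an illustrative sketch, tournament selection fits in a few lines. The population and fitness function below are toy examples chosen for clarity:

```python
import random

def tournament_selection(population, fitness, k=3):
    # Randomly sample k contestants, then return the fittest one
    contestants = random.sample(population, k)
    return max(contestants, key=fitness)

# Toy population where an individual's fitness is its own value
pop = [1, 7, 3, 9, 4, 2]
parent = tournament_selection(pop, fitness=lambda ind: ind, k=3)
```

Raising `k` increases selection pressure: with `k = len(pop)` the tournament always returns the fittest individual, while small `k` gives weaker individuals an occasional chance, preserving diversity.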