What are hyperparameters?
by Stephen M. Walker II, Co-Founder / CEO
Hyperparameters are the configuration settings that structure the learning process in machine learning models. Unlike model parameters, which are learned from the data during training, hyperparameters are set before training begins; they control the behavior of the training algorithm and can significantly impact the performance of the model.
For example, in neural networks, hyperparameters can include the learning rate, the number of layers, the number of units in each layer, and the type of activation function to use. In other machine learning models, such as support vector machines, hyperparameters might include the kernel type and the regularization parameter.
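As a minimal sketch, here is how a few of those neural-network hyperparameters appear when configuring scikit-learn's MLPClassifier; the specific values are illustrative, not recommendations:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Hyperparameters: chosen before training, not learned from the data.
clf = MLPClassifier(
    hidden_layer_sizes=(32, 16),  # number of layers and units per layer
    activation="relu",            # type of activation function
    learning_rate_init=0.001,     # learning rate
    max_iter=200,
    random_state=0,
)

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf.fit(X, y)  # the weights and biases (model parameters) are learned here
```

Everything passed to the constructor is fixed before `fit` is called; only the weights and biases change during training.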
The process of selecting the optimal hyperparameters is known as hyperparameter tuning or optimization. This is a critical step because the right set of hyperparameters can lead to more accurate models, while poorly chosen hyperparameters can result in models that underfit or overfit the data.
Hyperparameter tuning can be performed using various strategies, such as grid search, random search, Bayesian optimization, or gradient-based optimization. The goal is to find the combination of hyperparameters that yields the best performance on a validation set, which can then be tested on a separate test set to assess the model's generalization capability.
In practice, hyperparameter tuning can be computationally expensive and time-consuming, especially for complex models and large datasets. Therefore, it's common to use techniques like cross-validation to systematically evaluate the performance of different hyperparameter configurations.
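A short sketch of that idea, assuming scikit-learn: each candidate value of Ridge regression's regularization hyperparameter `alpha` is scored with 5-fold cross-validation rather than a single train/validation split:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=100, n_features=5, noise=10.0, random_state=0)

# Score each candidate value of the regularization hyperparameter alpha
# with 5-fold cross-validation instead of a single validation split.
for alpha in [0.01, 1.0, 100.0]:
    scores = cross_val_score(Ridge(alpha=alpha), X, y, cv=5)
    print(f"alpha={alpha}: mean R^2 = {scores.mean():.3f}")
```

The configuration with the best mean score would then be retrained on the full training set and evaluated once on the held-out test set.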
What is a hyperparameter vs parameter?
A parameter is a variable that the model learns from the training data. These are internal to the model and are automatically updated during the learning process. For instance, in a linear regression model, the coefficients of the predictors are parameters. Similarly, in a neural network, the weights and biases associated with the nodes are parameters. These parameters are crucial as they define the model's representation of the data and directly influence the predictions made by the model.
On the other hand, a hyperparameter is a configuration variable that is external to the model and whose value cannot be estimated from data. These are set before the learning process begins and control the learning process itself. Examples of hyperparameters include the learning rate in optimization algorithms, the number of hidden layers in a neural network, or the number of clusters in a clustering algorithm. The choice of hyperparameters can significantly impact the performance of the model, and they are often tuned to optimize model performance.
While parameters are learned from the data during the training process, hyperparameters are set by the practitioner before training and guide the learning process. Both are crucial in building effective machine learning models.
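The distinction is easy to see in a small illustrative scikit-learn sketch: `alpha` is a hyperparameter chosen by the practitioner, while `coef_` and `intercept_` are parameters the model learns from the data:

```python
import numpy as np
from sklearn.linear_model import Ridge

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

model = Ridge(alpha=0.1)  # alpha: a hyperparameter, set before training
model.fit(X, y)

# coef_ and intercept_: parameters, estimated from the data during fit
print(model.coef_, model.intercept_)
```

Changing `alpha` changes *how* the parameters are estimated; the parameters themselves are never set by hand.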
What is an example of Hyperparameter tuning?
Hyperparameter tuning is the process of selecting the optimal set of hyperparameters for a machine learning model. Hyperparameters are configuration variables that are set before the training process of a model. They control the learning process itself, rather than being learned from the data. The choice of hyperparameters can significantly impact the model's performance, including its accuracy and generalization.
An example of hyperparameter tuning is the grid search method. In grid search, a set of hyperparameter values is defined to search over, and the algorithm tries all possible combinations of these values. For instance, if the hyperparameters include the learning rate and the number of hidden layers in a neural network, the grid search will try all combinations of these two hyperparameters within the defined range. This method is exhaustive and can be computationally expensive, but it ensures that the best combination is found within the defined search space.
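A minimal grid search sketch using scikit-learn's GridSearchCV, tuning the learning rate and hidden-layer configuration of a small network (the candidate values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=150, n_features=8, random_state=0)

# Grid search tries every combination of the listed values: 2 x 2 = 4 models,
# each scored with 3-fold cross-validation.
param_grid = {
    "learning_rate_init": [0.001, 0.01],
    "hidden_layer_sizes": [(16,), (16, 16)],
}
search = GridSearchCV(MLPClassifier(max_iter=300, random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

The cost grows multiplicatively with each hyperparameter added to the grid, which is why exhaustive search becomes impractical for large search spaces.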
Another method is random search, which randomly samples combinations of hyperparameters within a defined range. This method can be more efficient than grid search, especially when dealing with a large number of hyperparameters, as it does not need to try all possible combinations.
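A comparable random search sketch, assuming scikit-learn and SciPy are available: here ten values of logistic regression's regularization hyperparameter `C` are sampled from a log-uniform distribution rather than enumerated:

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Random search samples n_iter configurations from the distributions below,
# rather than enumerating every combination.
search = RandomizedSearchCV(
    LogisticRegression(max_iter=1000),
    param_distributions={"C": loguniform(1e-3, 1e3)},
    n_iter=10, cv=5, random_state=0,
)
search.fit(X, y)
print(search.best_params_["C"], search.best_score_)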
Bayesian optimization is a more advanced method that uses a probabilistic model to guide the search for optimal hyperparameters. It builds a probability model of the objective function and uses it to select the most promising hyperparameters to evaluate in the true objective function.
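The loop below sketches that idea on a toy one-dimensional problem, using a Gaussian process as the probabilistic surrogate and a simple upper-confidence-bound rule in place of a full acquisition function; the objective is a hypothetical stand-in for an expensive training run:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy objective: validation score as a function of one hyperparameter.
def objective(h):
    return -(h - 2.0) ** 2  # peaks at h = 2

candidates = np.linspace(0.0, 5.0, 101).reshape(-1, 1)
observed_h = [[0.5], [4.5]]  # initial evaluations
observed_s = [objective(0.5), objective(4.5)]

for _ in range(8):
    # 1. Fit a probabilistic surrogate to the evaluations so far
    #    (kernel parameters kept fixed for a predictable sketch).
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  optimizer=None).fit(observed_h, observed_s)
    # 2. Pick the candidate maximizing an upper confidence bound (mean + 2*std).
    mean, std = gp.predict(candidates, return_std=True)
    next_h = candidates[np.argmax(mean + 2.0 * std)]
    # 3. Evaluate the true objective there and record the result.
    observed_h.append(list(next_h))
    observed_s.append(objective(next_h[0]))

best = observed_h[int(np.argmax(observed_s))][0]
print(best)  # should land near the optimum at h = 2
```

Because the surrogate is cheap to query, each expensive evaluation is spent where the model predicts the most promise; production libraries use more sophisticated acquisition functions, but the structure of the loop is the same.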
Manual search is another method of hyperparameter tuning, often used when the number of hyperparameters is relatively small. This method allows the data scientist to have fine-grained control over the hyperparameters. The data scientist defines a set of possible values for each hyperparameter, and then manually selects and adjusts the values until the model performance is satisfactory.
Automated hyperparameter tuning methods, such as grid search, random search, and Bayesian optimization, use an algorithm to search for the optimal values. These methods are practical when there are many hyperparameters with wide value ranges.
In all these methods, it's crucial to have a robust evaluation criterion to judge the model's performance with each set of hyperparameters. A common technique for this is k-fold cross-validation.
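An explicit k-fold loop, sketched with scikit-learn's KFold on synthetic data, shows how a single hyperparameter configuration is scored:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=60)

# k-fold cross-validation: each fold serves once as the validation set,
# and the k scores are averaged into one estimate for this configuration.
fold_scores = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=1.0).fit(X[train_idx], y[train_idx])
    fold_scores.append(model.score(X[val_idx], y[val_idx]))

print(np.mean(fold_scores))
```

Averaging across folds gives a more stable estimate than any single train/validation split, which matters when tuning decisions hinge on small score differences.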
How do hyperparameters work?
Hyperparameters in machine learning are configuration variables that control the learning process of an algorithm. They are set before the learning process begins and are not learned from the data during the training process. Examples of hyperparameters include the learning rate, the number of hidden layers in a neural network, the number of decision trees in a random forest, and the batch size for mini-batch gradient descent.
Hyperparameters are distinct from model parameters, which are learned during the training process. Model parameters define how to use input data to get the desired output. For instance, in a neural network, the weights connecting the nodes are model parameters.
The process of selecting the optimal set of hyperparameters for a machine learning model is known as hyperparameter tuning or hyperparameter optimization. This is an important step in the model development process, as the choice of hyperparameters can significantly impact the model's performance.
There are several methods for hyperparameter optimization, including:

Grid Search — This method involves defining a set of hyperparameter values and trying all possible combinations of these values. The combination that results in the best model performance is chosen.

Random Search — This method involves randomly selecting combinations of hyperparameters from a defined set. It is less systematic than grid search but can be more efficient when only a small number of hyperparameters significantly influence the model's performance.

Bayesian Optimization — This method builds a probabilistic model of the function mapping from hyperparameter values to the target evaluation metric. It then uses this model to select the likely-to-be-optimal hyperparameters to evaluate next.

Gradient-Based Optimization — This method computes the gradient of the evaluation metric with respect to the hyperparameters and updates the hyperparameters in the direction that improves the evaluation metric.
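A toy sketch of the gradient-based idea: treat the validation score as a function of one hyperparameter, estimate its gradient by finite differences, and step uphill. In practice the gradient comes from differentiating through the training procedure itself, not from a closed-form objective like this one:

```python
# Toy validation score as a function of a single hyperparameter h
# (a hypothetical stand-in for retraining and re-evaluating a model).
def val_score(h):
    return -(h - 0.3) ** 2  # best score at h = 0.3

h = 1.0     # initial hyperparameter value
step = 0.1  # update step size
eps = 1e-4  # finite-difference width

# Estimate d(score)/dh numerically and move h uphill on the validation score.
for _ in range(100):
    grad = (val_score(h + eps) - val_score(h - eps)) / (2 * eps)
    h += step * grad

print(h)  # converges toward 0.3
```

The appeal of this family of methods is that each update uses directional information rather than blind sampling, though it requires the evaluation metric to be (approximately) differentiable in the hyperparameters.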
The choice of hyperparameter optimization method depends on the specific problem, the computational resources available, and the complexity of the model.
Hyperparameters in Machine Learning
Hyperparameters are preset configurations that dictate the behavior of machine learning algorithms. Unlike model parameters, which are learned from data, hyperparameters, such as the learning rate, the number of hidden layers, or the number of clusters in k-means, are manually selected to optimize performance. Their proper tuning is crucial, as they significantly influence the learning outcomes and predictive capabilities of models.
Choosing hyperparameters involves a blend of expertise and systematic experimentation. Practitioners may start with heuristic approaches or rely on automated methods like grid search, random search, and optimization algorithms to navigate the vast configuration space. These techniques aim to identify the best-performing hyperparameter set on a validation dataset, thereby enhancing model efficacy while avoiding overfitting.
The challenge in hyperparameter selection lies in its dependency on both the data and the task, requiring a careful balance between model complexity and generalization. In practice, hyperparameters are pivotal across various domains:
- In deep learning and neural networks, they adjust the network's structure and learning process.
- Support vector machines depend on hyperparameters to balance classification accuracy and model complexity.
- The number of clusters in k-means clustering or the depth of trees in decision trees and random forests determines the granularity of the model's understanding.
- Gradient boosting's performance is tuned through the learning rate and the number of stages.
- In reinforcement learning, hyperparameters like the discount factor shape the agent's learning strategy.
- Natural language processing and computer vision applications rely on hyperparameters to refine models for tasks like word embedding and feature extraction.
Ultimately, well-tuned hyperparameters can lead to more efficient learning processes and improved algorithm performance across a wide array of machine learning applications.