What is Nvidia A100?
The Nvidia A100 is a graphics processing unit (GPU) designed by Nvidia. It is part of the Ampere architecture and is designed for data centers and high-performance computing.
Understand the complex world of Generative AI with our glossary of key terms and concepts.
In abductive logic programming, a programmer writes a set of rules that describe a set of possible explanations for a given observation. The programmer then runs the program on a set of data, and the program outputs the most likely explanation for the data.
Abductive reasoning, a key concept in AI, is a form of logical inference that starts with an observation or set of observations and then seeks the simplest and most likely explanation. Unlike induction, which generalizes from specific cases, it reasons from an observation to a hypothesis that would account for it.
An abstract data type (ADT) is a mathematical model for data types. It is a way of classifying data types based on their behavior and properties, rather than their implementation details.
Abstraction in AI involves simplifying complex systems by hiding unnecessary details. This process is crucial in the implementation of data structures and algorithms, allowing for more efficient and manageable operations.
AI, or artificial intelligence, is a branch of computer science that deals with creating intelligent machines that can think and work like humans. AI is changing the way we live and work, and it is poised to have a major impact on the economy in the years to come.
Accuracy, Precision, Recall, and F1 Score are metrics used in classification tasks to evaluate the performance of a model. Accuracy measures the proportion of all predictions that are correct, Precision measures the proportion of predicted positives that are actually positive, Recall measures the proportion of actual positives that the model correctly identifies, and F1 Score is the harmonic mean of Precision and Recall.
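As a concrete illustration, here is a minimal sketch, assuming binary 0/1 labels, of how all four metrics fall out of the counts of true/false positives and negatives:

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

print(classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))
# (0.6, 0.666..., 0.666..., 0.666...)
```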
An action language in AI is a formal language for specifying state transition systems, used to describe the effects of actions a machine can execute in order to complete a task. The actions described can be as simple as moving an object from one location to another, or more complex, such as making a decision based on a set of data.
Action model learning is a process in AI whereby a computer system learns the preconditions and effects of actions by observing another agent performing a task. This is a powerful learning technique that can be used to teach a computer system new skills without the need for explicit programming; for example, it has been used to develop robotic systems that learn new tasks by observing humans.
There are a number of possible actions that can be taken in AI. Some of these include:
An activation function is a mathematical function used to determine the output of a node in a neural network. The function maps the node's input value (x) to its output value (y), and is usually a sigmoid function or a rectified linear unit (ReLU).
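A minimal sketch of the two activation functions named above:

```python
import math

def sigmoid(x):
    # Squashes any real input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Passes positive inputs through unchanged; zeroes out negatives.
    return max(0.0, x)

print(sigmoid(0.0), relu(-2.0), relu(3.0))  # 0.5 0.0 3.0
```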
An adaptive algorithm is an algorithm that changes its behavior based on feedback or data. In AI, this means that the algorithm can learn and improve its performance over time. This is different from a traditional algorithm, which is static and does not change.
An adaptive neuro fuzzy inference system (ANFIS) is a type of artificial intelligence that combines the benefits of both neural networks and fuzzy logic systems. ANFIS is able to learn and make decisions based on data, just like a neural network, but it can also handle imprecise or incomplete data, like a fuzzy logic system. This makes ANFIS ideal for applications where data is constantly changing or is not always accurate.
An admissible heuristic is a heuristic that never overestimates the true cost of reaching the goal from the current state. Because of this property, search algorithms that rely on it, such as A*, are guaranteed to find an optimal (shortest) path.
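For example, on a grid where movement is 4-directional with unit cost, Manhattan distance is a standard admissible heuristic, since no path can be shorter than it; a minimal sketch:

```python
def manhattan(node, goal):
    # Admissible for 4-directional, unit-cost grid movement:
    # it never overestimates the true remaining path cost.
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 4)))  # 7
```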
Affective computing is a branch of artificial intelligence that deals with the study and design of systems and devices that can recognize, interpret, process, and simulate human emotions. It is an interdisciplinary field that draws on psychology, cognitive science, neuroscience, and engineering.
There are three primary types of agent architectures in AI:
Neurocybernetics is the study of how the nervous system and the brain interact with cybernetic systems. It is a relatively new field that is still being explored, but it has the potential to revolutionize the way we think about artificial intelligence (AI).
An AI accelerator is a type of hardware accelerator that is specifically designed to speed up the training of artificial intelligence models. AI accelerators can be used to train both supervised and unsupervised models, and are often used in conjunction with GPUs.
Ensuring Large Language Models (LLMs) operate safely and handle adversarial inputs effectively.
An AI-complete problem is one that is as hard as the central problem of artificial intelligence itself: making computers as genuinely intelligent as people. Such problems, like open-ended natural language understanding, cannot be solved by a narrow, special-purpose algorithm alone; solving one would require human-level general intelligence, or a human in the loop.
AIML is an acronym for Artificial Intelligence Markup Language. It is an XML-based language used by programmers to create natural language software agents. AIML was developed by Richard Wallace and the Alicebot free software community in the late 1990s and early 2000s.
An algorithm is a set of instructions that are followed in order to complete a task. In AI, algorithms are used to create and train models that can then be used to make predictions or decisions.
There are many ways to design algorithms that are more efficient in AI. One way is to use heuristics, which are rules of thumb that can help guide the search for a solution. Another way is to use meta-learning, which is a technique for learning from previous experience to improve future performance. Finally, algorithms can also be made more efficient by using parallel computing, which allows multiple computations to be done at the same time.
Algorithmic probability, in the context of AI, refers to the probability that a universal computer running a randomly chosen program produces a given output. Introduced by Ray Solomonoff, it assigns higher probability to outputs that can be generated by shorter programs, formalizing the intuition that simpler explanations are more likely.
AlphaGo, developed by Google DeepMind, is a revolutionary computer program known for its prowess in the board game Go. It gained global recognition for being the first AI to defeat a professional human Go player.
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies like AI21 Labs, Anthropic, Cohere, and Stability AI, along with a broad set of capabilities for building generative AI applications with security, privacy, and responsible AI.
Ambient intelligence (AmI) is a term coined in the late 1990s, building on technology visionary Mark Weiser's idea of ubiquitous computing, to describe a world where technology is so embedded into our everyday lives that it becomes invisible.
The analysis of algorithms involves understanding the performance of algorithms in terms of time and space complexity. This analysis is crucial in determining the efficiency of an algorithm and can greatly influence the choice of algorithm for a particular task. The time complexity of an algorithm is typically expressed in Big O notation, which provides an upper bound on the time taken by an algorithm as a function of the input size.
There are many factors to consider when building an AI analytics system, but some of the most important ones are listed below:
Andrej Karpathy is a renowned computer scientist and artificial intelligence researcher known for his work on deep learning and neural networks. He served as the director of artificial intelligence and Autopilot Vision at Tesla, and currently works for OpenAI.
Answer set programming (ASP) is a form of declarative programming based on the stable model semantics of logic programming. It is used for knowledge representation and reasoning under the answer set semantics.
An anytime algorithm is an algorithm that can return a valid solution even if it is interrupted before it finishes, and that keeps improving its solution as more time becomes available. An anytime search algorithm, for example, is designed to find some solution to a problem as quickly as possible and then continue searching for a better one while time remains.
An API is an interface that allows two pieces of software to communicate with each other. In the context of AI, an API can be used to allow a [machine learning](/glossary/machine-learning) model to interact with a web application or another piece of software. This can be used to provide predictions or recommendations to users of the application.
Approximate string matching is a technique used in AI to find strings that are similar to a given string. This technique is often used to find misspellings or to find strings that are close to a given string.
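Edit (Levenshtein) distance is one common similarity measure used for approximate matching; a minimal sketch of the standard dynamic-programming computation:

```python
def levenshtein(a, b):
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("recieve", "receive"))  # 2
```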
Approximation error is the difference between the estimated value of a function and the actual value of the function. In AI, approximation error is often used to measure the accuracy of a [machine learning](/glossary/machine-learning) algorithm.
Argumentation framework is a system that allows computers to reason and debate like humans. It is based on the principles of logic and argumentation, and it can be used to solve problems and make decisions.
Artificial General Intelligence (AGI) refers to a type of artificial intelligence that has the ability to understand, learn, and apply knowledge in a way that is indistinguishable from human intelligence across a wide range of domains and tasks.
An artificial immune system (AIS) is a computational system that is inspired by, and mimics, the immune system of vertebrates. The immune system is a complex network of cells and molecules that protect the body from infection and disease. AISs are designed to detect and respond to computer viruses and other malicious software in a similar way that the immune system detects and responds to biological threats.
Artificial Intelligence (AI) is a branch of computer science that aims to create systems capable of performing tasks that would typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, language understanding, and content generation.
Artificial intelligence (AI) is a branch of computer science that deals with the creation of intelligent agents, which are systems that can reason, learn, and act autonomously.
An artificial neural network (ANN) is a computational model that is inspired by the way biological neural networks work. These models are used to recognize patterns, cluster data, and make predictions.
The Association for the Advancement of Artificial Intelligence (AAAI) is a nonprofit scientific society devoted to advancing the scientific understanding of artificial intelligence (AI) and its applications. Founded in 1979, AAAI is the world’s largest AI society and a leading publisher of AI research. AAAI sponsors conferences, symposia, and workshops, as well as educational programs and public outreach efforts. AAAI also awards grants, scholarships, and other forms of support to AI researchers and students.
There's no definitive answer to this question since it depends on a number of factors, including the specific algorithm in question and the implementation details. However, in general, the asymptotic computational complexity of an algorithm is the amount of time or resources required to run the algorithm as the input size grows. In other words, it's a measure of how well the algorithm scales.
Attention mechanisms are a type of model that allows Large Language Models (LLMs) to weigh different parts of the input differently when making predictions.
When it comes to AI, one of the key questions is how do we attribute causes to events? This is a difficult question to answer, as there are often many factors that contribute to any given event. However, there are some methods that can be used to try and attribute causes to events.
Augmented reality (AR) is a technology that superimposes computer-generated images on a user's view of the real world, providing a composite view.
AutoGPT is an open-source autonomous AI agent that, given a goal in natural language, breaks it down into sub-tasks and uses the internet and other tools to achieve it. It is based on the GPT-4 language model and can automate workflows, analyze data, and generate new suggestions without the need for continuous user input.
An automaton is a self-operating machine, or a machine that can operate without human intervention. In AI, an automaton is a machine that can learn and make decisions on its own.
There are many benefits of using automated planning and scheduling in AI. One benefit is that it can help to optimize resources and save time. Automated planning and scheduling can also help to improve decision-making and coordination among team members. Additionally, it can help to reduce the need for manual intervention, and improve the overall efficiency of an organization.
Automated reasoning is a subfield of AI that deals with the automation of deduction. Deduction is the process of drawing conclusions from given premises. Automated reasoning allows computers to reason deductively from a set of given premises. This can be used to solve problems in a wide range of fields, including mathematics, philosophy, and artificial intelligence.
Autonomous computing is a term used to describe a computer system that is able to manage itself. This can be done through a variety of means, such as self-configuration, self-optimization, self-healing, and self-protection.
There are many potential benefits of autonomous cars, especially when it comes to safety. One of the biggest benefits is that autonomous cars can help to reduce the number of accidents on the road. They can do this by reacting faster than human drivers to potential hazards and by making better decisions about when to brake or swerve.
There are many benefits to using autonomous robots in AI. One benefit is that they can help to speed up the process of training data for [machine learning](/glossary/machine-learning) algorithms. They can also help to improve the accuracy of these algorithms by providing more data for the algorithm to learn from. Additionally, autonomous robots can help to reduce the cost of data collection and annotation by doing these tasks themselves. Finally, autonomous robots can also help to improve the safety of data collection by avoiding dangerous or difficult-to-reach areas.
Backpropagation is a method for training neural networks. It is a method of training where the error is propagated back through the network in order to update the weights. This is done by first calculating the error at the output layer, and then propagating the error back through the network. The weights are then updated according to the error.
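A minimal numeric sketch of this loop for a single sigmoid neuron; the weight, data, and learning rate here are illustrative assumptions, not a real network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w, b = 0.5, 0.0        # initial weight and bias
x, target = 1.0, 0.0   # one training example
lr = 0.5               # learning rate

for _ in range(100):
    y = sigmoid(w * x + b)       # forward pass: compute the output
    error = y - target           # dE/dy for E = 0.5 * (y - target)**2
    delta = error * y * (1 - y)  # propagate the error back through the sigmoid
    w -= lr * delta * x          # update the weight according to the error
    b -= lr * delta              # update the bias

print(round(sigmoid(w * x + b), 3))  # output has been driven toward 0.0
```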
BPTT is a neural network training algorithm that is used to train recurrent neural networks. It is a variant of the backpropagation algorithm that is used to train feedforward neural networks. BPTT is an efficient algorithm for training recurrent neural networks because it takes into account the dependencies between the current input and the previous inputs.
Backward chaining is a technique used in artificial intelligence (AI) that involves working backwards from a goal to determine the best course of action to take. It is often used in planning and problem-solving applications.
A bag-of-words model is a simple way to represent text data. The text is represented as the multiset (the "bag") of its words, typically as a count for each word in the vocabulary. The order of the words is not taken into account, which is what gives the model its name.
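A minimal sketch of the representation, using word counts:

```python
from collections import Counter

def bag_of_words(text):
    # Word order is discarded; only word identity and count survive.
    return Counter(text.lower().split())

print(bag_of_words("the cat sat on the mat"))
# Counter({'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1})
```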
Batch normalization is a technique used to improve the training of deep neural networks. It is a form of regularization that allows the network to learn faster and reduces the chances of overfitting.
In Bayesian programming, a computer program is given a set of data and a set of rules, and then asked to predict the probability of something happening. For example, a Bayesian program might be given data about the weather and asked to predict the probability of rain.
The bees algorithm is a swarm intelligence algorithm that was developed to solve optimization problems. It is based on the foraging behavior of bees. The algorithm has been used to solve problems such as the travelling salesman problem and the knapsack problem.
Behavior informatics is the study of how people interact with technology and systems. It encompasses everything from how people use search engines to how they interact with social media. By understanding how people interact with technology, we can design better systems that are more user-friendly and efficient.
A behavior tree is a decision tree-like structure used to create AI behaviors. It is composed of nodes, which can be either actions or conditions. Conditions are used to test whether or not an action should be taken, while actions are the actual behaviors that are executed.
The belief-desire-intention (BDI) software model is a computational model of the mind that is used in artificial intelligence (AI) research. The model is based on the belief-desire-intention (BDI) theory of mind, which is a psychological theory of how humans think and make decisions.
BERT is a type of Large Language Model (LLM) that is trained to understand the context of language from both directions, learning by predicting words that have been masked out of a passage.
The bias-variance tradeoff is a pivotal concept in machine learning that encapsulates the tension between a model's sensitivity to its particular training data (variance) and the error introduced by overly simple assumptions (bias). This article explores the nuances of this tradeoff, its impact on model performance, and strategies to strike an optimal balance.
Big data is a term that refers to the large volume of data that organizations generate on a daily basis. This data can come from a variety of sources, including social media, website interactions, and sensor data.
In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows.
A binary tree is a hierarchical data structure in which each node has at most two children, referred to as the left child and the right child. This structure allows for efficient search, insert, and delete operations, making it a fundamental concept in computer science and artificial intelligence.
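A minimal sketch of a binary search tree, one common binary-tree variant in which the left/right ordering supports efficient search:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None   # left child
        self.right = None  # right child

def insert(root, key):
    """Insert a key while preserving the left < node <= right ordering."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
```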
The blackboard system is a classic AI architecture built around the metaphor of experts working at a shared blackboard. The blackboard is a central repository where all information about the problem is stored; independent knowledge sources add new information to the blackboard and use what is already there to work incrementally toward a solution.
The BLEU Score, or Bilingual Evaluation Understudy, is a metric used in machine translation to evaluate the quality of translated text. It measures the similarity between the machine-generated translation and the human reference translation, considering precision of n-grams.
A Boltzmann machine is a type of stochastic recurrent neural network. It is named after Ludwig Boltzmann, whose Boltzmann distribution (a statistical distribution describing the distribution of energy states in a system) governs how the network samples its states.
The Boolean satisfiability problem, also known as SAT, is a problem in AI that is used to determine whether or not a given Boolean formula can be satisfied by a set of truth values. A Boolean formula is a mathematical formula that consists of a set of variables, each of which can take on one of two values, true or false. The problem is to determine whether there exists a set of truth values for the variables that makes the formula true.
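A minimal brute-force sketch of the satisfiability check; it simply tries all 2^n truth assignments, so it is practical only for small formulas:

```python
from itertools import product

def satisfiable(formula, variables):
    """Try every truth assignment; feasible only for small formulas."""
    for values in product([False, True], repeat=len(variables)):
        if formula(dict(zip(variables, values))):
            return True
    return False

# (a OR b) AND (NOT a OR c)
f = lambda v: (v["a"] or v["b"]) and (not v["a"] or v["c"])
print(satisfiable(f, ["a", "b", "c"]))  # True
```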
A Brain-Computer Interface (BCI) is a direct communication pathway between an enhanced or wired brain and an external device. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.
Brain technology refers to the development of technologies that are designed to help us understand and manipulate the brain. This can include everything from drugs to improve cognitive function, to brain-computer interfaces that allow us to control devices with our minds.
In AI, the branching factor of a tree is the number of children that each node has. A higher branching factor means that each node has more children, and thus the tree is more complex. A lower branching factor means that each node has fewer children, and thus the tree is simpler. The optimal branching factor depends on the specific problem that the AI is trying to solve.
In AI, brute-force search is a method of problem solving in which all possible solutions are systematically checked for correctness. It is also known as exhaustive search or complete search.
A capsule neural network is a type of artificial intelligence that is designed to better model hierarchical relationships. Unlike traditional AI models, which are based on a flat, fully-connected structure, capsule neural networks are based on a hierarchical structure that is similar to the way that the brain processes information.
Case-based reasoning is a type of AI that is used to solve problems by looking at similar cases that have already been solved. This type of AI is often used in fields such as medicine, law, and engineering.
Chain of thought prompting in Machine Learning refers to the process of guiding a [machine learning](/glossary/machine-learning) model through a series of related prompts to generate more coherent and contextually relevant outputs. This process can significantly enhance the performance of [machine learning](/glossary/machine-learning) models as it provides them with a structured way to generate outputs.
A chatbot is a computer program that simulates human conversation. It uses artificial intelligence (AI) to understand what people say and respond in a way that simulates a human conversation. Chatbots are used in a variety of applications, including customer service, marketing, and sales.
ChatGPT is an AI chatbot developed by OpenAI that uses natural language processing to create humanlike conversational dialogue.
Cloud robotics is a field of robotics that deals with the design, construction and operation of robots that are connected to the cloud. The cloud allows robots to share data and resources, and to be controlled and monitored remotely.
Cluster analysis is a method used in AI to group similar data points together, minimizing the variance within each group. It's a powerful tool for discovering natural groupings in data, with applications ranging from customer segmentation to fraud detection and gene function grouping.
Cobweb is a [machine learning](/glossary/machine-learning) algorithm developed by Douglas Fisher in the late 1980s. It is a type of artificial intelligence that performs incremental conceptual clustering, creating and interpreting hierarchical models of data. Cobweb is used to find patterns in data and to make predictions about future data.
Cognitive architecture, whether biological like the human brain or artificial like an AI system, is a theoretical framework that helps us understand the organization and interaction of cognitive processes. It's used in AI to design intelligent systems that mimic human cognition, with examples including SOAR, ACT-R, and CLARION.
Cognitive computing is a branch of AI that deals with making computers think and learn like humans. It involves creating algorithms that can understand, reason, and learn from data. This allows computers to solve problems and make decisions in ways that are similar to humans.
Cognitive science is the study of the mind and its processes. It covers a wide range of topics, from how the mind works to how it learns and remembers. Cognitive science is also concerned with the application of these findings to artificial intelligence (AI).
There are many combinatorial optimization problems in AI, but some of the most common ones are the knapsack problem, the traveling salesman problem, and the minimum spanning tree problem.
A committee machine is a [machine learning](/glossary/machine-learning) algorithm that is trained using a committee of models, each of which is trained on a different subset of the data. The predictions of the committee are then combined to make a final prediction.
Commonsense knowledge is a type of knowledge that is considered to be basic and self-evident. In the context of artificial intelligence (AI), commonsense knowledge refers to the ability of a computer system to understand and process information that is considered to be common sense.
Commonsense reasoning is one of the most important and difficult problems in AI. It is the ability to make deductions based on everyday knowledge, such as the fact that people have bodies and can move around, that objects can be moved and combined, and that events happen in time.
Computational chemistry is the branch of chemistry that uses computers to perform chemical calculations and simulations. It is a relatively new field that has only emerged in the past few decades, as computers have become more powerful and sophisticated.
The computational complexity of common AI algorithms varies depending on the specific algorithm. For instance, the computational complexity of a simple linear regression algorithm is O(n), where n is the number of features. Conversely, the computational complexity of more complex algorithms like deep learning neural networks is significantly higher and can reach O(n^2) or even O(n^3) in some cases, where n is the number of nodes in the network. It's important to note that a higher computational complexity often means the algorithm requires more resources and time to train and run, which can impact the efficiency and effectiveness of the AI model.
Computational creativity is a field of AI research that deals with the creation of new, original artifacts using computational methods. These artifacts can be anything from poems to paintings to pieces of music.
Computational cybernetics is a field of AI that deals with the design and analysis of computational systems that can learn and adapt. It is concerned with the ways in which these systems can be made to behave in ways that are similar to the way humans and animals learn and adapt.
Computational humor is a branch of AI that deals with the generation and recognition of humor. It is an interdisciplinary field that combines techniques from artificial intelligence, cognitive science, linguistics, and psychology.
Computational intelligence (CI) is a branch of artificial intelligence (AI) that deals with the design and development of intelligent computer systems. CI systems are able to learn and adapt to new situations and environments, making them well-suited for tasks that are difficult or impossible for traditional AI systems.
Computational learning theory is a subfield of artificial intelligence (AI) that deals with the design and analysis of [machine learning](/glossary/machine-learning) algorithms. The goal of computational learning theory is to understand the computational properties of these algorithms, including their ability to learn from data and generalize to new data.
Computational linguistics is the study of how to create computer programs that can process and understand human language. It is a branch of artificial intelligence that deals with natural language processing.
Computational mathematics plays a crucial role in AI, providing the foundation for data representation, computation, automation, efficiency, and accuracy.
Computational Neuroscience is a field that leverages mathematical tools and theories to investigate brain function. It involves the development and application of computational models and methodologies to understand the principles that govern the structure, physiology and cognitive abilities of the nervous system.
Computational Number Theory in AI involves efficient computation with large numbers and complex mathematical operations. Monte Carlo algorithms, randomized methods that trade a small, controllable probability of error for large gains in speed, are among the algorithms commonly used for this purpose.
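As one concrete example, here is a minimal sketch of the Fermat primality test, a classic Monte Carlo algorithm from number theory; it can wrongly report "probably prime" for rare composites (e.g. Carmichael numbers), which is the algorithm's small probability of error:

```python
import random

def fermat_is_probably_prime(n, trials=20):
    """Monte Carlo primality test: a 'composite' verdict is always
    correct; a 'prime' verdict is wrong only with small probability."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:  # Fermat's little theorem is violated
            return False           # definitely composite
    return True                    # probably prime

print(fermat_is_probably_prime(2**61 - 1))  # True: a Mersenne prime
```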
There are many problems that AI is trying to solve, but one of the most important is the problem of how to make computers smarter. AI is trying to find ways to make computers better at understanding and responding to the world around them. This is a difficult problem because it requires computers to be able to learn and understand like humans do. However, if AI can solve this problem, it will have a huge impact on the world.
There are many ways to collect data for training a [machine learning](/glossary/machine-learning) algorithm, but some methods are more effective than others. One of the most important things to consider when collecting data is the quality of the data. The data should be representative of the real-world data that the algorithm will be used on, and it should be free of any errors or biases.
CAutoD, or computer-automated design, is the use of search and optimization techniques, such as evolutionary computation, to automate parts of the engineering design process, from tuning design parameters to exploring entire design spaces.
Computer vision is a field of artificial intelligence that deals with providing computers with the ability to interpret and understand digital images. It is closely related to fields such as image processing, pattern recognition, and [machine learning](/glossary/machine-learning).
Concept drift is a phenomenon that occurs when the statistical properties of a data set change over time. This can pose a challenge for machine learning algorithms that are trained on data sets with a fixed set of statistical properties. When the properties of the data set change, the performance of the machine learning algorithm can degrade.
Connectionism is a branch of artificial intelligence that is inspired by the way the brain works. The basic idea is that the brain is made up of a large number of simple processing units, or neurons, that are interconnected. This interconnected network of neurons is able to learn and perform complex tasks by adjusting the strength of the connections between the neurons.
A consistent (or monotone) heuristic is a heuristic whose estimate at any state never exceeds the cost of moving to a neighboring state plus the estimate from that neighbor: h(n) ≤ c(n, n') + h(n'). Every consistent heuristic is also admissible, and consistency guarantees that search algorithms such as A* never need to re-expand a node once it has been processed.
A constrained conditional model (CCM) is a machine learning and inference framework that augments the learning of conditional (probabilistic or discriminative) models with declarative constraints. By restricting the space of allowed outputs, the constraints let the model make predictions that respect prior domain knowledge, even when the training data alone would not enforce it.
Constraint logic programming is a subfield of AI that deals with the use of constraints to solve problems. Constraints can be used to restrict the search space of a problem, making it easier to find a solution. CLP can be used for a variety of tasks, including planning, scheduling, and resource allocation.
Constraint programming is a subfield of AI that deals with the problems of finding solutions to constraints. In other words, it is a way of solving problems by imposing restrictions on the possible solutions.
A constructed language is a language that is created artificially, typically for a specific purpose such as international communication or to serve as a lingua franca. Some well-known examples of constructed languages are Esperanto, Klingon, and Dothraki.
Context Analysis in AI refers to the process of understanding the surrounding information that gives meaning to a piece of data. It involves the interpretation of various factors such as the source, time, location, and other relevant details that can influence the interpretation of the data. Context Analysis plays a crucial role in various AI applications such as natural language processing, information retrieval, and knowledge representation.
In Large Language Models (LLMs), a context window refers to the amount of text (measured in tokens) that the model can consider at once when generating a response or continuing a piece of text. It sets the limit for how much previous information the model can refer to while making predictions.
In AI, control theory is the study of how agents can best interact with their environment to achieve a desired goal. The goal of control theory is to design algorithms that enable agents to make optimal decisions, while taking into account the uncertainty of the environment.
A convolutional neural network (CNN) is a type of neural network that is typically used in computer vision tasks. CNNs are designed to process data in a grid-like fashion, making them well-suited for image processing. CNNs typically consist of an input layer, a series of hidden layers, and an output layer. The hidden layers of a CNN typically contain a series of convolutional layers and pooling layers.
AI Copilots are intelligent systems designed to assist in tasks like writing design documents, creating data architecture diagrams, and auditing SQLs against approved patterns. They are expected to become more prevalent in data architecture, helping to expedite the daily process of a data architect and potentially leading to cost optimization as productivity increases.
Crossover is a technique used in artificial intelligence, in which two or more different solutions are combined to create a new solution. The new solution is then evaluated to see if it is better than the original solutions. If it is, then it is used as the new starting point for the next generation of solutions. This process is repeated until a satisfactory solution is found.
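A minimal sketch of single-point crossover on list-encoded candidate solutions; the encodings are illustrative:

```python
import random

def single_point_crossover(parent_a, parent_b):
    """Combine two candidate solutions by swapping their tails at a random cut."""
    point = random.randint(1, len(parent_a) - 1)
    child1 = parent_a[:point] + parent_b[point:]
    child2 = parent_b[:point] + parent_a[point:]
    return child1, child2

a, b = [1, 1, 1, 1, 1], [0, 0, 0, 0, 0]
print(single_point_crossover(a, b))  # e.g. ([1, 1, 0, 0, 0], [0, 0, 1, 1, 1])
```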
In [machine learning](/glossary/machine-learning), Darkforest is a computer Go program developed by Facebook, based on deep learning techniques: a convolutional neural network is trained to predict expert moves, and later versions combine the network with Monte Carlo tree search.
The Dartmouth workshop, formally the Dartmouth Summer Research Project on Artificial Intelligence, was a 1956 summer workshop at Dartmouth College that brought researchers together to discuss thinking machines. Organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, it is widely considered the founding event of AI as a field, and the term "artificial intelligence" was coined in its proposal.
The process of labeling data to train or fine-tune Large Language Models (LLMs).
Data augmentation is a technique used to artificially increase the size of a training dataset by creating modified versions of existing data. This is done by applying random transformations to the data, such as cropping, flipping, rotation, and adding noise. The hope is that by increasing the size of the training dataset, the model will be better able to generalize to new data.
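A minimal sketch using NumPy, with a random array standing in for a training image:

```python
import numpy as np

def augment(image):
    """Produce modified copies of one image: flips, rotation, noise."""
    flipped = np.fliplr(image)                              # horizontal flip
    rotated = np.rot90(image)                               # 90-degree rotation
    noisy = image + np.random.normal(0, 0.05, image.shape)  # Gaussian noise
    return [flipped, rotated, noisy]

image = np.random.rand(28, 28)      # stand-in for a training image
dataset = [image] + augment(image)  # one example becomes four
```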
Data Flywheel, a concept in data science, refers to the process of using data to create a self-reinforcing system that continuously improves performance and generates more data.
In artificial intelligence, data fusion is the process of combining data from multiple sources to produce more accurate, reliable, and actionable information. The goal of data fusion is to provide a more complete picture of a situation or phenomenon than any single data source could provide on its own.
Data integration is a process of combining data from multiple sources into a single, coherent view. This is done in order to enable better decision making, improve efficiency, and gain insights that would otherwise be hidden in silos.
Data labeling in Machine Learning refers to the process of annotating data to make it understandable for machine learning models. This process can significantly impact the performance of machine learning models as it provides them with the necessary information to learn from the data.
Data mining is the process of extracting valuable information from large data sets. It is a relatively new field that combines elements of statistics, computer science, and artificial intelligence.
Data Pipelines are a series of data processing steps where the output of one process is the input to the next. They are crucial in machine learning operations, enabling efficient data flow from the data source to the end application.
Data science is a field of study that combines statistics, computer science, and [machine learning](/glossary/machine-learning) to extract insights from data. It is a relatively new field that has emerged in the past few years as the volume of data available to organizations has grown exponentially.
A data set is a collection of data that is used to train an AI model. It can be anything from a collection of images to a set of text data. The data set teaches the AI model how to recognize patterns.
A data warehouse is a centralized repository where large volumes of structured data from various sources are stored and managed. It is specifically designed for query and analysis by business intelligence tools, enabling organizations to make data-driven decisions. A data warehouse is optimized for read access and analytical queries rather than transaction processing.
Datalog is a declarative programming language for querying databases. It is based on the relational model and uses first-order logic. Datalog is a subset of Prolog, and its syntax is a subset of Prolog's.
A decision boundary is a line or surface that separates different regions in data space. It is used to make decisions about which class a new data point belongs to. In AI, a decision boundary is used to separate training data into classes so that a classifier can learn to make predictions about new data.
A decision support system (DSS) is a computer program that aids decision-makers in making complex decisions. A DSS is an interactive system that uses data, models and analytical tools to support decision-making.
There is no easy answer when it comes to making decisions in uncertain situations, especially when it comes to AI. However, there are a few things to keep in mind that can help you make the best decision possible.
Decision tree learning is a method of [machine learning](/glossary/machine-learning) that is used to create a model of decisions based on data. This model can be used to make predictions about future events. Decision tree learning is a powerful tool for predictive modeling, and has been used in many different fields such as medicine, finance, and marketing.
Declarative programming is a programming paradigm that expresses the logic of a computation without describing its control flow.
In AI, a deductive classifier is a type of algorithm that is used to classify data by using a set of rules that are provided by the user. This type of algorithm is often used when there is a small amount of data to be classified, and the rules that are used to classify the data are known in advance.
The first chess computer to beat a world champion was Deep Blue, developed by IBM. Deep Blue first faced world champion Garry Kasparov in competition in 1996, and in May 1997 it won their six-game rematch by a score of 3.5 to 2.5.
Deep learning is a subset of [machine learning](/glossary/machine-learning) that is concerned with algorithms inspired by the structure and function of the brain called artificial neural networks. Neural networks are a set of algorithms that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, meaning they are defined by a set of numbers, or vectors.
In AI, the default logic is a reasoning method that allows for the drawing of conclusions from a set of given premises that are incomplete or uncertain. It is based on the principle of assuming the truth of something unless there is evidence to the contrary.
Description logic is a formalism used for knowledge representation and reasoning in artificial intelligence. It is based on the idea of formally describing a set of concepts and their relationships. Description logic is closely related to first-order logic, but it is more expressive in that it allows for the description of complex concepts and their relationships.
A Developer Platform for LLM Applications is a platform designed to facilitate the development, deployment, and management of applications powered by Large Language Models (LLMs). It provides a suite of tools and services that streamline the process of building, training, and deploying these large language models for practical applications.
Developmental robotics is a subfield of AI that deals with the design and development of robots that can learn and adapt to their environment. This is in contrast to traditional robots, which are designed to perform specific tasks and do not have the ability to learn or adapt.
There is no one-size-fits-all answer to this question, as the best way to diagnose a problem in AI will vary depending on the specific problem at hand. However, some general tips that may be useful include:
There are many different types of dialogue systems in AI, each with its own strengths and weaknesses. Some of the most popular types are rule-based systems, statistical systems, and neural networks.
In [machine learning](/glossary/machine-learning) and statistics, dimensionality reduction or feature selection is the process of reducing the number of random variables under consideration by obtaining a set of principal variables. It can be divided into feature selection and feature extraction.
Direct Preference Optimization (DPO) is a reinforcement learning algorithm that aims to optimize the policy directly based on the preferences among trajectories, rather than relying on the reward function.
A discrete system is a system where the state space is discrete. This means that the system can only be in a finite number of states. In AI, discrete systems are often used to model problems where the state space is too large to be continuous. Discrete systems are often easier to solve than continuous systems, but they can be less accurate.
DAI, or distributed artificial intelligence, is a subfield of AI dedicated to developing distributed solutions to problems, in which multiple interacting agents learn from experience and coordinate their decisions based on data, rather than being explicitly programmed to do so. It is closely related to, and a predecessor of, the field of multi-agent systems.
DEL, or dynamic epistemic logic, is a family of modal logics for representing and reasoning about knowledge and belief, and about how they change as agents receive new information. DEL is closely related to, but distinct from, other logics such as first-order logic, propositional logic, and static epistemic logic.
Eager learning is a type of [machine learning](/glossary/machine-learning) where the algorithm builds a general model from the entire training dataset up front, rather than waiting to receive a new data instance before doing its processing, as lazy learners such as k-nearest neighbors do. This front-loads computation into training so that new queries can be answered quickly.
The Ebert test gauges whether a computer-based synthesized voice can tell a joke with sufficient skill to make people laugh. The test is named after film critic Roger Ebert, who proposed it at the 2011 TED conference after losing his own voice, as a challenge for speech synthesis in the spirit of the Turing test.
An echo state network is a type of artificial neural network that has a recurrent connection within the network. The echo state network is a special type of recurrent neural network (RNN) that is designed to have a stable internal state, even when the input to the network is changing. This internal state allows the echo state network to remember information for a short period of time, which is useful for tasks such as prediction and classification.
Effective Accelerationism is a philosophy that advocates for the rapid advancement of artificial intelligence technologies. It posits that accelerating the development and deployment of AI can lead to significant societal benefits.
Embedding is a technique that converts discrete items, such as words, tokens, or categorical variables, into dense numerical vectors, a form that can be provided to machine learning algorithms to improve model performance.
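A minimal sketch of an embedding lookup table; the vocabulary and dimensionality are illustrative, and in practice the vectors are learned during training rather than fixed at random:

```python
import numpy as np

vocab = {"cat": 0, "dog": 1, "car": 2}
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), 4))  # one 4-d vector per token

def embed(word):
    # Lookup: a discrete token becomes a dense numerical vector.
    return embeddings[vocab[word]]

print(embed("cat"))
```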
An embodied agent is an artificial intelligence (AI) system that is designed to interact with the physical world. This can include robots, virtual assistants, and other types of intelligent systems.
Embodied cognitive science is a field of cognitive science that emphasizes the importance of the body and the environment in cognition. It is closely related to the field of embodied artificial intelligence (AI), which emphasizes the importance of embodied cognition in AI.
Ensemble averaging is a technique used in AI to improve the performance of a model by combining the predictions of multiple models. The models are trained on different subsets of the data, and the predictions are combined using a weighted average. The weights are typically chosen to minimize the error of the ensemble.
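A minimal sketch of that weighted average; the predictions and weights are illustrative:

```python
import numpy as np

# Predictions of three models for the same five inputs.
preds = np.array([
    [0.9, 0.2, 0.7, 0.4, 0.6],
    [0.8, 0.3, 0.6, 0.5, 0.7],
    [0.7, 0.1, 0.8, 0.3, 0.5],
])
weights = np.array([0.5, 0.3, 0.2])  # e.g. chosen to minimize validation error

ensemble = weights @ preds  # weighted average across the three models
print(ensemble)
```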
In AI, error-driven learning is a method of learning where the AI system is constantly making predictions and then being corrected when it makes a mistake. This allows the AI to learn from its mistakes and improve its predictions over time. This type of learning is often used in supervised learning, where the AI is given a set of training data to learn from.
There are a number of ethical implications of artificial intelligence (AI). One of the most significant is the potential for AI to be used for harm. AI systems are capable of carrying out tasks that can cause physical or psychological harm to people. If these systems are not designed and operated responsibly, there is a risk that they could be used to cause harm on a large scale.
An evolutionary algorithm is a type of AI that mimics the process of natural selection in order to find the best solution to a problem.
Evolutionary computation is a type of AI that mimics the process of natural selection to find solutions to problems. It involves creating a population of potential solutions (called "individuals" or "chromosomes") and then selecting the best ones to create the next generation. This process is repeated until a satisfactory solution is found.
ECF is a framework for developing and deploying AI applications. It is based on the idea of using a modular, pluggable architecture to support the development of AI applications. ECF consists of four key components:
Existential risk from artificial general intelligence (AGI) is the risk of human extinction or permanent civilizational decline as a result of creating intelligent machines that can independently learn and improve upon their own cognitive abilities.
An expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, using a combination of rules and heuristics, to come up with a solution.
In AI, fast-and-frugal trees are decision trees that are designed to make decisions quickly and with a limited amount of information. These trees are often used in situations where time is of the essence and there is not enough data to make a more informed decision. Fast-and-frugal trees are based on the principle of parsimony, which states that the simplest explanation is usually the correct one. This principle is often used in scientific research, and it can also be applied to decision-making.
There are many different methods for feature extraction in AI, but some of the most common include:
In [machine learning](/glossary/machine-learning), feature learning or representation learning is a set of techniques that aim to learn features or representations useful for further learning tasks, often with the help of unsupervised learning.
There are a few different types of feature selection methods in AI. Some common methods are:
Federated learning is a machine learning approach where data remains on local devices and only model updates are shared. This method ensures data privacy and allows for efficient model training.
Fine-tuning is the process of adjusting the parameters of an already trained model to enhance its performance on a specific task. It is a crucial step in the deployment of Large Language Models (LLMs) as it allows the model to adapt to specific tasks or datasets.
First-order logic is a formal system used in mathematics, computer science, and philosophy. It is also known as first-order predicate calculus, the lower predicate calculus, quantification theory, and predicate logic. First-order logic is distinguished from propositional logic, which does not use quantifiers, and second-order logic, which allows quantification over relations and functions.
FLOPS, or Floating Point Operations Per Second, is a measure of computer performance, useful in fields of scientific computations that require floating-point calculations. For AI models, particularly in deep learning, FLOPS is a crucial metric that quantifies the computational complexity of the model or the training process.
In AI, a fluent is a condition that can change over time. In logical approaches to reasoning about action, fluents are represented by predicates or functions whose value depends on a time or situation argument, which lets a system describe how the world changes as actions occur.
A formal language is a language that is characterized by a strict set of rules that govern its syntax and semantics. Formal languages are used in many different fields, including mathematics, computer science, and linguistics.
In artificial intelligence, forward chaining is a data-driven approach to problem solving that begins with a set of facts and moves forward to derive new conclusions from them. It is also known as bottom-up reasoning or data-driven reasoning.
Foundation models are large deep learning neural networks trained on massive datasets. They serve as a starting point for data scientists to develop machine learning (ML) models for various applications more quickly and cost-effectively.
In AI, a frame is a data structure that represents a stereotyped situation, a structured "snapshot" of some part of the world. Its slots hold the information that an AI system needs to know about that situation in order to make decisions.
Frame language is a language used to describe the world in terms of a set of objects, their properties, and the relationships between them. It is the basis for many AI applications such as natural language processing, knowledge representation, and reasoning.
The frame problem is a problem in AI concerned with representing the effects of actions in logic without having to explicitly represent everything that does not change. Since an effectively unlimited number of facts are untouched by any given action, listing them all is infeasible, and each candidate representation has its own advantages and disadvantages. The challenge is to find a representation that is both expressive and efficient.
When we think about artificial intelligence (AI), we often think about Hollywood depictions of robots becoming sentient and then turning against humanity. But in reality, AI is already being used in many different ways, from helping us find new cures for diseases to providing customer service support. And as AI continues to evolve, we will only see more and more applications for it.
The long-term implications of AI development are both immensely exciting and somewhat scary. On the one hand, AI has the potential to completely transform the way we live and work, making many tasks easier and freeing up time for us to pursue other interests. On the other hand, as AI gets smarter and more sophisticated, there is a risk that it could become uncontrollable and even dangerous.
A fuzzy control system is a type of AI that uses fuzzy logic to make decisions. Fuzzy logic is a type of logic that allows for approximate reasoning, which is useful for making decisions in uncertain situations. Fuzzy control systems are used in a variety of applications, including control of industrial processes, robotic systems, and vehicle systems.
Fuzzy logic is a type of AI that uses mathematical concepts to approximate human reasoning. It is used in many different fields, including decision making, control systems, and data mining. Fuzzy logic is based on the idea that things can be partially true, and that these partial truths can be combined to form a more accurate picture of the world.
In AI, a fuzzy rule is a rule that is not precise. It is based on approximate rather than exact reasoning. This means that it can deal with imprecise or incomplete information.
In AI, a fuzzy set is a set where each element has a degree of membership. This degree is often represented by a number between 0 and 1, where 1 indicates full membership and 0 indicates no membership.
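A minimal sketch of a membership function for a fuzzy set "warm" over temperatures; the thresholds are illustrative:

```python
def warm_membership(temp_c):
    """Degree to which a temperature belongs to the fuzzy set 'warm'."""
    if temp_c <= 15:
        return 0.0               # not warm at all
    if temp_c >= 25:
        return 1.0               # fully warm
    return (temp_c - 15) / 10.0  # partial membership in between

print(warm_membership(20))  # 0.5
```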
Game theory is the study of strategic decision making. It is often used in artificial intelligence (AI) to model how rational agents should make decisions.
GCP Vertex is a managed machine learning platform that enables developers to build, deploy, and scale AI models faster and more efficiently.
A GenAI Product Workspace is a workspace designed to facilitate the development, deployment, and management of AI products. It provides a suite of tools and services that streamline the process of building, training, and deploying AI models for practical applications.
GGP, or general game playing, is the design of AI agents able to play more than one game successfully. Rather than being engineered for a single game, a GGP agent is given a formal description of a game's rules at runtime and must work out a strategy on its own.
Learn moreA GAN is a generative adversarial network, which is a type of artificial intelligence algorithm. It is made up of two neural networks, one that generates data and one that tries to classify it. The two networks compete against each other, with the generator trying to fool the classifier and the classifier trying to correctly identify the data. The goal of the GAN is to generate data that is realistic enough to fool the classifier.
Learn moreGPT is a type of Large Language Model (LLM) that is trained to understand the context of language and generate human-like text.
Learn moreA genetic algorithm is a type of AI that uses a process of natural selection to find solutions to problems. It is based on the idea of survival of the fittest, where the fittest solutions are those that are most likely to survive and reproduce.
Learn moreIn AI, a genetic operator is a function that is used to mutate or crossover two individuals in a population of potential solutions to a problem. The goal of using genetic operators is to generate new solutions that are more fit than the existing population.
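A minimal sketch of both operators on the classic OneMax toy problem (maximize the number of 1-bits in a bitstring); the population size, mutation rate, and truncation selection are arbitrary illustrative choices.

```python
import random

def fitness(ind):
    return sum(ind)

def crossover(a, b):
    """Single-point crossover: splice two parents at a random cut."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
print(max(fitness(ind) for ind in pop))     # approaches 20, the optimum
```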
Learn moreGGML is a C library focused on machine learning, created by Georgi Gerganov. It provides foundational elements for machine learning, such as tensors, and a unique binary format to distribute large language models (LLMs) for fast and flexible tensor operations and machine learning tasks.
Learn moreGlowworm swarm optimization (GSO) is a population-based metaheuristic algorithm for global optimization, proposed by K.N. Krishnanand and D. Ghose in 2005. It is inspired by the bioluminescent behavior of glowworms.
Learn moreGoogle DeepMind is a pioneering artificial intelligence company known for its groundbreaking advancements in AI technologies. It has developed several landmark AI systems, including AlphaGo, the first program to defeat a world champion at the game of Go, and AlphaFold, which predicts protein structures. DeepMind is also actively involved in research areas such as reinforcement learning, natural language processing, and computer vision.
Learn moreGoogle Gemini is an AI model that has been trained on video, images, and audio, making it a "natively multimodal" model capable of reasoning seamlessly across various modalities.
Learn more[OpenAI](/glossary/openai)'s GPTs are custom versions of ChatGPT that users can create for specific purposes.
Learn moreA graph is a data structure that consists of a set of nodes (vertices) and a set of edges connecting them. The edges can be directed or undirected.
Learn moreA graph database is a database that uses graph structures for semantic queries with nodes, edges, and properties to represent and store data. A key concept of the system is the graph, which directly relates data items in the store.
Learn moreGraph traversal is the process of visiting the nodes of a graph in a systematic order. The two fundamental strategies are breadth-first search, which explores the graph level by level, and depth-first search, which follows each path as deep as possible before backtracking; the best choice depends on the specific graph and what you are trying to accomplish, as the sketch below illustrates.
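Here is a small sketch of both strategies over a hypothetical adjacency-list graph:

```python
from collections import deque

# Breadth-first search uses a queue; depth-first search uses a stack.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def bfs(start):
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

def dfs(start):
    seen, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            order.append(node)
            stack.extend(reversed(graph[node]))
    return order

print(bfs("A"), dfs("A"))  # ['A', 'B', 'C', 'D'] ['A', 'B', 'D', 'C']
```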
Learn moreGrok is the first technology developed by Elon Musk's new AI company, xAI. It's an AI chatbot designed to rival others like ChatGPT. Grok is modeled after "The Hitchhiker’s Guide to the Galaxy" and is designed to have a bit of wit and a rebellious streak. It's intended to answer the "spicy questions" that other AI might avoid.
Learn moreGrouped Query Attention (GQA) is a technique used in large language models to speed up the inference time. It groups queries together and computes their attention jointly, reducing the computational complexity and making the model more efficient.
Learn moreThe Nvidia H100 is a high-performance computing device designed for data centers. It offers unprecedented performance, scalability, and security, making it a game-changer for large-scale AI and HPC workloads.
Learn moreThe halting problem is a decision problem in computer science that asks whether it is possible to determine, given a description of a Turing machine and an input, whether the machine will eventually halt. Alan Turing proved in 1936 that no general algorithm can solve it, making the halting problem the canonical example of an undecidable problem.
Learn moreA heuristic is a rule of thumb that helps us make decisions quickly and efficiently. In artificial intelligence, heuristics are used to help computers find solutions to problems faster than they could using traditional methods.
Learn moreHuman-in-the-loop (HITL) is a blend of supervised machine learning and active learning, where humans are involved in both the training and testing stages of building an algorithm. This approach combines the strengths of AI and human intelligence, creating a continuous feedback loop that enhances the accuracy and effectiveness of the system. HITL is used in various contexts, including deep learning, AI projects, and machine learning.
Learn moreA hyper-heuristic is an AI technique that combines multiple heuristics to solve a problem. Heuristics are simple, rule-based methods for solving problems. By combining multiple heuristics, hyper-heuristics can find solutions to problems more quickly and efficiently than using a single heuristic.
Learn moreHyperparameter optimization is the process of tuning the settings that govern how an LLM is trained or served in order to improve its performance.
Learn moreHyperparameters are parameters whose values are set before the learning process begins. They play a crucial role in the performance of [machine learning](/glossary/machine-learning) algorithms. Unlike other parameters, hyperparameters are not learned from the data and are typically set manually and tuned for optimal performance.
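A minimal sketch of the simplest tuning strategy, grid search, over a hypothetical two-hyperparameter model; `validation_score` here is a made-up stand-in for training and evaluating a real model.

```python
import itertools

# Evaluate the model at each point of a small grid and keep the best combo.
def validation_score(learning_rate, depth):
    # Toy surface peaking at lr=0.1, depth=4; a real version would train
    # a model and score it on held-out data.
    return -(learning_rate - 0.1) ** 2 - 0.01 * (depth - 4) ** 2

grid = {"learning_rate": [0.01, 0.1, 0.3], "depth": [2, 4, 8]}
best = max(itertools.product(*grid.values()),
           key=lambda combo: validation_score(*combo))
print(dict(zip(grid, best)))  # -> {'learning_rate': 0.1, 'depth': 4}
```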
Learn moreThe IEEE Computational Intelligence Society (IEEE-CIS) is a professional society of the Institute of Electrical and Electronics Engineers (IEEE) focused on computational intelligence.
Learn moreIncremental learning is a [machine learning](/glossary/machine-learning) method in which a model is updated on new data as it arrives, rather than being retrained from scratch on the full dataset. This allows the model to continuously learn and improve over time.
Learn moreModel inference is a process in machine learning where a trained model is used to make predictions based on new data. This step comes after the model training phase and involves providing an input to the model which then outputs a prediction. The objective of model inference is to extract useful information from data that the model has not been trained on, effectively allowing the model to infer the outcome based on its previous learning. Model inference can be used in various fields such as image recognition, speech recognition, and natural language processing. It is a crucial part of the machine learning pipeline as it provides the actionable results from the trained algorithm.
Learn moreAn inference engine is a component of an expert system that applies logical rules to the knowledge base to deduce new information or make decisions. It is the core of the system that performs reasoning or inference.
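A forward-chaining inference engine fits in a few lines; the facts and rules below are made-up illustrations.

```python
# Rules are (premises, conclusion) pairs; fire any rule whose premises are
# all known facts, and repeat until nothing new can be deduced.
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "nests_in_trees"),
]
facts = {"has_feathers", "can_fly"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # includes the deduced 'is_bird' and 'nests_in_trees'
```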
Learn moreInformation integration is a process of combining data from multiple sources into a single, coherent view. This is often done in order to support decision making or other processes that require a comprehensive understanding of the data.
Learn moreInformation Processing Language (IPL) is a programming language developed in the mid-1950s by Allen Newell, Cliff Shaw, and Herbert A. Simon for artificial intelligence (AI) applications. It pioneered list processing and was a precursor to LISP.
Learn moreIn artificial intelligence, intelligence amplification (IA) is a process of improving intelligence using technology. The goal of IA is to create a feedback loop between humans and artificial intelligence, where the AI provides suggestions and the human decides which to implement.
Learn moreAn intelligence explosion is a hypothetical scenario in which an artificial intelligence (AI) becomes capable of recursively improving its own design, rapidly surpassing human intelligence. The term was first coined by I. J. Good in 1965, and the idea has been popularized by Vernor Vinge and Elon Musk.
Learn moreAn intelligent agent is a software program that is able to autonomously make decisions or take actions in order to achieve a specific goal. In artificial intelligence, intelligent agents are commonly used to solve complex tasks that are difficult or impossible for humans to do.
Learn moreIn artificial intelligence, intelligent control is the use of AI techniques to build systems that can reason, learn, and act autonomously. Intelligent control systems are able to make decisions and take actions based on their understanding of the world and their goals.
Learn moreAn intelligent personal assistant is a software agent that can perform tasks or services for an individual. These tasks or services are typically related to managing information or providing assistance with common tasks.
Learn moreIn AI, determining what a word, phrase, or expression means in context is the job of semantic analysis: the process of mapping expressions in a language onto the concepts and relationships they denote.
Learn moreIntrinsic motivation is the drive to do something because it is interesting, enjoyable, or personally meaningful, rather than for an external reward. In AI, intrinsically motivated agents generate their own reward signals, such as curiosity or novelty-seeking, which can help them explore and learn in environments where external rewards are sparse.
Learn moreAn issue tree is a graphical representation of the relationships between various issues. It is used to help identify and organize the issues that need to be addressed in order to achieve a desired goal.
Learn moreThe junction tree algorithm is a message-passing algorithm for inference in graphical models. It is used to find the most probable configuration of hidden variables in a graphical model, given some observed variables.
Learn moreThe Kardashev Gradient refers to the continuum of technological advancement and energy utilization of civilizations described by the Kardashev Scale. In discussions of AI, it is sometimes invoked to gauge the potential long-term progress and impact of AI technologies.
Learn moreA kernel method is a [machine learning](/glossary/machine-learning) technique that uses a kernel function to implicitly map data into a high-dimensional feature space, where linear algorithms can capture nonlinear structure. The support vector machine (SVM) is the best-known kernel method, and kernel methods are used in a variety of [machine learning](/glossary/machine-learning) tasks, including regression, classification, and clustering.
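The essential trick can be shown directly: an RBF (Gaussian) kernel evaluates inner products in an implicit feature space without ever constructing it. A minimal numpy sketch with made-up data:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """k(x, y) = exp(-gamma * ||x - y||^2), evaluated pairwise."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
K = rbf_kernel(X, X)   # 3x3 Gram matrix, as consumed by SVMs, kernel ridge, etc.
print(np.round(K, 3))
```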
Learn moreKL-ONE is a knowledge representation language used in AI. It was developed by Ronald J. Brachman and colleagues in the late 1970s. KL-ONE organizes knowledge into structured inheritance networks, in the tradition of semantic networks and frames.
Learn moreIn artificial intelligence, knowledge acquisition is the process of gathering, selecting, and interpreting information and experiences to create and maintain knowledge within a specific domain. It is a key component of [machine learning](/glossary/machine-learning) and knowledge-based systems.
Learn moreA knowledge-based system is a system that uses artificial intelligence techniques to store and reason with knowledge. The knowledge is typically represented in the form of rules or facts, which can be used to draw conclusions or make decisions.
Learn moreIn AI, knowledge engineering is the process of acquiring, representing, and reasoning with knowledge in order to solve problems. It is a key component of many AI applications, such as expert systems, natural language processing, and [machine learning](/glossary/machine-learning).
Learn moreIn artificial intelligence, knowledge extraction is the process of extracting knowledge from data. This can be done through a variety of methods, including [machine learning](/glossary/machine-learning), natural language processing, and data mining.
Learn moreKIF (Knowledge Interchange Format) is a language for knowledge representation developed at Stanford. It has been used by a number of AI applications, including the Cyc project, and has been incorporated into several commercial products. KIF provides a formal language for representing knowledge as a set of first-order logic sentences, so that knowledge can be exchanged and reasoned over by disparate systems.
Learn moreIn AI, knowledge representation and reasoning is the process of representing knowledge in a format that can be used by computers to solve problems. This process involves representing knowledge in a formal language that can be interpreted by a computer program, and using reasoning algorithms to solve problems.
Learn moreLangChain is an open-source framework designed to simplify the creation of applications using large language models (LLMs). It provides a standard interface for chains, integrations with other tools, and end-to-end chains for common applications.
Learn moreLarge Language Models (LLMs) are advanced artificial intelligence models that employ deep learning techniques to understand, generate, and predict text.
Learn moreLazy learning is a [machine learning](/glossary/machine-learning) technique that delays generalization until a query is made: instead of building a model during training, it simply stores the training data and computes predictions on demand, as in k-nearest neighbors. This approach is useful when eager training is expensive or when the training data changes frequently.
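A k-nearest-neighbor classifier shows the idea in miniature; the data here is made up.

```python
import numpy as np

# "Training" is just storage; all the work happens at query time.
X_train = np.array([[1.0], [2.0], [3.0], [8.0], [9.0]])
y_train = np.array([0, 0, 0, 1, 1])

def knn_predict(x, k=3):
    dists = np.abs(X_train[:, 0] - x)          # distances to every stored example
    nearest = y_train[np.argsort(dists)[:k]]   # labels of the k closest
    return np.bincount(nearest).argmax()       # majority vote

print(knn_predict(2.5), knn_predict(7.0))      # -> 0 1
```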
Learn moreThe Levenshtein distance is a string metric for measuring the difference between two sequences. It is calculated as the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other.
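The standard dynamic-programming implementation:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # -> 3
```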
Learn moreLisp is a family of computer programming languages with a long history and a distinctive, fully parenthesized prefix notation. Originally specified in 1958, Lisp is the second-oldest high-level programming language in widespread use today. Only Fortran is older, by one year. Lisp was invented by John McCarthy while he was at the Massachusetts Institute of Technology (MIT).
Learn moreLlama 2: The second iteration of Meta's open-source [large language model](/glossary/large-language-model). It's not a single model but a collection of four models, each differing in the number of parameters they contain: 7B, 13B, 34B, and 70B parameters.
Learn moreLlamaIndex, formerly known as GPT Index, is a data framework designed to seamlessly integrate custom data sources with large language models (LLMs). Introduced after the launch of ChatGPT in late 2022, LlamaIndex offers an approachable interface, with a high-level API for novices and a low-level API for seasoned users, transforming how LLM-based applications are built.
Learn moreA glossary of top Large Language Model (LLM) application programming frameworks.
Learn moreAn LLM App Platform is a platform designed to facilitate the development, deployment, and management of applications powered by Large Language Models (LLMs). It provides a suite of tools and services that streamline the process of building, training, and deploying these large language models for practical applications.
Learn moreEmerging Architectures for LLM Applications is a comprehensive guide that provides a reference architecture for the emerging LLM app stack. It shows the most common systems, tools, and design patterns used by AI startups and sophisticated tech companies.
Learn moreLLM Evaluation is a process designed to assess the performance, reliability, and effectiveness of Large Language Models (LLMs). It involves a suite of tools and methodologies that streamline the process of evaluating, fine-tuning, and deploying LLMs for practical applications.
Learn moreLLM Monitoring is a process designed to track the performance, reliability, and effectiveness of Large Language Models (LLMs). It involves a suite of tools and methodologies that streamline the process of monitoring, fine-tuning, and deploying LLMs for practical applications.
Learn moreLLMOps, or Large Language Model Operations, is a specialized discipline within the broader field of MLOps (Machine Learning Operations) that focuses on the management, deployment, and maintenance of large language models (LLMs). LLMs are powerful AI models capable of generating human-quality text, translating languages, writing different kinds of creative content, and answering questions in an informative way. However, due to their complexity and resource requirements, LLMs pose unique challenges in terms of operations.
Learn moreLarge Language Model Operations (LLMOps) is a field that focuses on managing the lifecycle of large language models (LLMs). The complexity and size of these models necessitate a structured approach to manage tasks such as data preparation, model training, model deployment, and monitoring. However, performing these tasks manually can be repetitive, error-prone, and limit scalability. Automation plays a key role in addressing these challenges by streamlining LLMOps tasks and enhancing efficiency.
Learn moreLLMOps, or Large Language Model Operations, is a rapidly evolving discipline with practical applications across a multitude of industries and use cases. Organizations are leveraging this approach to enhance customer service, improve product development, personalize marketing campaigns, and gain insights from data. By managing the end-to-end lifecycle of Large Language Models, from data collection and model training to deployment, monitoring, and continuous optimization, LLMOps fosters continuous improvement, scalability, and adaptability of LLMs in production environments. This is instrumental in harnessing the full potential of LLMs and driving the next wave of innovation in the AI industry.
Learn moreData management is a critical aspect of Large Language Model Operations (LLMOps). It involves the collection, cleaning, storage, and monitoring of data used in training and operating large language models. Effective data management ensures the quality, availability, and reliability of this data, which is crucial for the performance of the models. Without proper data management, models may produce inaccurate or unreliable results, hindering their effectiveness. This article explores why data management is so crucial for LLMOps and how it can be effectively implemented.
Learn moreData quality plays a crucial role in Large Language Model Operations (LLMOps). High-quality data is essential for training effective models, ensuring accurate predictions, and maintaining the reliability of AI systems. This article explores the importance of data quality in LLMOps, the challenges associated with maintaining it, and the strategies for improving data quality.
Learn moreModel deployment is a crucial phase in Large Language Model Operations (LLMOps). It involves making the trained models available for use in a production environment. This article explores the importance of model deployment in LLMOps, the challenges associated with it, and the strategies for effective model deployment.
Learn moreExploring data is a fundamental aspect of Large Language Model Operations (LLMOps). It involves understanding the data's structure, quality, and potential biases. This article delves into the importance of data exploration in LLMOps, the challenges it presents, and the strategies for effective data exploration.
Learn moreLarge Language Models (LLMs) are powerful AI systems that can understand and generate human language. They are being used in a wide variety of applications, such as natural language processing, machine translation, and customer service. However, LLMs can be complex and challenging to manage and maintain in production. This is where LLMOps comes in.
Learn moreInfrastructure is the backbone of LLMOps, providing the necessary computational power and storage capacity to train, deploy, and maintain large language models efficiently. A robust and scalable infrastructure ensures that these complex models can operate effectively, handle massive datasets, and deliver real-time insights.
Learn moreThe LLMOps Lifecycle involves several stages that ensure the efficient management and maintenance of Large Language Models (LLMs). These AI systems, capable of understanding and generating human language, are utilized in various applications including natural language processing, machine translation, and customer service. The complexity of LLMs presents challenges in their operation, making LLMOps an essential discipline in their production lifecycle.
Learn moreModel observability is a crucial aspect of Large Language Model Operations (LLMOps). It involves monitoring and understanding the behavior of models in production. This article explores the importance of model observability in LLMOps, the challenges associated with it, and the strategies for effective model observability.
Learn moreEngineering models and pipelines play a crucial role in Large Language Model Operations (LLMOps). Efficiently engineered models and pipelines are essential for training effective models, ensuring accurate predictions, and maintaining the reliability of AI systems. This article explores the importance of engineering models and pipelines in LLMOps, the challenges associated with maintaining them, and the strategies for improving their efficiency.
Learn moreLarge Language Model Operations (LLMOps) refers to the processes and practices involved in deploying, managing, and scaling large language models (LLMs) in a production environment. As AI technologies become increasingly integrated into our digital infrastructure, the security of these models and their associated data has become a matter of paramount importance. Unlike traditional software, LLMs present unique security challenges, such as potential misuse, data privacy concerns, and vulnerability to attacks. Therefore, understanding and addressing these challenges is critical to safeguarding the integrity and effectiveness of LLMOps.
Learn moreExperiment tracking plays a crucial role in Large Language Model Operations (LLMOps). It is essential for managing and comparing different model training runs, ensuring reproducibility, and maintaining the efficiency of AI systems. This article explores the importance of experiment tracking in LLMOps, the challenges associated with it, and the strategies for effective experiment tracking.
Learn moreVersioning in Large Language Model Operations (LLMOps) refers to the systematic process of tracking and managing different versions of Large Language Models (LLMs) throughout their lifecycle. As LLMs evolve and improve, it becomes crucial to maintain a history of these changes. This practice enhances reproducibility, allowing for specific models and their performance to be recreated at a later point. It also ensures traceability by documenting changes made to LLMs, which aids in understanding their evolution and impact. Furthermore, versioning facilitates optimization in the LLMOps process by enabling the comparison of different model versions and the selection of the most effective one for deployment.
Learn moreThere are a few key differences between logic programming and other AI programming paradigms. For one, logic programming is based on a declarative programming paradigm, meaning that the programmer declares what the program should do, rather than how it should do it. This makes logic programming programs more human-readable and easier to understand.
Learn moreIn artificial intelligence, long short-term memory (LSTM) is a recurrent neural network (RNN) architecture that is used in the field of deep learning. LSTM networks are well-suited to classifying, processing and making predictions based on time series data, since they can remember previous information in long-term memory.
Learn moreMachine learning is a subset of artificial intelligence (AI) that deals with the design and development of algorithms that can learn from and make predictions on data. These algorithms are able to automatically improve given more data.
Learn moreThere are many different machine listening tasks in AI; some of the most common include speech recognition, speaker identification, music information retrieval, and environmental sound event detection.
Learn moreMachine perception is the ability of a machine to interpret and understand the environment around it. This is a key area of research in artificial intelligence (AI) as it enables machines to interact with the world in a more natural way.
Learn moreMachine vision is a field of AI that deals with the ability of machines to interpret and understand digital images. It is similar to human vision, but with the added ability to process large amounts of data quickly and accurately. Machine vision is used in a variety of applications, including facial recognition, object detection, and image classification.
Learn moreA Markov chain is a stochastic model describing a sequence of states in which the probability of the next state depends only on the current state, not on the full history (the Markov property). The model is named after Andrey Markov, who first studied such processes in the early 1900s.
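A toy two-state weather chain makes the definition concrete; the transition probabilities below are made up.

```python
import random

# Each state maps to a list of (next_state, probability) pairs.
transitions = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state):
    """Sample the next state given only the current one (Markov property)."""
    r, acc = random.random(), 0.0
    for nxt, p in transitions[state]:
        acc += p
        if r < acc:
            return nxt
    return nxt

state, path = "sunny", []
for _ in range(10):
    state = step(state)
    path.append(state)
print(path)
```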
Learn moreA Markov decision process, or MDP, is a mathematical framework for modeling decision-making in situations where outcomes are uncertain. MDPs are commonly used in artificial intelligence (AI) to help agents make decisions in complex, uncertain environments.
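Value iteration, a standard MDP solution method, fits in a few lines; the two-state MDP below is entirely made up for illustration.

```python
# P[s][a] = list of (probability, next_state, reward) outcomes.
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.9, 1, 1.0), (0.1, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9                       # discount factor
V = {0: 0.0, 1: 0.0}

for _ in range(100):              # repeated Bellman backups until convergence
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in P[s].values())
         for s in P}

print({s: round(v, 2) for s, v in V.items()})
```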
Learn moreThere are many different types of optimization methods used in AI, and the choice of which method to use depends on the specific problem being solved. Some common optimization methods used in AI include gradient descent, evolutionary algorithms, and simulated annealing.
Learn moreThe objectives of an AI system can be divided into two categories: functional objectives and non-functional objectives.
Learn moreMechatronics is the combination of mechanical and electronic engineering, with a focus on the design and manufacture of smart, connected products and systems. It is an interdisciplinary field that merges the principles of mechanical engineering, electronics, control engineering, and computer science to create sophisticated products and systems.
Learn moreThere are several ways to reconstruct a metabolic network; a widely used approach in AI is constraint-based reconstruction, which infers the network by applying physicochemical constraints such as mass balance, and it has proven accurate in practice.
Learn moreMetaheuristics are a type of algorithm that are used to find approximate solutions to optimization problems. They are often used when the exact solution is too computationally expensive to find. Metaheuristics work by iteratively improving a solution until it is good enough to be considered the final answer.
Learn moreThe METEOR Score, or Metric for Evaluation of Translation with Explicit Ordering, is a metric used in machine translation to evaluate the quality of translated text. It measures the similarity between the machine-generated translation and the human reference translation, considering precision, recall, synonymy, paraphrase, and sentence structure.
Learn moreMistral 7B is a 7.3 billion parameter language model that represents a significant advancement in large language model capabilities. It outperforms the 13 billion parameter Llama 2 model on all tasks and surpasses the 34 billion parameter Llama 1 on many benchmarks. Mistral 7B is designed for both English language tasks and coding tasks, making it a versatile tool for a wide range of applications.
Learn moreThe Mistral "Mixtral" 8x7B 32k model is a scaled-down GPT-4 with an 8-expert Mixture of Experts (MoE) architecture, using a sliding window beyond 32K parameters. This model is designed for high performance and efficiency, surpassing the 13B Llama 2 in all benchmarks and outperforming the 34B Llama 1 in reasoning, math, and code generation. It uses grouped-query attention for quick inference and sliding window attention for Mistral 7B — Instruct, fine-tuned for following directions.
Learn moreMixture of Experts (MOE) is a machine learning technique that involves training multiple models, each becoming an "expert" on a portion of the input space. It is a form of ensemble learning where the outputs of multiple models are combined, often leading to improved performance.
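A sketch of MoE inference with a softmax gate over toy linear experts; everything here (expert count, shapes, random weights) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
experts = [rng.normal(size=(4, 2)) for _ in range(3)]   # 3 linear "experts"
gate_w = rng.normal(size=(4, 3))                        # gating network

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def moe_forward(x):
    weights = softmax(x @ gate_w)                 # how much to trust each expert
    outputs = np.stack([x @ W for W in experts])  # each expert's prediction
    return (weights[:, None] * outputs).sum(0)    # weighted combination

print(moe_forward(rng.normal(size=4)))
```

Sparse MoE models, as used in large language models, keep only the top-scoring experts per input rather than combining all of them, which is what makes them cheap at inference time.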
Learn moreML Ops, or Machine Learning Operations, refers to the practice of managing and orchestrating machine learning models in production environments. This includes maintaining and monitoring Large Language Models (LLMs) to ensure optimal performance and reliability.
Learn moreThe MMLU benchmark, or Massive Multitask Language Understanding, is an LLM evaluation dataset that measures a text model's multitask accuracy across 57 subjects, including math, history, law, and computer science. It is split into a few-shot development set, a 1,540-question validation set, and a 14,079-question test set, and is administered in zero-shot and few-shot settings to probe a model's world knowledge, problem-solving skills, and limitations.
Learn moreModel checking is a process of verifying the correctness of a model of a system. The model is typically a transition system, which is a mathematical representation of a system. The verification process consists of checking that the model satisfies a set of properties. These properties can be safety properties, which state that something bad will never happen, or liveness properties, which state that something good will eventually happen.
Learn moreMonte Carlo tree search (MCTS) is a heuristic search algorithm for some kinds of decision processes, most notably those employed in game play. The term was introduced by Rémi Coulom in 2006, and the algorithm first rose to prominence in computer Go.
Learn moreA multi-agent system is a system composed of multiple agents that interact with each other to accomplish a common goal. Multi-agent systems are used in a variety of fields, including artificial intelligence, economics, and sociology.
Learn moreMulti-swarm optimization is a technique used in artificial intelligence (AI) to optimize a function by iteratively improving a set of candidate solutions. It is a metaheuristic, meaning it is a high-level strategy for finding good solutions to problems that may not have an obvious or simple solution.
Learn moreMultimodal in Machine Learning refers to models that can process and relate information from different types of data such as text, images, and audio. This ability can significantly enhance the performance of machine learning models as it allows them to understand complex data and make more accurate predictions.
Learn moreA mutation is a random change to a solution in a population of solutions. Mutations can be beneficial, harmful, or neutral to the solution's fitness. In artificial intelligence, mutations are often used to generate new solutions in the hope of finding a better solution.
Learn moreMycin is a computer program that was developed in the 1970s at Stanford University. It was one of the first expert systems, and was designed to diagnose and treat infections in humans. Mycin was written in the Lisp programming language, and used a rule-based system to make decisions.
Learn moreA naive Bayes classifier is a simple [machine learning](/glossary/machine-learning) algorithm that is used to predict the class of an object based on its features. The algorithm is named after the Bayes theorem, which is used to calculate the probability of an event occurring.
Learn moreA naive semantic is a semantic that is not based on any specific domain knowledge. It is simply a set of rules that are used to interpret the meaning of a text.
Learn moreIn computer science, name binding is the technique of associating a name with a value. This can be done statically (at compile time) or dynamically (at run time). In static name binding, the association between a name and a value is set at compile time and cannot be changed. In dynamic name binding, the association between a name and a value can be changed at run time.
Learn moreNamed-entity recognition (NER) is a sub-task of information extraction that seeks to locate and classify named entities in text into pre-defined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc.
Learn moreA named graph is a set of RDF statements (a graph) identified by a name, typically a URI. The name can be used to refer to the graph when needed, which makes it possible to track provenance and keep apart the different graphs an AI application is using.
Learn moreNatural language generation (NLG) is a subfield of artificial intelligence (AI) that is focused on the generation of natural language text by computers. NLG systems are used in a variety of applications, including automatic summarization, report generation, question answering, and dialogue systems.
Learn moreNatural language processing (NLP) is a subfield of artificial intelligence (AI) that deals with the interaction between computers and human (natural) languages.
Learn moreNatural language programming is a subfield of AI that deals with the ability of computers to understand and process human language. It is an interdisciplinary field that combines linguistics, computer science, and artificial intelligence.
Learn moreA network motif is a recurring pattern of connectivity within a complex network. These patterns can provide insight into the function and design of the network. In the context of artificial intelligence (AI), network motifs can be used to identify patterns in data that may be indicative of certain behaviours or relationships. For example, a network motif may be used to detect patterns of activity in a neural network that are indicative of learning.
Learn moreNeural machine translation is a subfield of artificial intelligence (AI) that deals with the translation of text from one natural language to another. It uses a single neural network, typically an encoder-decoder architecture, to learn the mapping between languages directly from parallel text.
Learn moreA neural Turing machine (NTM) is a neural network architecture that can learn to perform complex tasks by reading from and writing to an external memory. An NTM couples a neural network controller, often a long short-term memory (LSTM) network, with a memory bank that it accesses through differentiable attention, allowing the whole system to be trained end to end.
Learn moreNeuro-fuzzy is a term used to describe a type of artificial intelligence that combines elements of both neural networks and fuzzy logic.
Learn moreNeurocybernetics is the study of how the nervous system and the brain interact with cybernetic systems. It is a relatively new field that is still being explored, but it has the potential to revolutionize the way we think about artificial intelligence (AI).
Learn moreNeuromorphic engineering is a field of AI inspired by the way the brain works. Neuromorphic systems are designed to mimic how the brain processes information, often directly in hardware, which can make them more efficient than traditional architectures for certain workloads.
Learn moreA node is a point in a network where data or communication can enter or leave. In AI, nodes are used to represent data points, and the connections between them represent relationships between the data. Nodes can be connected to other nodes to form a network, which can be used to represent anything from a simple relationship between two data points, to a complex system of interconnected data.
Learn moreA nondeterministic algorithm is an algorithm that, given a particular input, can produce different outputs. This is in contrast to a deterministic algorithm, which will always produce the same output for a given input.
Learn moreIn computational complexity theory, NP (nondeterministic polynomial time) is the class of decision problems for which a solution can be verified in polynomial time by a deterministic Turing machine. NP contains all problems that can be solved in polynomial time (the class P), but it is not known whether every problem in NP can be solved in polynomial time. This is the famous P versus NP question: whether every problem whose solution can be verified in polynomial time can also be solved in polynomial time.
Learn moreIn computer science, the NP-completeness or NP-hardness of a problem is a measure of the difficulty of solving that problem. A problem is NP-complete if it belongs to NP and is also NP-hard, meaning every problem in NP can be reduced to it in polynomial time.
Learn moreIn computer science, NP-hardness is the defining feature of a class of problems that are informally "hard to solve" when using the most common types of algorithms. More precisely, NP-hard problems are those that are at least as hard as the hardest problems in NP, the class of decision problems for which a solution can be verified in polynomial time.
Learn moreOccam's Razor, in the context of AI, is a principle that advocates for simplicity. It suggests that the simplest model or explanation is often the most correct. This principle is frequently applied in machine learning when selecting between different models, with a preference for the model that provides the simplest explanation.
Learn moreOffline learning, also called batch learning, refers to training an AI system on a fixed dataset collected in advance, rather than updating the model continuously as new data streams in. The model is trained once on the full dataset and then deployed, with retraining happening only periodically.
Learn moreOllama is a user-friendly tool designed to run large language models (LLMs) locally on a computer. It supports a variety of AI models including LLaMA-2, uncensored LLaMA, CodeLLaMA, Falcon, Mistral, Vicuna model, WizardCoder, and Wizard uncensored. It is currently compatible with MacOS and Linux, with Windows support expected to be available soon.
Learn moreOnline machine learning is a method in which a model learns incrementally, updating itself on each new data point or small batch as it arrives rather than being trained on a fixed dataset all at once. The benefits of online machine learning include the ability to adapt quickly to changing data and to learn from streams that are too large to store.
Learn moreIn AI, ontology learning is the process of automatically extracting ontologies from text. This is typically done by first extracting a set of terms from the text, and then using a set of heuristics to determine which terms are related.
Learn moreOpen Mind Common Sense is an AI project that aims to build a computer system with common sense. Started at the MIT Media Lab in 1999, its goal is a system that can understand the world the way humans do. The project has gathered millions of common-sense statements contributed by volunteers, and this knowledge underlies the ConceptNet semantic network, which algorithms can use to learn about and make predictions about the everyday world.
Learn moreOpen-source software (OSS) is software that is released under a license that allows users to freely use, modify, and distribute the software. OSS is often developed in a collaborative manner, with developers sharing their code and working together to improve the software.
Learn moreOpenAI is a research company that promotes friendly artificial intelligence in which machines act rationally. Co-founded by Elon Musk, Greg Brockman, Ilya Sutskever, and Sam Altman in December 2015, it has since been involved in the development of artificial intelligence technologies and applications.
Learn moreOpenCog is an artificial intelligence project aimed at creating a cognitive architecture, a machine intelligence framework and toolkit that can be used to build intelligent agents and robots. The project is being developed by the OpenCog Foundation, a non-profit organization.
Learn morePartial order reduction is a technique used in AI and model checking to shrink the search space by exploiting the fact that the order of independent actions often does not matter. Instead of exploring every interleaving of such actions, the search considers only one representative ordering, which can dramatically reduce the number of states examined without sacrificing correctness.
Learn moreA POMDP is a Partially Observable Markov Decision Process. It is a mathematical model used to describe an AI decision-making problem in which the agent does not have complete information about the environment. The agent must use its observations and past experience to make decisions that will maximize its expected reward.
Learn moreParticle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It is a population-based stochastic optimization technique developed by Dr. Eberhart and Dr. Kennedy in 1995, inspired by social behavior of bird flocking or fish schooling.
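A compact numpy sketch minimizing the sphere function sum(x^2); the inertia and acceleration coefficients are common textbook defaults, not canonical values.

```python
import numpy as np

rng = np.random.default_rng(42)
dim, n_particles = 2, 20
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()                       # each particle's best-seen position
pbest_val = (pos ** 2).sum(1)
gbest = pbest[pbest_val.argmin()].copy() # swarm's best-seen position

for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    # Velocity blends inertia, pull toward personal best, pull toward global best
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = (pos ** 2).sum(1)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest, pbest_val.min())  # converges near the optimum at the origin
```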
Learn moreThere are many pathfinding algorithms used in AI, but some of the most common are A*, Dijkstra's algorithm, and breadth-first search; a sketch of A* follows.
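A minimal A* sketch on a hypothetical grid, using Manhattan distance as an admissible heuristic:

```python
import heapq

# A* expands nodes in order of f(n) = g(n) + h(n): cost so far plus a
# heuristic estimate of the remaining cost.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]          # 1 = wall
start, goal = (0, 0), (2, 3)

def h(p):
    return abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance

frontier = [(h(start), 0, start)]
best_g = {start: 0}
while frontier:
    f, g, (r, c) = heapq.heappop(frontier)
    if (r, c) == goal:
        print("path cost:", g)  # -> 5 on this grid
        break
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 4 and grid[nr][nc] == 0:
            ng = g + 1
            if ng < best_g.get((nr, nc), float("inf")):
                best_g[(nr, nc)] = ng
                heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
```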
Learn moreThere are many different methods for pattern recognition in AI; some of the most common include template matching, statistical classification, clustering, and neural network-based approaches.
Learn morePaul Cohen was an American mathematician best known for his groundbreaking work in set theory, particularly the Continuum Hypothesis. He was awarded the Fields Medal in 1966.
Learn morePerplexity is a measurement in information theory that is used to determine how well a probability distribution or probability model predicts a sample. It may be used in the field of natural language processing to assess how well a model predicts a sample.
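The computation itself is short: perplexity is the exponentiated average negative log-likelihood of a sequence. The per-token probabilities below are made-up model outputs, not from any real model.

```python
import math

token_probs = [0.2, 0.5, 0.1, 0.4]   # p(token_i | context), per the model

nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(nll)
print(round(perplexity, 2))  # lower is better: the model is less "surprised"
```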
Learn moreAn LLM (Large Language Model) playground is a platform where developers can experiment with, test, and deploy prompts for large language models. These models, such as GPT-4 or Claude, are designed to understand, interpret, and generate human language.
Learn morePre-training is the process of training large language models (LLMs) on extensive datasets before fine-tuning them for specific tasks.
Learn moreIn first-order logic, predicates are applied to individuals. So, for example, the predicate "is a person" can be applied to "John" to give the proposition "John is a person". In higher-order logic, predicates can also be applied to other predicates. For example, the second-order predicate "is transitive" can be applied to the predicate "is taller than" to give the proposition "'is taller than' is transitive".
Learn moreA prediction model is a type of machine learning model that is trained to make predictions about future outcomes based on historical data.
Learn morePredictive analytics is a branch of artificial intelligence that deals with making predictions about future events. This can be done using a variety of techniques, including [machine learning](/glossary/machine-learning), statistical modeling, and data mining.
Learn morePCA is a technique used to reduce the dimensionality of data. It is often used to speed up [machine learning](/glossary/machine-learning) algorithms or to make visualizations clearer.
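A minimal sketch of PCA via the singular value decomposition, on synthetic data with one deliberately redundant axis:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=100)   # third axis nearly redundant

Xc = X - X.mean(0)                    # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:2].T             # keep the 2 directions of largest variance
print(X_reduced.shape)                # (100, 2): dimensionality reduced 3 -> 2
```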
Learn moreThe principle of rationality is the idea that agents (like us humans) should make decisions that are in their best interests. In other words, we should try to be as rational as possible when making decisions.
Learn moreProbabilistic programming is a subfield of AI that deals with the construction and analysis of algorithms that take uncertain input and produce uncertain output. A key feature of probabilistic programming languages is that they allow the programmer to express uncertain knowledge in the form of probability distributions over possible worlds. This makes it possible to write programs that reason about and learn from uncertain data.
Learn moreA production system is a set of production rules, each pairing a condition with an action to take when the condition holds. In artificial intelligence, production systems are used to create programs that can solve problems.
Learn moreThere is no one-size-fits-all answer to this question, as the best programming language for AI development will vary depending on the specific application or project you are working on. However, some popular choices for AI development include Python, R, and Java.
Learn moreProlog is a programming language that is particularly well suited to artificial intelligence (AI) applications. Prolog has its roots in first-order logic, a formal logic that is used in mathematics and philosophy.
Learn morePrompt engineering for Large Language Models (LLMs) like Llama 2 or GPT-4 involves crafting inputs (prompts) that effectively guide the model to produce the desired output. It's a skill that combines understanding how the model interprets language with creativity and experimentation.
Learn moreA proposition is a statement that is either true or false. In AI, propositions are often used as a way of representing knowledge. For example, a proposition might be used to represent the fact that a certain object is a chair.
Learn moreProximal Policy Optimization (PPO) is a reinforcement learning algorithm that aims to maximize the expected reward of an agent interacting with an environment, while minimizing the divergence between the new and old policy.
Learn morePython is a programming language with many features that make it well suited for use in artificial intelligence (AI) applications. Python is easy to learn for beginners and has a large and active community of users, making it a good choice for AI development. Python also has a number of libraries and tools that can be used for AI development, making it a powerful tool for AI developers.
Learn moreThe qualification problem in AI is the impossibility of listing all of the preconditions required for a real-world action to have its intended effect. For example, a rule stating that a boat can be used to cross a river must be qualified by conditions such as the boat having oars and not having a hole. First articulated by John McCarthy, the problem is a central challenge for logic-based knowledge representation, since any finite set of qualifications can be defeated by an unanticipated circumstance.
Learn moreIn AI, a quantifier is a logical operator that expresses the quantity of something. For example, the quantifier "there exists" expresses the existence of something, while the quantifier "for all" expresses the universality of something.
Learn moreQuantization in [machine learning](/glossary/machine-learning) is a technique used to speed up the inference and reduce the storage requirements of neural networks. It involves reducing the number of bits that represent the weights of the model.
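A toy sketch of symmetric int8 post-training quantization; real schemes add per-channel scales, zero points, and calibration data, so this is only the core idea.

```python
import numpy as np

w = np.random.default_rng(0).normal(size=8).astype(np.float32)

scale = np.abs(w).max() / 127.0          # one scale for the whole tensor
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = q.astype(np.float32) * scale     # dequantized approximation

print("max error:", np.abs(w - w_hat).max())  # small, storage is 4x smaller
```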
Learn moreQuantum computing is a type of computing where information is processed using quantum bits (qubits) instead of classical bits. For certain classes of problems, quantum computers can offer dramatic speedups over classical computers. Quantum computing is still in its early stages, but it has the potential to significantly advance the field of artificial intelligence (AI).
Learn moreQuery language is a language used to make requests of a computer system. In the context of artificial intelligence, a query language can be used to make requests of an AI system in order to obtain information or take action.
Learn moreR is a programming language and free software environment for statistical computing and graphics supported by the R Foundation for Statistical Computing. The R language is widely used among statisticians and data miners for developing statistical software and data analysis.
Learn moreA radial basis function network is a type of artificial neural network that uses radial basis functions as activation functions. A radial basis function takes a multidimensional input and produces a scalar output that depends only on the distance between the input and a fixed center point. A common choice is the Gaussian, whose output is largest when the input is close to the center and decays toward zero with distance. Radial basis function networks are well suited to tasks such as function approximation, regression, and classification.
Learn moreRAGAS, which stands for Retrieval Augmented Generation Assessment, is a framework designed to evaluate Retrieval Augmented Generation (RAG) pipelines. RAG pipelines are a class of Large Language Model (LLM) applications that use external data to augment the LLM's context.
Learn moreA random forest is a [machine learning](/glossary/machine-learning) algorithm that is used for classification and regression. It is an ensemble learning method that creates a forest of random decision trees. The random forest algorithm is a supervised learning algorithm, which means it requires a training dataset to be provided. The training dataset is used to train the random forest model, which is then used to make predictions on new data.
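A minimal sketch using scikit-learn, assuming it is installed; the dataset and hyperparameters are illustrative.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)         # train the forest on labeled examples
print(model.score(X_test, y_test))  # accuracy on unseen data
```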
Learn moreReasoning is the process of drawing logical conclusions from given information. In AI, reasoning is the ability of a computer to make deductions based on data and knowledge.
Learn moreA recurrent neural network (RNN) is a type of neural network that is designed to handle sequential data. RNNs are often used for tasks such as language modeling and machine translation.
Learn moreRed teaming is a process where a group of security professionals, known as the red team, simulate attacks on an organization’s systems to identify vulnerabilities and test its defenses.
Learn moreIn AI, region connection calculus is a method of representing and reasoning about space. It is based on the idea of dividing space into regions, and then representing the relationships between those regions using a set of calculus rules. This allows for a more flexible and expressive way of reasoning about space, and has been used in applications such as robot navigation and scene understanding.
Learn moreReinforcement learning is a type of [machine learning](/glossary/machine-learning) that is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. The agent learns by interacting with its environment, and through trial and error discovers which actions yield the most reward.
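As a rough sketch of this trial-and-error process, here is a single tabular Q-learning update in Python; the states, actions, and reward are illustrative placeholders rather than a real environment.

```python
from collections import defaultdict

Q = defaultdict(float)   # maps (state, action) -> estimated cumulative reward
alpha, gamma = 0.1, 0.9  # learning rate and discount factor

def q_update(s, a, r, s_next, actions):
    # Move Q(s, a) toward the observed reward plus the best estimated
    # value of the next state (the temporal-difference target).
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

actions = ["left", "right"]
q_update(0, "right", 1.0, 1, actions)  # one illustrative transition
print(Q[(0, "right")])                 # 0.1 after a single update
```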
Learn moreReservoir computing is a type of artificial intelligence based on the idea of feeding inputs into a "reservoir" of simple, interconnected nodes that perform complex computations. The nodes in the reservoir are randomly connected, and their connection weights are fixed rather than trained; only a simple readout layer on top of the reservoir is learned. This makes reservoir systems much cheaper to train than fully trained recurrent networks.
Learn moreRDF is a standard model for data interchange on the Web. RDF is a directed, labeled graph data format for representing information in the Web. RDF is often used to represent, among other things, personal information, social networks, metadata about digital artifacts, as well as provide a means of integration over disparate sources of information.
Learn moreA restricted Boltzmann machine is a type of artificial intelligence that can learn to represent data in ways that are similar to how humans do it. It is a neural network that consists of two layers of interconnected nodes. The first layer is called the visible layer, and the second layer is called the hidden layer. The nodes in the visible layer are connected to the nodes in the hidden layer, but there are no connections within a layer: visible nodes are not connected to each other, and neither are hidden nodes. This restriction is what makes the machine "restricted" and its training tractable.
Learn moreThe Rete algorithm is a well-known AI algorithm that is used for pattern matching. It was developed by Charles Forgy in the 1970s and is still in use today. The Rete algorithm is based on the idea of production rules, which are if-then statements that describe a set of conditions and a corresponding action. The Rete algorithm is designed to efficiently evaluate a set of production rules against a set of data. It does this by creating a network of nodes, which represent the production rules, and then matching the data against the nodes. If a match is found, the corresponding action is taken. The Rete algorithm is a powerful tool for AI applications that require pattern matching, such as data mining, text classification, and image recognition.
Learn moreRetrieval-augmented Generation (RAG) is a technique used in natural language processing that combines the power of pre-trained language models with the ability to retrieve and use external knowledge.
Learn moreRetrieval Pipelines are a series of data processing steps where the output of one process is the input to the next. They are crucial in machine learning operations, enabling efficient data flow from the data source to the end application.
Learn moreReinforcement Learning from AI Feedback (RLAIF) is a type of machine learning that combines reinforcement learning (RL) and supervised learning from AI feedback to create more efficient and safe AI systems.
Learn moreReinforcement Learning from Human Feedback (RLHF) is a type of machine learning that combines reinforcement learning (RL) and supervised learning from human feedback to create more efficient and safe AI systems.
Learn moreRobotics is an interdisciplinary field that integrates computer science, mechanical engineering, and other related fields to design and construct robots. These robots are used to perform tasks that are difficult, dangerous, or impossible for humans to do.
Learn moreThe ROUGE Score, or Recall-Oriented Understudy for Gisting Evaluation, is a metric used in natural language processing to evaluate the quality of summaries. It measures the overlap of n-grams, word sequences of n words, between the generated summary and the reference summary.
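A minimal sketch of ROUGE-1 recall (unigram overlap) in Python; production implementations also handle stemming, multiple references, and precision/F-measure variants, and the sentences below are illustrative.

```python
from collections import Counter

def rouge1_recall(candidate: str, reference: str) -> float:
    # Fraction of reference unigrams that also appear in the candidate.
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / sum(ref.values())

print(rouge1_recall("the cat sat on the mat", "the cat lay on the mat"))  # 5/6
```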
Learn moreRule-based systems are one of the most commonly used types of AI systems. They are used to make decisions by following a set of rules that have been defined in advance.
Learn moreIn AI, satisfiability is the ability of a system to find a solution that meets all the requirements or constraints of a problem. A problem is considered satisfiable if there exists at least one solution that meets all the requirements. In contrast, an unsatisfiable problem has no solutions that meet all the requirements.
Learn moreScaling laws for Large Language Models (LLMs) refer to the relationship between the model's performance and the amount of resources used during training, such as the size of the model, the amount of data, and the amount of computation.
Learn moreThere are a few different types of search algorithms in AI. Some of the more common ones include uninformed strategies such as breadth-first search and depth-first search, and informed (heuristic) strategies such as greedy best-first search and A* search.
Learn moreSelection in a genetic algorithm is the process of choosing which individuals will be allowed to reproduce and pass on their genes to the next generation. This is done by selecting individuals with higher fitness values, which means they are more likely to produce offspring that are also fit and able to survive.
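A sketch of fitness-proportionate ("roulette wheel") selection, one common selection scheme; the population and fitness values are illustrative.

```python
import random

def select(population, fitnesses):
    # Each individual is chosen with probability proportional to its fitness.
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]

parents = [select(["a", "b", "c"], [1.0, 3.0, 6.0]) for _ in range(10)]
print(parents)  # "c" should dominate, since it holds 60% of total fitness
```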
Learn moreSelf-management in AI is the ability of AI systems to autonomously manage themselves in order to achieve their objectives. This includes the ability to monitor and control their own resources, to adapt their behavior in response to changes in their environment, and to learn from experience.
Learn moreIn artificial intelligence, a semantic network is a knowledge representation technique for organizing and storing knowledge. Semantic networks are a type of graphical model that shows the relationships between concepts, ideas, and objects in a way that is easy for humans to understand. The nodes in a semantic network are concepts, and the edges between nodes represent the relationships between those concepts. Semantic networks are used to represent both simple and complex knowledge structures.
Learn moreA semantic query is a question posed in a natural language such as English that is converted into a machine-readable format such as SQL. The goal of semantic querying is to make it possible for computers to answer questions posed in natural language.
Learn moreIn computer science, artificial intelligence, and logic, a semantic reasoner is a system that attempts to derive meaning from symbolic representations of information. The formal study of the deduction of meaning from symbols is called logical inference.
Learn moreSemantics in AI refers to the study and understanding of the meaning of words and phrases in a language. It involves the interpretation of natural language to extract the underlying concepts, ideas, and relationships. Semantics plays a crucial role in various AI applications such as natural language processing, information retrieval, and knowledge representation.
Learn moreIn short, sensor fusion is the process of combining data from multiple sensors to estimate the state of an environment. This is often used in robotics and autonomous systems, where multiple sensors are used to gather data about the world around them.
Learn moreSeparation logic is a logical framework for reasoning about the safety of programs that manipulate heap-allocated data structures. It allows programmers to reason about the memory safety of their programs without having to think about the underlying memory management infrastructure.
Learn moreSimilarity learning is a branch of [machine learning](/glossary/machine-learning) that deals with the problem of finding similar items in a dataset. It is often used in recommendation systems, where the goal is to find items that are similar to the items that a user has already liked.
Learn moreSimulated annealing is a technique used in AI to find solutions to optimization problems. It is based on the idea of annealing in metallurgy, where a metal is heated and then cooled slowly in order to reduce its brittleness. In the same way, simulated annealing can be used to find solutions to optimization problems by slowly changing the values of the variables in the problem until a solution is found.
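A minimal sketch in Python, minimizing f(x) = x² from a poor starting point; the neighbor move, starting temperature, and cooling schedule are illustrative choices.

```python
import math
import random

def anneal(f, x, temp=10.0, cooling=0.95, steps=1000):
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)   # propose a nearby solution
        delta = f(candidate) - f(x)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / temp), which shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if f(x) < f(best):
            best = x
        temp *= cooling
    return best

print(anneal(lambda x: x * x, x=8.0))  # ends near the minimum at 0
```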
Learn moreIn AI, situation calculus is a formalism for representing and reasoning about actions and change. It was developed by John McCarthy and Patrick J. Hayes.
Learn moreIn computer science, SLD resolution is a theorem proving technique for automated deduction, used in automated theorem provers and inference systems. It is a refinement of the resolution principle for first-order logic.
Learn moreSliding Window Attention (SWA) is a technique used in transformer models to limit the attention span of each token to a fixed size window around it. This reduces the computational complexity and makes the model more efficient.
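A sketch of the causal sliding-window attention mask in NumPy: token i may attend only to the window of tokens ending at i. The sequence length and window size are illustrative.

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    # True where attention is allowed: positions j with i - window < j <= i.
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

print(sliding_window_mask(6, 3).astype(int))
```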
Learn moreSoftware 2.0 refers to the new generation of software that is written in the language of [machine learning](/glossary/machine-learning) and artificial intelligence. Unlike traditional software that is explicitly programmed, Software 2.0 learns from data and improves over time. It can perform complex tasks such as natural language processing, pattern recognition, and prediction, which are difficult or impossible for traditional software. The capabilities of Software 2.0 extend beyond simple data entry and can include advanced tasks like facial recognition and understanding natural language.
Learn moreArtificial intelligence (AI) is a branch of computer science that deals with the creation of intelligent agents, which are systems that can reason, learn, and act autonomously.
Learn moreIn AI, SPARQL is a query language for RDF data. It allows you to query graph-structured data, such as knowledge bases, by matching patterns in the graph and retrieving the results. SPARQL is widely used with semantic web technologies to find patterns in data and to integrate data from disparate sources.
Learn moreThere are many ways to represent spatial data for AI applications. One common approach is to use a grid system, where each cell in the grid represents a specific location. This can be used to create a map of the area, which can then be used by AI algorithms to find the best path between two points, or to identify patterns in the data.
Learn moreSpeech recognition is a process of converting spoken words into text. It is also known as automatic speech recognition (ASR) or speech to text (STT).
Learn moreA spiking neural network is a type of artificial neural network that models biological neurons more closely than conventional networks by transmitting information as discrete spikes over time rather than as continuous activations. Spiking networks can be more energy-efficient than traditional artificial neural networks, particularly on neuromorphic hardware, and can more closely model how the brain processes information.
Learn moreSTRIPS is an automated planner developed by Richard Fikes and Nils Nilsson at SRI International in 1971. STRIPS is an acronym for "STanford Research Institute Problem Solver". The system was originally designed to plan the actions of the Shakey robot, but its representation of actions became the basis for most later planning languages.
Learn moreA state in AI is a representation of the current situation or environment that the AI system is in. This can be thought of as the "snapshot" of the current situation that the AI system is trying to make sense of. In order to make decisions, the AI system needs to be able to understand the current state of the world around it.
Learn moreStatistical classification is a method of machine learning that is used to predict the probability of a given data point belonging to a particular class. It is a supervised learning technique, which means that it requires a training dataset of known labels in order to learn the mapping between data points and class labels. Once the model has been trained, it can then be used to make predictions on new data points.
Learn moreSRL, or Structured Representation Learning, is a type of AI that focuses on learning from structured data, whether the data is already organized in a specific way or is produced by a process designed to generate structured output. Because it is built specifically around this kind of input, SRL is well suited to tasks such as image recognition and natural language processing.
Learn moreStephen Cole Kleene was an American mathematician and logician who made significant contributions to the theory of algorithms and recursive functions. He is known for the introduction of Kleene's recursion theorem and the Kleene star (or Kleene closure), a fundamental concept in formal language theory.
Learn moreStephen Wolfram is a British-American computer scientist, physicist, and businessman. He is known for his work in theoretical particle physics, cellular automata, complexity theory, and computer algebra. He is the founder and CEO of the software company Wolfram Research, where he led the development of Mathematica and the Wolfram Alpha answer engine.
Learn moreStochastic optimization is a method of optimization that uses randomness to find an approximate solution to a problem. It is often used in problems where the search space is too large to be searched exhaustively, or when the objective function is too complex to be evaluated accurately.
Learn moreWhen it comes to artificial intelligence, there is no one-size-fits-all definition. In general, AI can be described as a computer system that is able to perform tasks that would normally require human intelligence, such as visual perception, natural language processing, and decision-making.
Learn moreA subject-matter expert in AI is someone who is an expert in a particular area of AI. They may be experts in machine learning, natural language processing, or any other area of AI. Subject-matter experts in AI are often able to develop new applications of AI and to improve existing AI systems.
Learn moreSuperintelligence is a term used to describe a hypothetical future artificial intelligence (AI) that is significantly smarter than the best human minds in every field, including scientific creativity, general wisdom and social skills.
Learn moreSupervised fine-tuning (SFT) is a method used in [machine learning](/glossary/machine-learning) to improve the performance of a pre-trained model. The model is initially trained on a large dataset, then fine-tuned on a smaller, specific dataset. This allows the model to maintain the general knowledge learned from the large dataset while adapting to the specific characteristics of the smaller dataset.
Learn moreSupervised learning is a [machine learning](/glossary/machine-learning) paradigm where a model is trained on a labeled dataset. The model learns to predict the output from the input data during training. Once trained, the model can make predictions on unseen data. Supervised learning is widely used in applications such as image classification, speech recognition, and market forecasting.
Learn moreA support vector machine (SVM) is a supervised learning algorithm primarily used for classification tasks, but it can also be adapted for regression through methods like Support Vector Regression (SVR). The algorithm is trained on a dataset of labeled examples, where each example is represented as a point in an n-dimensional feature space. The SVM algorithm finds an optimal hyperplane that separates classes in this space with the maximum margin possible. The resulting model can then be used to predict the class labels of new, unseen examples.
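A minimal scikit-learn sketch, assuming it is installed; the synthetic dataset and hyperparameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0)    # the RBF kernel handles non-linear boundaries
clf.fit(X_train, y_train)         # find the maximum-margin separator
print(clf.score(X_test, y_test))  # accuracy on held-out examples
```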
Learn moreSwarm intelligence (SI) is a subfield of artificial intelligence (AI) based on the study of decentralized systems. SI systems are typically made up of a large number of simple agents that interact with each other and their environment in order to accomplish a common goal.
Learn moreSymbolic AI is a subfield of AI that deals with the manipulation of symbols. Symbolic AI algorithms are designed to solve problems by reasoning about symbols and relationships between symbols.
Learn moreSynthetic intelligence is a process of programming a computer to make decisions for itself. This can be done through a number of methods, including but not limited to: rule-based systems, decision trees, genetic algorithms, artificial neural networks, and fuzzy logic systems.
Learn moreSystems neuroscience is a field of study that investigates the relationship between the nervous system and behavior. This approach to AI seeks to understand how the brain produces intelligent behavior and how artificial intelligence can be used to replicate or exceed human intelligence.
Learn moreThe technological singularity is a theoretical future event where technological advancement becomes so rapid and exponential that it surpasses human intelligence. This could result in machines that can self-improve and innovate faster than humans. This runaway effect of ever-increasing intelligence could lead to a future where humans are unable to comprehend or control the technology they have created. While some proponents of the singularity argue that it is inevitable, others believe that it can be prevented through careful regulation of AI development.
Learn moreIn artificial intelligence, temporal difference learning (TDL) is a kind of reinforcement learning (RL) in which value estimates are updated from the difference between successive predictions, using feedback from the environment rather than waiting for a final outcome. Well-known temporal difference methods include Q-learning, which learns off-policy, and SARSA, which learns on-policy.
Learn moreA tensor network is a powerful tool for representing and manipulating high-dimensional data. It generalizes decompositions such as the matrix product state (MPS), also known in numerical analysis as the tensor train (TT), and can be used to represent a wide variety of data structures including images, videos, and 3D objects.
Learn moreTensorFlow is a powerful tool for [machine learning](/glossary/machine-learning) and artificial intelligence. It is an open source library created by Google that is used by developers to create sophisticated [machine learning](/glossary/machine-learning) models. TensorFlow makes it easy to train and deploy [machine learning](/glossary/machine-learning) models. It has a wide range of applications including image recognition, natural language processing, and time series analysis.
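A minimal Keras sketch, assuming TensorFlow is installed; the synthetic data, layer sizes, and training settings are illustrative.

```python
import numpy as np
import tensorflow as tf

# A toy binary-classification dataset: label is 1 when the features sum past 2.
X = np.random.rand(100, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```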
Learn moreThe relationship between TCS and AI varies depending on the specific application or industry. In general, TCS can be used to help train and develop AI systems and to provide data for improving and optimizing AI algorithms. Additionally, TCS can be used to help monitor and control AI systems and to provide insights that improve AI decision-making.
Learn moreThere is a strong relationship between AI and computation. AI is heavily reliant on computation in order to function. In fact, AI is often referred to as computational intelligence. This is because AI relies on computers to process and store data, as well as to carry out complex calculations.
Learn moreIn AI, Thompson sampling is a method for balancing exploration and exploitation. It works by maintaining a distribution over the space of possible actions, and selecting the action that is most likely to be optimal according to that distribution. The distribution is updated at each step based on the rewards obtained.
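A sketch of Thompson sampling for a two-armed Bernoulli bandit in Python, keeping a Beta posterior per arm; the true payout probabilities are illustrative.

```python
import random

true_probs = [0.3, 0.6]  # hidden payout rates (illustrative)
alpha = [1, 1]           # per-arm success counts + 1 (Beta prior)
beta = [1, 1]            # per-arm failure counts + 1

for _ in range(1000):
    # Sample a plausible payout rate for each arm, then pull the best sample.
    samples = [random.betavariate(alpha[i], beta[i]) for i in range(2)]
    arm = samples.index(max(samples))
    reward = 1 if random.random() < true_probs[arm] else 0
    alpha[arm] += reward
    beta[arm] += 1 - reward

print(alpha, beta)  # the better arm (index 1) accumulates far more pulls
```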
Learn moreThe time complexity of an algorithm is the amount of time it takes to run as a function of the input size. For example, a linear-time algorithm that takes 10 seconds on an input of size 10 would take roughly 100 seconds on an input of size 100, whereas a quadratic-time algorithm would take roughly 1,000 seconds on the same input. Time complexity is typically expressed in Big O notation, which gives an upper bound on the growth of the running time.
Learn moreTokens in foundational models are the smallest units of data that the model can process. In the context of Natural Language Processing (NLP), a token usually refers to a word, but it can also represent a character, a subword, or even a sentence, depending on the granularity of the model.
Learn moreTokenization is the process of converting text into tokens that can be fed into a Large Language Model (LLM).
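A toy word-level tokenizer in Python to show the idea; real LLM tokenizers use learned subword vocabularies (for example BPE), and the vocabulary below is an illustrative placeholder.

```python
vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3}

def tokenize(text: str) -> list[int]:
    # Map each word to its id, falling back to the unknown token.
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The cat sat quietly"))  # [1, 2, 3, 0] - "quietly" is out of vocabulary
```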
Learn moreTracing in distributed systems is a method used to monitor applications and troubleshoot problems by tracking requests as they are processed. Tracing provides visibility into the performance and reliability of applications and services, which can be critical in a distributed system where requests can span multiple services and machines.
Learn moreA transformer is a type of machine learning model built around the self-attention mechanism, which lets the model weigh the relevance of every token in a sequence when processing each token. Transformers are trained to understand the context of language and to make predictions about future words or phrases.
Learn moreThe Transformer Library is a collection of state-of-the-art [machine learning](/glossary/machine-learning) models and community-built tools for Natural Language Processing (NLP). It provides pre-trained models that can be fine-tuned on specific tasks, and allows models to be shared and developed collaboratively.
Learn moreTranshumanism is the belief that the human race can and should be improved through the use of technology. This can be achieved through the use of artificial intelligence (AI), which can help us to enhance our physical and mental abilities.
Learn moreA transition system is a mathematical model used to describe the behavior of a system. In AI, transition systems are used to describe the behavior of agents. A transition system consists of a set of states, a set of transitions, and a set of rules that determine how the transitions can be executed. The transition system is a powerful tool for reasoning about the behavior of agents.
Learn moreIn computer science, tree traversal is the process of visiting each node in a tree data structure in a specific order. There are three common ways to traverse a tree: in-order, pre-order, and post-order.
Learn moreA quantified Boolean formula (QBF) is a formula in which variables are quantified by existential (there exists) or universal (for all) quantifiers. For example, ∀x ∃y ((x ∨ y) ∧ (¬x ∨ ¬y)) asserts that for every value of x there is a value of y that satisfies the formula. QBF is a generalization of propositional logic, which does not allow variables to be quantified.
Learn moreA Turing machine is a hypothetical machine thought of by Alan Turing in 1936 that is capable of simulating the logic of any computer algorithm, no matter how complex. It is a very simple machine that consists of a tape of infinite length on which symbols can be written, a read/write head that can move back and forth along the tape and read or write symbols, and a finite state machine that controls the head and can change its state based on the symbols it reads or writes. The Turing machine is capable of solving any problem that can be solved by a computer algorithm, making it the theoretical basis for modern computing.
Learn moreThe Turing test is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Alan Turing, who proposed the test in 1950, suggested that a computer could be considered intelligent if it could deceive a human into believing that it was human. The test does not check the ability to give correct answers to questions, but rather the ability to produce responses that a human evaluator cannot reliably distinguish from a human's.
Learn moreA type system is a system that helps to ensure the correctness of programs by assigning a type to each value in the program. In AI, a type system can be used to help ensure that the data used by the AI system is consistent and of the correct type. For example, if the AI system is designed to work with data that is of the type "real", then the type system can help to ensure that all of the data used by the AI system is of that type. This can help to prevent errors and improve the overall quality of the AI system.
Learn moreIn [machine learning](/glossary/machine-learning), unsupervised learning is a type of self-organized learning that does not require labeled data. The key to unsupervised learning is that it can find patterns in data that are not labeled. This is different from supervised learning, which requires data to be labeled in order to find patterns.
Learn moreA vector database is a type of database that stores, manipulates, and retrieves data as high-dimensional vectors (embeddings), typically by searching for the vectors most similar to a query rather than by exact matching.
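A sketch of the core operation in NumPy: cosine-similarity search over stored embeddings. Real vector databases add approximate indexing (for example HNSW) to scale; the data below is random and illustrative.

```python
import numpy as np

def top_k(query, vectors, k=2):
    # Normalize everything so dot products equal cosine similarities.
    q = query / np.linalg.norm(query)
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    scores = v @ q
    return np.argsort(scores)[::-1][:k]   # indices of the k most similar vectors

store = np.random.rand(100, 8)   # 100 illustrative 8-dimensional embeddings
print(top_k(np.random.rand(8), store))
```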
Learn moreA vision processing unit, or VPU, is a specialized type of microprocessor that is designed to efficiently process the large amounts of data that are typically associated with computer vision applications.
Learn moreIBM Watson is a question-answering computer system capable of answering questions posed in natural language, developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci.
Learn moreWeak AI is a term used to describe AI systems that are not as powerful or intelligent as strong AI systems. While weak AI systems may be able to perform certain tasks, they are not as capable as strong AI systems when it comes to general intelligence.
Learn moreThe WER Score, or Word Error Rate, is a metric used in speech recognition to evaluate the quality of transcribed text. It measures the minimum number of edits (insertions, deletions, or substitutions) required to change the system output into the reference output.
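A minimal Python sketch computing WER as a word-level edit distance; the sentences are illustrative.

```python
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i reference words into the first j
    # hypothesis words (classic Levenshtein dynamic program).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[-1][-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words
```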
Learn moreWolfram Alpha is a computational knowledge engine or answer engine developed by Wolfram Research. It is an online service that answers factual queries directly by computing the answer from externally sourced "curated data."
Learn moreThe World Wide Web Consortium (W3C) is an international community that develops standards for the World Wide Web. The W3C was founded in October 1994 by Tim Berners-Lee, the inventor of the World Wide Web.
Learn moreZephyr 7B is a state-of-the-art language model developed by Hugging Face. It is a fine-tuned version of the Mistral-7B model, trained on a mix of publicly available and synthetic datasets using Direct Preference Optimization (DPO). The model is designed to generate fluent, interesting, and helpful conversations, making it an ideal assistant in various tasks.
Learn moreCollaborate with your team on reliable Generative AI features.
Want expert guidance? Book a 1:1 onboarding session from your dashboard.