
What is the Dartmouth Workshop in AI?

by Stephen M. Walker II, Co-Founder / CEO

What was the Dartmouth Workshop?

The Dartmouth Workshop, officially known as the Dartmouth Summer Research Project on Artificial Intelligence, was a seminal event in the history of artificial intelligence (AI). It took place in 1956 at Dartmouth College in Hanover, New Hampshire, and is widely considered the founding event of AI as a distinct field of study.

The workshop was organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The project lasted approximately six to eight weeks and was essentially an extended brainstorming session. The organizers based the workshop on the conjecture that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it".

The workshop brought together a diverse group of participants from various backgrounds, including mathematics, psychology, and electrical engineering. The attendees shared a common belief that the act of thinking is not unique to humans or even to biological entities. The discussions during the workshop were wide-ranging and helped seed ideas for future AI research.

Despite the optimism and high expectations, the workshop did not lead to immediate breakthroughs towards human-level AI. However, it did inspire many people to pursue AI goals in their own ways. The term "artificial intelligence" was coined during this period, and the Dartmouth Workshop played a significant role in establishing AI as a legitimate field of research.

More terms

What is TensorFlow?

TensorFlow is an open-source software library developed by Google Brain for implementing machine learning and deep learning models. It provides a comprehensive set of tools and APIs for defining, training, and deploying complex neural network architectures on various hardware platforms (e.g., CPUs, GPUs, TPUs) and programming languages (e.g., Python, C++, Java).
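As a minimal sketch of what defining a model in TensorFlow looks like, the snippet below builds a small feedforward network with the Keras API and runs a forward pass (the layer sizes and activations here are illustrative, not from any particular tutorial):

```python
import tensorflow as tf

# A small feedforward classifier: 4 input features, 3 output classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Forward pass on a random batch of 2 examples to confirm the output shape.
x = tf.random.normal((2, 4))
y = model(x)
print(y.shape)  # (2, 3)
```

The same model definition runs unchanged on CPU, GPU, or TPU; TensorFlow dispatches the underlying kernels to whatever hardware is available.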

Read more

Ollama: Easily run LLMs locally

Ollama is a streamlined tool for running open-source LLMs locally, such as Mistral and Llama 2. It bundles model weights, configuration, and data into a unified package managed by a Modelfile, and supports a variety of models including Llama 2, uncensored Llama, CodeLlama, Falcon, Mistral, Vicuna, WizardCoder, and Wizard Uncensored. It is currently compatible with macOS and Linux, with Windows support expected soon.
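As a sketch of how a Modelfile packages a model, a minimal one might look like this (the base model, parameter value, and system prompt below are illustrative choices, not defaults):

```
# Illustrative Modelfile: base model plus custom settings
FROM llama2

# Sampling temperature (example value)
PARAMETER temperature 0.7

# System prompt baked into the packaged model
SYSTEM "You are a concise technical assistant."
```

You would then build and run the packaged model with `ollama create mymodel -f Modelfile` followed by `ollama run mymodel`.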

Read more

It's time to build

Collaborate with your team on reliable Generative AI features.
Want expert guidance? Book a 1:1 onboarding session from your dashboard.

Start for free