What is the Dartmouth Workshop in AI?

by Stephen M. Walker II, Co-Founder / CEO

What was the Dartmouth Workshop?

The Dartmouth Workshop, officially known as the Dartmouth Summer Research Project on Artificial Intelligence, was a seminal event in the history of artificial intelligence (AI). It took place in 1956 at Dartmouth College in Hanover, New Hampshire, and is widely considered the founding event of AI as a distinct field of study.

The workshop was organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The project lasted approximately six to eight weeks and was essentially an extended brainstorming session. In their 1955 proposal, the organizers based the workshop on the conjecture that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it".

The workshop brought together participants from diverse backgrounds, including mathematics, psychology, and electrical engineering. The attendees shared the belief that thinking is not unique to humans, or even to biological entities. The wide-ranging discussions during the workshop helped seed ideas for future AI research.

Despite the optimism and high expectations, the workshop did not lead to immediate breakthroughs toward human-level AI. It did, however, inspire many participants to pursue AI research in their own ways. The term "artificial intelligence" itself was coined by John McCarthy in the 1955 proposal for the workshop, and the Dartmouth Workshop played a significant role in establishing AI as a legitimate field of research.

More terms

Mathematical Optimization Methods

Mathematical optimization, or mathematical programming, is the selection of a best solution from a set of alternatives; problems are broadly categorized as discrete or continuous. It involves minimizing or maximizing a scalar objective function, with the goal of finding the variable values that yield the lowest or highest function value.
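
To make the idea concrete, here is a minimal sketch of continuous optimization via gradient descent. The objective f(x) = (x - 3)^2, the step size, and the iteration count are illustrative assumptions rather than details from the article.

```python
# Minimal gradient-descent sketch: minimize f(x) = (x - 3)^2.
# The objective, learning rate, and iteration count are illustrative
# choices, not part of the original article.

def f(x):
    return (x - 3.0) ** 2

def grad_f(x):
    return 2.0 * (x - 3.0)  # derivative of f

x = 0.0               # starting point
learning_rate = 0.1
for _ in range(100):
    x -= learning_rate * grad_f(x)

print(x, f(x))        # x converges toward the minimizer 3.0
```

Because this f is convex, the iterates approach the unique minimizer x = 3; practical problems add many variables and constraints, but the minimize-a-scalar-objective structure is the same.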


What is batch normalization?

Batch normalization is a technique used in training artificial neural networks that normalizes each layer's inputs (the outputs of the preceding layer) across a mini-batch. It is designed to make training faster and more stable, and was proposed by Sergey Ioffe and Christian Szegedy in 2015.
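
As a rough sketch of the training-time forward pass (a simplified rendering of the idea, not the full algorithm from the 2015 paper, which also tracks running statistics for inference), the snippet below normalizes each feature across a mini-batch and then applies a learnable scale and shift. The function name, shapes, and values are illustrative assumptions.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch per feature, then scale and shift.

    x: (batch_size, num_features) inputs to a layer
    gamma, beta: (num_features,) learnable scale and shift
    (Illustrative sketch; names and shapes are assumptions.)
    """
    mean = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta

# Illustrative usage with random data
x = np.random.randn(32, 4) * 5.0 + 2.0       # batch of 32, 4 features
out = batch_norm_forward(x, np.ones(4), np.zeros(4))
print(out.mean(axis=0), out.std(axis=0))     # roughly 0 mean, unit std
```

At inference time, implementations typically swap the batch statistics for running averages accumulated during training; that detail is omitted here.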

