Large Language Models vs Symbolic AI

LLMs
  • Robust, scalable
  • Huge language-specific training requirements
  • Black-box, unpredictable
  • Emergent intelligence? Existential threat?

Symbolic AI
  • Formalizable, predictable
  • Language-independent, multi-modal
  • One-shot learning, multi-step inferencing
  • Fragile, difficult to engineer and scale up

Do we still need symbolic AI?

  • It depends.
  • What was your goal?
  • To build intelligent machines?
    • Are LLMs a significant advance in building intelligent machines?
    • Are we done?
  • To model human intelligence and learning?
    • Are LLMs a plausible model of human cognition and learning?
    • Do they answer questions about how we think?

Hinton's View

Some claims

Claim: The human brain works like LLMs do.
  • Do humans learn like LLMs do?

Claim: Neural networks are more biologically plausible than symbolic AI.
  • Should/must AI research emulate evolution, i.e., neurons before reasoning?
  • Does thought require neurons?

One possible compromise

Neuro-symbolic AI

https://en.wikipedia.org/wiki/Neuro-symbolic_AI
  • Scalable, robust, predictable
  • Language-independent, multi-modal
  • Human-level training requirements
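The division of labor in neuro-symbolic AI can be sketched in a few lines: a neural component maps raw input to symbolic facts, and a symbolic engine performs predictable multi-step inference over them. The sketch below is purely illustrative; the classifier is a hard-coded stub, and the facts, rules, and function names are invented for the example rather than drawn from any real system.

```python
def neural_perception(image_id):
    """Stand-in for a trained classifier: maps raw input to symbolic facts.
    (Hypothetical stub; a real system would run a neural network here.)"""
    detections = {"img1": [("cat", "on", "mat")],
                  "img2": [("dog", "on", "sofa")]}
    return detections.get(image_id, [])

# Each rule is (condition on a fact, conclusion derived from it).
RULES = [
    # If X is on Y, then X is supported by Y.
    (lambda f: f[1] == "on", lambda f: (f[0], "supported_by", f[2])),
    # If X is supported by Y, then X is located at Y.
    (lambda f: f[1] == "supported_by", lambda f: (f[0], "located_at", f[2])),
]

def forward_chain(facts):
    """Apply rules until no new facts are derived.
    Unlike a black-box model, every conclusion is inspectable."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for cond, conc in RULES:
            for f in list(facts):
                if cond(f) and conc(f) not in facts:
                    facts.add(conc(f))
                    changed = True
    return facts

facts = forward_chain(neural_perception("img1"))
print(("cat", "located_at", "mat") in facts)  # True
```

The key design point: the neural side handles robustness to messy input, while the symbolic side contributes the formalizable, multi-step inferencing listed above.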

Readings