The two hemispheres of the human brain process signals and learn from abundant data. Something similar is happening inside ChatGPT
Are you worried that you will lose your job? Are you concerned that superintelligent AI will become an existential threat to humanity? The press is busy keeping up with the latest concerns in AI. However, readers are being misled because these are not the right questions. What is happening behind the scenes is more interesting, and where advances in AI are heading is more complex.
My goal is to give you my perspective as an insider. I was a pioneer in developing learning algorithms in the 1980s, which gave us tools for understanding how populations of neurons in brains can solve complex computational problems. My book with Patricia Churchland, The Computational Brain, introduced a generation of neuroscientists to this new perspective. In the last ten years, there has been a revolution in neuroscience driven by new techniques for simultaneously recording hundreds of thousands of neurons in dozens of brain areas.
The brain revolution has coincided with the AI revolution, spurred by the same learning algorithms we invented in the 1980s, scaled up to network models with trillions of weights. The Deep Learning Revolution, a book I wrote in 2018, tells the story of how AI shifted from a logic-based to a brain-based model of computation. Brains and AI are now converging, as you will learn in the coming months.
OpenAI released ChatGPT to the general public in November 2022, which opened a new chapter in AI and a great debate on whether large language models (LLMs) are intelligent or understand what they are saying. Language is a quintessential human ability, and we assess a person's intelligence from what they say. I have written a new book on ChatGPT and the Future of AI that will be published in October 2024.
https://www.amazon.com/ChatGPT-Future-AI-Language-Revolution/dp/0262049252
Here are some general questions I am often asked and some additional questions that I will explore in future posts:
1. Your vision for AI was modeled after the human brain. Why did you decide to follow that path of thinking?
Nature evolved solutions for all the problems underlying intelligence, including vision, language, and planning. So why not peek under the hood? This was not popular in the 20th century but became a wildly successful strategy in the 21st century. Still, it took vastly more computer power than anyone imagined in the 20th century. What more can we learn from nature?
2. How has the convergence of AI and neuroscience accelerated advances in each field?
AI researchers in the 20th century tried to create AI by writing computer programs using logic. AI models today are probabilistic and use the same mathematics neuroscientists use to model brains, creating a virtuous circle between these two communities. Will this lead to a new theory for intelligence?
3. What are transformers, and how do they form the basis of all large language models (LLMs)?
A transformer is a generative neural network model that pulls itself up by its bootstraps by learning how to predict the next word. It learned how to understand sentences so it could autocomplete questions. Could brains work the same way?
4. What are some key challenges that researchers and developers face in advancing the capabilities of LLMs, and how do you suggest addressing these challenges?
The challenge for researchers and developers is keeping up with all the advances that are taking place every day. Universities are facing the challenge of keeping up with the demand of students who want to major in computer science and computational neuroscience. How long can this continue?
5. Is it possible to teach LLMs cultural values or the difference between 'good' and 'bad' behavior?
We treat large language models like pampered children and should not be surprised when they become spoiled brats. We know how to raise children. How can we become better parents for LLMs?
6. What is the importance of generalization in neural networks?
We are good at taking a concrete experience and generalizing from it. LLMs can also generalize, which is the source of their creativity. How much better will LLMs become at generalizing and creativity?
7. How do you see the role of LLMs evolving in society in the coming years?
We are at the very beginning of a cornucopia of LLM applications that will help you organize your life, help students learn better, and make companies more productive. How long will this take to unfold?
8. How would you differentiate artificial intelligence from superintelligence?
The term 'artificial' is a relic of the 20th century. Intelligence is a spectrum, not a uniquely human capacity, and it includes the intelligence of our fellow creatures. AI is on that spectrum, the most recent addition to the club. Why is this so difficult for some academic experts to accept?
AI is already superintelligent in integrating the world's knowledge into a single, enormous, generative neural network model with creative abilities. This came as a shock to the world as well as to the engineers who created it. Are there more shocks in store for us?
9. In what ways do you think the public perception of LLMs may influence their adoption and acceptance in different industries?
CEOs and other C-suite chiefs struggle to understand LLMs and how they will affect their businesses. The public already uses LLMs and is way ahead of them on the learning curve. How long will it take for companies to catch up?
10. Would describing LLMs as merely ‘parroting’ their training data be inaccurate?
Academic critics have called LLMs ‘stochastic parrots.’ Parrots are pretty intelligent, and Irene Pepperberg trained an African grey parrot called Alex to answer questions in English about numbers, colors, and shapes of objects. Are academic critics of AI stochastic parrots?
I will be answering these questions in my upcoming Substacks.
In Part 2, we will meet a talking dog and confront an alien mind.
Who is Terry?
· Ph.D. in physics from Princeton University with John Hopfield
· Francis Crick Chair at the Salk Institute for Biological Studies
· Distinguished Professor at the University of California at San Diego
· Served on the Advisory Committee to the Director of NIH for the BRAIN Initiative
· President of the Neural Information Processing Systems (NeurIPS) Foundation that organizes the largest AI meeting
· Member of the National Academy of Sciences, the National Academy of Medicine, and the National Academy of Engineering
· Runs 5 miles a day on the beach and hikes the world’s mountains