Artificial intelligence has, for decades, been fodder for science fiction films, philosophers and sleep-deprived computer programmers, but suddenly it seems to be everywhere.
ChatGPT reached 100 million users at an unprecedented rate. Bill Gates recently declared that “the Age of AI has begun.” And the Biden administration last month began exploring new measures to hold artificial intelligence systems accountable for their impact.
But for a lot of people, it’s still a fuzzy concept that doesn’t affect their day-to-day lives.
So this might be a good moment to take a step back and review the basics. Here’s a guide to help you understand what all the hype is about.
Why is everyone suddenly talking about AI?
You can thank (or blame) one specific company: OpenAI, a tech startup based in San Francisco with a few hundred employees. In November, OpenAI released the chatbot ChatGPT to the public, and it quickly became clear that it was leaps and bounds ahead of chatbots that had come before. It was like talking to someone who knew everything.
The tool, which the company says is only one step in a long process of developing AI, quickly went viral. Other tech companies, such as Google and Meta, had been testing similar chatbots behind closed doors, but OpenAI made its chatbot widely available — a decision that was controversial because of the unknown risks.
What’s so great about a chatbot?
Mediocre chatbots have been around for a long time. Think of the customer service chat windows that pop up on some websites. In 2016, Microsoft even released an AI chatbot named Tay but quickly canceled it after people taught it to use racist language.
ChatGPT came on the scene as something different. Not only could it answer a seemingly unlimited number of questions, but it could also write screenplays, summarize huge amounts of information and imitate a human in conversation somewhat convincingly. It immediately seemed, at a minimum, that it could one day make everyday life more efficient.
And chatbots are only one piece of AI, along with images, animated videos, facial recognition technology and more.
Let’s back up. What even is AI?
At its simplest, AI can be boiled down to a few words: machines that think. Or, even better, machines that can imitate thinking.
The term has its origins among scientists after World War II. British mathematician Alan Turing in 1950 all but predicted the development of “digital computers” that could persuasively imitate humans, and in 1955, American mathematician John McCarthy and colleagues at Dartmouth College coined the term “artificial intelligence” in a research proposal.
“Generative AI,” a newer term, refers to software like ChatGPT that gives rise to new material.
Is it really possible for computers to ‘think’?
We could write a whole book on this one, but here’s a short answer: No, they can’t. While a few people believe AI is already coming alive, they’re a small group, and the idea is really a distraction from what’s going on inside the computers.
If you’d like a longer answer, NBC News spoke with several philosophers about how they approach the question.
So what’s really going on inside the computers?
AI software is able to imitate humans so convincingly because it’s good at prediction: It guesses the word or sentence or image you want to see next. (Some detractors have called this “glorified autocomplete.”)
And the systems are so good at prediction because their human creators have fed them so many human-created past examples — including huge parts of the internet. The raw material that goes into AI models is called training data, and although some companies are secretive about what they use, well-known sources of data include Reddit and Wikipedia.
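For readers who want to peek under the hood, here is a toy sketch of next-word prediction in Python. It counts which words follow which in a tiny, made-up "training" string — vastly simpler than a neural network trained on huge parts of the internet, but the same basic idea of learning next-word statistics from past examples.

```python
from collections import Counter, defaultdict

# Invented toy "training data"; real systems ingest enormous text corpora.
training_text = "the cat sat on the mat . the cat ate . the dog ran ."

# For each word, count which words follow it in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Guess the word most often seen after `word` in the training data."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" most often above
```

The prediction is only as good as the examples: feed the counter different text, and "the" would be followed by something else entirely.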
OK, so AI mines insight from lots of data. How?
AI learns by example. By studying huge amounts of human writing and speech, language models identify patterns in how we express ourselves, distilling concepts like tone, word placement and even idioms. Those patterns are then translated into math in a process called “model training.” Like children picking up new words and grammar, AI has to absorb the rules of language from the examples it sees.
When large language models like ChatGPT receive prompts, that knowledge allows them to both understand what we’re asking for and construct responses.
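As a rough illustration of how a prompt gets extended into a response, the sketch below chains most-likely next words one at a time. It is a deliberately crude stand-in (a word-pair model over invented text), not how ChatGPT works internally, but it shows the response being constructed piece by piece.

```python
from collections import Counter, defaultdict

# Invented toy corpus; a real model learns from far richer data.
corpus = "i like cats . i like dogs . cats like naps .".split()

# Learn which word tends to follow which.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def respond(prompt_word, max_words=5):
    """Extend a one-word prompt by repeatedly appending the
    most likely next word, one step at a time."""
    out = [prompt_word]
    for _ in range(max_words):
        options = follows.get(out[-1])
        if not options:
            break  # nothing ever followed this word in training
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(respond("i"))
```

Real models don't always pick the single likeliest word; they sample with some randomness, which is part of why the same prompt can yield different answers.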
ChatGPT takes training further with its secret sauce: reinforcement learning from human feedback, or RLHF. This fine-tuning technique does the heavy lifting. In this stage, human graders score model output, heavily penalizing answers that are wild, inappropriate or downright nonsensical while rewarding those that are informative and humanlike. That enables fluid conversational exchanges.
While there are other fine-tuning techniques, RLHF has been considered groundbreaking in language modeling, and it is used by companies such as OpenAI and Hugging Face, a startup that offers tools to coders building their own AI models.
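In spirit, the human-feedback step works something like the hypothetical sketch below: graders' scores steer the system toward answers people rate highly. The answers and scores here are invented, and real RLHF trains a separate reward model and then updates the language model itself with reinforcement learning; this just shows the "reward the good, penalize the bad" idea.

```python
# Invented human-grader scores for candidate answers to
# "What is the capital of France?" (higher = better).
human_scores = {
    "Paris is the capital of France.": 0.9,   # informative, humanlike
    "paris paris paris paris":        -0.8,   # nonsensical -> penalized
    "I refuse to answer.":            -0.3,   # unhelpful -> penalized
}

def pick_best(candidates, reward):
    """Return whichever candidate the reward signal rates highest."""
    return max(candidates, key=reward)

best = pick_best(human_scores.keys(), lambda answer: human_scores[answer])
print(best)  # the informative answer wins
```

Scaled up across many prompts and graders, that kind of feedback is what nudges a chatbot's output from wild or nonsensical toward fluid and conversational.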
Is AI just another fad from Silicon Valley?
On one level, AI chatbots may bear some resemblance to past overhyped tech trends that fizzled (do we all really want to spend our days talking to a computer?), but there are reasons to think AI is more than another passing fad.
For one thing, money is pouring into the sector, with $1.7 billion in startup investments alone in the first three months of 2023, according to the research firm PitchBook. In addition, tangible uses are already popping up, from hit songs to help for the blind.
Why is all this happening now?
It has been 26 years since the triumph of IBM’s Deep Blue computer program over chess champion Garry Kasparov — a milestone in AI research and development. Since then, computer chips have gotten much faster and can handle the huge amount of data required for modern AI, and new ways of writing software have also made the process more efficient.
Chipmakers such as Nvidia and tech companies including Google, Meta and OpenAI have poured resources into those two areas, as well as into consolidating talented computer scientists under their respective roofs.
When can I expect this to start affecting my life?
Don’t expect to wake up one morning and suddenly live in an AI world. Instead, expect that changes will come a little at a time: a hit song created with AI, a new test at the doctor’s office to detect cancer or slightly better customer service. OpenAI has licensed its technology to Morgan Stanley so its investment advisers can give better advice and to Khan Academy so its students have access to a chatbot tutor.
Think of all the businesses or products you deal with every day, and there’s a good chance one is using similar technology or will in the near future — even if the only immediate impact is a little bit more efficiency.
Can we expect any big changes?
It’s hard to know what to count on, but yes, there’s plenty of dreaming going on in AI startups. If AI software can make both human work and computers more efficient, could all that brainpower be put toward major advances in other new areas?
Is AI going to make lots of jobs irrelevant?
The predictions run the gamut, so if you’re confused, you’re not alone. OpenAI CEO Sam Altman has suggested that AI will lead to a utopia in which people don’t need to work, while others warn of mass unemployment among computer programmers.
Even economists who specialize in labor are stumped; they expect AI to change people’s jobs and supplement existing work, but they otherwise avoid specific predictions.
One set of researchers recently tried to rank jobs by risk that AI will alter what people do. In trouble, according to them: telemarketers, humanities professors and credit authorizers. Harder to replace: dancers, stonemasons and steelworkers.
And who’s going to make money from this?
Again, the predictions are all over the place, from a more equitable society to a less equal one. A lot depends on how politicians and voters react, and the Biden administration and Congress are paying increasing attention to AI research and development.
But some of the early leaders are the big tech companies, such as Google, Meta and Amazon; OpenAI, which converted in 2019 from a nonprofit to a for-profit company; and whoever survives among the dozens of AI startups that collectively are raising billions of dollars from early investors.
What could possibly go wrong?
If you go by science fiction films or the nightmares of a few researchers, there’s a chance of killer robots: AI becoming sentient beings with motivations of their own.
Prompted by that scenario, thousands of people, including Elon Musk and some AI researchers, signed a petition calling for a pause of at least six months in training new AI systems. Some top tech executives and researchers, however, didn’t sign it. And at least so far, there isn’t overwhelming evidence that humans are in immediate danger because of AI.
So how worried should I be?
Most of the immediate risks have to do with short-term abuse by humans, not robots. There’s ongoing research into using AI to crack people’s passwords, and The Washington Post uncovered someone using an AI-generated photo as a thirst trap, possibly for cash.
One thing to watch: how quickly we see progress in physical robots. The hardware hasn’t advanced as far as the software, and two years ago, OpenAI disbanded its robotics team even after it had gotten a robotic hand to solve a Rubik’s Cube. But now OpenAI is investing in a Norwegian robotics company.