AI has arrived in restaurants. But what is it, exactly?

Questions abound about this new-to-the-industry technology. Here’s a look at what we know about artificial intelligence, and what we don’t.
AI has a seat at the table in restaurants. | Image by Nico Heins and Midjourney

If you’ve visited a restaurant recently, it’s quite possible that you had a brush with artificial intelligence. 

Perhaps it was in the drive-thru, where a robotic voice greeted you and took your order. Or maybe it was while ordering online, where an AI-powered algorithm tried to get you to add a side of fries with your burger.

There’s also a chance AI was working in the background to help the restaurant stay on top of its inventory or create a Facebook post promoting a new special.

Whatever the case may be, AI has officially arrived in the restaurant industry. According to a report published this year by tech supplier TouchBistro, 9 out of 10 full-service operators said they were using some form of AI. And a survey by Square found that 98% of restaurants believe AI can help ease their labor woes.

It has restaurants’ attention mainly for its ability to automate tasks typically done by humans at a time when costs are high and margins are thin. Meanwhile, tech suppliers have rushed to capitalize on that interest by rolling out new AI-powered features.

But non-experts likely still have a lot of questions about this new (to restaurants) technology. What is AI? Why is it suddenly everywhere? And is it really the game-changer it’s said to be? 

It would be impossible to tackle everything there is to know about AI in a single article. But here’s a look at some of the most basic questions, with a few restaurant-specific examples sprinkled in.

What is artificial intelligence?

The textbook definition of AI is something along the lines of this, from IBM: “Technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.”

Put more simply, "Can machines think like human beings and do things as humans do?" said Subodha Kumar, an AI expert and professor at Temple University's Fox School of Business and Management.

And even more simply: “It’s automated decision-making,” said Jeremy Kedziora, endowed chair of artificial intelligence at the Milwaukee School of Engineering. “Every form of AI of which I am aware can be cast in that way.”

Indeed, the concept predates modern machines by hundreds of years. One of the first examples goes back to 19th century mathematician and astronomer Carl Friedrich Gauss, who developed an equation that could predict the movement of planets.

The term artificial intelligence would not be coined until 1956, by Dartmouth College mathematics professor John McCarthy, who led a two-month brainstorming session focused on developing machines that could use language, solve problems and learn.

That got scientists thinking seriously about AI. But it would take decades of technological progress to make it the sophisticated technology it is today. 

The initial roadblocks were a lack of data with which to train machines as well as a lack of computing power. But over the years, the widespread digitization of information combined with quantum leaps in computer processing capacity—particularly in the 1980s and 2010s—have helped bring many of McCarthy’s ideas to life.

Today, artificial intelligence is all around us. The Roomba robot vacuum cleaner, Apple’s Siri voice assistant and TikTok’s algorithm are all powered by AI.

What is machine learning?

These days, you can’t really talk about AI without mentioning machine learning (ML). It’s the way most AI is trained, and involves feeding a computer reams of data that essentially teaches the AI how to learn, enabling it to do things on its own. 

Though the terms are often used together, AI and ML are not interchangeable: It’s possible to create an AI program without ML. For instance, Kumar said, you could design a system that plays chess by coding every possible chess move. In that case, “the machine has not learned anything. You have coded every option,” Kumar said.

ML allows the AI to make independent decisions based on the patterns it picks up in the data.

Kedziora gave the example of a student who created a program that could play Super Mario. The student let the AI play tens of thousands of rounds of the video game and “rewarded” it for making the right moves, a form of ML called reinforcement learning.

At the beginning, the AI would make Mario stand still or run the wrong way. But as it gathered more information through the reinforcement process, “by the end of two weeks, Mario was really, really capable,” Kedziora said.
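The reward loop Kedziora describes can be sketched in a few lines of code. The toy below is not the student's Mario program; it is an invented, stripped-down example of the same technique, tabular Q-learning, in which an agent in a six-square corridor is rewarded for reaching the right end and gradually learns that "go right" is the best move everywhere.

```python
import random

# Toy reinforcement learning, loosely echoing the Super Mario example:
# states 0..5, reaching state 5 pays a reward of 1. All names and
# numbers here are invented for illustration.
N_STATES = 6
ACTIONS = [-1, +1]          # step left, step right
GOAL = N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Explore sometimes; otherwise exploit the best-known move
            a = rng.randrange(2) if rng.random() < epsilon else q[s].index(max(q[s]))
            s2 = min(max(s + ACTIONS[a], 0), GOAL)
            r = 1.0 if s2 == GOAL else 0.0      # the "reward" for right moves
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# The learned policy: the index of the best action in each state
policy = [row.index(max(row)) for row in q]
print(policy)
```

Early in training the agent wanders both directions, just as the AI first made Mario stand still or run the wrong way; after a few hundred episodes, "right" wins in every non-goal state.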

As computer power has increased, so has the capacity of machine learning. Today, a sophisticated form of ML called deep learning uses layered networks of nodes, known as neural networks, to process information in a way that approximates the human brain.

Approximates is the key word there. AI cannot yet fully replicate the human brain, because our brains are still a mystery even to us.

“We do not completely understand everything about why our brain does what it does,” Kumar said. “And since we do not have a complete understanding of that, we cannot write a program for that.”
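To picture what "layers of nodes" means in practice, here is a minimal forward pass through a two-layer network. The weights are made up for illustration; in real deep learning they are learned from data, across billions of parameters rather than a handful.

```python
import math

# A minimal feed-forward pass: each layer of "nodes" transforms the
# previous layer's output. The weights below are invented, not learned.
def layer(inputs, weights, biases):
    # One node per (weight row, bias): weighted sum, then a nonlinearity
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]                                      # input features
h = layer(x, [[0.8, -0.2], [0.3, 0.9]], [0.1, 0.0])  # hidden layer, 2 nodes
y = layer(h, [[1.2, -0.7]], [0.05])                  # output layer, 1 node
print(round(y[0], 3))
```

Stacking many such layers, and letting training adjust the weights, is what gives deep learning its loose resemblance to networks of neurons.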

At the same time, we also don’t always know why AI makes the decisions it does. This is a problem called explainability.

“It’s really actually deeply troubling,” Kedziora said, particularly when AI is used in high-stakes fields like healthcare or finance. It raises the question of who is liable if the AI makes the wrong decision.

Most AI models have low explainability, Kumar said, and a lot of AI research is now focused on this problem.

Case study #1: AI speaking

Restaurant tech startup Slang.ai developed an AI bot that can answer phone calls and take reservations for restaurants. 

It used machine learning to train the system on a database of more than 4 million restaurant phone calls. That has allowed it to understand the most common questions that restaurants get, such as “When do you close?” or “Do you allow pets?”

“You can phrase questions millions of ways and it will know what you’re trying to say,” said co-founder and CEO Alex Sambvani.

But Slang’s training process is never really finished. The company has a whole team that monitors transcripts of conversations to ensure the AI is performing well. “If there are opportunities to improve, we retrain,” Sambvani said.

It’s rare that Slang gets a question it doesn’t recognize. But it does happen. In those cases, the team manually adds the question to the training data, along with dozens of alternative ways of asking that question. 

Two recent examples were questions about corkage fees and a fee for bringing in your own cake. “Those are two questions that aren’t super common, but do come up every now and then,” Sambvani said. They are now part of Slang’s core database.
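Slang has not published how its model works, but the "millions of phrasings, one question" idea can be pictured with a crude stand-in: match a caller's words against example phrasings for each known question and pick the closest. The intents and phrasings below are invented; the real system is far more sophisticated.

```python
# Toy intent matching via word overlap. This is an illustrative
# stand-in, not Slang.ai's actual model.
INTENTS = {
    "hours": ["when do you close", "what time are you open", "how late are you open"],
    "pets": ["do you allow pets", "can i bring my dog", "is the patio dog friendly"],
    "corkage": ["do you charge a corkage fee", "can we bring our own wine"],
}

def classify(utterance):
    words = set(utterance.lower().split())
    def score(intent):
        # Best word-overlap (Jaccard) against any example phrasing
        return max(len(words & set(p.split())) / len(words | set(p.split()))
                   for p in INTENTS[intent])
    return max(INTENTS, key=score)

print(classify("hey, do you guys allow dogs?"))
```

In this framing, "retraining" for a rare question like a cake fee amounts to adding a new entry to the dictionary with dozens of alternative phrasings, which mirrors what Slang's team does manually at much larger scale.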

For Slang, the heavy AI lifting comes in the understanding of the question. How the system responds to those questions is more rule-based. 

“You could have a completely flexible generated response,” Sambvani said, “but it gets risky.” 

That’s because AI tends to “hallucinate,” or produce nonsense, when it encounters a situation it hasn’t been trained on.

One thing AI has not yet mastered is the art of asking follow-up questions, Kumar noted. “If it doesn’t understand, it asks questions that are nonsense or keeps asking the same question.”

Why is AI suddenly so popular?

We’ve already established that AI is not a new technology. But it is a hot technology, and not just in restaurants.

A good proxy for AI’s ascendance has been Nvidia, a company that makes microchips and software that power AI. Its stock price is up more than 1,750% over the past five years, making it one of the best-performing stocks over that time.

But according to Kedziora, there was not necessarily a technological breakthrough that put AI on the map in recent years. Rather, there has been continuous and rapid improvement, plus a little something called ChatGPT. 

ChatGPT is an AI chatbot developed by OpenAI. It’s a form of AI known as a large language model (LLM) that can comprehend and generate text. LLMs have been around since as early as 2016, but OpenAI was the first to put one in a consumer-friendly package. 

“What led to the mainstreaming in December 2022 was that OpenAI wrapped their LLM in this really compelling, easy-to-use platform that you could access,” Kedziora said. “That had a lot to do with AI’s splash.”

LLMs, by the way, are behind much of the consumer-facing AI used by restaurants, such as drive-thru voicebots and programs that can generate marketing copy or social media posts. Some are even built on the same code that powers ChatGPT.

Case study #2: Scanning spec sheets

Eleven36, an online seller of restaurant equipment based in Waukesha, Wisconsin, was looking for a way to automate the time-consuming process of putting product information on its website.

That data, such as voltage, dimensions and materials, is usually found in product spec sheets. 

“We find a lot of times getting data directly off the spec sheet is your closest source of truth for the product,” said Cody Bell, Eleven36’s product data manager. However, getting the data from the sheets onto the website has to be done manually. 

So the company turned to students in Kedziora’s AI practicum at the Milwaukee School of Engineering to develop a program that could automate the work. 

The group of five juniors created a model that’s designed to scan equipment spec sheets, grab the important data and put it into a spreadsheet.

It is built on a large language model called DistilBERT that uses the context before and after a word to determine if that word is one of the bits of data it’s supposed to be grabbing.

The model is trained on 32 spec sheets at a time, and it reads each of them 100 times. The more sheets the model reads, the better it gets. As of early April, it was pulling out 63% of the labels it was supposed to.

That’s pretty impressive when compared to the baseline, which would be a completely random output, Kedziora said. It’s still not as accurate as a human, but it’s much faster: It takes just 1 second for the AI to process one spec sheet.

The problem the group faced was simple: It needed more data. The AI was being trained on about 200 documents; the group wanted to double that to 400. 

“Variety is good in this case,” Kedziora said. “You want it to learn how to find a concept versus memorize a document.”

The final product will be a body of code that can manage the whole project, from reading the spec sheets to creating an output that can be uploaded to Eleven36’s website.
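The end-to-end task, reading a spec sheet, grabbing labeled fields and writing them out as spreadsheet data, can be sketched with a non-learning baseline. The students' actual model is a fine-tuned DistilBERT token classifier; the regex rules and sample sheet below are simplified stand-ins invented for illustration.

```python
import csv, io, re

# A non-learning baseline for the spec-sheet task: pull labeled fields
# out of free text and write them to a spreadsheet row. The real
# project used a trained language model; these patterns are invented.
FIELDS = {
    "voltage":  re.compile(r"(\d+)\s*V\b", re.I),
    "width_in": re.compile(r"width[:\s]+([\d.]+)", re.I),
    "material": re.compile(r"material[:\s]+(\w+)", re.I),
}

def extract(sheet_text):
    row = {}
    for name, pattern in FIELDS.items():
        m = pattern.search(sheet_text)
        row[name] = m.group(1) if m else ""   # blank when a label is missed
    return row

sheet = "Stainless fryer. Material: steel. Width: 15.5 in. Power: 208 V."
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=FIELDS)
writer.writeheader()
writer.writerow(extract(sheet))
print(out.getvalue())
```

The advantage of the ML approach over hand-written rules like these is exactly what Kedziora describes: a trained model learns to find a concept from context, so it generalizes to spec sheets whose wording the rules' author never anticipated.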

The company said such a program would help it scale faster and provide restaurants with a better shopping experience on its site.

“The more information that we can give the customer to make a buying decision, the more they’re gonna put that trust in us,” Bell said.

How effective is AI?

It depends what type of AI you’re talking about.

For restaurants, there are two general applications of the technology. It can be used internally to help manage things like inventory and scheduling. And it can be used to interact with consumers by taking their orders or making recommendations.

“The first one is in pretty good shape,” Kumar said. “The second one is still a work in progress.”

The first application is a form of predictive AI, which uses historical data to predict future events. In a restaurant context, a predictive AI tool could look at an operation’s sales data and estimate how much of a certain SKU to order or how many employees to staff on a given day.

These programs can typically operate on their own, without human oversight. And they require a less robust machine learning model than generative AI like ChatGPT: think hundreds of parameters rather than billions.

By comparison, taking customers’ orders in the drive-thru is a far more complex job with an almost infinite number of possible scenarios that the AI has to respond to.

At White Castle, which is using AI to take drive-thru orders at dozens of locations, “they discovered that in order to be able to support the ordering of their No. 1, there were well over 100 ways in which a customer can actually order that item,” said Ben Bellettini, SVP of sales for restaurants at SoundHound, White Castle’s AI voice supplier.

SoundHound solved part of that problem by using machine learning to train its AI on White Castle’s menu and the many ways a customer might verbalize an order. But it also incorporates a “secret sauce” it calls Deep Meaning Understanding that enhances the AI’s understanding of what the customer wants.

“The customer may order a burger with cheese and then order chicken tenders,” Bellettini said. “Our system is smart enough to know to add that cheese modifier to the burger.” That has helped SoundHound achieve an order accuracy rate of 90%, he said.
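SoundHound has not disclosed how Deep Meaning Understanding works internally, but the cheese-on-the-burger behavior Bellettini describes can be pictured as modifier attachment: walk back through the order to the most recent item the modifier can apply to. Everything below, the mini menu, the modifier rules, the parser, is an invented sketch, not SoundHound's system.

```python
# A toy order parser illustrating modifier attachment: "cheese" skips
# past the chicken tenders and lands on the burger. Invented example.
MENU = {"burger", "chicken tenders", "fries"}
MODIFIERS = {"cheese": {"burger"}, "ketchup": {"burger", "fries"}}

def parse(tokens):
    order = []   # list of (item, [modifiers])
    for tok in tokens:
        if tok in MENU:
            order.append((tok, []))
        elif tok in MODIFIERS:
            # Walk back to the newest item this modifier can apply to
            for item, mods in reversed(order):
                if item in MODIFIERS[tok]:
                    mods.append(tok)
                    break
    return order

print(parse(["burger", "chicken tenders", "cheese"]))
```

The hard part in production is everything upstream of this step: turning noisy drive-thru speech into clean tokens, across the 100-plus ways customers phrase a single combo.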

SoundHound’s AI is completely autonomous, Bellettini said. But that is not always the case for generative AI programs. Many rely on what is called a human-in-the-loop model that employs humans to monitor responses and take over when necessary.

“I think the gold standard is to have a totally automated system in which the AI does all the triage,” Kedziora said. “In practice, it’s not uncommon to have humans reviewing AI output regularly.”

Though that human involvement is costly, it is often necessary to ensure accuracy. And it can make the AI stronger in the long run.

“Until it is very good, you have to use a human,” Kumar said. “Companies which are moving too fast and trying not to use a human at all will suffer in the short term.”

In general, generative AI models still have a long way to go before most restaurants can depend on them, said Rich Shank, VP of innovation at researcher Technomic. 

“Big players can experiment with them and have success,” he said during a presentation at the Restaurant Leadership Conference last month. “But they’re not quite mature enough for everyone in the marketplace just yet.”
