Reasoning Engines and the Truth About AI

John Ball – PAT

About this episode

In this episode, we chat with John Ball, from PAT.ai.
As an expert in language and machine reasoning, he’ll guide us through the present and future of machines in the area of language understanding. 🚀

We will cover:

– the truth about brute-force AI and machine learning
– new machine reasoning methods and symbolic approaches to AI
– current applications of these new systems

Listen to the full episode or read the transcription below. 💪

Interview Transcript

Jordi Torras:
Hello everybody, and welcome to this new episode of the Inbenta Future of Customer Success Podcast. I’m really excited today to have John Ball. He is the founder and CTO of PAT. PAT stands for P-A-T, a company specializing in artificial intelligence. John knows a lot about the field. Right, John?

John Ball:
I do. It’s great to be here.

Jordi Torras:
All right. All right. That’s awesome. So, John, would you introduce yourself? Who are you?

John Ball:
Sure. Yeah, so John Ball, the founder of PAT Inc. I’ve been doing this for a long time. So when you go back to my very beginnings, it was probably in the 1970s, I worked on what became the internet at my father’s office. He was an educational psychologist who worked on Sesame Street, so he did a lot of experiments on me. And that’s why I think I’ve taken such an interest in how people learn and acquire language.

John Ball:
But if you rewind to like 1983, I was a student at Sydney University in Australia. And my professor said the one thing we’ll never be able to do is emulate brains, emulate language. And for the first time in my life, that was a challenge. And I was going to prove her wrong, no matter how long it took. So I spent the entire summer in the library, reading books on cognitive science. So linguistics, psychology, philosophy, neuroscience. And after about 20 years, I pretty much solved the problem, I think, of how brains work. Because when you think of it, a brain is this big sensory machine. We recognize images and sounds, and we can put them all together, and we can suggest what’s going to happen in the future based on our experience.

John Ball:
So that’s what Patom Theory is. And then from that time, I’ve focused on the best application I could find for this brain theory to prove that it works well, and that’s language. Because when we talk, we’re using not just the visual experiences and the auditory experiences, but how all of those combine. And one of my heroes, Alan Turing, proposed the concept of the imitation game, where you can tell if something’s intelligent if you can have a conversation with it.

John Ball:
And all of my peers today seem to be missing the key point that he made, which is you can talk about anything. So when he showed that you can have this game against a machine and the machine is unknown, whether it’s a machine or a person, if you ask questions about chess, it should be able to answer. And that’s not a language challenge, that’s a knowledge challenge about a different domain completely. So that’s sort of my quick introduction to where I am, and I’m happy to elaborate.

Jordi Torras:
Wow. That’s amazing. And if you could just hold off for a second. Did you say Patom? What is that? What does that mean?

John Ball:
Oh, Patom Theory.

Jordi Torras:
Yeah. What is Patom? What is a Patom?

John Ball:
Oh well, when you study what brains can do and what deficits result in people with various kinds of brain damage, it appears that what the brain is doing is matching patterns. You can think of a visual pattern as you can recognize a car or a person, so visual, or you can hear the sound of a person and you recognize who that is. So all of these are patterns, and the atom bit comes from the fact that you have to somehow have only one version of every specific pattern in your life. So all of my patterns for you, Jordi, are now going to be all connected together, and I’m not going to confuse you with my mother or my dog, because they’re completely different entities. I can sort of take you through what that looks like in reality as we go forward.

Jordi Torras:
Okay. So a pattern is an entity, or a symbol as I would sometimes call it, right? Something that is matched as a pattern. And that’s where this Patom word comes from. That’s amazing.

John Ball:
Right. And if we look at what people think brains are doing today, we know at the cellular level you’ve got a neuron and the neuron will activate or it won’t. Then it has this statistical property because depending on the weights that it connects to, the other neurons may or may not fire. Yeah.

John Ball:
Oh. But just think of it another way. Rather than think of the statistical nature, think of the firing nature, which is that a pattern has just been matched. So it’s a pure symbolic machine, the brain. It either matches patterns or it doesn’t, and they go up in this hierarchy. So that’s what Patom Theory leads you to.
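The hierarchy of symbolic matches John describes can be sketched in a few lines of Python. This is an illustrative toy, not Patom Theory itself; the pattern names and the two-level hierarchy are invented for the example.

```python
# Toy sketch of hierarchical symbolic pattern matching: a pattern either
# matches or it doesn't, and matched patterns feed the level above them.
# The patterns and levels here are invented for illustration only.

def match_level(patterns, inputs):
    """Return the names of all patterns whose sub-parts all matched below."""
    matched = set()
    for name, parts in patterns.items():
        if parts <= inputs:          # set subset test: every part present
            matched.add(name)
    return matched

# Level 1: low-level features combine into simple patterns.
level1 = {"wheel": {"circle", "spokes"}, "window": {"glass", "frame"}}
# Level 2: simple patterns combine into composite patterns.
level2 = {"car": {"wheel", "window"}}

features = {"circle", "spokes", "glass", "frame"}
l1 = match_level(level1, features)   # matches 'wheel' and 'window'
l2 = match_level(level2, l1)         # those two together match 'car'
print(l1, l2)
```

Each level is all-or-nothing, so nothing statistical is involved: either the sub-patterns are all there and the pattern fires, or it doesn’t.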

Jordi Torras:
Got it. And you cracked the code. You’re able to put a brain into a computer. That’s what you guys are doing.

John Ball:
The principle is you take that theory and then you apply it to language, which I think is the best use. Because certainly when I started full time in 2006, if you wanted to get visual input, like from a camera or something, you were getting this compressed stream which wasn’t very easy to manipulate and not brain-like. Or you could get a microphone, which is also a compressed stream of sound. So none of those were good applications. But there’s nothing clearer than whether a machine can understand language or not. So that’s what we focused on.

Jordi Torras:
And you’ve been at that for a long time. At Inbenta, we try to solve exactly the same problem. So we’re going to have fun today with this, and I hope that the audience will also appreciate it.

John Ball:
Great. I’m looking forward to it.

Jordi Torras:
Now let’s talk business. You, you are the founder and the CTO of PAT, right? P-A-T.

John Ball:
Right.

Jordi Torras:
And now that you’re talking patterns, I see a pattern here. It’s like Pat is related to this Patom Theory, right? So tell me, what is PAT? What do you guys do there?

John Ball:
Probably the best example is what we’re about to release in the airline industry. So one of the big airline groups came to us with this challenge of making their round-the-world ticketing system work better. And when you think of that, if you use any of the three big consortiums in the world, you find that none of them work particularly well; they’re all quite difficult to get a valid trip booked with. So we looked at that problem, and over a period of six months iterating with the customer, we found that there’s a very good way to use language to augment that, but there was one missing piece. And the missing piece was: how do you actually understand what the rules of this particular ticket are? Right. So if you think of it, airlines are built on, “I’m going to fly from here to there, starting at this time and ending at that time.”

Jordi Torras:
Yep. So it’s easy, right?

Jordi Torras:
It’s easy from the distance.

John Ball:
That’s right. However, these tickets are built on regions. So you can fly to Europe and then you can fly around Europe a couple of times, and then you go to the next region, say North America. And then you can fly backwards and forwards a little bit there, and then you have to keep going in the same direction, which would be counterclockwise, to say Asia, and then you finish up back in the original place. So, that’s how the rules are built. But that’s not a language thing, that’s a reasoning thing. After we got the briefing, we were given six months to solve this problem, and we had this concept of a multimodal environment where you would see the map of the world, you would say things, and as you’re saying them, you would see them drawn onto the map. So we could always check with the user, “How does this look?” And it’s quite impressive when you see it, and it’s out in probably four weeks now.

Jordi Torras:
Oh, wow!

John Ball:
I’ll give you the updates.

Jordi Torras:
Yes, yes. We’re going to be so happy to see that. So you say, “Hey, I’m in Sydney and I would like to fly to San Francisco next weekend.” And then pretty much if I say that, a person would kind of build that map in their head. They would see that arrow. That’s exactly what you’re going to do. So that map is going to be physically there.

John Ball:
Right.

Jordi Torras:
Wow. So basically interpreting what you are saying.

John Ball:
Yes. And the thing that’s going to give you confidence about using AI of the future is knowing that it understood what you said. I’ve used different devices, Alexa, Siri, for example, and you can say something, but you’re never sure if it got all of the words or if it’s left something out and then it does something and you go, “Oh, that’s not what I expected.” But if you can have the two things in sync, what you’re seeing with what you’ve said, and then the machine prompting you, “How does this look?” Then you’re going to be much more confident that it’s doing what you want.

John Ball:
So, one of the things that we implemented was this integration between the multimodality of the system and the user. In fact, we did a little cheat here, because we wanted to have very simple language so that you can interact simply, and we wanted to emulate what people really do when they’re interacting with another person, with an agent. So for example, if you went to a travel agent and said, “I want a round-the-world ticket. What would that cost?” you would probably find the agent would say, “Oh, it’s about $5,000.” Or whatever it is. $12,000 for business class.

John Ball:
That’s really quick. And that’s great, because then you can go and find out from your friends when you can go. You’re not ready to book yet, because you don’t know who’s available. So you get your quote, then find your friends and dates and times, and then you can come back and book. And that’s what we’ve emulated. So in about 10 seconds with this system, you’d give your starting point, and it’ll give you your estimate for the flight without doing anything extra.

John Ball:
Then as you go through, you navigate and say, “I’m going to these countries.” In fact, the numbers were scary. The first time I tried the old system, it took me 45 minutes. I was lost. I got caught in Japan. I couldn’t finish it. So 45 minutes to not succeed, versus a couple of minutes to actually succeed with the new system. And the conversation’s very natural. So it’s pretty exciting, and I think that this next-generation system will be the type of system that we’re going to see lots of in the future for these types of complex interactions.

Jordi Torras:
That might become standard. From what you say, that might become the way that you, as a human, right, as a customer, will prefer to use that system over a human, a guessy sort of guy, right?

John Ball:
Right.

Jordi Torras:
At least for that specific task.

John Ball:
Yeah. And in the industry, it’s called goal-directed conversation. And we’re exploiting this feature of language which is that… It’s called focus. So when I ask you a question, “Where are you leaving from?” You can’t at that point easily say, “My mother’s not very well today.” Because you’re not answering my question. The focus is about where you’re leaving from. So by using this concept of narrow focus and asking questions, the system can ask the user questions. They can answer it. And if they answer all of my questions, I’m ready to give them their price, their ticket, and move on to the next task.

John Ball:
Goal direction is an extremely important concept for the next generation. And it takes away the complexity that a lot of people who do chatbots have had, which is you start with, “How can I help you today?” And you say, “Well, my mother’s not very well. Do you know any doctors?” If you open the questions so wide, you’re not doing what computers are best at, which is answering your question. So the concept of next-generation is to say, “I don’t care what your IT system is today. Put us in front of it, we’re going to look at the questions that you already ask, and then we’re going to ask those questions for you.” So if somebody starts going off topic, we just ask our next question: “When do you want to leave? How long do you want to be away for?” Then we have the answers that we need to feed the IT system.
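The goal-directed, focus-driven loop John describes is essentially slot filling: keep asking the current question until it is answered, ignoring off-topic replies. Here is a minimal Python sketch; the slots, questions, and the scripted user are invented for illustration, not PAT’s actual system.

```python
# Minimal sketch of a goal-directed ("slot-filling") dialog loop.
# A real system would use language understanding to extract the answer;
# here any non-empty reply counts as on-topic, as a stand-in.

SLOTS = ["origin", "destinations", "departure"]
QUESTIONS = {
    "origin": "Where are you leaving from?",
    "destinations": "Which countries are you visiting?",
    "departure": "When do you want to leave?",
}

def run_dialog(answer_fn):
    """Ask each question in turn until every slot is filled."""
    filled = {}
    for slot in SLOTS:
        while slot not in filled:
            reply = answer_fn(QUESTIONS[slot])
            if reply:                # off-topic/empty reply: re-ask
                filled[slot] = reply
    return filled

# Scripted user: the first reply is off-topic, so the question is re-asked.
replies = iter(["", "Sydney", "Japan and France", "next month"])
booking = run_dialog(lambda question: next(replies))
print(booking)
```

The key design point is that the system, not the user, holds the initiative: the narrow focus of each question keeps the conversation on the path to the goal.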

Jordi Torras:
If you answer with the mother question, the system’s going to say, “Yeah, nice. But again, where are you leaving from?”

John Ball:
Exactly. Yeah. So the more human you make it, but still get the job done, the better. And I think if I had the choice, I would prefer the system to get the job done even if it’s not going to be chatty with me about the weather and how I feel and that type of thing. But you can always add that later.

Jordi Torras:
You have your conversational system, language-based and goal-based, so the system will have a goal: get the job done, know when and from where to where you want to go. And there’s going to be feedback: okay, this is the chart on the world map of how your flight is going to look. So you’re pretty sure that there is a handshake and everybody understood.

John Ball:
Right. Right.

Jordi Torras:
That’s amazing.

John Ball:
And even the way an agent would speak to you, just compare it to today’s system. When do you want to leave? And then it puts up a date for this month and you can toggle through December, January, February 23rd. What time? 5:00 PM. That’s not how people operate. What you want to say is, “When do you want to leave?” “Next month.” “Great.”

Jordi Torras:
All right.

John Ball:
Because the specificity, the details of the date don’t matter yet. What you care about is when you want to go. If it’s very specific, great, tell me the date. 5th of February. Okay. Or next month, I don’t care. And then when you check for ticketing, that’s the time you can see what’s available. So we’re focusing on the user and letting the system do the hard work of the requirements.

John Ball:
And the point I was going to make was we teamed up with a company called Elemental Cognition to do the reasoning part, because we’re trying to make a really clear distinction between AI and NLP, that is, language processing and language understanding. So we’re focused on the language understanding bit, which is the interaction, but there is an AI piece, which is what do you want to say next? And Elemental Cognition’s David Ferrucci is the guy that did the Jeopardy! system for IBM back in 2011.

Jordi Torras:
Yeah, the origin of Watson.

John Ball:
So, he’s the guy that started… Right. So he was the founder of that whole concept. And EC had just finished a big reasoning engine about a year ago, when we were working with them on this project to get it all going. So for the AI piece, if you can solve the reasoning part with a tool like the EC tool, and then you’ve got the language understanding bit working right, and you design it for goal direction, then that’s a really good basis for the next generation of systems. Sorry if I’m sort of going all over the place here.

Jordi Torras:
No, no, no. I believe that’s great for me, and for our audience too, to kind of understand the kind of work and framework that you guys are working in. And I’m following you on Twitter, and PAT on Twitter as well. You have great publications out there. And one of the things that I see, and you can see that trend in artificial intelligence today, is that many folks would say, “Well, artificial intelligence is essentially machine learning, and in reality is deep learning, and that’s it.” So just by using deep learning alone, we have it, we cracked the code, and now we can build any intelligence that we want.

Jordi Torras:
I see you guys saying, “Well, does that always work? What about having more symbolic reasoning here, right, and using symbolic AI?” So, what do you think about that? What are the limitations of using what I call brute-force machine learning?

John Ball:
Yeah. Well, I’ve been following the science for a long time and the types of things that deep learning, machine learning does extremely well, you see it on the Amazon website. You’re buying a few things, people who have bought that also buy this. Or you see it on the Disney Channel, people that watch this also watch that. So it’s really good at that correlation between the specifics of what somebody’s just done and then the options of what comes next. But with language, it doesn’t seem to me to work very well because I’ve used Siri and Alexa and I can ask questions and I have a very close to 0% success rate in getting what I want. I find if I read the card for Alexa, which they send me every month, I can read it and it’ll answer. Great, but I can never convert my language into the commands that these systems require.

John Ball:
And the reason for that is because language isn’t about commands. It’s not about a string of words; it’s actually about this interaction of embedded words all over the place. So the type of thing I want to ask my TV is, “Can you play the movie again I watched last week?” and have it interact with me that way. But there’s an embedded element in that sentence that you can’t get with a command. And if you think of it, if I were to say I want to buy something, that something can be any of a very large number of items that a particular company has. But if I then qualify that with another clause, like I want to buy something that I also bought last week, now that’s a very complex issue, and the number of commands you’d have to enter would be exponentially large. And that’s why you don’t see chatbots working like people. And the solution is to do what people do, which is linguistics: the science of language answers those questions, and we’re not getting that today.
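The point about embedded qualifiers can be made concrete: a relative clause like “that I also bought last week” composes onto the main request as a filter, whereas a fixed command vocabulary would need a separate command for every combination. A hypothetical Python sketch, with an invented catalog and purchase history:

```python
# Sketch: an embedded clause ("something that I also bought last week")
# composes as a filter over the main request, instead of being a new
# command. The catalog and history below are invented for illustration.
from datetime import date, timedelta

catalog = {"coffee", "tea", "sugar", "milk"}
history = {"coffee": date.today() - timedelta(days=5),
           "milk": date.today() - timedelta(days=40)}

def bought_last_week(item):
    """The embedded clause, expressed as a reusable predicate."""
    return item in history and (date.today() - history[item]).days <= 7

# "I want to buy something"          -> the whole catalog
# "... that I also bought last week" -> compose the qualifier as a filter
candidates = {item for item in catalog if bought_last_week(item)}
print(candidates)  # {'coffee'}
```

Because qualifiers compose freely, the space of possible requests grows combinatorially, which is exactly why enumerating commands cannot keep up.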

Jordi Torras:
Totally. If you look at Siri, Alexa, these guys, you ask a question and when it gets to a certain complexity, at least that’s what Alexa is telling me, it’s like, “Hey Jordi, this is what I found about that on the internet.” So basically it’s using a traditional web-based search engine, like Google or whatever. That’s what happens in a remarkably big percentage of user questions, right?

John Ball:
Right. Yes. Which is annoying if you’re trying to navigate somewhere and that’s its default escape.

Jordi Torras:
Exactly. It’s going to say, “Okay, are you telling me to call mom, or are you telling me to start Spotify? If it’s not a command that I can react to, I want to go to Google, because really I don’t know what to do with that.”

John Ball:
Yeah. And look, search has gone extremely well over the last 20 years. But the current tech that we see in chatbot technology is still search engine technology applied to a different problem. And I don’t like it because it’s not how language works.

Jordi Torras:
No.

John Ball:
And there’s no really good way for the tech giants that are developing these things to pivot from that, because they almost have to start again. They’ve put so much research into this one particular strategy that’s not getting results.

Jordi Torras:
Absolutely.

John Ball:
I compare it to the epicycle model of astronomy. You keep tinkering around with your model, where the sun is going around the earth and you keep modeling this, but it doesn’t work. And then you put epicycles and epicycles and epicycles, and it still doesn’t work. And you then try and put more. So, to me, that’s what’s happening with some of these modern systems. They’re tinkering around with it, it’s working a little bit better.

Jordi Torras:
Trying to add little things, fixing a problem without realizing that thinking out of the box would be the right way. A question for you. What is for you symbolic AI?

John Ball:
Well, for me, symbolic AI aligns with the brain theory that we recognize specific things with excruciating accuracy. And if you look at what’s being discussed, there are things like causality, where certain things result from others, and language is infused in that type of thing. So language is infused in the symbolic concept. The alternative is statistical, or brute-force, systems. That doesn’t have the accuracy and precision that you need for language.

Jordi Torras:
Absolutely, yeah.

John Ball:
Again I’m focused on the language piece, not the whole AI piece.

Jordi Torras:
And out there, you see some news in the press that says this or that algorithm now has the understanding capacity of a high school student. What do you think about that?

John Ball:
Look, part of it is the media trying to create an interesting story. It’s great to think that machines are much smarter than people, but the reality is… In fact, there was a great headline that said, “Maybe we can’t build super-intelligent machines, but let’s let the machines design it.” Well, the machines don’t design anything. It’s people designing the machines to do something, not the other way around. And the machines certainly aren’t intelligent in the way you and I are intelligent. They don’t have language, they don’t have memories as we have.

John Ball:
Yeah. So, when I look at some of the other headlines, like GPT-3 or some new version of a language model that now has 12 billion parameters, you think, well, what’s a parameter, and who cares? And then if you compare it to what language requires, it’s not 10 to the 12 or so combinations; language is infinite. And I did an analysis of motion in English, as in “I moved from A to B towards C.” When you do the analysis of that, I got 10 to the 3000 combinations. So if you are telling me that 10 to the 12 is this amazing result, and I’m then telling you that you need 10 to the 3000 just to do motion, there’s something fundamentally broken in the way people are presenting their findings. Because they haven’t solved the problem at all. They’ve just solved this very small thing.
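The scale gap John gestures at comes from the fact that independent choices multiply, so exponents add up quickly. The arithmetic below uses invented placeholder counts, not his actual analysis; it only illustrates how a figure like 10 to the 3000 can arise.

```python
# Rough arithmetic on the scale gap described above. The slot and filler
# counts are invented placeholders, not John's actual analysis; the point
# is that independent choices multiply, so exponents add up quickly.

parameters = 10**12          # "12 billion parameters", rounded up

# A toy motion frame: suppose 300 independent slots (mover, source, goal,
# path, manner, tense, ...) each had about 10**10 possible fillers.
slots, fillers_per_slot = 300, 10**10
combinations = fillers_per_slot ** slots   # = 10**3000

# Even dividing out every parameter leaves an astronomical remainder.
print(combinations // parameters > 10**2000)  # True
```

Whatever the real counts are, the structure of the argument is the same: combination spaces grow exponentially, while parameter counts grow only linearly with model size.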

Jordi Torras:
Exactly. And probably you know Gary Marcus, right?

John Ball:
Sure. Yeah, Gary.

Jordi Torras:
And he has this brilliant book, Rebooting AI. Great book. Great theories. And I think he looks at this algorithm, GPT-3 and all that, and it’s like ruining the party all the time. It’s like, “No, it’s not like a high school intelligent guy. It’s not even understanding anything, right?” Because they miss this symbolic component, or this Patom capacity, to understand what we’re saying.

John Ball:
Yeah. And it’s kind of annoying that some of the luminaries in the field are exaggerating the capability of these systems. Geoff Hinton, for example, said that by now there would be no need for radiologists, because it could all be automated. And it wasn’t true, and it’s not been true. And yet a lot of people were confused by that for five or ten years, because they were waiting for this machine to take over. And I think recently Andrew Ng said, “Well, it was interesting, but we thought it would work because we did it all at this one particular Stanford medical facility. But then we went to another one nearby and it didn’t work at all.” You know, who would’ve known? And my point for scientists is: don’t make an outrageous claim if you haven’t actually done all of that first, because it’s not good for the rest of the science. So many people are following these guys, and so much oxygen is taken up by that type of claim.

Jordi Torras:
Absolutely. One of my favorite movies is 2001: A Space Odyssey, right?

John Ball:
Right. Yes.

Jordi Torras:
Which was released in 1968 or something like that. And you see this computer, HAL 9000, which is very intelligent and, admittedly, a bad guy. And it is able to understand humans, talk to them, read lips, play chess, and drive this big starship and whatnot. And I use this example because that’s how in 1968 folks imagined artificial intelligence would be in 2001. And what we had in 2001 was Microsoft Clippy.

John Ball:
Yes, the paper clip.

Jordi Torras:
Like, that was it. So, on one hand, there’s a speed-up, right? We have exponential innovation and we are all thrilled by that. But at the same time, when it comes to AI, there’s sometimes been big over-promising. So between these two visions, making sure we’re not over-promising while recognizing there is obvious progress, what is your vision on artificial intelligence? Where are we? What are we going to get, and when?

John Ball:
Okay, well, committing to a date is always difficult, because you basically need to have a lot of people following and then implementing innovations. But what it will look like in the future is the things which we’ve already proven with Patom Theory actually work. We’ve shown in the lab that when you’re recognizing sentences, we can keep track of their meaning independently of the language, and we can then take that meaning and answer questions about it. So a lot of the challenges have already been solved. And in theory, the interesting part with language is that if you do what people were talking about in the ’50s and ’60s, which is you start to store books in this meaning-based representation, and then you use a language generator for the language you’re interested in to convert it to your language, you suddenly make the world a much smaller place, because all of the world’s information is stored in a common way and everybody has access to the same thing.

John Ball:
Google was talking about that in 2000. They’ve got a translator, but today, nobody can rely on that translation to be accurate. So until you go and fix the linguistic model that underpins the Google system, then you’re always going to have problems. And in terms of why we haven’t progressed, if you go back to the 1960s, people were very focused on, you need to recognize the grammar of the language, and the grammar is built on parts of speech. And the thing that we’ve shown is that you don’t need parts of speech at all, because that’s highly ambiguous. Single sentences can be unsolvable with parts of speech. But if you use the meaning of the words in a particular language, you can have the meaning always being the same for all of the world’s languages.
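The meaning-based storage idea above can be sketched as one language-independent frame plus per-language generators. This is a deliberately naive toy; the frame shape and the templates are invented, and real language generation is vastly harder than string templates.

```python
# Sketch of "store the meaning once, generate any language from it".
# The frame fields and the two template generators are invented for
# illustration; they are not a real interlingua or a real generator.

frame = {"event": "travel", "agent": "I", "goal": "Sydney"}

generators = {
    "en": lambda f: f"{f['agent']} travel to {f['goal']}.",
    "es": lambda f: f"Viajo a {f['goal']}.",
}

def render(frame, lang):
    """Generate surface text in the requested language from one frame."""
    return generators[lang](frame)

print(render(frame, "en"))  # I travel to Sydney.
print(render(frame, "es"))  # Viajo a Sydney.
```

The design point is the one John makes: the stored representation never changes, and only the thin generation layer is language-specific.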

John Ball:
Role and Reference Grammar, which is the framework I came across in a book in 2012 (that’s Robert Van Valin’s framework, in a book by one of his students), actually answers all these questions about how the world’s languages work. So I was worried about how English works, and these guys in this RRG community can explain how all of these different languages work and yet map to the same thing. And in fact, I don’t know if you can see the book; it’s not coming across. Anyway, it’s Emma Pavey’s book, called The Structure of Language. And that was the book I found in a bookshop.

Jordi Torras:
I think we can’t see the book as the AI of Zoom conferencing is saying, “No, that’s a background.” That’s an example of when you put the book in front of the camera and the AI gets all confused and blurs it. But that’s a great book.

John Ball:
Yes. So I found it amazing that at that point I’d been studying for like 25 years. And then I found this book, which answered all the questions that I’d been thinking I had to solve myself, and they’re hard problems. And yet we’re not teaching the modern generation of students how meaning works and how conversations work, which these guys understand in depth. Instead, we teach them maybe statistics, or that you don’t even need to understand the meaning of words, because you can use this… whatever the mumbo-jumbo words are to discuss what the language is about. And it’s a shame.

Jordi Torras:
Yes.

John Ball:
And so getting people to learn what they need to do in order to manipulate language will leave us much better off for the next generation. And that’s why I don’t pick a particular date for big breakthroughs, but we already have things working that can just be extended.

Jordi Torras:
Fair enough. Every time I am asked the same question, I never give dates either, because I don’t know. So the use case that you told us about, this ticketing system for airlines, is amazing. And I have a question for you: how do you think AI and these advances in pattern recognition and Patom Theory are going to improve customer service in the future?

John Ball:
Well, today customer service tends to be this tiered model where you’ll call up, they’ll try to have some automation to help you, maybe there’s a chatbot to help you. When that doesn’t help, you go to a person, and then potentially you can go to an expert, and that’s the hierarchy. I think we can answer a lot of questions more quickly before we go to the humans. There’s another thing which we also learned through this ticketing system, which is that people are good at the language part, but they’re not good at the systems part. So if you can automate the system, with this reasoning engine I described, then by simply hitting the reasoning engine with, “Here’s the information I’ve got,” it can give you an answer. And so that’s better than a person. And interacting through language to say what you want is better than having to wait for a person to become available.

John Ball:
So we think that’s going to be a big improvement in customer service. I’ll use my buzzword again: next-generation systems, where you simply front-end an existing IT system with language and multimodality, are going to be a good step forward.

Jordi Torras:
Totally. Absolutely. John, I could be talking with you about that for hours, but there are some time limitations on this podcast. But our audience will want to see how they can get in touch with you. Is there any way they can contact you, they can learn about you? What would be the best way for them to get in touch?

John Ball:
Well, a couple of ways. Probably the easiest way is to contact our CEO, [email protected]. I probably shouldn’t give her name across the airwaves like this, but she’s very responsive.

Jordi Torras:
You can say as many names as you want.

John Ball:
The other thing is we’ve got a Discord server, so people can interact about the technology and the solutions that we’ve built. And we’ve got a website, pat.ai. So you can also go there and initiate contact through that. So yeah, we’re very happy to talk to people about what we’re working on, so feel free to contact us.

Jordi Torras:
Awesome. So listen guys, the audience, pat.ai. You can go there. You will see the amazing things that these guys are building, the work of John with these amazing technologies in which we are all working to make the world a better place, or at least where machines are more intelligent. And they need it. I can tell you that. All right John, thank you so much for being here with us today.

John Ball:
Thank you, Jordi. It’s been a pleasure. It’s nice to talk to somebody that knows so much.

Jordi Torras:
Oh no. I’m just learning. You are the expert and thank you so much. I have learned so much today. It’s been amazing. All right. Thank you so much.

John Ball:
My pleasure. Hope to talk to you again.

Jordi Torras:
Absolutely. Thank you.

Thanks so much for tuning in. This podcast was brought to you by Inbenta. Inbenta symbolic AI implements natural language processing that requires no training data with Inbenta’s extensive lexicon and patented algorithms. Check out this robust customer interaction platform for your AI needs, from chatbots to search to knowledge centers and messenger platforms. Just go to our website to request a demo at inbenta.com. That’s I-N-B-E-N-T-A.com and if you liked what you heard today, please be sure to subscribe to this podcast and leave us a review. Thank you.
