Artificial intelligence, or AI, is one of the most important technologies of our time—yet some feel its potential is just beginning to be realized. In this episode, AI expert and author of Own the AI Revolution Neil Sahota defines terms with Tony, discusses the enablers of AI technology and platforms, and delves into the recent open letter signed by tech leaders who are demanding a pause in AI research.
Artificial Intelligence: Definitions, Possibilities, and Limits
Tony Roth, Chief Investment Officer
Neil Sahota, CEO ACSILabs
Tony Roth: This is Tony Roth, Chief Investment Officer of Wilmington Trust, and you are listening to our latest episode of Capital Considerations. Today we're going to start a conversation with Neil Sahota. Neil is an IBM Master Inventor and an expert on artificial intelligence. We're going to talk about what AI is, how it's defined, and what its possibilities are.
And then we're going to talk about what some of the challenges and limitations of AI might be, at least in the foreseeable future. So let me give Neil a proper introduction here in our conversation. Neil is a United Nations AI advisor. He is co-founder of the UN's AI for Good initiative. He's also the CEO of a company called ACSILabs, where he focuses on cognitive research and technology development to improve human thinking—so, I'm definitely going to try to enroll in that company—and on problem solving and decision making for organizations. He has 20-plus years of business experience, he works to inspire clients and business partners to foster innovation and develop next-generation products powered by AI, and he's the author of a bestselling book, Own the AI Revolution. So, thank you so much, Neil, for joining us today.
Neil Sahota: I'm really excited to be here. Tony, thanks for having me on.
Tony Roth: Yeah, it's going to be a cool conversation. I want to remind everybody that any companies we refer to are not being endorsed or dissed, if you will, whether private or public.
There's so much to talk about, and it's obviously a hot topic for probably two reasons. One is that, with OpenAI and the release of ChatGPT, there's a big buzz around AI, and I have two teenage daughters who like talking about it because of the possibilities, of course, to elevate their perceived intellectual abilities, and those of their peers, in an academic context.
But the other reason that I think it's so important to talk about now is that, Neil, you may not have had a chance to read it, but we've been in the marketplace with a lot of analysis on the economic challenges that we're confronting as a country and as a world, but particularly in the U.S. from an inflation standpoint; our capital markets forecast focused on it.
And at its core we think it's a labor problem. We think that there's a deficit of workers due to a falloff in immigration, due to early retirement during the pandemic, and such. And AI is maybe one of the few things, maybe one of the only things, that may be part of the solution to try to fill the gap where we don't have those workers.
And I know that it could do harm to certain employees and maybe good for the economy, but that's part of what we'll talk about and figure out. I think the starting point is, and it's funny, we asked ChatGPT what questions we should ask for this episode to understand AI. And I have to say that the answers were a little bit anodyne. They were a little bit boring. So, we can't really say that the outline is driven by AI, because it wasn't the most fascinating set of questions. But let's start with a question that we thought of ourselves, obviously, which is: what is AI? When you think about AI, what is it?
Neil Sahota: Well, Tony, the simplest definition is basically, AI is a computer system that can do tasks that require some level of cognition.
So, if you think about the blue-collar versus white-collar types of work, we're used to machines automating blue-collar work, repetitive, consistent tasks. With AI, it can actually do some white-collar work. So think about processing an insurance claim, reviewing patient symptoms, reading a legal complaint. These are all things AI can do, even though they require a level of thinking.
Tony Roth: Okay, so I'm going to be a little bit persnickety, because not only am I a philosopher, a philosophy major at Brown who specialized in mind-body philosophy, but I'm also a lawyer, so I'm going to press a little bit on some of the words that you used.
And for me, cognition is a big word. I want to be really clear about this, because in defining what AI is, it's equally important, I think, to try to set the parameters around what it isn't. And I want to make sure I'm understanding this, because for me, cognition has meant, at least heretofore, a biological phenomenon that creates some sense of consciousness.
And maybe there's not self-consciousness, but there's at least perception. And are you saying that AI needs to have that, or are you saying that AI needs to have something that resembles that or replicates the capabilities of that?
Neil Sahota: It's more that it resembles and replicates that. If you think about the work that knowledge workers do, they have to apply some sort of thought.
They have to do some sort of analysis. It's not just: I'm executing steps 1, 2, 3, 4. I've got to figure out what's the right path. It might be that I go from step one to step eight because of this particular situation or this particular data. That's really what AI is trying to do: assess what's going on based on its training, and then perform the right activities for that task.
Tony Roth: AI, then, I'm going to define for us as a process or capability, let's call it computer driven or generated or enabled, that can do something that we're familiar with as typically having come from cognition, from people doing it rather than inanimate objects doing it.
That's what AI essentially is, and it creates the possibility, especially as you get into really powerful computers, of doing things really fast and really well that maybe people have historically done. But it doesn't involve doing it with cognition.
Neil Sahota: That's a pretty good summarization there, Tony.
Tony Roth: What's your take on the arc of AI, where we've come from, where we're going, and how important this moment is? Is this a real milestone, if you will, with ChatGPT and the overall ecosystem? Or is this just sort of a linear process where computers get faster, and as computers get faster, this stuff that resembles cognition just incrementally gets better?
What's going on? Are we about to go through a step change in our lives because of these capabilities? Take us through what you think is happening.
Neil Sahota: This is an inflection point. That's the reality here. We did the Jeopardy! challenge over 12 years ago, but a lot of the focus has always been at the enterprise level.
We were talking about generative AI, like ChatGPT, but that started nine years ago. I was working with guys like Alex Da Kid to use it to create music and lyrics. People were using it to design urban transportation. I hate to say it, but some bad actors are using it to create deepfakes.
We've kind of reached this point now where there's been enough training at a broad enough level that you look at something like ChatGPT and we've taken a big step toward the holy grail of AI, which is the personal assistant or concierge. Now we have a little AI tool that anyone can use for a lot of simple, routine, everyday things, like "hey, summarize the news for me" or "help me write my resume."
Tony Roth: We've had Siri for a long time, and maybe Siri, or Alexa, has always been AI. But I've been able to say to Siri for years: hey, tell me what the weather's going to be tomorrow. It would look up the weather forecast for me, and it was just essentially a summary. How is this different from that, or is it?
Neil Sahota: So you use a great phrase right there, Tony, “look up.”
And that's what Siri and Alexa have traditionally been. They're essentially search engines. You can query them by voice, but they're going and searching for that specific answer, and if it doesn't exist, then they don't know. The difference here is, if you look at something like ChatGPT, you could teach it your work history and your skillsets and say: hey, here's a job description.
Write me a resume and a cover letter for this job, and it's going to actually go and produce that for you. It's not going to be perfect. You've got to do some polishing on it, but it's actually going to produce something for you. And that's the key difference. It's not looking up information; it's actually trying to piece together things that we may not actually fully know yet.
Tony Roth: And is that why it's called generative AI? In other words, if you're just looking something up, all you're doing is saying: hey, I found something, here it is. If you're creating something that doesn't exist, that's a generative process. Is that pretty much how that term plays a role there?
Neil Sahota: That's spot on, Tony, and we use "generative" specifically for that reason. It's not actual creation. Generative AI like ChatGPT or DALL-E 2 isn't just creating something brand new out of thin air. It has to be prompted, it has to get direction from people, because AI can only do what we've taught it.
We have not figured out a way to teach machines creativity or imagination.
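To make the lookup-versus-generation distinction concrete, here is a minimal, hypothetical sketch in Python. The FAQ table and bigram probabilities are invented for illustration; real assistants and real language models are vastly larger, but the contrast is the same: a lookup either finds a stored answer or fails, while a generative model samples each next word from probabilities it learned during training.

```python
import random

# Hypothetical lookup assistant: the query either matches a stored
# answer or the assistant simply doesn't know.
FAQ = {"weather tomorrow": "Partly cloudy with a high of 72."}

def look_up(query):
    return FAQ.get(query, "Sorry, I don't know that one.")

# Toy generative model: a bigram table of next-word probabilities.
# Real systems learn these from enormous corpora with neural networks,
# but the principle is the same: generate by sampling, not retrieving.
BIGRAMS = {
    "<start>": {"the": 0.7, "expect": 0.3},
    "the": {"forecast": 0.6, "weather": 0.4},
    "forecast": {"calls": 1.0},
    "calls": {"for": 1.0},
    "for": {"sun": 0.5, "rain": 0.5},
    "weather": {"looks": 1.0},
    "looks": {"mild": 0.6, "wet": 0.4},
    "expect": {"rain": 1.0},
}

def generate(max_words=6):
    word, out = "<start>", []
    while word in BIGRAMS and len(out) < max_words:
        nxt = BIGRAMS[word]
        word = random.choices(list(nxt), weights=list(nxt.values()))[0]
        out.append(word)
    return " ".join(out)

print(look_up("weather tomorrow"))   # retrieved verbatim
print(look_up("weather next week"))  # lookup fails: nothing stored
print(generate())                    # e.g. "the forecast calls for rain"
```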
Tony Roth: So that's somewhat reassuring, I think. But let's get into what it is doing. In its simplest form, my understanding of what ChatGPT is doing is: think of the Library of Congress somehow tied to a really powerful computer that is programmed using probability and statistics in some way. So, when I say, summarize for me the 16th-century French stoics, which is Montaigne and all these different French philosophers, that system will be able to find the primary texts in the Library of Congress and use probabilities and statistics to essentially screen that information and create some type of summary.
And then I get the summary, and maybe I would say, well, it's good in this way, but it's bad in that way. And then it could actually "learn" by being prompted to do things differently the second time and get better. Is that sort of what the system is doing?
Neil Sahota: That is what it's doing.
And maybe to give a concrete example based off what you're saying there: let's say you get that summary and you're looking at it like, you know, that's only sort of good. You could ask ChatGPT: could you write that so a fifth grader would understand? It would actually go through and change the language and try to tailor it so that it essentially dumbs down the information.
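The summarize-then-refine loop described here can be sketched as a running conversation that is re-sent to the model with each new instruction. The `chat` function below is a stand-in, not a real library call; its name and message format are assumptions chosen to mirror common chat-style APIs.

```python
def chat(messages):
    """Stand-in for a real model call: here it just echoes a canned
    reply so the sketch runs end to end. Swap in a real client."""
    return f"[model reply to: {messages[-1]['content']}]"

history = [
    {"role": "user",
     "content": "Summarize the 16th-century French essayists for me."},
]
draft = chat(history)  # first attempt at the summary

# Because the whole conversation is re-sent, the follow-up instruction
# refines the earlier draft instead of starting over from scratch.
history += [
    {"role": "assistant", "content": draft},
    {"role": "user",
     "content": "Good, but rewrite it so a fifth grader would understand."},
]
simpler_draft = chat(history)
print(simpler_draft)
```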
Tony Roth: What is it that has changed recently that has enabled us to get to this inflection? Is it simply that the size of the databases, the variety of databases of information attached to these computer chips, these systems of probability and statistics, and the power of those systems have all together reached some critical mass, so that the usefulness of these systems has now arrived at this minimum viable product, if you will?
And is that what's happening, or is it something else?
Neil Sahota: It's actually slightly different. It's not just that we have access to way more data. I mean, we definitely are generating more data. It's the training. And that's the thing a lot of people forget: AI can only do what we've taught it.
It's not: hey, super smart machine, tell me which stocks to pick. If you've never taught it about stocks, what a stock is, or how to do analysis, it can't do that. So we've just reached this inflection point now where we've had so much cross-training for these systems that they can actually do a lot of different things, and not just at the enterprise level anymore, but at the individual level.
So it knows all these assistant tasks now, and that's a big reason why ChatGPT literally went from about a hundred thousand users to 10 million users overnight: somebody was like, I could use it for this, I could use it for that. Even though they're simple, mundane types of tasks, the AI has now been taught how to do them.
Tony Roth: And so when you say it knows how to do something, what you mean by that is that when it gets a certain prompt, it's been programmed, and it can actually program itself in a sense, right, by incorporating the feedback from its prior cycles. But when it gets a prompt, it's been trained such that here is the most likely set of data it can output that will satisfy the prompt to the highest degree. That's sort of what it means to know something.
Neil Sahota: You've generally got the right idea. The prompting is the direction, our guidance, because these systems don't think for themselves. They don't do things for themselves. But AI is not really programmed. That's the thing I think a lot of people get squishy on.
It's not that the AI is following a set of instructions like a traditional software program, especially this piece that we call machine learning, a component of AI where we've given it something we call ground truth, which is rules on how to make decisions. Think of it this way, Tony: you've got a three-year-old child.
How do you teach that kid the difference between good and bad behavior? You can't just give the kid every possible example; there are too many. And you can't just wait until they do something good or something bad and reward or punish them; that's too reactive. What most parents try to do is give the child rules on good and bad behavior.
So if you help people, that's good. If you hurt people, that's bad. And then as the child is looking and making an evaluation, as it's getting more data, experiences, observations, the child can start assessing whether that's good or bad behavior. And we have human teachers, our parents, actual teachers, all of that, to help us along the way, to help us course correct and learn more effectively.
It's the exact same thing with AI. We give it these rules. We give it tons of data. We let it try things. We have human trainers: hey, AI, that's good, that's absolutely right; oh no, that's actually incorrect, and this is the reason why; that's only partially right. That's how the AI actually learns. Now, unlike us, where we need, what, 10,000 hours to master something, an AI system can get to the level of, say, an ethicist with a PhD in just a few weeks.
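A toy version of that training loop, rules plus examples plus trainer corrections, can be written in a few lines. This is a bare perceptron, not any production method, and the features and labels are invented, but it shows the principle: when the human trainer says "that's actually incorrect," the model's weights shift so the corrected judgment becomes more likely next time.

```python
# Each candidate behavior is scored on two features:
# (helps_people, hurts_people). Labels are the trainer's verdicts.
examples = [
    ((1, 0), +1),  # helping someone              -> trainer: that's good
    ((0, 1), -1),  # hurting someone              -> trainer: that's bad
    ((1, 1), -1),  # helping one, hurting another -> trainer: still bad
]

weights = [0.0, 0.0]  # the model starts knowing nothing

for _ in range(10):  # several passes over the trainer's feedback
    for features, label in examples:
        score = sum(w * f for w, f in zip(weights, features))
        guess = 1 if score > 0 else -1
        if guess != label:  # "oh no, that's actually incorrect"
            # Shift the weights toward the trainer's verdict.
            weights = [w + label * f for w, f in zip(weights, features)]

print(weights)  # ends at [1.0, -1.0]: helping scores +, hurting scores -
```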
Tony Roth: In order to get there, though, the AI requires feedback from human beings.
Neil Sahota: That's right.
Tony Roth: If it has that measure of feedback, then it can improve in the sense that it's more likely to provide output that we will regard as valuable and or accurate. And that's what machine learning consists in. It's taking that feedback and it's increasing the probability of providing a response or an output that we regard as valuable or accurate.
Neil Sahota: A hundred percent. That's why we hear people talk about the quality of the data, and why we worry about things like bias, particularly implicit bias. Because if we teach the AI system these things, unintentionally or intentionally, it will obviously warp what the AI produces.
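One way to see how that happens is a sketch where each piece of human feedback nudges up the probability of responses that raters approved. The response names and approval rates below are invented; the point is that if the feedback is systematically skewed, the model's output distribution drifts the same way, with no biased rule ever being written down.

```python
import random

weights = {"response_a": 1.0, "response_b": 1.0}  # starts unbiased

def respond():
    keys = list(weights)
    return random.choices(keys, weights=[weights[k] for k in keys])[0]

def feedback(choice, liked):
    # A thumbs-up makes that output more likely; a thumbs-down, less.
    weights[choice] *= 1.5 if liked else 0.75

# Hypothetical raters who, intentionally or not, approve response_a
# far more often. The skew lives in the labels, not in any rule.
for _ in range(200):
    choice = respond()
    approval_rate = 0.9 if choice == "response_a" else 0.5
    feedback(choice, liked=random.random() < approval_rate)

total = sum(weights.values())
print({k: round(v / total, 2) for k, v in weights.items()})
# response_a now dominates: the raters' skew became the model's skew.
```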
Tony Roth: The question that I'm dying to ask you, but I know it's not a simple answer, is if you look at AI today, is it at the level of a five-year old or a 10-year old, or a 15-year old?
Now, I know that my daughter told me last night that they had one of the AI systems—you probably know which one, I don't—take the LSAT and get a perfect score, and take the bar exam and pass. So, on some level, AI is in advance of the best law students, probably. But in another sense, you would never rely on AI, today at least, to go into a courtroom and litigate an important case, right?
Neil Sahota: That was actually ChatGPT. And when you think about the LSAT or the MCAT, or even the bar exam, it's a standardized type of test. The AI has an eidetic memory; it remembers everything it reads, sees, hears, experiences, observes. It can draw upon that pretty quickly to take that type of test.
But as you know, Tony, when you're in the courtroom, things don't fall into nice buckets or steps. There are other things that happen, other things that have to get connected to make your case, and that's where AI will definitely struggle. It can't do things it doesn't know, it can't think off the cuff, and it doesn't do well with first-of-a-kinds, because it's never experienced them before.
Tony Roth: So, I want to talk more about what the application curve looks like for AI in different kinds of functions. But I want to ask you first about an application of what I think of as AI that has disappointed relative to promise, but not relative to intuition. And I think it's fascinating to understand why that deficit exists, which is self-driving vehicles.
And so, Elon has been promising the world for seven years in a row that this is the year Tesla's going to reach level four autonomy. And without having any analytic understanding of where he's going with that or why he believes it, intuitively, I've always been very dismissive of those claims, because I'm a driver and I know intuitively that I'm reacting in real time to situations that somehow my brain, my neural network, my sensory perceptions taking in that data, is capable of handling. And you have to be better than a human in order for a human to accept what it is that he's promising. Am I correct that that's an application of AI? And secondly, why is there such a big deficit, and will that deficit close eventually?
What's your take on that whole narrative?
Neil Sahota: So you're right, that's AI. I'll share a couple of things here that I think will probably flabbergast most people, to be perfectly honest. We are making progress toward level four autonomous vehicles. Actually, I think Mazda is getting close to level five in China, and they have the most complex set of traffic laws and road systems.
A big reason we're not quite there yet is the trust factor you were actually talking about. It's not that we expect autonomous vehicles to be better drivers than we are; we expect them to be perfect drivers, which is not possible. We're the ones teaching the machines, and nothing is perfect, at least nothing human-made.
So it's unfortunately a fallacy, and everyone worries about the trolley problem. If you have to hit one of two people, who do you hit? People aren't comfortable with making that decision. It's one thing when we're actually driving; you have half a second to react, so there's some air cover on that. But having to make these life and death decisions beforehand freaks people out.
And I totally get that. It's not an easy thing to decide. What surprises a lot of people is that, at the United Nations, we don't talk about when to legalize autonomous vehicles; we actually talk about when to ban human drivers, because humans actually introduce the most variability into the system.
So, if you were actually to strip that out and go with all autonomous vehicles, you would actually create a much more stable system with a lot fewer accidents. If I remember the last UN report, there are about 20 million auto accidents worldwide a year that result in fatality or permanent, horrific injury. The estimate is that if it were just autonomous vehicles, that number would drop by at least 95%.
Tony Roth: You're simplifying the problem that you're asking AI to solve by eliminating human drivers, so I understand that you could reach level five or whatnot if you didn't have human drivers, because the problem is now much simpler. But that's not the world we live in, right? We live in a world where we know we're not going to be able to outlaw human drivers anytime soon.
So, let's talk about the problem that we are confronting, which is a world where we have drivers like my wife, though, you know, I'm probably as bad as she is.
Neil Sahota: It's also a generational thing, because if you look at the younger generations, especially the young millennials and Generation Z, they would prefer not to drive.
In fact, they're not even getting driver's licenses. They've had the convenience of ride share, for example. I myself, based in Southern California, would love an autonomous vehicle so I don't have to drive through traffic everywhere. At least it would free up my time to do other things. So, I think that that cultural shift will eventually happen.
It'll probably happen a little bit faster than I think most people anticipate. We can't discount the fact that sometimes the solution is that we as people need to step back and recognize that maybe there's a different way of doing this. We may not like it. It may diminish our place in the universe a bit, so to speak, but that's also the kind of existential reality that we're living in.
There are things that humans are much better at than machines, don't get me wrong: creative thinking, imagining. But there are some things that machines are better at than us. So why not tap into that advantage? This is not human versus machine. This is not natural intelligence versus artificial intelligence.
This is more about what we call hybrid intelligence, where we're trying to augment our own human capabilities with machine abilities. So, let's tap into the strengths of both and make it about human and machine.
Tony Roth: It's very compelling, Neil, to say let's get to a better world by allowing the machines to take over certain tasks, and that's all well and good. But let's just stipulate that this conversation is about understanding what AI is likely to be able to do or not do.
If we posit a world where humans for the next 50 years or some extended period of time are going to be behind the wheels of many of the vehicles on the road, and they're going to behave in totally arbitrary and unexpected and self-harming ways, that's a much harder problem again for AI to solve, to be able to exist within.
The question I have is: absent a policy intervention that gets those people off the road, will AI be up to the task, or does it require a simplification of the problem by taking people off the road in order to reach those levels of autonomy?
Neil Sahota: We have to simplify the problem. We can get autonomous vehicles driving fairly well, but there's always going to be these weird scenarios that occur, and that's our challenge.
We always think about the exceptions rather than the norm. Think about flying: for that plane to leave the gate, the airline, the crew, and all that are doing about 2,000 different safety checks, and all it takes is one to derail a flight. Recently I had a flight where we left the gate and, for some reason, one of the tires on the landing gear went flat.
People were groaning: what is this going to mean? But there were 1,999 things that went right that you don't ever hear about. That's always going to be the perception. You could have autonomous vehicles drive a billion miles a day, but it only takes one accident for people to lose their confidence in the system. Without simplification, the progress is not going to meet expectations.
Tony Roth: This is fascinating. Your invocation of the airline example, the flying example, is so interesting to me, because I'm probably the oddball here. People ask me about this every day, since I talk about this topic and I happen to travel a lot, so I'm in that context. They say, well, would you get on an airplane with no pilot?
And the answer is, I would much sooner get on an airplane with no pilot, I think, than get in the backseat of a vehicle with no driver, because the problem of flying from point A to point B intuitively strikes me as an inherently much simpler task than driving on a road where I'm going to be in close proximity to hundreds, if not thousands, of vehicles.
Whereas on a flight, the FAA is controlling where all the airplanes are to a good degree, and so on and so forth. Am I on the right track there, or am I not thinking about that right?
Neil Sahota: You're spot on. Less variability, more control, more rigid. It's actually very similar to Singapore where they introduced self-driving buses and self-driving taxis back in 2019.
If you've ever been to Singapore, they have very strict traffic rules. Nobody speeds. Nobody jaywalks, because there are huge fines and there are cameras everywhere. You get caught, and you get fined. You get caught again, you get tossed in jail. So people are very compliant with the rules. Less variability in the system.
Tony Roth: I think we've covered a lot of territory. But let me ask you one really provocative question to end the episode. The takeaway to me is to recognize that AI does, and at least for the foreseeable future will, only do what we essentially are able to program or teach it to do, in the sense that we've used that word.
It doesn't have consciousness, it doesn't have cognition, and I think that's got to be understood by the really smart people that are in this field. Nonetheless, there is this letter that was recently signed, a so-called open letter, by leading people in the field (I don't know if you were a signatory or not) that basically said: let's take a pause in AI because it's moving too quickly and it could present risks to humankind. So tell us about that.
Neil Sahota: I'm very familiar with the letter. I know quite a few of the people who signed it. I'll start off by saying it's the right problem, but they're offering the wrong solution. There's a rapid pace of change right now, and there are people working on stuff they don't fully understand.
But the feasibility of having people just kind of hit the pause button and getting every country, company, individual to stop, especially the bad actors, that's not realistic. We can't survive just being reactive to things anymore. We have to find a new proactive model to jump ahead of all these things, to understand different uses or misuses of these technological tools.
Tony Roth: When I think of the reasons that we'd want to prevent AI from moving forward on different paths, there are ethical reasons that relate to all kinds of different things: medical issues, elimination of human function, all that kind of stuff. But this sounds like it's not that. This sounds like it's more about harm or damage, certain risks: that these machines could be so powerful, they could do things that we're not ready for.
Can you give us an example of one of the risks that people are worried about that have led them to sign this letter?
Neil Sahota: This is already being done, but think about your newsfeeds or your social media: what content gets pushed to you is all controlled by AI algorithms. AI, with a little bit of information, learns your psychographics: your personality, your hobbies, your interests.
It just starts feeding you the same kind of stuff over and over again, because it knows you like it. What it ends up doing is creating this echo chamber, so all you ever hear is the same opinions you already believe in—
Tony Roth: Right.
Neil Sahota: It limits your perspective. But if someone wants to kind of warp your thinking, here's a really powerful way to influence your thoughts, your mind, your decision-making process without you even knowing it.
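A hedged sketch of that echo-chamber dynamic: a feed that ranks items by how often you have already clicked their topic will keep serving more of the same. The articles and click history below are toy data, and real recommenders are far more elaborate, but the reinforcing loop has the same shape.

```python
from collections import Counter

# Toy catalog and a click history skewed toward one topic.
articles = [
    {"id": 1, "topic": "politics_left"},
    {"id": 2, "topic": "politics_left"},
    {"id": 3, "topic": "politics_right"},
    {"id": 4, "topic": "science"},
    {"id": 5, "topic": "sports"},
]
clicks = Counter({"politics_left": 3, "science": 1})

def score(article):
    # Engagement proxy: how often this user clicked the topic before.
    return clicks[article["topic"]]

feed = sorted(articles, key=score, reverse=True)
print([a["topic"] for a in feed[:3]])
# ['politics_left', 'politics_left', 'science']: the feed already
# leans toward what you clicked, and each new click on these top
# items feeds the Counter, narrowing the feed further next time.
```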
Tony Roth: That has such powerful implications for what we're experiencing as a country right now, and what we have experienced over the last number of years as a democracy, and the need to have the body politic, if you will, make informed and intelligent decisions. We'll talk about that when we resume in our next episode. Neil Sahota, thank you.
Neil Sahota: Hey, my pleasure.
Tony Roth: So thanks, everybody, for listening today. Go to wilmingtontrust.com for a complete roundup of our latest take on the economy and the markets and how we're responding in portfolios. Thanks, everybody, and we'll talk to you next time.
Disclosure
This podcast is for educational purposes only and is not intended as an offer or solicitation for the sale of any financial product or service or recommendation or determination that any investment strategy is suitable for a specific investor.
Investors should seek financial advice regarding the suitability of any investment strategy based on the investor’s objectives, financial situation, and particular needs. The information on Wilmington Trust’s Capital Considerations with Tony Roth has been obtained from sources believed to be reliable, but its accuracy and completeness are not guaranteed. The opinions, estimates, and projections constitute the judgment of Wilmington Trust as of the date of this podcast and are subject to change without notice.
The opinions of any guest on the Capital Considerations podcast who are not employed by Wilmington Trust or M&T Bank are their own and do not necessarily represent those of M&T Bank Corporate or any of its affiliates.
Wilmington Trust is not authorized to and does not provide legal or tax advice. Any advice and recommendations provided to you are illustrative only and subject to the opinions and advice of your own attorney, tax advisor, or other professional advisor.
Diversification does not ensure a profit or guarantee against a loss. There is no assurance that any investment strategy will be successful. Past performance cannot guarantee future results. Investing involves risk, and you may incur a profit or a loss.
Any reference to company names mentioned in the podcast should not be construed as investment advice or investment recommendations of those companies. Third-party trademarks and brands are the property of their respective owners. Third parties referenced herein are independent companies and are not affiliated with M&T Bank or Wilmington Trust. Listing them does not suggest a recommendation or endorsement by Wilmington Trust.
Private market investments are only available to investors that meet the U.S. Securities and Exchange Commission’s definition of qualified purchaser and accredited investor.
Facts and views presented in this report have not been reviewed by, and may not reflect information known to, professionals in other business areas of Wilmington Trust or M&T Bank who may provide or seek to provide financial services to entities referred to in this report.
M&T Bank and Wilmington Trust have established information barriers between their various business groups. As a result, M&T Bank and Wilmington Trust do not disclose certain client relationships or compensation received from such entities in their reports.
Investment products are not insured by the FDIC or any other governmental agency, are not deposits of or other obligations of or guaranteed by Wilmington Trust, M&T Bank, or any other bank or entity, and are subject to risks including a possible loss of the principal amount invested.
Wilmington Trust is a registered service mark used in connection with various fiduciary and non-fiduciary services offered by certain subsidiaries of M&T Bank Corporation including, but not limited to, Manufacturers & Traders Trust Company (M&T Bank), Wilmington Trust Company (WTC) operating in Delaware only, Wilmington Trust, N.A. (WTNA), Wilmington Trust Investment Advisors, Inc. (WTIA), Wilmington Funds Management Corporation (WFMC), and Wilmington Trust Investment Management, LLC (WTIM). Such services include trustee, custodial, agency, investment management, and other services. International corporate and institutional services are offered through M&T Bank Corporation’s international subsidiaries. Loans, credit cards, retail and business deposits, and other business and personal banking services and products are offered by M&T Bank, member FDIC.
© 2023 M&T Bank and its affiliates and subsidiaries. All rights reserved.