The AI Doomsday Clock is Broken
This major Nvidia-backed AI startup is betting on the future of chatbots, minus the terrifying apocalypse scenarios predicted by some AI experts
Earlier this year, one of the biggest AI funding events took place when a group of high-profile investors including Nvidia poured $150 million into a little-known AI startup called Kore.ai. Long before OpenAI began promoting its Scarlett Johansson-style conversational AI assistant, Kore.ai founder and CEO Raj Koneru and his company had already solved many of the issues around practically deploying AI as a human assistant.
The rapid development of AI chatbots and assistants has quickly spread into the mainstream, impacting everything from businesses focused on knowledge work to the world of academia and even the entertainment business via video and music. But while we’re all enjoying these new tools, some are wondering: What comes next? What happens if and when these AI tools evolve into something even more intelligent than human beings?
This is just one of the major questions I posed to Koneru during our recent discussion, and his answers may shock some of those who believe that Artificial General Intelligence (AGI), and then AI superintelligence, will fundamentally change our lives on Earth, and maybe even our very existence.
Adario Strange:
Recently, Ilya Sutskever stepped down from his position at OpenAI, the company he co-founded in 2015. In June, he announced a new startup called Safe Superintelligence, whose mission statement describes an AI company squarely devoted to safety in artificial intelligence.
The AI research world is divided on the topic of AI safety. Some think it's all silly fear-mongering, that we don't have anything to worry about, and that AI is simply a tool. Others, however, believe we're not paying enough attention to superintelligence as a looming concern for humans, not just in business and work but also in relation to our existence on the planet. What is your perspective on this topic?
Raj Koneru:
Well, it's very difficult to define what [superintelligence] is. But here's something to consider: whatever model is out there, whether it's a current-day model or a model five years from now that's far more intelligent than today's, is a request-response system. You're going to send it some request, and you're going to get a response. And if you can put guardrails on the front side, to check the input you're sending in, and put guardrails on the output, you can control that model and the use of that model. We have a guardrail framework in our platform, and whenever we send anything to any LLM (Large Language Model), whether that LLM is hosted on our own platform or hosted externally, we enable those guardrails on both the input side and the output side.
Now there's this concept of autonomous bots. That is coming. It's not there yet, but it's coming. And that's sort of maybe more relevant to the super intelligence element, where the bots go do things without instruction. You give it an overall goal, and it just goes and talks to other bots and just tries to achieve that goal without any [direct] controls. Today, the use cases that enterprises are trying to put out to consumers, and even consumers using something like ChatGPT or Google Gemini, are all request-response systems.
Those are controlled inputs and controlled outputs. There has not yet been a scenario where bots just go wild and control things. These are controlled environments. So I think the responsibility rests less, in my opinion, with the model company. It is the user of that model who has to put the guardrails in place to make sure the models are not doing autonomous work. Platforms like ours enable [businesses] to safely use those models, as opposed to letting the models just do whatever they want on their own.
Now, what is Ilya going to do with his company? Maybe it's a guardrail company, maybe it's a model company. We don't know, but the concept is the same: How do you control the model so it doesn't go haywire, essentially? And that is being done today in production with guardrails of the type that we provide. We have a toxicity guardrail, we have a bias guardrail, we have a guardrail on profanity. You can create your own guardrail on our framework, and that's how you go about it.
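For readers curious what the request-response guardrail pattern Koneru describes might look like in code, here is a minimal, hypothetical sketch. It is not Kore.ai's framework or API; the class names, the placeholder word list, and the stand-in model call are all assumptions made purely to illustrate the idea of checking a prompt before it reaches a model and checking the response before it reaches the user.

```python
# Hypothetical illustration of input/output guardrails around a request-response model.
# Names, checks, and the stand-in model are assumptions, not Kore.ai's actual API.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class GuardrailResult:
    allowed: bool
    reason: str = ""


# A guardrail is any function that inspects text and decides whether it may pass.
Guardrail = Callable[[str], GuardrailResult]


def profanity_guardrail(text: str) -> GuardrailResult:
    blocked = {"badword1", "badword2"}  # placeholder word list for the sketch
    if any(word in text.lower() for word in blocked):
        return GuardrailResult(False, "profanity detected")
    return GuardrailResult(True)


@dataclass
class GuardedLLM:
    """Wraps any request-response model with input and output checks."""
    model_call: Callable[[str], str]  # the underlying LLM, hosted anywhere
    input_guardrails: List[Guardrail] = field(default_factory=list)
    output_guardrails: List[Guardrail] = field(default_factory=list)

    def ask(self, prompt: str) -> str:
        # Check the request before it ever reaches the model.
        for check in self.input_guardrails:
            result = check(prompt)
            if not result.allowed:
                return f"Request blocked: {result.reason}"

        response = self.model_call(prompt)

        # Check the response before it reaches the user.
        for check in self.output_guardrails:
            result = check(response)
            if not result.allowed:
                return f"Response withheld: {result.reason}"
        return response


if __name__ == "__main__":
    # A stand-in model for demonstration; a real LLM client would go here.
    def echo_model(prompt: str) -> str:
        return f"Model answer to: {prompt}"

    llm = GuardedLLM(echo_model, [profanity_guardrail], [profanity_guardrail])
    print(llm.ask("What is a guardrail framework?"))
```

The point of the pattern is that the same wrapper works no matter where the model is hosted, which matches Koneru's claim that control belongs with the user of the model rather than the model company.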
Strange:
Do you think we will ever achieve or encounter an artificial superintelligence that is completely liberated from guardrails, free of any controls by the human hand?
Koneru:
Well, it depends on the humans, right? At the end of the day, the models are going to get more and more intelligent, the more data they're trained on. They're going to know more things. They're going to be able to generate more things, and maybe be able to do more things. So let's say there's this super intelligent model, and it's achieved Artificial General Intelligence, which seems to be the holy grail of where the model companies want [to go]. Let's say we are there today, and this fantastic model exists. It's still on us as to how we're going to use that model.
The model is not going to go take over the world. It's just a machine at the end of the day. Now, if I choose to let the model make all the decisions that need to be made for a particular use case, then yes, I'm in trouble, because I'm not controlling anything, and it may do something that could be harmful. But am I stupid enough to have done that? So if I have a superintelligent computer or model and I use it appropriately, with appropriate guardrails, it's going to make my use case much better.
So I'm not so worried about people saying that [AI is] going to take over the world and do things that could start wars or famines. These are extreme imaginings. In the practical world, I think the use of these models for specific use cases will enable enterprises and businesses to control what the models can do.
Strange:
What you're saying makes a lot of sense if we're just solving for the business community and good actors. But we have open source AI, and, particularly in the realm of generative AI art and video, we're occasionally seeing some of the open source models being used to create things that are a little racy, and even a bit distasteful in some cases.
So it seems logical, in my view, that at some point someone in the open source AI space will aggressively develop an AI model deliberately created to operate on its own, explore the world, and potentially create chaos. So far, we've seen that open source software isn't something you can necessarily control. Have you considered that? Is that not something worth worrying about?
Koneru:
I think we're saying the same thing at the end of the day. I give you something, and it's up to you to use it appropriately. Whether it's open source or not, let's say somebody created a model that's distasteful, that's not secure, that's not safe, [or] that creates havoc. The model itself is not going to do those things until you put it to use. So it's on you to put the guardrails in place and not let the model go do those things.
Going back 25 or 30 years ago, there was no Internet, essentially. Now we can't live without it. Similarly, these models are going to make our lives better, but much like the internet also had bad actors, there will be bad actors using these [AI] models.
—Raj Koneru, Kore.ai
Strange:
What about the worst-case scenario, where a bad actor out there uses open source tools to do exactly that: create havoc? They're not beholden to the business community. They're not aligned with a particular government, or maybe they are, and that's why they're doing it.
Koneru:
I mean, this is actually much simpler, in my opinion. If I'm a bad actor and I went and used this [AI] to do some harm, there's evidence that I did that. So the laws of the land in whichever country will apply to me… You can't stop a person from making a bomb, but if they use the bomb, then, of course, the laws come into play, whether it's terrorism or [some other] crime, and you catch the guy and put him in jail. Same thing here [with AI]. The technology is just a tool. It's a means to an end.
Technology has improved our lives significantly. Going back 25 or 30 years ago, when I grew up, there was no email, there was no Internet, essentially. But now we can't live without it. Similarly, these models are going to make our lives better, but much like the internet also had bad actors, there will be bad actors using these [AI] models. So I think it's on law enforcement to do its job and make sure that people are not doing bad things with the models. But you can't stop the development of these models, whether open source or commercial.
Strange:
Nevertheless, science fiction scenarios are often brought up in which an AI essentially takes on its own independence, moves out into the world, away from its creator or whoever trained it, and begins to have its own thoughts and goals. You're saying that you believe a lot of these science fiction scenarios are just that, science fiction, and not rooted in probable reality?
Koneru:
Yes, I completely believe that. I think, at the end of the day, humans are going to control how these things are used. These things are not, in my opinion, going to just autonomously go and do things anytime soon, or probably ever. So if I'm the creator of something that's doing harm, whether I intended it or not, I'm responsible, and the law has to apply to me. And then you try to stop that machine from continuing to do harm.
So those scenarios [where AI] models and bots take over cities and create havoc…I don't believe in all that. Because at the end of the day, the humans are the ones that are actually putting these models to work.
This interview excerpt has been edited for brevity and clarity. You can listen to or watch the entire interview here.