How AI Will Disrupt Hollywood & What Elon Musk vs. OpenAI Means with David Raskino
The co-founder of Irreverent Labs talks the future of AI in Hollywood and why that's just the start of a widespread technology remapping of nearly everything
During a recent Hundred Year Podcast episode, I spoke with David Raskino, the co-founder and chief technology officer of Irreverent Labs, to explore the looming changes and opportunities in the rapidly evolving artificial intelligence (AI) space. Raskino is a software engineer, a veteran of Microsoft, and a serial entrepreneur who has dedicated his career to machine learning and artificial intelligence. Last month, Irreverent Labs secured new funding from Samsung Next to develop the company’s vision of AI-powered entertainment.
We talked about everything from the doomsaying around artificial general intelligence (AGI) to the battle between OpenAI and Elon Musk to the debate over open-source versus for-profit AI ventures. But the conversation really took an interesting turn when we delved into AI and Hollywood, and how the technology is disrupting an industry already in the throes of technological change.
This is just an excerpt of our entire discussion. If you want more, I encourage you to listen to or watch the full conversation via the Hundred Year Podcast here.
Adario Strange:
When I look at artificial intelligence (AI) and how it can automate so many things and let you offload a lot of processes and tasks to various AI implementations, it seems like the startup investing landscape may have to change.
Instead of looking for a bunch of two-person startups that need a certain amount of cash and perhaps a special ops team of 50 people, it seems like we're about to see 1,000 startups led by maybe one person, with funding in the range of $500,000 to $1 million. For example, let’s say Carol in Iowa uses Auto-GPT mixed with her skills from some other industry, perhaps what she did in banking, and she has a potential unicorn idea that is only feasible using AI tools that weren't available to her before and that now allow her to work in a far leaner way. Does that sound outlandish?
David Raskino, co-founder, CTO, Irreverent Labs:
Let's start with what has not changed. What's not changed is that there are many value chains in the world that make up the world economy that are still yearning for automation in some way. That hasn't changed. Capital's interest in generating returns that depend less and less on human labor hasn't changed either. Now, it’s just a question of how capital investigates, learns, and discovers how technology is going to rewire these value chains.
I think the period of discovery that we are embarking upon now as it relates to AI is not dissimilar to the Apple App Store moment, where there is this new platform that has emerged, and nobody knows where to draw the line between what the platform is going to do and what the app developers are going to do.
Is OpenAI going to continue to fund the investment in their extensions? If they've said, “Hey, GPT-4 is going to be the thing that we work with for a while, we don't have plans for GPT-5,” are they going to then build their own apps on top of it? We just don't know.
Is Microsoft going to feel that? I think there's a period of discovery about “where's that line?” And venture capitalists (VCs) are going to have to work that out.
What I do see happening is that the first wave of AI app developers has come. I'm not going to name any names, but there are app developers that are valued way over what they’ll be able to do. And I think that will become clear when those app developers turn into companies and have to diversify into more than one product. And suddenly, capital is going to realize, “Hey, this is actually a single app company.”
I only say that because, in this new world of AI, diversifying into more than one product is not that simple. It requires real research. It's not just fine-tuning a model. There are lots of things you can do with just fine-tuning a model. But if those things are fairly easy to do, they're very obvious to the market, and lots of people are going to do them. It's going to become a commodity. And building a business on a commodity is hard.
So it just means there's a lot more competition, and you have to differentiate yourself. That differentiation is going to require breakthroughs. I think we are entering that discovery phase. Now, if you're not [creating breakthroughs], I think there's a lot of value that the app developers can create in a short amount of time. Some percentage of the frenzy we're seeing is a lot of people showing shiny examples of things that you can do that look great.
You might even get a lot of early traction, but there is a hard wall you're going to hit at some point, both in what you're able to do as an app on top of a foundation model and in how you expand into adjacent areas once you've saturated what you can do with that one use case.
Making sense of the battle between Elon Musk and OpenAI
Strange:
You brought up OpenAI, which was founded, from my understanding, as a company with an open-source ethos, but now it's mostly closed-source. And this change is something that appeared to annoy Elon Musk, who seeded it to some degree. He claims to have come up with a lot of the ideas behind the original organization, and I think he even said that he came up with the name for the company. Now he is saying he's going to start his own ChatGPT competitor called TruthGPT (presumably an offshoot of X.ai, which launched in March).
But here's the thing that doesn't really track for me: Years ago, when he talked about OpenAI, it seemed like this was part of the altruistic arm of his endeavors, like, “Hey, I'm going to put some money into this [AI effort] for the good of all humanity.”
So when he says he's going to create a competitor to OpenAI and ChatGPT, my thought isn't that he's necessarily going to try to create what he meant to create with OpenAI, and that it's going to be all open-source. It instead sounds like maybe he thinks he missed an opportunity. Perhaps the thinking is, "I was trying to do something altruistic, but if these guys are just going to profit off of it, well then I'll just profit off of it, too, with my own new thing.” How do you see this as you've been watching this AI story unfold?
Raskino:
I think there's some amount of comedy here that we should recognize. It's just deeply ironic that one of the world's greatest capitalists is commenting negatively about a company that seeks to return a profit to its shareholders. I cannot help but chuckle about that.
Strange:
Do you think he was naive in thinking that he would seed OpenAI and that it would somehow remain this kind of open-source community, almost like a Wikipedia of AI-style organization?
Raskino:
I don't know, but as an observer, I can't help but notice a couple of things. And obviously, these are just my private readings of the tea leaves; they may not be true. One thing is that the picture I get from any of Elon Musk's commentary is that he greatly enjoys attention. And it's hard to know what is attention-seeking and what is serious. So I generally just tend to tune him out. Because he tweets so much and says so many things, I wouldn't get any work done if I just followed his musings.
The second thing is that I recall reading on the OpenAI website the reason for Elon’s departure: that all parties came to the conclusion that there was a conflict of interest between what OpenAI was interested in doing and the kinds of things that Tesla was interested in doing.
And so it looked like a mutual agreement; at least the way it was worded on the website, it just says that he decided to step down because of a conflict of interest. So if he stepped down as a result of a conflict of interest, then he stepped down. And that's it. If I resigned from a company, I left the company. I should not have any problem with what the company decides to do. I'm not there.
Strange:
Aside from OpenAI and Musk, do you think it's naive in general to think that one could create an AI startup that is open-source and whose primary goal is not profit but the betterment, safety, and advancement of humanity?
Raskino:
To believe that a consequential AI company would exist that serves 100 million daily users with a product that has the opportunity to reshape society, and then to think that that company would not seek profit, I think there's some level of naivety that goes into that.
The reason I say that is, number one, the temptation is too high. America is a nation of capitalists. Number two, given the amount of compute and cost and the kind of people they wish to attract, it would require very deep-pocketed benevolent investors who just wanted to plow charitable donations into the company.
And even that is fraught with danger. Even charitable donations come with agendas. How realistic is that? Has that ever been done? The Bill and Melinda Gates Foundation is an example of a consequential foundation that does charitable work around the world. But Bill and Melinda Gates were very, very involved with that. They put their personal wealth into it, and on the back of that were able to get other like-minded people to join them. So could OpenAI have been that? Possibly, but I don't know that all the founders felt that way. If they did, it wasn't clear in public that they felt that way.
This is how Hollywood is about to transform before our very eyes
Strange:
Paint a picture of Hollywood in 10 years. What does that look like in terms of AI's involvement and its impact on the industry?
Raskino:
I expect that short films will be recognized as a new class of motion picture entertainment. I'm talking about very short films, TikTok length. And the reason I say this is because human attention is very, very hard to retain these days. Not only are we strapped for time, but we have to work harder to make ends meet. We don't want to sit around and wait to see things, and the availability of content on the Internet, Instagram, and TikTok has made it so that we just have super short attention spans. So I believe that short entertainment as a category emerges and is generally accepted. I even think there was a company that spun up briefly…
Strange:
Oh, you mean Quibi, right?
Raskino:
That's right. I think they were just ahead of their time. I think a year from now, short films will be a category that is more recognized, and I think AI will play a huge role in that. And it would not surprise me if a good portion of those actors are synthetic actors. I think short film is where it's going to happen because it is accessible enough for the average person to produce. I think the quality is going to be there, the production tools are going to be there, and people's attention is already there. And people will be making short stories like they are on Instagram and TikTok, just longer running stories. I think people are going to want to tune into that content.
Strange:
So given that, do you think 2023 is the last moment where we know for sure that a real person was, for example, singing into a mic, or playing an instrument on a song, or the person I'm seeing on screen is even real? Are we around the last point of departure for obvious reality in entertainment right now?
Raskino:
I think we're entering that phase. How long [the status quo] is going to last is really going to be a function of how long there are people out there who want to sit and watch an hour-and-a-half-long movie. I'll give you a crazy number. I was looking recently at how many films the big studios have been producing in America and Canada over the last 20 years. On average, it works out to about 500 movies a year, and that's falling off a cliff. Do you know what's on the rise? Independently produced films. Whichever metrics you look at, they all tell the same story.
I looked at the numbers on Instagram and TikTok alone, those numbers just tell a clear story about where people's interests are. And as we have more younger generations, that's just going to compound in the direction I'm talking about. So I think the writing's on the wall.
There are six or seven big studios in North America that produce films. There are maybe another eight or nine that kind of matter, but they’re tier-two. So there are not that many studios spending hundreds of millions of dollars and two years producing a ninety-minute movie that will be watched in theaters. And how long is that theater-going going to last? It will be there, but it's clearly on the decline. And it's not just the Covid years; even before the pandemic, movie theater-going was on the decline.
Strange:
Give me the blue sky forecast for Hollywood’s incumbents. What is the positive version of Hollywood’s future for someone who is an actor or works in the studio system as a producer or in another capacity?
Raskino:
I think two things are going to happen. First, because of the democratizing power of video production, everyone will become a part-time actor, or at least acting will become a lot more accessible to the world. We're going to find all kinds of people out there who are actually really good at acting and have a great stage presence.
How is Hollywood going to feel the effect of that? They’re not going to feel the effect of that in big movie productions. You're not going to go from David from Bombay [filming amateur short films] to John Wick 5. They're still going to do a John Wick 5 with Keanu Reeves.
Strange:
I don’t know, I've seen some pretty good productions, really bare-bones in terms of budget and recorded on iPhones, already coming out of places like Africa.
Raskino:
Yes, and just imagine with more tools, a greater democratizing effect. This stuff is getting really good. There's so much creativity around the world, but the cost of getting into entertainment [for many] is just too high. It's a life choice. People don't want to make singular life choices because the world is so unstable; [some people] need multiple side hustles. So people are naturally going to ease into the world of entertainment by being part-time actors.
And Hollywood is going to feel that in some way. One way I know they will feel it is through their P&L (profit and loss) statements. When American businesses have a P&L problem, the first thing they do is cut costs. And that cost is going to come from anyone who is not directly supporting revenue.
So Hollywood and other businesses are going to look for a way to use more software and more automation. And the job roles will change. So it may be cutting costs temporarily, it may be reorganizing the structure of the labor they have internally. But that's just the forward march of technology; it's just going to happen. And any company that sits on the sidelines is going to be irrelevant eventually. So Hollywood will find a way out of sheer survival. Will the big studios exist 50 years from now? That I don't know.
Strange:
With regard to AI, what are you fearful of? What’s the thing that keeps you up at night?
Raskino:
Generally, I just don't live in fear. But if there's one real risk that I see with AI in the very short term, it is not the dislocation of labor; it is the adversarial use of AI to deceive people and do more harmful social programming that causes division, especially in Western democracies, at precisely the time that democracy needs to reinvent itself. The reason I think of that as the greatest potential harm is that adversarial hackers already have a playbook for doing it. They've honed that playbook. We can see clear evidence of it being used over the last 20 years.
[AI] is such a powerful tool, and has such a phenomenal ability to deceive, because it can appear to be reasonable. We’re putting a significant burden on the average person to evaluate [the veracity of] communications. I think that burden is still great, and it will have an impact, for sure.