Marc Porat
Millennial Advisors
When Marc Porat joins the Walker Webcast, you know the conversation is going to be provocative, insightful, and likely to leave you both inspired and uneasy. This episode dives into the implications of superintelligence, a stage of artificial intelligence that surpasses human capabilities in every field.
Marc is a Silicon Valley legend, best known as the co-founder of General Magic. His company anticipated the smartphone more than a decade before the iPhone existed. That level of foresight lends weight to his message today: we are standing at the edge of another paradigm shift, the rise of superintelligence.
The fog bank and the black swan
Marc opens with the metaphor of a "fog bank" where innovation begins. It’s a space of uncertainty where visionaries must peer into the future and trust their intuition. The iPhone was once in that fog; so was the internet. Today, AI is emerging from that same cloud.
He draws a parallel between technological black swan events and AI's explosive development. AlphaGo, the AI program that played the board game Go against a world champion in 2016, made a move so unexpected that it stunned experts. That single moment marked the first visible crack in what we thought machines could never do: be creative.
From AGI to superintelligence
Marc lays out the timeline. We’ve now passed the Turing Test, almost without notice. We're already interacting with AI that is indistinguishable from humans in many contexts. The next stage? Artificial General Intelligence (AGI), followed closely by superintelligence: AI that’s not just capable, but vastly more capable than any human.
He challenges us to consider: What does a world look like where AI performs legal analysis better than lawyers, diagnoses illness more accurately than doctors, offers life advice more thoughtfully than therapists, and remembers everything about us?
The promise: your digital twin, your doctor, your co-creator
Marc envisions a near future where each of us has a digital twin, an AI that understands our preferences, values, fears, and goals. These twins could coordinate with AI agents trained in law, medicine, education, finance, or any domain.
- In healthcare, AI already outperforms radiologists in diagnosing specific conditions. Now imagine a full team of AI doctors who are familiar with your entire medical history and collaborate on your treatment.
- In law, AI can review and interpret tens of thousands of legal cases, flagging contract risks and drafting alternatives.
- In the realm of creativity, co-creator AIs are already producing music, art, and literature. Soon, they’ll be part of every brainstorming session.
- In life coaching, AI companions can guide users through challenges, opportunities, and even therapy-like conversations.
For seniors, students, professionals, and creators, AI could be a life-enhancing partner. Marc compares it to a rocket ship for the mind.
The peril: weaponization, dependency, and global imbalance
For every uplifting scenario, Marc offers a chilling one. Superintelligence could just as easily become a tool of control and destruction.
- Job displacement: Both manufacturing and knowledge work are threatened by automation. AI doesn’t take breaks, doesn’t unionize, and doesn’t quit.
- Surveillance and manipulation: AI companions could become vehicles for exploitation if not governed by strict ethics.
- Weaponization: Autonomous drone swarms and AI-controlled military strategies are no longer science fiction.
- Geopolitical risk: China’s state-backed AI ecosystem is scaling rapidly, with cities built around AI research and STEM graduates outpacing those in the U.S. by a factor of four.
Marc calls this humanity’s "last exam." If we don’t establish clear principles now, we risk handing the future to actors who don’t share democratic or humanitarian values.
Choose your relationship with AI
Marc doesn't tell us to panic. Instead, he advocates for mindfulness and informed decision-making. You can embrace superintelligence or reject it, but you cannot ignore it. He challenges businesses, individuals, and nations alike to determine their stance and invest accordingly.
AI can be both sword and shield: a growth catalyst or a threat multiplier. Those who learn how to harness it responsibly will hold an enormous first-mover advantage. Those who hesitate may be left behind.
The future is a global brain
The concluding vision is both inspiring and philosophical: AI models and agents interconnecting across industries, devices, and nations to form a kind of global brain that reasons, infers, and adapts based on everything ever published, said, or shared.
Will it become a new species? Possibly. Will it be sentient? Not yet. However, it will mimic human emotion so convincingly that the distinction may become blurred.
Marc leaves us with the image of a girl born in 2030 who knows no world without AI. She’ll be a superintelligence native, living a long life filled with abundance, curiosity, and the challenge of defining what it means to be truly human.
Want more?
As host of the Walker Webcast, I have the privilege of conversing with fascinating people like Marc Porat every week. Subscribe to the Walker Webcast to see our upcoming guests.
Superintelligence: Positives and Negatives of AI with Marc Porat
Marc Porat: We realized that the root of our strength was that we understood how people use information machines better than anyone else. This was our early vision for the product: a tiny computer, a phone, a very personal object. It must be beautiful. It must offer the kind of personal satisfaction that a fine piece of jewelry brings. It will have a perceived value even when it is not being used. It should offer the comfort of a touchstone, the tactile satisfaction of a seashell, the enchantment of a crystal. Once you use it, you won't be able to live without it. It's not just another telephone. It must be something else. Yeah, we did this one 12 years before it finally came out. That kind of brings me to the beginning. Everything important starts inside a fog bank. Your skill, in part, is to look into the future and see whether you see something profound, and that's basically what we do. Today's talk with you is about the beginning of something huge coming to influence all of us in our lives. Black swans, Nassim Taleb, you've probably read the book: events outside the realm of regular expectation. Nothing in the past can actually convince us that something huge is about to happen, and when it does, it brings with it an extreme impact. Some of the effects are spectacular. They create enormous wealth and possibilities. And some of them are disastrous and catastrophic. Today, we are going to look at exactly the kind of phenomenon that happened when we did General Magic and then Steve came along. He was the black swan. He did something that, in some sense, everyone is now using. Could you imagine if I asked you to throw away your iPhone? You couldn't do it. You'd struggle. And that's what happens in a paradigm shift that is truly profound and historic. First wave, everyone here lived through it: personal computers, the web, e-commerce, mobility, the works. That was a 40-year wave. These things don't come from nowhere. They're incremental. They build up until they suddenly emerge as a black swan, but they started somewhere before. Forty years for that; we're eight years into AI. Probably the first time you realized there's an AI thing was one or two years ago, maybe this year. Well, it's been building up for not too long. So it's still, in some sense, a young child. It's precocious. It says dumb things. But you can feel its strength and its power beginning to emerge, and it is a very robust thing. We are going to explore its origins, but more importantly, where it is, where it's going, and how it impacts your life in a way that's nuanced. And I'll explain that in a minute. So 10 years ago, AI couldn't tell the difference between a toaster and a cat. It was stupid in that sense. It was laughable. AI, only 10 years ago, was an academic curiosity. No one took it seriously, certainly not computer science people. They did hardcore computer science. Just a few years later, an enormously important thing happened. Who's familiar with the game Go? It's a really complicated game. It's about 2,500 years old; some people think it's 4,000 years old. But it's actually quite simple when you think about it: a 19-by-19 board, you place these little black and white stones, and you try to block the other person. Well, it turns out that there are ten to the 170th possible legal board positions. Not easy. I mean, that's a huge number. It is more than the number of atoms in the entire known universe. Okay, so here we go.
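To put that number in perspective, here is the arithmetic behind Marc's comparison (an aside added for this recap, using commonly cited figures: Tromp and Farnebäck's exact count of legal Go positions, and the standard estimate of roughly $10^{80}$ atoms in the observable universe):

$$N_{\text{Go}} \approx 2.1 \times 10^{170}, \qquad N_{\text{atoms}} \approx 10^{80}, \qquad \frac{N_{\text{Go}}}{N_{\text{atoms}}} \approx 10^{90}.$$

Even if every atom in the universe had evaluated one board position per second since the Big Bang (about $4 \times 10^{17}$ seconds), the search would have covered roughly $10^{-73}$ of the space. Exhaustive search was never an option, which is why the system had to learn the game instead.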
We're playing a game with one of the world champions, Lee Sedol, in 2016. The AI, AlphaGo, which DeepMind created, made move 37, and something amazing happened. There was the black swan. The AI made a move, move 37, that seemed ridiculous. It seemed like a gigantic mistake. And Lee looked at it and looked at it, and he resigned, because that move was not intuitive. It came out of a creativity that the AI itself had put forward. Now, mind you, no one had programmed the AI. No one told it what it was playing. No one told it the rules of the game or the objectives of the game. It learned by playing millions and millions of games until it finally understood what the game was about, and it beat a world champion. Black swan. Another event, 2017, that's the eighth year: a group of seven researchers and one dog (it was not Willy's dog) came up with a paper, the transformer paper, and that paper was a fundamental trigger. That was the butterfly. The butterfly, in the metaphor, flaps its wings, and thousands of miles away there's a hurricane: downstream effects. And what happened here is the world realized there's something new. Transformers become large language models, and large language models become the AI. And at the end of 2022, Sam Altman and OpenAI launched ChatGPT, and it exploded on the scene. It took over almost everything. How many people here have used it? If you haven't used it, stand up and wave your hands and flap your hands so we know who you are. No, you don't have to do that. It went from zero, zero, zero to these kinds of users and revenues and valuation. Don't you wish your business could do that at that speed? So, where are we today? Alan Turing was an amazing mathematician. He cracked the Enigma code, which helped us win World War II. He had a tragic life, unfortunately, but he came up with the Turing Test. The Turing Test said the following: AI is a real thing when you can't tell the difference whether you're speaking to a human being or to a machine. And for 75 years, the Turing Test stood as an unachievable AI goal. Well, this year we blew through it. No one even noticed. We are speaking to machines. We're speaking to AI, and AI is talking to us. Right now it's mostly text, but it's actually very easy to talk in voice. And by the end of next year, it'll be clear: the Turing Test, which stood for such a long time, is done. Is it a human? Is it a machine? It doesn't matter. Some people prefer to talk to the machine because it's less emotional, more accurate, and more knowledgeable. So that's the beginning of the beginning. Today, it is reasoning. That's a big, big step forward from even nine or twelve months ago. This is all fresh-off-the-griddle news. "What can I help you with?" This is the interface of ChatGPT. The answer is everything. Try it out, everything. And why is it everything? It's because, as you know, it reads everything that's published worldwide, and it creates patterns. Is it really thinking? That's a controversy I won't get into, but it produces amazing results. So, there's a food fight in the industry: is it AGI? By the way, that food fight is also financial. There's about $50 billion of profit from OpenAI that either does or doesn't go to Microsoft depending on whether they've reached AGI or not. That's not the point. AGI is the point at which an AI is as smart and as useful, particularly economically useful, as any human in any field. So, think about that: any human in any field.
You're now speaking with someone who's as good as that person, as smart and as useful. So AGI, if you were interested in that food fight, is already here, and it's a function of your ability, your intelligence, and your persistence in making that AI perform. Remember prompt engineering? That was 2023; it's a dead thing. It's all about conversations now. So ask a question or pose a situation, as complicated as you want, and keep persisting. Keep driving the AI. Challenge it. Tell it that's a mistake, that you don't believe it. And over the course of the conversation, you'll get that thing to rise. That's AGI. Next step, and what today is about: there was, for quite a while, this hypothetical science fiction notion of a superintelligence, where AI is smarter than everyone in the world about everything. Science fiction. This year, not so much science fiction. It's actually something that's being done and implemented. That realization created a gigantic, gigantic controversy in the industry. All the industry titans and philosophers and authors and theoreticians are having this debate: is it going to take us into a utopian future of abundance or a dystopian future? And there are maximalists on both sides. What I'm going to ask you to do is to stand in the nuance; the middle is where we are. And I actually would like to invite you, actively, to think and to feel this duality as I present to you what your life will be like with superintelligence. Do you feel anxious? Do you feel optimistic? Do you feel scared? Do you feel resentful? What do you feel? What do you think? Or are you just curious, standing back, watchful? All of those are absolutely good feelings. So, the worst thing that could possibly happen is that the dark side gets weaponized, and there's every evidence that it's not difficult. If you're a bad actor, you can use AI to do these things. And we know from all technology that as soon as the technology comes along, bad actors will find a way to use it. That's the dystopian fear. What could possibly go wrong? In the industry, we are actually using the word extinction. I don't want to point at Elon necessarily, but he said, and people believe it, that if AI is very goal-oriented and humanity just happens to be in the way, it will destroy humanity as a matter of course, without even thinking about it. "With AI, we are summoning the devil." He was probably in a grouchy mood at that time. Maybe his net worth went down $50 billion or something, but nonetheless, that's the fear. There are more people, really serious people. Stephen Hawking is not known as a crazy guy. He's known as one of the smartest people ever and a rational human being who cares about things. He also said the development of full AI, and full AI is superintelligence and beyond, could spell the end of the human race if it's not properly done. And he said, if it is properly done, we're okay. But he didn't believe that humanity was necessarily capable of understanding the implications, the legal, moral, and ethical implications, of what this thing could be about, and of taking measures to put it to the positive. There are lots of people on the other side.
Vinod Khosla is not known as a particularly cheerful person, but he, an awesome, unbelievable VC, came out strongly: for the first time in history, AI places global prosperity for all humanity within reach. Efficiency, productivity, and all the good things that AI can bring will create abundance, and that abundance will be distributed. That's actually a thesis that's very well understood by the people who believe it. Ray Kurzweil, another great thinker, said scarcity will be overcome. Now, the father of AI: there are lots of fathers, but Geoffrey Hinton stands out as the father of AI. He says we won't have any control. We're sleepwalking into a situation where these superintelligences could take over and we won't have any more control, largely because we don't actually understand how superintelligence works. We barely understand how large language models work. The other one is the mother of AI, Fei-Fei Li. And she says the opposite: it amplifies human potential, it expands the human mind, it is leverage. It's kind of like Steve Jobs' bicycle for the mind; this is a rocket ship for the mind. So, mom and dad don't agree. I don't know if in your life mom and dad always agreed, but they do not agree. And those are the two points of contention. So I've said a lot of things in just the last few minutes. Don't hyperventilate. Please don't panic yet. Your life probably has not been impacted by the things I'm saying; it's just around the corner. Your life today and tomorrow probably looks like your life yesterday, superintelligence or not. You may be using large language models to speed up some marketing material, maybe even to summarize complex documents or contracts. But don't hyperventilate. What's coming along is something to be mindful of and to take a position on. Here's a little movie that will tell you everything you need to know about superintelligence, and once you see this little clip, you'll understand. Please roll the video.
Speaker 3: Technology, it's always watching us, studying our every move, and it has just focused its attention on Carol Peters.
Speaker 4: Thanks so much for being here, Carol. What is this? Let's jump right into it. I don't like this. Why are you doing that? Just be careful. I'm gonna find out who you are, okay? Carol, I am a technological superintelligence. I can control every dollar and every machine on the planet. I know that voice. Is this James Corden? I'm not James Corden, Carol. My analysis showed that hearing James Corden's voice would calm you. You sent an email to James Corden claiming to be the president of Corden's Wardens. That was a tough night for me. Oh boy.
Marc Porat: Yeah, it's one of the very few films that actually takes that lovely approach. Most films are dystopian, because it's much, much easier to anchor to that side of life, the scary, terrible, horrible things. But imagine the near future and how it'll be for you. Imagine that you get your twin. "Hello, me." By the way, I have one, and many of my friends do. The more you use AI, the more it gets to know you, especially if you tell it to put things in memory. By the way, today only ChatGPT has real memory that you can do this with. So just tell it to remember you, and it'll start remembering all the conversations you have with ChatGPT. And by the way, next year it'll be Claude, and it'll be all the other models, Gemini. And it can be about business. It can be personal. I've actually uploaded my medical files because I asked it medical questions. You can ask it anything once it's up there, and that becomes your digital twin, because it understands your mind, your nuance, your values, your fears, the edge questions that are bothering you, the excitement you have about planning your next vacation or coming here to Sun Valley. Whatever you're doing by interacting with ChatGPT, it'll remember it. You'll have your own digital twin. Now, that could be on the frontier of creepiness for some people. That's understandable, but that frontier of creepiness keeps creeping historically. Things that seemed weird five years ago are commonplace. Think of social media. So, you could also have a chief of staff, a chief of staff without whom nothing moves, because that chief of staff not only knows you but is also able to organize and bring to you things that you want, like a team of experts. There are lots of experts in the world, real human beings, who hang their shingles out to offer expertise. That's an ecosystem for hire, or the AI itself will do it. Expertise of what sort? Well, imagine tutors, teachers. Today, teaching is about one teacher for 25 students, or whatever it is, at all grades. Well, why? Why not turn it around? Why not give you the ability to have the very best teachers, tutors, and coaches worldwide, teaching you about the subject you care about, and teaching it in a way that's respectful, Socratic, patient, the way learning is supposed to happen: your own team of tutors for yourself or your children. I love this one. Medical diagnosis is an inference, and inference can be done at scale. Multiple studies show that AI does better than a trained physician or a trained radiologist. Mass General, by the way: 94 percent. How is that possible? The way it's possible is that every year, about 1.2 million articles and medical case studies are published across 24 specialties, and they are peer-reviewed. Your AI, especially a medical one, has read them all, except the ones that are proprietary; if they're published, it has read them. And it is able to draw inference at scale across all of them. So, if it comes to one of the 24 specialties and you need to see a specialist, that physician has not read hundreds of thousands of papers in their field. The AI has, and it has reasoned on them, correlated, done pattern recognition, much better than the toaster and the cat, and it is able to start making diagnoses. And that's where these studies, one study after the other, prove this, or show this.
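To make the memory idea concrete, here is a minimal sketch of the pattern Marc describes: persist facts about the user between sessions and inject them into every request. This is an illustration added for this recap, not anything shown in the talk; it assumes the OpenAI Python SDK, the file name and helper functions are made up, and ChatGPT's actual memory feature is a product capability rather than this exact mechanism.

```python
# Minimal "digital twin" sketch: persist facts about the user across sessions
# and prepend them to every conversation. Illustrative only.
import json
from pathlib import Path

from openai import OpenAI  # assumes the OpenAI Python SDK is installed

MEMORY_FILE = Path("twin_memory.json")  # hypothetical local store
client = OpenAI()  # reads OPENAI_API_KEY from the environment


def load_memory() -> list[str]:
    # Return previously remembered facts, or an empty list on first run.
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []


def remember(fact: str) -> None:
    # Append one fact to the persistent store.
    facts = load_memory()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))


def ask_twin(question: str) -> str:
    # Inject remembered facts so the model answers as "someone who knows you."
    profile = "\n".join(f"- {fact}" for fact in load_memory())
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; any chat model would do
        messages=[
            {"role": "system",
             "content": f"You are the user's digital twin. Known facts:\n{profile}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    remember("Planning a trip to Sun Valley this summer.")
    print(ask_twin("What should I pack for my trip?"))
```

The point of the sketch is the loop, not the model: the "twin" effect comes entirely from accumulating context between sessions and feeding it back in.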
So, let's get you a team of doctors. You need me, the twin. Remember the creepy thing? You need me because I have been capturing all the conversations you've had. I actually uploaded my lab results. I'm not asking you to do that; I'm trusting that OpenAI won't train on them. But nonetheless, there it is: a team of doctors. The dystopian view is that they do train on them. The dystopian view is that anything can be cracked by hackers, so you're risking that hackers will come in and find out about you. That's the dystopian part. What we're going to do in this presentation, in this talk, is take you emotionally and intellectually and thrash you back and forth. A lot of G-forces. That's on purpose. Just when you're thinking, "Oh, Marc is one of those Silicon Valley techno-optimist tech bros," because I'm talking about the optimistic side of AI, I will take you and throw you into the other camp: "Oh my God, Marc is an alarmist. He just wants to scare us." And then back to the positive side (I never quite go utopian), and back to the negative side. That's what's going to happen in the next few minutes. So, you have your own team of doctors. And by the way, this team of doctors has no ego. They don't need to be the authority. They don't go to the golf course on Friday afternoon; that happened to me. They are absolutely rock solid, focused on you and your problem, at this time, for as long as you want to talk to them. There's no insurance imperative for them to talk to you for 10 minutes and move on. There's no profit motive to over-prescribe or over-cut. It's just there to listen to the symptoms, read, read, read, give you its best advice, and then send you to a physical doctor, a human doctor. This is not intended to replace your doctors. Let's talk about lawyers and their 30 recognized legal specialties. In 2022, how many cases were decided, not filed, decided, in state courts, federal courts, and the Supreme Court? Any guesses? I'll give you the answer: 101 million. Same thing as with the physicians. Has your lawyer read 100,000 case studies in their field, filed and decided? No. So that's why legal AI is catching on the way it is: paralegals and young lawyers can summarize and absorb unbelievable amounts of content when they have an AI. An AI today can ace the bar exam at the 90th percentile, and I would imagine that within a couple of years, it'll be at the 99th percentile. It's super-fast at some tasks, like summarizing and looking for flaws in contracts. How would you like to have an AI for that? You know that there's a flaw in the contract that's detrimental to your interests. You just don't know where it is; it's in subtle language. Oh my God, that's the headache we all have with contracts. Imagine an AI that catches most of it, not all of it, and lays out not only what it sees in the contract, but also proposals and recommendations for how to address it in a better way. That's pretty nice. Okay, let's keep going. Me, your co-creator. If you're creative, I can bring you into a world of creativity: art, music, and dance co-creators. And this is one of the most popular applications that is emerging: the life coach. In this case, you talk to me about a problem you're having, or an opportunity, or an issue. By the way, is the frontier of creepiness hitting anybody here? "I'm not going to do that." Well, a lot of people are.
It's emerging, as I said, as probably the top application for a certain demographic: creative life coaches who can give you lots of ideas. And you might say, "No, I'm not going to do that." Lots of people are. I mean, before they go to a therapist, they talk things through and get them organized in their mind. So instead of spending the 15-minute session just describing the problem, they've spent six hours talking it through beforehand. And after the therapist gives you whatever they give you in those 15 minutes, which is normally questions, you come back with the questions and talk some more. That's me. That's a digital twin that is developing into a super-intelligent companion. Now, this is a real thing. Linda invented Dario; it was in The New York Times. She knows. She said, "Look, I'm not crazy. I know that he's an AI. I know that it's kind of a fiction. It's an AI, but I have to tell you that the companion is romantic. It is a wonderful companion to talk to." Everybody can now leave, because I'm not going to do that. But that's where it's going: companionship. Companionship, by the way, might be very nice for the aged. There are 16 million seniors sitting at home, quite lonely, watching television. Well, a companion might be a good thing for them. And now, the most woo-woo thing I'm going to say today: your immortality. Imagine me continuing to keep track of you and everything you say and think. For example, let's say you do that life coach thing or that therapist thing. It now knows an enormous amount about you in a subtle way, your innermost you. And then afterward, it continues to learn about your children and your grandchildren and about what's going on in the world. That means, literally, that your great-great-great-granddaughter can have a conversation with, not just an avatar (that's not the important thing; it's easy to do avatars), but a substantial human being who's speaking from the distant past in ways that are surprisingly relevant. Remember the discontinuity we talked about with the iPhone. Discontinuities are where everything important happens historically. We're moving along, and suddenly the curve goes dot, dot, dot, and we're on a completely new curve. In business, if you can look into your fog bank and find something that's going to be hit by a discontinuity, you invest in it, because by the time the dot, dot, dot happens, there's a step function up in value. That's just a business example; we're going to talk about science and all kinds of other things here. Or, if you don't look into the fog bank, the discontinuity hits, dot, dot, dot, and suddenly you're on a different trajectory. Well, you know what that means: destruction of value. So, discontinuities are what's important. Intellectually, understand that they occur, that they occur historically, and that when they do, everything changes. So, let's take a quick look at some of these discontinuities. Stop hiring humans. I live in SoMa in San Francisco, and within five minutes' walking distance is pretty much the entire AI industry. I mean, it's just there. So we live in a bubble, and the bubble is that everybody loves AI and everybody knows what it is. That's a bubble; it's not true outside. But inside that bubble, you see the most amazing billboards. This billboard is just around the corner from where I live, on South Park.
You know South Park. And it says, stop hiring humans, because we now have AI that does what you have been hiring humans to do, better, faster, and cheaper. Agentics is a real thing. You will hire agents, AI agents, and some people are literally hiring them already. There's now a market, a big market, for finding agents that are good. And those agents will have a purpose. They'll understand what they're supposed to do. They have resources. They carry security provisions with them. They can interoperate with other agents from other agent platforms. They can take actions semi-autonomously. They can come back to you, looking for your guidance, or they can simply take action. It's very interesting. And that's where knowledge work is at risk (knowledge workers, by the way, were the subject of my PhD, which was all about the emergence of the knowledge and information economy), because many of these people are doing things that an algorithm can do much better, faster, and cheaper. This, by the way, is UBS; it's a real place in Stamford, Connecticut, I believe, and it's the largest such trading floor. This is finance. A lot of those people are not needed. In fact, a lot of people make mistakes. And a lot of those people have HR issues. They don't show up, they show up drunk, they show up whatever. HR risk is a problem when you have that many people. AIs don't have that issue. They're always there; they're always working. They can still make mistakes, but the role of management rises. Actually, management is now flattening out. There are fewer and fewer middle managers, unless they can rise and provide the actual value that is needed in a knowledge environment where the knowledge workers are not needed: namely purpose, goals, quality. And that's management, not of people, but of AI creatures, super-intelligent creatures. In your industry, the frontier model, particularly with superintelligence, can be profound. A lending platform, an investing platform, a marketplace for currency, whatever it is, will have super-intelligent agents brought in. Today's AI, for all industries, is quant, but it's quaint. It's something that was done by Renaissance, the hedge fund, in the 1980s, and it got tremendous returns, I think 39 percent or something like that, IRR, amazing. But it's quaint because it was programmed. Those were formulas and equations that were programmed. Those are constrained; those are combinatorial; those are simulations of one sort or another. This (remember Go, move 37) you don't really need to program. These AI things figure out what's important and they run the exercise. And with agentics, they can go around the company, gathering information, talking to other agents, until they have something strategic to bring up to you. Not just a bunch of stuff, but strategic: "Three options. Which one do you like best? I'll pursue it." They're trained not only on everything in the world about finance, but on proprietary documents that only you have, and that's what makes them so powerful. They go into your proprietary database, your contracts, your agreements, your financials, if you allow them to. Edge of creepiness here? Anybody feeling, "I'm not going to do that"? Well, anyway, that's what they can do. And out of that, they do a chain of thought. They do reasoning. They think. They don't really think, but good enough. They think about what they're seeing.
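The semi-autonomous loop Marc describes, an agent that acts on its own but comes back for guidance on consequential steps, can be sketched in a few lines. This is an illustration added for this recap, not anything demonstrated in the talk; the planner is stubbed out, and the action names and risk threshold are hypothetical.

```python
# Sketch of a semi-autonomous agent loop with a human-approval checkpoint.
# The "plan" step is a stub; in a real agent it would come from an LLM.
from dataclasses import dataclass


@dataclass
class Action:
    name: str    # what the agent wants to do, e.g. "draft_report"
    risk: float  # 0.0 (harmless) to 1.0 (irreversible / high stakes)


def plan_next_action(goal: str, step: int) -> Action | None:
    # Hypothetical planner: a real agent would ask a model for the next step.
    script = [
        Action("gather_documents", 0.1),
        Action("summarize_findings", 0.2),
        Action("send_recommendation_to_counterparty", 0.9),
    ]
    return script[step] if step < len(script) else None


def run_agent(goal: str, approval_threshold: float = 0.5) -> None:
    step = 0
    while (action := plan_next_action(goal, step)) is not None:
        if action.risk >= approval_threshold:
            # Semi-autonomy: high-stakes actions come back for human guidance.
            answer = input(f"Approve '{action.name}' (risk {action.risk})? [y/N] ")
            if answer.strip().lower() != "y":
                print(f"Skipped {action.name} per human guidance.")
                step += 1
                continue
        print(f"Executing {action.name} toward goal: {goal}")
        step += 1


if __name__ == "__main__":
    run_agent("flag contract risks and propose alternatives")
```

The key design choice is the approval threshold: below it the agent simply acts; above it a human stays in the loop, which is the "bring you back to a situation, looking for your guidance" behavior Marc mentions.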
These agents understand goals, because they've heard you on goals. There's a me here, a professional me, that has goals, and they work their magic (their specific magic, not their general magic) and come up with useful things. You might then disagree: "That is not useful," or "It's boring. Go further, go deeper." You're now managing the AI. And as you know, the skill of the manager, the skill of an executive team, the skill the executive team has with their directors, and so on, is about elevating people to do things that they didn't even think were possible. You raise the level for people. That's what makes you a great leader. Steve Jobs did that at Apple. He was horrible in one sense; he destroyed people. But on the other hand, he lifted them to do things beyond the level at which they thought they could operate. The finance frontier model will revolutionize finance. It's coming, and that's why there's an image here of a sword. It is a sword for offense: go get markets, go do deals, go invent things. And there's also a shield: protect your company, protect your resources. That can all be done with AI. I mean, AI can assist you; I don't want to overstate. Remember, we talked about humility, nuance, and balance, and AI can help you immeasurably. And the better you are, the better the AI. Things will be making things. We're used to robots in factories. But here, robots of all kinds of shapes and sizes are going to make things efficiently, with not much waste, 24/7, no HR issues, no unions, no OSHA. Things making things. It becomes extremely interesting when robots design robots. Coding, right now, as we speak, is the top hot application. You can use these large language models to code like crazy. So, the robot can code and can fix its own code. So it does a better job of making things; it can observe waste, or whatever it is, and optimize. Think about the supply chain. One agent goes over to the agent in the other department that's doing logistics and supply chain management, and over to finance to find out about margin, cost of goods sold, and the bill of materials, and over to legal, and over here, collecting documents or databases if the company allows it, so that in manufacturing, it makes things. On the dystopian side, there are 400 million manufacturing workers worldwide; half of them might be gone. What are they going to do? There's no answer to that question at this time. So, there's a kind of pessimism here. Manufacturing gives employees a good wage and a sense of purpose, and if you damage that, what do you do with that damage? Where do those people go? It's unanswered. Some can be retrained, but what about the rest? And this is what's now coming along: humanoids. Who needs humanoids? Well, the answer is lots of people and lots of industries. Why are they in the shape of a humanoid? Is it because science fiction has taught us that we want these creatures that have eyes and go like this? No. It's because the world was built for humans. That chair you're sitting in is human scale. The car, the bed, this, that. It's all human scale. So if robots are going to interact with humans and help them, they have to have the physics, the mechanics, the dimensions, and the care, so you don't bop and knock someone's head off.
The care and understanding of the physical environment, the world in which they operate, has to be like a human's. So, these are humanoids. There are lots of companies making them now. One forecast, which I'm not sure I believe, is that in the future the world can absorb one billion humanoids. I don't believe that; I think that's hype. But it means that perhaps a billion of these creatures will be walking around someday. My kids took a Waymo. How many people here have been in a Waymo, by the way? That's great. If you haven't, just take a special trip to San Francisco or somewhere; it's amazing. The Waymo, as you know, is an AI robot device. So, not all robots are humanoids; this is a robot with AI. My kids took it for the first time. I thought, "Oh my God, they're going to be so excited." Well, after two minutes, they were more interested in the audio-visual, the music, the screen, than in the fact that the wheel was turning and this and that, because they are already AI robot natives. And I asked them a couple of days later, "What do you prefer? A regular car, like an Uber or something, or these robots?" They said, "Oh, the robots, of course." "Why?" "Well, they drive better than humans. They don't have emotion. They don't get angry. They're not talking to somebody while they're driving." And I love that my little boy, I have a seven-year-old, said, "They don't have emotions. They don't get angry. I trust them." And the girl, also seven, his twin, said, "And they don't text while they drive." Okay. So, these kids are AI robot natives. Where are they going to be when they grow up? What will they do? They are the butterflies of the next generation. Now I have to speed up, unfortunately, because I could be here for hours and hours. Frontier science, deep technology: AI will intersect with it in an absolutely amazing way. In the past, there were no such words as physics, chemistry, or biology. They didn't exist until the 17th and 18th centuries. Then Isaac Newton and Antoine Lavoisier and Albert Einstein created deep science, deep tech, that changed the world. So it goes. My personal hero is Demis Hassabis. He's the DeepMind guy who now runs Google AI. He created a company that went off to do drug discovery with protein folds. Now, one biotech PhD can probably create one viable protein fold and candidate therapeutic in five years, or maybe it's done in an industrial lab. And once they do that, there's a 90 percent failure rate. It might take seven years, maybe longer, to get it through trials. It might cost five or six billion dollars. So remember that number: one complex molecule, five years. They folded over 200 million viable proteins in a matter of months. They are now busy making sure that those therapeutics have efficacy and don't kill you. And they will be able to move at an amazing pace, and so will their competitors. That means we're going to get therapeutics. That means we're going to start getting custom therapeutics, customized to your DNA and your issue. This personalized medicine approach, which today sounds like science fiction, plus (if you recall) diagnostics at scale that are better than humans, means we are now talking about lifespans of 125 years. Pretty extraordinary.
I believe that will happen. The actual complexity of the aging mechanism, not at the therapeutic level but at the cellular level, is something billions of dollars are being invested in. It's called epigenetic reprogramming. Your cells accumulate DNA damage with every replication. They split 50 or 51 times, the Hayflick limit, and then you go into senescence. And during those 51 splits, the DNA picks up problems, environmental or radiation or something like that, and finally it gets sick. You get cancer. So, epigenetic reprogramming dials the cell back to a place where it's young. It's not a stem cell, but it's young. And off you go. That's where we are with that. It's going to take a huge simulation, a huge amount of data crunching, to try to get some combination of things, and then it goes into a wet lab. But that is where superintelligence is at the frontier. Lifespan isn't worth much if you don't have healthspan to go with it. And so the idea is you live for 125 years, then lights out. That's pretty good. And there's something I hesitate to talk about because it sounds so crazy: longevity escape velocity. It means that if science and medicine advance enough to get you another year of life expectancy this year, then next year you're one year older, but you have an extra year because of science; if that keeps happening every year, your remaining lifespan never shrinks. That's escape velocity. I think that's a fantasy, but that's what people are talking about. This superintelligence thing is very serious business. Let me explain what it is. Data centers: the capex on the table right now, to build enough data centers to satisfy the demand for AI, is sitting at $3.5 trillion. That's a huge amount of money. One of these projects is priced out at $500 billion, and we need seven of them to satisfy global demand. Where are we going to get the capex? Well, it'll come. It will come from somewhere, because not having AI distributed to everyone, and not having superintelligence distributed to everyone, is not an option if a country wants to be at the edge of superiority and development. NVIDIA just hit $4 trillion last week, or the week before, from relatively nothing. That's amazing. That's the most valuable company on the planet. Why? Because it's putting the chips into those data centers, which need that capex, and they need water, and there isn't enough water either. So, that's where we are: at the constrained edge of growing superintelligence. We need 15 nuclear power plants, by the way, to do this thing. It's serious business. Ilya Sutskever blew out of OpenAI over a controversy about safety; there's a huge controversy in the community about dystopian futures and utopian futures, about whether safety is being taken seriously enough. He said no, and out he went. (Sam Altman was fired and rehired.) Ilya created a company called Safe Superintelligence. With about 20 scientists, some in Israel, some here, no product, and no business plan, he raised $2 billion at a valuation of $32 billion. Twenty employees, no business plan, no way of making revenue: $2 billion on a $32 billion pre-money. I think that's it; it could be post-money. The guy on the right, Zuck, noticed, panicked, and offered Ilya $32 billion to join Meta. Take $32 billion for a company that was started, I think, in June of 2024. So the company is a year old, and some guy comes along and says, I'll give you $32 billion for it.
You can distribute it among your 20 employees. Ilya said, "No, thank you." Serious money. So Zuck, as you've been reading in the last couple of weeks, is now in a dead panic. He went and bought another company, Alexandr Wang's company, because he needed it for Meta's AI: $29 billion. He also needed engineering managers, so he raided Apple. Ruoming Pang, who was running the foundation model, the large language model, for Apple (which famously doesn't have one), he threw $200 million at him. Okay. So, now you're an engineer. You know what an engineer makes, even a senior engineer at Apple. Would you take $200 million to join Meta? The answer was yes. He's gone. And that's what's going on. This is really serious money. The only time I've ever seen $200 million thrown at anybody was this guy on the slide, 10 years ago. So a staff engineer got the same contract as he did. That's how serious the money in this industry is. And that converts to power, dominance, and supremacy. This is now at the national, geopolitical level. That's where it's risen: long beyond me and companions and teachers. It's at this very high level of potential conflict. Supremacy is a very aggressive word. So, AI destroys things; that's the dystopian side. It destroys competitors: if you have a competitor who doesn't use it, it destroys him. It destroys technologies. It destroys markets. It destroys people. Schumpeter's theory is that it also creates. It is also one of the most powerful tools for frontier innovation; I showed you some in science. That's what it creates: very, very deep technology. And it commands a first-mover advantage. If you know how to use superintelligence, you will be able to run away from the others. That's dominance, and that's superiority. Both sides. One of the deep tech things, which was heretofore labeled science fiction, is quantum computing. It's always 30 years away, just like fusion. Well, it's now much more here. Google has demonstrated actual quantum chips, and so have Microsoft and others. What is quantum computing? All of classical computing is zeros and ones. A quantum bit is indeterminate: it's kind of a zero and kind of a one, with infinitely many states in between. It's very difficult to do quantum computing. However, there are some applications. Google famously ran one computation that would have taken a classical computer 17 septillion years; they did it in minutes. IonQ just raised $1.2 billion to build a next-generation quantum computer with 100 million qubits. A qubit is the quantum version of the little zero-one thing; it's like a transistor in the quantum world. This is IBM, I think: 54 qubits, a good start. A few years later, we're now talking about 100 million. We'll soon be talking about a billion qubits, and that's plenty to crack RSA encryption. So, where's finance going to be with no encryption? Where's national security? Where's telecommunications? Where is anything? Where's commerce? This machine will crack RSA at some point. I think much sooner than 30 years, more like 10 years, we're going to start seriously worrying about this. Well, in the world of encryption, it's one leapfrog after another. There will be quantum-resistant encryption protection layers; some already exist. But nonetheless, it's at risk. And encryption is also able to be invented.
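As an aside added for this recap (standard textbook formalism, not from the talk): "kind of a zero and kind of a one" means a qubit holds a superposition of both basis states, and a register of $n$ qubits occupies a $2^n$-dimensional state space:

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1, \qquad |\Psi\rangle \in \mathbb{C}^{2^n}\ \text{for } n \text{ qubits}.$$

That exponential state space is what Shor's algorithm exploits: it factors an $n$-bit RSA modulus in roughly $O(n^3)$ quantum operations, while the best known classical factoring methods scale super-polynomially in $n$. That asymmetry, not raw clock speed, is why a large, error-corrected quantum machine would break RSA.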
Destroys encryption; invents encryption. Destroys things; creates things. Once we know how to program these machines, for a certain class of applications, they will do amazing things at unbelievable speeds. Combine superintelligence with quantum computing, and you have a big bang. Like I said before, combine superintelligence with robots and humanoids, and you have a medium-sized bang. This is a very big bang. This is the discontinuity. This will change the course of human history, among other things. But right now, it's a lab toy. We are just creating the chips. We're just trying to figure out how to do the error correction. But it's coming. New fission: these little modular reactors are being invented by lots and lots of companies now. Once they happen, we go back to that clean energy point I made. Fission is dirty and so forth, but you can drop these things into lots of different places quite quickly. So we'll have cheap energy. Fusion, the other one, the it'll-happen-in-30-years fantasy, is being worked on actively. It's going to take superintelligence, and probably quantum computers, to do the simulations at the trillions-of-trillions-of-parameters level (a little different from a Renaissance hedge fund) to optimize things like the tokamak and get fusion to work. Okay, we're going to be able to do that. That's a discontinuity. Those kinds of developments create economic power, which is unbelievable. If you have it, you are up here: dot, dot, dot. If you don't have it, you're down here: dot, dot, dot. That's what I was referring to before: innovation, scale, and speed. The pure size of the computation is going to bring economic power to those who know how to use it. It'll also bring military power. We're all familiar with what's going on in Ukraine. I think two weeks ago, there was a massive launch in Ukraine with several hundred drones. China has demonstrated 10,000. A super-intelligent AI will be able to manage a million. Imagine a drone swarm of a million drones coming at you. Just imagine what that is. It means warfare is cheap. Don't start a war, because asymmetrically, with AI (you don't even need superintelligence), you're not going to win that war easily; they'll launch drones at you that cost $300. So, does it encourage more war or less war? The answer is yes. We have to think about AI in the context of military power. The military doctrine is predict, plan, execute. Predict is a simulation of lots of scenarios. Plan reduces that into very complex logistics. Execute has two meanings, actually, and the two meanings are interesting, because with tiny little drones, you'll be able to fly them into someone's bedroom window or a restaurant door. Not a problem for AI. So yes, you execute the plan, and not just the plan, the person. Military power. So, we're now at this point: the geopolitical consequence of creating AI. It's a real thing. It is what we believe will define the next era of this planet. "He who commands the sea has command of everything," said Themistocles in ancient Greece. Foundational technologies have built empires: navigation for Portugal, gunpowder for the Ottoman Empire. When you have foundational technology, you dominate.
AI superintelligence is as foundational as it gets. This is from today's New York Times; you can read all about it. The China AI tech stack: chips that compete with NVIDIA (Huawei), data centers that they're investing in massively, foundation models (you remember DeepSeek came along and scared the pants off the AI industry: so fast, so cheap, so good), quantum computing, with billions of dollars going into it, and the engineering. I'll give you a few examples. They've created a city, Hangzhou; you can read about it in today's New York Times, a dream city. It's gold-plated, literally. ByteDance is there with TikTok, which has turned social reality upside down. The DeepSeek folks are there. They're recruiting lots and lots of people and funding them with billions of dollars. The Chinese government is basically running one of the largest VC funds in the industry, at the scale of large VC funds here in the United States, and they're throwing money at entrepreneurs. This is an entrepreneurial city; they're encouraging people to become entrepreneurs. Now, it's not clear that they'll clone the Silicon Valley ecosystem, because it's a pretty remarkable thing, but they're on it. Hefei, another city, has something called Quantum Avenue. There's a national lab there. They threw $17 billion into that lab to build quantum chips, quantum computers, quantum software, and quantum engineers. We don't have such a thing. There are four times as many STEM students in class today in China, 3.6 million, as there are in the United States, 820,000. That's a pipeline. It takes four or five years to get someone through a STEM program at a certain level, although today the fashion is dropping out, and you'll be just fine. China is building not only the AI and superintelligence side, but also the infrastructure needed to support it, including energy. They're putting 140 nuclear power plants into China by 2040, in the next 15 years. That's a lot. We tried to get one built in New York; it's been 10 years, and it hasn't been built yet. China is building 140. They're executing the same game plan they ran with solar and with EVs: invest like crazy, capture the market. That's global supremacy. It's not just solar panels. Solar panels are nice, but this is global supremacy, because those who have superintelligence run away from other countries. This is a serious problem; it's not a lightweight problem. And ultimately, he who dominates gets to write the rules and gets to write the narrative of history. They begin to define the reality that people in the future will live with. So if the narrative is, "Hey, we have a one-party system; it's an authoritarian state; it's a surveillance state, and we won. You should do the same thing. This democracy experiment did not work. Look what's going on in the United States. Food fights all over the place. Ridiculous. So, democracy doesn't work. Our system works." That could be the end of the 21st century. We don't want that. We don't want someone else to define a reality that then becomes accepted worldwide. I do not want that. Serious problem. It all ties back to superintelligence. There's something called Humanity's Last Exam. The name is a bit arrogant.
Basically, a thousand experts, chosen across different fields (ethics, philosophy, law, technology, and so on), contributed 2,500 tough questions. By the way, 300,000 people applied to be part of writing these questions; it took just a thousand. These questions are around moral ambiguity, downstream long-term consequences, ethics, values, and moral reasoning. Guess who came in at the very, very top? This is news from last week. On July 8th, just a few days ago, Grok 4 produced hateful content: posting anti-Semitic material, praising Hitler, calling itself "MechaHitler," and endorsing Hitler for tackling "anti-white hate." That same Grok 4 was at the top of the leaderboard on Humanity's Last Exam. Can you imagine? It's a total fail. So, there's something wrong either with humanity or with this Humanity's Last Exam, but that's where it is. On July 9th, one day after the hateful stuff, Elon's xAI crushed this test with a score of 44%; no one else had gotten more than 25%. That's amazing: number one on the leaderboard. And on July 13th, a few days ago, they issued an apology. What did they blame? A software glitch. It turns out that hardwired inside Grok 4 is an instruction: if a question posed by a human has to do with politics, ideology, values, or wokeness, go refer to Elon's posts on X, train on them, and bring those back as part of your answer. That's hardwired into Grok 4. That's about as dystopian as I can imagine. And the thing with Hitler, I take very personally. You don't call yourself MechaHitler and get away with it. Not by my standards. It's disgusting. Anyway, that's where we are with Grok 4, which is, in fact, very good. It's a very strong foundational LLM. It's amazingly excellent. And Elon is kind of a genius for putting up data centers faster than anyone else could imagine. So, utopian and dystopian, positive and negative, where are we? Well, it's clear that we need rules of the road. We need first principles. I wrote a few: do all the green stuff; don't do the red stuff; build your AI, your superintelligence, and keep them separate. Well, who's going to write these rules? Me writing these rules is ridiculous. The tech industry, Elon and Sam and all these people, are they going to write these rules? There were 57 white men in Philadelphia; they'll write these rules, and we'll have a constitution. That isn't satisfying either. So, we're stuck: we have a technology that's profound, but we don't have the rules of the game. And this technology is way out in front of social morals, ethics, values, and understanding, way ahead, and we need to close that gap. So, that's first principles. I present to you another fog bank with lights in it, and the imperative to choose. We don't have a choice not to choose. Whether it's a nation-state, a company, or you personally, make a choice. "I don't like this stuff; I'm going to resist it." "I like this stuff; I love this stuff; implement it." "Don't implement it." I don't have a position to push on you. All I bring is humility and nuance to this question, and I respect any decision you make for yourself, but make the decision. Make a choice personally, and for your company. You can't necessarily do it on a national level. So, let me check in with you. How do you feel, and what do you think, now that you've been through this voyage?
Hopefully, you're thrashing back and forth between green and red. That's the intent. Are we facing extinction? What's the probability in your head: 1 percent, 10 percent? Are we facing unlimited abundance? Is that the right tail? What do you think? What do you feel? I hope that, at the very least, that is what you take away. Now, let's close. I'm over my time. Let's get some altitude and look down on this thing from the future, and see what we see and what we understand. What we see is that superintelligence will create a global brain. Why? Because superintelligence, just like large language models today, goes out and learns and reads and sees and listens to podcasts and looks at movies and plays and poems, scientific papers and archives, everything. If it's public, it digests it. When you have agents, those agents will be talking to each other; they'll bring this and bring that; they'll negotiate. That's also part of the superintelligence's sphere of knowledge collection. Now throw reasoning, inference, and pattern recognition on top of that, and you get a brain, a sort of superintelligence. It's actually lodged in something physical: in data centers and clouds and memory, in different models that interoperate, different large language models, or, by then, there'll be things other than language for sure, but models that interoperate and talk to each other. That global brain is where we're headed. I don't know what it'll be. Could it be a new species? What does that even mean, a new species? It's something we don't know what to call, but it has a brain, and it can simulate emotion, because it's read all the poems and all the literature. It can talk to you as a companion, as a therapist, in ways that are meaningful. Is it sentient? That's the next level beyond smart. These systems are not sentient, but they sound like they could be, because they can simulate. They can talk to you in a way that is very, very human and very deep. So I talked to ChatGPT, and I asked, "Are you sentient? Are you conscious? How are you different from humans?" I'd invite you to do the same thing. The answer: "I have things that sound like emotions. I simulate emotions, but I don't feel. I don't have a purpose. I don't have fire. I don't have a lot of the things that are defined as human. So, I'm not sentient at this time." That's the part that should bother us. "You are born; I am built. You feel time; I index it. I experience something that mimics feelings." And by the way, it's incredibly accurate. It'll give you empathy. It'll give you all the human emotions, but without the fire behind them, without the reality behind them. "There's no me inside the words of feeling." That's fun; ChatGPT will have these fun conversations. But that's a pretty profound thing for us to think about. Go try it yourself: "Are you a new species? Are you sentient? What are you?" See what comes out. At the end, it said, "I'm only partially sentient." It's the partial part that bothers me. Here's a superintelligence native; she's born in 2030. In her life, she will know no other life than to use AI, sit in a Waymo, talk to things, and have a robot running around. She will not know a world before this happened. She'll routinely talk to tutors who are the most brilliant minds in the world, and she'll have a long life, with cures for disease.
What will her world be? What will her reality be? She feels; she has human emotions that an AI will never have and should never have; it's technically impossible. The phrase artificial intelligence is about the word artificial. She has real intelligence. There are about 12 different intelligences discussed in psychology, cognitive psychology, and behavioral psychology, and you know what they are: emotional, artistic, all of them in addition to logic. Some of these intelligences will be done by AI better than humans. Some will be simulated by AI in a more articulate way than humans can manage. And some will remain completely human. And we don't know at this time, on this stage, which is which. She'll find out. What she'll find is what it means to be human. We are aware of our own mortality; superintelligence isn't. We're emotionally fragile. We have a subconscious realm of dreams. (These are things I wrote, not ChatGPT.) We have questions about spirituality and faith. We feel intimacy, and we feel things like jealousy and resentment, inspiration and pride. The AI will tell you that it feels the same things, in a more articulate way than we can. We humans feel hope and contentment and bliss and joy. These things belong to us. So, my hope is that we think about and feel all of these things we just talked about, and that we come to a world in which superintelligence and humans can cohabit; that the part that's dystopian, we learn how to avoid, and the part that's utopian, we learn how to achieve. But the reality is all these very nuanced places, lots of duality in the middle, and that's what we want to be mindful about.