Cyril Maury is a Partner at Stripe Partners, where he leads strategy and innovation work for global technology clients including Meta, Microsoft, and Spotify. Based in Barcelona, he specializes in integrating social science and data to guide product strategy and business model development.
Late in our conversation, we discuss these two pieces: “When place matters again: strategic guidelines for a splintered world” from May 2025, and “Interpreting Artificial Intelligence: the influence and implications of metaphors” from September 2023.
I always start these conversations with a question I borrowed from a friend—someone who helps people tell their stories. It’s a big, beautiful question, and I love it so much that I tend to over-explain it before asking. But before I do, I want you to know you’re in complete control. You can answer however you like, and there’s no way to get it wrong. The question is: Where do you come from?
I come from France, which is the obvious answer. But there’s more to it. My mother is Vietnamese, and my father is French, though with roots in Algeria, another former French colony. In many ways, I’m an unusual product of colonialism—a strange outcome of its complicated legacy.
Maybe because of that background, I became curious about the world early on. I grew up in Grenoble, a provincial city in the French Alps, and I quickly became interested in history, geography, and people. I wanted to see how the world looked beyond my immediate surroundings.
As soon as I could, I pursued exchange programs through university. In France, the typical path is to move from the provinces to Paris. I did that, and once in Paris, I realized there was even more beyond France itself. I spent time in the U.S., doing a year at UC Santa Barbara—an incredibly beautiful place—and then spent a few years in Latin America: São Paulo, Buenos Aires, Bogotá. Eventually, I moved on to the Middle East, to Iran, eager to explore still more cultures.
During that journey, it struck me: what if I could make understanding people my job? What could be better than being paid to do what we all enjoy—being curious about others’ lives and stories? That realization led me into the world of research and consulting. I started my career in Spain at agencies focused on understanding behavior and helping companies develop better products based on that understanding.
After Spain, I returned to France. About five years ago, I joined Stripe Partners, a decentralized agency headquartered in London. We have people working everywhere—from Hong Kong to Edinburgh to Berlin. I’m currently based in Barcelona, which is where I’m speaking from now.
Growing up, I was very aware of the absence of my mother’s Vietnamese heritage in our home. She was born in Saigon, when it was still a French territory. During the war, she left for France. I was born a few years after she arrived, in the early 1980s, a time when France emphasized full integration into the Republic. That meant speaking French and adopting French customs. My mother followed that path. She never spoke Vietnamese to me or my brother—not a single word. I speak English, Spanish, and Portuguese, but I can’t say anything in Vietnamese.
She had very few Vietnamese friends. We would hear her speak Vietnamese on the phone occasionally—mostly with family in the United States—but she would always close the door. It created this strange feeling: a culture present only in its absence. I grew up knowing that something was different, even if I couldn’t name it. As I got older, I came to understand it as a consequence of colonialism, but as a child, it simply felt... odd.
As a kid, I didn’t have a clear idea of what I wanted to be. What I did have was an intense curiosity—about people, about cultures, about how things worked elsewhere. That curiosity led me, step by step, to where I am now. I studied political science to understand ideas and ways of thinking. Then I went to business school to learn the more practical aspects of the world. Along the way, I kept seeking opportunities to live and study abroad.
Toward the end of business school, I met someone—just a friend of a friend—who had started working at an innovation consultancy in Spain. He said, “This seems like something you’d enjoy.” And he was right. On paper, it made sense. That was almost twenty years ago, and I’ve been doing it ever since.
And tell me, catch us up. You're in Barcelona. Tell me a little bit about what you're doing. What are you working on?
Yeah. So I'm based in Barcelona, a partner at Stripe Partners.
What we do at Stripe Partners is, largely, we have a number of methodologies and tools that help us surface and understand people. Originally, what we're known for is ethnography. Some of the founders are PhDs in anthropology, and we really started by trying to leverage that set of tools as much as possible.
As we grew as a company, we added other tools to give us different lenses on human behavior—primarily data science. We now have a healthy and cutting-edge data science practice.
The last pillar of what we do is design. We also have designers who do design research and all kinds of work to, one, understand user insights in different ways, and two, ensure that the understanding we develop can be used to inform digital product strategies in the best possible way.
We also have the tools to ensure that we use these insights to create something that will help stakeholders understand what it is—the human truth—we're trying to make visible. So that's what we do as a company.
Within that, my personal role involves a lot of work on technology projects, because I would say about 75% of our clients are technology companies. That means a lot of projects for Google, Meta, Spotify. In the last couple of years, much of that has focused on AI.
Some of the projects that I found particularly interesting have been about understanding how people engage differently with AI solutions in different markets. It’s fascinating, because there are so many layers of complexity to unpack.
First, the solutions themselves are difficult to understand—even for the people who design and build them. They're the first digital tools that are probabilistic, not deterministic. So that’s one layer of uncertainty.
The second layer is that their behavior depends on the users themselves. Different users can interact with the same AI solution, and it will behave differently for each of them—and even differently for the same user over time. There's this almost dialectical path between the AI and the user, which is hard to understand at scale because it’s so context-dependent.
The third layer is how users make sense of these experiences. That interpretation is shaped by cultural beliefs and narratives. As we've seen in our projects, this is deeply local. Someone in Germany, someone in India, someone in Brazil—they’ll interpret the same interaction differently because they come with different expectations.
So, long answer, but that’s the AI work: a lot of global-scale AI deployment projects.
The other major area I’ve been focused on is healthcare, which I’m helping to develop at Stripe Partners. We’ve done—and I’ve done—a lot of projects aimed at understanding what we call disease areas or therapeutic areas.
These projects are especially interesting because they require understanding multiple layers: the biology of the disease, how particular drugs work, how people experience and make sense of their conditions, how they interpret treatment, and how it all fits into their lived experience. And then, you add the complexity of the healthcare system itself, which differs dramatically between the U.S., Europe, and elsewhere.
Some of the areas I’ve worked on recently include Alzheimer’s disease and dementia—which is absolutely fascinating. Also haemophilia, which is more niche but still quite complex. And we’ve been doing a lot in obesity and weight management—trying to understand how that space is shifting culturally.
One of our clients there—this is public—is Novo Nordisk. We help them make sense of the cultural shift happening now around weight loss and weight management, which feels quite unprecedented—maybe even historic.
Amazing. I'm just going to ask the question: what is Stripe Partners?
Yeah, it’s a good question. That's perhaps the hardest question so far.
I was going to preface it, but I figured I'd just go right at it.
No, for sure. So Stripe Partners is what I would call a strategic consultancy, one that is laser-focused on a single thing: developing a robust understanding of human behaviors through as wide a variety of lenses and methods as we can. Then we take those hopefully novel insights, a new way to understand humans, and link them with the business strategy of our clients.
I'm going to take an example here. Let's say we work for Google. We do a lot of work with them in different markets—in India, in Brazil, in Japan, right?

Like any of those tech companies, they usually have a really good understanding of U.S. users—it's their first and largest market, their oldest market, the market whose people they're personally closest to—but they tend to have a very poor understanding of anything else. They have a poor understanding of Europe, of India, of Brazil.

And what do we typically help them understand? We might help them understand how users—people—want to interact with a product and what they're looking for: the mental models behind any form of planning for entertainment in general, right? That goes from the cinema, to restaurants, to travel—and this, again, is really culturally rooted.
I'm just going to take one example, which is really quite simple to express and transmit. We've done this project in Japan, in India, in Brazil—same project, same research question—really trying to understand these behaviors linked with entertainment and, in this case, going out for dinner.
In Japan, going out for dinner alone is absolutely common—something people from all walks of life do regularly. And why do they do it? There are many reasons, right? It might be because they want to create a liminal space between the office and a very cramped house, for example. Or it might be because the experience of food is, for them, something quite unique that is best experienced, if you will, alone.
So you need to understand all of this and see what the motivations, the cultural models, behind it are. But that's only one part of what we need to do. Then we need to understand the implications for Google from a business perspective. And here, what do you need to understand?

Well, you need to understand that in Japan, the whole digital journey around eating out is incredibly distinct from the one in the U.S. It starts with a system of booking restaurants through points—one that seems very odd to us—where you need a lot of precise information in order to book. And then you have the whole payment system, which is completely distinct again.

It's not like in the U.S., where you basically have your credit card and you pay. There, you have this points system—again, very intricate. You need to go to Japan and spend a lot of time to understand that stack. Then you put the two things together and say: okay, what happens here for Google is that, first, you have these different behaviors, and second, you have these different tech ecosystems.
The business opportunities are here, here, and here. And in order to leverage them, you can try to develop this particular UI, this particular user experience that will be better suited for this local usage of, for example, eating alone. But just understanding the user need is not enough. You need to then be able to understand the business side of things. How does that translate operationally?
Very quickly, we usually have two main types of stakeholders. You have the UXR—the UX researcher—within the companies we work with. Their main role is to understand the needs of the users, this cultural side, and they're usually very passionate about that. And then there's a second stakeholder we often interact with—the PM, as they're called in tech companies—someone who's in charge of the business and product decisions.
You need to understand those two in order to then provide recommendations that make sense both from a user side and from a business side.
That's basically a very long answer to say that the heart of what Stripe Partners is, is bridging the gap between these two needs—these two stakeholders that speak usually a different language, that have slightly different needs. What we do is we try to create an alignment between the two and provide value to both.
It's amazing. What do you love about it? Where's the joy in your work for you?
That's a good question. It is not in making PowerPoint slides. Some of my colleagues would say that it is, and I could say it, but then I would lie—which I think would not be very useful.
What did you say? It's not in what?
It's not in making PowerPoints. It's not Google Slides either. No, I think for me—and that's probably something you hear a lot in these conversations—it's being able to go to places I would never have gone otherwise. Some of them are countries and places that look amazing on paper. I was in Tokyo last December for Google. But a couple of months ago, I was in Cincinnati, Ohio, which is a place I don't think many people go to for tourism. And it was unbelievably interesting to be there, spend a week there, see a place I would never have gotten to see otherwise.

What's really interesting to me—I'm going to use an analogy here, which I think is quite funny. I don't know if you ever played video games. I used to when I was a kid—games like Warcraft and StarCraft. The way they work—and lots of video games are like this—is that you have a map, and this map is all dark at first. You don't know anything, and then you drop somewhere and you start to see something.
Based on that information, you infer a model of that world. You say, okay, so there's trees here. It's probably a place with a lot of trees. Then as you walk, what's dark becomes light. You have these pockets of knowledge that you develop. You see, well, actually, that's really not a place with a lot of trees. There's trees, but there's also some lakes and also some other things.
The way I see it is you can look at anything, at any layer of complexity. What we do is always—we move from not knowing anything to knowing more things. As you learn more things, you can reframe, re-evaluate your understanding of the whole thing.
For example, I've been to the U.S. many, many times. I've spent a lot of time in Chicago, on the coasts, wherever. But I'd never been to Cincinnati. Now that I've been to Cincinnati—I spent a week there—a part of the map that used to be dark, unknown, is something I know. And that reshapes my whole understanding of what the United States is.
That's what I'm passionate about. That's what I want to do more and more in my work.
I think that as researchers, we are incredibly lucky to be in situations where there is a common understanding between us and the people we go to talk to. We do a lot of ethnographic research. I would go to a suburban place in Cincinnati, to the home of a family—a guy I would never have interacted with otherwise in my whole life.

Then I spent three hours at that guy's place. Five minutes into the conversation, we're talking about the most intimate things in his life—his health. He's opening up because there is this shared understanding that a researcher has come in. There is this exchange, this unspoken agreement: "I'm never going to see this guy again in my life, so I'm going to tell him things I don't even tell my wife." That's a true story.

For example, on some of these weight management projects, we had a discussion about weight with someone who was obviously a little bit overweight. She said, "I have never told anyone my weight from when I was really overweight. You're the first people I've told it to. Even my husband—I never told him."
That's because you created that space where she feels safe. That's largely a function of the process, not of anything we do. I think that's what's unique about our jobs.
What you've described—I couldn't agree more. It's thrilling to be in that space. The questions that come up for me are: What's the value of that? How do you articulate that value to your client? How do you create the space for that kind of exploration? What makes it so vital, and how do we talk about what makes it vital? And then, what's your experience? You clearly have success in creating the permission to make that space. I'm always wondering how that becomes possible—because creating that possibility is the whole thing.
Yes. Those are hard questions.
The first one is about the clients. Here, very practically, we have two types of clients—clients who are usually from large tech companies and clients who are not from large tech companies. The way to talk to them and to ensure that they see the value of this type of deep, usually slow, ethnographic research process is distinct.
When it's a tech company, a lot of the time our clients there are themselves people who come from a social science background in academia. They already know the value of this. Then they make trade-offs: how complex or foundational is the question I want to answer, versus how tactical is it?
You don't need to walk them through what the process is, what the benefits are. They're seasoned researchers. They've done that a number of times.
That is, I think, very unique to these very large tech companies—Meta, Spotify, Google. There are probably 20 companies in the world with that level of maturity, companies which, for better or worse, understood very early on that their business model is predicated on being able to understand people.

That's arguably what they already do too well. They're willing to invest in it in many different ways, which is why a lot of those projects also have a data science component.
Really, they know that the foundation of a successful product that they can then monetize is a very, very fine understanding of human behavior. That's for these types of clients.
Then you have the other type of client, which is 25% of our revenues. That's going to be legacy companies. That's going to be a telco company. That's going to be an FMCG company.
That's going to be, for example, in my case, a healthcare company. Here you usually have more of a job to do—a job of bringing the stakeholder, the client, with you on a journey of understanding: first, what different methodologies exist; second, what each methodology is best suited for, in terms of the type of research question you would try to answer with it; and third, how they can translate that into business decisions.

Those processes are usually longer. The sales cycle, to be very precise and concrete, is longer. What you need to do here, in a sense, is go to great lengths to help them see—with concrete examples—the type of insight that can only be surfaced with these slow ethnographic methodologies, and how that can unlock business value.
It's in showing the actual outcome that you usually get the best response. To summarize: two distinct situations, and you really need to adapt to who the buyer is.
You've been at this a while. How would you describe how it's changed—the openness to this approach or the fluency in these methods?
That's also a good question. Let's take the tech world first, which is the one I've been immersed in a bit more over the last five years.
Here, I think even five years ago, the level of sophistication and understanding among stakeholders was already very high, but they were more open to doing what we call foundational work. They were more open to funding a three- or four-month study where you go into different markets and try to understand—to take a concrete example—how music in general can be used by people to create meaningful connections.

That is a very difficult question—one where you don't instinctively see the business implications at first. You need to really invest in order to develop that understanding, particularly across distinct markets.
Those foundational projects—I think they were more common five years ago with the tech companies—because the tech companies were, to a degree, still in a phase of real growth, and their product was changing quite fast. This particular example I took is from Spotify.
If you go back in time and you think of Spotify five years ago, they were still tweaking their product. It was still what we call a growth-phase company. Because of that, they needed to understand the unknown unknowns.
You move five years forward, to today. What's happened is that for all of these tech companies—and we'll set AI aside for a moment, because that's a different thing—up until, let's say, a year ago, when AI was still not as central as it is now, all of their products were basically very mature products.

If you think of Spotify, it hasn't changed much in the last two or three years. An even more telling example: I do a lot of projects for Instagram. Five years ago, Instagram was still changing fast. It was still adding users. Now, Instagram isn't adding users anymore. If anything, in Western markets, it is losing users. Instagram is an incredibly mature product.

There are so many features on Instagram. If you try to think of how Instagram has meaningfully changed in the last two or three years, you can't think of anything. There are so many layers of complexity and features, and so many teams that are to a degree competing—while also, obviously, trying to collaborate—that it has become so large that any meaningful change has so many second- and third-order consequences it never actually gets implemented.
The research that these large companies tend to commission is much more tactical—even if they have the understanding and the sophistication internally to commission foundational work.
The foundational work that is still commissioned now, from what I see, is about 90% in the space of AI—because AI is the big unknown. And where there's a big unknown, it's okay to spend money trying to understand what we don't know—the unknown unknowns.

That's a great thing for foundational and strategic research companies like us: while the share of foundational, strategic projects has been going down for anything that is not AI, it is going up for anything that is.
That's where you really need to position yourself, I think, if you're an agency that wants to do strategic work with tech companies.
I hear in the background—tell me if I'm right or wrong—this quote I always attribute to Donald Rumsfeld: the known knowns and the unknown unknowns. Is that right? Is that correct?
I mean, that's where I've heard it. It's the name of the documentary, right? It's the guy who did The Thin Blue Line, I think.
That's right, it was Errol Morris.
Yeah, Errol Morris, exactly. He did a fascinating one—well, he did one on McNamara, which was incredible. That's The Fog of War.
Ah, yes, of course.
Fascinating, fascinating. And then later he did the one on Rumsfeld. I think the documentary itself might be called The Unknown Known, after that quote from one of the briefings he gave. It shows that it's okay to be curious even about very evil people. And, you know, we don't need to agree with them to steal some of their thoughts.
Yeah. So you've written wonderful pieces, and the Frame newsletter you all put out is pretty amazing—just sharing the theories and the concepts. You've talked a little bit about AI already, but I'm curious about the piece on metaphors and AI—the role metaphors play in how we think about AI. I'd love to hear you talk about metaphors, maybe to begin, and then: how do they help us or hurt us as we try to figure out what's going on, and what AI is and could be?
Yeah, that's a very good—I think an important—thing to try to understand. I wrote this one a while ago, and the starting point was the beginning of the Gen AI explosion, right?

You had the first LLMs getting more and more use—GPT-2 or GPT-3, I think. And a lot of the discussion was about the best way to make sense of what it was they were doing with knowledge. I think everyone instinctively understood that it had to do with knowledge—with processing knowledge in some way.

And the debate—the tension, really—was in trying to see how much new knowledge they were creating as they processed all the knowledge that already existed. One way to understand that is the technical way. The sad reality is that there's maybe one person in 10,000 who can actually have a decent understanding of what, technically speaking, is happening there.
And obviously, working in that space, we try as much as possible to get to some level of that technical understanding. But here again, the unknown unknowns are extremely vast.

So what helped me make sense of it was to latch on to some of the metaphors that people smarter than me were using. And I remember one that probably had a lot of impact on anyone who read that piece in The New Yorker—I think it was Ted Chiang who used the metaphor—that it's like a photocopier.
In a sense, these LLMs are a way of processing information where you never see anything that wasn't there in the first place. So what does a photocopier do? It takes a certain amount of information, processes it, and what you get out is always a little less than what went in.

There is always some level of information loss in that process. And to a degree, I think that's one way of understanding these LLMs and Gen AI.
And here's the thing about metaphors: none of them, by definition, can show you the whole truth—because then it would be a one-to-one analogy. Without getting into the details, there's some very good writing by Douglas Hofstadter—I never know how to pronounce his name.

He wrote Gödel, Escher, Bach, which is a very good book, and he did a lot of work on analogies—on when it makes sense to use an analogy and when it doesn't.

By definition, an analogy is never a complete one-to-one mapping with what you're trying to understand—because otherwise it's not an analogy, it's just a copy.
So what it does, obviously, is that it sheds light on one of the properties—one of the dimensions—of the thing that you're trying to understand. And the way it does it is that it links it to something that you know from the past, right?
And so metaphors are basically a way—I think a very useful way—to leverage what people already know in order for them to understand something that is new to them.
Now, it is only useful insofar as you're grounding it in some shared cultural understanding within a certain population. You use the metaphor so that we're better able to talk about the new thing—in this case, the LLMs and Gen AI solutions. If you don't have the same cultural knowledge I have, the metaphor becomes less useful, because we basically cannot ground it in the same cultural context.

And that is, at the same time, the usefulness and the limit of these metaphors. They help simplify by leveraging common cultural knowledge, but they also confine people—I don't want to say jail people—to the circle of those who share the same cultural context. That's why they need to be used with caution, if you will.
So yeah, the idea was to try to understand the different metaphors people used to make sense of these LLMs—these AI solutions. One, in the cultural discourse—that's exactly what the Ted Chiang metaphor was. Two, and perhaps more importantly for those of us who work with these companies: the people designing these solutions also have their own metaphors that they draw on, but that are not explicit.

That leads them to make design choices, strategic choices, that often they're not aware of. So what we wanted to show in the piece is that a process of helping these people and these organizations surface what these metaphors are—and interrogate these assumptions or orthodoxies, if you will, when designing these products—could be useful in a number of ways.
But one way it could be useful is, if you think about it now—it's probably like a year since I wrote that piece—and there are many more AI solutions. They all have developed in some way, but they're all very much the same. If you think of it, someone might prefer Claude and someone might prefer ChatGPT and someone might prefer Gemini. And if you are in the same circle as I am—which I'm sure you are—then you have these discussions about, "No, but Claude is better because of this and that," and "Gemini is better because of this and that."
I mean, this is really a 2% difference—98% is exactly the same. The way they interact with you is through exactly the same type of interface: a chat-based interface. The way they infer words is exactly the same. The way you're able to fine-tune and control how they process the information you give them is exactly the same.

So why is that the case? It's obviously not by chance. It's because all the people designing these solutions come from pretty much the same square mile somewhere in Palo Alto, and they all have the same assumptions and methodologies.

And this is what we've been trying to engage these companies on: even for your own business purposes, if you want to create a solution that is distinct from the others—and hence gain market share, hence avoid being commoditized—the first thing you need to do is this. Instead of spending billions and billions making the model perform a little better on benchmarks no one cares about anyway—the F1 score or whatever, 5% better at this or that—spend just a thousandth of that money challenging your own orthodoxies and trying to see what could be.

To take up the Rumsfeld metaphor one last time—the unknown unknowns: is it really the case that all the assumptions you're making when designing these solutions are the right ones?
And what we've seen already, when we do these projects—when we help these companies deploy these solutions throughout the world—is that the true innovation comes from the Global South. The edge cases of these AI solutions are in rural India, not in Silicon Valley.

And why is that? Because in rural India, people who use AI do so because they have no other choice. And because they have so many real problems to solve, they need to use it in whatever way works.

This is what we're seeing: unexpected ways of understanding, interacting with, and using these solutions. So what we're now telling our clients in Silicon Valley is: let's leverage that knowledge, so you can start to challenge your own orthodoxies and design solutions that are a little different from everyone else's.
Yeah. How has the time you've spent exploring AI changed your idea of what AI is? What are you carrying around in you that the rest of us aren't? What do you see that we maybe don't—having been out there watching how people use it?
One thing I'd say is that AI is one of those objects that, for whatever reason—and I think those reasons are actually very understandable—is very sensitive for people. It touches something about people's identities, and the perspective people have on AI is usually quite loaded.

It's a strong perspective. Some people will tell you, "These things are just a stochastic parrot. It's never going to do as much as you think, and it's all smoke and mirrors anyway."

And some people are true believers: "Wow, you were actually underestimating how much they will change things—you'll have AGI very soon." It's like you're either a believer or a detractor.

And what I would say, having engaged with them and tried to see how we can make them more useful to people, is that—quite obviously—the reality is in the middle. The way to see them is that they can be quite good at some specific things and not so good at a lot of other things.
And so what I would say is that things are changing very fast. The models are indeed improving quite quickly.

The latest model you can use now—if you use the paid version of ChatGPT, it's o4, I think—is a completely different thing from the previous one, 4o or whatever. They have huge problems with naming anyway.

But what the previous model was really bad at, the new model can actually be quite decent at. You just need to understand specifically what it is you're trying to accomplish.
And I do think it's important for people in our industry to engage with these tools, see what works and what doesn't, and keep an open mind about what they are and what they can do—while still keeping in mind the basics: they can only know what is already knowable, right?

So what they do is inference, based on data that already exists somewhere digitally. And that's a good lens: there are some things that, within this paradigm, they will never be good at doing. But there are a lot of things that, staying within that paradigm, they can be quite good at doing.
So very, very concretely, I think they're much better at doing business and market analysis than they are at doing human understanding or human research analysis.
And why is that the case? Because so much data already exists about the financials of a particular company, how it's positioned in a market, all the different products competing in that market. That data already exists.

Then the value comes from making sense of that data—that's what a financial analyst does, with Excel, with processing, with understanding. And LLMs can do that very well.

Now, if you're trying to surface a human truth about how a particular person is thinking—why they're doing something—the best way is still to ask the person. You can try to infer it from whatever comments they've put online, and you'll get somewhere, but the main choke point is getting more data that directly answers your question, not doing better analysis on data that already exists.

So that's the easy heuristic: what are LLMs and AI solutions good for? They're good at doing analysis on data that already exists. They're not so good at inferring things from data that doesn't exist.
With the little bit of time we have left—you have a piece that, I think, just came out, or no, last month—about place. This was your idea, yes? I'd love to hear you articulate the "splintered world" hypothesis. Is the return of place the proposition you're making?
Yes, yes, yes. Thanks for asking about that one. It's more recent—something we put out a few weeks ago, right? And the logic is the following: it is very clear, if you look at geopolitics, economics, politics, that we are entering a new era—one with more boundaries, more barriers, more frontiers, across different domains.

It can obviously be the economic domain or the political domain, but also the technological domain—and the cultural domain. And lots of people would say this started with the Trump administration, but that's actually not the case. It started before.

I would personally say it started after COVID. If you think of the Biden administration, they did a lot to re-industrialize the U.S. as well—there was the U.S. CHIPS Act and so on.
So I think that tells us this is a longer, more significant trend. It's not something linked only to the Trump administration that will go away with it. I'm pretty convinced it's a new era, not a new moment, if you will.

And the reason is also that, obviously, when you have that change at the political level, it creates second-order consequences. We're seeing them now: the European Union, for example, is waking up and trying to be a bit more self-sufficient, to build its own tech ecosystem, and so on and so forth.

So place is something I think we tended to forget in the '90s. We saw everything from afar, and you had all these companies that really saw the world as their playing field, with little interest in understanding the specifics of a place—culturally, in terms of regulations, geographically as well.

We might have thought for a minute that that era was the new normal, but it's not the case. Now we're moving back, to a degree, to a world—a paradigm—where place does matter. But with one difference: the pace of change is a lot faster than it used to be, in terms of technological advances.
As we've seen with the innovation cycles of all these AI companies, what was true two years ago, one year ago, six months ago isn't true now. So you have the confluence of two factors. One: things are changing increasingly fast—technology really is an accelerator of change.

But for the first time, change is not converging toward one place, the way it was in the '90s. Back then, with more globalization, the end goal—cultural, technological, and economic, if you will—was more coherent: it pointed toward fewer regional differences.

Now it's the exact opposite. You have these regional poles—cultural, economic, and technological—that are increasingly distinct. And technology will only increase the pace at which these realities diverge.

And here is something quite dangerous to think about: as people see reality more and more through the prism of technology, I think people from these different regional areas—in the article we say someone in Beijing, someone in Moscow, someone in the U.S.—will have increasingly distinct beliefs about what reality is. Because it will be mediated by, to take a concrete example, these AIs—these LLMs—which, to go back to the photocopier analogy, are machines that process reality and shape it around a particular narrative, right?

That's what they do. And it would be insane to think that the process through which they shape and form that narrative—how they transform data into stories—will not be culturally rooted, and will not be influenced by geopolitical and economic imperatives. I think that's the world we're moving into. And it's going to be quite all right.
Yeah. Oh, my goodness. Well, it seems an ominous place to end our conversation.
This piece in particular I found really powerful. I'm so glad I had a chance to meet you, and I really appreciate you sharing your time and your expertise. So thank you so much.
No, thank you so much, Peter. It's been a real pleasure.