Helen & Dave Edwards are a husband-and-wife team of ‘analysts, artificial philosophers, and meta-researchers.’ Artificiality is a research & education company that helps people make sense of artificial intelligence and complex change. They have a long history in AI, and I consider them scouts of this new world. Sign up for their newsletter here.
Helen was CIO of Transpower, New Zealand’s national grid, and head of emerging products at Pacific Gas and Electric. Dave was head of software application marketing at Apple, head of consumer strategy at SunPower and a lead technology research analyst at Morgan Stanley.
I was excited to speak with them, and get their take on where we are right now in adapting to the arrival of artificial intelligence.
AI Summary. In this thought-provoking discussion, Dave and Helen Edwards from Artificiality explore the complex landscape of AI. They delve into the challenges posed by this rapidly evolving technology, including the need for responsible development, the impact on jobs and education, and the potential for exacerbating inequality. They emphasize the importance of embracing complexity thinking and allowing people to express their anxieties about AI. Despite the concerns, they remain optimistic about the possibilities of AI when approached with care and consideration for human values. Ultimately, they believe in the enduring power of human connection and creativity in an increasingly AI-driven world.
All right. I have never interviewed two people as part of this thing, so I thought we'd just take turns. I start all of these conversations with the same question. I borrowed it from a friend of mine; she's an oral historian who helps people tell their story. You can answer, or not answer, any way that you want. I'll start with you, Helen, and then Dave, it'll be your turn. The question is: where do you come from?
Helen: That's quite a question. Where do I come from? I come from a place that fundamentally believes that we are animals. In my bones I feel that there is no higher meaning to our lives other than the people we spend them with and the knowledge that we gain. I'm a committed atheist; that's another way of saying it.
It's beautiful. Dave, how would you answer the question? Where are you from?
Helen: He's a royal bastard.
Dave: It is actually true. I am a member of the Society of the Illegitimate Sons and Daughters of the Kings and Queens of England.
I don't even know what that means. What does that mean?
Dave: It means that somehow I'm descended from somebody who was a bastard. I think it was King Edward II or something, who apparently had a bunch of kids out of wedlock.
Oh, wow.
Dave: And I'm member number 200-something or 400-something, because my grandfather was a great genealogist and he thought it was really funny. He had an unbelievable sense of humor. He became a great genealogist after he was forced into retirement at age 60, after spending his career as an analyst at the CIA. He was like, "Oh, I'm going to go figure out the rest of my family." So, yeah: a bastard. Where am I from? What a great question.
I come from a place of desiring deep human experience, and that's why I'm trying to find the unification of it all. I grew up in a very creative family that was always trying to passionately feel things, whether it was music or dance or theater. I spent my summers in Maine trying to understand what it meant to live in the woods, and fell in love with a great Sanskrit poem called "Look to This Day," which is about getting the most out of every single moment in every life. So I have this insatiable desire to experience every day a little more deeply. That can be enriching, and it can also probably be irritating to the people around me. But it is what it is.
Do you have a recollection of being a child and what you wanted to be when you grew up?
Dave: I wanted to be a performer.
Is that right?
Dave: Oh yeah. Cause I started dancing at age six and singing at age nine. And my dream was always to perform. I love creative expression. And so that drove me intensely, my whole childhood and adolescence and into early adulthood.
Helen: My recollection, really, full stop, until I was 17, was wondering whether I'd ever figure out what I wanted to be when I grew up.
So you have a recollection of wondering.
Helen: I have a recollection of wondering. And then the process of having to make a decision about what to study in college cemented a path that took away that wondering, but that wondering came back in full force in my early 40s.
In what way? Tell me that story. What happened in your early 40s? Where are you now? What are you guys up to? How do you explain what you guys are doing now?
Helen: I'll explain it from the wondering and where I came from perspective. I find that the more I learn about artificial intelligence, the more interested I am in biological and living intelligence and what that means. And that's what I want to be when I grow up, someone who actually understands this and is able to speak on many levels to many different people from many different angles and synthesize what it means to be building intelligence in a non-biological substrate, building artificial intelligence. And that came from that place of respect for the natural world and wonder for the natural world.
Helen: He grew up performing. I grew up walking around, digging holes, looking at soil, picking up rocks, and studying plants and birds.
Dave: I think the description of what we do that resonates most for me right now is that we attempt to make the philosophical more practical and the practical more philosophical. Over the many years we've been studying artificial intelligence, following a career of, at least on and off, building technologies, I've found myself gravitating much more to the philosophical questions. I'm much more interested in who we are, and what it means to be who we are, in relationship to all of these technologies around us. What I like most about what we do is combining those deep questions with all of the deep scientific knowledge and thought that I can pick up from Helen, and then figuring out how to communicate that, to tell it through stories that inspire people to question the world and question the technology that is rapidly embracing us.
So how do you describe what you guys do? What is Artificiality to somebody who's never encountered it before?
Dave: We're a research and services company, so everything we do is grounded in research, whether that's scientific research or humanities research. We like to dig deep and understand the relationship between humans and machines, and we deliver what we've discovered through publications and through services. Publications means a weekly written publication, plus podcasts and videos. Our services clients are usually larger organizations, sometimes corporates, sometimes universities and colleges. We help them figure out how to make sense of this whole AI craze, how to think about a strategy for what they might do, and what it means to build a team and the capabilities that let them pursue whatever change is important to them.
What do you guys love about the work? We've talked once before and I've followed your stuff, and I can feel how much you care about it. But what's the joy in it for each of you?
Dave: I have always loved the next thing. That has taken me to some really interesting parts of my career; it's also probably had me jump around a bit too much here and there. But I love uncovering what's next. And this, I think, will prove to be the largest "next" I've ever witnessed. The biggest change. Because it's not just a new form of technology. It's not a change in the scale of the hardware you can interact with. It's not some new leap in the capabilities of software. It's an entirely new relationship with machines, because these machines are now emerging as some form of intelligence, and that completely changes how we interact with them. I find it fascinating because it has such a profound impact on an individual, on an organization, on society. And it's an amazing field for the two of us to combine our quirky, different ways of looking at the world into something much bigger and more interesting than either of us could have done alone.
Helen: The joy in it for me is that wherever you look, there's this constant frontier of paradoxes and tensions being exposed, because we're trying to build ourselves, slash, something bigger or different from ourselves, something that is thinking and reasoning and planning and acting. There are paradoxes about how these things predict us, and know us, better than we know ourselves; and yet how do we still be ourselves if we accept the predictions from AI? There's that level.
Then there's the next level: what does it mean to know something if it's known by a machine, where you can't access that knowledge but you can use it? Is that different from the knowledge living in a community of humans? Why? All of these new questions get raised, and they now have some sort of predictive oomph to them, for lack of a better word.
A few years ago, we were talking to an AI designer about AI ethics, and he said Silicon Valley destroys words, so just wait and see what they do to the word "ethics." That came to pass, no question. "Ethics" is now a muddied, difficult word compared to the simple, glossy word it was before, even though the hardest place for it used to be medical ethics, for example. Now you look at it and go: okay, Silicon Valley has picked up the pen on intelligence and is now defining intelligence. That word has dematerialized. What's intelligent?
I wrote a piece over the weekend arguing that the next one will be agency. The word "agency" is just going to be the next one tackled. Then it's probably going to be "learning"; that's already in the process. And that says something: it says that our original definitions, shaped by the way we used language and philosophy and science in the past, have fallen short of what we think these things really mean. Because when it's in a machine, and we call it that in a machine, suddenly it's not what we thought it was in ourselves. It makes us all step back.
So I love this constant cognitive dissonance that's happening. And this constant sense of a frontier of meaning even in language and experience. What is this barrier of meaning that's being talked about? So I just find that endlessly fascinating.
I completely tracked with what you were describing, Helen. Somebody asked me, in what they thought was a very innocent request, to share my thoughts on the difference between face-to-face qualitative research and synthetic user research, which is out there, of course. And I found myself, just like you said, in this weird hall of mirrors where all these words I had felt were very distinct just collapsed. You know what I mean? Anything I could say about myself, I could say about this artificial intelligence. I found it really disorienting. I have yet to complete this essay that started as a very simple request and turned into a real existential crisis.
Helen: That is why we talk about Artificiality, our artificial philosophy, at the level of the individual, the organization, and society. You can never disentangle them. There's a lesson at every single level.
So that's how we've come to think about it. And it's just endlessly fascinating. There are some really bad ideas out there, some really poor reasoning in the space, some really smart people with some really poor reasoning. It's also fun to pick up on some of the hyperbole and try to break it down a little. That's a good intellectual challenge.
So I think there are very few places in my career where it's been so intellectually challenging and rewarding as being in this space right now.
Yeah. And when you're working with clients or teams, what kinds of things do they come at you with and where do you start a conversation on this with them?
Helen: One of the places we like to start, and we stumbled onto this through our own learning by experience, is getting people to articulate their excitement-to-fear ratio, recognizing that here's the tension: AI could be good, AI could be bad. So we get them to say: is it 50/50, 70/30, 90/10, 10/90? That initial anchoring is really important for the conversation, because it lets you ask the next question, which is why, and get that richer elucidation.
So we like to start there. The thing we stumbled into, and have become quite sensitive to, is that you have to let people say right up front what they're anxious about. Excitement is easier to handle. But if you don't allow people to express that anxiety, there's no real authenticity in the conversation moving forward. When someone is thinking about these topics deeply, and they tend to be when they're more on the anxiety side of the ledger, you have to let them really express those anxieties, vocalize them, normalize them, and say, "I get it, I'm with you, you're not imagining things; there is a definite downside here." Otherwise, for the rest of the conversation they're just being told to believe a certain narrative about how good the technology is. It's like saying "I do" with your fingers crossed behind your back. You're not really all in.
So we like to start with that.
Yeah. It's so interesting you bring up the idea of how to communicate honestly with people about something that's changing so much and is so intimidating. So what is the state of AI? You produce these beautiful reports, the State of AI. Where are we now in terms of what's going on? It's a very broad question.
Dave: We're in a really messy phase. AI has been a thing for almost 70 years, at least as an idea and an academic pursuit, and even some of the core concepts we're dealing with today have been around for a very long time. But the change in just the last couple of years, the shift to generative AI and these tools that go off and create new things, is dramatic. It has created quite a lot of excitement, for sure, but coupled with that, quite a lot of fear, anxiety, and confusion.
The promise that's being fed to us by the vendors is optimistic to the point of being unrealistic and potentially irresponsible in terms of what they're trying to get people to think these tools can actually do.
What's an example of the oversell?
Dave: One that came up recently: we're seeing some startups, I haven't seen it from the large companies yet, marketing that they have a method for zero hallucination in their tools. As far as we've been able to see, that's actually not possible. These are probabilistic systems. There is always some probability of error; that's just the reality. That's how these systems operate.
So there will always be some level of it. And "hallucination" is another odd word to use here; it's not very well defined. Saying you have zero of something that's not well defined is a bit of a cheat anyway. But they're implying they've figured out a way to make sure the models are always accurate, and that's just irresponsible. It's not true, and it also shouldn't be the goal of these tools, right?
Just as no human we work with is ever always accurate, right? None of us is perfectly accurate. All of our memories are flawed. Even the two of us, who work together and spend 24/7 together, will have different memories of the same thing we shared, just the two of us. Humans are flawed, and that's okay; we've figured out how to work with each other.
These new systems fundamentally change our interaction with technology, though. Prior to generative AI, a computer did what it was told to do. If it didn't, or produced some inaccuracy, that was a bug, and you'd send it back to engineering to fix so it would do what it was supposed to do. So we built the same trust in computers that we built in mechanical machines. If you've got a tractor, it had better keep working, and if it doesn't, you have to go fix it. Same with a computer and a spreadsheet: it has to do exactly what you want it to do.
These tools are designed to be unpredictable, to be creative, to come up with new ideas that are sometimes good and sometimes harebrained. That's what they're supposed to do. So I think it's irresponsible to say you're going to create something like this that never makes an error. When you're asking it to do something speculative, out there, creative, right-versus-wrong is almost the wrong question. If you ask it, "I've got to get something done by next week; help me break down the problem so I know what I need to do," there's no right or wrong answer to that. There are a lot of different answers.
That's where some of the core value in these tools exists. If you ask it to explain a concept to you, okay, it should stay within the boundaries of accuracy, but is there a right or a wrong way to explain quantum physics?
Helen: There are more wrong ways than right ways. Yeah, that's messy.
It's almost like there isn't even a right word in the English language to describe the current state of AI, because what it's doing is quite conveniently breaking things that have been accepted truths about how we do things. There's the business model level: you search the internet, and it's powered by ads; if you search the internet with your own personal search assistant, what happens to the ads? And it's really challenging the social contracts we have, quite apart from competition law: what's fair use of other people's creative work, and how much should one company be able to do, know, run, and earn when only a tiny number of people are part of it, flipping this whole labor-capital relationship around?
It's challenging the ideas we had about what AI would be good at versus what humans would keep as uniquely theirs. Generative AI has lobbed a bomb into that. Not because we actually think generative AI is truly creative like a human, or deeply empathetic, or any of these things we like to call uniquely human, but because it's doing things that take out a lot of the baseline mechanisms of things like creativity and empathy.
So there's this question mark. Will they get better, go up the stack, and just replace humans? Or will they go somewhere else, or some combination? I think that's very anxiety-provoking. And there are only a few fields at the moment where you could look at these tools and say, "I can use these to enhance my personal creativity." You have to have access to your own ways of leveraging big data and putting it into these tools so that you can advance your own, say, scientific creativity or other forms of creativity.
Right now we're in a moment where it's really easy to see what we might lose. It's harder to see what we might gain.
How do you mean?
Helen: It's really easy to see, for example, that if you take stock photos, that's the end of your job. If you translate languages for a living, that's the end of yours. It's really easy to see those things. It's harder to see what new things will come along. I'm not saying those same people would move into the new jobs, but what new things are going to emerge? You can't see that.
So in many respects, the state of AI, the way we think about it, is that it breaks linear thinking. We have to move to complexity thinking. It's always been better to have a good complexity mindset; it makes you more flexible and agile, and it just makes you smarter. You make better decisions when you understand what it means to live in a complex world. But now this breaks it: you've actually got to move to a complexity mindset. You've got to be thinking in terms of adaptation and feedback and self-organization and emergent phenomena. If you're not caught up on that way of thinking, and you don't have a discipline and a practice around it, you're DOA, because you can't see past this rhetoric coming at us that is complicated but agenda-driven as well.
Dave: I was going to say that part of the current state of AI is that we've now plopped hundreds of millions, or billions, of people into the greatest Petri dish we've ever created.
My history with software goes back 30 years. In the past, we'd think very carefully about what we designed. We'd craft it quite well. You'd QA everything, because you'd put it on a floppy disk, in my day, in a box, shrink-wrap it, and put it on a store shelf, and someone would buy it and take it home. There wasn't any chance for an update unless you could send out another floppy, and eventually a CD-ROM.
That whole world was about thinking very carefully about exactly how this was going to work for the people you were trying to serve, and making sure it was as perfect as possible before you sent it out. The world has changed so much since then, because we've embraced "move fast and break things," MVP mindsets, Agile.
And that is okay when you're saying, "Oh, I've got a new CRM system for small business. I'm going to stand it up because it's got just enough features, that MVP, and we'll see how people use it. And then we'll fix it and we'll change it around to make sure that it works well," just as an example.
This is different. We're experimenting on the entire population by putting out tools that are unpredictable, that we don't understand how they're going to use. We don't know what their impact is. There are true, justifiable existential questions about whether we will all still have jobs, because the return on the investment in these tools is basically to replace labor.
So you've put out these tools as a grand social experiment, and we are in the Petri dish with them. I think that's the fundamental dynamic driving so much confusion, because we don't really know what's going on.
Helen: By the way, you just described a complex system.
Dave: Yes. We're all looking around the Petri dish, trying to figure out what everybody else is doing, and watching for what else is going to get thrown at us, what new compound the dropper is going to drop that makes us all do something different. We have no idea what's going on, and it makes us really anxious and concerned, because we don't know whether we're going to get out of this okay. And at the same time, the people making it all say, "Oh, it's going to be great. You're all going to love it. You're all going to sit on the beach and worship the god that we've created."
Helen: Yeah, and that's just bullshit. The AI leaders need to get a lot better at telling realistic positive stories here. I wrote about it in a slightly scathing way, but it's ridiculous to talk about how AI is going to run off and cure cancer and solve poverty and all the rest. For one thing, it's flat out not true, not possible. It only takes a little bit of good reasoning to see why.
Dave: So you're left, as a layperson, going, "Why would they say that?" And I think there are really positive implications of AI, no question. I think it's really going to advance us.
Helen: But stepping in and saying it's going to solve climate change is ridiculous. We know how to solve climate change. What we don't know is how to get the world's people to collectively move, as a system, toward solving it.
Dave: How do we reduce the amount of carbon? We understand that. The problem is that we can't get the 7 billion people on the planet to make a collective decision. And AI isn't going to do that, unless it is so manipulative that it manipulates all of us into it.
Helen: Or it's an autocrat.
Dave: Yeah, or it's a god, and we all worship it so much that we'll do what it tells us to.
I'm curious, because I feel like the way you've helped me is with the frameworks you build around this stuff. You know what I mean? The complexity science, your talk of the intelligence staircase. I guess my real question is: what are the best metaphors for thinking about what's going on, what this is, this Petri dish we're in?
Helen: That's a good question. There are probably multiple. I talk about Silicon Valley taking the pen and defining some of these things: the idea that technologists are now defining, in code, things that we barely understand in ourselves.
Dave: Yeah. One of our core concepts is more aspirational. It's a design philosophy for what we would like AI to be, and we call it "a mind for our minds." We risk anthropomorphizing intentionally, to get people to think about it. The phrase is a riff on something Steve Jobs used to say. He was inspired by a 1973 Scientific American article about bicycles. In it they compared the efficiency of movement across species: a human by itself is a middling contender, but a human on a bicycle is about as efficient as you can be. Jobs grabbed onto this, and somewhere in the late '70s or 1980 he said he thought the computer was going to be the greatest invention ever created, because it would be a bicycle for our minds.
This huge efficiency enhancement for our minds. And I think he was right; 40 years on, definitely true. These machines improve all kinds of things we can do with our minds. But for us, these tools have now changed. They're no longer the bicycle kind of tool that you steer, speed up or slow down, and direct very specifically, and it does exactly what you want; or, if the chain breaks, you go fix it. A bicycle is a truly mechanical tool.
These things are designed to be creative, and increasingly to perceive the world on their own, to reason, to make decisions, and to take actions for us. So they are becoming a metaphorical mind, at least. That's why we think of it as a mind for our minds, and the important part is actually the two little words in the middle. When we think about all the problems with big tech, the mind has to be for us. What's happened with the Internet is that it's become a thing where we're really doing it for them. Yes, we're getting content and so on, but we're the feedstock for those behavioral systems. And "our minds" means all of us.
Equality and inclusiveness are actually really important to us, and a real concern with these systems. We have two different worlds of outcomes. One is that the world continues to work the way it does today, where our interactions are monetized by companies using advertising. Since the beginning of the Internet, you click on something, you scroll through a feed, you watch the next YouTube video, and it's all supported by ads based on what we pay attention to. We're paying with that attention.
With generative AI, there's a new level of intimacy in what they'll understand about us. Can YouTube get me to watch the next video based on what I've watched? That's one level. But when it's been monitoring every conversation I've had, how does that change? What line do you find acceptable in how they're going to manipulate us for some sort of commerce?
The other way is that they continue to charge us for these tools on a subscription. For a good number of people in the tech industry, $20 a month doesn't seem like a lot; even $20 a month times a few tools doesn't seem like a lot. But I recently found an interesting comparison while we were looking at the history of tools, technology, and education. Early on, as calculators became pocket-sized, they were embraced in the education market. Then, a few years later, they got pulled from one of the major states, I think it was California, which said they couldn't be used anymore. The rationale was that the cost of a pocket calculator would be prohibitive for a certain number of students. At the time, that pocket calculator was $20.
The point is that the decision-makers in the industry can really overlook what true equality is. It bothers me when people talk about these tools as being able to democratize everything. Democracy is something you have the ability to participate in by being part of a society. We banned poll taxes a long time ago, right? Because poll taxes were discriminatory. It's not democratizing anything if you've got to pay for it.
Helen: I'm struggling with the mindset of the extraordinarily wealthy class in the tech industry, who have lost touch with the fact that, at some level of affordability, some people may decide that 20 bucks a month is better spent on food. In which case we have another profound digital divide underway.
Dave: It's bigger than that. This is where the digital divide becomes a significant cultural divide. Run this forward to where AI technologies are heading, which is more embedded, more spatial, potentially connected to our biology itself, implants in your brain and what have you. If you accept the current state of the science, that cultural evolution in humans is moving faster than genetic evolution, suddenly you see the potential for quite a significant co-evolution event: if you have access to these tools, you're going to be in a profoundly different cognitive space, at a different cognitive level, than if you don't. And that's different from who got to use Google and who didn't, which was just about accessing information. This is about accessing extended cognition. Play that forward, and it's really not science fiction: the brain-privacy folks, mostly lawyers and ethicists, are starting to work out what it means to have a right to our private thoughts, and that we don't all need to be connected up into some great human internet. All this early workplace surveillance is starting to expose the issue.
So we're really entering quite a different place now. It's not an exaggeration to say that, as humans, we're in quite a different place. There is no precedent for it.
You mentioned a little of this already. Can you paint a picture of what's coming? What's your aspirational vision? You mentioned a few categories, spatial, embedded. What's coming, how is it going to change, and how fast is this stuff happening?
Dave: I've spent a good portion of my career trying to predict what's next. You sometimes get some of it right; most of it you get wrong, because predicting the future is really hard. And this is the most difficult segment I've ever touched.
In terms of the technology side, here's what we're watching, which should tell you a little about what we think might happen. The first is AI agents, or agentic AI: the idea that we can give AI agency to go perform an action. What's intriguing is that it can have some level of reasoning and decision-making and then take action on your behalf. You can also string these agents together to create a team that accomplishes something.
For instance, there's a system we were looking at that has one agent that goes off and looks at the news of the day; another that evaluates those news items and writes a story pitch; another that takes those pitches, writes stories, and turns them into a podcast script; and another that translates that into audio and publishes it. It's a completely automated system that spits out news podcasts, dialogues between two hosts about the stories of the day, on different topics. That's going to go through some level of explosion fairly soon.
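To make the idea concrete, here's a minimal sketch of that kind of chained-agent pipeline in Python. Everything in it is illustrative: `complete()` stands in for whichever LLM API you use, and the stage functions and prompts are hypothetical, not taken from the system Dave describes.

```python
# Minimal sketch of a chained-agent pipeline (illustrative only).

def complete(prompt: str) -> str:
    """Stand-in for whatever LLM API you use; wire in your model of choice."""
    raise NotImplementedError

def gather_news(topic: str) -> str:
    # Agent 1: collect the day's news on a topic.
    return complete(f"List today's most notable news items about {topic}.")

def write_pitch(news: str) -> str:
    # Agent 2: evaluate the items and pitch the strongest story.
    return complete(f"Evaluate these news items and pitch the best story:\n{news}")

def write_script(pitch: str) -> str:
    # Agent 3: turn the pitch into a two-host podcast dialogue.
    return complete("Write a two-host podcast dialogue covering this pitch:\n" + pitch)

def run_pipeline(topic: str) -> str:
    # Each agent's output becomes the next agent's input; a final
    # text-to-speech step (not shown) would turn the script into audio.
    return write_script(write_pitch(gather_news(topic)))
```

The design point is simply that each agent's output is the next agent's input, which is what lets a chain of narrow steps add up to an end-to-end workflow.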
The other one I think is quite interesting and important is spatial computing, and spatial intelligence generally: sensing the world. Today's generative AI, these large language models, are mostly trained on text. Some image and video content has come in, and they can create some level of image and video, but they don't have an understanding of space the way we do.
One of the challenges is that we don't truly understand how we understand space. Our understanding of space is a mystery, and it's very individual. I'm sure there's real neurological understanding of parts of it, but when we think about whether something is close or far, those kinds of judgments, our understanding and experience of space is much more an intuitive lens through which we see the world.
There is progress on that. Look at what Apple's been doing with Vision Pro. We have friends at Soul Machines in New Zealand whose avatars can see you through your camera, once you allow them to, and are starting to figure out things like "oh, there's an apple on the shelf behind me." Companies are starting to get some understanding of space. That will greatly enhance these tools, because to be truly useful they can't just learn from the records we've published and that have been absorbed; they have to start to be able to participate in the world around us.
The other lens on where this is going is the social dynamic. The first piece is education. It's a huge challenge to figure out what it means to learn with these tools, what you need to learn, what it means to teach. The field of education is going to go through quite a lot of upheaval. We love working with people in higher ed, and we feel for them deeply; this is a very difficult and challenging topic.
Following that, if I were to predict an order of things, will be some level of mass anxiety, through to potential hysteria, about jobs and employment. It depends on how quickly these changes happen and how much displacement there is: how many companies see these tools as revenue enhancers that let their people move ahead and be happier in their jobs, versus things that let them fire a whole bunch of people.
And you get a next generation that's coming out of college and they're early in their careers and they already feel like the world's been pretty upside down for them. And suddenly these tools are going to show up and make it really difficult for them to figure out what their future place is. And that could be very difficult and disruptive.
And we see today that Gen Z has an attitude, and they express it; they're out there right now making a whole bunch of noise over something that matters to them. Imagine what that would be like if it were about the fact that none of them can get a job because of AI, because a good number of companies say they don't need to hire, don't need interns. There was some research we put out in last week's email, I can't remember the number off the top of my head, about the number of companies that are going to reduce their hiring of recent college graduates. If that actually happens, and Gen Z wants to get angry about it, we're going to hear about it.
Helen: Yeah, and the parents too. I'd be angry too, because...
Dave: Yeah. It will be, but this is an organized, motivated generation that knows how to communicate and get their point across.
Helen: Yeah. And I probably chose the wrong word; I should have said something other than "hysteria."
Dave: That's right. That's maybe not the right word.
Helen: Because it's pretty loaded. It implies, or can be heard as, some level of irrationality, and I don't mean it to be irrational at all. I think there's actually very credible mass concern.
Dave: Yeah.
Helen: And if you go back to the predetermined versus the truly uncertain: one of the things we know about what's happened to jobs over the last 20 to 40 years with these sorts of disruptions is that the jobs don't go away per se, they get shittier. They get shittier for a couple of reasons. One is the pressure to move jobs to places where people are more fractured, put into buckets, and surveilled. But there's also what Acemoglu calls so-so automation: automation that isn't good enough to truly replace the human. That happens because we still don't really know what the human does, and we overpromise on the automation, or we don't understand what we don't know. So-so automation gets put in, and it isn't enough to replace the human, but the human ends up with a somewhat shitty job, without enough time, bandwidth, and mental space to go do the thing the original promise was about. The promise was "we're going to free you up for more creative work" and all that.
And I think we're seeing early signs of this, and a lot of people aren't talking about it enough. They're cherry-picking the research. There are a couple of very high-profile commentators out there saying, "This is a great proficiency-gap closer for unskilled people. Yay!" while conveniently ignoring the actual core of the research, which says those people make more mistakes. Those mistakes have flow-on effects, whether it's technical debt, making their boss's job harder, or errors that reach consumers. People are still washing away what I'd put in the bucket of the so-so automation problem. It's not all rainbows and unicorns about productivity increases; it's a more complicated story about where the productivity losses are happening.
And gosh, this whole hour makes us sound a bit like Debbie Downers, but...
Is there a rosy picture that can be painted about the work that you're doing and the teams that you're working with and how people are trying to adapt?
Dave: That's one of the reasons we have "a mind for our minds," and it's certainly how we think about working with our clients on adoption. You can't go very fast in a car without brakes, so we spend a bit of time putting the brakes on first, and then we go fast.
Helen: Yeah, I think the rosy picture is a counterbalance to what's now being described as the enshittification of the Internet, everything getting flooded with machine-generated nonsense: things created by humans are becoming more precious.
So the most valuable thing is that people know that what we write is what we wrote. When we give talks and run workshops, and as we work on new events, it's about personal connections and people. I'd say the same thing for you: the newsletter you send out is a human curating content from other people.
It may not be the whole world and everything in it, but there will be a portion of society that says, "I really want to live in that world." It's a more active, thoughtful, considered approach. These machines are extraordinarily powerful, and we are very excited about the possibilities. We use them all the time. But the magical time of life is not the back-and-forth with the machine; that's just getting stuff done. The magical time is the time we spend with each other.
I want to thank you both so much for your time. This has been really amazing and fun, and I really love the work that you're doing.
Helen: Thanks, Peter.