Transcript: Trends in Technology-Driven Change: Ask Me Anything, Part 2
[00:00:00 Text appears onscreen that reads "Trends In Technology-Driven Change".]
[00:00:06 The screen fades to Chris Howard.]
Chris Howard: Hi, I'm Chris Howard. I'm the Global Chief of Research at Gartner. Thanks for taking some time to listen to the advice that we have around A.I. and related subjects. I hope you find it interesting.
[00:00:17 Text appears onscreen that reads "Ask Me Anything with Chris Howard, part 2".]
[00:00:22 Text appears onscreen that reads "How do we harness A.I. for public good rather than profit?".]
The concern is that this concentrates power into the hands of very few, because very few organizations can afford the amount of money it takes to actually do the fundamental compute. So, J.P. Morgan, their IT budget is $18 billion a year. Their A.I. budget is a billion and a half dollars a year. So, what is that going to concentrate for them? So, when I'm working with them, I ask them this question: what can we learn from your payments infrastructure that you should be sharing with the public sector? Because, I mean, basically, J.P. Morgan's payments infrastructure is a digital twin of the world's transactions, and you can see things in there that have nothing to do with payments, like human migration patterns or other things that could be exceptionally useful if insight is crafted in a different way. But I don't know that you're asking them, or that the U.S. government is asking them. We need to forge these different types of relationships, because it's economically prohibitive for you to have a leadership role in the technology as it is today.
Now, what's happening is those economics are becoming more democratic over the next few years, with the use of open source models, for example. Right now, you have to pay NVIDIA ungodly amounts to compute this stuff on GPUs. When I'm working with Intel, we're talking about how we make it so you can do it on every chipset, all the way down to something that's implanted in the body, so it becomes more democratic. And so, I think the first step for you, because the investment hurdle is so big, is to come up with compelling public-private sector use cases where both sides benefit clearly. That's the first place, and that's a data conversation. That's fundamentally a data conversation.
Interviewer: And a principles conversation.
Chris Howard: Exactly. Well, and the other thing too is that private sector companies now have, in their charters and in their board charters, a fiduciary responsibility around ESG, and especially the 'S' part of that when it comes to these models. That's probably easier in Europe than it is here. It's much harder in the U.S., much harder. In the CEO survey that I didn't show you, we asked questions about ESG, and 86% of European companies have that as a top priority, compared with less than 50% of U.S. companies. It's strange to me.
Interviewer: Thank you. So, there was a question up here that I wanted to touch on because you mentioned the environmental damage of all the compute that this takes. And so, how are we mitigating that? What is the evolution of that?
Chris Howard: Yeah, well, it's accelerated alternative energy sources, especially for the mega vendors that are doing the compute in their server farms. But the other thing I'm seeing is private companies that have the wherewithal to do it, and BASF is a good example: they'll build a wind farm in the North Sea that will power all of their factories, with the excess given back to the grid. And so, you're getting some of those private sector initiatives around alternative energy, which is happening here. The other thing, which I'm not completely convinced of, is other offsets. When I challenged Microsoft on this, and they're a big player, of course, in this compute, they said, well, yes, we're offsetting it by doing other things at scale, like green concrete and electric vehicles on campus and our third party management of the third tier and so on, but that's going to take 20 or 30 or 40 years to play out. So, it's hard to know right now, but it is accelerating movement around the edge and, again, in the semiconductor industry, toward requiring less power for more insight.
[00:03:56 Text appears onscreen that reads "What should regulators focus on in the face of A.I. disruption and threats to democracy?".]
It's a really complex question but it's so, so important. The thing that makes this hard, because a lot of this is around deepfakes to disrupt democracy, fake news, that kind of thing, is that it's very hard to tell what is fake and what is not today. With images, it's getting a little easier. With text, it's really hard. There's a set of innovation happening around sensing which model produced the generated output, because there are patterns that these systems can detect that would show that it's watermarked. But I think regulation should focus on the integrity of the information environment, period, that gets consumed to produce these things, and the regulation should combine innovation and humans in that generative loop to ensure the integrity of the output where you have less control. The thing that makes it really hard is that those mechanical controls are extremely immature right now.
But one of the points of regulation is to raise people's awareness of something very specific, and I think continued exposure to what the real risks are and how they are being mitigated, so that the regs stay in step with that, is really the key. But so much of this is that people usually don't have bad intentions when they do wrong things internally, right? It's just that they didn't know. And so, some of that is just exposing the risks to them so they know where those boundaries are, which AIDA in some cases does, right? It's sort of: here are the things that can go wrong, we need you to tell the government about this, here's a safe spot to play, we're not going to punish innovation. So, all of those types of things, and just that clarity. But I keep my eye on the mitigating technologies around it, because there are other things down in the security layer itself, A.I. techniques for protecting the environment, that come into play here too. But more to come on that.
Yes, I do believe there are decisions that can never be automated and it comes down to…
Interviewer: Well, they can be but maybe they shouldn't.
Chris Howard: Yeah, exactly. I mean, everything ultimately can be automated. What I worry about here is when the human becomes the problem that the system is trying to solve, right? If we get in the way of the solution that the system seeks. It sounds science fiction-y but it's totally possible, especially dealing with implants and stuff like that. Wherever the system interacts with the real world to material effect is where you have to look very, very closely at this, and it's not just humans and warfare and things like that, but other types of things that could be harmful to the environment that we're talking about. There definitely need to be guardrails around that. But part of what I worry about with these tools is the risk of abdication of critical thinking on the part of the humans. If we trust these machines as sort of giving us answers, and they feel right a lot of the time, then we get lazy. That's what I worry about. And so, then I worry that maybe we let that scope of decision-making creep too far before we've actually really thought about it and put a stop to it.
It reminds me of a Star Trek episode where there was a war between two planets and it was all done on computers. The numbers were sort of generated, okay, this requires 46,000 deaths as a result, and they just punch it in as numbers, right? We're not that far away from that. But yeah, these are the kind of conversations that, at the government level, you need to be fostering. So, it goes back to the (inaudible) question as well. It's as much a philosophical conversation as it is a mechanical one. That's not a great answer to that question, but it's a hard one. I mean, I think about it a lot. I think about it a lot. What it means is we need to have all kinds of people involved in making these decisions. Diversity is the key to unlocking this, right?
And it's diversity of opinions, of thinking, of background, of experiences to help shape where decisions can be made and where they can't, as opposed to what I worry about now, which is kind of a Silicon Valley thing: solutionism. People that haven't actually experienced real world problems creating solutions for them worries me, because I think the people that experience the problems are the ones most likely to come up with the solutions. And so, we need to engage them more. Right now, this is a very Cartesian kind of experience, where we're taking things and cutting them down, but we need to sometimes step back and look at the whole, and that takes a different kind of thinker, I think.
[00:08:29 Text appears onscreen that reads "How does A.I. developed in North America impact the rest of the world?".]
I mean, I really do worry about that. So, I had a conversation with Sir Tim Berners-Lee, the inventor of the Web, about four years ago, at the point where half of the world's population was not on the internet yet. And so, they're not really represented from a first class perspective in the data that trains these tools, right? It's maybe a third party perspective on them that's part of those tools, but the fact that they're not directly contributing to it in a regular way like the developed world is creates an enormous bias just in the foundation models themselves, right? And so, that is a difficult, difficult problem to overcome because of just the availability of the data itself.
But what happens once you have that data in use is you're using multiple parties, people, to train it, which is where the opportunity comes in for diverse populations to have a different kind of say. But it's not at the same level, it's just not at the same scale. And so, I'm really thinking about this as the concentration of power goes to a handful of large vendors, three or four large vendors. Where this gets mitigated is in the open source world, where you're having non-commercial models being developed using cohorts of people, which has a chance of sort of filling out part of the rest of this problem, but there's still an issue of scale and money. It's not an easy problem to fix.
Question: (inaudible)
Chris Howard: Yeah, go for it.
Question: So, is then that where the value-add of the public sector is?
Chris Howard: Yes.
Question: Because in terms of our data, our data is diverse.
Chris Howard: Yes.
Question: Our data does speak to that sort of multi-modal perspective.
Chris Howard: Yeah.
Question: And so, is that the value that we bring to the table?
Chris Howard: It is. Well, so EI, right? You're in the employment… in the service world. So, think about who needs that insurance and is underrepresented: homeless people. They're entitled to insurance that you provide. Are they in the model? How are they represented? So, for you, you're looking at creating a domain-specific set of data that represents the people that you need to serve. And then, you're always using that data to provide answers for things, right? So, that unique data that you have becomes the core of the service that you're providing. Now, if your agency does that and the rest of them do that, then you end up with a larger model that potentially represents a fairer view of the citizens, yeah. So, it's a bit of a vanishing point right now in terms of design, but I do think you're on to something. That's the value public sector brings, because it's a non-commercial view of people. Everything else is a commercial view. So, even the patents that Meta, Facebook, is filing, they're all commercial patents about how you sell people things or advertise to them, stuff like that. It's not how you serve them.
[00:11:23 Text appears onscreen that reads "Given the heavy energy consumption of A.I. compute, is there a new advocacy role for regulators?".]
There should be, yeah. I mean, there really should be. The work I'm doing with the OECD is not at that level. It could be, because I think at some point these things need to converge. We can't be blindly consuming power for this purpose without thinking about the value that's produced back, right? Because if what you're suggesting happened, it would cause better-shaped innovation around energy, for example, and would put some more teeth into that. I think it would force change to happen more quickly where it really needs to, because I'm frustrated with this. I think about Canada. I mean, my family's in Halifax. My brother lives in Hammonds Plains, where the fires were last summer. That never happened when I was a kid growing up there. I mean, the things that we thought wouldn't happen for another 20 or 30 years are happening now. And so, now is the time to do something about that, and the thing that drives me crazy is we have the intellect to do it. We're just applying it in the wrong places, right? And we need some kind of a forcing function to actually do it better.
I was on a call with David Suzuki not long ago, and if you've ever been on a call with him, he's not the mild-mannered guy you used to see on The Nature of Things, right? He's, in his own words, pissed off, right? It's like decades of this and nothing ever happens. And now, I think we've got tools in our hands that allow us to see things in different ways, to analyze them in different ways, to forecast them in different ways, to take action ahead of time, which we can do because these tools exist, right? So, at some point, there's a catch-22, that we need to put them together and converge them. But I think what you say is a good idea, and it may exist and I just don't know, but I think that's where we need to head. It can't just be about the solipsistic use of these technologies for our own gain. It can't be. If that's what this has brought us to, then we've kind of failed as a species.
Thanks for watching. And again, I hope you found this useful and interesting for the work that you're doing in Canada.
[00:13:34 The CSPS logo appears onscreen.]
[00:13:40 The Government of Canada logo appears onscreen.]