Exploring the Use of Artificial Intelligence in the Public Service (FON3-V10)

Description

This event recording from What Unites Us, Defines Us: Values and Ethics in Today's Federal Public Service spotlights a discussion about the emerging use of artificial intelligence (AI) and the opportunities and challenges it presents in the public service.

Duration: 01:59:54
Published: March 3, 2025
Type: Video


Transcript: Exploring the Use of Artificial Intelligence in the Public Service

[00:00:00 CSPS title page. Text on screen: What Unites Us, Defines Us; Values and Ethics in Today's Federal Public Service.]

[00:00:08 Nathalie Laviades Jodouin appears full screen and addresses the audience from a lectern. Text on screen: Nathalie Laviades Jodouin, Vice-President, Canada School of Public Service.]

Nathalie Laviades Jodouin: Welcome back, everyone. For those of you here in Ottawa, I invite you to please take your seats. Welcome back, everyone. We have an exciting agenda ahead as we shift our attention to the role of artificial intelligence and its linkages with public service values.

Before we do that, however, I want to take another moment to talk about some of our other regional events. We flagged earlier events taking place today in eastern Canada and central Canada, as well as in Iqaluit. So, now we're going to talk about the west. A shout out to Jean-François Tremblay, and Naina Sloan, who are leading discussions with federal public servants in Vancouver. In Calgary, we have Chris Forbes and Raj Thuppal, who are also hosting a fireside chat to talk about themes raised at the symposium. In Regina, we have Bryan Larkin, Sam Hazen, and Shannon Grainger, who are at the Canadian Heritage Museum leading a discussion on values and ethics. And in Winnipeg we have Gina Wilson and Diane Gray, who are at Red River College for a panel discussion showcasing different perspectives on our shared values and ethics. And in the northwest, in Whitehorse, David Millar from Parks Canada is hosting a panel to talk about some of the key takeaways from the symposium. And in Yellowknife, Valerie Gideon and Chris Fox are at the Tree of Peace Friendship Centre to discuss key themes also coming out of the symposium.

So, I just want to say thank you to everyone for coming together today, both in groups and in teams and departments, in agencies and regions, as well as in missions. Now, before introducing our next guest, we will start the session with a quick video highlighting the themes of stewardship and excellence.

[00:02:11 Video opens with title page. Text on screen: Reflections on Our Values: Stewardship.]

[00:02:14 Dylan Jenkins appears full screen. Text on screen: Dylan Jenkins, Indigenous Services Canada, Ottawa, ON.]

Dylan Jenkins: The value of stewardship. It's about caring for resources. It's people. It's always people first.

[00:02:20 Franco Pagotto appears full screen. Text on screen: Franco Pagotto, Health Canada, Ottawa, ON.]

Franco Pagotto: We try to be as transparent as possible.

[00:02:24 Video shows a person working in a lab, then back to Franco Pagotto full screen.]

Franco Pagotto: So, from the highest level of government down to my immediate manager, every penny that I spend in my lab to do the work is accounted for.

[00:02:32 Natasha Cote-Khan appears full screen. Text on screen: Natasha Cote-Khan, Public Services and Procurement Canada, London, ON.]

Natasha Cote-Khan: I feel a great sense of responsibility to design a system that works well so that people stay,

[00:02:37 Video shows people in a variety of government workspaces.]

Natasha Cote-Khan: and they have that enjoyable experience and remember their years with fondness and enjoyment.

[00:02:44 Video shows title page. Text on screen: Reflections on Our Values: Excellence.]

[00:02:48 Honey Dacaney appears full screen. Text on screen: Honey Dacaney, Treasury Board of Canada Secretariat, Toronto, ON.]

Honey Dacaney: Excellence is a culmination of all the other values being brought to life. And so, integrity, respect for people, respect for democracy and stewardship.

[00:03:01 Mireille Lacroix appears full screen. Text on screen: Mireille Lacroix, Public Health Agency of Canada, Ottawa, ON.]

[00:03:05 Video shows people in a variety of government workspaces.]

Mireille Lacroix: When we don't have all the answers, we open ourselves up to curiosity, to recognizing the contributions that other people can make to our work to help us achieve a result that would be better for everyone.

[00:03:15 Video ends with symposium title page. Text on screen: What Unites Us, Defines Us; Values and Ethics in Today's Federal Public Service.]

[00:03:18 Nathalie Laviades Jodouin appears full screen.]

Nathalie Laviades Jodouin: Delivering excellence is one of our fundamental purposes, and part of this is finding creative and innovative solutions to the complex challenges we face. And one area, that has had a profound impact on both public service excellence and stewardship, is artificial intelligence. And that's the focus of our next segment.

Before I hand over the mic to our keynote speaker, however, we're going to conduct another quick survey to get a sense of how AI plays a role in your daily work and to set the stage for the discussion ahead.

[00:03:53 Split screen: Nathalie Laviades Jodouin, and wooclap poll results.]

Nathalie Laviades Jodouin: So, as we did earlier this morning, you're now used to it: wooclap.com, VEOCT to open the survey. And the question being, do you use artificial intelligence at work? Do you use artificial intelligence at work? So, we're going to take a look at the responses. So, a bit of a tie there, or close, between regularly use and using it occasionally. Oh, although occasionally is now trending upwards. Interesting. I have never used AI at work at 32%. Keep those responses coming. Let's see. So, it's fair to say most either regularly or occasionally use it, but there is quite a bit of a chunk who have never used it at work. So, that'll be interesting for the discussion that's ahead. Keep those coming. We want to be able to poll those responses. This is really helpful information.

[00:05:13 Nathalie Laviades Jodouin appears full screen.]

Nathalie Laviades Jodouin: All right, so as we were mentioning, with the focus on artificial intelligence, these advances come with great responsibility, as we need to use them both ethically and effectively. That's what our next guest is going to address, talking about the opportunities and challenges that artificial intelligence brings to our roles and the link between these factors and the values and ethics of the public service. So, I now have the pleasure of introducing Dominic Rochon, the Deputy Minister and Chief Information Officer at the Treasury Board of Canada Secretariat. The floor is yours, Mr. Rochon. Thank you.

[00:05:53 Dominic Rochon takes the stage, and then appears full screen. Text on screen: Dominic Rochon, Deputy Minister and Chief Information Officer of Canada, Treasury Board of Canada Secretariat.]

Dominic Rochon: Thank you very much, Nathalie. Hello everyone. Hello, everyone. I'm delighted to be here and be part of this very important conversation that we've been having over the last couple of days. I note that we've been talking about values and ethics from many different angles, and I've listened intently, and I'm hoping that all of these conversations have stimulated thought and reflection for you as much as it has for me as I've been sitting in the front row. And I know not too many people have ventured up front here, so hopefully I haven't scared anyone with this conversation. But as I listened intently the last couple of days, it stimulated a lot of thought.

Right from the opening traditional prayer from Elder Verna yesterday, we heard about the notion of seven generations thinking, and I could not help but ponder how we can sometimes barely figure out how to solve the problems of the here and now, let alone be thinking about seven generations. I believe the Clerk said it best in his opening remarks yesterday when he mentioned that we were here to learn over these two days, and the keynote speakers and panel conversations so far have certainly given me pause to think about the journey that our public service has been on.

I note we are reflecting about how we behave, how we treat one another. Words are ever important, and if I may say, even the simplest things like how we dress, or indeed, where we work can quickly become something very controversial. These are not issues that were necessarily heavily debated when my grandfather, or indeed my father, made their way through their careers as public servants. But you will note that I'm wearing a suit and tie here today to address all of you. And you might be surprised to hear how many times people poke fun at me for my attire, particularly in IT circles, criticizing me for being too formal, and threatening to brandish a pair of scissors the next time I show up with a tie. And yet, as the son of a career diplomat, my father's wisdom about dressing a certain way out of respect for the people you will be encountering always rings true in my ears. So, simple questions about values, integrity, and respect abound, and crises of conscience can arise at any moment over the smallest of things. And that's before we even broach the ever-evolving subject of technology.

In my role as Chief Information Officer for the federal government, I'd like to speak about how public service values and ethics must be the cornerstone of the use of artificial intelligence in government. Of course, AI is on everyone's lips these days. In fact, I don't think there's been a speaker or a panel that has not mentioned AI in the two days that we've been here, and there's not a day that goes by that I'm not asked about AI. To the average person, the expectation seems to be that AI is this revolution that has landed upon us and will be transformative, that your IT departments will just sprinkle a little AI stardust on products and services and things will magically be better. But it's, of course, much more complicated than that, because how we use AI is, at its heart, a conversation about values and ethics. And that's what I want to talk about today.

Now, I have resisted creating an avatar in three minutes that I could have had interacting with you on the giant screen behind me to walk you through various AI use cases. Frankly, I haven't even availed myself of wooclap, despite the fact that Nathalie did ask one question, so I'm afraid I've chosen a much more traditional way to convey my message to you. So, dare I say, it might be a bit boring. But suffice to say that I hope to frame things for the panel discussion that will follow, where I have no doubt there will be ample opportunity for a much more lively and perhaps spicy dynamic. So, until the panel arrives, bear with me. AI, of course, isn't anything specifically new. We've been doing it since the 1950s, but in recent years, it has evolved to a place where it has the potential to open all sorts of new ways of doing things that will enable us to work more efficiently and better serve Canadians. At the same time, it can also have life-changing consequences for the people we serve, and it must therefore be used responsibly and ethically. Nathalie mentioned this in her opening remarks. I choose to channel the Marvel comic universe in saying that with great power comes great responsibility.

So, the widespread adoption of generative AI, and the fact that these tools are going to be in everyone's hands, if they aren't already, must be governed by guardrails and done responsibly to prevent and address bias, protect human rights and democratic institutions, and enhance public trust. Yes, AI has great transformative power, but fundamentally, this revolution is about managing change while being guided by our values and ethics, and that's what we're working towards in government.

Before I get to what I call moral guideposts, I would like to set the stage by looking at the broader AI landscape in Canada. Canada has been a leader in artificial intelligence and deep learning since the 1990s, thanks to innovators like Geoffrey Hinton, who, as I think the Clerk mentioned, recently won a Nobel Peace Prize, or a Nobel Physics Prize, rather, and Yoshua Bengio, and thanks to essential early investments by the federal and provincial governments. Canada was the first country in the world to implement a national AI strategy. Today, rapid advances in generative AI in particular are unlocking immense potential for our country, dramatically improving productivity by reducing the time spent on necessary but laborious or repetitive tasks. AI also has remarkable potential to make the world more accessible to people with disabilities, allowing them to access new skills that were previously out of reach. Researchers and businesses are using AI to create incredible new innovations and job opportunities across every aspect of the Canadian economy, from drug discovery to energy efficiency, housing innovation and improved hospital care.

So, the transformation that we're seeing across the board in Canada when it comes to AI is indeed significant. Within the federal government, we've been developing our own ability to leverage AI and other automated tools in a responsible way. Our human-centric approach to the development and deployment of AI prioritizes transparency, accountability, and fairness in automated decision-making, and there are several departments involved in developing our AI ecosystem. Innovation, Science, and Economic Development Canada is looking after AI regulation for the private sector and the funding of institutes. Indeed, since 2017, ISED has overseen $2 billion worth of funding for the Canadian AI ecosystem, with an additional $2.4 billion identified in Budget 2024. ISED's efforts include, for instance, the introduction of the Pan-Canadian AI Strategy, to drive AI's adoption across the economy and society, as well as the creation of an advisory council on AI to guide AI growth, leverage expertise, and ensure it is grounded in human rights, transparency, and openness. They are also the lead, of course, on Bill C-27, which is winding its way through Parliament and, if adopted, would introduce, among other things, an artificial intelligence and data act.

For its part, Global Affairs Canada is involved at the G7, the G20, the Council of Europe, the United Nations, and other international fora, negotiating international standards for the ethical use of AI. And of course, the Canada School of Public Service has an important role to play, as they're putting in place an awareness campaign to inform all public servants of this transformative technology and providing AI-related training. For its part, my department, the Treasury Board Secretariat, is looking after the guidelines and rules for AI's adoption and responsible use by public servants.

And, needless to say, a great many departments and agencies, from the National Research Council to Environment Canada, Agriculture and Agri-Food Canada, or the Space Agency, have gone far beyond merely dipping their proverbial toes in the AI pond. And you'll also hear that DND, and the Communications Security Establishment, and StatsCan, and a number of other departments and agencies already have their own AI strategies. The Privy Council Office, with Mark Schaan's recent appointment as Deputy Secretary to the Cabinet for Artificial Intelligence, has the daunting task of coordinating all of these moving parts. So, AI is here. We're creating an AI ecosystem in government, we're investing money, and we're asking every public servant to leverage it.

I mentioned a moment ago that the Treasury Board Secretariat has put in place a series of policies, risk management frameworks and tools to help the public service use AI responsibly. One of these tools is the Directive on Automated Decision-Making. Departments that use automated decision-making systems for certain programs, including those that rely on AI, are required to comply with the requirements of the directive. For example, people need to be given important information about when and how automation is used to make decisions that affect them. Decisions made by AI systems must be fair and accurate, and potential negative impacts of automation must be continually identified and minimized.

I therefore encourage everyone who's currently using AI or considering doing so in the future to familiarize themselves with this important policy tool. The Directive on Automated Decision-Making builds on the Algorithmic Impact Assessment tool to help federal public servants identify, assess and reduce the risks associated with the use of AI in decision-making. Furthermore, in February, TBS published the second version of the Guide on the use of generative artificial intelligence. This guide provides public servants with advice, principles and best practices on the use of generative AI tools, with an emphasis on its safe and ethical use.

This guide on the use of generative AI speaks to the importance of aligning with the FASTER principles of fairness, accountability, security, transparency, education, and relevance, principles I'll get to in just a moment. Overall, all these instruments will help you, as public servants, be more productive and deliver higher quality work, while at the same time hopefully helping you manage potential risks, such as generating inaccurate information, amplifying stereotypes, or compromising privacy and security.

And of course, there are many examples that demonstrate how AI is now being used to streamline government operations. I'll offer up two quick ones. Employment and Social Development Canada is using AI to increase the efficiency of administrative processes, such as automatically assessing the relevance of comments in Record of Employment forms. And the GAC document cracker, or Doc cracker, developed by Global Affairs Canada, uses AI to help officials quickly find the information they need by summarizing and organizing large volumes of documents.

There are undoubtedly many more examples, but at the same time, what we must remember is that generative AI tools are just that, tools. As such, they should be evaluated for their potential to help increase the efficiency and productivity of public servants, not for their potential to substitute for a high performing workforce.

I should also mention that last May, the President of the Treasury Board launched a panel on Canada's first AI strategy for the federal public service. This strategy will align responsible efforts in this field across government, including how we use it to deliver services, conduct scientific research, strengthen cybersecurity and achieve efficiency in our operations. This will help us improve how we serve Canadians and how we train and develop our workforce so they are ready to adapt to the changing workplace. These consultations have already taken place with people in academia, bargaining agents, civil society, public service, Indigenous communities and industry, and those with the public began last month. If you have not yet participated in the public service consultations, you can still participate in the public consultations that are currently underway. The feedback we've received so far reinforces that our strategy must be focused on people, collaboration and trust. Trust, trust and transparency, is something that came up time and again yesterday.

Trust is essential. This means respecting inclusive and equitable practices that meet the needs of diverse communities; ensuring transparency, accountability, privacy and security of our AI initiatives; and minimizing their impact on the environment. Essentially, we want Canada to publish, as other countries have done, a strategy for AI in the public service in the spring of 2025. This is an important step towards adopting a consistent and uniform approach to AI within the federal government.

Now, with respect to your daily work, the Call to Action and the guide on the use of generative AI are clear. Federal institutions should explore AI tools to support and enhance their operations. And let me debunk a persistent myth: that public servants are not allowed to use generative AI in the government.

That's not true. We can use it, but only in a responsible and ethical way. We're also well aware of the expectation that these tools will impact the way we work, but we'll need to adapt. The point is that we can't miss the boat or be afraid of using this technology. Now that we have the guardrails in place, you'll need to use it.

But you can't manage AI through controls alone. You'll have to use not only our values and ethics as guideposts, but your own sense of right and wrong to maximize the use of these tools to enhance our effectiveness. And with that flexibility comes accountability. As I said at the outset, this is about managing change. And what does that look like in our everyday work?

To help you understand, my office has prepared a document entitled Generative AI in your daily work, which was posted today on the virtual kiosk of the Canada School of Public Service website. If you're not sure what generative AI is, it produces material, like text and images, based on what you ask it to do. Examples of such tools include ChatGPT, Copilot and Gemini. All these tools are here to help you, but it all comes down to one question: should you use generative AI for your project? Every situation is different. So, make sure you think about these guiding principles and your moral values before using AI.

The document we published today will help you use AI responsibly and in accordance with the Values and Ethics Code for the Public Sector. For example, it shows you what generative AI can be used for: writing presentations, report outlines, speaking notes, meeting minutes, and other documents; exploring creative ideas; and creating images for presentations. But it also describes what you can't use AI for: producing inappropriate, illegal, or unethical information; creating a single source of information for important business decisions; creating images of people; and creating materials that could mislead people or spread misinformation.

And to help you learn how to use generative AI tools responsibly, there's a handy acronym that I mentioned earlier to guide you. That acronym is FASTER, and it identifies six principles as well as do's and don'ts associated with each principle.

So, in the FASTER acronym, F stands for fair, so that you check to ensure that AI-generated output is representative and inclusive and doesn't contain harmful stereotypes. The A is for accountable; in other words, take responsibility for what you prepare using AI. S is for secure, meaning you use public tools with unclassified data only and don't input personal, sensitive, or protected information. T is for transparent, meaning you indicate on the final product that you used generative AI, and you inform your manager when you use AI to complete your tasks. E is for educated; for example, you take courses and read articles on using generative AI. And finally, R is for relevant, which means remembering that generative AI isn't appropriate for all uses. There's a lot more in this starter kit to the interesting and exciting world of AI, and I recommend you familiarize yourself with it before using generative AI in your work. AI is inevitable, and everyone is going to need a basic understanding of it, including when it should be used. So, I'll repeat, the Canada School of Public Service is also offering courses on it, and I really suggest that everyone look into that.

So, I told you it was going to be a little bit lengthy, but I needed to get through all of the various guideposts that we have in place. We have a panel discussion ahead. We'll get to the finer themes of AI, and perhaps specific examples with that panel. I've stayed clear of discussing how much this might cost, or how we need rigorous data strategies before we can truly adopt these tools, or indeed how we're tackling and overcoming our serious situation in the federal government when it comes to technical debt. We'll need to invest in compute power, in digital skills, and questions abound about the impacts on the environment, or, as we heard earlier, how this impacts productivity. All topics that may arise with the upcoming panel.

But before I hand things over to them, maybe I'll leave you with this one thought. As public servants, our collective commitment to ethical and responsible AI must be unwavering. We must advocate for AI that combats barriers to inclusion in the workplace and respects human rights and democratic values, and this is reflected in our AI strategies and policies. It's also incumbent on us as public service employees to manage this change and uphold our moral contract with Canadians.

To do that, connect early on with policy frameworks and guidelines that ensure AI is developed and used in a way that respects human rights and our values. And consult with experts, both within your organization and outside, to explore and understand the ethical considerations of a particular use of AI. You can also reach out to my office within the Treasury Board Secretariat for assistance, guidance, or collaboration on AI related matters. And don't forget to check out the latest information on Canada.ca.

AI is here. It's changing the way we work, so let's get comfortable with it, use it responsibly, and help guide the AI revolution. Thank you very much. Thank you.

[00:26:25 Nathalie Laviades Jodouin appears full screen.]

Nathalie Laviades Jodouin: Thank you very much, Mister Rochon, for shedding light on why AI should be a topic of central discussion, and how it intersects with values and ethics. And thank you for the shout out as well to the School, but also many other resources that you will find if you visit our virtual kiosk, which I'm sure we will scan. But you're already familiar with it, so go check that out. So, let's continue the conversation by hearing from other public servants, both in the federal government but also other jurisdictions, as they share their experiences and best practices using AI. So, the panel discussion, you are now used to it, will be followed by a Q and A, both here in the room and with our virtual participants. So, I'm now going to be inviting our panelists to join us.

[00:27:22 Camera shows the panelists as they take the stage.]

Nathalie Laviades Jodouin: So, first, welcome Mark Schaan, who's Deputy Secretary to the Cabinet, Artificial Intelligence at the Privy Council Office, who will also be moderating today's discussion. Welcome, Mark. Also joining on stage is Ima Okonny, Assistant Deputy Minister and Chief Data Officer at Employment and Social Development Canada.

[00:27:50 Nathalie Laviades Jodouin appears full screen briefly, then we see the panelists participating virtually full screen.]

Nathalie Laviades Jodouin: And also, a couple of participants who are joining us virtually. First, Natasha Clarke, Deputy Minister at the Department of Cybersecurity and Digital Solutions in the Government of Nova Scotia. Welcome, Natasha. And Anna Jahn, Senior Director, Public Policy and Inclusion at Mila, Quebec Artificial Intelligence Institute. Welcome, Anna.

[00:28:13 Nathalie Laviades Jodouin appears full screen.]

Nathalie Laviades Jodouin: So, a reminder to everyone, wooclap.com, VEOCT to submit your questions. And with that, over to you, Mark. Thank you.

[00:28:18 Split screen: Mark Schaan and Ima Okonny are seated on stage; Natasha Clarke, and Anna Jahn appear in video chat panels.]

Mark Schaan: Thank you, Nathalie, and thank you all for being here today. It's really a pleasure to have the opportunity to speak with such an excellent group, it's really a great group. I hope that our panel today has an approach that achieves the result that <inaudible> to be expected.

[00:28:42 Mark Schaan appears full screen. Text on screen: Deputy Secretary to the Cabinet, Artificial Intelligence, Privy Council Office.]

Mark Schaan: I think we'll have to see whether or not we can live up to the hype, which is normal for those of us who talk about AI; we are often hype busting, or thinking about whether or not we're actually going to be able to do what it says it possibly can. We're the last panel today [before] the Clerk's final comments. Our conversation was preceded by our other panel today, which was amazing, and many different opinions were shared that are important for our conversation today. I was really struck by the conversations over the last two days and the ways in which they really press on the things that we're going to talk about today. Yesterday, one of the panelists said that trust is an outcome, it's not an action.

[00:29:30 Mark Schaan and Ima Okonny are seated on stage.]

Mark Schaan: And I think when we think about artificial intelligence, we often think about it as a thing and not a process. And I think we actually, hopefully today, will talk a little bit about what is the process of AI that actually arrives at its responsible usage, and its effective outcomes.

[00:29:40 Mark Schaan appears full screen.]

Mark Schaan: There were also some really important discussions this morning about the important ability to rethink our processes. I think one of the panelists talked about the fact that our democracy and our institutions shouldn't be so fragile as to be vested in one particular configuration or one particular kind of format. And I think that's true for how we think about artificial intelligence. It really does have to be open to the possibility of doing things in a new and different way and maybe to concentrating our efforts onto some of the other spaces where we can really add value. It's clear that artificial intelligence has incredible capacity and influence whose effects are just starting to be felt in the economy and society. It's clear that artificial intelligence is a technology that touches our lives in ways that perhaps seem new compared with older innovations in technology.

AI is hitting us in new and very different ways. And on the one hand, it holds this incredible promise, the ability to no longer care about repetitive tasks, to maybe be able to free up some of our capacities to be able to allow machines to do some of the work that actually underpins some of what we're up to.

On the other hand, it raises really important values and ethics questions, particularly for those of us who are public servants. It raises questions about why it is that when you ask a number of generative AI platforms what a leader looks like, they never show you anything other than a white man. It raises really important questions like: what is its relationship to the creative sector that actually underpinned much of the data that feeds into these models? Where was that data created? Who was compensated for it? Where was it labelled in the world? And what about the environmental and other consequences of these processes? At the same time, it's this amazing opportunity for us, as a public service, to potentially leapfrog and utilize a technology that Canada has almost had more responsibility for than any other nation in the world.

And so, how do we strike that balance? And so, I'm super lucky today to be able to be joined by such an incredible team who is going to help us kind of wrestle with some of these questions. And so, I'll ask them each to kind of grapple with this question at the outset, which is: how and what do you see as the opportunity and the considerations for the responsible implementation of AI in a public service context? And maybe I'll start with you, Ima.

[00:32:26 Ima Okonny appears full screen. Text on screen: Ima Okonny, Assistant Deputy Minister and Chief Data Officer, Employment and Social Development Canada.]

Ima Okonny: Thank you so much, Mark. And also, thanks to our CIO. I thought your framing was excellent in terms of really painting the picture of what the ecosystem looks like in terms of the supports for us to get this going. And also, thanks so much to the Canada School [and] PCO for starting this extremely important conversation around the intersection of artificial intelligence, data, and values and ethics.

So, in terms of answering your question, Mark, I've been in the public sector now for 25 years, and I know the Clerk, in the previous panels, asked people to reflect on when they just started in the public sector. So, when I started in about 2000, I had done some contracts before then, what struck me was the care people had for people. Public servants really wanted to do good for Canadians. There was a lot of focus on making sure public servants were equipped to be able to deliver for Canadians. And I've seen that throughout the last 25 years in the public sector, even in my current role in Employment and Social Development Canada. So, when I think about leveraging technologies like AI, so today we're talking about AI, tomorrow it could be something else, I think the power and the opportunity we have is to really harness all the talent we have across the system and leverage this technology for good to be able to deliver for Canadians.

We talked early on about the challenges with trust, challenges around people being concerned about so much risk around leveraging AI. And I think the opportunity for us, as public servants, is to turn this around and to focus [on] what makes us different, for example, from the private sector, and then drive to innovation based on that.

So, for example, in Employment and Social Development Canada, one of the things we did at the height of the pandemic was see how we could leverage data and AI to really drive to those comments that the CIO talked about earlier on, because we saw that our agents were inundated, the forms were coming in. We got so [many] forms because people were losing their jobs. And one of the things we thought about is, how can we support Canadians through this difficult time? And the way we did it was to leverage AI.

We leveraged it legally. We worked with TBS to ensure that all the protocols were in place. We also ensured that the privacy considerations were in place, the security considerations were in place, and we proved that this could be done. Many people said, you can't do this in the legacy system with all the data challenges we had, but we were able to do it and we were able to deliver.

So, I think the question for us, Mark, going forward, is how can we scale some of this across the public sector? Because we've proven that we can do this in a way that meets the protocols. We're very transparent about what we did. Right now, if you go to the open government portal, you will find documentation about what we did. We've been very open about some of these practices we've had within the organization. And I think that work was also able to get people more literate in terms of understanding some of those key data considerations, the foundational considerations around data integration, and then the responsible considerations around AI, to make sure that, as we look at this solution, which is still running today, we're also looking at things like data drift, because the data changes. If you look at the population of Canada, it's constantly changing.

So, we need to be very intentional, to be very deliberate in terms of how we leverage and push for innovation while at the same time sticking to those core values and ethics. So, I'll pause there for now.

[00:36:26 Mark Schaan and Ima Okonny are seated on stage.]

Mark Schaan: Thanks so much, Ima. Natasha, from your perch in Nova Scotia, what are your thoughts on some of these opportunities and considerations for responsible implementation in a public service context?

[00:36:40 Natasha Clarke appears full screen. Text on screen: Natasha Clarke, Deputy Minister, Department of Cyber Security and Digital Solutions, Government of Nova Scotia.]

Natasha Clarke: Mark, I love your grin, because Mark and I have been on a couple of conversations already about AI and I'm sure he's wondering, what's Natasha going to share with the audience? I think a couple of things I'd like to chat about and just share in terms of perspective. Certainly, here in Nova Scotia, I would say we're about one metre into a million-kilometre journey, in terms of AI.

What I would share, though, is the journey that we've been on and the learnings around how to shift a public sector entity, a system that's been an analog government since the 18th century, to one that can work and think differently. And I would say we're still on that journey. And why I feel that's really important to hit home on is that AI is another type of technology. Now, it is a very different type of technology, meaning that what we should have been focused on during the Internet era, and maybe we got away with not being as sharply focused on it then, is going to cause us some significant consequences if we don't start to focus in on things like digital literacy in the public service, data management, and some core infrastructure and foundational things that we need to make sure we're putting in place.

Now, in saying all of that, I think we have huge opportunities. We all know that we're trying to solve wicked hard problems, and I do believe that technologies definitely can help us do that. But first and foremost, we really do need to make sure we're falling in love with, what's the problem we're trying to solve? What's the user need? And really making sure that we're not just saying those words but being in the context of the people that we're either serving or, again, the problem that we're trying to address, and making sure that our hypotheses are actually valid.

So, I think that's my counsel as to the first place to start, because I do think these technologies are going to be incredibly powerful for us as public servants. But if we jump too quickly, and I think Dom said it perfectly, the AI stardust. I get a lot of calls of, Natasha, we need the AI, but what we want to get folks thinking about is, what is that problem? What is that user need? But also making sure we're thinking through the consequences. This type of technology is different than the traditional word processing or traditional processing technologies that we've used historically in the public sector. And so, if I were a policy person, I would need to raise my literacy and understand the implications, not just the benefits, but potentially the unintended consequences of that technology. What I think is really awesome is that, obviously, Dom talked about Treasury Board Secretariat and the guidelines and the principles that have been put in place. We've done similar things here as well, to start to help public servants to feel comfortable.

And the other thing I would share is don't be afraid. And no one has this really all figured out. Microsoft, what was it, about a year ago, was still trying to figure out how to leverage this. So, I know that even for myself as someone in this space, there are times where I feel like I'm behind. Don't let that get you caught up in worrying about that. Just start to use it in the context of the guidelines that have been shared. And when we get into some of the questions later, certainly I'll share some of the recent learnings that we've had, but maybe I'll just leave it at that in terms of some opening thoughts.

[00:40:33 Mark Schaan appears full screen.]

Mark Schaan: Thanks so much, Natasha. And it is always fun to be on panels with you, so I look forward to your next interventions as well.

Anna, as someone who helps guide public policy practitioners around the world on some of these issues, what do you see as the opportunity and some of the considerations for the responsible implementation of AI in a public service context?

[00:40:58 Anna Jahn appears full screen. Text on screen: Anna Jahn, Senior Director, Public Policy and Inclusion at Mila.]

Anna Jahn: Yes. Thank you, Mark. And I better step up to be as fun of a panelist as Natasha is, my lord.

So, I'm joining you right from a non-public service context. I'm joining you from an AI Research Institute. And at Mila, we are really, in a way, in a privileged position to be a bit at the heart of the Canadian AI ecosystem, meaning not only do we work with excellence, at Mila [there is] currently 1300 researchers here, but we also work with industry, we work with governments, and we work with international organizations to partly support and help them think through the adoption of AI technology. My team in particular tries to really even more bridge the gap between the research community and policymakers. But generally, we really are trying to support, in a way, everyone on this journey. So, we can see a bit and we can compare a bit what we see here.

I would say, on the opportunity side, and I think a lot of things were already said by my two brilliant panelists. I think, though, what we see, or maybe I personally see as the biggest opportunity in a way, is that governments have an opportunity to actually respond to societal needs with AI technology. A lot of the AI that we're seeing being put out and being deployed, are responding to the platform owner's needs, the producer needs, and the user needs. And I don't want to at all diminish user needs. I think the thinking about human-centred design and really thinking about user experience is really important. But often the societal needs are getting a bit forgotten. And I think ultimately governments have an opportunity to explore how we can really respond to some of these really big needs and huge challenges that we've been grappling with, whether that's in the area of climate change, health, etc, etcetera. And, I think, make a general case how AI can actually really make a difference and better deliver ultimately, of course, around citizen expectations of what governments can do. So, that's the small, or rather, really, really big challenge here and opportunity. And of course, there are lots of pieces that already were mentioned around better citizen service delivery that ultimately leads to trust as an outcome.

Around considerations, maybe I'll name four, that I think we are seeing across the ecosystem and across governments. First, I really want to just emphasize again the point around education and training. If you have a workforce that ultimately doesn't know what AI is, it's really, really hard to think about, fall in love with a problem and come up with solutions here. And I don't suggest that everyone gets a PhD in machine learning. I really, truly believe that every public servant can have a good grasp of what AI technology is within a day. That learning can be delivered in all kinds of different ways. It doesn't have to be the MIT course in person, but ultimately, I think we all have to figure out a way that, across the organizations, people know what AI actually is. Because I think only then can we really go to the problem definitions.

And I fully, fully agree with Natasha that we need to start with a problem, and we need to fall in love with the problem and not start with, oh, there is this tool that can do cool stuff and I'm going to now put it somewhere in my system. So, it needs to be around the problem. The problem needs to come first, but we can only identify the solutions if we have a fundamental understanding of what AI can do, and how it responds to these kinds of problems. And again, it doesn't have to be complicated – but AI is good in certain things and if we understand that and how it ultimately then is built, I think we can be a lot better around the solution finding.

Third, I think we do need to start thinking about this more as a change management process as opposed to just adoption of a tool or a technology. I think often the challenges that we're seeing with adopting ultimately a quite disruptive technology and a technology that really challenges, in many ways, everything that the public sector partly stands for, or actually everything that a democracy almost stands for. It is slow and it is consensus building and all these kinds of things. And so, thinking about it more as a change management process and therefore being really mindful on how we introduce it, how do we make sure that, for example, our own employees really understand what is being automated, and that they're not losing their job, et cetera. So, a lot, of course, is therefore then around communication. But I think the more we shift into a change management mindset, the better.

And then ultimately, I would say the last point, I think considering and thinking and talking about AI more with regard to augmentation as opposed to automation, because I think ultimately this is a technology that can augment certain tasks, certain processes. And I think the word automation sometimes may be a bit misleading. And it also maybe creates fears that I think maybe are – yes, I think with the word augmentation [it] might be better managed. And so, I think that shift of thinking of it as an augmentation with, of course, always a human, for example, in the loop, because if we talk all of a sudden just about automation,

[00:47:06 Split screen: Mark Schaan on stage; Natasha Clarke, and Anna Jahn in video chat panels.]

Anna Jahn: I think there are indeed some really core, of course, fundamental questions around how, are these decisions being made, and how does the system actually know how to serve, for example, that person and everything that we mentioned before around bias and the data, etcetera. So, I'll leave it maybe at that.

Mark Schaan: Very interesting and thanks for this perspective. It's so important to focus on the problems and take an approach that recognizes that artificial intelligence is a general technology. It's not just a solution or benefit; it's truly a process.

[00:47:16 Mark Schaan appears full screen.]

Mark Schaan: And, Anna, you've heard me say this before, but I also think we need to think about AI as a long game. We didn't get to the solutions we have in AI without 50 years of rapid and continuous support for foundational science that helped get us to these technologies. And so, let's not think that we need to solve it all in a day or get from zero to 100 in a day.

[00:48:12 Split screen: Mark Schaan on stage; Natasha Clarke, and Anna Jahn in video chat panels.]

Mark Schaan: Natasha, maybe a follow up for you on just again, thinking about your spot in Nova Scotia, some of what you're thinking about in terms of early use cases. And some of maybe the fundamentals that need to be put in place as you think about early use cases like data; like appropriate technological infrastructure; like skills and capabilities; and maybe where you might see some opportunities for engagement between provinces and territories and the federal government as we explore the responsible use of AI together.

[00:48:40 Natasha Clarke appears full screen.]

Natasha Clarke: Thank you for that. A couple of things. I might just bounce around a little bit. In terms of use cases, like I said at the outset, I think we're just starting to dip our toes in in a few ways. Certainly, our health system has been embracing AI for some time in terms of x rays, lab results, some of those, I want to say more lower hanging fruit kinds of examples. Certainly, in terms of the public service we've been rolling out copilot and I can see how public servants are using that and experimenting with that, which is brilliant. We just recently had some learning as well where we're actually going to issue some guidance on AI meeting assistance, where we actually had some AI meeting assistants join some meetings. And I think some public servants were perhaps a bit unaware and what the consequences of that was, especially as we start to move to maybe confidential or in camera meetings with stakeholders. And so, I think that would just be something to share, and [we're] happy to share that guidance as we publish that here in Nova Scotia. But back to some of the comments that Anna was making in her opening and then back to that fundamental infrastructure piece and the collaboration.

[00:50:04 Split screen: Mark Schaan, and Ima Okonny on stage; Natasha Clarke, and Anna Jahn in video chat panels.]

Natasha Clarke: I feel that, in Canada, I know we have a productivity issue. I really see the public service wanting to –

[00:50:14 Natasha Clarke appears full screen.]

Natasha Clarke: I would like us to get back to our roots in terms of creating the public good and public value. And when I think about innovations that have been made historically, whether it be GPS; the Internet; etcetera, I think we also should not hold ourselves small in terms of those types of innovations that we collectively can work on together to drive a very different type of future for my son, for example, who is turning 16 next week.

So, when I think about digital public infrastructure, when I think about how might we leverage the data that we have stewardship over in a very privacy and consent-based way, how do we build proper technology infrastructure like data exchanges? How might we take an upgrade to how we issue identity today, meaning we have paper driver's licenses, we need to move to digital trust and credentials. And so, there are other elements of that kind of digital public infrastructure that I feel that we collectively can be working on together to get at some of the things that Anna was talking about, which are those bigger societal challenges.

I'm really excited. In the past year and a half, two years, there's been a federal provincial territorial table of ministers and deputy ministers focused on cybersecurity and digital trust. AI is a topic at that table, and productivity most recently was a part of that conversation. And how we can collaborate and share. But I would like to challenge us to move past, bringing my guiding principles and I'm going to share them with you, and then we're going to have a muffin and be excited about that, to actually digging in on, how might we collaborate on that digital public infrastructure to put those foundational pieces in place. It doesn't mean government has to solve all the problems, but we can set the table that then can create some enabling activity, whether that be with research and academia, private sector, what have you.

And certainly, we've seen other nations globally being able to leapfrog Canada in those spaces. And what I'm concerned about is they're going to eat our lunch. But I believe in what we can do. And I know just even being invited to participate in a panel like this today invites those conversations because I really get fired up about, how might we work together to tackle some of the things that, like I said, Anna had shared in her opening remarks.

[00:52:52 Mark Schaan appears full screen.]

Mark Schaan: Thank you, Natasha. I'm also an optimist, particularly regarding the possibilities for coordination and cooperation between the provinces and territories. It's the same values and ethics issues; it's really the same infrastructure and capacity issues; and as Anna said, it's truly a change management process. And really, as we go through this change banded together, we will be so much stronger together if we actually kind of figure out how we can do it in ways that draw on our respective capacities.

Ima, we talked about, and you commented on the possibility of increasing and improving services to citizens with the use of artificial intelligence. And maybe you can talk a little bit more about the possibilities and considerations around citizen-facing services and really some of how you guys are drawing on that at ESDC as you think through these very important and really meaningful kind of opportunities.

[00:54:10 Ima Okonny appears full screen.]

Ima Okonny: Thanks, Mark. I would just say that before I came to the panel, I looked at StatsCan website. I looked at the population clock. You know, the population clock they have. If you don't, if you haven't seen it, it's fascinating to look at it. And our population there says we're going to over 40 million. I also went on StatsCan's website. We're very lucky. We have a very solid statistical agency in this country. I looked at the population of Indigenous people. I also looked at a population of French speaking people. I looked at the population of racialized Canadians. I also browsed through some of the studies we've done around the underserved, the people that we're not even reaching. I looked at some of the considerations and some of the challenges we faced during the pandemic and who was mostly impacted by that pandemic. So, I will say that I think we've researched enough to know where some of the challenges are in the system, to Anna's point about social good. We've already done – there's a lot of research out there. And every time we go and talk about our data strategy, some of the responses we get is, you collect so much data on us, what are you doing with all this data?

So, when I think about citizen-facing services and enhancing delivery to Canadians, I think about the opportunity we have, given everything we currently have in place. We have a very talented public service that understands some of these problems. I saw it during the pandemic. I saw people working [at] 03:00 am; delivering; building systems; putting things together; going to the front lines to deliver to Canadians. I think the question for us now is, given everything we know, we already know about the system, the challenges, the issues, the people that were not served properly, how can we turn this around to target and reach the people that we're not reaching? And I say, to be able to do this, we need to work closely in terms of understanding the population segments.

So, working closely with StatsCan. We do a lot of work with StatsCan to understand the different population segments; understanding the urban/rural divide; understanding that the population is aging; understanding that disability is growing. You know, we have more people with disabilities now than before, so as we're thinking through service delivery and citizen-facing services, how are we thinking through accessibility? How are we thinking through and making sure that as we're leveraging artificial intelligence, we're using proper training data because it's been proven that francophone text and anglophone text is not treated equally. So, how do we make sure that we're putting all these considerations in place? How do we make sure that some of this data we're leveraging properly reflects the diversity of the Indigenous populations, and the challenges that we've seen? Now, we already know of because we've researched all these things.

So, I think I'm in the space now, Mark, and the space now that we need to move, and we need to move in terms of saying, yes, there's some high-risk artificial intelligence cases, we know that. We know that there's the risk of leaving people behind if we're not doing things deliberately and intentionally. And the TBS has put out a lot of tools. We've worked with some of our institutions, like Mila. We worked closely with Mila. We've put some of the governance frameworks in place. So, the question for us, as public sector is, how can we leverage current things we have to drive to better delivery. And, like I said in my opening, we do have those opportunities. We do have classic cases. Like for example, I spoke about how we leverage the record of employment comments. But there's some work we did in my department that I thought was one of the outstanding cases of the use of AI for public good, just like what Anna talked about.

So, I'll give an example. There was a policy change that impacted some of the most vulnerable people in our society. And there needed to be extensive reviews and case notes after case notes and agents were inundated with the amount of files that were coming out. So, what we did is we leveraged natural language processing, machine learning to say, can we mine these notes so that we triage the cases? So, keeping the humanity loop, like Anna said, but then triage those cases to really accelerate the most at-risk populations getting the benefits that they were entitled to. So, like I said, the reason we were able to do this is we knew there was a problem and we anticipated that if we didn't move quickly, we're putting people in vulnerable situations.

So, there is the risk, but there's some low risk, high impact cases, use cases, and that could be leveraged today. And I think the opportunity for us, as public servants, is we already know what the problems are because we saw it, we even saw it during the pandemic. So, I think the challenge is how do we pick the most critical issues to solve and focus on them, and then drive through innovation and solutions?

[00:59:42 Split screen: Mark Schaan; Natasha Clarke and Anna Jahn in video chat panels.]

Mark Schaan: It's incredible. It's such a new perspective. From time to time, we have conversations about AI and all the concerns and issues, especially the issue for groups in minority communities. It's refreshing to think a little bit about, how could we use the power of technology and the power of data to actually inverse that?

[01:00:08 Mark Schaan appears full screen.]

Mark Schaan: So, we know that technology often has its most negative and challenging effects on marginalized communities. They're often the ones most personally and first impacted. And so, how actually might we be able to turn that on its head and really think about the ways in which AI and our data could actually best serve those least accommodated and effectively served by our systems.

[01:00:50 Split screen: Mark Schaan; Natasha Clarke and Anna Jahn in video chat panels.]

Mark Schaan: Anna, you have a privileged perspective in the sense that Mila, as a leading light in AI, has the chance to be able to engage with all sorts of players, both familiar and unfamiliar with AI, as they grapple with and think through the power of this technology. And we know that Canada is not an island. We're very much on this AI journey alongside a lot of partner countries across the world, and reflections from that sort of special spot that you sit in about some of what others are dealing with, use cases, considerations, and maybe any lessons learned that we should be thinking about, particularly as the world embraces this technological journey.

[01:01:24 Anna Jahn appears full screen.]

Anna Jahn: Sure, I will respond to that question in one second. But I just wanted to build on one other thing that Ima just said, and I just wanted to maybe add to that. I couldn't agree more in terms of the approach to really think of problems and needs and actually turn it a bit on its head and respond to those needs, especially of more vulnerable populations. I just wanted to say, though, with my inclusion hat on, I do think also we need to do that with people that know the technology and are coming from those communities. So, we can keep building, the solutions can be built by people who are sitting somewhere in an office and have no lived experience. And so, at Mila, we try to do that through a couple of different programs. One is specifically for Indigenous talent in AI, and one is trying to bring in more women and gender diverse people in the field of AI, because if we look at our student population, we have some real diversity issues. And so, that's just something to add here that I think both on all sides, we need to actually think about the inclusion.

So, now back to the international question. In a way, the good news is no one has figured this out and everyone is trying to figure it out. Every country that we talk to, and we do have the privilege of getting, for example, a lot of international delegation that come visit Mila because they're fascinated in a way by the model as a nonprofit organization that is working in this space. So, lots of governments, a lot of provincial governments, all levels of governments are trying to figure this out but also, of course, the private sector. It's not that the private sector has figured out how to adopt AI perfectly. They have the same change management questions. They have the same questions around AI governance, et cetera. So, in a way, really, let's be clear that no one has found the silver bullet.

I would say though, that those countries who are more advanced on their kind of digital transformation journey, I think have a leg up. And so, every country that has more seriously put in resources and education, et cetera, for their government or their public sector has a leg up in terms of how to think about now adding AI to that mix. It comes with its own specific challenges. But a lot of the things that I think we collected, [that] we have learned about digital transformation, are partly applicable to the adoption of AI. And so, therefore, not surprisingly, Singapore has some really, really interesting use cases and have a really interesting approach, and I would say, probably the most advanced government in terms of when it comes to AI adoption. They also have a really excellent portal and, in terms of transparency, have a really good way of communicating how AI exactly is used and what are the actual use cases of AI in the public sector and how they serve citizens. It's always hard to compare apples and oranges, but you can partly, of course, learn from that. But I would say these are examples of countries who have taken the overall digital transformation very seriously.

There are lots of countries who are trying to play catch up, partly because they haven't actually invested in AI. Germany is trying to really and massively spend on figuring out both digital transformation and AI, because they're a big country and they have a lot of resources. They're doing that. We see in other countries, though, that are leapfrogging. So, we see Nigeria, I think is probably one of the most advanced AI policy landscapes, as well as in terms of adopting AI in citizen service delivery. The US, of course, is also doing that. And I think I would highly recommend for everyone to visit AI.gov.com. It's basically the repository of all US government, not only policies and approaches and processes, governance frameworks, but also it is an ongoing collection of use cases. And I think having one place where citizens can see like, oh, what is happening in health and AI use cases, I think is kind of an interesting and a good way of engaging citizens in this.

I think, we shouldn't forget that Canada has one real advantage here. And that is Canada does have a unique <inaudible> ecosystem. Canada was one of the first countries to come up with a Pan-Canadian AI strategy, and that has resulted in a really rich, not only research ecosystem, but overall ecosystem. We may be lagging behind on the AI adoption and commercialization side of things, but we have a talent that is really the envy of almost every country that comes to visit us. And so, I'm just going to add our organization here to the mix, in terms of collaboration calls, when we're thinking about collaboration, please tell us how we can support you in your journey of AI adoption. So, how can you take advantage – and shout out here to our sister institutes, the Vector Institute, and Amii – it's not just Mila. There are three AI research institutes. There are more than 3500 AI researchers here, and they are all quite eager actually to support governments, especially in the last year now, truly more and more profs come to me and say, how can we support not only AI policy conversations, but how can we support the adoption of AI in government. So, please come and talk to us and see how we can support.

[01:06:52 Split screen: Mark Schaan; Natasha Clarke, and Anna Jahn in video chat panels.]

Mark Schaan: Thank you, Anna. I think it's such an excellent and effective offer for public servants who want to use AI in their field of work. You've touched on a subject that is very close to my heart, which is that I actually see extraordinary promise and excitement. I think part of the reason why some of those profs are super excited to play in this space and to work with public services is actually because it comes down to how it can meet some of our values and ethics opportunities. People are motivated by the notion that we can actually get to service excellence; that we can actually get to ethical and considered supports to democracy; that fuelling the work of government is actually a unique opportunity. And the ability to do that in conjunction with new services and tools, I think, is a huge one.

And, on the other hand, I also recognize, as I did in my opening, that it's not a zone free of values and ethics questions. I noted at the outset considerations around bias and discrimination, thoughts about appropriation and the effective remuneration of content, thoughts around copyright, and then considerations around environment and even opportunity costs: that which we invest in AI might be coming at the expense of other things. So, I wonder if the three of you can comment a little about both how AI can be consistent with, and an opportunity to live out, public service values, and then also some of what we might need to make sure we're thinking about as we deploy new technologies and as we engage in the use of AI in services and in our own day-to-day effort. Because lots of this is actually not going to be widespread, big implementations. It's actually going to be small changes on the back-office side, or the ability to write the context section of a background report with publicly available information that summarizes it way better than I'm capable of doing.

So, some of those thoughts about both opportunities to live out those values, and then maybe some of what we should be really thoughtful about. Ima, do you want to start?

[01:09:40 Ima Okonny appears full screen.]

Ima Okonny: Yes, for sure. I'll start. So, when I think about the opportunities: one of the recent projects we did was leveraging AI to look for challenges in the system, so, systemic issues. Again, you will notice that I talk a lot about leveraging AI to turn things around. We leveraged AI to look at one of the programs and analyze who was getting grants versus who wasn't. And by looking at those trends, by mining through a lot of information, you could actually leverage the output and the outcomes of that work to shape policy going forward. So, for example, if you look at our current population: we have high immigration rates, the population is aging, we can already anticipate what's coming. Talent is thin in the public service. People are going to be retiring. There'll be more demand for accessibility and all of this.

So, we can actually leverage and mine some of this information and then design policy that meets the needs of people. People will say, okay, well, your data is skewed. And what I will say is that within the public service, we have so much data. I've worked in data for the last, over 20 years. We have so much data that we're not even touching yet. So, I think the opportunity for us is to look at that data, look at who we're missing in that data, leverage the different administrative sources of data. Break down some of those silos, because a lot of this data is stored in silos. Break it down. Look at what we can do in the legislation to shift things and look at clients from a 360 perspective, and then drive to innovation and delivery. And if you do that, you can mitigate for bias. There are ways to mitigate for bias.

So, for example, in my team, we've built a data ethics team, and the work of that team is really to analyze, as we push out solutions, whether we're really leaving any segment of the population out. And we also look at the accuracy of what we're doing. So, often you'll see companies say, well, we're at 67% accuracy, or 70% or 90%, but who's that 10% they're leaving out? So, it is important to understand who that 10% is, because if that 10% is comprised of 5% of the Aboriginal or Indigenous population, then we've not done our job as public servants.
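Ima's point about looking past a headline accuracy figure can be made concrete with a small sketch. This is an editorial illustration with invented toy data, not any department's actual tooling: it shows how an overall accuracy of 80% can completely hide a subgroup for which the model is always wrong.

```python
def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that match."""
    return sum(p == a for p, a in pairs) / len(pairs)

def accuracy_by_group(records):
    """Disaggregate accuracy by group.

    records: list of (group, predicted, actual) tuples.
    Returns {group: accuracy} so no subgroup is hidden in the average.
    """
    groups = {}
    for group, predicted, actual in records:
        groups.setdefault(group, []).append((predicted, actual))
    return {g: accuracy(pairs) for g, pairs in groups.items()}

# Toy data: the model looks fine overall but fails group "B" entirely.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 1),
]

overall = accuracy([(p, a) for _, p, a in records])
per_group = accuracy_by_group(records)
print(overall)    # 0.8: looks acceptable in aggregate
print(per_group)  # {"A": 1.0, "B": 0.0}: group B is entirely left out
```

The aggregate number answers "how often are we right?", while the disaggregated view answers the question the panel is raising: "right for whom?"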

So, I think in terms of the values and ethics component, the respect for people, the inclusion, the diversity, it is important to bring that element into everything we do. And as we reflect excellence, whether we're leveraging AI or data, we're bringing that lens of equity, we're bringing the lens of inclusion, and we're making sure that we're not leaving any Canadian out, because we cannot leave anybody out.

[01:12:45 Split screen: Mark Schaan, and Ima Okonny on stage; Natasha Clarke, and Anna Jahn in video chat panels.]

Mark Schaan: Exactly, exactly. We'll get to the question period in just a minute. I'll give my other panelists a quick moment, though, to weigh in on considerations and opportunities to live out public service values with AI. Natasha?

Natasha Clarke: Thanks. Wow. It's a hard thing to just riff on in one minute. So, a couple of things, again, popcorning a few things Anna mentioned earlier about those countries or those jurisdictions that have invested heavily in digital transformation. I cannot underscore that enough. We can get very excited about AI.

[01:13:26 Natasha Clarke appears full screen.]

Natasha Clarke: There are some fundamental pieces that, again, I would just reiterate: things like stop funding projects and fund teams, we need to invest in continuous improvement, those kinds of things. I think another lens through which I would look at this from a values and ethics perspective, which is maybe more external thinking from a public policy point of view, is data privacy and my personal data record in the future. So, I had a great opportunity to go to Estonia and Finland this year to talk to some folks there. And Finland, for example, was rolling out deepfakes and fake news education to their elementary, junior high and high school cohorts. I thought that was really interesting. But it has also got me thinking, as we venture into this space, about not only the service benefits or the efficiency benefits for government, but also that public good and those public protection values that we have to have. And thinking about data privacy: what does it mean now to protect my face and my voice? And then this week I read the article about neural privacy. I wouldn't even begin to understand that.

So, I think we have to think about those lenses not just from an inside-out perspective, but also an outside-in one, when we think about public good and aligning with those values. Because the one last statement I'll make is: we cannot control complexity. And that is the world we are in: complexity. And so, I really feel that the way to navigate through that is to be principles-based and values-driven. And I think this conversation about values and ethics, as we proceed on this journey as public servants, is going to be even more important to help us navigate the complexity that we're all going to face.

[01:15:24 Mark Schaan, and Ima Okonny on stage.]

Mark Schaan: Couldn't agree more, Natasha. Thanks so much. Anna.

[01:15:29 Anna Jahn appears full screen.]

Anna Jahn: All right, I'm going to try to be fast. So, maybe building on those two – principles-based, values-driven – those are excellent. And I would say, though, they need to come alive. And I would say they come alive in two ways. One is they need to be reflected in the governance structures and models that we build around AI systems in the public sector. And secondly, they need to come alive through actual responsible AI practices.

What does that actually mean? It means that everyone in the public service has to think about, what does it mean in my context, and it will actually differ. So, I think it's a great idea to have general principles and values that everyone shares, but then we need to break those down, because what we hear all the time, especially from our technical staff in our AI research, what does it actually mean, fairness? What does it actually mean, transparency, in the day to day? How do I code for that?

And there are, interestingly, ways to code for that. There are technical solutions for that, but it needs to be broken down. It needs to be broken down into very specific practices. It's generally a good idea to probably break it down along, for example, an AI lifecycle, because that really is a good kind of framework to think about it. At what stages do I need to consider? Where do my checks and balances come in? Where do I need to worry about the data sources? Where do I need to worry about bias? How can I then mitigate for those things, et cetera, et cetera.
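Anna's idea of breaking general principles down along an AI lifecycle can be sketched as a simple checklist structure. The stages and questions below are illustrative assumptions for this transcript, not an actual government framework; the point is only that "fairness" or "transparency" becomes actionable once it is turned into stage-specific checks that can be tracked.

```python
# Illustrative only: hypothetical lifecycle stages and checks,
# showing how abstract principles might be broken down in practice.
LIFECYCLE_CHECKS = {
    "data collection": [
        "Are the data sources documented and lawful to use?",
        "Which populations are under-represented in the data?",
    ],
    "model development": [
        "Which concrete metric operationalizes 'fairness' here?",
        "Has performance been disaggregated by subgroup?",
    ],
    "deployment": [
        "Is there a human review path for contested decisions?",
        "Is the system's use disclosed to the people affected?",
    ],
    "monitoring": [
        "Is accuracy re-measured as the population drifts?",
        "Who is accountable for retiring the system if it fails?",
    ],
}

def open_questions(completed):
    """Return the checks not yet marked complete, grouped by stage."""
    return {
        stage: [q for q in qs if q not in completed]
        for stage, qs in LIFECYCLE_CHECKS.items()
    }
```

A team in one department might answer these questions very differently from a team in another, which is exactly the "same principles, context-dependent practices" point being made.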

But we actually need to give people a bit more help in breaking it down, because often the disconnect comes in between the interpretation of those principles and values and the actual practices. And of course, also in how we design the governance structures around it. And so, really, that's where the rubber hits the road. And it can look quite different. It can look different from NRCan to Health Canada, because the context changes, the population we're serving changes, et cetera. So, one size fits all doesn't work in this context, I find. I think the principles can stay the same, but then we need to break them down into specific, context-dependent practices. And that can look very different for everyone. But I think that's ultimately the hard work that we need to get started on.

[01:17:57 Mark Schaan appears full screen.]

Mark Schaan: Thank you, Anna. It's such an important perspective. It's important to recognize that artificial intelligence doesn't exist in a vacuum; it truly exists in a context. As Natasha said, it starts with the data. It's impossible to consider the implications of AI outside of a data perspective and apart from the origins of the use of the technology. But there are also the consumers of those services.

So, it can't be thought of in a vacuum: we have to think about both where AI comes into the utilization journey and then ultimately where it ends up, and how we integrate both of those perspectives. The other thing that I think is super important to recognize is that it's also not arriving absent the analog world that we are currently operating in. And I think we sometimes forget about that. I'm used to giving AI presentations on private sector panels where I remind people that the analog for many of the services on the private sector side – I usually pick on the financial services industry, it's not their fault – is that the previous decisions about access to credit and capital were made by racist, homophobic, colonial, white male bank managers.

[01:19:24 Split screen: Mark Schaan; Natasha Clarke, and Anna Jahn in video chat panels.]

Mark Schaan: And that's not to pick on them specifically, but it's just to say that as a generalization that was the case. We have an opportunity with better data systems and with technology to be able to improve those outcomes against the current analog that potentially produces those sorts of nefarious outcomes.

And I think we have to continuously think through both the opportunity and then also that whole journey of, where does the data come from? Because if we're now going to make decisions based on technology, but with really bad data, we're just going to end up in the same outcome. So, to your point, Ima, about, how do we actually ensure that we're thinking through all of those issues?

[01:20:06 Mark Schaan appears full screen.]

Mark Schaan: We're now at the question-and-answer period of our panel: your questions. There are some already, I think, being generated in wooclap. A reminder – it always gives me a tickle just to say that word – that wooclap is available. It is at your disposal. Please do add in your questions. The QR code is there. The event code is VEOCT. And we also have a chance to take some questions from in the room.

[01:20:32 Split screen: Mark Schaan, and wooclap poll results.]

Mark Schaan: And I see them already starting to pop up. That's really great, so maybe we'll start with a question in the room. And I think I see someone with a microphone just right there, so go ahead.

[01:20:43 Mark Schaan, and Ima Okonny on stage.]

Audience member: Okay. There you go. Alberto Garcia from Treasury Board. I love AI. I really like it. It has increased my productivity and my ability to deliver outputs a lot. And one of the things that comes to my mind, as a public servant, and as well as a citizen who is integrated with technology and who wants to integrate with it and make my life as uncomplicated as possible. And as we're talking about values and ethics,

[01:21:30 Split screen: Mark Schaan, and Ima Okonny on stage; Natasha Clarke and Anna Jahn in video chat panels.]

Audience member: I wonder if we are not still thinking about some values in a context that is no longer relevant. To be more concrete and clear about that: one of them is privacy. So, I recently moved, and just the thought of how many notices I have to give to banks, to service providers, to friends, to this and that about my new address... it's just overwhelming. Oh, my gosh. Another task that I have to tackle. All while thinking: what if I could just tell Siri or Alexa, or whoever, to notify everybody who needs to know, that I have contact with, of my new address? Boom, done. I don't have to worry about anything. I don't have to make the list and all of that.

So, I'm just thinking, from the perspective of governments, national and sovereign, if there is a possibility to start having a dialogue, or opening a dialogue, about what these values mean in the new, or current, realities.

[01:22:59 Mark Schaan appears full screen.]

Mark Schaan: I have many thoughts, but I am the moderator, so I will give first crack to my co-panelists as to who wants to tackle whether or not we need to revisit some of our presuppositions and baseline assumptions about things like privacy in a modern and technologically augmented world.

[01:23:23 Ima Okonny appears full screen.]

Ima Okonny: I can start with what we learned during the pandemic. So, a very good question, because during the pandemic, at the height of the pandemic, the provinces needed information. They wanted to understand what was happening with people who lived in their province. Simple, right? We could not share that data. We couldn't share the data, not because of the Privacy Act, but because of the legislation, the Income Tax Act: you're just not allowed to share that data. So, as the pandemic moved on, what we realized is we needed to share that data with the provinces for them to have a view of what was going on, because they needed to plan. So, we actually had to change the legislation to enable us to do that, and we were able to change the legislation very quickly because it was a different context from today.

So, what I learned from that experience is it's not just the Privacy Act that's the challenge. It's also the way we've crafted the legislation, because we've crafted the legislation based on the different departments, and it hinders us from sharing data between departments also. And sometimes within departments, even within the same departments, there are some areas you can't link or connect data together.

So, to me, and I don't know if you'll like me after this, Mark: to me, we need to have a renewed focus on the legislation, because the legislation was crafted in a certain way many years ago. It was crafted for certain reasons. But now, as we think digital, in this digital age, there is so much opportunity, like I said, for us to change how we craft the legislation and how we look at data.

So, one of the pet peeves that I've had is that we in the data space focus on talking to each other. But I think it's important for us to bring in legal, bring in drafters, at the conceptual phase, so that we can look at the art of the possible in terms of data integration. Because a lot of the problem that you're raising there lies with our inability – and it's not because we can't do it. Throughout my career in the public service, we've done fascinating things around data linkages working with StatCan. But there's an opportunity for us to look at things in a more horizontal way, to look at things like client 360, for us to really shift and evolve how we engineer and architect data without those departmental silos. But for us to do that, we have to take a step back and review some of the legislation. And that's kind of what I'm thinking. So, I don't know, Mark, if you still like me after this.

[01:26:06 Mark Schaan appears full screen.]

Mark Schaan: Of course, I still like you. I'm happy to weigh in with a quick comment, but Natasha or Anna?

Anna Jahn: Go ahead, Natasha.

[01:26:22 Natasha Clarke appears full screen.]

Natasha Clarke: Well, I was going to say, I feel like all of us want to weigh in on this one. Just a couple of things. 100%. We need to look at legislation and reimagine things because stuff was written who knows how long ago. And written by people, and we can change it because we're people. We can do it and we can use AI to help us.

I'll just share a couple of things. I think we always have to be taking a pulse check of the social license we have in the world. As we know, societies change, but that doesn't mean everyone's okay with telling Siri to do all the things for them. Maybe my soon-to-be 16-year-old will be comfortable with that. Not sure I am, but I think that's something for us to be mindful of. I would also just share with everyone: there was a blog post written today by a guy by the name of Tom Loosemore from Public Digital who talked about boldness. And he referenced a blog post that was actually written a long time ago by a woman by the name of Janet Hughes, which asked, should boldness be an explicit public service value? And what it talks about is: let's stop incrementally improving things, but let's actually really reimagine.

So, if you think about Netflix and Blockbuster, they were both in the same business. And Netflix did not process-improve Blockbuster's processes. They completely reimagined the delivery of entertainment through the use of technology. And I guess that's what I would share: I feel that we should be a bit bolder, and I think that's what you were getting at with the legislation. I think we need to be a bit bolder and not just process-improve around the edges. That's what the opportunity is. And we do need to continue to check in on our values and what is shifting over time due to expectations and the social license we have.

Mark Schaan: Thanks, Natasha. Anna, go ahead.

[01:28:14 Anna Jahn appears full screen.]

Anna Jahn: My only two additions are maybe to say, I think values in particular are renegotiated all the time, and they need to be renegotiated all the time within the public service. And so, when I say renegotiated, it doesn't mean that the value itself changes necessarily, but really that the meaning and the interpretation of that value might really shift. And so, that's a process that needs to happen within the public service. And in a way, that's exactly what's happening probably here in the last two days. It's a process and engaging in that negotiation.

But I think, to your point, Natasha, around social license, there are sometimes assumptions being made by governments about what kind of social license we have, and also about how people interpret these values, be that around privacy or any others. And I think that's really where testing those assumptions, and understanding better where citizens actually are at, what their expectations are with regard to not only the service delivery, but the upholding of some of these values, is a really good exercise. And I think that dialogue doesn't happen often enough between the public sector and the people they serve.

[01:29:34 Split screen: Mark Schaan; Natasha Clarke, and Anna Jahn in video chat panels.]

Mark Schaan: Yes, thank you, Anna. I dedicated more than eight years of my career to modernizing the Privacy Act, particularly in an industry context. So, I had a lot of feedback about the evolution of privacy considerations. The only thing I think we have to think about, particularly in an AI context, is – and I hear you – people trade their data for convenience all the time. And they do it with commercial actors in particular all the time. But there's another context for government. There are two factors that are very important.

One, we are often delivering citizen services, and we are the arbiter of democracy and fairness, and there is lots of information that gets provided to governments that is not assumed to be shared on a widespread basis. I am fully willing to tell some people that I'm a member of the 2SLGBTQ+ community. I am not comfortable with that information transporting to all sorts of services where I'm not necessarily comfortable with what that might mean for me. And I think that brings back, from a values perspective: how are we actually working with end users on the benefits that come from the sharing of information? Because they're not always automatic.

I usually use the example of a member of our Canadian Armed Forces whose file transfers automatically to Veterans Affairs, and our assumption that that's always a good thing. If that file includes things that we would never want to have to tell people again, where the transfer would be super useful, like all of the information potentially related to a disability that we incurred as a member of the Armed Forces, we're probably good. But if it also includes a whole bunch of HR filings about the fact that we weren't a great employee and that the end of our career was not actually voluntary, we're probably not super interested in that information transferring over to our colleagues at Veterans Affairs. And then, when we put that into an AI context, I think we have to think very carefully about the degree to which that information is then fuelling algorithms and lasting well beyond the situational context that the individual is comfortable with.

And so, I think the answer is, it's complex, but I think it really does underscore our need to continue to place emphasis on it. I think there's perhaps, yes, one question, one more question. And it's not that I'm avoiding the environmental question, it's just that I think it's a really hard one to answer in a short period of time. And so, I will take one more question from in the room, and I think I saw that there's someone with a microphone in the back.

[01:32:34 Split screen: Mark Schaan and Ima Okonny on stage; Natasha Clarke and Anna Jahn in video chat panels.]

Audience member: Hi. Hello. Thank you so much for this opportunity. I've really enjoyed these sessions. This might be kind of a tough question, but I feel comfortable asking it in the spirit of the discussions that we've had so far. So, as a non-executive public servant, it does kind of seem like senior management is distracted by the new big shiny issue of AI, and we might be putting the cart a little before the horse in some cases. So, arguably there are other more pressing, older issues or lower hanging fruit that we've yet to fully resolve as a public service. And just for example, yesterday we had a great discussion, but it was a long time coming, regarding social media. We probably should have had that one about ten years ago. So, there's certain issues that we haven't yet tackled successfully.

So, we also know that there are values and ethics issues with the application of the directive on prescribed presence, on top of the Phoenix pay system issues. I know that we can focus on more than one issue at once, but resources are also finite, and our track record lately hasn't been great. So, it's reasonable that Canadians would not have a huge amount of trust in the federal government's application of AI. And public servants individually likely have concerns about our use of AI based on our own lived experiences. That's not to say we don't have very talented and effective people working on this, and it's extremely relevant and important. There are dedicated, effective public servants who are working to provide fearless advice. But it's really up to the decision makers, as has been stated several times today. So, sometimes it's difficult to be comfortable with the decisions that are made against our best advice as public servants.

So, I guess my question is related to things within our control. How are we working to build accountability with AI use so that our own policies match our mandates and stated values? How are the internal system processes working with AI different from Phoenix, or the application of the directive on prescribed presence, or internal social media policy? And finally, how can we avoid the pitfalls of other past internal projects, especially related to equity, and ensure our AI uses align with the stated values and ethics? Thank you. Merci. Miigwech.

Mark Schaan: This is a bigger question and thanks for the question. There are many elements to this question. I don't know. I'm happy to take parts of that, but colleagues, any thoughts?

[01:34:56 Split screen: Ima Okonny; Natasha Clarke and Anna Jahn in video chat panels.]

Ima Okonny: Yes, I have a lot of thoughts, but I know we don't have much time. What I will say is, I've been reflecting a lot. I've been reflecting on where we are as a country. Like I said, we've hit 40 million people. The population is aging. We have a diverse population. We have a polycrisis. If you look at all the challenges that we're facing, we also have an opportunity. And this is a huge opportunity. And some of us knew it was an opportunity 20 years ago, because we worked in the data space, and we've been leveraging AI and algorithms in different ways.

I think, for me, the way I look at it, and I know there's the shiny object syndrome, some people have that, I will acknowledge that. But the way I look at it is that, based on everything and the challenges that we face today, we need to do things differently. And this is an opportunity for us to lead in a way that's different, in terms of saying we can actually leverage data, we can leverage AI, we can leverage some of the automation, things like RPA, to shift things. We have a crisis in service delivery right now. You hear about all the challenges we're having. And we cannot continue to function as a public sector the way we did 20 or 30 years ago.

So, to me, my focus in terms of how I work and how I think through values and ethics is: how do I leverage the talent of my team, the talent of people across the organization, to really drive to excellence? Because it is clear that the more we deliver services, and do so in a way that is inclusive and doesn't leave anybody out, the more we're going to grow trust. And given the state we are in right now, I can't see any other way for us to do this, in a way that's targeted and precise and measurable, than leveraging data at this point. That's kind of where I sit with the question.

Natasha Clarke: Mark, I can, if there's time?

Mark Schaan: Yes, super quick. And then we'll wrap.

[01:37:15 Natasha Clarke appears full screen.]

Natasha Clarke: Just one perspective, because I can't weigh in on all the pieces, obviously, because there's some federal context there. But in terms of certainly where I'm situated, the approach that we've been taking, and I know sometimes actually it might even come off as like I'm a Debbie Downer. And Mark knows this. I just talked to ministers pretty frankly a few weeks ago. It's back to the point around digital transformation and the fundamentals. If we do not change some fundamental things that we do in the public service around how we approach this kind of work, I would agree. I don't know if I would have trust in our ability to get this done. There is a lot of hype and a lot of excitement around this technology.

But what I'm getting at is things like investment in digital and data literacy, and raising that for everyone, not just the people in my department or the IT shops, but everyone. Changing how we fund initiatives in government: moving away from funding projects to funding products and services. Thinking about that timeline and that horizon. And why I think that's so important is that AI is going to require us to do continuous improvement. It is not a set-it-and-forget-it type of technology. It's also focusing on data management and making sure we really understand those data sources. Are they ethical, are they inclusive, et cetera. So, these things are very boring. I am, like I said, the Debbie Downer wet blanket, but I think they're really foundational if we want to actually achieve the opportunities that we're talking about. Because, at the bottom line, our most important deliverable is trust. And these things will help build all of that. So, we can't forget the boring knitting that we need to do while we enter into all of these other exciting things as well.

[01:39:10 Anna Jahn appears full screen.]

Anna Jahn: I swear I'm going to take one minute. As an outsider: let's learn from our mistakes. I think that's the biggest piece. I think the most recent books by both Geoff Mulgan and Yuval Noah Harari talk about institutions that are losing trust and are basically no longer delivering value. It's because they basically haven't managed to learn from their past mistakes. And so, more honesty, more looking back in terms of what didn't work here and why it didn't work, and what we can learn for the future when we talk about, all of a sudden, the latest shiny thing, I think is really, really important. And I think we are all collectively, not just in the public service, not very good at that, because it's not fun and it's uncomfortable. And yes, we're very nice Canadians who don't really love those kinds of conversations. But more looking back and learning from mistakes, I would say, is maybe a good idea.

Mark Schaan: Huge thanks, panelists. And maybe the last thing I'd add to that question is that Canada was heavily responsible for the development of this technology. We've played an outsized role in its creation. Our populations, our citizens, are using it on a daily basis, and they have expectations and a belief that government will actually match where they're at and meet them where they are. And I think that behooves us to actually think through, thoughtfully, logically, methodically, all of the points made about the hard work that follows this. It isn't necessarily about just finding the shiny tool and figuring out where you can apply it, regardless of whether or not it fits the problem.

[01:40:52 Mark Schaan appears full screen.]

Mark Schaan: It's actually about the capacity building so that we can actually take the meaningful risk of deploying this to improve our functions in line with our values and ethics. Big thanks to the panel. Very valuable conversations after two very full days. So, thank you very much. And thanks to all of you as well.

[01:41:21 Nathalie Laviades Jodouin appears full screen briefly, along with wooclap QR codes.]

Nathalie Laviades Jodouin: Clearly, you're expecting another guest here, so I'm just going to take two seconds to welcome back, for some closing remarks, Clerk John Hannaford. Thank you.

[01:41:40 John Hannaford appears full screen. Text on screen: John Hannaford, Clerk of the Privy Council and Secretary to the Cabinet. / Greffier du Conseil privé et secrétaire du Cabinet.]

John Hannaford: Thanks a lot. And thanks for adjusting the mic. That was a really good panel, and it ends a really good two-day session. And so, I want to thank everyone who's been participating in all of this. I want to thank Mark for his leadership of that last conversation. Thank you to all the panelists and speakers we heard from over the last two days.

You know, I think the thing that has struck me over the course of the last two days, is the power of stories. The power of exchanging views and having meaningful conversations. And we started this process about a year or so ago with a view to doing that. I think for a variety of reasons, it was a conversation that was ripe. It was important for us to make sure that people who had joined our community had a sense of the core values of the community. It was important for us to hear about the range of experiences that now make up the public service. We've just heard about the future of best practices and ways to use artificial intelligence. This is fascinating too. It's important for us, not just as a community but also as a supplier of services to Canadians.

I think we are at a real moment. I think we are at a moment in a number of different ways. I think there is a series of expectations on us as an institution at a time when institutions are challenged. This is partly a question of how we apply new technologies, as in the exchange that was just had. It's partly a question of how we respond to the challenges of much more complicated geopolitics than we have seen in a long time. It's how we rise to serve Canadians at a moment when Canadians really need service. And, if you think of the range of conversations that we've had over the last two days, we started yesterday with an overview of the journey that got us here, a discussion of all the practices that were mentioned in conversations during the year. This was obviously an example of an exchange of ideas and a conversation. We talked about the importance of inclusion; the centrality of the Call to Action; and the critical work that we need to be doing to make sure that we are accessible to all Canadians as an institution and as a service provider. We've talked about the role of the public service in the promotion of democracy, and the defence of democracy. And we've talked about the application of new technologies.

In addition, we introduced new tools and practices to address some of the key concerns raised by public servants during our sessions on values and ethics, including conflicts of interest, onboarding, management orientation and intent, and guidelines on personal use of social media. I want you to have a look at all the resources that have been developed over the course of this last period; they are now available at the kiosk associated with this conference, and you can access them through the QR code. I'd also like you to take advantage of the speaker's corner as part of this exercise.

But most importantly, I want to leave you with this thought: the discussion is not over. We've got to continue to bring our values and ethics to life. They are a critical part of the work we do. They define us in what we do. That has been an absolute theme that is completely striking over the course of the last two days and every conversation I've had in the last period of time. They are an affirmation of why what we do matters. It's critical that we continue this work. We are in a world where change is rapid and inevitable. We can't always predict what that change is going to entail. But what we can do is build as resilient an institution as we possibly can in order to support each other, support the Canadian public, and to support our institutions. And that's on all of us. I said this earlier, and I meant it.

There's a particular role for leadership when it comes to the culture of our organization, but we are all the embodiments of that culture. It is why I'm so touched, honestly, at the degree of participation that we've had over the course of these last two days. I'm touched by all of you here in this room and I'm touched by all the conversations that have happened over the course of the last two days across the country and in missions around the world. [It's a] sign of enthusiasm, and it gives me enormous hope for the future of this institution. But for now, it's our institution. We're the embodiment of it right now. We are the public service. We should be deeply proud of that, and we should maximize the effect that we have recognizing the importance of the role we play in the society that we serve.

So, I want to thank all of you for participating. I want to thank the organizers of this symposium because it was an enormous amount of work that went into all of this. I'll note Chris Fox, Donnalyn McClymont, Derek Ferguson, Taki Sarantakis. Lucy Ellis, who wrote a lot of my remarks. I'm very grateful to all of you for your leadership in all of this. And there are many, many others who played such a critical role. I think the School did an absolutely terrific job supporting all of it.

But I want to leave you again with thank you. I thank you for your service to this country. Thank you for your participation in today's events and thank you for your ongoing enthusiasm. It matters a lot. We make a big difference, and I'm very grateful to you. Thank you. Merci. Miigwech.

[01:48:28 Nathalie Laviades Jodouin appears full screen.]

Nathalie Laviades Jodouin: I want to extend my gratitude as well to you, Clerk Hannaford, for actually being so present throughout the two days. It matters, and we really appreciate it. So, thank you.

So, as we conclude our gathering today, I do want to take this moment with great honour, once again, to invite Elder Verna McGregor to provide some final remarks.

[01:49:04 Elder Verna McGregor takes the stage and appears full screen. Text on screen: Verna McGregor, Elder.]

Elder Verna McGregor: Hello, everybody. There are fewer people here today. We're near the end. When I was thinking about these values and ethics, I always think about the contrast here and honouring the contrast. And I think this reconciliation is also understanding that we have a different contrast in understanding. And that's why I do these events, to try and bridge this understanding, because it was foretold that here we are in terms of this climate crisis. So, here we are. One of the things I was thinking about is when John Hannaford mentioned the power of story. For us, we would meet here and also share stories about the land, because that was also our method of learning. But also bringing that to today's scenario with AI. I was going to a meeting with Heritage in the spring, and what happened is I said, I don't know what I'm going to talk about on this AI. And spirit works in funny ways. I was listening to my radio, and it turned to this conversation on AI and how this editor used AI to create stories. But when she keyed in different keywords on different people, it searched the whole engine, and it brought forward very negative connotations on one person and favourable ones on another. And it turned into a racial contrast, because you see, it picks up all the keywords that are on the Internet and it created a story.

So again, with this AI, I think you still need that verification as you move along, in your personal information about you. The reason I say that is that I worked in a bank, and at that time you would take the loan application and the computer would formulate an outcome in terms of giving the loan or not. And it would access the Credit Bureau. And if you had an unfavourable Credit Bureau, then that's where the person-to-person comes in, because everybody has a story. I remember having one girl; I said, yes, your credit is not good, but it was in the past. And she said, well, I was supposed to get married, and my groom ran off on me and I was stuck with all the cost of the arrangements. And, by knowing that, I could write the justification and she got her loan. But with AI, again, with our dependence on it, you miss that personal touch.

So, again, values and ethics: for us in the public service to know that, yes, we're valuing change, because change is the only constant. And Covid taught everybody this, how fast things can change. But it was also foretold, the times we're in, and it's coming together like this to share that information and evaluate also the policies and procedures that you're undertaking. So, I think that's very important.

So, now I'm just going to say a closing prayer to honour your journey here, and also the public service, which is part of the canoe teachings. And I'm going to make you canoe home. Don't take the O-Train. No. This may be more reliable. I'm going to sing you a little song, but if you want to, join in. Part of that, too, is finding your voice, because sometimes, in the public service, you get lost in the policies and procedures. But sometimes when you have conflict, it's a sign that things need to change. But sometimes fear holds you back from bringing forward the concern. And what I did is, when I worked for Minwaashin Lodge, we had the drum circle. There are teachings to the drum, but also, it's to find your voice as well. And finding courage, which is one of the seven grandfather teachings of value, which is, again, different from the values and ethics that were explained earlier here.

So, the second part, I'm going to sing you a little song. Actually, I'll just do the canoe song, because you're going to canoe home and not take OC Transpo. No, no, I better not say that. Mark Sutcliffe might be here. No, no, no, I'm just kidding. No. But we do need public transit because, again, we're in climate change. Change is the only constant. So, here we go. But for the second part, I want you to join in, find your voice. If I'm just singing by myself, well, there are actually 10,000 people out there. If you're still out there, please join in. And the audience here can't hear you anyway. But it goes like this. Well, this is a practice, so here it goes.

<Indigenous song>

The first part I'm singing is, you're going to paddle home. And it's very expressive. You see you coming around the corner and the people are singing, because they're saying they're already coming home, now that it's been a while. And jiimaan is canoe. So, the canoe is bringing you home, because this was canoe central here, because of the four directions. And honouring the four directions and the four stages of your life, of which the public service is one of your stages of life, too. So, here we go. Everybody ready? Okay, now you're ready to join in.

<Indigenous song>

Now I'll do the second part, and you'll have a second chance.

<Indigenous song>

Now it's the last chance here. Now, you're going to run out of the room after this. That's okay.

<Indigenous song>

Miigwech, everybody. Now, you could put your values and ethics aside and head home. Paddle home.

[01:57:52 Nathalie Laviades Jodouin appears full screen.]

Nathalie Laviades Jodouin: That's kind of a hard act to follow. Elder Verna, thank you again for sharing your wisdom and your teachings and for sending us all on our way in a very good way. So, with that, everyone, this officially concludes our two-day symposium. I'd like to thank again all the speakers who joined us over the last two days. Your contributions and questions were very important and enriching for this discussion. There was a great energy in the room, and we'll get the final numbers, but I think over 15,000 people logged on across the country. So, I think that deserves a round of applause. Yes. The School is always very proud to be able to take part in these initiatives and support the conversation and ongoing discourse on values and ethics, as well as all the other themes we've heard over the last two days.

So, you will be receiving an evaluation questionnaire. Please, please, please give us your feedback so that we can continue to bring these important conversations to the forefront. And as the Deputy Clerk mentioned yesterday, this is not a one and done. Hopefully, this is just additional momentum for conversations that have been taking place for many, many months now. And importantly, values and ethics don't start and end here. Hopefully we've given you much to reflect on as you take these conversations back to your respective workplaces and contexts. So, with that, thank you again; it's been a true honour. Until next time. Thank you, everyone.

[01:59:47 The CSPS animated logo appears on screen.]

[01:59:51 The Government of Canada wordmark appears and fades to black.]
