Innovate on Demand, Episode 7: Regulatory Artificial Intelligence (DDN2-P07)

Description

In this episode of the Innovate on Demand podcast, co-hosts Natalie Crandall and Valeria Sosa speak with Scott McNaughton about how federal regulators can integrate technologies such as artificial intelligence and Rules as Code into their work to ease tedious and time-consuming tasks.

Duration: 00:43:00
Published: January 10, 2020
Type: Podcast



Transcript

Transcript: Innovate on Demand, Episode 7: Regulatory Artificial Intelligence

Todd
I'm Todd Lyons.

Natalie 
I'm Natalie Crandall.

Valeria 
I'm Valeria Sosa.

Scott 
I'm Scott McNaughton.

Todd
And this is the Innovate on Demand podcast.

Whether you're a citizen or a business, wading through policy, regulation and legislation can be difficult. How can a human being navigate thousands of words written in complex formal and legal vocabulary? Well, increasingly, we're trying to delegate that difficult work to a helper better suited to the task: software. By converting rules into code, we can concentrate instead on asking AI to provide us with details pertaining to our situation, such as eligibility, benefits, obligations, and restrictions.

Welcome, Scott.

Valeria 
How you doing? So, why don't you tell us a little bit about what you're working on?

Scott 
Yeah, so I'm working on what's known as the Regulatory Demonstrator projects. And these are projects that are supposed to show federal regulators the possibilities of new and emerging technologies. So, things like AI [artificial intelligence], blockchain, and rules as code. I'm going to go on the assumption that most people know what artificial intelligence is. Most people know what blockchain is, but maybe not "rules as code." So, rules as code is a relatively new concept for government. It's the process of taking your rules, whether they're regulations, standards or policies, and converting them into machine-readable code. And when we do that, and we release that as open-source code and as an application programming interface (or API), we can build applications and services that let regulated companies understand what their requirements are to be compliant. We can, to some level, automate decision making when it comes to licenses, permits and certificates. Because your rules, at the end of the day, are a lot of "if-then" statements. That's quite the oversimplification of it, but, [it means that] if you do something, then you get a penalty, or if you are this kind of person, then you're eligible for this benefit. That's what a computer understands: a lot of if-then statements. So, if we can turn our rules into that, then we enable a whole new world of possibility: delivering better regulatory service, ensuring better regulatory compliance and potentially really pushing government into the digital era in all senses of the word.
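To make the "if-then" idea concrete, here is a minimal sketch in Python—illustrative only, not the project's actual code—of how a hypothetical benefit rule could be expressed as machine-readable code that an API endpoint might wrap. The rule, thresholds and field names are all invented:

```python
# Illustrative only: a made-up benefit rule expressed as code.
# Real Rules as Code projects encode actual regulatory text.

from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    annual_income: float
    is_resident: bool

def eligible_for_benefit(a: Applicant) -> bool:
    """A regulation reduced to explicit if-then statements:
    if you are a resident, 18 or older, and earn under $30,000,
    then you are eligible for this (hypothetical) benefit."""
    if not a.is_resident:
        return False
    if a.age < 18:
        return False
    if a.annual_income >= 30_000:
        return False
    return True

# A service or API endpoint could call this function directly:
print(eligible_for_benefit(Applicant(age=34, annual_income=25_000, is_resident=True)))  # True
```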

Natalie 
So, you guys have had some big successes and some big wins on your project recently. Maybe you could tell us a little bit about that and about what you've created.

Scott 
Yeah, so we have a portfolio of projects right now. One of them is the Incorporation by Reference project and for those who aren't familiar [with it], incorporation by reference is a technique commonly used by regulators to refer to other documents within their regulation. And those other documents are not necessarily Government of Canada documents. They could be published by standards organizations or other bodies out there in the world. And the Department of Justice was asked by the Joint Committee on the Scrutiny of Regulations, How often is incorporation by reference used in your regulations? And how accessible are your regulations to the general public and to businesses more specifically? Of course, like many things in government, we don't have the answers to questions like that. Nobody's tracking this information and nobody thought to track this information. Based on the regulatory stock in Canada of 3000-plus regulations, we estimated that it would take a paralegal—poor paralegal, I would not want to be that person!—1300 hours a year to actually collect and gather this information: to review each regulation, identify an incorporation by reference, go to the relevant document and collect information like language of availability, cost, [or] when it was last updated. Not a very fun job and not one that I would wish upon my worst enemy. That being said, artificial intelligence and automation represent an opportunity to take that very mundane, painful task and do it much faster. It doesn't replace the human judgment that's needed to assess the data collection, but it does make a very thankless and painful job much quicker and much easier. The problem we're trying to solve with that particular project is to allow Parliament to answer questions like How many of those documents we are referencing in our regulations are only available in English and not French? How many are behind paywalls of a couple [of] thousand dollars? And what does that mean for our regulations? What does that mean for the rural Quebec farmer who is trying to be compliant, doing everything in their power to be compliant, but the standard we reference is only available in English? So they can't be fully compliant, and what are the repercussions of that? That's a very open-ended question that nobody has the answer to right now—and we'd prefer not to find out. We'd prefer to get on top of this issue before it becomes a full-blown legal issue.
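As a rough sketch of how automation could speed up that scan—assuming regulation text is available as plain text, and using an invented pattern rather than whatever method the project actually uses—a first pass might simply flag candidate incorporation-by-reference passages for a human to review:

```python
# A rough sketch: flag likely incorporation-by-reference passages
# in regulation text for human review. The pattern and sample text
# are illustrative; the real project's approach may differ.

import re

REFERENCE_PATTERN = re.compile(
    r"(incorporated by reference|as amended from time to time|"
    r"standard\s+[A-Z]{2,}[\s-]?\d+)",  # e.g. "standard ISO 9001"
    re.IGNORECASE,
)

def flag_candidate_references(regulation_text: str) -> list[str]:
    """Return sentences that look like they incorporate an external document."""
    sentences = re.split(r"(?<=[.;])\s+", regulation_text)
    return [s for s in sentences if REFERENCE_PATTERN.search(s)]

sample = (
    "The vessel must meet standard ISO 12217-1, as amended from time to time. "
    "Records shall be kept for five years."
)
for hit in flag_candidate_references(sample):
    print(hit)  # only the first sentence is flagged
```

A flagged passage would still go to a person to record the metadata Scott describes—language of availability, cost, last update—which is where the human judgment stays in the loop.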
The second project we're working on is the Regulatory Evaluation Platform project. Treasury Board Secretariat [TBS], as part of its regulatory review process, has asked regulators to do "periodic reviews" of their regulatory stock, which means doing things like checking which regulations can be modernized, doing comparisons between Canadian regulations and regulations from other jurisdictions, and being able to do what's known as an analysis of conflict or overlap. By that I mean: does a regulation at the federal level have a requirement that conflicts with something at the provincial or territorial level? Now, a lot of this is supposed to be done within existing resources. Regulators are already strapped and don't have the time to do this. And again, another opportunity was realized: could we use artificial intelligence to make this task more bearable [and] to actually get some tangible analysis done and tangible results that will give us a better picture of what opportunities are there to modernize [and] harmonize our regulations and to do overall regulatory analysis more efficiently and effectively?
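One plausible building block for that conflict-or-overlap analysis—purely a sketch under assumed inputs, not what the contracted prototypes actually do—is to score textual similarity between federal and provincial provisions and surface the closest pairs for an analyst to judge:

```python
# Sketch of overlap detection between two sets of regulatory provisions
# using TF-IDF cosine similarity (scikit-learn). Sample clauses are invented;
# the actual Regulatory Evaluation Platform prototypes may work differently.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

federal = [
    "Tanks must be inspected every 12 months by a certified inspector.",
    "Operators must report spills within 24 hours.",
]
provincial = [
    "Storage tanks shall be inspected annually by a licensed inspector.",
    "All incidents must be logged in the provincial registry.",
]

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(federal + provincial)
sims = cosine_similarity(matrix[: len(federal)], matrix[len(federal):])

for i, row in enumerate(sims):
    for j, score in enumerate(row):
        if score > 0.3:  # arbitrary threshold for "worth a human look"
            print(f"Possible overlap ({score:.2f}): {federal[i]!r} <-> {provincial[j]!r}")
```

The point of a pass like this is triage: the machine narrows thousands of clause pairs down to the few an expert should actually compare.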

Valeria 
So, where are you with that?

Scott 
With that particular project—the Regulatory Evaluation Platform project—ours was the first requirement to go through the AI source list that was published by PSPC [Public Services and Procurement Canada] and TBS. We have awarded 2 contracts to 2 different firms and they are building us 2 separate prototypes. We will be evaluating those prototypes and, for the one that we like more, based on completely objective criteria, we will award a contract to that firm to take that prototype into production for us. This has been a very interesting procurement process that we've undertaken for this project. First of all, using the source list is a new experience in itself. We followed a very agile procurement process with lots of vendor engagement, lots of tweaks to the statement of work and the evaluation grid in response to that feedback. We're running multiple vendors in competition with each other to build us prototypes so that we're not locked down to a single vendor.

Valeria 
Sounds like a challenge.

Scott 
It is. It is a challenge. The thing is, the way the contract has been structured, there are a lot of option periods, too. So, if we don't like any of the prototypes, we don't have to award a contract to move into production. After we move it into production, there are option periods to do additional development—if we want to. If we realize that what we've put out to production is not quite up to snuff, well, then we have option periods built into the contract to do more enhancements and more work on it. We've, in a sense, future-proofed it that way.

Natalie 
So, a complex and intense procurement process might actually still be faster than that poor paralegal doing the work?

Scott 
Going through a complex RFP [request for proposal] process, yes, would still be faster than the paralegal. But what we have learned through that procurement process is that there's still a lot of room for improvement in it. There are a lot of procedural problems we've run into around how people bid and what happens when vendors drop out of the process. We've learned a lot about how to craft an ideal statement of work, especially in a field like AI, where it's very new to government. You don't have baselines for how long this stuff should take and how much it should cost. A lot of the criteria that you're using to evaluate the bidders are very subjective and are hard to evaluate at an objective level. To give you a practical example, I might be able to say that I want experience on the team with completing an AI project. To meet that criterion, you'd literally just have to write on a piece of paper, "This team member completed an AI project" and I would have no way of verifying that. Because our procurement processes are built in a way to make sure it's fair, it's transparent, and that everybody has an equal opportunity to bid on every single thing that government puts out for bidding.

Natalie 
It's like our job process.

Scott 
Exactly, but from the point of view of the person who just wants good work to get done—and I don't really care who does it, as long as I get what I need and it's done really well—especially in a field that's really new and emerging like AI, I can't objectively tell you what is good, what is better, what is best, what is bad, [and] what is average. I don't necessarily know, because I'm bringing you in to do something that's never been done before. I don't have 20 years of history to draw upon like I would for an application. If you tell me your technical approach to something is the best way to do it, I don't really have a way of disputing that, because I can't use objective criteria of what a good technical architecture is for an AI project in government, because I don't have much history to base that on.

Valeria 
How has Procurement [PSPC] been throughout this process?

Scott 
So, a shout-out to anybody from PSPC that is listening to this. PSPC has been amazing. They have been providing unparalleled service. They've been very open to us trying some crazy things with this RFP.

Todd
"Crazy things?" You've got to expand on that very greatly.

Scott 
Okay, so crazy things. I'll use intellectual property as an example. You may or may not be familiar with the Directive on Automated Decision-Making as well as the Algorithmic Impact Assessment. These are new Treasury Board requirements for automated decision-making systems. They're not in force until April 2020 but, as a matter of best practice, we said, Let's pretend they're in force anyway. When you initially dream up an AI project, you're supposed to go through an assessment of the relative level of risk that your AI system will have, accounting for things like your ethical approach to AI, bias in your algorithm and your training data, so on and so forth. Things I think most people are familiar with. That being said, we rated as a level 1. There aren't too many requirements under level 1. So, Neil Bouwer, our Vice-President here at the School, said, Let's pretend we're a level 2 and let's subject ourselves to more requirements than we have to. After sharing a look with my director of, Okay, I guess we just have more work to do now, we said, Sure, let's do it as a thought exercise and if it doesn't work out, what's the big deal? We're the School. Nobody's gonna die. It's an AI system that's not making decisions. It's pretty low risk. It's safe. Let's try it anyway. So, we said, Okay, we have to build some elements into the contract to account for this, one of which was around intellectual property. Specifically, you need the ability to do what's known as a peer review; that would include looking at the source code, looking at the software components, hardware components, basically every single piece of how your AI was built and how it's running. Typically, the intellectual property provisions in a contract say that the intellectual property stays with whoever you're contracting out to. Well, that creates an interesting problem. I might have to go look at that intellectual property to do a peer review, not to mention that, because of the regulatory application, I may find myself in court someday and somebody [might] say, Oh, you used the Regulatory Evaluation Platform to make this decision, in addition to databases, applications and program files? Well, I want to see that system. I want to see how that system influenced your decision-making. And that may one day come up in court. So, I need some way of getting into the intellectual property of a contractor on a moment's notice and not have to fight a big battle to get there. So, I came up with a clause that gave the Crown perpetual lifetime access to all the source code and all the components. Forever. Well, that's basically what the clause was.

Valeria 
Direct. I like it.

Scott 
A perpetual lifetime license to access all of the components of the solution whenever we made a request to do so. It gave a few examples, like a court case, a tribunal, just because we felt like it. It didn't actually say that in those exact terms, but that's basically what it equated to. And we expected pushback. We didn't expect pushback from PSPC, but they were very open to it. They didn't say, Well, we don't do that on RFPs. We could never do that. They said, Oh, okay, we understand what you're trying to do [and] we understand why you're trying to do it; let's find a way to make it work. When PSPC is an enabler, things go really smoothly, and that's where they offer the most value. They brought it back to their lawyers, they brought it back to their managers and the procurement experts, [and] massaged the wording a little bit. Then we presented it to all the vendors and I expected them to flip a few tables and say, You're crazy. I'm never bidding on this. You're essentially giving a free pass for my competitors to see all my IP. And what ended up happening is, the vendors understood. They understood that AI is under incredible scrutiny right now. There's still a lot of mistrust and if the government is to build legitimacy and credibility as it introduces AI systems, we have to be able to audit the systems that we're putting in place. We have to be transparent about how the systems are making decisions [and] how they're being built. Especially when you apply the ethical AI and the bias-reduction lenses, not to mention the numerous other lenses that could be applied. Because you don't want a system that unfairly favours a certain group of people because the training data used—I don't know, let's say [it] favoured white males. And so your system, as a result, prioritizes white male applications. That's not the kind of system that we should be building. And I think the companies understood that and that's why they were very open to that provision.

Valeria 
That's great. And I love your line about when PSPC is an enabler. It's great and I feel like that message needs to be communicated: how much we love PSPC in the role of an enabler!

Scott 
Yeah. And through this process, there have been other requirements that have come through the source list that PSPC and TBS created. Those departments have reached out to me to ask for advice and some of the advice that I impart to them I've already talked about during the podcast. But another thing I've seen—especially through this procurement process, and what we encountered while we were looking through the bids—is a very distinct lack of expertise when it comes to new and emerging technologies. I'm sure this won't be a new theme. What we saw was we would get these flashy vendor bids—and some of the firms have entire teams just dedicated to writing government bids, so they could do a really good job. But then you would see a technical architecture diagram, you would see, Oh, we're going to use Elasticsearch, and we're going to build it using this data science method and blah, blah, blah. We compiled a group of subject-matter experts—regulators, in this case—[whose reaction was], Well, they sound like they know what they're talking about. I guess it's good, right? Like I don't really have a way of saying whether it's good or not. And we did have one AI expert from the Digital Academy who participated in the evaluation panel. Of course, the rules of procurement say that people in the evaluation panel are not allowed to talk to each other until you do your consensus meeting. They had questions but they had nobody to go to. Even so, even if there was an AI expert somewhere out there that was readily accessible to them, you're not supposed to discuss the contents of a bid with anyone until you come to your consensus panel. And the contents of the evaluation bid are not supposed to be shared with anyone outside of the panel. So, we've created this interesting procurement system that I would argue is not ready for the digital era, especially if we're trying to encourage government to go and adopt all of these cutting-edge things. But we're not equipping public servants and we're not creating processes that can work to support the adoption of new and emerging technologies. I gave the example of not being able to access AI expertise because of dated procurement rules that don't let us discuss the bids, that don't let us consult with people outside of the panel, that don't let us talk amongst each other until we come to a consensus meeting... It's fine and dandy when it's the 46th research project that you've been [sending] out to RFP, because you do one every 2 or 3 years, so you could do it on muscle memory. But when it comes to something you've never done before, that inability to go and get expertise when you need it makes the process incredibly difficult.

Valeria 
I have to say it also happened recently in one of the projects that I was working on. We were looking for a behavioural scientist and that's also kind of new to government. It just happened that I have a background in psychology and [have] done some work in conflict. I was able to read between the lines of these proposals. But what amazed me throughout this process—what I learned—is that there wasn't somebody on the procurement side, or that they didn't necessarily have an expert in that [field], who was able to evaluate these things at that level. And what if I hadn't been there? What if it was just people who didn't necessarily have that little bit of added knowledge? Then yeah, it does sound great, but it's not. You have to be able to read between the lines to really see and evaluate properly. So yeah, I couldn't agree with you more.

Scott 
Absolutely, absolutely. And that resonates with me a lot based on our experience of doing the procurement for the Regulatory Evaluation Platform project. We noticed the same thing when we were developing the statement of work. The procurement people understand process, and they understand policy, and they understand the rules. But when it comes time to, How do I frame my criteria so that I can objectively evaluate some new requirement with a technology that barely anybody's familiar with? they can't really help you that much, because they're often operating on a similar level of knowledge to yours. If you have a procurement question, like how the process is going to work, or how a criterion you've drafted will line up with procurement rules, they're going to be incredibly helpful. But if you say to them, I need the system to do X, Y, Z. How can I phrase this in a way that it becomes objective? How can I define good, better, best? they're not going to be too much help. And what I fear is, we'll have a lot of explorers and we'll have a lot of experimenters who will definitely be moving forward on the adoption curve, but they will not be able to get good results without having the supporting capacity and infrastructure.

Valeria 
What I also found [difficult] was explaining the reasons why this does not match. It was an interesting process because they wanted more clarification than I was able to give. Other than showing them definitions on the web of why this is this—which I attempted to do, and which made the explanation I was trying to give very clear—I don't know how else to communicate why I'm saying, "No, this does not match." I felt that there was a communication barrier there as well. I felt if they had been able to bring in that expertise from their end, some familiarity with the topic, it would have helped greatly in that process.

Scott 
Yeah, absolutely. I completely agree. What we've also started to learn as we go through actual project execution—and this is a very common characteristic for any project—is trying to avoid "scope creep." Trying to not boil the ocean and keep yourself very focused. And we've had to, many, many different times, reflect back on what the original problem we were trying to solve was. Because AI is very smoke-and-mirrors to a lot of people. When you ask somebody to visualize what an AI system actually looks like [and explain] what their interaction with it is, it's very hard for them to do. It's very nebulous. They can't really give you a good picture. It's not like you necessarily go onto an application and start interacting with an AI. The AI interaction may be invisible and under the hood. You may not even realize it's happening. So, it's hard for people to visualize that. Once they start to see the results they start to imagine, If it can visualize my data at the click of a button, what else can it do? Can it also give me more data points? Can it visualize it in 10 different ways? Before you know it, scope creep is starting to come up. And what we perceive, as non-technical experts, as a very simple task, like, Oh, couldn't you just ingest an entire new data set into your existing algorithm? All you have to do is hit a button to ingest it, right? is actually a very complicated task. There are assumptions built in: that the data set is ready, it's machine readable, I hit a button, it goes into the algorithm, and everything sorts itself out. And the reality is, that's not how it happens. If we go back to what I think is an essential project management practice—asking what the problem was that we were trying to solve, and, wherever we are in our project, whether it's a prototype [or] whether it's a production-ready solution... does it actually solve our problem? Because everything else is just distractions and noise. If we can't say that it solves our fundamental problem, then why did we even bother doing it in the first place? It's very easy to get distracted by the shiny new features that everybody wants as they start realizing that this could be the thing that solves all their problems. But going back to that fundamental question: does it solve our problem? Does it fix whatever issues we were running into that prompted us to start this adventure in the first place? If yes, great! Project success. If not, why not? And what can we do to ground ourselves back to the original problem space?

Valeria 
For you, what does the end of the project look like with a big bow, you dropping a mic and being like, "I could move on now. This is a great success. I'm so proud of myself." What does that look like?

Natalie 
He's never hiring a paralegal to do those 1300 hours of work. [laughter]

Scott 
What does success look like? What does my mic-drop moment look like? I would say that we've always framed this as an experiment. So, this is more than just producing a solution. Because the number 1 question I'm always asked is, Why is this happening at the School? And it is a very good question to ask. Yes, we are producing something of value that will increase productivity and efficiency, and [we'll] pat ourselves on the back and everybody's happy about that. But at the end of the day, as the School, we care about building capacity. We care about the learning benefits from something like this. That's why we frame it as an experiment. Everybody who's come along for the journey started out thinking AI is Skynet: it's going to take over the world, the robots are going to take over the world. I, for one, welcome our robot overlords—if they're listening! [laughter] And that's where everybody started. But as they've gone on this journey with us, they're now in a position where they understand more about what AI is capable of, and just as importantly, what it's not capable of. They understand its limitations. They understand more about data science. As regulators, they're better equipped to be 21st-century regulators. A regulator who hears that their industry is adopting AI now understands what that means, what its possible implications are and how they as a regulator should respond to that. So, should they regulate? Should they monitor? Should they do more research? Should they ask to be a part of that project? They are now equipped to make that determination because they understand the technology that the industries they're regulating are adopting. They become better regulators. What does this mean in terms of what project success looks like? I would argue that it will be great if we get something that can start being adopted into the practice of the regulators. At the same time, though, I want to create something that creates an experience where the regulators coming out of this project say, I understand more about AI. I understand how I can be a better regulator as a result of how the world is changing. I've also been inspired to think about how I can change the way I do business and adopt AI. Ideally, to build upon the results of the experiment the School has been doing. Maybe I look at the Incorporation by Reference project. Maybe I look at the Regulatory Evaluation Platform project and say, To meet my needs, that's 80% of the way there, but I want to adopt it into my own practice to take it the remaining 20%. As for trying to be everything for everyone—and we have 16 federal departments and agencies who are partners in this project; I won't name them all, because that would take a few minutes in itself!—we're not going to make a product that will meet everybody's needs, because that is impossible. We will not be able to create the one platform to rule them all. But what we can create is something that makes the status quo better and something that can be built upon if the departments want to. They have the source code. They have the knowledge and understanding and capacity. They have the connections to AI experts. Then we just give them an encouraging push to take it the rest of the way.

Natalie 
Well, I think having 16 investing partners is a real testament to the quality of the work that you're doing because those things don't just happen on crappy projects.

Scott 
Yeah, and an interesting story about how that all came together. So, over the summer, in a period of about a month and a half, we reached out to 18 departments and agencies, all in rapid succession, signing MOUs [memorandums of understanding], and we fundraised about $1.1 million.

Natalie 
Wow. Do you feel like calling out the 2 who didn't invest? No, I'm just kidding. Just kidding. [laughter]

Todd
You missed out! [laughter]

Scott 
I won't name the guilty parties! I just want to jump a little bit to rules as code, because I don't think I've given it enough time or justice. Rules as code is a very interesting concept. Speaking of justice, the Department of Justice is a key partner in this, along with Transport Canada and the Community of Federal Regulators, and we have an active project where we're working on trying to convert the large commercial vessel registry rules into machine-readable code. An owner of a vessel, whether it's a company or an individual, would understand how to register their vessel, depending on its size and its purpose, and what the required documents, provisions and rules around that are. What's interesting about this space is that we are in a state right now where we're going to try to convert existing rules. There's a very interesting and emerging conversation, especially in Canada, where we already have a system where we draft our regulations in 2 languages, English and French, of potentially one day—and I don't want to scare off any Department of Justice listeners—one day introducing a third language. Drafting in code, and then converting it into English and French. If we want a digital-first government, a government that prioritizes and recognizes the importance of digital in today's world, then we would take into serious consideration whether, when we're drafting our regulations, we should draft them in code first. Should we consider a digital use case as the primary driver for drafting our regulations? It opens up a lot of interesting constitutional questions, which I think are outside the scope of this podcast, but we have drafting conventions in place already. If we introduced a third language—code—what would that mean for our drafting conventions? What would that mean for how lawyers, policy experts [and] drafters all come into a room and try to figure out how they're going to draft rules so that they're ready to be put into code as soon as those rules come into effect?
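As a toy illustration of what machine-readable vessel registration rules might look like—with the tonnage threshold and document names invented, not taken from Transport Canada's actual rules:

```python
# Toy illustration of vessel registration rules as code.
# The tonnage threshold and document names are invented,
# not Transport Canada's actual requirements.

def required_documents(gross_tonnage: float, commercial: bool) -> list[str]:
    """Return the documents a (hypothetical) owner would need to register."""
    docs = ["proof of ownership"]
    if commercial:
        docs.append("statement of qualification for registration")
    if gross_tonnage > 15:  # invented threshold for a "large" vessel
        docs.append("tonnage measurement certificate")
    return docs

print(required_documents(gross_tonnage=40, commercial=True))
```

An owner-facing service could call a function like this and list exactly what applies to their vessel, instead of leaving them to interpret the regulatory text themselves.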

Valeria 
Interesting. What's the reaction that you get [on that]?

Scott 
The reaction I get depends on who you're talking to: some people are very excited about that. Digital and innovation-minded people tend to get very excited about it. People in the legal profession tend to react with a little bit of, That's interesting, but that also terrifies me. So, it really depends on who you're talking to. But there are so many interesting futures that you could create by doing this. I'm going to give 2 examples. The first: imagine that you convert your rules into machine-readable format and you start doing advanced data scenarios. Stats Canada data is already largely machine readable and we [would] have all of our rules, which are now machine readable and converted into data itself. If I tweak a variable in my regulations, what is going to be the impact on GDP [gross domestic product], or on productivity, or on population growth, or on immigration rates or on whatever data point I care about? So if I say, Okay, the threshold for inspection is currently set to, let's say, 10% of whatever variable and I'm going to tweak the regulation so it's 15% of that same variable. Then I run an advanced scenario using real-life data and I run a simulation, and I say, Okay, so what's the impact? Oh, look at that! The number of airplanes falling out of the sky has gone up by 5%. Ooh, well, I'd better not make that tweak, because I don't think the cost of life is worth whatever administrative burden reduction I'm giving to business.
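A stylized sketch of that kind of what-if run—where the model linking the inspection threshold to the outcome is fabricated, since the point is the mechanics of sweeping a machine-readable parameter, not the numbers:

```python
# Stylized what-if analysis: sweep a regulatory threshold that exists
# as a machine-readable parameter and observe a simulated outcome.
# The model linking threshold to incidents is fabricated for illustration.

def simulated_incident_rate(inspection_threshold_pct: float) -> float:
    """Fake model: a higher threshold means fewer inspections, more incidents."""
    baseline = 2.0  # incidents per 10,000 flights, invented
    return baseline * (1 + 0.01 * (inspection_threshold_pct - 10))

for threshold in (10, 12, 15):
    rate = simulated_incident_rate(threshold)
    print(f"threshold={threshold}% -> {rate:.2f} incidents per 10,000 flights")
```

In a real version, the rule threshold would come from the machine-readable regulation itself and the outcome model would be fitted to actual data, which is exactly what makes rules-as-data a prerequisite.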

Valeria 
We appreciate that, Scott.

Scott 
That's the kind of advanced scenario that we could do. And there's some research out of the University of Ottawa from Wolfgang Alschner, [who] has done something very similar but with trade agreements. Using the text of existing trade agreements and converting them into machine-readable code, he can actually predict whether a trade agreement is going to be successful or not, based on the text of the trade agreement and on other secondary data sources that have been inputted into the algorithm. If we can convert our rules into machine-readable code, we open up a lot of possibilities for advanced data analytics, advanced scenario planning—the sky's the limit. It's just a question of whether we take the time to do it or not. It's not gonna work on every single rule set. I can already hear people telling me, Well, that will work great for a prescriptive rule where the rules are very clear that you must do this, but not for more outcome-based rules where it's like, As long as you're not killing anybody, I don't really care how you achieve that outcome. Yes, and I'm the first to admit it's not going to work for every single rule that we have in Canada, but there are a large number of them where it will work, especially the more prescriptive ones. I don't expect people to be ready to go full-scale adoption on day 1. I think we have a duty to experiment with it [and] see if we can realize the benefits that are promised. People in Australia, New Zealand, Singapore, the UK [and] France have already started doing this. Let's experiment with it. Let's see what happens. Let's see what kind of benefits we can realize. And then let's make a judgment call on whether it's worth our time or not. I think that's a very modest proposal that I don't think is too controversial. I'm not gonna say, Let's go invest $10 million to go convert 200 rules right off the bat. I understand that, especially with how early this is, it needs to prove itself first. It's very interesting.
I'll just give a shout-out to some of the other work we're doing. We're doing some forward planning right now. We're looking at future project ideas, like using artificial intelligence to find missing federally regulated companies—which sounds a little bit odd and I saw some eyebrows shoot up, [so] I'll explain it really quickly. Essentially, I won't name the guilty parties, but some departments do not know everybody who falls under their jurisdiction. Sometimes it's out of pure ignorance. Companies just don't know they're federally regulated. Other times, it's just poor record keeping. Whatever the reason is, how can we find those companies? We could do a massive manual effort of Google searches, program files [and] keywords, and eventually piece [it] together. One particular department who shall go unnamed has estimated that they're missing about 1500 companies. They did a manual effort to try to identify some of them and in about 6 months, they identified 300 companies. And companies are being created, [and] companies are going out of business, every single day. Truthfully, I don't think you'll ever get on top of it; plus, realistically, who has the staff to dedicate to this 37.5 hours a week for—I can't do the math off the top of my head—but years? This is a chance for artificial intelligence to help us out. If we know what the variables are that might indicate that a company is federally regulated and we have a database of the companies who are registered, it's simply a matter of matching up the companies we find based on criteria against who's on the list and doing a crosswalk between them. It's not exactly the most complicated thing to piece together, but it is very time consuming. Stats Canada has something known as a Linkable File Environment, where they keep Business Registry data [and] CRA tax data, and we're looking into whether we can link that with regulatory data like licenses, permits and certificates, so that we can create a more fulsome profile of a company's regulatory status. Combine that with something like rules as code and now you have a very powerful engine for businesses to use to see if they are compliant with all the requirements of our rules. When we link that with the licensing, permit and certificate data, your rules are in a machine-readable format [and] your company profiles are in a machine-readable format. With a click of a button, I can tell you what requirements you're subject to and, of the licenses and permits and certificates you're supposed to have, which ones you actually have [and] which ones you're missing. As long as I don't use that information for enforcement purposes, as long as I use that as more of an "infotainment" service so that people know what they have to do to be compliant, then it's fine from a privacy point of view.
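That crosswalk is essentially record linkage. A bare-bones sketch—with made-up company names and a deliberately naive normalization step, nothing like the scale or matching criteria a real department would need:

```python
# Bare-bones record linkage sketch: match companies discovered by keyword
# searches against a registry of known regulated companies.
# Names and normalization are deliberately naive, for illustration only.

def normalize(name: str) -> str:
    """Lowercase and strip common corporate suffixes so variants compare equal."""
    name = name.lower().strip()
    for suffix in (" inc.", " inc", " ltd.", " ltd", " corp.", " corp"):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
    return name

registry = {"Northern Freight Ltd.", "Maple Rail Inc."}
discovered = {"Northern Freight Ltd", "Prairie Pipelines Corp.", "Maple Rail Inc."}

known = {normalize(n) for n in registry}
missing = [n for n in discovered if normalize(n) not in known]
print(missing)  # ['Prairie Pipelines Corp.'] — a candidate missing company
```

Real-world matching would need fuzzier comparison (addresses, business numbers, trade names), which is where machine learning earns its keep over exact string matching.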

Valeria 
I feel like your tagline should be "AI for common-sense service."

Scott 
Yeah, well, maybe we can look into making a logo and have some branding and a tagline. [laughter] Get some ads up maybe on... maybe not Facebook these days, but, I don't know... Twitter? Get some Twitter ads going. That's usually where all the government digital innovation people are hanging out. So...

Valeria 
Well, thank you. Thank you very much, Scott.

Scott 
Thank you for having me today.

Todd
You've been listening to Innovate on Demand, brought to you by the Canada School of Public Service. Our music is by Grapes. I'm Todd Lyons, producer of this series. Thank you for listening.

Credits

Todd Lyons
Producer
Canada School of Public Service

Valeria Sosa
Project Manager, Engagement and Outreach
Natural Resources Canada

Natalie Crandall
Project Lead, Human Resources Business Intelligence
Canada School of Public Service

Scott McNaughton
Project Lead, Regulatory Artificial Intelligence
Canada School of Public Service
