Transcript: Social Media Monitoring in the Government of Canada
[00:00:00 The CSPS logo appears onscreen.]
[00:00:07 The screen fades to Carly Dybka in a video chat panel.]
Unidentified Speaker: All right, Carly, take it away.
Carly Dybka: Thank you very much for that kind introduction, I really appreciate it. I was also very appreciative of the opportunity to be a public servant in residence. Among other things, that time allowed me to really focus on my doctoral dissertation, which, as I'll be speaking about, looks at how GC comms branches are using social media monitoring to support public environment analysis: basically, understanding the public and the things that they're talking about.
I only have 20 minutes to cover some of the key points, so I'm focusing on things that I think will be practical and of interest to public servants. Please feel free to ask questions on any items that I touch on, or something that maybe I didn't cover, and I'll be happy to elaborate after my presentation. So, if we can move down to the 'About' slide.
[00:01:00 A slide is shown with text that reads:
"The government's monitoring of Canadians' social media activities could be done with good intentions; however, this does not necessarily mean that the processes that form the basis of social media monitoring, nor the outcomes derived from monitoring, are necessarily optimal for the public good"
]
First, I'll start by explaining what I mean by social media monitoring. Some call this social listening, and there are plenty of other terms available. In fact, government and industry use a variety of terms, sometimes in contradictory ways, so I picked social media monitoring because it's one of the most consistently used within the Government of Canada.
In short, social media monitoring involves the routine and systematic study of the broad social media environment, generally through the collection and analysis of big data from social media. There are monitoring tools that support this work through a combination of their own technologies and different contracts with social media platforms, including use of APIs. They allow end users to enter search terms and then cull results from a wide variety of platforms. Tools then put all this information into one place for users to get that information at a glance, and the tools generally also offer their own analysis on what is trending.
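To illustrate roughly what those tools do behind the dashboard, here is a minimal, self-contained sketch in Python. The data, field names and ranking are hypothetical, not any vendor's actual API; it only shows the general pattern of culling posts that match search terms into one place and summarizing what is most active:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    platform: str     # e.g., "twitter", "facebook", "reddit"
    author: str
    text: str
    engagements: int  # likes + shares + replies

def matches(post: Post, search_terms: list[str]) -> bool:
    """Keep a post if any search term appears in its text."""
    text = post.text.lower()
    return any(term.lower() in text for term in search_terms)

def trending_summary(posts: list[Post], search_terms: list[str], top_n: int = 3):
    """Pull matching posts from all platforms into one place and
    rank the busiest platforms, as a monitoring dashboard might."""
    hits = [p for p in posts if matches(p, search_terms)]
    by_platform = Counter(p.platform for p in hits)
    return len(hits), by_platform.most_common(top_n)

posts = [
    Post("twitter", "@reporter", "New benefit program announced today", 250),
    Post("facebook", "jane.doe", "Has anyone applied for the new benefit?", 12),
    Post("reddit", "u/anon", "Thread about passport wait times", 48),
]
total, top = trending_summary(posts, ["benefit", "passport"])
print(total, top)  # 3 [('twitter', 1), ('facebook', 1), ('reddit', 1)]
```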
So, my research examined that type of monitoring done by departments to understand the public environment: why and how they're doing it, and with what effects. It also considered some different contexts that might shape how monitoring is practiced and what beliefs surround it. I applied these when analyzing how departments approach monitoring from methodological and privacy standpoints, and then what they come to know as the public through monitoring. Without getting too nerdy, the public is a construction as opposed to an inherent entity, so in doing monitoring, work is happening to create a specific public.
So, if we can move to slide four on methodology, I'll tell you what I did to actually advance that work. I conducted a baseline survey in spring 2022 with the help of the CCO to gather some general information from Government of Canada departments and identify contacts who were willing to speak with me for interviews. I conducted those interviews from spring to summer 2022, and many of the 71 participants that I spoke to are listening in today, and I want to thank you again for your participation.
I will say that in the past year and a half since I did that primary research, things have continued to evolve. In broad strokes, I don't think the landscape has changed that dramatically, but I am also speaking in generalities, the broadest themes that I found in my research.
So, if we can move to the next slide, please. I'll be using this time to focus on four key themes in my findings. First, I'll share some basic facts on why monitoring is done and how it's done, but I will focus more on concerns and challenges, particularly in the privacy space. As I'll get into, there are benefits to monitoring, but it also needs to be conducted and understood properly, otherwise the drawbacks can actually outweigh the benefits.
Many people that I spoke to in interviews voiced that they wanted to expand monitoring and its use in their department, so my hope is that I can prepare those people to go into that kind of expansion understanding the opportunities and the steps that they should take to make it an optimal activity. If we can move to the next slide, please.
[00:04:27 A slide is shown with text that reads:
"Sometimes we will see topics way ahead of the [media] articles. So sometimes we can give a heads up to Media Relations, "This is happening, you're definitely going to get a call on it, start working on media lines or looking for information and get your ducks in a row." – Communications Advisor"
"We're able to leverage data and the metrics to see the return on investment […] and also to make sure that the program is being leveraged the way it's supposed to be leveraged." – Communications Executive
]
The comms branches that I spoke to very commonly fell under one of two streams when it came to their motivations for monitoring. I'll call it public relations on one side and communications improvement on the other. They're not mutually exclusive, and monitoring can be done for a variety of reasons, but there was often one side that dominated, one primary motivation.
On the public relations side, social media monitoring was often added onto an existing media monitoring team, and the content of that monitoring and analysis was typically focused on what participants called social echo: how issues and announcements are being discussed, and to what degree, mainly as a ripple effect of official activities and traditional media coverage. These departments were often looking primarily at media and stakeholder activity, also looking at opinion leaders, but of course often saw content from the general population.
These groups also focused on what many called an early warning system, where departments were hoping to get wind of something, such as negative stakeholder comments or a bombshell allegation, before it was picked up heavily in mainstream media. So, their goal was essentially better preparing for public relations issues and better gauging what was successful on the PR side.
On the comms improvement side of things, those departments wanted to see first and foremost how the public was discussing programs, policies, services, the department, departmental officials and more. These departments were often looking to more directly and quickly assess public information needs and the kind of language that the public was using to make their communications more responsive. I did notice a tendency for these departments to add social media monitoring into their social media teams, in part because improvements to content were mostly being applied to social media content as opposed to other communications products.
So, I'll turn to the how of social media monitoring. Because of the scope of my research, most departments I spoke to were using a monitoring tool, often Meltwater, which has come to dominate Government of Canada tool use. To a lesser degree, some departments were using Cision as a tool provider.
When it came to the searches used to pull data from those tools, one positive is that many departments were adapting their searches over time based on what they were seeing in the environment, trying to get a better and more complete understanding of the conversation. However, the overall scope of monitoring in most departments was somewhat limited: a few departments tried to capture everything mandate-related, while others focused on social echo or specific priorities.
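As a concrete illustration of how a search might be adapted over time, here is a hedged sketch. The query terms are hypothetical, not any department's actual search, and real tools use their own Boolean query syntax rather than Python, but the refinement pattern is the same: start broad, then tighten based on the noise analysts see in results:

```python
def query_matches(text: str, all_of=(), any_of=(), none_of=()) -> bool:
    """Evaluate a simple Boolean search: every 'all_of' term,
    at least one 'any_of' term (if given), and no 'none_of' term."""
    t = text.lower()
    return (all(term in t for term in all_of)
            and (not any_of or any(term in t for term in any_of))
            and not any(term in t for term in none_of))

# First draft of a search: broad, and it pulls in lots of noise.
v1 = dict(any_of=["passport"])
# Refined after reviewing results: anchored to service issues,
# with exclusions for unrelated uses of the keyword.
v2 = dict(any_of=["passport"], all_of=["wait"], none_of=["video game"])

post = "passport office wait times are brutal this week"
print(query_matches(post, **v1), query_matches(post, **v2))  # True True
```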
The scope of monitoring may be limited by a number of things. It could be limited by the knowledge and expertise of the person doing the monitoring, and often it's limited by how much time they actually have to do the monitoring. But I found it was also shaped by that primary goal: the departments that were trying to do the widest public environment analysis usually had the most comprehensive searches set up.
I will note that in discussing the introduction of social media monitoring into their comms branch, many managers and executives were working to show the value of, and get wider departmental support for, social media monitoring. Some felt that using it effectively required a cultural shift in the branch or in the organization; in their view, how they worked and what they prioritized needed to shift. Executives, especially, felt that branches could better leverage the data that come from monitoring, but they also noted challenges in actually building out the function to achieve some of those broader goals, and I'll talk about those broad goals on the next slide.
[00:08:41 A slide is shown with text that reads:
"Often we'll say, "You know, there was somebody who posted about this article, and then that person got a hundred likes," and it's like a conversation about the article. I do think, that's where the value is. It's like, "This is an article was posted." Okay, that's great to know. But what do people think about it? And what are the opinions?" – Communications Manager
]
So, underlying some of the very specific reasons that departments were setting up monitoring functions, there are more general benefits associated with understanding the public and public environment through social media monitoring, and I think these reflect how monitoring has a unique position in the overall communications toolkit. I'll list off a few of these. Several participants noted that social media monitoring is faster and unfiltered in comparison to traditional public opinion research. Formal POR, especially if it's contracted out, is costly, tightly controlled and time consuming. It's rarely available as a quick pulse check, and it's also confined by the framing of the questionnaire that's used.
So, social media monitoring really diverges from that in a number of ways. It's also a tool to better detect mis- and disinformation, which otherwise can percolate for a long time, or not even reach the department. Departments aren't necessarily familiar with disinformation that might be circulating on social media. It's also useful in knowing if and how the public is responding to media coverage. Several people noted that they used to have to basically assume that media coverage represented the public environment, and this provides them with an alternative. Sometimes departments get concerned about media articles and then they look at what's being discussed online and realize that it hasn't really been discussed significantly. So, for them, it's a way of better assessing that impact.
Also, in terms of impact, several people felt that it offered more well-rounded communications evaluation. Their evaluation can go beyond just link clicks and social media engagements to try and assess what's happening in the broader public audience, the audience that might not be directly interacting with the department's communications.
Beneath those kinds of stated benefits from participants, there's also an underlying benefit of simply wanting to have the information. Lori-Ann (ph) alluded to this: we'd like to have information, especially in communications, and I do think that FOMO, or fear of missing out, is a very significant motivator. Executives and managers in particular voiced a sense that they just can't be missing information. They don't want to be caught unprepared or look disorganized, either internally or externally, when someone raises something that they've seen over social media, so they want to know as much as they can to be able to better anticipate what might be coming their way.
Now, I will speak for a moment about public environment analysis more generally; it is something that's noted in the communications policy suite. Public environment analysis is so important because communications branches play a critical role as an interface between the government and the public. Public environment analysis helps us understand public views and needs, and we can not only relay that back into government, but also aim to take that information and communicate in a way that resonates more with the public.
So, in that way, communications really help maintain the public's trust in government. At the same time, that trust is very fragile, and I think we're seeing that maybe more so now than in recent history. So, the way we go about public environment analysis should reflect the extent to which we need to treasure and maintain that public trust.
What communicators want to ideally accomplish through social media monitoring is often hard to achieve in reality, and it has risks. So, if we can go to the next slide, I'd like to start delving into that space.
[00:12:34 A slide is shown with text that reads:
"16 of the 31 departments engaged in social media monitoring described their monitoring as an exercise in 'trial and error' or in 'learning as you go'. This exists in tension with departments' attempts to prove the value a monitoring function that measurably achieves its established goals." – Chapter 5
]
I'll talk about some of the limitations that participants noted when it came to conducting monitoring as fully as they would like. The most significant limitation that participants shared was a lack of human and financial resources. This is a new function; often departments are trying it out on a pilot basis or for a specific project, and in many cases, they didn't have the people to do what they would really like to do in an ideal world. And in many cases, they were adding responsibilities onto the work of existing employees, so evidently, those employees don't have a lot of time to really delve into that full space.
And one simple issue with this relates to the sheer volume of data that you're finding through social media, and the volume of data to analyze to get a really full picture of the public environment. It is a lot of information for someone to get through, and it's difficult to do without significant resources, especially if you want to try and cast a broad net with social media monitoring.
However, those human resource limitations weren't only tied to the number of people or the amount of time, but also to skill sets. Those skills aren't only about the fundamentals of monitoring, like how to use the tool and how to write searches, but also the variety of skills that can make someone really exceptional and useful in this area.
Many people who were conducting monitoring had no prior experience in it, and they described working as self-starters, learning by trial and error, which I think are excellent skill sets. But it also means that they're starting off from a somewhat limited place, and often their training was limited to what the vendor was providing them, which, as I'll get to in the conclusion, can be a bit challenging. It doesn't allow teams to make the most of the function or really develop a nuanced sense of social media monitoring.
And I felt that executives really captured this well in interviews when they spoke about finding the right talent. They highlighted the need to have someone with research skills, healthy curiosity, writing skills, strategic thinking skills, and knowing when and what to be briefing on. And when you find someone with that magical combination, they also often don't stay at an IS-3, IS-4 or IS-5 level for very long. I think those challenges really do speak to how sophisticated social media monitoring can be in practice.
The next issue relates to something I noted briefly before, which is that the structures and cultures in government communications shops, or in departments more generally, haven't necessarily aligned with the opportunities of monitoring. Managers and even executives are working to show the value of the function and leverage the data that come from monitoring.
It was interesting speaking to the people who work on the ground, the people doing the monitoring and developing the reports, because they elaborated on some of the difficulties they experienced. One of them was that they were unsure if and how their reporting was actually being used, and several people told me that they felt their work was going into a black hole; there's no reporting back on what was done with the information provided. And I think in part this ties back, first of all, to the somewhat nebulous and loose nature of public environment analysis, but also to the fact that sometimes the motivation is just that people want the information and the sort of psychological security of knowing what's going on.
The last challenge I'll note here is that social media monitoring is neither social media, nor is it media monitoring. Many participants do feel that it is basically just an extension of one or both of those functions, but that perception contributed to what I see as two key issues: first, in how critically people engage with monitoring and the data that they're seeing, and second, in the privacy implications and requirements that stem from monitoring.
So, I'll turn to each of those now, if we can move on to the next slide.
[00:16:55 A slide is shown with text that reads:
"We didn't go in depth into data analysis. It was mainly just 'This is how we produce these reports. Here's how you do it.' And then that was it." – Communications Analyst
]
One thing I was very relieved to see in the interviews was a recognition that social media monitoring, while useful in gauging the overall public environment, is not actually a replacement for public opinion research. The two kinds of data are very different from one another. However, although this was acknowledged, I also noticed a lot of language that implied that monitoring was capturing the public environment more or less comprehensively. Even if it wasn't seen as an accurate indicator of the sentiment of the general public, participants felt it was showing them everything they needed to see in terms of the topics of discussion.
The use of certain language becomes sneaky, because even unintentionally, it starts to position monitoring in a certain way. Participants mentioned that monitoring did capture everything or that they weren't missing anything; they might mention having hard data on the conversations happening through social media. But that isn't entirely accurate, for various reasons, and we need to be careful about that language seeping in, because we have to be careful about how we present the information and the function as a whole.
As I mentioned in the introduction, monitoring constructs a very specific kind of public and a specific public environment. This stems in part from the very practical way that social media monitoring is done by departments. When I spoke to the people who do monitoring, most do it the same way: they open up the monitoring tool and the dashboard, they look at spikes in conversation volume, and then they see what actual social media posts are driving the conversation. That's a very reasonable way to analyze what's going on, especially if you have limited time, but it does limit a full understanding of the environment, and I'll talk about two of the simplest reasons why that's the case.
The first is that when people using monitoring tools delve into those content spikes to see what's driving conversation, the posts that are displayed are prioritized by influence. And what most participants noted is that the most influential posts are from users who are usually existing opinion leaders in that space: media, politicians, stakeholders. Many departments wanted to know what the public was thinking when they did social media monitoring, but with limited time, using this process, they end up with a pretty limited understanding of what the general public is saying and might be discussing, compared to what opinion leaders are discussing.
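As a toy illustration of that effect, consider the sketch below; the posts and the follower-count ranking are hypothetical, not any tool's real algorithm, but they show how an influence-sorted view pushes opinion leaders to the top of what a time-pressed analyst actually reads:

```python
posts = [
    {"author": "national_reporter", "followers": 250_000, "text": "Story on program delays"},
    {"author": "mp_office",         "followers": 80_000,  "text": "Statement on the program"},
    {"author": "ordinary_user_1",   "followers": 150,     "text": "Can't reach anyone about my file"},
    {"author": "ordinary_user_2",   "followers": 90,      "text": "The form was confusing, I gave up"},
]

# Rank the spike's posts by a simple influence proxy, as dashboards do.
ranked = sorted(posts, key=lambda p: p["followers"], reverse=True)

# An analyst with limited time reads only the top of the list, so the
# "public" they see is mostly existing opinion leaders, not ordinary users.
for p in ranked[:2]:
    print(p["author"], "-", p["text"])
```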
Second, as many participants acknowledged, monitoring tools and platforms prioritize Twitter content. There are a lot of reasons for that, and I'm happy to discuss them in the Q&A if people are curious. But when you're looking mostly at Twitter content, you're looking at content that doesn't actually represent Canadians. Users on Twitter are overwhelmingly making six figures or more, university educated, and employed full time or self-employed. Many participants assumed that Twitter was used more or less equally across demographics, or that what was being discussed was reflective of what the whole population was discussing, and I think we need to unpack that assumption.
What those things mean is that monitoring, through both the data and the analysis, constructs an audience, and it's one that in many cases is already privileged and already has its collective voice heard by government, and what gets omitted are the voices of people in other demographics unless you can invest resources into more intensive monitoring. For departments who want to adapt their communications to better address the needs of those groups or understand what the true general public – to the extent that that is a thing – is saying, monitoring as it's being done now can be of somewhat limited use.
And this isn't to say that partial information is worse than no information; a lot of participants noted they were taking the information with a grain of salt and that it was better than nothing. It does mean that departments need to recognize how biased the outputs of monitoring can be, and that that bias can affect groups differently depending on how the information is used. There are various small shortcomings in the function of social media monitoring that end up adding up, so you have to be careful about the use of that information and how you present it to people in the Government of Canada.
So, I want to turn now to the second issue that emerged, which pertains to departments' understanding of the privacy implications of monitoring. If we could go to the next slide, that would be great.
[00:21:48 A slide is shown with text that reads:
"In three cases, participants assumed their department had done some form of privacy assessment prior to them having joined the team, but had never seen them. These assumptions lead to a degree of passivity among communications specialists who conclude that their activities must comply with an assessment with which they are unfamiliar – Chapter 8
]
So, one thing that I will make clear to start off this discussion is that social media monitoring, by its very nature, is very likely to capture personal information in one way or another. It's based on culling and displaying massive amounts of data. Any post that is collected through the tool could theoretically contain individuals' names, opinions and demographic information that the user chooses to share about themselves or about other people. So, even if departments don't want that information, and it was exceedingly rare that they did, they need to account for the fact that they're seeing it, and sometimes sharing and storing it, even when they don't necessarily mean to.
Unfortunately, the vast majority of departments I spoke to aren't doing this accounting. Any activity that does or will probably involve personal information should require a privacy impact assessment, also known as a PIA. At the time of my research, only two departments had completed a PIA through to the end stages, and three other departments had documented privacy protocols. Among other departments, I can say with certainty that 20 departments I spoke to were in violation of the Privacy Act at the time of my research.
I was really interested in why departments, why comms branches, didn't do a PIA. Many people just didn't realize that the PIA was a thing, but there were some dominant beliefs that I think showcase misunderstandings of the monitoring space and its particular sensitivities, so I wanted to highlight five reasons that I heard.
Thankfully, only two people said to me that publicly available information was not personal information. I thought that number would be higher, so that was a good news story. Some participants thought that a privacy assessment would have been done as part of the development of a standing offer or a supply arrangement, and many people procure off of one of those. Some felt that the standard PIA for official social media accounts would provide coverage for social media monitoring, even for conversations that didn't tag the department.
Some departments felt that if they didn't have to do a PIA for media monitoring, then they don't need to do one for social media monitoring. And finally, some felt that if users of social media platforms agreed to the platform terms and conditions, Facebook's or Twitter's, then the government doesn't actually need to do any additional assessment; those terms and conditions offer sufficient coverage.
There are many reasons why participants can be excused for this misunderstanding. There's a lack of central guidance on the activity, and other communications activities involving personal information have generally been assessed through standard PIAs. And in fact, what I found interesting is that some departments did consult their privacy teams and got conflicting information on the privacy implications of monitoring, despite what I think is relatively clear guidance from the Office of the Privacy Commissioner. There are other reasons, and I can delve into them if asked, but I'm mindful of the time.
I want to talk about why that actually matters. For me, first and foremost, the challenge is that without a PIA, departments are not fully considering what data they're using, how to minimize data collection and use, and how to store and dispose of that information properly. So there are issues that stem from this related to the principle of data minimization.
It's common for departments to, for example, put links to tweets or verbatim tweets into reports that are circulated widely within their department. But that exact information, a Twitter user's specific handle and specific content, doesn't need to be shared directly to explain to public servants what's going on in the public environment. By sharing that information, you are allowing the recipients of the report to go in and find the user's profile, look at their other content and potentially access more personal information.
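As one hedged illustration of the kind of minimization that helps here, a rough sketch rather than departmental guidance, a reporting workflow can strip handles and direct links before a post's substance goes into a widely circulated report:

```python
import re

def minimize_for_report(post_text: str) -> str:
    """Remove identifying details from a post before it goes into a
    widely circulated report: handles, direct links and email addresses."""
    text = re.sub(r"@\w+", "[user]", post_text)      # social media handles
    text = re.sub(r"https?://\S+", "[link]", text)   # links back to the profile/post
    text = re.sub(r"\S+@\S+\.\S+", "[email]", text)  # stray email addresses
    return text

raw = "@SomeUser says the benefit portal is down again: https://twitter.com/SomeUser/status/123"
print(minimize_for_report(raw))
# [user] says the benefit portal is down again: [link]
```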
Now, as public servants, we all abide by values and ethics, no doubt about it, but it's still incumbent on everyone not to over-distribute or overshare information. And the fact that some senior officials in a department might want to see specific examples of tweets doesn't actually override that principle.
That second part of data minimization relates back to challenges I noted in terms of comms branches being unsure if or how their reporting is being used. Comms branches are responsible for understanding the public environment, but when it comes to social media monitoring and how specific activities within monitoring meet those goals, the objectives are often not laid out very specifically.
So, there's no indication of what data actually need to be collected and shared to achieve objectives, and when you combine that with the fact that many people have no idea if what they're sharing is accomplishing objectives, it means there is a high chance that really any use or sharing of information could be excessive, and so contrary to the principle of data minimization.
This does expose departments to risk. In addition to not meeting the requirements of the Privacy Act or the Privacy Policy suite, I think it's easy to exaggerate or misconstrue social media monitoring as an activity or make assumptions about its underlying intentions. There are avenues to get information, like media calls or parliamentary questions that I think could uncover these details sooner rather than later. Departments may be caught flat-footed if they haven't really mapped out the privacy impact and what they're doing to mitigate privacy risks.
Despite those challenges, one really positive thing that I saw in my interviews was that individuals really wanted guidance, they wanted to speak to other departments about what they were doing. They were uncertain, too, if their approach was actually the best one. And I think some of their doubts stemmed from underlying issues that I will list quickly in the next slide, if we can move down to that. Thanks.
I think many of the challenges that I've raised stem from the fact that social media monitoring is a relatively new activity, not just in government, but in general. Participants weren't aware of what other departments were doing and sometimes didn't know who to go to with questions about best practices or things like privacy considerations. I think in the absence of that guidance, departments have relied primarily on the vendors of monitoring tools for information.
These tools are actually designed for private sector use first and foremost, and that influences the kind of public that's constructed, and it also influences the kind of training that those vendors provide. That training is going to be focused very much on the use of the tool and not on the challenges of government monitoring and the whole package of considerations that that raises.
Furthermore, the companies that generate or even broker the social media data being used want to position those data as objective and as inherently meaningful. There's tons of rhetoric out there that positions data as basically being the truth at your fingertips: you want tools to access and use big data because, if you have so much of it, it must represent an objective truth. But these data offer a very, very limited kind of reality, and if not understood, the use of those data can actually run contrary to your goals. So, data need to be understood and framed appropriately.
So, I'm now very happy to take your questions. I have provided my email address on the last slide – if we can show that – just in case you'd like to chat further with me. But for now, I will turn it back to (inaudible) for any questions. Thank you.
[00:30:31 The CSPS logo appears onscreen.]
[00:30:36 The Government of Canada logo appears onscreen.]