Jessica Pugel: The Research-to-Policy Center is based out of Penn State University, and we are focused on improving the wellbeing of children and families
by promoting the use of research in social policy. In addition to our practice, we also study how to improve the use of research in legislation. So what we're covering today is some of what we've learned from that. Just to orient you to the issue at hand: do you ever relate to this picture? Think that you have a lot of emails? Think about public officials and their staff, who sort through emails from a wide range of stakeholders, from think tanks, to colleagues, to a lot of constituents, and they're expected to be responsive to all of them. And all of that contributes to an information overload, one that only got worse during the pandemic. Public officials were, and still are, navigating the pandemic and the heightened racial tensions from the past couple of years. And both of those prompted even more emails from constituents and other stakeholders who wanted their voices to be heard. With this influx of information, what makes you think that yours are getting read? Outreach strategizing is super important to any organization. And if you don't believe me, just look at the subject lines of emails that you've received, like these ones that I pulled from my inbox just yesterday. "This online learning myth is busted. Get the facts from the experts." Maybe that's sparking controversy. "He's one of the worst Americans. This feels dangerous, very dangerous." Seems like an appeal to strong emotions. And the last one, "Jessica, protect what's worth saving," is personalized to me. Everyone is trying to encourage more people to read their emails and testing out their own ideas. So think about it: in your own work, what have you done to cut through the noise of officials' email inboxes? What is the most effective way to email them? After all, the first step to getting a meeting with them is for them to open your email and for you to make a good impression. A variety of stakeholders message public officials every day.
But there are very few studies that have looked at how to most effectively reach them, until now. Our team at the RPC started to look into communication strategies in earnest right at the start of the pandemic, March 2020. Since then we've conducted over 75 rapid-cycle randomized controlled trials. Rapid cycle, meaning that we were learning in real time: we use the results of one trial to inform the next trial. And randomized controlled trials are the gold standard for evaluation. They allow us to be pretty darn sure that any difference in outcome is due to the thing that we changed rather than any other outside factors. And this is how those tests are structured. We start with an idea. Sometimes we look through our own inboxes and think about why we open the ones that we open, and just as importantly, why we don't open the ones that we don't open. Or ideas can be pulled from psychology or marketing or other relevant work. For example, the cocktail party effect: this idea that our names are really powerful in pulling our attention. That might drive you to want to test including the recipient's name in the subject line. So one email would have the name and another email would not have the name. Cool, the idea is done. Now we actually have to implement it. So next, we would find something to distribute. Since our goal is to improve the use of research, and there was a new pandemic, we often found COVID-related research summaries to distribute in an email. Notice that we aren't just spamming these emails; we carefully choose timely material. It's really just a side benefit that we also get to test communication strategies along with that. And because we're sending science-related information, the takeaways that we're presenting today are specific to science information, but should largely work with any type of information. Once we know what we're sending, we then determine who to send it to. Choosing relevant officials is key, and we'll talk about why later.
We usually distribute to a few thousand officials and staffers, both at the federal and state levels, for reference. After we have a list of people we want to send a resource to, the testing part begins. Randomized controlled trials require that we randomize our recipients into group A and group B, hence randomized. This allows us to be sure that the effect is because of the thing that we're altering, the name in this example, and many computer programs can do this. We usually use Excel. We would randomize half of them to the control group and half of them to the name group. Next up is to upload these recipients and their group conditions as a custom field to Quorum. Instead of having to type each name individually into the recipient box based on how they were randomized, Quorum allows us to upload the randomized groups as a custom field and then use that custom field to select all the recipients at once. This is a huge time-saver for us and it makes mistakes way less likely. Then we schedule the two emails that we want to test against each other, put in the recipients, and schedule the emails to go out at the same time. They have to go out at the same time because we don't want the time of day or day of the week to influence the outcomes. In this example, we would have two emails with identical bodies, but the subject line would differ. One might say substance use research, and the other might say substance use research for, followed by the office's name. The placeholder options that Quorum provides are also a really cool feature and save us, again, a lot of time and a lot of mistakes. Once the emails go out, we monitor the replies we receive and the open and click rates that Quorum presents in its Outbox. This is what the Outbox monitoring page looks like. So we can see that the emails indeed went out correctly and we can get a glimpse of which emails do better. In this one, we see that the top email has a higher open rate, 23% compared to 21%.
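The randomization step Jessica describes, splitting a recipient list into two equal arms before uploading the group labels to Quorum, can be done in any tool. Here is a minimal sketch in Python rather than Excel; the group names, list format, and seed are illustrative, not the RPC's actual pipeline.

```python
import random

def randomize_recipients(recipients, seed=2020):
    """Shuffle a recipient list and split it into two equal-sized
    arms for an A/B test (e.g. control subject vs. name subject)."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = list(recipients)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Example: 2,000 hypothetical office addresses
offices = [f"office_{i}@example.gov" for i in range(2000)]
control, name_group = randomize_recipients(offices)
```

The fixed seed matters for an audit trail: rerunning the script reproduces exactly which offices landed in which arm.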
But the second one has a higher click rate, click meaning clicking the links in the email, rather than just opening the email in your inbox. So this just lets us know that things are going okay. After a couple of weeks, we close the data collection by downloading the reports from the Outbox and analyzing them. For you stats nerds out there, we use logistic regression to look at whether they opened or clicked at all, and negative binomial regression to look at how many times they opened or clicked. This difference between a binary outcome and a count outcome is an important one when looking at email data, and I'm happy to give more details about these analyses if you're interested; just shoot me an email. Now, for those of you who are perhaps less interested in statistics, first of all, I don't blame you. That's fine. But on the following slides, you're going to see some asterisks, and this is just how we denote statistical differences between groups or between subject lines. More asterisks mean that the finding was less likely to happen just by chance, so we can be more certain that the effect is real. More asterisks, more certain. And that's all that means. So even if some of these effects have zero asterisks, the examples are included just to demonstrate these ideas rather than to report exact results. And we have several specific trials for each of the takeaways that we're presenting today; we just chose a couple of them as good examples, regardless of statistical significance. As you can imagine, these tests are time-intensive to conduct, but Quorum helps make it a smoother, more efficient process. The Power Search allows us to identify the relevant officials, put in personalized information for thousands of people with placeholders, upload the randomized groups as custom fields, send those as bulk emails, and then monitor the open and click rates.
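To make the binary-versus-count distinction concrete, here is a toy sketch of how per-recipient open data splits into the two outcomes Jessica names: a yes/no "opened at all" rate, which is what a logistic regression would model, and a mean opens-per-recipient count, which is what a negative binomial regression would model. The records below are invented, not RPC data.

```python
# Each record: (recipient_id, group, number_of_opens) -- invented example data
records = [
    ("a", "control", 0), ("b", "control", 3), ("c", "control", 1),
    ("d", "name",    2), ("e", "name",    0), ("f", "name",    5),
]

def outcomes(records, group):
    """Return (open_rate, opens_per_recipient) for one arm:
    the binary outcome and the count outcome, respectively."""
    counts = [n for _, g, n in records if g == group]
    open_rate = sum(1 for n in counts if n > 0) / len(counts)  # binary: opened at all?
    mean_opens = sum(counts) / len(counts)                     # count: how many times?
    return open_rate, mean_opens

rate, mean_opens = outcomes(records, "name")
```

Note that the two outcomes can disagree: an arm where a few enthusiasts open ten times each can have a high mean count but a mediocre open rate, which is exactly why both regressions are run.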
So we did that process 78 times in the past 18 months, and we learned a lot from that work. We have 10 key takeaways that you can use in your own outreach strategies, and Beth is going to take it over from here. Beth Long: Thanks, Jessica. I'm really excited to share with you some of the things that we learned from this work. We hope and expect that some of these lessons can be applied to any goal that you may have for your email, whether it's to get a meeting or get your information read. A little bit of background first: as Jessica alluded to, because there have been few if any research studies that have experimentally tested science communication strategies with policymakers specifically, we had to draw our ideas for our tests from social psychology and marketing research. First up, we tested the idea of relevance. It's intuitive, and also supported by marketing research, that people are more likely to engage with information that they deem to be personally relevant. So in the case of policymakers, this could mean information that is relevant to them or their constituents, as well as information that they can use or care about. As such, we tested things like using a legislator's name or their state name in the subject line. We also tested sending the emails to a targeted audience, such as those who sit on relevant committees, those who work in states that have a high prevalence of the issue, and those who mentioned the issue in public statements, like on social media. We found that the results varied by issue area, but there was a fairly consistent effect that personalizing the subject lines with policymaker names or state names increased engagement with emails across the areas of COVID, exploitation, and interpersonal violence; there was no effect of personalization in the context of police violence. When sending emails to those who sit on relevant committees, we similarly found increased engagement in three of the four issue areas.
But when sending to those who have a high prevalence of the issue in their state, there was surprisingly little effect. And finally, when targeting those who mentioned the issue frequently in public statements, we found really inconsistent results: engagement actually decreased in the context of COVID, increased in the context of police and interpersonal violence, and had no effect at all in the context of exploitation. So in sum, cueing relevance by including policymakers' names or states in the subject line, or by targeting those who sit on relevant committees, seem to be effective strategies in most contexts, but the state-level prevalence and public mentions were not so effective. And here's just an example of the types of responses that we've received. If our targeting is way off, they'll say something to the effect of not being sure what to do with the information. This response was sent from someone who sits on a child and family law committee and who frequently works on legislation related to crime and law enforcement. The resource we sent to her was about preventing substance use, but clearly our targeting was way off, as she didn't really understand how to use the information we were sending her. Next, we were interested in testing email length, because it's also intuitive and supported by marketing research. We tested this by sending short and long emails to approximately 2,000 recipients, with short meaning two to three sentences that take up no more than three lines, excluding the hello and sign-off lines. We found that the short email resulted in almost a hundred more clicks on the link in the email compared to the long one. So yes, shorter is generally better, with a few exceptions that I'll get to shortly.
We were then interested in whether formatting the email as coming from a person, or as coming from an organization with fancy newsletter-style formatting, would be more effective, because it's commonly believed that fancy newsletter formatting should increase credibility and therefore engagement with the email message. At least that's what we believed. However, we were quite surprised to see a very large effect in the opposite direction of what we expected. The plain email coming from a real person resulted in 46% more opens and a whopping seven times more clicks than the newsletter-style email. So it seems that policymakers prefer emails from real people rather than newsletters from organizations. Because policymakers prefer people, we next wanted to test if they similarly like people's stories. Accordingly, we tested a personal narrative against a normal short email. In this case, we happened to have a person with lived experience of a substance use disorder, who is also a parent, contribute to one of our fact sheets on parenting supports for parents with substance use disorders. She was willing to share her very personal story in this way, and how she personally benefited from a parenting support group. And it paid off. We found that the link in the email with her personal story was clicked on more than the regular one, despite the longer length. So this is one caveat to our previous recommendation: keep it short, unless you're sharing a personal story in this way. Just driving home this point a little bit more, we tested these three emails against each other. The first one started off with some statistics regarding the problem. The second one started with a pleasantry: I hope you're having a great week. And the last one started with a researcher introducing themselves. We found that the one that started off with numbers got approximately half as many clicks as the other two. So policymakers really prefer people. Switching gears just a little bit.
We next wanted to test science and research frames because of the increasingly negative light that universities and science are being seen in, specifically the view that academic scholarly work is self-serving and doesn't meet the needs of the public. We first tested two science-based subject lines against a control line with the word regarding. And we found that although the science-based subject lines were opened slightly less, the link to the research product was clicked 50% more, suggesting that policymakers want to know what to expect. If they get a general email with a subject line that starts with regarding, they don't know what to expect, and they therefore may feel duped or surprised by being presented with a science-based fact sheet. But when the science frame is transparent, they know what they're getting and seem to click on the link more as a result. We explored this more in another trial to see if we could replicate it. In this trial we tested concerning and regarding against research, and our results were similar. The research subject line had slightly fewer opens, but a lot more clicks on the research product, further supporting the idea that policymakers want science-based emails to be transparent, or they just want to know what to expect. And although we don't have time to present all the trials that tested this, we want to mention that we overwhelmingly found zero evidence for an anti-science bias, which as researchers we think is a good thing. Our next recommendation is to just be normal, which sounds like common sense. But when trying to incorporate or test traditional messaging tactics like jumping on the bandwagon, it becomes a little difficult to have a natural-sounding subject line. It seems that policymakers have become averse to tactics like this, and just prefer normal-sounding subject lines.
Here we found that what we thought would be the control subject line, the ineffective neutral one, ended up being the most effective, likely because it was the most natural sounding. We saw this again when trying to test the strategy of evoking emotion and drawing attention, and realized in retrospect how strange a subject line like shocking policy considerations is. Again, we saw here that our subject line, also the most natural-sounding and non-click-baity one, ended up being the most effective. It seems that policymakers and staffers have been so inundated with emails that use these tactics that they've come to view them as clickbait and have become averse to them. Following this, we've done a number of tests on emotion and threat frames. In this first trial, we tested a subject line that included the word information against lines that included the words research and social disparities, and found that the social disparities subject line resulted in the most opens, we think because it can be a hot-button issue that's frequently debated in the media, and accordingly it authentically evoked emotion. Similarly, in our next trial, we tested a solutions frame against a threats frame, with the thinking that the threats frame would authentically evoke emotion, and because people are more prone to pay attention to things that are threatening or dangerous to them. Indeed, the subject line with the word threats was more effective than the subject line with the phrase new solutions. Finally, we tested another variation of a solutions frame against a threat or difficulty frame. We likewise found that the subject line with the word difficulties was more effective than the line with the word solutions. So in sum, using emotional appeal to increase policymakers' engagement with email is slightly complicated, because it seems that the emotion has to be subtly evoked, as we see here, rather than overtly mentioned, as we saw with the shocking policy considerations line.
So these results subsequently led us to test a series of subject lines with a problem versus solutions frame. This is just one example from this series, where we tested a problem-focused email body against a solutions-focused one. Notice how the first one here mentions how children are vulnerable and dangers that parents and schools must address, while the second one discusses the unique opportunity that parents and schools have and the feasible measures that can be taken to ensure that children are safe online. So the first one is the problem one and the second one is the solutions one. And we found that, although the problems-framed one had more opens, the solutions-focused one had more clicks. These differences might not be as dramatic as some of our other results, but they are indeed statistically significant, which, as Jessica mentioned, indicates that the difference is meaningful and suggests a real effect. So our thinking here is that a problems frame may attract attention, but it can also hinder action. We need to do more work to further explore this, but work from the FrameWorks Institute, which is a group that tests different message frames with the general public, has shown that messages that convey a sense of high urgency and high efficacy work best. So it may not be an either-or thing, but rather that both the problem and solution are needed, to convey both that there's indeed a problem but also that something can be done about it. And through all this testing, we became interested in the email behavior of policymakers. Do they always open emails, sometimes, or never? We thought that it might be similar to the general public, where there's a small group of people who always open and keep a clean inbox, and larger groups of people who never, or only sometimes, open emails.
I'll start with the good news: approximately 20% of recipients open every email. The bad news is that a larger percentage, around 45%, never open. The rest, approximately 35%, rarely or sometimes open. It's this group that dissemination teams may benefit the most from figuring out how to reach, since their behavior seems to be the most malleable. Finally, what might be one of the most important takeaways is that context matters. Outside events and the current politicization of a given issue might be stronger than any messaging strategy. What works in one context might not work in another. In this series of trials, we found that the problems frame boosted engagement with the emails disseminating research products on policing and schools, but not racial disparities in housing. We're showing this slide again just to further reiterate this point: even strategies that are generally effective, like personalization, may not work a hundred percent of the time. And this exact reason takes us to our final takeaway, that evaluation is necessary. Evaluation is also necessary because what might seem intuitive might not actually be all that effective, as we saw with the plain formatting compared to the newsletter formatting. But you won't know unless you evaluate it. So as such, evaluation capacities should be integrated into dissemination efforts and maintained, to understand what works and what doesn't when trying to reach policymakers via email. And now I'll hand it back to Jessica, who can recap and cover anything I forgot to mention. Jessica Pugel: So that was a lot of information in a short amount of time, but all of these tips are available at the site and linked on this QR code, which you can scan at the end of the session. It's on the rest of our slides, but to recap.
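The always/sometimes/never segmentation Beth describes can be computed directly from each recipient's open history across past campaigns. A small sketch, with the bucket boundaries (opened all, opened none, anything in between) being our reading of her description rather than the RPC's exact definition:

```python
def classify_opener(open_history):
    """Bucket a recipient by behavior across past campaigns:
    'always' opened every email, 'never' opened none,
    'sometimes' is everything in between."""
    if not open_history:
        return "never"  # no data: treat as never-opened
    opened = sum(1 for o in open_history if o)
    if opened == len(open_history):
        return "always"
    if opened == 0:
        return "never"
    return "sometimes"

# 1 = opened that campaign's email, 0 = did not
segments = [classify_opener(h) for h in ([1, 1, 1], [0, 0, 0], [1, 0, 1])]
```

Running this over a full contact list would give the 20/45/35 split she cites, and flag the "sometimes" bucket worth targeting.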
So it's important to make whatever you're sending relevant to the official, whether that means including their name or state name in the subject line or targeting based on their committee assignment. Whatever you send, be sure to keep it as short as possible, with the exception of personal stories. Personal stories are exempt from this because policymakers really prefer people; they want to hear from them and understand them. Because they prefer people, you should take steps to be like a real human person, instead of trying to trick them into opening your email by making it sound like there's something else inside. Transparency is key. If you see an email that says cute dog pictures, and you open it to find turtle pictures instead, you might be a little annoyed, even if the turtle pictures are also cute. Perhaps an overarching way to capture these takeaways is to be normal: sound like a human, and make sure that it sounds like something you would actually send to a colleague. Because it's so important to be normal, emotional appeals get really complicated. If you can authentically evoke emotion, that might help, but if it comes off as forced or inauthentic, it might hurt. One way you could authentically evoke emotion is by appealing to problems with, rather than solutions for, an issue. But we saw that might also backfire. People might open the problem emails more, but also maybe get overwhelmed by the size of the problem, and that would stop them from taking action. Instead, a solutions frame might promote action, though it may receive lower open rates. No matter what methods you use, keep in mind that one in two will never open the email and one in five will always open it. So it's that middle part that you can really target with strategies. Given the fast-paced nature of policy and national attention, these strategies won't always be effective in every case. Even our strongest strategies fail sometimes.
And like I said before, this is about scientific information. These could have very different results for different issue areas. And because of that, it's crucial to include this testing as part of your outreach efforts. There is certainly more to learn in this area and we can't do it alone. A huge thank you to our messaging trials team and our supporters, Taylor, Rachel, Mary, Kat, Brittany, Max, and Cagla, without whom none of this work would have been possible. And thank you to our attentive audience. You can follow us on Twitter or shoot us an email if you want to keep in touch. Otherwise, I think we're ready to move into Q and A. I see a question: in the short versus long email tests, did it matter where the link was placed in the email? If the link was higher in a longer email, did that matter? We didn't test that, to my knowledge. We usually put the link at the end and on its own. We've had some issues where if we embed the link, like we write read it here and then embed the link on those words, they'll respond to us saying, can you send it in a different way, because we can't click on links like that. They think that you're going to give them... sorry, words are hard. What is the word I'm looking for? Beth Long: A virus. Jessica Pugel: I was not sure what you were trying to say, but my brain caught up. So we're here now. But do you remember if we tested that? Okay. Beth Long: No, we didn't, but I would say keep it higher generally. But you also want to balance that with being personal and getting your message across, conveying the importance of the link. Jessica Pugel: Yeah. If you were just saying, here, read this, and then you get your spiel below it, that would maybe come off as weird, and then that would go against the whole just-be-normal thing. Beth Long: Why should I read this? Yeah. Okay. Jessica Pugel: Next question. Can you talk more about who specifically those emails went to?
Was it the policymaker's general information email, or their staff, or the staff covering the issue? And do you disaggregate by state to account for how different states provide administrative support for policymakers? That is a great question. We think a lot about it. In most of our analyses, we do break it up by officials and staffers, and we find, surprisingly to me at least, that officials open the emails at a higher rate than the staffers do. We don't necessarily send it to the Chief of Staff. For the state level, we send it to all of the officials' staff. For the federal level, we target based on their issue area, the area that they work on. Anything to add to that? Beth Long: Nope, I think that's good. Jessica Pugel: For letter-writing campaigns, is it best to focus on a single issue or a solution, or is it ineffective to include more than one issue in one campaign? Beth Long: That's a good question. We haven't studied letter-writing campaigns, but based on the work that we have done, I would say it doesn't matter the number of issues, as long as you're getting at what I was talking about with the FrameWorks Institute. If you're conveying a problem, you also have to convey the solution. So that might get difficult if you're including, like, 10 different issues. But if you have just a few issues and you can say the solution, what can be done about it, then that might be more effective. Jessica Pugel: From an analytics perspective, I think we have considered including different issue areas in the same email campaign, but we have decided against it, because it's really hard to decide whether someone opened that email more times because of this issue or because of that issue. And so we just tend to not allow that complexity to happen, because we want cleaner results. Next question.
Have you made it publicly available which policymakers never open emails and which ones usually do, like in a scorecard format? As a constituent, I'd love to know which camp my reps are in. We do not, because it can hurt their reputation. All of our data are anonymized even before they're stored on our server. We can know what state they're in, but we don't release that information, because it would probably hurt our relationships with them as well. Beth Long: We do have a paper coming out soon that describes that data anonymously. Just a plug for that. Jessica Pugel: Yes, it'll be coming out soon. That's exciting. It's the one with the pie chart. Okay. Next: can you tell if the email was forwarded to the staffer who gives them summaries? No, we can't. That's actually one of the big things. So when we say one of our emails gets more opens or gets more clicks, it's actually based on the email's intended recipient. Sometimes we get open rates back that are in the hundreds, and I don't know about you, but I never open an email a hundred times, unless it's, oh, is my order delivered yet? So it must be forwarded. We assume that if there's a higher number of opens, it was forwarded; same thing with clicks. How are we evaluating effectiveness, open and click rates? This is something through Quorum. For open rate, there's an invisible one-pixel image that downloads when you open the email. With Apple's new privacy advancements they are futzing with that a little bit, but we don't really know how it's impacting our data yet. And then click rate is also through Quorum. They change the link; I don't know exactly how it works technologically, but it redirects through their servers. Both of these are industry standards.
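For anyone curious about the mechanics Jessica is gesturing at: open tracking embeds a 1x1 transparent image whose download is logged, and click tracking rewrites each link so the click passes through a logging redirect before reaching the real destination. A toy illustration of the link-rewriting half; the tracker domain and parameter names are invented, since Quorum's internals are not public.

```python
from urllib.parse import urlencode

TRACKER = "https://tracker.example.com/r"  # hypothetical redirect endpoint

def tracked_link(dest_url, recipient_id, campaign_id):
    """Rewrite an outgoing link so the click is logged by a
    redirect service before landing on the real destination."""
    query = urlencode({"to": dest_url, "r": recipient_id, "c": campaign_id})
    return f"{TRACKER}?{query}"

link = tracked_link("https://research-to-policy.org/factsheet", "off_42", "cmp_7")
```

This also explains the Q&A point about raw links: a rewritten tracking URL no longer looks like the destination, which is one reason some offices forbid clicking anything but a visible raw link.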
Next question: did you test a threat-framed subject line against a solutions-framed email body? We did. Oh, it was so cool. Nothing happened from it, but we tested it. So it was matching a solutions subject with a solutions body, and a problems subject with a problems body, and mismatching them. And there was nothing. I wanted it to have something so bad, and so that's why I remember it. Beth Long: And it's surprising, because per the FrameWorks Institute, that could be one way of getting at that high efficacy and high urgency, but I think maybe the threat frame needs to be more than a subject line in this case. Jessica Pugel: Yeah, I agree. Do you study your replies? It's difficult to, just for capacity reasons. But we have that data stored, and so at some point in the future we can look at it, and we're hoping that for one of our future papers we do actually end up getting to look at those. But we have someone code them, from the inbox they come into, as positive, negative, meeting request, and things like that. Beth Long: What are other things we really want to test? We have a wishlist; we want to test so many things. I know for me, I'm personally interested in substance use issues and the power of sharing personal lived experiences and how that might increase engagement with emails. So that's what I'll be exploring further. Jessica, what do you really want to test? Jessica Pugel: The one that I've been really into lately has been the what versus how framework. It's also pulled from the FrameWorks Institute. The what frame is just, what is the problem, how prevalent is it; and the how frame is, why does this problem exist, what are the structural influences there? And it's been so interesting to see the differences in engagement between those two. I just want to dive into that a lot more. We just haven't done enough on it for me to know what to do with that information yet.
It would be takeaway number 11, but we don't have that data in full yet. Next question: in conveying solutions, is offering legislative language more effective than just describing the solution? Beth, if you want to take this one, you're welcome to. Beth Long: I can speak to this a little bit. We haven't tested it specifically, but in our other work we test our model, the Research-to-Policy Collaboration model, which is a model for bridging policy and research by connecting legislators and researchers. Through that, I've attended a number of meetings with staffers who have told me specifically that legislative language would be very helpful to them, rather than just a laundry list of different solutions. We haven't delved into that territory, mostly because we don't have the capacity to draft legislative language, and also because we're federally funded, so we can't lobby. If we were to draft legislative language, it would have to be more like example text, so it would mean coming up with more than just one piece of language. So we want to, but we haven't yet. Jessica Pugel: A counterpoint: I've read in some of the research literature on this area that policymakers may not like it when researchers provide specific solutions like legislative language, because they feel that's their job, the legislator's job. So it can feel like researchers should stay in their lane without providing specific language. In sum, we don't know. Any insight into how many legislators are viewing through a mobile device? I imagine this affects the length of the message, depending on which device they're accessing from. In short, we don't have that information. Judging from our Google Analytics reports about mobile versus desktop use, I think most of that is for our own website. But we know that of course people do check their email on their phones.
So I don't know for sure, but I agree that the device you're on is going to affect how you view an email and how you engage with it. Beth Long: That's an interesting future test, if we can come up with a way to test it. Jessica Pugel: In our website optimization work, we have more coming. Do you test whether a raw link gets clicked more or less than an embedded hyperlink? We did test it, I think twice, and there was no difference; it wasn't interesting. People click them, and open the emails, at about the same rates. But the responses we receive from the hyperlinked versions are more negative than the ones with the raw link. Very recently, we provided a hyperlink and multiple people responded: "I can't click on that link. It's our policy; I can't click on hyperlinks, so you have to give us the raw link itself." I don't understand that, but I'm sure there's a good reason for it. But yes, we have tested it a couple of times.
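The raw-link constraint Jessica mentions is easy to accommodate when generating test variants. A hypothetical sketch of producing the two email bodies compared above (an embedded HTML hyperlink versus the bare URL pasted as text); the function and field names are invented for illustration:

```python
def make_variants(message: str, url: str, link_text: str = "Read the fact sheet"):
    """Build the two test variants: an HTML hyperlink vs. the raw URL.
    Some offices block embedded hyperlinks, so the raw-URL variant
    shows the full address that recipients can copy or click directly."""
    hyperlink_body = f'{message}<br><br><a href="{url}">{link_text}</a>'
    raw_link_body = f"{message}<br><br>{link_text}: {url}"
    return {"hyperlink": hyperlink_body, "raw_link": raw_link_body}
```

Given the reply data described above, defaulting to the raw-URL variant may avoid the negative responses from offices whose policy forbids clicking embedded links.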
[post_title] => Research-Backed Tips for Emailing Legislators and Staff with RPC's Beth Long and Jessica Pugel [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => research-backed-tips-for-emailing-legislators [to_ping] => [pinged] => [post_modified] => 2021-10-12 21:34:10 [post_modified_gmt] => 2021-10-12 21:34:10 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.quorum.us/?post_type=resources&p=5773 [menu_order] => 0 [post_type] => resources [post_mime_type] => [comment_count] => 0 [filter] => raw ) [queried_object_id] => 5773 [request] => SELECT wp_posts.* FROM wp_posts WHERE 1=1 AND wp_posts.post_name = 'research-backed-tips-for-emailing-legislators' AND wp_posts.post_type = 'resources' ORDER BY wp_posts.post_date DESC [posts] => Array ( [0] => WP_Post Object ( [ID] => 5773 [post_author] => 12 [post_date] => 2021-10-12 21:34:10 [post_date_gmt] => 2021-10-12 21:34:10 [post_content] => Jessica Pugel: Research-to-Policy Center is based out of Penn State University, and we are focused on improving the wellbeing of children and families by promoting the use of research and social policy. In addition to our practice, we also study how to improve the use of research and legislation. So what we're covering today is some of what we've learned from that, Just to orient you to the issue at hand. Do you ever relate to this picture? Think that you have a lot of emails? Think about public officials, and their staff sort through emails from a wide range of stakeholders from think tanks, to colleagues, to a lot of constituents, and they're expected to be responsive to all of them. And all of that contributes to an information overload. And one that only got worse for the pandemic. Public officials were, and still are navigating the pandemic and the heightened racial tensions from the past couple of years. 
And both of those prompted even more emails from constituents and other stakeholders who wanted their voice to be heard. With this influx of information what makes you think that yours are getting read more off? Outreach strategizing is super important to any organization. And if you don't believe me, just look at the subject lines of emails that you've received. Like these ones that I pulled from my inbox just yesterday, this online learning myth is busted. Get the facts from the experts. Maybe that's like spiking controversy. He's one of the worst Americans. This field filth dangerous, very dangerous. Seems like an appeal to strong emotions. And the last one, Jessica, protects what's worth saving is personalized to me. Everyone is trying to encourage more people to read their emails and testing out their own ideas. So think about it. In your own work, what have you done to cut through the noise of officials' email inboxes? What is the most effective way to email them? After all the first step to getting a meeting with them is for them to open your email and for you to make a good impression. A variety of stakeholders message public officials every day. But there are very few studies that have looked at how to most effectively reach them until now. Our team at the RPC started to look into communication strategies in earnest right at the start of the pandemic. March 2020. Since then we've conducted over 75 rapid cycle, randomized controlled trials. Rapid cycle, meaning that we were learning in real-time. We use the results of one trial to inform the next trial and randomized controlled trials are the gold standard for evaluation. It allows us to be pretty darn sure that any difference in outcome is due to the thing that we changed rather than any other outside factors. And this is how those tests are structured. We start with an idea. Sometimes we look through our own inboxes and think about why we open the ones that we open. 
And just as importantly, why we don't open the ones that we don't open or ideas can be pulled from psychology or marketing or other relevant work. For example, the cocktail party effects. This idea that our names are really powerful and pulling our attention that might drive you to want to test including the recipient's name in the subject. So one email would have the name and another email would not have the name. Cool. The idea is done. Now we actually have to implement it. So next, we would find something to distribute. Since our goal is to improve the use of research. And there was a new pandemic. We often found COVID-related research summaries to distribute in an email. Notice that we aren't just spamming these emails. We carefully choose timely material. It's really just a side benefit that we also get to test communication strategies along with that. And because we're sending science-related information, the takeaways that we're presenting today are specific to science information, but should largely work with any type of information. Once we know what we're sending, we then determine who to send it to. Choosing relevant officials is key and we'll talk about why later. We usually distribute to a few thousand officials and staffers, both at the federal and state levels for reference. After we have a list of people we want to send a resource to now the testing part of it begins. Randomized controlled trials require that we've randomized our recipients into group A and group B hence randomization. This allows us to be sure that the effect is because of the thing that we're altering, the name in this example, and many computer programs can do this. We usually use Excel. We would randomize half of them to the control group and half of them to the name group. 
Next up is to upload these recipients and their group conditions as a custom field to Quorum, instead of having to type in each name individually into the recipient box, based on how they are randomized, Quorum allows us to, upload the randomized groups as a custom field, and then use the custom field to select all the recipients at once. This is a huge time-saver for us and it makes mistakes way less likely. Then we schedule the two emails that we want to test against each other, put in the recipients, and schedule the emails to go out at the same time. We have to go out at the same time because we don't want the time of day or day of the week to influence their outcomes. And this example, we would have two emails with identical bodies, but the subject line would differ. One might say substance use research. And the other might say substance use research for the office's name. The placeholder options that Quorum provides are also a really cool feature and save us again a lot of time and a lot of mistakes. Once the emails go out, we monitor the replies we receive and the open and click rates that Quorum presents on its Outbox. This is what the Outbox monitoring page looks like. So we can see that the emails indeed went out correctly and we can get a glimpse of which emails do better. In this one, we see that the top email has a higher open rate, 23 compared to 21. And but the second one has a higher click rate, click meaning clicks the links in the email rather than just like opening the emails, like clicking in your inbox. So this just lets us know that things are going okay. After a couple of weeks, we closed the data collection by downloading the reports from the Outbox and analyzing them. For you stats nerds out there, we use logistic regression to look at if they opened or clicked at all and negative binomial regressions to look at how many times they opened or clicked. 
This difference between a binary outcome and a count outcome is an important one when looking at email data, and I'm happy to give more details about these policies if you're interested, just shoot me an email. Now, for those of you who are perhaps less interested in statistics, first of all, I don't blame you. That's fine. But on the following slides, you're going to see some asterisks. And this is just how we denote statistical differences between groups or between subject lines. More asterisks mean that the finding was less likely to happen just by chance. So we can be more certain that the effect is real. More asterisks more certain. And that's all that means. So even if some of these effects have zero asterisks, the examples are included just to demonstrate these ideas rather than to report out exact results. And we have several specific trials for each of these takeaways that we're presenting today. And we just chose a couple of them as good examples, regardless of if there's just statistical significance. As you can imagine, these tests are time-intensive to conduct, but Quorum helps make it a smoother, more efficient process. The Power Search allows us to identify the relevant officials.. Put in personalized information for the thousands of people without placeholder. Upload the randomized groups as Custom Fields and send those as bulk emails and then also monitor theopen and click rates. So we did that process 78 times in the past 18 months, and we learned a lot from that work. We have 10 key takeaways that you can use in your own outreach strategies. And Beth is going to take it over from here. Beth Long: Thanks, Jessica. I'm really excited to share with you some of the things that we learned from this work. We hope and expect that some of these lessons can be applied to any goal that you may have for your email, whether it's to get a meeting or get your information read. A little bit of background. 
First, as Jessica alluded to because there've been few if any research studies that have experimentally tested science communication strategies with policymakers specifically, we had to draw our ideas for our tests from social psychology and marketing research. First up, we tested the idea of relevance. It's intuitive and also supported by marketing research, that people are more likely to engage with information that they deem to be personally relevant. So in the case of policymakers, this could mean information that is relevant to them or their constituents as well as information that they can use or care about. As such, we tested things like using a legislator's name or their state name in the subject line. We also tested sending the emails to a targeted audience, such as those who sit on relevant committees, those who work in states that have a high prevalence of the issue, and those who mentioned the issue in public statements like on social media. We've found that the results varied by issue area, but there was a fairly consistent effect that personalizing the subject lines with policymaker names or state names across the areas of COVID exploitation and interpersonal violence increased engagement with emails, but there was no effect of personalization in the context of police violence. When sending emails to those who sit on relevant committees, we similarly found increased engagement in three of the four issue areas. But when sending to those who have a high prevalence of the issue in their state, there was surprisingly little effect. And finally, when targeting those who mentioned the issue frequently in public statements, we found really inconsistent results. Engagement actually decreased in the context of COVID and was increased in the context of police and interpersonal violence, but had no effect at all in the context of exploitation. 
So in some cueing relevance by including policymakers' names or states in the subject line, or by targeting those who sit on relevant committees seem to be effective strategies in most contexts, but the state-level prevalence and public mentions were not so effective. And here's just an example of the types of responses that we've received. If our targeting is way off, they'll say something to the effect of not being sure what to do with the information. This response was sent from someone who sits on a child and family law committee and who frequently works in legislation related to crime and law enforcement. The resource we sent to her was about preventing substance use, but clearly, our targeting was way off as she didn't really understand how to use the information we were sending to her. Next, we were interested in testing subject line length because it's also intuitive and supported by marketing research. We tested this by sending short and long emails to approximately 2000 recipients with short meaning two to three sentences that take up no more than three lines, excluding the hello and sign-off lines. We found that the short email resulted in almost a hundred more clicks than the link in the email compared to the long one. So yes, shorter is generally better with a few exceptions that I'll get to shortly. We were then interested in whether formatting the email is coming from a person or an organization with fancy newsletter-style formatting would be more effective because it's commonly believed that fancy newsletter formatting should increase credibility and therefore engagement with the email message. At least that's what we believed. However, we were quite surprised to see a very large effect in the opposite direction of what we expected. The plain email coming from a real person resulted in 46% more opens and a whopping seven times more clicks than a newsletter-style email. 
So it seems that policymakers prefer emails from real people, rather than newsletters from organizations. Because policymakers prefer people, we next wanted to test if they similarly like people's stories. Accordingly, we tested a personal narrative against a normal short email. In this case, we happen to have a person who had lived experience with a substance use disorder who is also a parent contribute to one of our fact sheets on parenting supports for parents with substance use disorders. She was willing to share her very personal story in this way and how she personally benefited from a parenting support group. And it paid off. We found that the link of the email with her personal story, it was clicked on more than the regular one, despite the longer length. So this is one caveat to our previous recommendation. Keep it short unless you're sharing a personal story in this way. Just driving home this point a little bit more, we tested these three emails against each other. The first one started off with some statistics regarding the problem. The second one started with a pleasantry. I hope you're having a great week. And the last one started with a researcher introducing themselves. We found that the one that started off with numbers, got approximately half as many clicks as the other two. So policymakers really prefer people. Switching gears just a little bit. We next wanted to test the science and research frames because of the increasingly negative light that universities and science is being seen in, specifically that academic scholarly work is self-serving and doesn't meet the needs of the public. We first tested two science-based subject lines against a control line with the word regarding. And we found that although the science-based subject lines were open slightly less, the link to the research product was clicked 50% more suggesting that policymakers want to know what to expect. 
If they get a general email with a subject line that starts with regarding they don't know what to expect. And they therefore may feel duped or surprised by being presented with a science-based fact sheet. But when the science frame is transparent, they know what they're getting and seem to click on the link more as a result. We explore this more in another trial to see if we could replicate it. And this trial we tested concerning and regarding against research. And our results were similar. The research subject line has slightly fewer opens, but a lot more clicks on the research product, further supporting the idea that policymakers want science-based emails to be transparent, or they just want to know what to expect. And although we don't have time to present all the trials that tested this. We want to mention that we overwhelmingly found zero evidence for an anti-science bias, which as researchers we think is a good thing. Our next recommendation is to just be normal, which sounds like common sense. But when trying to incorporate or test traditional messaging tactics like jumping on the bandwagon, it becomes a little difficult to have a natural-sounding subject line. It seems that policymakers have become averse to tactics like this, and just prefer normal-sounding subject lines. Here we found that what we thought would be the control subject line or the ineffective neutral one ended up being the most effective likely because it was the most natural sounding. We saw this again when trying to test the strategy of evoking emotion and drawing attention and realize in retrospect, how strange a subject line like shocking policy considerations is. Again, we saw here that our subject line also the most natural-sounding and non-click baity ended up being the most effective. It seems that policymakers and staffers have been so inundated with emails that use these tactics, that they've come to view them as clickbait and became averse to them. 
Following this, we've done a number of tests on emotion and threat frames. And this first trial, we tested a subject line that included the word information against blinds that included the words, research in social disparities and found that the social disparity subject line resulted in the most opens, we think because it can be a hot button issue that's frequently debated in the media and accordingly it authentically evoked emotion. Similarly in our next trial, we tested a solution frame against a threats frame with the thinking that the threats frame would authentically evoke emotion. And because people are more prone to pay attention to things that are threatening or dangerous to them. Indeed, the subject line with the word threats was more effective than the subject line with the phrase new solutions. Finally, we tested another variation of a solutions frame against a threat or difficulty frame. We likewise found that the subject line with the word difficulties was more effective than the line with the word solution. So in sum using emotional appeal to increase policymakers engagement with email is slightly complicated because it seems that the emotion has to be subtly evoked as we see here, rather than overtly mentioned, as we saw with the shocking policy considerations line. So these results subsequently led us to test a series of subject lines with a problem versus solutions frame. This is just one example from this series where we tested a problem-focused email body against a solutions-focused one. Notice how the first one here mentions how children are vulnerable and dangers that parents and schools must address while the second one discusses the unique opportunity that parents in schools have and the feasible measures that can be taken to ensure that children are safe online. So the first one is the problem one and the second one is the solutions one. 
And we found that, although the problems frame, problems on this one had more opens, the solutions-focused one had barked clicks. And these differences might not be as dramatic as some of our other results, but they are indeed statistically significant, which as Jessica mentioned, indicates that the difference is meaningful and suggests a real effect. So our thinking here is that a problem's frame may attract attention, but it can also hinder action. More work is we need to do more work to further explore this but work from the frameworks Institute, which is a group that tests different message frames with the general public has shown that messages that convey a high sense. A sense of high urgency and high efficacy work best. So it may not be an either-or thing, but rather that both the problem and solution are needed to convey both that there's indeed a problem, but also that something can be done about it. And through all this testing, we became interested in the email behavior of policymakers. Do they always open emails, sometimes, or never. We thought that it might be similar to the general public, where there's a small group of people who always open and keep a clean inbox and larger groups of people who never, or only sometimes open emails. I'll start with the good news that approximately 20% of recipients open every email, the bad news is that a larger percent of perhaps not like 45% never opened. The rest, approximately 35%, rarely or sometimes open it's this group that dissemination teams may benefit the most by figuring out how to reach since their behavior seems to be the most malleable. Finally, what might be one of the most important takeaways is that context matters. Outside events and the current politicization of a given issue might be stronger than any messaging strategy. What works in one context might not work in another. 
In this series of trials, we found that the problems framed visitor engagement with the e-mails disseminating research products to policing and students, but not racial disparities in housing. We're showing this light again, just to further reiterate this point. Even strategies that are generally effective, like personalization may not work a hundred percent of the time. And this exact reason takes us to our final takeaway that evaluation is necessary. Evaluation is also necessary because what might seem intuitive might not actually be all that effective as we saw with the plane formatting compared to the newsletter formatting. But you won't know unless you evaluate it. So as such evaluation capacities should be integrated into dissemination efforts and maintain to understand what works and what doesn't that when trying to reach policymakers via email, and now hand it back to Jessica who can recap and cover anything I forgot to mention. Jessica Pugel: So I ended up, it was a lot of information in a short amount of time, but all of these tips are available at the site and linked on this QR code, which you can scan for the end of the session. It's on the rest of our slides, but to recap. So it's important to make whatever you're sending relevant to the official, whether that means including their name or state name in the subject line or targeting based on their committee assignment. Whatever you send, be sure to keep it as short as possible with the exception of personal stories. Personal stories are counted out of this because policymakers really prefer people to hear from them and understand. Because they prefer people, you should take steps to be like a real human person, instead of trying to trick them into opening your email by making it sound like there's something else inside. Transparency is key. If you see an email that says cute dog pictures, and you open it to find turtle pictures, instead, you might be a little annoyed. 
Even if the turtle pictures are also cute. Perhaps an overarching way to capture these takeaways is to be normal, sound like a human, make sure that it sounds like something you would actually send to like your colleague. Because it's so important to be normal, it makes emotional appeals really complicated. If you can authentically evoke emotions that might help, but if it comes off as forced or inauthentic, it might hurt. One way you could authentically evoke motion is by appealing to problems about rather than solutions for an issue. But we see that might also backfire. People might open the problem emails one more, but would also maybe get overwhelmed with the size of the problem and that would stop them from taking action. Instead, a solution frame might promote action, though it may receive lower open rates. No matter what methods you've used to keep in mind that one in two will never open the email and one in five will always open it. So it's that middle part that you can really target with strategies. Given the fast-paced nature of policy, international attention, these strategies won't always be effective in every case. Even our strongest strategies fail sometimes. And like I said before, this is about scientific information. These could have very different results for different issue areas. And because of that, it's crucial to include this testing as part of your outreach efforts. There is certainly more to learn in this area and we can't do it alone. A huge thank you to our messaging trials team and our supporters, Taylor, Rachel, Mary, Kat, Brittany Max, and Cagla without whom none of this work would have been possible. And thank you to our attentive audience, and you can follow us on Twitter or shoot us an email if you want to keep in touch. Otherwise, I think we're ready to move into Q and A. I see in the short versus long email tasks, did it matter where the link was placed in the email? If the link was higher in a longer email, did that matter? 
We didn't test that to my knowledge. We usually put the link at the end and on its own. We've had some issues where if we embed the link like we're like read it here and then we embed the link on that, those words then they'll respond to us saying can you send it in a different way because we can't click on links that aren't actual, they're just, they think that you're going to give them an error or sorry, words are hard. Give them that's. What is the word I'm looking for? Beth Long: A virus Jessica Pugel: was not sure what you're trying to say, but my brain caught up. So we're here now, but do you remember if we tested that? Okay. Beth Long: No, we didn't, but I would say keep it higher generally, but you also want to balance that with being personal and getting your message across, conveying the importance of the link. Jessica Pugel: Yeah. If you were just saying here, read this and then you get your spiel below it. I'm not sure what that would maybe come off as weird. And then that would go against the whole just be a normal thing. Beth Long: Why should I read this? Yeah. Yeah. Okay. Jessica Pugel: Next question. Can you talk more about who specifically those emails went to? Was it the policymaker's information email, or their staff or the staff covering the issue, and do you dis-aggregate by the state to account for how different states provide administrative support for policymakers? That is a great question. We think a lot about it. In most of our analysis, we do break it up by officials and staffers. Like we can include that and we find that, Surprisingly to me, at least that officials open the emails at a higher rate than the staffers do. We don't send it to necessarily the Chief of Staff. We send it to all of the staff for officials, sorry, for the state level, we send it to all of their staff. For federal level we target based on their issue area. So like the area that they work on. Anything to add to that? Beth Long: Nope. 
I think that's good or sense Jessica Pugel: for letter-writing campaigns? Is it best to focus on a single issue or a solution or is it ineffective to include more than one issue in one campaign? Beth Long: That's a good question. We haven't studied letter-writing campaigns, but based on the work that we have done, I would say it doesn't matter the number of issues, as long as you're getting at what I was talking about with the frameworks institute. If you're conveying a problem, you also have to convey the solution. So that might get difficult if you're including, like 10 different issues. But if you have just a few issues and you can say the solution, what can be done about it, then that might be more effective. Jessica Pugel: From an analytics perspective, I think we have considered including different issue areas in the same email campaign, but we have decided against it because it's really hard to decide whether someone opened that email more times because of this issue or because of that issue. And so we just tend to not allow that complexity to happen because we want cleaner results. Next question. Have you made it publicly available which policymakers never opened emails and which ones usually do like in a scorecard format? As a constituent, I'd love to know those can't my reps are in. We do not because it can hurt their reputation. All of our data are anonymized. Even before they're stored on our server. But we know we can like, know what state they're in, but we don't release that information. Because it would probably hurt our relationships with them as well. Beth Long: We do have a paper coming out soon that describes that data anonymously. If just a plug for that. Jessica Pugel: Yes. We'll be coming out soon. That's exciting. It's the one that has the one that, of course, it would be the one with the circle with the pie chart. Okay. That's fine. Can you tell if that was forwarded to the staffer who leaves them summaries? No, we can't. 
That's actually one of the big things. So when we say one of our emails get more, gets more opens or gets more clicks, it's actually just like the email. It's based on its intended recipient. Sometimes we get open rates back that are like in the hundreds. And I don't know about you, but I never opened emails a hundred times. Unless it's oh is my order like delivered yet? I don't know. But usually I, it, so it must be forwarded. So we assume that if there's a stronger or a higher number of opens, that it was forwarded, the same thing with a click. How are you evaluating effectiveness? Click rate. This is something through Quorum. They have, so for open rate, there's a one-pixel invisible download thing that when you open the email that downloads. With Apple's new privacy advancements they are futzing with that a little bit, but we don't really know how it's impacting our data yet. And then click rate is also through Quorum. They change the link. I don't know how it works, technologically, but it's like a link redirects. It's like the industry standard. Both of these are industry standards. Did you test threat frames of define and a solution to frame email, body? We did. Oh, it was so cool. Nothing happened from it, but we tested it. We test it. So it was matching solutions, subject with solutions, body, and problems with problems, body, and I'd miss matching them. And there was nothing. I wanted it to have something so bad. And so that's why I remember it. Beth Long: And it's surprising because the frameworks Institute, that could be one way of getting at that high efficacy and high urgency, but I think maybe it needs to be like the threat frame needs to be more than a subject line in this case. Jessica Pugel: Yeah. Yeah. I agree. Do you study your replies? It's difficult to. Just for capacity reasons. But we have that data stored. 
At some point in the future we can look at it, and we're hoping that in one of our future papers we actually end up looking at those. We do have someone code the replies in the inbox they come into: positive, negative, meeting request, things like that. Beth Long: What are other things we really want to test? We have a wishlist; we want to test so many things. I know for me, I'm personally interested in substance use issues and the power of sharing personal lived experiences and how that might increase engagement with emails. So that's what I'll be exploring further. Jessica, what do you really want to test? Jessica Pugel: The one that I've been really into lately has been the what-versus-how framework. It's also pulled from the FrameWorks Institute. The what frame is just, what is the problem and how prevalent is it, and the how frame is, why does this problem exist? What are the structural influences there? It's been so interesting to see the differences in engagement between those two, and I just want to dive into that a lot more. We just haven't done enough on it for me to know what to do with that information yet. It would be takeaway number 11, but we don't have that data in full. In conveying solutions, is offering legislative language more effective than just describing the solution? Beth, if you want to take this one, you're welcome to. Beth Long: I can speak to this a little bit. We haven't tested this specifically, but in our other work we test our model, the Research-to-Policy Collaboration model, which is a model for bridging research and policy by connecting researchers and policymakers. Through that, I've attended a number of meetings with staffers who have told me specifically that legislative language would be very helpful to them, rather than just having a laundry list of different solutions.
We have not delved into that territory, mostly because we don't have the capacity to draft legislative language, and also because we're federally funded, so we can't lobby. If we were to draft legislative language, it would have to be more like example text, so it would mean coming up with more than just one piece of language. We want to, but we haven't yet. Jessica Pugel: A counterpoint to that: I've read in some of the research literature on this area that maybe policymakers don't like it when researchers provide specific solutions, like legislative language, because they feel that's their job, the legislator's job, and that researchers should stay in their lane without providing specific language. So in sum, we don't know. Any insight into how many legislators are viewing through a mobile device? I imagine this affects the length of the message, depending on which device they're accessing from. In short, we don't have that information. Judging from our Google Analytics reports about mobile versus desktop use on our own website, I think most visits are from a desktop, but we know that of course people do check their email on their phones. So I don't know for sure, but I agree that the device you're on is going to affect how you view an email and then how you engage with it, for sure. Beth Long: That's an interesting future test, maybe, if we can come up with a way to test it. Jessica Pugel: In our website optimization work, we have more coming. Do you test if the actual link gets clicked more or less than a hyperlink? We did test it, I think twice, and there was nothing interesting: people click it, and open, at about the same rates. But the responses that we receive from the emails where we hyperlink are more negative than the ones with the actual link. Very recently, we provided a hyperlink and multiple people responded.
Like I can't click on that link. It's our policy. Like I can't click on hyperlinks and so that you have to give them the raw link itself. I don't understand that, but I'm sure that there's good reason for it. Yeah, but we have tested it a couple of times. [post_title] => Research-Backed Tips for Emailing Legislators and Staff with RPC's Beth Long and Jessica Pugel [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => research-backed-tips-for-emailing-legislators [to_ping] => [pinged] => [post_modified] => 2021-10-12 21:34:10 [post_modified_gmt] => 2021-10-12 21:34:10 [post_content_filtered] => [post_parent] => 0 [guid] => https://www.quorum.us/?post_type=resources&p=5773 [menu_order] => 0 [post_type] => resources [post_mime_type] => [comment_count] => 0 [filter] => raw ) ) [post_count] => 1 [current_post] => -1 [in_the_loop] => [post] => WP_Post Object ( [ID] => 5773 [post_author] => 12 [post_date] => 2021-10-12 21:34:10 [post_date_gmt] => 2021-10-12 21:34:10 [post_content] => Jessica Pugel: Research-to-Policy Center is based out of Penn State University, and we are focused on improving the wellbeing of children and families by promoting the use of research and social policy. In addition to our practice, we also study how to improve the use of research and legislation. So what we're covering today is some of what we've learned from that, Just to orient you to the issue at hand. Do you ever relate to this picture? Think that you have a lot of emails? Think about public officials, and their staff sort through emails from a wide range of stakeholders from think tanks, to colleagues, to a lot of constituents, and they're expected to be responsive to all of them. And all of that contributes to an information overload. And one that only got worse for the pandemic. Public officials were, and still are navigating the pandemic and the heightened racial tensions from the past couple of years. 
And both of those prompted even more emails from constituents and other stakeholders who wanted their voice to be heard. With this influx of information what makes you think that yours are getting read more off? Outreach strategizing is super important to any organization. And if you don't believe me, just look at the subject lines of emails that you've received. Like these ones that I pulled from my inbox just yesterday, this online learning myth is busted. Get the facts from the experts. Maybe that's like spiking controversy. He's one of the worst Americans. This field filth dangerous, very dangerous. Seems like an appeal to strong emotions. And the last one, Jessica, protects what's worth saving is personalized to me. Everyone is trying to encourage more people to read their emails and testing out their own ideas. So think about it. In your own work, what have you done to cut through the noise of officials' email inboxes? What is the most effective way to email them? After all the first step to getting a meeting with them is for them to open your email and for you to make a good impression. A variety of stakeholders message public officials every day. But there are very few studies that have looked at how to most effectively reach them until now. Our team at the RPC started to look into communication strategies in earnest right at the start of the pandemic. March 2020. Since then we've conducted over 75 rapid cycle, randomized controlled trials. Rapid cycle, meaning that we were learning in real-time. We use the results of one trial to inform the next trial and randomized controlled trials are the gold standard for evaluation. It allows us to be pretty darn sure that any difference in outcome is due to the thing that we changed rather than any other outside factors. And this is how those tests are structured. We start with an idea. Sometimes we look through our own inboxes and think about why we open the ones that we open. 
And just as importantly, why we don't open the ones that we don't open or ideas can be pulled from psychology or marketing or other relevant work. For example, the cocktail party effects. This idea that our names are really powerful and pulling our attention that might drive you to want to test including the recipient's name in the subject. So one email would have the name and another email would not have the name. Cool. The idea is done. Now we actually have to implement it. So next, we would find something to distribute. Since our goal is to improve the use of research. And there was a new pandemic. We often found COVID-related research summaries to distribute in an email. Notice that we aren't just spamming these emails. We carefully choose timely material. It's really just a side benefit that we also get to test communication strategies along with that. And because we're sending science-related information, the takeaways that we're presenting today are specific to science information, but should largely work with any type of information. Once we know what we're sending, we then determine who to send it to. Choosing relevant officials is key and we'll talk about why later. We usually distribute to a few thousand officials and staffers, both at the federal and state levels for reference. After we have a list of people we want to send a resource to now the testing part of it begins. Randomized controlled trials require that we've randomized our recipients into group A and group B hence randomization. This allows us to be sure that the effect is because of the thing that we're altering, the name in this example, and many computer programs can do this. We usually use Excel. We would randomize half of them to the control group and half of them to the name group. 
Next up is to upload these recipients and their group conditions as a custom field to Quorum, instead of having to type in each name individually into the recipient box, based on how they are randomized, Quorum allows us to, upload the randomized groups as a custom field, and then use the custom field to select all the recipients at once. This is a huge time-saver for us and it makes mistakes way less likely. Then we schedule the two emails that we want to test against each other, put in the recipients, and schedule the emails to go out at the same time. We have to go out at the same time because we don't want the time of day or day of the week to influence their outcomes. And this example, we would have two emails with identical bodies, but the subject line would differ. One might say substance use research. And the other might say substance use research for the office's name. The placeholder options that Quorum provides are also a really cool feature and save us again a lot of time and a lot of mistakes. Once the emails go out, we monitor the replies we receive and the open and click rates that Quorum presents on its Outbox. This is what the Outbox monitoring page looks like. So we can see that the emails indeed went out correctly and we can get a glimpse of which emails do better. In this one, we see that the top email has a higher open rate, 23 compared to 21. And but the second one has a higher click rate, click meaning clicks the links in the email rather than just like opening the emails, like clicking in your inbox. So this just lets us know that things are going okay. After a couple of weeks, we closed the data collection by downloading the reports from the Outbox and analyzing them. For you stats nerds out there, we use logistic regression to look at if they opened or clicked at all and negative binomial regressions to look at how many times they opened or clicked. 
This difference between a binary outcome and a count outcome is an important one when looking at email data, and I'm happy to give more details about these policies if you're interested, just shoot me an email. Now, for those of you who are perhaps less interested in statistics, first of all, I don't blame you. That's fine. But on the following slides, you're going to see some asterisks. And this is just how we denote statistical differences between groups or between subject lines. More asterisks mean that the finding was less likely to happen just by chance. So we can be more certain that the effect is real. More asterisks more certain. And that's all that means. So even if some of these effects have zero asterisks, the examples are included just to demonstrate these ideas rather than to report out exact results. And we have several specific trials for each of these takeaways that we're presenting today. And we just chose a couple of them as good examples, regardless of if there's just statistical significance. As you can imagine, these tests are time-intensive to conduct, but Quorum helps make it a smoother, more efficient process. The Power Search allows us to identify the relevant officials.. Put in personalized information for the thousands of people without placeholder. Upload the randomized groups as Custom Fields and send those as bulk emails and then also monitor theopen and click rates. So we did that process 78 times in the past 18 months, and we learned a lot from that work. We have 10 key takeaways that you can use in your own outreach strategies. And Beth is going to take it over from here. Beth Long: Thanks, Jessica. I'm really excited to share with you some of the things that we learned from this work. We hope and expect that some of these lessons can be applied to any goal that you may have for your email, whether it's to get a meeting or get your information read. A little bit of background. 
First, as Jessica alluded to because there've been few if any research studies that have experimentally tested science communication strategies with policymakers specifically, we had to draw our ideas for our tests from social psychology and marketing research. First up, we tested the idea of relevance. It's intuitive and also supported by marketing research, that people are more likely to engage with information that they deem to be personally relevant. So in the case of policymakers, this could mean information that is relevant to them or their constituents as well as information that they can use or care about. As such, we tested things like using a legislator's name or their state name in the subject line. We also tested sending the emails to a targeted audience, such as those who sit on relevant committees, those who work in states that have a high prevalence of the issue, and those who mentioned the issue in public statements like on social media. We've found that the results varied by issue area, but there was a fairly consistent effect that personalizing the subject lines with policymaker names or state names across the areas of COVID exploitation and interpersonal violence increased engagement with emails, but there was no effect of personalization in the context of police violence. When sending emails to those who sit on relevant committees, we similarly found increased engagement in three of the four issue areas. But when sending to those who have a high prevalence of the issue in their state, there was surprisingly little effect. And finally, when targeting those who mentioned the issue frequently in public statements, we found really inconsistent results. Engagement actually decreased in the context of COVID and was increased in the context of police and interpersonal violence, but had no effect at all in the context of exploitation. 
So in some cueing relevance by including policymakers' names or states in the subject line, or by targeting those who sit on relevant committees seem to be effective strategies in most contexts, but the state-level prevalence and public mentions were not so effective. And here's just an example of the types of responses that we've received. If our targeting is way off, they'll say something to the effect of not being sure what to do with the information. This response was sent from someone who sits on a child and family law committee and who frequently works in legislation related to crime and law enforcement. The resource we sent to her was about preventing substance use, but clearly, our targeting was way off as she didn't really understand how to use the information we were sending to her. Next, we were interested in testing subject line length because it's also intuitive and supported by marketing research. We tested this by sending short and long emails to approximately 2000 recipients with short meaning two to three sentences that take up no more than three lines, excluding the hello and sign-off lines. We found that the short email resulted in almost a hundred more clicks than the link in the email compared to the long one. So yes, shorter is generally better with a few exceptions that I'll get to shortly. We were then interested in whether formatting the email is coming from a person or an organization with fancy newsletter-style formatting would be more effective because it's commonly believed that fancy newsletter formatting should increase credibility and therefore engagement with the email message. At least that's what we believed. However, we were quite surprised to see a very large effect in the opposite direction of what we expected. The plain email coming from a real person resulted in 46% more opens and a whopping seven times more clicks than a newsletter-style email. 
So it seems that policymakers prefer emails from real people, rather than newsletters from organizations. Because policymakers prefer people, we next wanted to test if they similarly like people's stories. Accordingly, we tested a personal narrative against a normal short email. In this case, we happen to have a person who had lived experience with a substance use disorder who is also a parent contribute to one of our fact sheets on parenting supports for parents with substance use disorders. She was willing to share her very personal story in this way and how she personally benefited from a parenting support group. And it paid off. We found that the link of the email with her personal story, it was clicked on more than the regular one, despite the longer length. So this is one caveat to our previous recommendation. Keep it short unless you're sharing a personal story in this way. Just driving home this point a little bit more, we tested these three emails against each other. The first one started off with some statistics regarding the problem. The second one started with a pleasantry. I hope you're having a great week. And the last one started with a researcher introducing themselves. We found that the one that started off with numbers, got approximately half as many clicks as the other two. So policymakers really prefer people. Switching gears just a little bit. We next wanted to test the science and research frames because of the increasingly negative light that universities and science is being seen in, specifically that academic scholarly work is self-serving and doesn't meet the needs of the public. We first tested two science-based subject lines against a control line with the word regarding. And we found that although the science-based subject lines were open slightly less, the link to the research product was clicked 50% more suggesting that policymakers want to know what to expect. 
If they get a general email with a subject line that starts with regarding they don't know what to expect. And they therefore may feel duped or surprised by being presented with a science-based fact sheet. But when the science frame is transparent, they know what they're getting and seem to click on the link more as a result. We explore this more in another trial to see if we could replicate it. And this trial we tested concerning and regarding against research. And our results were similar. The research subject line has slightly fewer opens, but a lot more clicks on the research product, further supporting the idea that policymakers want science-based emails to be transparent, or they just want to know what to expect. And although we don't have time to present all the trials that tested this. We want to mention that we overwhelmingly found zero evidence for an anti-science bias, which as researchers we think is a good thing. Our next recommendation is to just be normal, which sounds like common sense. But when trying to incorporate or test traditional messaging tactics like jumping on the bandwagon, it becomes a little difficult to have a natural-sounding subject line. It seems that policymakers have become averse to tactics like this, and just prefer normal-sounding subject lines. Here we found that what we thought would be the control subject line or the ineffective neutral one ended up being the most effective likely because it was the most natural sounding. We saw this again when trying to test the strategy of evoking emotion and drawing attention and realize in retrospect, how strange a subject line like shocking policy considerations is. Again, we saw here that our subject line also the most natural-sounding and non-click baity ended up being the most effective. It seems that policymakers and staffers have been so inundated with emails that use these tactics, that they've come to view them as clickbait and became averse to them. 
Following this, we've done a number of tests on emotion and threat frames. And this first trial, we tested a subject line that included the word information against blinds that included the words, research in social disparities and found that the social disparity subject line resulted in the most opens, we think because it can be a hot button issue that's frequently debated in the media and accordingly it authentically evoked emotion. Similarly in our next trial, we tested a solution frame against a threats frame with the thinking that the threats frame would authentically evoke emotion. And because people are more prone to pay attention to things that are threatening or dangerous to them. Indeed, the subject line with the word threats was more effective than the subject line with the phrase new solutions. Finally, we tested another variation of a solutions frame against a threat or difficulty frame. We likewise found that the subject line with the word difficulties was more effective than the line with the word solution. So in sum using emotional appeal to increase policymakers engagement with email is slightly complicated because it seems that the emotion has to be subtly evoked as we see here, rather than overtly mentioned, as we saw with the shocking policy considerations line. So these results subsequently led us to test a series of subject lines with a problem versus solutions frame. This is just one example from this series where we tested a problem-focused email body against a solutions-focused one. Notice how the first one here mentions how children are vulnerable and dangers that parents and schools must address while the second one discusses the unique opportunity that parents in schools have and the feasible measures that can be taken to ensure that children are safe online. So the first one is the problem one and the second one is the solutions one. 
And we found that, although the problems frame, problems on this one had more opens, the solutions-focused one had barked clicks. And these differences might not be as dramatic as some of our other results, but they are indeed statistically significant, which as Jessica mentioned, indicates that the difference is meaningful and suggests a real effect. So our thinking here is that a problem's frame may attract attention, but it can also hinder action. More work is we need to do more work to further explore this but work from the frameworks Institute, which is a group that tests different message frames with the general public has shown that messages that convey a high sense. A sense of high urgency and high efficacy work best. So it may not be an either-or thing, but rather that both the problem and solution are needed to convey both that there's indeed a problem, but also that something can be done about it. And through all this testing, we became interested in the email behavior of policymakers. Do they always open emails, sometimes, or never. We thought that it might be similar to the general public, where there's a small group of people who always open and keep a clean inbox and larger groups of people who never, or only sometimes open emails. I'll start with the good news that approximately 20% of recipients open every email, the bad news is that a larger percent of perhaps not like 45% never opened. The rest, approximately 35%, rarely or sometimes open it's this group that dissemination teams may benefit the most by figuring out how to reach since their behavior seems to be the most malleable. Finally, what might be one of the most important takeaways is that context matters. Outside events and the current politicization of a given issue might be stronger than any messaging strategy. What works in one context might not work in another. 
In this series of trials, we found that the problems framed visitor engagement with the e-mails disseminating research products to policing and students, but not racial disparities in housing. We're showing this light again, just to further reiterate this point. Even strategies that are generally effective, like personalization may not work a hundred percent of the time. And this exact reason takes us to our final takeaway that evaluation is necessary. Evaluation is also necessary because what might seem intuitive might not actually be all that effective as we saw with the plane formatting compared to the newsletter formatting. But you won't know unless you evaluate it. So as such evaluation capacities should be integrated into dissemination efforts and maintain to understand what works and what doesn't that when trying to reach policymakers via email, and now hand it back to Jessica who can recap and cover anything I forgot to mention. Jessica Pugel: So I ended up, it was a lot of information in a short amount of time, but all of these tips are available at the site and linked on this QR code, which you can scan for the end of the session. It's on the rest of our slides, but to recap. So it's important to make whatever you're sending relevant to the official, whether that means including their name or state name in the subject line or targeting based on their committee assignment. Whatever you send, be sure to keep it as short as possible with the exception of personal stories. Personal stories are counted out of this because policymakers really prefer people to hear from them and understand. Because they prefer people, you should take steps to be like a real human person, instead of trying to trick them into opening your email by making it sound like there's something else inside. Transparency is key. If you see an email that says cute dog pictures, and you open it to find turtle pictures, instead, you might be a little annoyed. 
Even if the turtle pictures are also cute. Perhaps an overarching way to capture these takeaways is to be normal, sound like a human, make sure that it sounds like something you would actually send to like your colleague. Because it's so important to be normal, it makes emotional appeals really complicated. If you can authentically evoke emotions that might help, but if it comes off as forced or inauthentic, it might hurt. One way you could authentically evoke motion is by appealing to problems about rather than solutions for an issue. But we see that might also backfire. People might open the problem emails one more, but would also maybe get overwhelmed with the size of the problem and that would stop them from taking action. Instead, a solution frame might promote action, though it may receive lower open rates. No matter what methods you've used to keep in mind that one in two will never open the email and one in five will always open it. So it's that middle part that you can really target with strategies. Given the fast-paced nature of policy, international attention, these strategies won't always be effective in every case. Even our strongest strategies fail sometimes. And like I said before, this is about scientific information. These could have very different results for different issue areas. And because of that, it's crucial to include this testing as part of your outreach efforts. There is certainly more to learn in this area and we can't do it alone. A huge thank you to our messaging trials team and our supporters, Taylor, Rachel, Mary, Kat, Brittany Max, and Cagla without whom none of this work would have been possible. And thank you to our attentive audience, and you can follow us on Twitter or shoot us an email if you want to keep in touch. Otherwise, I think we're ready to move into Q and A. I see in the short versus long email tasks, did it matter where the link was placed in the email? If the link was higher in a longer email, did that matter? 
We didn't test that to my knowledge. We usually put the link at the end and on its own. We've had some issues where if we embed the link like we're like read it here and then we embed the link on that, those words then they'll respond to us saying can you send it in a different way because we can't click on links that aren't actual, they're just, they think that you're going to give them an error or sorry, words are hard. Give them that's. What is the word I'm looking for? Beth Long: A virus Jessica Pugel: was not sure what you're trying to say, but my brain caught up. So we're here now, but do you remember if we tested that? Okay. Beth Long: No, we didn't, but I would say keep it higher generally, but you also want to balance that with being personal and getting your message across, conveying the importance of the link. Jessica Pugel: Yeah. If you were just saying here, read this and then you get your spiel below it. I'm not sure what that would maybe come off as weird. And then that would go against the whole just be a normal thing. Beth Long: Why should I read this? Yeah. Yeah. Okay. Jessica Pugel: Next question. Can you talk more about who specifically those emails went to? Was it the policymaker's information email, or their staff or the staff covering the issue, and do you dis-aggregate by the state to account for how different states provide administrative support for policymakers? That is a great question. We think a lot about it. In most of our analysis, we do break it up by officials and staffers. Like we can include that and we find that, Surprisingly to me, at least that officials open the emails at a higher rate than the staffers do. We don't send it to necessarily the Chief of Staff. We send it to all of the staff for officials, sorry, for the state level, we send it to all of their staff. For federal level we target based on their issue area. So like the area that they work on. Anything to add to that? Beth Long: Nope. 
I think that's good or sense Jessica Pugel: for letter-writing campaigns? Is it best to focus on a single issue or a solution or is it ineffective to include more than one issue in one campaign? Beth Long: That's a good question. We haven't studied letter-writing campaigns, but based on the work that we have done, I would say it doesn't matter the number of issues, as long as you're getting at what I was talking about with the frameworks institute. If you're conveying a problem, you also have to convey the solution. So that might get difficult if you're including, like 10 different issues. But if you have just a few issues and you can say the solution, what can be done about it, then that might be more effective. Jessica Pugel: From an analytics perspective, I think we have considered including different issue areas in the same email campaign, but we have decided against it because it's really hard to decide whether someone opened that email more times because of this issue or because of that issue. And so we just tend to not allow that complexity to happen because we want cleaner results. Next question. Have you made it publicly available which policymakers never opened emails and which ones usually do like in a scorecard format? As a constituent, I'd love to know those can't my reps are in. We do not because it can hurt their reputation. All of our data are anonymized. Even before they're stored on our server. But we know we can like, know what state they're in, but we don't release that information. Because it would probably hurt our relationships with them as well. Beth Long: We do have a paper coming out soon that describes that data anonymously. If just a plug for that. Jessica Pugel: Yes. We'll be coming out soon. That's exciting. It's the one that has the one that, of course, it would be the one with the circle with the pie chart. Okay. That's fine. Can you tell if that was forwarded to the staffer who leaves them summaries? No, we can't. 

Research-Backed Tips for Emailing Legislators and Staff with RPC’s Beth Long and Jessica Pugel


Jessica Pugel: The Research-to-Policy Center is based out of Penn State University, and we are focused on improving the wellbeing of children and families by promoting the use of research in social policy.

In addition to our practice, we also study how to improve the use of research in legislation. What we’re covering today is some of what we’ve learned from that work.

Just to orient you to the issue at hand: do you ever relate to this picture? Think that you have a lot of emails? Public officials and their staff sort through emails from a wide range of stakeholders, from think tanks to colleagues to a lot of constituents, and they’re expected to be responsive to all of them.

And all of that contributes to information overload, one that only got worse during the pandemic. Public officials were, and still are, navigating the pandemic and the heightened racial tensions of the past couple of years. Both of those prompted even more emails from constituents and other stakeholders who wanted their voices to be heard. With this influx of information,

what makes you think that yours are getting read more often? Outreach strategizing is super important to any organization, and if you don’t believe me, just look at the subject lines of emails that you’ve received, like these ones that I pulled from my inbox just yesterday: “This online learning myth is busted.

Get the facts from the experts.” Maybe that’s like sparking controversy: “He’s one of the worst Americans.” This one feels dangerous, very dangerous; it seems like an appeal to strong emotions. And the last one, “Jessica, protect what’s worth saving,” is personalized to me. Everyone is trying to encourage more people to read their emails and testing out their own ideas.

So think about it. In your own work, what have you done to cut through the noise of officials’ email inboxes? What is the most effective way to email them? After all the first step to getting a meeting with them is for them to open your email and for you to make a good impression. A variety of stakeholders message public officials every day.

But there are very few studies that have looked at how to most effectively reach them, until now. Our team at the RPC started to look into communication strategies in earnest right at the start of the pandemic, March 2020. Since then we’ve conducted over 75 rapid-cycle randomized controlled trials. Rapid-cycle means that we were learning in real time: we use the results of one trial to inform the next trial. And randomized controlled trials are the gold standard for evaluation.

They allow us to be pretty darn sure that any difference in outcome is due to the thing that we changed rather than any other outside factors. And this is how those tests are structured: we start with an idea. Sometimes we look through our own inboxes and think about why we open the ones that we open.

And, just as importantly, why we don’t open the ones that we don’t. Ideas can also be pulled from psychology or marketing or other relevant work. For example, the cocktail party effect, the idea that our names are really powerful at pulling our attention, might drive you to want to test including the recipient’s name in the subject line.

So one email would have the name and another email would not have the name. Cool, the idea is done. Now we actually have to implement it. Next, we would find something to distribute. Since our goal is to improve the use of research, and there was a new pandemic, we often found COVID-related research summaries to distribute in an email. Notice that we aren’t just spamming these emails; we carefully choose timely material. It’s really just a side benefit that we also get to test communication strategies along the way. And because we’re sending science-related information, the takeaways that we’re presenting today are specific to science information, but should largely work with any type of information.

Once we know what we’re sending, we then determine who to send it to. Choosing relevant officials is key and we’ll talk about why later. We usually distribute to a few thousand officials and staffers, both at the federal and state levels for reference. After we have a list of people we want to send a resource to

now the testing part of it begins. Randomized controlled trials require that we’ve randomized our recipients into group A and group B hence randomization. This allows us to be sure that the effect is because of the thing that we’re altering, the name in this example, and many computer programs can do this.

We usually use Excel. We would randomize half of them to the control group and half of them to the name group. Next up is to upload these recipients and their group conditions as a custom field to Quorum. Instead of having to type each name individually into the recipient box based on how they are randomized, Quorum allows us to upload the randomized groups as a custom field and then use the custom field to select all the recipients at once.
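The randomization step described above can be sketched in a few lines of Python. This is a minimal illustration rather than the RPC’s actual tooling: the CSV layout, the `condition` column name, and the group labels are all assumptions.

```python
import csv
import random

def randomize_recipients(path_in, path_out, seed=42):
    """Read a recipient list, shuffle it, and assign half to each condition.

    The resulting 'condition' column could then be uploaded to Quorum as a
    custom field and used to select each group of recipients at once.
    """
    with open(path_in, newline="") as f:
        recipients = list(csv.DictReader(f))
    random.Random(seed).shuffle(recipients)   # seeded so the split is reproducible
    half = len(recipients) // 2
    for i, row in enumerate(recipients):
        row["condition"] = "control" if i < half else "name"
    with open(path_out, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(recipients[0].keys()))
        writer.writeheader()
        writer.writerows(recipients)
```

Seeding the shuffle makes the assignment reproducible, which helps if the recipient list ever has to be re-exported and re-split.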

This is a huge time-saver for us, and it makes mistakes way less likely. Then we schedule the two emails that we want to test against each other, put in the recipients, and schedule the emails to go out at the same time. They have to go out at the same time because we don’t want the time of day or day of the week to influence the outcomes.

In this example, we would have two emails with identical bodies, but the subject line would differ. One might say “substance use research,” and the other might say “substance use research for [office name].” The placeholder options that Quorum provides are also a really cool feature and save us, again, a lot of time and a lot of mistakes.

Once the emails go out, we monitor the replies we receive and the open and click rates that Quorum presents in its Outbox. This is what the Outbox monitoring page looks like. We can see that the emails indeed went out correctly, and we can get a glimpse of which emails do better. In this one, we see that the top email has a higher open rate, 23% compared to 21%.

But the second one has a higher click rate, click meaning clicking the links in the email rather than just opening the email in your inbox. So this just lets us know that things are going okay. After a couple of weeks, we close the data collection by downloading the reports from the Outbox and analyzing them.

For you stats nerds out there, we use logistic regression to look at whether they opened or clicked at all, and negative binomial regression to look at how many times they opened or clicked. This difference between a binary outcome and a count outcome is an important one when looking at email data, and I’m happy to give more details about these analyses if you’re interested; just shoot me an email. Now, for those of you who are perhaps less interested in statistics, first of all, I don’t blame you. That’s fine. But on the following slides, you’re going to see some asterisks, and this is just how we denote statistical differences between groups or between subject lines. More asterisks mean that the finding was less likely to happen just by chance, so we can be more certain that the effect is real. More asterisks, more certain. And that’s all that means. So even if some of these effects have zero asterisks, the examples are included just to demonstrate these ideas rather than to report exact results.
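The speakers analyze the binary opened/clicked outcome with logistic regression; as a simpler, standard-library illustration of the same kind of comparison, a two-proportion z-test checks whether open rates differ between the two groups. The counts in the example are made up, not the RPC’s data.

```python
import math

def two_proportion_z(opens_a, n_a, opens_b, n_b):
    """z statistic for the difference in open rates between groups A and B."""
    p_a, p_b = opens_a / n_a, opens_b / n_b
    pooled = (opens_a + opens_b) / (n_a + n_b)     # pooled open rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical trial: 230 of 1,000 recipients opened email A, 210 of 1,000 opened email B.
z = two_proportion_z(230, 1000, 210, 1000)   # |z| < 1.96, so not significant at p < .05
```

A 2-point difference in open rates on 1,000 recipients per arm is not enough to clear the conventional significance bar, which is why the speakers caution that some slide examples carry zero asterisks.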

And we have several specific trials for each of the takeaways that we’re presenting today; we just chose a couple of them as good examples, regardless of whether there’s statistical significance. As you can imagine, these tests are time-intensive to conduct, but Quorum helps make it a smoother, more efficient process.
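The asterisk notation described above conventionally maps to p-value thresholds. The talk doesn’t state the exact cutoffs, so the .05/.01/.001 values here are the standard convention, assumed for illustration:

```python
def asterisks(p_value):
    """Convert a p-value into significance stars: more stars, more certainty."""
    if p_value < 0.001:
        return "***"
    if p_value < 0.01:
        return "**"
    if p_value < 0.05:
        return "*"
    return ""    # no stars: the difference could plausibly be chance
```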

The Power Search allows us to identify the relevant officials, put in personalized information for the thousands of people with placeholders, upload the randomized groups as Custom Fields, send those as bulk emails, and then monitor the open and click rates. So we did that process 78 times in the past 18 months, and we learned a lot from that work.

We have 10 key takeaways that you can use in your own outreach strategies. And Beth is going to take it over from here.

Beth Long: Thanks, Jessica. I’m really excited to share with you some of the things that we learned from this work. We hope and expect that some of these lessons can be applied to any goal that you may have for your email, whether it’s to get a meeting or get your information read.

A little bit of background. First, as Jessica alluded to because there’ve been few if any research studies that have experimentally tested science communication strategies with policymakers specifically, we had to draw our ideas for our tests from social psychology and marketing research. First up, we tested the idea of relevance. It’s intuitive and also supported by marketing research, that people are more likely to engage with information that they deem to be personally relevant.

So in the case of policymakers, this could mean information that is relevant to them or their constituents as well as information that they can use or care about. As such, we tested things like using a legislator’s name or their state name in the subject line. We also tested sending the emails to a targeted audience, such as those who sit on relevant committees, those who work in states that have a high prevalence of the issue, and those who mentioned the issue in public statements like on social media.

We found that the results varied by issue area, but there was a fairly consistent effect: personalizing the subject lines with policymaker names or state names, across the areas of COVID, exploitation, and interpersonal violence,

increased engagement with emails, but there was no effect of personalization in the context of police violence. When sending emails to those who sit on relevant committees, we similarly found increased engagement in three of the four issue areas. But when sending to those who have a high prevalence of the issue in their state, there was surprisingly little effect.

And finally, when targeting those who mentioned the issue frequently in public statements, we found really inconsistent results. Engagement actually decreased in the context of COVID and was increased in the context of police and interpersonal violence, but had no effect at all in the context of exploitation.

So, in sum, cueing relevance by including policymakers’ names or states in the subject line, or by targeting those who sit on relevant committees, seem to be effective strategies in most contexts, but state-level prevalence and public mentions were not so effective. And here’s just an example of the types of responses that we’ve received.

If our targeting is way off, they’ll say something to the effect of not being sure what to do with the information. This response was sent from someone who sits on a child and family law committee and who frequently works on legislation related to crime and law enforcement. The resource we sent her was about preventing substance use, but clearly our targeting was way off, as she didn’t really understand how to use the information we were sending her.

Next, we were interested in testing email length, because it’s also intuitive and supported by marketing research that shorter emails do better. We tested this by sending short and long emails to approximately 2,000 recipients, with short meaning two to three sentences that take up no more than three lines, excluding the hello and sign-off lines.

We found that the short email resulted in almost a hundred more clicks on the link in the email compared to the long one. So yes, shorter is generally better, with a few exceptions that I’ll get to shortly.

We were then interested in whether formatting the email as coming from a person or from an organization with fancy newsletter-style formatting would be more effective, because it’s commonly believed that fancy newsletter formatting should increase credibility and therefore engagement with the email message.

At least that’s what we believed. However, we were quite surprised to see a very large effect in the opposite direction of what we expected. The plain email coming from a real person resulted in 46% more opens and a whopping seven times more clicks than a newsletter-style email. So it seems that policymakers prefer emails from real people, rather than newsletters from organizations.

Because policymakers prefer people, we next wanted to test if they similarly like people’s stories. Accordingly, we tested a personal narrative against a normal short email. In this case, we happened to have a person with lived experience of a substance use disorder, who is also a parent, contribute to one of our fact sheets on parenting supports for parents with substance use disorders.

She was willing to share her very personal story in this way, including how she personally benefited from a parenting support group. And it paid off: we found that the link in the email with her personal story was clicked on more than the regular one, despite the longer length. So this is one caveat to our previous recommendation.

Keep it short unless you’re sharing a personal story in this way. Just driving home this point a little bit more, we tested these three emails against each other. The first one started off with some statistics regarding the problem. The second one started with a pleasantry. I hope you’re having a great week.

And the last one started with a researcher introducing themselves. We found that the one that started off with numbers, got approximately half as many clicks as the other two. So policymakers really prefer people.

Switching gears just a little bit, we next wanted to test science and research frames, because of the increasingly negative light that universities and science are being seen in, specifically the perception that academic scholarly work is self-serving and doesn’t meet the needs of the public.

We first tested two science-based subject lines against a control line with the word “regarding.” We found that although the science-based subject lines were opened slightly less, the link to the research product was clicked 50% more, suggesting that policymakers want to know what to expect. If they get a general email with a subject line that starts with “regarding,” they don’t know what to expect, and they may therefore feel duped or surprised by being presented with a science-based fact sheet. But when the science frame is transparent, they know what they’re getting and seem to click on the link more as a result.

We explored this more in another trial to see if we could replicate it.

In this trial, we tested “concerning” and “regarding” against “research,” and our results were similar: the “research” subject line had slightly fewer opens but a lot more clicks on the research product, further supporting the idea that policymakers want science-based emails to be transparent; they just want to know what to expect.

And although we don’t have time to present all the trials that tested this, we want to mention that we overwhelmingly found zero evidence for an anti-science bias, which, as researchers, we think is a good thing.

Our next recommendation is to just be normal, which sounds like common sense. But when trying to incorporate or test traditional messaging tactics like jumping on the bandwagon, it becomes a little difficult to have a natural-sounding subject line. It seems that policymakers have become averse to tactics like this, and just prefer normal-sounding subject lines. Here

we found that what we thought would be the control subject line, the ineffective neutral one, ended up being the most effective, likely because it was the most natural-sounding.

We saw this again when trying to test the strategy of evoking emotion and drawing attention, and realized in retrospect how strange a subject line like “shocking policy considerations” is. Again, we saw that our control subject line, also the most natural-sounding and non-clickbaity one, ended up being the most effective.

It seems that policymakers and staffers have been so inundated with emails that use these tactics, that they’ve come to view them as clickbait and became averse to them.

Following this, we’ve done a number of tests on emotion and threat frames. In this first trial, we tested a subject line that included the word “information” against lines that included the words “research” and “social disparities,” and found that the social disparities subject line resulted in the most opens, we think because it can be a hot-button issue that’s frequently debated in the media, and accordingly it authentically evoked emotion.

Similarly, in our next trial, we tested a solutions frame against a threats frame, with the thinking that the threats frame would authentically evoke emotion, because people are more prone to pay attention to things that are threatening or dangerous to them. Indeed, the subject line with the word “threats” was more effective than the subject line with the phrase “new solutions.”

Finally, we tested another variation of a solutions frame against a threat or difficulty frame. We likewise found that the subject line with the word “difficulties” was more effective than the line with the word “solutions.” So, in sum, using emotional appeal to increase policymakers’ engagement with email is slightly complicated, because it seems that the emotion has to be subtly evoked

as we see here, rather than overtly mentioned, as we saw with the shocking policy considerations line.

So these results subsequently led us to test a series of subject lines with a problem versus solutions frame. This is just one example from this series where we tested a problem-focused email body against a solutions-focused one. Notice how the first one here mentions how children are vulnerable and

dangers that parents and schools must address, while the second one discusses the unique opportunity that parents and schools have and the feasible measures that can be taken to ensure that children are safe online. So the first one is the problem one and the second one is the solutions one. And we found that, although the problem-framed one had more opens, the solutions-focused one had more clicks. These differences might not be as dramatic as some of our other results, but they are indeed statistically significant, which, as Jessica mentioned, indicates that the difference is meaningful and suggests a real effect.

So our thinking here is that a problems frame may attract attention, but it can also hinder action. We need to do more work to further explore this, but work from the FrameWorks Institute, which is a group that tests different message frames with the general public, has shown that messages that convey

a sense of high urgency and high efficacy work best. So it may not be an either-or thing, but rather that both the problem and the solution are needed, to convey both that there’s indeed a problem and that something can be done about it.

And through all this testing, we became interested in the email behavior of policymakers. Do they always open emails, sometimes, or never? We thought that it might be similar to the general public, where there’s a small group of people who always open and keep a clean inbox, and larger groups of people who never, or only sometimes, open emails. I’ll start with the good news: approximately 20% of recipients open every email. The bad news is that a larger percentage, around 45%, never open.

The rest, approximately 35%, rarely or sometimes open. It’s this group that dissemination teams may benefit the most from figuring out how to reach, since their behavior seems to be the most malleable.
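Segmenting recipients into the never/sometimes/always groups described above is straightforward once you have per-recipient open counts from the export data. A minimal sketch, assuming the thresholds stated in the talk (opened nothing = never, opened every email = always):

```python
from collections import Counter

def segment_openers(open_counts, n_emails_sent):
    """Bucket recipients by how many of the n_emails_sent they opened."""
    buckets = Counter()
    for opened in open_counts:            # one entry per recipient
        if opened == 0:
            buckets["never"] += 1
        elif opened >= n_emails_sent:     # >= because counts can exceed sends
            buckets["always"] += 1
        else:
            buckets["sometimes"] += 1
    return buckets
```

In the talk’s data, these buckets came out to roughly 45% never, 35% sometimes, and 20% always.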

Finally, what might be one of the most important takeaways is that context matters. Outside events and the current politicization of a given issue might be stronger than any messaging strategy; what works in one context might not work in another. In this series of trials, we found that the problems frame boosted engagement with the emails disseminating research products on policing and students, but not racial disparities in housing.

We’re showing this slide again just to further reiterate this point: even strategies that are generally effective, like personalization, may not work a hundred percent of the time.

And this exact reason takes us to our final takeaway: that evaluation is necessary. Evaluation is also necessary because what might seem intuitive might not actually be all that effective, as we saw with the plain formatting compared to the newsletter formatting. But you won’t know unless you evaluate it.

So, as such, evaluation capacities should be integrated into dissemination efforts and maintained to understand what works and what doesn’t when trying to reach policymakers via email,

and now I’ll hand it back to Jessica, who can recap and cover anything I forgot to mention.

Jessica Pugel: So that was a lot of information in a short amount of time, but all of these tips are available at the site linked on this QR code, which you can scan before the end of the session. It’s on the rest of our slides, but to recap: it’s important to make whatever you’re sending relevant to the official, whether that means including their name or state name in the subject line or targeting based on their committee assignments. Whatever you send, be sure to keep it as short as possible, with the exception of personal stories.

Personal stories are an exception because policymakers really prefer people and want to hear from them and understand them. Because they prefer people, you should take steps to come across as a real human person instead of trying to trick them into opening your email by making it sound like there’s something else inside. Transparency is key.

If you see an email that says “cute dog pictures” and you open it to find turtle pictures instead, you might be a little annoyed, even if the turtle pictures are also cute. Perhaps an overarching way to capture these takeaways is to be normal: sound like a human, and make sure it sounds like something you would actually send to a colleague. Because it’s so important to be normal,

it makes emotional appeals really complicated. If you can authentically evoke emotion, that might help, but if it comes off as forced or inauthentic, it might hurt. One way you could authentically evoke emotion is by appealing to the problems of an issue rather than the solutions for it. But we see that might also backfire: people might open the problem emails more, but would also maybe get overwhelmed with the size of the problem,

and that would stop them from taking action. Instead, a solutions frame might promote action, though it may receive lower open rates. No matter what methods you use, keep in mind that one in two will never open the email and one in five will always open it. So it’s that middle part that you can really target with these strategies. Given the fast-paced nature of policy and national attention, these strategies won’t always be effective in every case. Even our strongest strategies fail sometimes.

And like I said before, this is about scientific information. These could have very different results for different issue areas. And because of that, it’s crucial to include this testing as part of your outreach efforts. There is certainly more to learn in this area and we can’t do it alone. A huge thank you to our messaging trials team and our supporters,

Taylor, Rachel, Mary, Kat, Brittany Max, and Cagla without whom none of this work would have been possible. And thank you to our attentive audience, and you can follow us on Twitter or shoot us an email if you want to keep in touch. Otherwise, I think we’re ready to move into Q and A.

I see. In the short versus long email tests, did it matter where the link was placed in the email? If the link was higher in a longer email, did that matter?

We didn’t test that, to my knowledge. We usually put the link at the end and on its own. We’ve had some issues where if we embed the link, like we write “read it here”

and embed the link on those words, then they’ll respond to us saying, can you send it in a different way, because we can’t click on links that aren’t actual links. They think that you’re going to give them an error or, sorry, words are hard. What is the word I’m looking for?

Beth Long: A virus

Jessica Pugel: I was not sure what I was trying to say, but my brain caught up. So we’re here now. But do you remember if we tested that? Okay.

Beth Long: No, we didn’t, but I would say keep it higher generally, but you also want to balance that with being personal and getting your message across, conveying the importance of the link.

Jessica Pugel: Yeah. If you just said, “here, read this,” and then got into your spiel below it, that might come off as weird. And that would go against the whole “just be normal” thing.

Beth Long: Why should I read this? Yeah. Yeah. Okay.

Jessica Pugel: Next question. Can you talk more about who specifically those emails went to? Was it the policymaker’s general information email, or their staff, or the staff covering the issue? And do you disaggregate by state to account for how different states provide administrative support for policymakers?

That is a great question. We think a lot about it. In most of our analyses, we do break it up by officials and staffers.

We can include that, and we find, surprisingly to me at least, that officials open the emails at a higher rate than the staffers do. We don’t necessarily send it to the Chief of Staff. For the state level, we send it to all of an official’s staff. For the federal level, we target based on their issue area, the area that they work on. Anything to add to that?

Beth Long: Nope, I think that’s good.

Jessica Pugel: Next question: for letter-writing campaigns, is it best to focus on a single issue or solution, or is it ineffective to include more than one issue in one campaign?

Beth Long: That’s a good question. We haven’t studied letter-writing campaigns, but based on the work that we have done, I would say the number of issues doesn’t matter, as long as you’re getting at what I was talking about with the FrameWorks Institute.

If you’re conveying a problem, you also have to convey the solution. That might get difficult if you’re including, say, ten different issues. But if you have just a few issues and for each you can say the solution, what can be done about it, then that might be more effective.

Jessica Pugel: From an analytics perspective, I think we have considered including different issue areas in the same email campaign, but we have decided against it because it’s really hard to decide whether someone opened that email more times because of this issue or because of that issue. And so we just tend to not allow that complexity to happen because we want cleaner results.

Next question. Have you made it publicly available which policymakers never open emails and which ones usually do, like in a scorecard format? As a constituent, I’d love to know which camps my reps are in.

We do not, because it could hurt their reputations. All of our data are anonymized, even before they’re stored on our server. We can know what state they’re in, but we don’t release that information, because it would probably hurt our relationships with them as well.

Beth Long: We do have a paper coming out soon that describes that data anonymously, so just a plug for that.

Jessica Pugel: Yes, that will be coming out soon. That’s exciting. Of course, it would be the one with the pie chart. Okay. That’s fine.

Can you tell if an email was forwarded, say to the staffer who gives them summaries?

No, we can’t. That’s actually one of the big caveats. When we say one of our emails gets more opens or more clicks, it’s based on the email’s intended recipient. Sometimes we get open counts back that are in the hundreds.

And I don’t know about you, but I never open an email a hundred times, unless it’s, oh, is my order delivered yet? So it must have been forwarded. We assume that if there’s a higher number of opens, it was forwarded, and the same thing with clicks.
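The forwarding heuristic described here can be made concrete: per-recipient open counts come back from the email platform, and any count well above what one person would plausibly generate is treated as evidence the message was forwarded. The threshold, bucket names, and data below are invented for illustration; they are not the team's actual cutoffs.

```python
def classify_engagement(open_counts, forward_threshold=5):
    """Bucket recipients by open count, flagging likely forwards.

    open_counts: dict mapping recipient -> number of recorded opens.
    forward_threshold: counts above this are assumed to mean the email
    was forwarded, since one person rarely reopens an email that often.
    """
    buckets = {"never": [], "opened": [], "likely_forwarded": []}
    for recipient, n in open_counts.items():
        if n == 0:
            buckets["never"].append(recipient)
        elif n > forward_threshold:
            buckets["likely_forwarded"].append(recipient)
        else:
            buckets["opened"].append(recipient)
    return buckets

# Invented example data: an open count of 140 suggests forwarding.
counts = {"office_a": 0, "office_b": 2, "office_c": 140}
print(classify_engagement(counts))
```

The same caveat from the talk applies: this only infers forwarding, it cannot confirm who actually read the message.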

How are you evaluating effectiveness?

Open rate and click rate. This is through Quorum. For open rate, there’s an invisible one-pixel image that downloads when you open the email. With Apple’s new privacy changes they are futzing with that a little bit, but we don’t really know how it’s impacting our data yet.

And then click rate is also through Quorum. They change the link. I don’t know exactly how it works technologically, but it’s a link redirect. Both of these are industry standards.
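The mechanics described here, a one-pixel tracking image for opens and rewritten links that bounce through a redirect for clicks, are standard email-analytics techniques. Quorum's internals aren't public, so the following is only a sketch of the general approach; the tracker domain and URL parameters are hypothetical.

```python
import re
import urllib.parse

TRACKER = "https://tracker.example.org"  # hypothetical tracking domain

def add_tracking(html_body: str, message_id: str) -> str:
    """Instrument an HTML email: rewrite each link to pass through a
    click-tracking redirect, and append an open-tracking pixel."""

    def rewrite_link(match: re.Match) -> str:
        original = match.group(1)
        # The redirect endpoint would log the click, then send the
        # reader on to the original URL.
        redirect = (f"{TRACKER}/click?mid={message_id}"
                    f"&url={urllib.parse.quote(original, safe='')}")
        return f'href="{redirect}"'

    tracked = re.sub(r'href="([^"]+)"', rewrite_link, html_body)

    # Invisible 1x1 image: when the mail client fetches it, the
    # server records an "open" for this message ID.
    pixel = (f'<img src="{TRACKER}/open?mid={message_id}" '
             'width="1" height="1" alt="">')
    return tracked + pixel

# Example: instrument a minimal message.
email = '<p>Read the brief <a href="https://example.com/brief">here</a>.</p>'
print(add_tracking(email, "msg-42"))
```

This also illustrates the Apple caveat from the talk: Mail Privacy Protection pre-fetches images through a proxy, so pixel-based open counts over-report for those recipients.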

Did you test a threat frame in the subject line and a solution frame in the email body?

We did. Oh, it was so cool. Nothing happened from it, but we tested it. We matched a solutions subject with a solutions body and a problems subject with a problems body, and we mismatched them. And there was nothing. I wanted it to show something so badly.

And so that’s why I remember it.

Beth Long: And it’s surprising, because per the FrameWorks Institute, that could be one way of getting at that high efficacy and high urgency. But I think maybe the threat frame needs to be more than a subject line in this case.

Jessica Pugel: Yeah. Yeah. I agree.

Do you study your replies?

It’s difficult to, just for capacity reasons. But we have that data stored, and at some point in the future we can look at it. We’re hoping that for one of our future papers we actually do end up getting to look at those. We have someone code them, from the inbox they come into, as positive, negative, meeting request, and other things like that.

Beth Long: What are other things we really want to test?

We have a wishlist of things. We want to test so many things. I know for me, I’m personally interested in substance use issues and the power of sharing personal lived experiences and how that might increase engagement with emails.

So that’s what I’ll be exploring further. Jessica, what do you really want to test?

Jessica Pugel: The one that I’ve been really into lately has been the “what” versus “how” framework. It’s also pulled from the FrameWorks Institute. The “what” frame is about what the problem is and how prevalent it is, and the “how” frame is about why the problem exists, what the structural influences are. It’s been so interesting to see the differences in engagement between those two, and I just want to dive into that a lot more. We just haven’t done enough on it for me to know what to do with that information yet. It would be number 11. But we don’t have that data in full. Yeah.

In conveying solutions, is offering legislative language more effective than just describing the solution? Beth, if you want to take this one, you’re welcome to.

Beth Long: I can speak to this a little bit. We haven’t tested this specifically, but in our other work we test our model, the Research-to-Policy Collaboration model, which is a model for bridging policy and research by connecting researchers and policymakers.

Through that, I’ve attended a number of meetings with staffers who have told me specifically that legislative language would be very helpful to them rather than just having a laundry list of different solutions. We have not delved into that territory, mostly because we don’t have the capacity to be drafting legislative language.

And also, because we’re federally funded, we can’t do lobbying. So if we were to draft legislative language, it would have to be more like example text, so it would be more than coming up with just one piece of language. We want to, but we haven’t yet.

Jessica Pugel: A counterpoint to that: I’ve read in some of the research literature in this area that

maybe policymakers don’t like it when researchers provide solutions like legislative language, because they feel like that’s their job, the legislator’s job. So it feels like researchers should stay in their lane and not provide specific language.

So in sum, we don’t know.

Any insight into how many legislators are viewing through a mobile device? I imagine this affects the length of the message, depending on which device they’re accessing from.

In short, we don’t have that information. Judging from our Google Analytics reports about mobile versus desktop use, I think most of that is for our own website, but we know that of course people do check their email on their phones. So I don’t know for sure, but I agree that the device you’re on is going to affect how you view it and then how you engage with it, for sure.

Beth Long: That’s an interesting future test.

Maybe if we can come up with a way to test it.

Jessica Pugel: In our website optimization work, we have more coming.

Do you test if the actual link is clicked more or less than a hyperlink?

We did test it, I think twice, and there was no difference. It wasn’t interesting. People click it, I think, at about the same rates, and maybe open at the same rates, but the responses that we receive from the ones where we hyperlink are more negative than the ones with the actual link. Very recently, we provided a hyperlink and multiple people responded, like, “I can’t click on that link, it’s our policy, I can’t click on hyperlinks.” So you have to give them the raw link itself. I don’t understand that, but I’m sure that there’s good reason for it. Yeah, but we have tested it a couple of times.