🚨Out in Psych Sci🚨
Prompting accuracy can increase news sharing quality - but is this true for those on the political right?
In an ADVERSARIAL COLLABORATION we find:
➡️ Acc prompts increase sharing quality of Republicans
➡️ Some evidence of greater efficacy for those on left v right
1 reply
12 reposts
25 likes
Reposted by David Rand
*Less than half* of misinformation studies expose people to both false and true info, and ONLY 7% measure discernment (the difference between believing/sharing true and false info)!!
misinforeview.hks.harvard.edu/article/what...
Short 🧵 on why you should be measuring discernment👇
3 replies
25 reposts
37 likes
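Discernment is just the gap between responses to true items and responses to false items. A minimal sketch of the computation, with made-up belief ratings (not data from the paper):

```python
# Discernment: difference in belief (or sharing) between true and false items.
# Ratings below are illustrative only.
true_items = [0.9, 0.8, 0.7]    # belief ratings for true headlines
false_items = [0.4, 0.3, 0.5]   # belief ratings for false headlines

mean = lambda xs: sum(xs) / len(xs)
discernment = mean(true_items) - mean(false_items)
print(round(discernment, 2))  # 0.4
```

A study that only shows people false items can measure belief in falsehoods, but not this difference score — which is why exposing participants to both true and false info matters.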
If you're going to MPSA in Chicago and want to chat, send me a message or email!
0 replies
0 reposts
0 likes
🚨New PNASNexus🚨
We join work on misinformation & harmful language:
-More harmful language in tweets w low-quality news links β=0.1 & in false headlines β=0.19
-Users who share more misinfo use more harmful language in non-news tweets β=0.13 academic.oup.com/pnasnexus/ar...
w @mmosleh.bsky.social
2 replies
11 reposts
23 likes
Thanks for sharing!!
0 replies
0 reposts
1 like
Many thanks!
0 replies
0 reposts
0 likes
Our dataset of 14k headlines available online:
osf.io/q5h49/
0 replies
0 reposts
0 likes
CONCLUSION
-Misinformation & harmful language *are* related in important ways - but not so strongly related that harmful language is a useful diagnostic for info quality
-Shows opportunities to integrate largely disconnected strands of research & understand psychological connections
1 reply
1 repost
3 likes
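A quick simulation illustrates the conclusion above: an association can be real yet too weak to serve as a diagnostic. The rates below are hypothetical (chosen to mimic a small effect), not the paper's estimates:

```python
import random

# Sketch: why a real but weak association is a poor diagnostic.
# Simulated data: low-quality posts are only slightly more likely
# to contain harmful language (rates are made up).
random.seed(0)
posts = []
for _ in range(100_000):
    low_quality = random.random() < 0.5
    p_harmful = 0.15 if low_quality else 0.10   # small, hypothetical shift
    posts.append((low_quality, random.random() < p_harmful))

# Diagnostic rule: flag a post as low-quality iff it uses harmful language.
correct = sum(harmful == low_q for low_q, harmful in posts)
print(correct / len(posts))  # ~0.525 -- barely above chance
```

The association shows up clearly in a regression over 100k posts, but the classification rule built on it is nearly useless.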
RESULTS
-Tweets with links to lower-quality news domains are more likely to contain harmful language
-False headlines are more likely to contain harmful language than true headlines
-Users who share more low quality links use more harmful language - even in non-news posts
1 reply
1 repost
5 likes
We study 8.6 million posts from 6,832 Twitter users
-classifiers identify harmful language
-URL news domain quality scores measure info quality
Also analyze 14k true and false headlines (as evaluated by professional fact-checkers)
1 reply
0 reposts
1 like
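A minimal sketch of the domain-quality side of the methods above: score a user by the average quality rating of the news domains they share. The domain scores here are hypothetical stand-ins, not the actual ratings used in the paper:

```python
# Sketch: aggregate a user's shared-link quality from per-domain ratings.
# Scores below are hypothetical, not the paper's ratings.
domain_quality = {"reuters.com": 0.95, "apnews.com": 0.93, "fakenews.example": 0.15}

def user_quality(shared_domains):
    """Mean quality of news domains a user has shared (skips unrated domains)."""
    scores = [domain_quality[d] for d in shared_domains if d in domain_quality]
    return sum(scores) / len(scores) if scores else None

print(round(user_quality(["reuters.com", "fakenews.example"]), 2))  # 0.55
```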
But they might also be independent:
- false claims need not involve harmful language
- can insult and denigrate targets without making inaccurate claims
So we wanted to investigate empirically whether the two are actually related.
1 reply
0 reposts
1 likes
Misinfo & harmful language are both problematic, but treated largely independently. They could be connected:
- hateful/toxic posts may use inaccurate or misleading claims about their targets to insult or belittle
- posts that seek to mislead may use harmful language as a persuasive tool
1 reply
0 reposts
3 likes
🚨New WP🚨
Field experiments with 33 million FB users & 75k Twitter users: Ads prompting users to think about accuracy reduce misinformation sharing!
Accuracy prompts offer platforms a content-neutral approach that is scalable and preserves user autonomy osf.io/preprints/ps...
2 replies
27 reposts
48 likes
Reposted by David Rand
New paper in PNAS "The distorting effects of producer strategies: Why engagement does not reveal consumer preferences for misinformation" with @jplotkin.bsky.social, @dgrand.bsky.social and @arechar.bsky.social.
www.pnas.org/doi/epdf/10....
1 reply
11 reposts
16 likes
Reposted by David Rand
Catch me next Wed March 6th at the Harvard/Northeastern Misinformation Speaker Series (live & on zoom) for the public reveal of the results from our large collaborative megastudy testing nine misinformation interventions @dgrand.bsky.social @lewan.bsky.social
shorensteincenter.org/new-event/mi...
1 reply
7 reposts
21 likes
🚨WP🚨
We test 9 online samples and find clear tradeoffs between attentiveness and representativeness - which sample is best depends on research q and priorities. For social/political qs I rec Bovitz/Lucid, for complex designs I rec Cloud/Prolific osf.io/preprints/ps...
1 reply
58 reposts
80 likes
Reposted by David Rand
If you run online surveys, you should read this paper.
0 replies
2 reposts
5 likes
Reposted by David Rand
Amitai Shenhav and his lab are great people who do great science, and they're looking for a lab manager...
0 replies
2 reposts
3 likes
Excited to present this paper at SPSP JDM pre-conference today, 4pm PST room 11b. Feel free to swing by (even if not registered for pre conf)!
0 replies
3 reposts
7 likes
I'll be at SPSP on Friday - if anyone is interested in meeting up, send me an email or DM!
0 replies
0 reposts
1 like