Announcing the winners of the 2024 SIPS Commendations! 👇✨ For information about our awards and to nominate projects for mission awards and commendations for 2025, see here: improvingpsych.org/mission/awar...
Initially I wondered why the hell they were having George Clooney go off on Biden but this is actually newsworthy and the kind of first-person experience that goes beyond what other people (who work in politics) have said.
"We were right about how the data were altered, Gino’s prevailing explanation for the alterations does not make sense, and yet we are the defendants in this case."
For evaluating research, should transparency and credibility of findings be independent criteria, or should transparency be prioritized over credibility?
Oversimplified form:
Best non-transparent research < Worst transparent research
or
Worst transparent research < Best non-transparent research
This thread from the bad place is very tough on a recent finding, but I can't help but still admire the authors for their openness with a prereg and data sharing to make the critique possible.
My priors have completely reversed since I first learned of this idea and literature >20 years ago. I look forward to them reversing again if a paradigm can be established that specifies the conditions necessary to produce a repeatable effect of death priming on one or more TMT outcomes.
This, the weak priming effect, and our Many Labs 4 failure to replicate an impact of mortality salience, leave me skeptical that there's anything unique about priming death, and it is unlikely to have a meaningful impact on cultural worldviews. online.ucpress.edu/collabra/art...
A multi-lab replication of a theoretical expectation from Terror Management Theory that death related primes are uniquely impactful after delay, as opposed to other types of semantic priming. Findings do not support that expectation.
It appears very well done. It is also remarkable that such a classic and important paradigm for one of the standard-bearer theories in the field received relatively modest attention upon publication. Is it the decline of social media, our becoming inured to such outcomes, or something else?
Save the date! On July 19, COS and UVA TYDE are hosting a webinar on innovative research methods for understanding social media's impact on youth mental health. See you there! cos-io.zoom.us/webin...
I think it is mostly jokes because it is a nonsense statement. There's no obvious meaning to the concept of "PhD-level intelligence."
I perceive that the jokes are the academic way of pointing out the nonsense in the form of a comment rather than a question.
“My hope for this is that it goes from something that was unimaginable until it happened, and then it was unthinkable not to do it,” says @ianhussey.bsky.social “You have to give people the permission, and incentive, to think in the first place that errors might exist.” www.wired.com/story/bounty...
Open Science NL CfP for #OpenScience Infrastructure is now open! Interested parties can apply for funding for the improvement or development of digital infrastructures that support open science. For more info and documentation, see: t.co/w4Tn4wxNRi
New role for a Project Coordinator at COS. Perfect for an early career researcher passionate about open scholarship and looking to gain research experience, whether to then go to grad school or stay in industry.
In this @scholarlykitchen.bsky.social interview, Nici Pfeiffer, our Chief Product Officer, discusses her journey from Mechanical Engineer to leading the product development of OSF at COS.
New preprint! "Prevalence of transparent research practices in psychology: A cross-sectional study of empirical articles published in 2022" osf.io/preprints/ps... After more than a decade of new infrastructure, advocacy, & policy, how often are transparent research practices used in empirical psychology?
🚨 NEW: Joan Donovan, one of the world’s leading misinformation experts, claims that the Harvard Kennedy School shut down her work there to appease Meta.
A tool I made, called StatCheck Simple Edition, is being shared on the other site. Might as well share it here too.
Paste in text containing statistical tests, and it will use the StatCheck library to flag any inconsistencies. It is also a bit more flexible about formatting than the original.
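For anyone curious what such a consistency check does under the hood: statcheck itself is an R package, but the core idea can be sketched in a few lines of Python. This is a hypothetical, minimal illustration for z tests only (it handles none of statcheck's real coverage of t, F, chi-square, or correlation tests, nor its proper rounding logic); the function name and tolerance are my own assumptions, not part of the actual tool.

```python
import math
import re

def check_z_report(text, tol=0.0005):
    """Parse a reported z-test result like "z = 1.96, p = .050" and
    recompute the two-tailed p-value from the z statistic.

    Returns None if no z-test is found; otherwise a dict noting whether
    the reported and recomputed p-values agree within a crude tolerance
    (the real statcheck accounts for rounding of the statistic itself).
    """
    m = re.search(r"z\s*=\s*(-?\d+(?:\.\d+)?)\s*,\s*p\s*=\s*(\d*\.\d+)", text, re.I)
    if m is None:
        return None
    z, reported_p = float(m.group(1)), float(m.group(2))
    # Two-tailed p under the standard normal: 2 * (1 - Phi(|z|)).
    computed_p = math.erfc(abs(z) / math.sqrt(2))
    return {
        "z": z,
        "reported_p": reported_p,
        "computed_p": round(computed_p, 5),
        "consistent": abs(computed_p - reported_p) <= tol,
    }
```

A consistent report like "z = 1.96, p = .050" passes, while "z = 2.50, p = .050" is flagged, since the recomputed p is about .012.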
In 2023, preprint communities united in a fundraising campaign to ensure their long-term sustainability. This initiative aimed to support open-access publishing by backing preprint communities and shared infrastructure.
New DP from I4R's board members, co-director, chair & others: econpapers.repec.org/paper/zbwi4r.... We reproduced & conducted sensitivity analyses on 17 AER papers. Robustness varies between 17% & 88% across studies. A survey of economists suggests that they overestimate robustness.
"The complexities of scientific reform require thoughtful, well-rounded solutions built on inclusive discussions, and the contributions in this special issue provide a rich tapestry of perspectives to guide our way forward."
The primary purpose of preregistration is to provide transparency of the research process. It helps authors and readers calibrate confidence in their claims and evidence based on the plan and what happened.
Deviations from plans are normal. Aspire to make them visible and account for their impact.
What impresses me most is that ResearchGate boasts having almost 3x as many researchers using the service as there are scientists in the world, and it never shows up in my workflow or social media except when someone complains that it exists.
Our "Many Voices" 75-author collaboration led by Yuto Ozaki finding global regularities in music-speech relationships is now out on the cover of Science Advances: science.org/doi/10.1126/...
Full video of 18 coauthors singing/speaking in our own languages at youtu.be/a4eNNrdcfDM
Would you like to contribute to the development of new approaches to evaluating research credibility? Submit a recent paper to receive novel AI and human assessments of your research!