Agreed, but I think it is too easy to blame this failing on new media for merely doing what they do and on corporate fundraising for doing what it does.
Science owes them a better critique than "sure, it's published in the 'best' journals, but we all suspect something is off there."
1 reply
0 reposts
1 like
But the code has been public since 2017, & the index much longer than that. If anything, that only underscores our reluctance to go beyond the sniff test and engage deeply and collectively in building robust analyses.
0 replies
0 reposts
0 likes
Right, very much so! The authors cite numerous examples, and rev 2 addresses this: "my read of the conservation community is that they tend to not trust the LPI and see it as badly exaggerated without knowing why"
2 replies
0 reposts
1 like
brilliant! maybe these can be added to the existing CiTO ontology sparontologies.github.io/cito/current... maybe somewhere between 'derides' and 'parodies' 🙂
0 replies
0 reposts
4 likes
Rather, I am worried that our entire publication system may not only be inadequate but possibly quite at odds with what is most sorely needed to confront the biodiversity crisis.
We select for novelty and significance. We scrutinize words and claims far more closely than code or sensitivity analyses.
0 replies
6 reposts
11 likes
While I applaud these authors (& the editor, for showing that it is not about getting three yes votes, and for publishing the reviews -- well worth the read!), I don't mean to single out the LPI or rev 1.
1 reply
0 reposts
1 like
The authors include a line-by-line code review that identifies 30 errors in the published LPI code, though all with much smaller numerical impact than the math issues that receive the focus. (Rev 1: 'I didn't feel it was necessary to review the code because it wouldn't change my assessment.')
1 reply
0 reposts
1 like
Reviewer 1 is clearly skeptical of the value here, writing:
"In the first sentence of this review report, I purposely refer to this study as a ‘sensitivity analysis’ of the LPI because, to me, that is all that it is."
1 reply
0 reposts
0 likes
The authors observe:
"We have explored the code used for the calculation of the LPI. Although [3 cites] provide the basic principle of calculating the LPI, the exact methodological procedure is clear only from the code of the package rlpi (v.0.1.0) in R"
1 reply
0 reposts
0 likes
Reviewer #2 speaks directly to the index's influence, writing:
"LPI is probably the single most widely known and used index of human impact on biodiversity [..] that receives exceptionally wide press coverage to one that is embedded into UN conventions and treaties."
1 reply
0 reposts
0 likes
Implementing complex indicators like the one discussed here is both immensely influential and immensely challenging. Moreover, such metrics are dynamic quantities whose precise definitions live not in the static papers we peer review but in the evolving code that computes them.
1 reply
0 reposts
2 likes
Note the full peer review and reply chain has been published alongside the paper - kudos to the reviewers and publisher on that! Arguments aside, I think it is a great example/resource for new researchers learning to navigate this part of the process.
1 reply
0 reposts
0 likes
This rather fantastic recent piece is a great illustration of the value of code review, as well as of sensitivity analyses. doi.org/10.1038/s414...
But I think it is particularly instructive as commentary on our scientific process today. 🧵
1 reply
6 reposts
8 likes
Nice to see our earthdatalogin R package with @openscapes.bsky.social covered in the NASA news! www.earthdata.nasa.gov/news/easier-...
0 replies
1 repost
2 likes
Candidates distinguished only by ridiculously high numbers of pubs can do very poorly. Search committees have plenty of flaws, but these are very human processes and the flaws are very human too.
0 replies
0 reposts
0 likes
In my experience (n=1 depts), the ridiculous productivity expectation thing is a myth. Yes, the system is broken, but not in the write-50-papers way. Bean counting is used as a filter more than a deciding factor. Write ~2 first-author papers a year. Other factors are much more decisive.
1 reply
0 reposts
1 like
Exciting opportunity for multiple postdoctoral scholars to join our growing UC Berkeley team at DSE! Focus areas include Climate resilience in our National Parks, Indigenous Environmental Stewardship, Applied hydrology, + Tools for Working Lands. ~$87K - $95K plus benefits
dse.berkeley.edu/postdocs
0 replies
9 reposts
8 likes
Thrilled to see "Biodiversity monitoring for a just planetary future" now out in Science! Led by @milliechapman.bsky.social in collaboration with 14 coauthors from ecology, sociology, geography, computer science and other disciplines. 🌏 🦋 doi.org/10.1126/scie...
0 replies
5 reposts
11 likes
www.cogeo.org has a good introduction to how this works in the COG spec. The client tool knows the format, so the range request can run on the compressed data (sketch below).
0 replies
0 reposts
1 like
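To make that concrete, a minimal sketch in R (assuming the {terra} package; the COG URL and extent below are hypothetical placeholders):

```r
# GDAL's /vsicurl/ virtual filesystem fetches remote files via HTTP
# range requests. Because the client understands the COG layout
# (internal tiles + overviews), it downloads only the compressed
# blocks it needs, never the whole file.
library(terra)

url <- "/vsicurl/https://example.com/landcover_cog.tif"  # hypothetical URL

r <- rast(url)  # opening the file fetches only the header/metadata

# Cropping to a small window triggers range requests for just the
# tiles that intersect it; decompression happens client-side.
aoi   <- ext(-122.6, -122.3, 37.7, 37.9)
small <- crop(r, aoi)
```

The same pattern works in any GDAL-backed client (rasterio, QGIS, etc.).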
Good question! In fact, streamable compression is actually better than uncompressed formats here. Most common "cloud-native" formats use compression, I believe (COGs, GeoParquet, etc.); see the sketch below.
1 reply
0 reposts
0 likes
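And on the tabular side, a minimal sketch assuming the {duckdb} + {DBI} R packages and a hypothetical Parquet URL; DuckDB's httpfs extension pulls only the (compressed) column chunks and row groups a query touches:

```r
library(DBI)
library(duckdb)

con <- dbConnect(duckdb())
dbExecute(con, "INSTALL httpfs;")
dbExecute(con, "LOAD httpfs;")

# Hypothetical URL and column names: selecting a few columns with a
# LIMIT fetches only the matching compressed chunks over HTTP,
# not the whole file.
df <- dbGetQuery(con, "
  SELECT species, latitude, longitude
  FROM read_parquet('https://example.com/occurrences.parquet')
  LIMIT 10
")

dbDisconnect(con, shutdown = TRUE)
```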
Heading to SF for AGU, excited to share our preliminary NASA TOPS work on cloud-native geospatial (boettiger-lab.github.io/nasa-topst-e... ) and chat open science & open source. 🚀🛰️🌏
1 reply
1 repost
11 likes