Reposted by Alexander Hoyle
if a computer told you how fucking stupid this is would you believe it
twitter.com/emollick/sta...
94 replies
236 reposts
1576 likes
Oh, a friend's show also just had an ep about it---he said the director's recent high-contrast black and white cut worked better than the original (sorry if you mention this in the ep! haven't listened yet)
wondery.com/shows/eye-of...
0 replies
0 reposts
1 likes
As someone working in NLP and social science, my skin crawls at the idea of “in silico” subjects from LLMs. People aren’t truss bridges you can stress test in SolidWorks! There is no link between model and real-world subject
0 replies
0 reposts
1 likes
Apple Silicon already has dedicated “neural engines” for ML. Going forward, I assume the main consumer uses would be, eg, a local chatbot using your own data
0 replies
0 reposts
0 likes
I’m an AI researcher and I regularly use my Mac to run models locally—albeit not training unless it’s small prototypes. It’s possible to compress the big models (in fact a pioneer of the technique gave his job talk today!)
the one nice outcome of AI hype is that the hobbyist tooling is quite good
1 replies
0 reposts
2 likes
was about to say “mine too”, then realized where Adam would have picked up this principle
0 replies
0 reposts
1 likes
The recent New Yorker piece where he features heavily gave an interesting perspective
www.newyorker.com/magazine/202...
0 replies
0 reposts
0 likes
the last time i had fried chicken in a bucket it was actually ice cream coated in corn flakes. fakery!!!
0 replies
0 reposts
1 likes
Also, for research assistants, if a codebook is unclear you can just refine it by talking to them.
But yes, if your estimator is unbiased, then I don't have a problem in principle with a black box model
1 replies
0 reposts
1 likes
Yeah, I don't know that I really agree with Emily here
For me, it's that I suspect the sources of bias and variation for people are likely to be categorically different from those of LLMs, in ways that aren't necessarily captured by high-level scoring metrics. (I could be wrong)
1 replies
0 reposts
1 likes
What is in the water in Amsterdam?? For my dissertation I've been reading these excellent critical papers on measurement and validation and so many authors have a connection to UvA
pubmed.ncbi.nlm.nih.gov/15482073/
www.tandfonline.com/doi/epdf/10....
www.tandfonline.com/doi/full/10....
0 replies
0 reposts
3 likes
I agree that humans also rely on heuristics, and results from lazy annotator(s) may also be incorrect. I guess the question is how we regard the errors: is one preferable in some way? (eg, if I'm annotating sentiment, are my systematic errors more reflective of the construct than those of an LLM)?
1 replies
0 reposts
0 likes
To your second point, my assumption is that LLMs are more likely to be biased by spurious heuristics that systematically affect downstream conclusions. In the ideal case, I think human disagreements stem from genuine ambiguity. Wondering if it's sort of a bias-variance tradeoff
(...cont.)
1 replies
0 reposts
0 likes
Variable-specific test sets make sense. That said, if using summary metrics alone (eg, F1), then I don't think test set size should have any bearing on ease of bias detection
(...cont.)
1 replies
0 reposts
1 likes
My main concern is that the nature of disagreements between the LLM and human may be very different than those between two humans
Comparing agreement metrics between human-human and human-LLM is a good start, but they could hide fine-grained systematic LLM biases (e.g., spurious lexical influences)
1 replies
0 reposts
2 likes
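To make that concrete: here's a minimal sketch (with made-up sentiment labels) of how two annotator pairs can have identical agreement and identical Cohen's kappa, while one pair's disagreements are scattered and the other's are all one-directional — exactly the kind of systematic bias a single summary metric hides.

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two label lists."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[c] / n * cb[c] / n for c in ca | cb)  # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical labels: both pairs agree with "gold" on 8 of 10 items
gold   = ["pos", "pos", "pos", "pos", "pos", "neg", "neg", "neg", "neg", "neg"]
human2 = ["pos", "pos", "pos", "pos", "neg", "pos", "neg", "neg", "neg", "neg"]  # errors go both ways
llm    = ["pos", "pos", "pos", "pos", "pos", "pos", "pos", "neg", "neg", "neg"]  # errors all neg->pos

print(round(cohen_kappa(gold, human2), 3))  # 0.6
print(round(cohen_kappa(gold, llm), 3))     # 0.6 -- identical kappa...
# ...but the LLM's disagreements are systematically one-directional:
print([(g, p) for g, p in zip(gold, human2) if g != p])  # [('pos', 'neg'), ('neg', 'pos')]
print([(g, p) for g, p in zip(gold, llm) if g != p])     # [('neg', 'pos'), ('neg', 'pos')]
```

The point isn't the toy numbers, just that comparing the confusion structure of disagreements — not only the scalar metric — is what surfaces a spurious lexical bias like this.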
I also appreciated that it was not relentlessly bleak? There are moments of hopefulness in The Last of Us, but the overall perspective is pretty grim
Also I'm now grateful to be a Himesh Patel stan
1 replies
0 reposts
1 likes
Yes, I've thought the same thing! I don't know why it didn't get more attention/acclaim. (And in this era of abundance, a one-season miniseries is very welcome)
1 replies
0 reposts
1 likes
Right, and in most cases, I don't think it should?
$35k is, incidentally, the median individual income in NYC, not exactly a low CoL area. I mean, yeah, there's some truth in saying that most New Yorkers are struggling---but half the people can evidently afford to live there on less than $35k
0 replies
0 reposts
0 likes
...but I think the important point is that the terms of the debate have changed. The policy shifts student debt from actual "peonage" to being a progressive tax
You've made the case the $35k threshold should be higher. That's a different kind of argument than the one happening before the policy
0 replies
0 reposts
0 likes
Look, I think we all agree that there is inherent value in education and that it ought to be low-cost. The question is the proportion of the tax paid by society vs. the individual. You want more paid by society and, tbh, I'm with you. We need more state support... (cont.)
1 replies
0 reposts
0 likes
I mean, I live in DC, have loans, and that's my exact salary. I guess I think of it like a payment plan for something I purchased, like a car?
To be clear, yes, I'm in a very privileged position (field with job prospects, access to internships,...) But to my mind, housing costs are the Big Problem
0 replies
0 reposts
0 likes
It's 5% of income less $35k and is adjusted for dependents. To me the need for a roommate in a high-CoL area is not the fault of these loan payments. In the setup I described the loan is less than 6% of rent
1 replies
0 reposts
0 likes
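For concreteness, the "5% of income less $35k" rule can be sketched as a tiny function. This is a simplification: it ignores the dependent adjustment mentioned above, and the example incomes are made up.

```python
def monthly_payment(income, threshold=35_000, rate=0.05):
    """Simplified income-driven repayment: rate * (income above threshold),
    spread over 12 months. Ignores the per-dependent threshold adjustment."""
    return max(0.0, rate * (income - threshold)) / 12

print(round(monthly_payment(30_000), 2))  # 0.0 -- below the threshold, nothing owed
print(round(monthly_payment(75_000), 2))  # 166.67
```

Because payments depend only on income above the threshold, the marginal structure works like a progressive tax: earning below $35k means owing nothing at all.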
I don't really see what the problem is with what amounts to an income sharing agreement. You earn more, so you pay more. It's a progressive tax! That's good policy!
0 replies
0 reposts
0 likes
My dude. You're about this close to sounding like a libertarian braying "taxation is theft!"
Different scenario: you graduated with a $100k loan, start off at $40k/year, 4% wage growth
You paid $36k over 20 years and the gov cut you a check for the other $64k. Sure sounds like a bargain to me
2 replies
0 reposts
1 likes
It depends, but if you borrow <$12k it's forgiven after ten years (every additional $1k borrowed adds a year, capped at 20 years total).
In this scenario, if their annual salary increases by 1%/year, and assuming 5.5% interest, after 10 years they'd have paid $6k total and be forgiven ~$10k
0 replies
0 reposts
0 likes
It doesn't matter for my argument. The principal doesn't influence the monthly payment
Second, most debt balances are <$10k. I was imagining someone who dropped out but still owed. Typically, they are most affected by student debt because they don't get the increased earning potential of a degree
0 replies
0 reposts
0 likes
You're $10k in debt and make $20/hour in Atlanta
That's ~$40k/year or $720 over the typical expenses from the MIT living wage estimate, which puts rent at $1100/mo (promise I didn't cherry-pick!)
SAVE repayments are $60/month. So you need to lower your monthly expenses by $8. Seems doable?
0 replies
0 reposts
0 likes
🐣
0 replies
0 reposts
0 likes
Reposted by Alexander Hoyle
The #DataSittersClub is back with an all-new book on topic modeling! If the LDA buffet explainer didn't do it for you, give this one a try: thanks to Xanda Schofield and her student Sathvika Anand, I now feel like I actually understand how it works. datasittersclub.github.io/site/dsc20.h...
4 replies
14 reposts
38 likes
@jonathancheng.bsky.social ? not that I've read it, but I know he is an English PhD
1 replies
0 reposts
2 likes
will my incessant lurking in your mentions finally pay off?
1 replies
0 reposts
1 likes
My bluesky feed is effectively an all-day call-in show hosted by Jamelle Bouie with a very antagonistic audience
1 replies
0 reposts
1 likes
When I say it's a useful framing, it's because I think it encourages the appropriate stance when reasoning about what language models are doing---as an example, see this recent discussion on Twitter (the "Octopus paper" she's referencing is the first thing I linked)
x.com/shaily99/sta...
0 replies
0 reposts
1 likes
You're welcome! It generated a lot of discussion among academic NLP people (and still does). While many disagree with their characterization of "understanding," it's still a useful framing (see this recent paper for a different perspective arxiv.org/pdf/2308.055... )
1 replies
0 reposts
1 likes
Yes, this is basically the argument made in this paper. It's quite an easy read, if somewhat out of date
aclanthology.org/2020.acl-mai...
1 replies
1 reposts
1 likes
Like, I can see how someone saying "I worship at the church of a harry potter fanfic author" encourages a search for ulterior beliefs. In deciphering American politics, you become accustomed to decoding obscured intents. But sometimes ya gotta just take people at their word
1 replies
0 reposts
1 likes
not an original thought, but that there are instrumental or materialist explanations for an ideology (early Christian missionaries helped facilitate global trade; AI doomerism promotes regulatory capture) does not preclude an earnest and militant adoption of that ideology
1 replies
0 reposts
0 likes