LLMs are useless for lawyering and other tasks involving actual analysis and understanding not because the technology hasn't gotten there yet, but because that's fundamentally not what statistical word association is capable of doing. Not only can't it understand, it's not even trying to understand.
LLMs remind me of 3D printing in that there is something there, but almost everything you see online about it is by someone with no understanding of what it is.
No, it won't let terrorists make nuclear weapons, and no, it can't replace in-house counsel; that's not what it's for.
We've worked on natural language models for legal and other predicate oriented spaces before. We have not yet used a neural network structure for it, but have seen considerable reason to believe there is a solution.
In fact, I just thought of something. Hold on.
the impression i get is that they're sometimes useful search engines? so for law they could turn up stuff that may be relevant, which an actual human can check out. like telling you which bales are most likely to contain the needle you're looking for.
there isn't even an awareness for the phrase "trying to understand" to apply to. It's putting one character after another, and all it "knows" is what was input, what parts of its training data are similar to the input, and what parts of its training data are therefore appropriate output.
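To make "putting one character after another" concrete, here is a toy character-level Markov model. This is a deliberate simplification (real LLMs use learned token embeddings and attention rather than raw bigram counts), but the generation loop has the same shape: look at what came before, pick what statistically tends to come next, repeat.

```python
import random
from collections import Counter, defaultdict

def train(text):
    """Count how often each character follows each other character."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, seed, n=40):
    """Extend the seed one character at a time, weighted by frequency.
    No meaning is involved anywhere -- only co-occurrence statistics."""
    out = seed
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out += random.choices(chars, weights=weights)[0]
    return out

model = train("the cat sat on the mat and the cat ate")
print(generate(model, "th"))
```

The output often looks superficially word-like, which is exactly the trap: fluency of form without any content behind it.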
It's kind of sad that LLMs are probably drawing interest and funding away from symbolic languages and systems and maybe even degrading the field of AI research as a whole.
I'm no lawyer, but would there also be a liability issue with a lawyer using AI? Say a lawyer writes a contract with the help of AI. The client signs it, but then the lawyer realizes it contains language he or she wasn't expecting, detrimental to the client. How would this be resolved?
We could do so many useful things with “AI”/data science but get stuck with llms. Generating content has to be one of the least useful things we could build but I guess it’s easy to monetize.
I’d like to have data organized and categorized so that we can access data quickly and accurately!
Seeing people boast about what ChatGPT can do is like watching someone framing a house using a screwdriver to hammer in the nails.
If you know even the slightest thing about tools, you know it's not the right tool for those jobs.
I'd push back very slightly and say that theoretically (the Chinese Room) you could have a machine that mimics that understanding, but here in the real world, with physical limits, scaling becomes impossible before you get anywhere close.
One of the things you absolutely do not want an "AI robo-doctor" to do is give you a diagnosis that has the same syntax and jargon that the diagnosis a real doctor would give you, without the substance. Unfortunately, that's all it ever will do.
i wish more people understood that AI is only going to be useful when it's purpose-built and only applied to the things it has been built to do. Like that guy who had gemini analyse his scans for cancer
Furthermore, contra these lying con men and women, there is not even any PATH to useful "intelligence" through this technology. It's pure waste in support of financial scams.
We have to stop cooperating with, stop humouring, and stop supporting any of this fraudulent nonsense.
As a pointer to concepts or documents that *might* be related, it's potentially useful. Vibes based search. In a legal context, the product should be checking whether those docs... actually exist.. before surfacing them.
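A minimal sketch of that missing verification step. The case names and index here are invented for illustration, but the idea is simple: whatever citations the model suggests get filtered against a real document index before anyone sees them, so invented cases are dropped rather than surfaced.

```python
def verify_citations(suggested, index):
    """Keep only citations that resolve to an actual document in the corpus."""
    return [c for c in suggested if c in index]

# Hypothetical corpus of real documents (names are made up).
index = {
    "Smith v. Jones (1998)": "contract dispute over delivery terms",
    "Doe v. Acme Corp (2004)": "employment discrimination claim",
}

# Hypothetical model output: one real citation, one hallucinated.
llm_output = ["Smith v. Jones (1998)", "Garcia v. Phantom LLC (2011)"]

print(verify_citations(llm_output, index))
# the invented "Garcia v. Phantom LLC" is silently dropped
```

Trivial to build, and conspicuously absent from products that let hallucinated citations reach filed briefs.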
That's not what's being sold.
There are some genuinely good use cases for GenAI in legal tech, such as first-pass contract generation, redline review, playbook interrogation, etc. Is it overhyped? Absolutely. But there are definitely some solid uses for LLMs in law.
Interesting. I'm involved in an ugly estate war, and our attorney is beta testing the court reporting company's AI tool to summarize deposition testimony and insert it into petitions. She'll double-check it all but thinks this might save thousands on the next motion. God, I hope so.
Tomorrow's essay question for the class:
Does this observation have any implications for the coming fad of using corpus linguistics to interpret the Constitution, statutes, and contracts; if so, what are those implications?
I agree that LLMs don't "understand" anything but disagree that they're not good for analysis. Can be very useful in working with datasets, finding trends and outliers, etc.
Someone asked David Simon if it would help him get from Scene 5 to Scene 6 in a script. Anyone can come up with 10 ways to do a scene. Writing is knowing which one *works.*
That's why it's surprising so much money is being poured into something that is an impressive novelty for sure, but in a sense 'average' by definition.
In fact AI is an awfully misleading term. Something like "Data Trained I/O Technology" would be more apt.
It doesn't matter how much better LLM chatbots get at the thing they're actually doing, the thing they're doing is categorically not what hype merchants in this vein are claiming. It's like claiming Photoshop is going to tell us how to reform the tax system. It's a nonsensical premise.
It seems to me that human brains also use language by using "statistical word association."
What the AI lacks is lived and sensory experience. It's hard to tell fact from fiction using language alone.
Just read a 2024 article, "ChatGPT is Bullshit" by Hicks, Humphries and Slater (Ethics and Information Technology), where they argue it's more accurate to call the inaccuracies in LLMs "bullshit" rather than "hallucinations" in science communication. Interesting discussion, referencing other studies.
Even something like evaluating math expressions proves it can't deal with any form of analysis; there is no "thinking" involved. It finds words that correlate to your prompt and makes word soup of them. The sheer amount of text spat out makes people assume it has to be right.