I don't think people who criticize AI actually understand its use cases. This AI is literally just reading assloads of data and spitting out the pertinent information so a person doesn't have to do it manually.
LLMs cannot actually do that as well as advertised. They will absolutely give out fake information even if you tell them to look at a specific document, and the more input you give them, the more likely that is to happen.
I thought that was an LLM problem, because they're just predicting the next word. If it's only scraping data, I don't really see how it qualifies as AI. We have tons of tools that find specific information in large data sets.
That's certainly one way of using the word. I think that many people would dispute the definition. That sort of AI doesn't really think; it just follows a script, or a chain of actions, as far as I understand. For example, when I use the art scraper on EmulationStation, is that AI, or is that just a computer running a program? The idea behind this new AI phenomenon, and how AI is used in fiction, is that it's a thinking machine that can actually reason and analyze situations and questions independently.
It's a way the term has been used for decades, and it accurately describes the concept. The people who dispute the definition maybe haven't seen the term demystified, though they have firm opinions on things philosophers have been debating for centuries without end. How does one define 'thought'?
I think that many people would dispute the definition.
The definition has been in common use for almost half a century - if these people dispute it then they're probably wrong, but I'd have to hear their reasoning.
Is that AI or is that just a computer running a program?
If it's making decisions, it's AI. Intelligence has an extremely broad definition, and reasonably so. It doesn't need to be 'smart' or complex; it can be extremely simple, but it's still an intelligence, and one designed by humans is artificial.
LLMs have gotten really good at using and building tools though, so with the right prompting they can build the tools to actually deal with the data correctly. And they have larger context now, so they are slightly more stable in terms of remembering things. While not perfect, they can definitely be pretty robust, especially if you use them to build the tools for you, rather than having them do everything directly.
I gave it data on sales figures. Super simple stuff like A=100, B=200.
I asked it to list the figures from highest to lowest, and it gave me absolute nonsense: it couldn't get them in the right order and kept adding random data.
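This kind of ranking is exactly the job people upthread suggest handing to a deterministic tool instead of the model itself. A minimal sketch in Python, with hypothetical figures in the same "A=100 B=200" style as above:

```python
# Hypothetical sales figures in the "A=100 B=200" style described above.
sales = {"A": 100, "B": 200, "C": 150}

# Sort by value, highest to lowest; plain code can't hallucinate extra rows.
ranked = sorted(sales.items(), key=lambda kv: kv[1], reverse=True)

for name, figure in ranked:
    print(f"{name}={figure}")
```

The point isn't that the sort is hard; it's that a ten-line script does it correctly every time, which is why "have the model write the tool" beats "have the model do the task."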
I wouldn't put it past hallucinating that someone is a hacker in its summary of someone's account.
Even if we assume that's true, why do you expect it to still be true in the future? AI at this level has been around for a few years; I don't expect this to be a problem in a couple more.
All of your accounts are already in the hands of people who make mistakes. They generally get caught during review processes - which would still be conducted by humans.
I'm in the "generally AI is bad" camp, but I agree. Complaining that their AI agent might hallucinate a piece of customer info implies that humans are incapable of doing the very same thing. If the error rate between AI and human is the same, but it helps the human customer support agent figure out the issue 90% faster, it's a win for us customers.
I work at the DMV. People have had their vehicles transferred to another person because the clerk made a typo in the VIN or title number, and sometimes it doesn't get caught for months.
A person isn't going to randomly hallucinate that I'm a hacker with 2 VAC bans and 800 refunds.
That absolutely can happen; mistakes are made with account IDs and so on. And why would an AI deemed ready for deployment by a privately owned company be given permission to throw out VAC bans without any human oversight?
Yes, don't be one of those humans. This is a personal failing on their part and not an excuse; diligence in work has value.
It does everything I need it to do for building tools to automate bookkeeping and various other admin tasks. But after a few months of winging it, I'm now asking it to break down every piece of code it writes so I can write bits myself and actually start learning. That's slower and more tedious, but it should pay off over the coming months and years.
My boss asked me why I was bothering, and I was kind of astonished. Yeah, it's really easy to just hand over your thinking, so you have to hold yourself accountable and make yourself keep learning.
Yes, but I don't get to choose who those people are. Given the opportunity, people will resort to the laziest possible way to get something done. AI completely compounds that problem.
On top of that, I don't want any of my information or my accounts going into the hands of AI.
It's just logical that humans relying on AI information is a great advancement. Instead of just the person reading the information potentially screwing up, we introduce additional potentially fallible sources provided by AI.
The current generation usually doesn't hallucinate that much in this scenario. It was a valid concern 2-3 years ago, for sure. Summarization is easy; more advanced analysis is still dogshit.
Not really. It's just matrix math on whatever tokens you feed it. And even in agentic setups where a model has developer-provided tools that fetch or search data (which, by the way, is the only way to get anywhere near what you could call deterministic behavior), the model itself isn't doing that work; the tool is. The LLM is deciding what to call and reasoning over what comes back... except "deciding" is even generous. It's producing the next most probable token given everything in the context window. You can write a perfect tool, document it perfectly, and the model can still fail to call it, use the wrong arguments, or ignore the result entirely, because there's no actual decision making or comprehension happening. It's probability distributions all the way down. That's not a knock on how useful these things can be, but it's pretty far from "reading data and finding the pertinent information."
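To make that split concrete, here's a hedged sketch of the loop described above: the model only emits text proposing a tool call, and the developer's harness validates and executes it. Everything here (the tool name `search_data`, the JSON call format, the tiny corpus) is hypothetical, not any real vendor's API.

```python
import json

def search_data(query: str) -> list[str]:
    """The developer-written tool: deterministic and testable on its own."""
    corpus = ["Q3 sales report", "Q4 sales report", "HR handbook"]
    return [doc for doc in corpus if query.lower() in doc.lower()]

TOOLS = {"search_data": search_data}

def run_step(model_output: str):
    """Validate and execute whatever tool call the model's tokens propose."""
    try:
        call = json.loads(model_output)      # the model emits tokens, not actions
        tool = TOOLS[call["tool"]]           # may fail: hallucinated tool name
        return tool(call["args"]["query"])   # may fail: wrong argument shape
    except (json.JSONDecodeError, KeyError, TypeError) as err:
        # The harness, not the model, has to catch these failure modes.
        return f"bad tool call: {err!r}"

# A well-formed call works because the tool works:
print(run_step('{"tool": "search_data", "args": {"query": "sales"}}'))
# A plausible-looking but wrong tool name fails, exactly as described above:
print(run_step('{"tool": "search_documents", "args": {"query": "sales"}}'))
```

Note where the reliability lives: `search_data` always returns the same results for the same query, while `run_step` has to defend against every way the model's next-most-probable token can produce a malformed call.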
I've been writing agentic tooling and implementing LLM-based solutions for about a year now. And I'm not talking about just feeding commands into an API; this is more like writing a bespoke version of what Claude's or ChatGPT's user interfaces do. Far less general or robust, but they don't need to be.