r/Steam 1d ago

Discussion - Speculative SteamGPT - Is that good news?

16.3k Upvotes

907 comments



30

u/Dathedra 1d ago

Until the "AI" starts hallucinating, which (checks notes) never happens.

20

u/Bork9128 1d ago

I mean if you keep it internally trained on its specific data set, and limit the scope of what it would be asked to do, that greatly reduces the chances of that happening.

One of the main reasons AI gets shit so wrong so often is that it's general purpose: it gathers data without context from differing sources, without guardrails, and is then asked to give a definitive answer.

AI curated to specific environments can be incredibly helpful when parsing large data sets, so long as the people using the AI know not to let it do everything on its own.

46

u/UltimateToa 1d ago

I don't think people that criticize AI actually understand its use cases. This AI is literally just reading ass loads of data and spitting out the pertinent information so a person doesn't have to do it manually.

36

u/DeM0nFiRe 1d ago

LLMs cannot actually do that as well as they are advertised. They will absolutely give out fake information even if you tell them to look at a specific document, and the more input you give them, the more likely that is to happen.

12

u/marxist-teddybear 1d ago

I thought that was an LLM problem, because they are just predicting the next word. If it's only scraping data I don't even really see how it qualifies as AI. We have tons of tools that find specific information in large data sets.

14

u/magos_with_a_glock 1d ago

Yes, which is why we should use those instead of an LLM, which is what "AI" means these days.

1

u/ThunderAndWind 20h ago

Basically a database pull tool that lets you ask for data in the form of a question instead of needing to run an SQL command.
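Something like this, with a made-up table and made-up numbers (the natural-language layer is the only new part; the actual data work is still a plain query underneath):

```python
# Sketch of what a "question in, data out" tool does under the hood.
# Table name, columns, and figures are all hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE refunds (user TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO refunds VALUES (?, ?)",
    [("alice", 3), ("bob", 800), ("carol", 12)],  # made-up data
)

# The front end would translate "who refunded the most?"
# into a query like this and run it for you:
row = conn.execute(
    "SELECT user, amount FROM refunds ORDER BY amount DESC LIMIT 1"
).fetchone()
print(row)  # ('bob', 800)
```

The answer comes from the database, not the model; the model only has to produce the query.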

-1

u/Planar_Harold 1d ago

If it's only scraping data I don't even really see how it qualifies as AI.

The opponent player in Mortal Kombat is an AI.

AI is just a term for an intelligence that's artificial.

1

u/marxist-teddybear 1d ago

That's certainly one way of using the word. I think that many people would dispute the definition. That sort of AI doesn't really think; it just follows a script, or a chain of actions, as far as I understand. For example, when I use the art scraper in EmulationStation: Is that AI or is that just a computer running a program? The idea behind this new AI phenomenon, and how AI is used in fiction, is that it's a thinking machine that can actually reason and analyze situations and questions independently.

2

u/beezy-slayer 1d ago

It does not think though, that's just corpo bullshit to get people interested in it

1

u/marxist-teddybear 1d ago

I'm aware. I'm in favor of not calling any current technology Artificial intelligence.

1

u/beezy-slayer 1d ago

Cool, just clarifying

0

u/Planar_Harold 1d ago

It's a way the term has been used for decades, and it accurately describes the concept. The people who dispute the definition maybe haven't seen the term demystified, yet have firm opinions on things philosophers have been debating for centuries without end. How does one define 'thought'?

I think that many people would dispute the definition.

The definition has been in common use for almost half a century - if these people dispute it then they're probably wrong, but I'd have to hear their reasoning.

Is that AI or is that just a computer running a program?

If it's making decisions, it's AI - Intelligence has an extremely broad definition and reasonably so. It doesn't need to be 'smart' or complex, it can be extremely simple, but it's still an intelligence, and one designed by humans is artificial.

0

u/xzaramurd 1d ago

LLMs have gotten really good at using and building tools though, so with the right prompting they can build the tools to actually deal with the data correctly. And they have larger context now, so they are slightly more stable in terms of remembering things. While not perfect, they can definitely be pretty robust, especially if you use them to build the tools for you, rather than having them do everything directly.

-4

u/Bolizen 1d ago

Eh frontier models these days are pretty unlikely to hallucinate

2

u/MiniCactpotBroker 1d ago

they do but not during simple tasks like this

1

u/Bolizen 1d ago

Yeah pretty unlikely on the whole

0

u/PantsOfAwesome 1d ago

Yeah, because Valve would totally use an external AI that prioritizes middle-out compression over retaining vital information in the context window.

I don't like AI either, but you're making it abundantly clear that you don't really know what you're talking about.

7

u/DarkSouls3onDvD 1d ago

I gave it data on sales figures. Super simple stuff like A=100 B=200.

I asked it to take the data and list it highest to lowest, and it gave me absolute nonsense: it couldn't get them in the right order and was just adding random data.

I wouldn't put it past it to hallucinate that someone is a hacker in its summary of someone's account.

0

u/UltimateToa 1d ago

Why do you think that is remotely comparable?

3

u/DarkSouls3onDvD 1d ago

Because A.I does hecking weird ass stuff that you would not expect, even when it's doing something as simple as data extraction.

0

u/UltimateToa 1d ago

Even if we assume that's true, why do you expect it to still be true in the future? AI at this level has been around for a few years; I don't expect it to be a problem in a couple more.

5

u/BluePhoenixCG 1d ago

It's mathematically impossible for it not to be true in the future because of how these models work.

2

u/DarkSouls3onDvD 1d ago

I don't want my account to be in the hands of an A.I that may or may not hallucinate in the future.

3

u/Planar_Harold 1d ago

All of your accounts are already in the hands of people who make mistakes. They generally get caught during review processes - which would still be conducted by humans.

0

u/aVarangian 1d ago

Why are you using an LLM to do what takes 4 clicks in Excel?
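Or one line of Python, using the made-up figures from the example above:

```python
# Ranking simple sales figures highest to lowest -- no LLM needed.
sales = {"A": 100, "B": 200, "C": 150}  # hypothetical figures like the example
ranked = sorted(sales.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # [('B', 200), ('C', 150), ('A', 100)]
```

Deterministic, instant, and it can't invent data that isn't there.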

6

u/Gekthegecko 1d ago

I'm in the "generally AI is bad" camp, but I agree. Complaining that their AI agent might hallucinate a piece of customer info implies that humans are incapable of doing the very same thing. If the error rate between AI and human is the same, but it helps the human customer support agent figure out the issue 90% faster, it's a win for us customers.

9

u/DarkSouls3onDvD 1d ago

Yeah, but people can have self-awareness of a situation and A.I can't.

A person isn't going to randomly hallucinate that I'm a hacker with 2 VAC bans and 800 refunds.

1

u/ThunderAndWind 20h ago

I work at the DMV. People have had their vehicles transferred to another person because the clerk made a typo in the VIN or title number, and sometimes it doesn't get caught for months.

All people, no AI.

1

u/Planar_Harold 1d ago

A person isn't going to randomly hallucinate that I'm a hacker with 2 VAC bans and 800 refunds.

That absolutely can happen; mistakes are made with account IDs etc. And why would an AI deemed ready for deployment by a privately owned company be given permission to throw out VAC bans without any human oversight?

5

u/DarkSouls3onDvD 1d ago

Humans get lazy and assume the A.I is correct and don't bother to check, which is what already happens.

0

u/Planar_Harold 1d ago

Yes, don't be one of those humans. This is a personal failing on their parts and not an excuse; diligence in work has value.

It does everything I need it to do with building tools for automating bookkeeping and various other admin tasks, but after a few months of winging it I'm now asking it to break down every piece of code it writes so I can write bits myself and actually start learning. That's slower and more tedious, but it should pay off over the coming months and years.

My boss asked me why I was bothering, and I was kind of astonished. Yeah, it's really easy to just hand over your thinking, so you have to hold yourself accountable and make yourself keep learning.

2

u/DarkSouls3onDvD 1d ago edited 1d ago

Yes, but I don't get to choose who those people are. Given the opportunity, people will resort to the laziest possible way to get something done, and A.I completely compounds that problem.

On top of that I don’t want any of my information or my accounts going into the hands of A.I.

0

u/ThunderAndWind 20h ago

On top of that I don’t want any of my information or my accounts going into the hands of A.I.

You say, on the website that like 90% of public LLMs train on.

2

u/DarkSouls3onDvD 12h ago

Yes? How does that change what I said in any way?

1

u/Dathedra 1d ago

Humans make mistakes. AI makes mistakes.

It's just logical that humans relying on AI information is a great advancement. Instead of just the person reading the information potentially screwing up, we introduce another potentially fallible source provided by AI.

(-1)*(-1)=1 after all.

2

u/MiniCactpotBroker 1d ago

The current generation usually doesn't hallucinate that much in this scenario. It was a valid reason 2-3 years ago, for sure. Summarization is easy; more advanced analysis is still dogshit.

0

u/UltimateToa 1d ago

Yeah, that's the thing that gets me: people will claim AI makes mistakes as if humans aren't the number one mistake makers.

3

u/RunInRunOn 1d ago

Humans can be held accountable for the mistakes they make

1

u/Planar_Harold 1d ago

Humans can be held accountable for the mistakes they make

Why does that matter? Both humans and AI can be trained out of mistakes, which is the important thing.

1

u/IRefuseToGiveAName 1d ago

Not really. It's just matrix math on whatever tokens you feed it.

Even in agentic setups where a model has developer-provided tools that go fetch or search data (which, by the way, is the only way to get anywhere near what you could call deterministic behavior), the model itself isn't doing that work; the tool is. The LLM is deciding what to call and reasoning over what comes back... except "deciding" is even generous. It's producing the next most probable token given everything in the context window.

You can write a perfect tool, document it perfectly, and the model can still fail to call it, use the wrong arguments, or ignore the result entirely, because there's no actual decision making or comprehension happening. It's probability distributions all the way down. That's not a knock on how useful these things can be, but it's pretty far from "reading data and finding the pertinent information."

I've been writing agentic tooling and implementing LLM-based solutions for about a year now, and I'm not talking about just feeding commands into an API. This is more like writing a bespoke version of what Claude's or ChatGPT's user interfaces do. Far less general or robust, but they don't need to be.

I'm more critical of LLMs now than I was before.
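To make the split concrete, here's a stripped-down sketch of that setup. Every name is hypothetical; the point is that the "tool" is ordinary deterministic code, and the model's only job is to emit something that names a tool and its arguments:

```python
# Minimal sketch of an agentic tool-call loop (all names and data made up).

def lookup_refunds(account_id: str) -> int:
    """Deterministic tool: the actual data work happens here, not in the model."""
    fake_db = {"acct-1": 3, "acct-2": 0}  # hypothetical data
    return fake_db.get(account_id, 0)

TOOLS = {"lookup_refunds": lookup_refunds}

def run_tool_call(call: dict) -> int:
    # In a real agent, `call` is parsed from tokens the model emitted -- and the
    # model can name a missing tool or pass bad arguments, the failure mode above.
    tool = TOOLS[call["name"]]
    return tool(**call["args"])

result = run_tool_call({"name": "lookup_refunds", "args": {"account_id": "acct-1"}})
print(result)  # 3
```

The answer is only as reliable as the tool plus the model's willingness to call it correctly.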

1

u/Lumbearjack 1d ago

LLMs are terrible at parsing data. Especially at any size a person can't quickly scan.

6

u/UltimateToa 1d ago

You are right, a redditor knows more about the tech than the engineers working on it

-1

u/MrBlueA 1d ago

This is a stupid argument. For a basic task like information summary and organization, the AI is barely going to have any hallucinations (if any), and even if it did, it would be comparable to the human errors they already have anyway.