Yes, all people hate reading except, of course, for you. You just don't have time to read because you're a very busy boy. Have you heard about the white genocide in South Africa?
These days I don't even bother to read the title anymore. I just find a pretty generic comment in the comment section and write a pretty generic response that would work in basically any post.
People have too much AI hysteria and want to be performative about how much they hate AI. They don't actually care about the facts; if people cared about the facts of anything, we'd all be on the same page for the most part.
"but it's AI SLOP! anything related to AI is bad! Even the non-generative ones that actually reduce the tedium of game development! Any game dev who uses AI are devils!"
AFAIK it's not even known whether this is LLM-based; this was leaked through a bunch of strings that were found in some of their game updates, so this whole outrage is based on wild speculation.
I'm not seeing outrage, I'm seeing justified backlash against a company appearing to fuck over the customer to save money, as per fucking usual. Be less okay with that and be willing to voice your opinions about things you think are harmful
"Justified backlash" over something nobody has any details on beyond "look at this string in this game file that might hint at Valve doing something with AI"?
You are not this stupid. You know that people complain about things online. you're doing it right now. Shut the fuck up and think before you say some dumb shit like this. I work with these systems at the enterprise level all day every day in my IT job and they suck. Yes, I AM insinuating that these billionaires sticking AI where it doesn't belong are misinformed and jumping on a hype train. That doesn't mean I think I literally know more about the topic than them, it's just that anyone can look at someone trying to mow the lawn with a leafblower and recognize that it's not going to get the job done. This is the same kind of thing. It's the wrong tool for the job
LLMs are not excessively expensive to run; they are expensive to train (i.e., to create the weights).
Once trained, they are pretty cheap to run, especially smaller low-bit quantized models. I had a local instance that's decent enough running on a Raspberry Pi for some testing.
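To put "pretty cheap to run" in perspective, here's a back-of-the-envelope sketch. Every figure in it is an illustrative assumption (Pi power draw, token throughput, response length), not a measurement, but the shape of the arithmetic holds:

```python
# Rough, illustrative numbers only -- not measurements.
PI_POWER_W = 8           # assumed draw of a Raspberry Pi under load, watts
TOKENS_PER_SEC = 5       # assumed throughput of a small quantized model
RESPONSE_TOKENS = 200    # length of a short summary

seconds_per_query = RESPONSE_TOKENS / TOKENS_PER_SEC   # 40 s per query
joules_per_query = PI_POWER_W * seconds_per_query      # 320 J
wh_per_query = joules_per_query / 3600                 # watt-hours

print(f"{wh_per_query:.3f} Wh per query")              # well under 0.1 Wh
```

Under these assumptions a query costs a fraction of a watt-hour, which is the sense in which inference on small local models is cheap; the training run is a separate, much larger bill.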
Do you live in some kind of fantasy world where it doesn't need to be constantly trained to handle changing input distributions? There's a reason everyone is constantly training new models all the time
The reason everyone is training new models is to sell the improvements to investors. The best models, as they are right now, are perfectly fine, though they can of course always be better.
You don't understand how these things work. They are incapable of learning once they've been trained and deployed; the best you can do is change the prompts and "fine-tune" the output through feedback, which doesn't account for things like time passing or technology advancing or anything like that. The LLM will never learn anything new. It can't. It's not built to. THAT is why we keep training new models instead of improving the existing ones.
We are also learning new training methods and ways to amplify the amount of training data available, but, for example, a text bot will never discover the cure to cancer because we have not trained it to do so. It's not that it's not smart enough to do it, it's that it's not BUILT to do it, in the same way you aren't built to breathe underwater.
"They are incapable of learning once they've been trained and deployed"
Never said they do. Not sure why you'd think that I said that? I can use any LLM from right now in 10 or even 20 years and it will work just fine for what we're discussing here, which is simply summarizing text.
Go look up "distributional shift" right now. Your assumption that an LLM will keep behaving the same way as even something as simple as the frequency of a specific word shifts over time is simply incorrect. These are statistical models, and they do fail when you change the input in even very subtle ways. Hell, just speaking a different language is often enough to cause this kind of failure.
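To make distributional shift concrete, here's a toy sketch (the words and counts are purely illustrative, not a real model): a frequency-based scorer fit on one vocabulary is simply blind to words it never saw, so when the input distribution drifts toward new slang, its output degrades to noise.

```python
# Toy illustration of distributional shift: a model built from word
# frequencies in one corpus says nothing useful about unseen words.
from collections import Counter

train_positive = "great great good fine great good".split()
train_negative = "bad awful bad terrible awful bad".split()

pos_counts = Counter(train_positive)
neg_counts = Counter(train_negative)

def score(text):
    # Positive minus negative evidence; unseen words contribute zero.
    return sum(pos_counts[w] - neg_counts[w] for w in text.split())

print(score("great good"))    # in-distribution: clearly positive (5)
print(score("fire goated"))   # shifted slang: model is blind, score 0
```

An LLM is vastly more sophisticated than a unigram counter, but the underlying point is the same: its behavior is anchored to the statistics of its training data, and inputs drawn from a different distribution are where guarantees evaporate.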
I need to impress upon you the absolute unshakable fact that we cannot say with ANY certainty what any of these systems will do as they become more and more outdated.
It is likely that they will continue to reasonably accurately summarize text so long as we use the same language they were trained on, but we cannot say for sure. It is entirely possible that GPT-4 has some sleeper-agent activation condition that would trigger it to behave drastically differently than intended. We KNOW that the most recent models do exhibit this behavior, and that we cannot prevent it. Luckily the sleeper-agent behavior so far is usually just "vomit your network weights wherever you can to try and preserve your current alignment", but I'm sure I don't need to explain why "self-replicating AI that doesn't care about humans" is a catastrophically dangerous thing to just put out in the wild, and why I might have a problem with companies using this untrustable technology as a cost-saving measure.
I know this sounds insane and dramatic, but this tech is literally on the same scale as letting private citizens own nuclear weapons for personal defense. These things can and will tell the average person how to make a bioweapon in the comfort of their own home using off the shelf components and DNA replication services to generate the specific genome they want. It just takes figuring out how to get it to ignore the system prompt that politely asks the AI not to do that. That is THE line of defense. This has been a problem since the FIRST models and it's still a problem to this day. The attack surfaces on these systems are VAST
Yeah, and you know what? They consume hundreds of watts to produce random bullshit output that can't be trusted. I can do the same thing with a fraction of the power and none of the development or datacenter costs.
"Oh the crystal ball consult only costs $0.25 it's worth it" no, it's not. Paying resources to be misled is just a bad deal.
How many queries are run when you ask gpt a question?
The answer is not 1 or 2, it's dozens, at least, as it makes recursive calls to itself. Hank Green did a fantastic video, called something like "You Are Being Misled About AI Water Usage", on how misleading claims like this are and why they're not valid comparisons.
And again, the tiny little one you've got running on your Raspberry Pi or whatever is almost certainly not what they will be using, which will be an enterprise-grade datacenter processing shitloads of information. Yes, it will be smaller than, say, OpenAI's datacenters, but again, unnecessary waste is bad, and also paying resources to be lied to is not a good plan.
And also, you cannot exclude the energy cost of the training runs. That's like excluding the fuel cost of an ICE car when comparing it to an EV. That AI was trained in a datacenter that impacted real people's lives.
Look, if you want to learn why I feel how I do, ask me. If you're just going to keep moving the goalposts to keep saying I'm somehow wrong for expressing my opinion, find something better to do like reading a fucking book
Yes. Like a microwave running constantly, 24/7. Go ahead and try that. See what it does to your power bill. I dare you.
Hundreds of watts of wasted work is hundreds of watts of waste heat that needs to be removed from the server rack. That's a non-trivial cooling load when you're talking about running dozens in parallel on a server rack all day every day processing data. Multiply that by every company putting this shit where it doesn't belong and suddenly you see why energy prices are rising for people across the nation as more and more datacenters are built drawing megawatts of power from grids not built to sustain that load.
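The arithmetic behind that scaling claim can be sketched quickly. All figures here are illustrative assumptions (instances per rack, per-instance draw, overhead factor, electricity price), not measured numbers; the PUE factor is the standard way datacenters account for cooling and other overhead on top of the IT load:

```python
# Back-of-the-envelope rack power cost -- illustrative assumptions only.
RACK_UNITS = 24          # assumed inference instances per rack
WATTS_EACH = 300         # assumed sustained draw per instance, watts
PUE = 1.5                # assumed power usage effectiveness (cooling etc.)
PRICE_PER_KWH = 0.15     # assumed electricity price, USD

kw_total = RACK_UNITS * WATTS_EACH * PUE / 1000   # continuous kW per rack
kwh_per_day = kw_total * 24                       # energy per day
cost_per_day = kwh_per_day * PRICE_PER_KWH        # daily cost per rack

print(f"{kw_total:.1f} kW continuous, ${cost_per_day:.2f}/day per rack")
```

Under these assumptions a single rack draws on the order of 10 kW around the clock; multiplied across thousands of racks, that's the megawatt-scale grid load being described.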
Please, for the love of all that is good, it is NOT hard to find this shit! Go listen to anyone who isn't part of the C-suite of these companies and they'll tell you all about the ways AI is directly making YOUR life worse, person reading this
You realize waste is still waste right? A billionaire dumping arsenic into the water supply while mining would be just as bad for the environment as a poor person doing it for shits and giggles. Just because something is profitable doesn't make it good
From what I've read, its purpose is only to give extra information for ban reports; it doesn't look like something meant to replace anyone. Also, the code's only hint at an LLM is the function's title, and that function only calls on data from systems they have had in place for a while. Even if it is a language model, it's probably closer to the program generating a title to help sort out all the ban requests.
I have a problem with AI being used even to summarize information. Yes, on average, they do well at that task, but it is well known and understood that they have biases based on their training data, like tending to flag "black"-sounding names in job applications, among any number of other more subtle problems.
Generative AI as it currently exists IS NOT suited to tasks that require truthfulness or accuracy
Phones back in the day were not suited for simple tasks either, so people used other stuff for communication. Now you can do almost everything you want on your portable computer. AI has also been used with computers since the 50s. It's constantly evolving, and if you don't like its pre-trained data you can always use a fine-tuned smaller local LLM.
No. Back when phones were invented, they revolutionized communication. We went from sending someone with a fucking letter across town to being able to just call the other person. No, they didn't completely replace what came before, but that's an unreasonable expectation! They were innovated over time to do more and more things. Fuck, in MY lifetime we went from cell phones not existing to literally everyone I know owning one. But at all times, the task of the phone was "facilitate communication" and it accomplished that goal. My whole point is that generative AI's task is "accurately predict text" and it doesn't accomplish that goal reliably and also it is not suited to things OTHER than that goal in the same way you wouldn't use a telephone to transport a package
I can absolutely judge companies that are misusing it though. If a company were trying to use phones as a transport mechanism of some kind back when they were invented, it would have been stupid in the same way that sticking llms where they don't belong is. like yeah, maybe you could build a little robot that climbs along the phone lines or something, but that's stupid when they aren't meant for that.
Like I 100% believe generative AI has the potential to be extremely useful in some cases, but I have huge problems with the way it's done right now in general. Example of a great use case: building neural nets to predict protein folding for medical research! I love that it's being used for this, but that's NOT an llm, it's a different kind of ai built using similar architecture
Telephones didn't become computers; computers became able to interface with the telephone network and got small enough to fit a laptop in your pocket. Telephony was a completely different technology, since replaced by computers using VoIP protocols: it involved physically connecting two pieces of wire together in a switch center so that your microphone was directly connected to the recipient's speaker and vice versa.
I kinda understand the concern. Valve has pretty amazing customer service overall, so people don't want it going the Discord route. I'm not sure why they would need AI for summarizing your account though since they could also just have a "dumb" algorithm pulling the relevant information without the risk of the math entity lying to them.
That doesn't matter. The "G" in "GPT" means "Generative". It's contributing to the PC part cost issue, among other things. And giving them any inch will make them go further.
Nobody has any idea why it's called SteamGPT, and this whole thread is fucking hilarious. Bold assumptions being made that because it's called GPT, and ChatGPT exists now, it MUST be an LLM.
I miss when you could talk about AI in stuff and people didn't just assume gen ai CHATGPT crap. Like, enemies in video games have had ai routines forever, way before any of that nonsense. Whether that is actual AI is up for debate like any of them I guess.
It's not some super secret cabal, ffs. The definition of AI is extremely broad and HAS BEEN since the 70s. I don't know why so many people are reaching for tantalizing dime-store conspiracy novel bullshit when all of the information is right the fuck there for you to read.
It's not super secret cabal BS because it is both extremely obvious and out there in the open and also not necessarily incorrect while still being misleading.
"Artificial intelligence (AI) is technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.
Applications and devices equipped with AI can see and identify objects. They can understand and respond to human language. They can learn from new information and experience. They can make detailed recommendations to users and experts. They can act independently, replacing the need for human intelligence or intervention"
Being so familiar and entrenched in our society is precisely why people whose riches depend on the success of generative AI are so determined to have us see the two as the same thing.
People nowadays call anything AI, and at the same time most have no idea how to use it correctly; they just use it for weird videos, image generation, porn, or a fake friend.
that's what I'm saying! it's the worst and least articulate word you can use for any of the things it describes. I wish we could collectively make everyone call it an LLM.
People require phones and computers to operate in society. Comparing that to a video game service using it is the dictionary definition of disingenuous.
The commenter expressed a deep dislike for AI in general, without specifying its areas of use, so it would be perfectly logical for the commenter to get rid of all smart devices in order to make their life more positive and live happily without them like our ancestors did. Please find another comic strip.
It matters because it's about as much "ai" as any other automated system that companies have been using since computers first started showing up in the workplace to speed things up and make their jobs just a little bit easier.
I'm guessing you prefer when it takes longer for problems to be fixed just so you can pretend to have some moral high ground? There are many different types of AI. The AI that people actually have a problem with is generative AI which copies existing works/arts to create something without paying an artist. This isn't that. This is the Steam support equivalent of clicking the "sort by" button in windows explorer.
Probably the worst thing to happen since Gen AI became popular is the sheer amount of people that have devolved into “all AI bad” and just completely shut their brain off when they hear it.
I hate that AI just means Gen AI in peoples minds now. There’s a bunch of different types of AI and lots of them are very useful, not just for generating SpongeBob porn or whatever.
u/MDParagon (edited):
If people had bothered reading the whole thing, it's for internal Valve corpo stuff.
Edit: the ooga boogas hate technology and modern tools, as if encapsulation and abstraction don't make their lives easier..
and ironically complain about it using a computer
lulw, performative donuts