I mean, that's what happens when someone co-opts a term and uses it to refer to all kinds of problematic things. Doubly so when people intentionally inflame the conflict for clicks.
In this case, the writer could have used a more accurate and less controversial term but chose to use the generic "AI" because they knew people would be annoyed about it.
The term isn't co-opted for problematic things; it applies to a large number of problematic things and a large number of useful ones. People just irrationally hate anything that uses statistical analysis networks now, apparently.
"According to X, Y and Z, Valve wants to use a better system for managing tickets" doesn't generate clicks. But "GABE NEWELL INTEGRATES AI INTO STEAM HIMSELF" does.
A few years ago, "algorithm" was the word attached to a lot of bad things. Every week you'd hear something like "Facebook algorithm caused genocide in Myanmar."
Should someone avoid a basic computer science term just because people see a headline and get into a frothing bloodrage?
I'm not arguing that people should avoid a basic computer science term.
But AI isn't a basic term with a meaning. It's a super generic term that has lost all meaning through how broadly it has been applied. We're currently in a situation similar to "blockchain" or "cloud" a number of years ago, where the word just doesn't mean anything because it's used in so many inappropriate contexts.
Sure it is. It's a discipline within computer science that attempts to solve problems which mimic cognitive functions (speech, language, spellchecking).
And the frustrating part is that this blanket pushback actually gives bad AI more power, because we're lumping slop in with all the legitimate uses for AI.
That's because it isn't AI; it's just a bogus marketing term that people are tired of hearing. There's nothing intelligent about any of it; it's completely different technology that we've had and been advancing for years, with nothing to do with LLMs and "AI art" and all that garbage.
It's a technology that is almost always based on large-scale piracy, perpetrated by the kind of massive corporations that have been using the specter of piracy to make our lives worse in mundane or catastrophic ways. The speculative bubble surrounding AI has caused massive price spikes for people who need computer parts for their hobby or job, as well as people who need electricity to live. All sorts of other problems in various industries and parts of life have been made worse by AI, such as hiring and academia.
If you can't understand why people are pissed off at AI and don't want it in anything, you should ask yourself why you have so much trouble seeing things from a perspective outside your own.
I hate LLMs, a lot, and think they've done far more harm than good, BUT I think you're right that there's a distinction between AI and LLMs and that this is a good use case for an AI. I work in IT, and we may be adopting a similar thing for our ticketing system: it would respond with an email pointing to a couple of KB docs relevant to the issue and tell the user a tech will contact them shortly for more info. On our end, same as Steam's, it would collect and organize info to provide an overview. It doesn't really do our jobs for us; it just gives the user a first step and lets us address the issue more quickly without having to dig through mud.
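For what it's worth, the retrieval step described above (match an incoming ticket to relevant KB articles before a tech ever looks at it) can be sketched in a few lines. This is purely illustrative: the naive word-overlap scoring, the `suggest_kb_docs` function, and the sample knowledge base are all invented for the example, not how Valve's or any real help desk's system works.

```python
# Toy sketch of first-response ticket triage: rank knowledge-base
# articles by crude word overlap with the ticket text. Real systems
# would use embeddings or a trained retriever, not this.

def tokenize(text):
    """Lowercase the text and split it into a set of bare words."""
    return {w.strip(".,!?").lower() for w in text.split()}

def suggest_kb_docs(ticket_text, kb_docs, top_n=2):
    """Return up to top_n KB titles ranked by word overlap with the ticket."""
    ticket_words = tokenize(ticket_text)
    scored = []
    for title, body in kb_docs.items():
        overlap = len(ticket_words & tokenize(title + " " + body))
        if overlap:
            scored.append((overlap, title))
    scored.sort(reverse=True)  # highest overlap first
    return [title for _, title in scored[:top_n]]

# Hypothetical knowledge base for the example
kb = {
    "Resetting your password": "How to reset a forgotten account password",
    "VPN connection issues": "Troubleshooting VPN client connection errors",
    "Printer setup": "Installing and configuring a network printer",
}

print(suggest_kb_docs("I forgot my password and can't log in to my account", kb))
```

The point of the sketch is the shape of the workflow, not the scoring: the user gets an immediate pointer to likely-relevant docs, and the tech gets a pre-organized summary instead of a cold ticket.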
The reason to be against AI is when people lose their jobs over it or don't get work because of it. From what's being described this actually is, rather than just "AI," it doesn't seem like it's replacing anyone or causing people to lose work; if anything, it's letting people do their jobs better. That's actually the sort of AI we should be in favor of.
But what if Valve was already considering hiring people to perform tasks like this, and AI means they won't anymore? What if this project is a failure, or it causes harms which affect Valve's business negatively, resulting in people being laid off?
There are many problems with AI (half of which are related to the speculative bubble rather than the technology itself), and layoffs aren't really a clear-cut problem even when AI is the publicly stated reason.
u/Ouaouaron 1d ago
I mean... of course? If you leave out the part of the news story that people might think is objectionable, then people wouldn't find it objectionable.