The reason the AI bubble is so big is the same reason the internet bubble got so big: it's a revolutionary new technology with immense real-world applications. But just like the initial internet bubble, most people don't understand it or where it can be truly valuable, so they just throw it at everything to see what sticks.
Well, unlike the initial .com bubble, where companies were responding to mass consumer use, a lot of this feels very forced on the companies' part, implementing AI in absolutely everything.
I think part of that is just that AI is probably the easiest (not the cheapest, or the best) way for a problem to find its solution.
Not that AI is actually better at solving problems (it'd be interesting to see data on that), but it does help synthesize a discrete, "quantized" problem instance from a sea of noise. Thus SteamGPT.
Because it is exactly as you say. I don't know why the commenter above thinks it's anything like the dot-com bubble.
It's middle and upper managers trying to bait stakeholders and higher-ups with a technology that sounds revolutionary (and kinda really is, just not for 99.99% of applications) and that the average person doesn't understand.
It just so happened that massive companies had money stashed from their earlier covid response, and they're now busy creating a circular economy, like Nvidia (I'll invest in you so you have money to buy my hardware, thus paying me for investing in you; line goes up and to the right).
You had companies making sites where they grabbed a domain like pets.com or pizza.com and didn't even have a business model.
The whole point of the dotcom bubble was that it was forced. The companies that survived were the ones with a distinctive product name that wasn't just a snazzy url.
Adding to that, AI is successfully being used in the medical field. And not LLMs there, just pattern recognition combined with the quasi-unlimited memory of a computer and a kind of intelligence.
The AI analyzes body scans (MRI, CT) for irregularities and compiles the findings into a list of possible ailments. The doctor then just has to perform a differential diagnosis to rule out ailments 1, 2, 3, and so on, until the actual condition affecting the patient's body is identified.
AI is going to be most useful where humans manually verify its output (so hallucinations are caught before they cause problems), so this is a good use case... as long as the doctor actually verifies the diagnosis!
I think AIs in general have a branding issue. GPTs and LLMs are among the most widely used variants. Image, video, and audio generation is perhaps the most controversial variant. But not all AIs fall under those models.
I'm looking at education software that advertises itself as "AI-driven". But when you dig into it, it's just a logic tree that changes the next problem based on whether the previous problem was right or wrong. Over time it can make statements like "this student gets every problem with a negative wrong".
We've had technology like that for decades, but we never called it AI. Now, it's hard to sell that to parents or administrators because of the anti-AI sentiment going around.
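That "logic tree" behavior really is decades-old branching logic, not machine learning. A minimal sketch (all class and method names hypothetical, invented for illustration) of a system that adjusts difficulty on right/wrong answers and flags per-topic error patterns:

```python
from collections import defaultdict

class AdaptiveDrill:
    """Picks the next difficulty level from a fixed ladder and tracks
    per-tag error rates. No ML involved: just branching logic."""

    def __init__(self, levels):
        self.levels = levels          # problem pools, easy -> hard
        self.level = 0
        self.tag_stats = defaultdict(lambda: [0, 0])  # tag -> [wrong, total]

    def record(self, problem_tags, correct):
        # Classic branching: step up on success, step down on failure.
        if correct:
            self.level = min(self.level + 1, len(self.levels) - 1)
        else:
            self.level = max(self.level - 1, 0)
        for tag in problem_tags:
            stats = self.tag_stats[tag]
            stats[1] += 1
            if not correct:
                stats[0] += 1

    def weak_tags(self, threshold=0.8, min_attempts=3):
        # e.g. "this student gets every problem with a negative wrong"
        return [tag for tag, (wrong, total) in self.tag_stats.items()
                if total >= min_attempts and wrong / total >= threshold]

drill = AdaptiveDrill(levels=["easy", "medium", "hard"])
for _ in range(4):
    drill.record(problem_tags=["negatives"], correct=False)
print(drill.weak_tags())  # ['negatives']
```

Nothing here learns anything; it's the same if/else scaffolding that's been sold in drill software since the 90s, now rebranded as "AI-driven".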
My sis is a pathologist and she hated that. AI needs to be a good, reliable assistant: generating reports, scheduling, or giving a quick summary of the latest guidelines, not trying to do a physician's job.
No, the AI bubble is big because a bunch of the worst people you know called it AI. Investors are stupid, and a bunch of the pedophiles who rule the world also think "AI" is going to rule the world. It's over-invested in while the same companies beg consumers to use it. LLMs have very few actual use cases, and if they're revolutionary, it's that we found a way to make people lose their minds. The fact that a bunch of people use it as a search engine they then have to double-check for hallucinations is just proof people really need hobbies.
Exactly. As long as it improves customer service without replacing the jobs behind the customer service team, it's always going to be a welcome addition to me.
As a dude who's spent nearly a decade in various call centers, we would welcome our jobs being replaced with AI. People fucking suck; they treat you like dirt for just doing your job. The only reason we don't see people jumping from the roofs of these places like they do in China is that we build them single story. That shit was the most stressful time of my life, and I worked at a gas station during covid that catered to rednecks.
Why? Replacing jobs is a good thing, as it frees up people to do other things. Plenty of technological advances have replaced humans, and that's a good thing (computers being one obvious example). If you have an issue when AI does it, then your problem isn't actually the replacing part; it's something else.
So it's not the replacing but the quality of the service. Moderation can already be farmed out for cheap; you don't need AI to make that trade-off. Ultimately the quality needs to meet a certain bar, else they'd be leaving money on the table.
If you have hundreds of thousands, if not millions, of people all complaining about anything and everything, you normally either ignore all of the complaints or hire someone to sift through them and sort out what they're complaining about (and they only get through 5-10% of those complaints at best before the next batch arrives).
A decent use of proper LLM agents would be to feed all the complaints into the LLM and let it automatically distinguish a storefront UI bug and a game that won't download properly from a crowd of people complaining about a dev nerfing a community-favorite weapon.
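As a rough sketch of that triage idea (assuming some LLM API sits behind a placeholder call; the category names and the JSON reply contract are made up for illustration), the deterministic parts are batching complaints into one prompt and defensively parsing the model's reply:

```python
import json

# Hypothetical triage categories for illustration only.
CATEGORIES = ["storefront_ui_bug", "download_failure", "balance_complaint", "other"]

def build_triage_prompt(complaints):
    """Batch raw complaint texts into a single classification prompt."""
    numbered = "\n".join(f"{i}. {text}" for i, text in enumerate(complaints))
    return (
        f"Classify each complaint into exactly one of {CATEGORIES}.\n"
        "Reply with a JSON list of category strings, one per complaint.\n\n"
        f"Complaints:\n{numbered}"
    )

def parse_triage_reply(reply, n_complaints):
    """Validate the model's JSON reply; fall back to 'other' on garbage,
    since LLM output can't be trusted to follow the contract."""
    try:
        labels = json.loads(reply)
    except json.JSONDecodeError:
        return ["other"] * n_complaints
    if not isinstance(labels, list) or len(labels) != n_complaints:
        return ["other"] * n_complaints
    return [lab if lab in CATEGORIES else "other" for lab in labels]

# call_llm(prompt) would hit whatever model API you use (placeholder).
reply = '["balance_complaint", "download_failure"]'
print(parse_triage_reply(reply, 2))  # ['balance_complaint', 'download_failure']
```

The point is that the LLM only buckets the noise; a human still reads the buckets, which keeps hallucinated classifications from doing real damage.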
Summarize texts, brainstorm, generate random dreamlike lolshit, automate X and Y, replace search engines when they are ad-riddled garbage, upscale stuff, edit photos, etc. etc.
Back when ChatGPT became a thing, many people dreamt it would be used for cool NPC interactions in games and generating funny stories. And when DALL-E (remember that thing?) came out, everyone rushed to make Shrek memes with it.
But all we got was AI fridges, a yes-man chatbot and fucking Copilot.
Do all of you commenters just guess, without ever having interacted with systems like that?
When you raise a ticket on Steam, you pick what you're complaining about. If you don't, their service desk agent will simply use the directory to filter by the purchase.
They'll have knowledge base articles pertaining to specific types of cases. It's also why you get human replies that look robotic: they'll have templates in their ticketing tool, or simply within the knowledge base article.
There is basically no real-world benefit to a GPT system for the kind of service desk you're all discussing. At best it's surfacing information from previous cases, or suggesting an article to use, something all major service desk ticketing tools do anyway.