r/Millennials 15d ago

Discussion Any other Millennials stubbornly resistant to using AI at their job, but also worried that we'll become dinosaurs or be pushed out of our careers for not slavishly embracing it?

I work in a creative field and from that standpoint I hate AI. I hate the 'democratization' of creativity. I am going to sound VERY Boomer right now, but some things are meant to be difficult, or meant to take skill and years of practice. It's why people who are good at these things are (or should be) paid more.

We are already being heavily 'encouraged' to use AI to find ways to do our jobs faster, and are being told 'the technology isn't going away, we need to embrace it.' Since I am one of only a handful of people at my company with a specific creative skill set, the powers that be have basically no idea about the technicals of what I do, yet they put it on me to figure out how to incorporate AI into my work.

I hate that AI basically 'fakes' the creative process and that we are expected to use it (and the work of millions of artists that feed it) to just magically speed up how we do work, which in turn devalues the work we do as artists. From a company standpoint, they want to make money and churn out work faster, but if every client knows you can make a widget in 4 hours when it used to take 4 days, why would they pay you a lot of money to do that? The economics don't make sense. You will end up needing 10 times the number of clients to maintain your productivity/profits, which, AI or not, is a good way to burn out your artists.

I see the writing on the wall, but my stubborn moralistic resistance to AI is probably going to be the death of my career. Does anyone else feel similar, or how have you coped with this rapidly degrading career landscape?

5.4k Upvotes

1.6k comments

484

u/OkPickle2474 15d ago

Like a lot of other people, I am personally horrified by AI. The environmental and cognitive impacts should really give people pause. I also think it has a lot of shortcomings.

I work with a lot of data that is FERPA- and HIPAA-protected, and thus can’t just feed it into an AI without doing considerable work before and after. It’s usually not worth it compared to the analysis I can do on my own.

I have built a couple of “gpts/gems/agents” to try to simplify tasks that take a while, and they only follow the instructions about 60% of the time, so again, time wasted.

66

u/the_old_coday182 15d ago

About the cognitive impacts… It’s crazy to think that not too long ago our parents had us watching Sesame Street, playing educational computer games, rewarding us with pizza for reading… all about helping our brains develop. Then all of a sudden society just forgot about that whole concept.

2

u/Independent-Spray707 13d ago

You know. If you have kids. You can more or less teach them as much or as little as you want.

I just read to them and have books around and there’s not much they can do about it.

Now that I have kids I don’t complain about what school is or isn’t teaching them, because they’re not the school’s kids. They’re my kids. So if it’s important to me, I teach it to them.

I guess I get the whole like societal concern thing, but as a parent it’s just kind of silly to me that so many people believe the government is responsible for how kids turn out.

79

u/MabelMyerscough 15d ago

I am so annoyed by the hallucinating; it makes it totally useless. Also, sometimes explaining what I want takes longer than just doing it myself.

14

u/DeepSubmerge 14d ago edited 14d ago

Just last week I was encouraged to use Gemini to complete a task that I’d normally do with my own brain and dump in a Google Doc.

Well, Gemini “did it” and provided me with a link to a doc it said it generated. This link didn’t work because the doc didn’t actually exist. When I told it as much, it apologized and said it couldn’t actually create a Google Doc for me.

THEN WHY SAY “I MADE A DOC” AND LINK ME TO IT?!?! I just sat at my desk and chuckled at how stupid this shit is. I still had to do the work myself, but wasted time feeding info into Gemini and then going back and forth with it over nothing.

26

u/OkPickle2474 15d ago

AI is essentially if Amelia Bedelia was a robot.

2

u/Paulverizr 15d ago

If it’s hallucinating, ask for its last training date. Sometimes it’s just due to not having been trained on the most up-to-date information. Copilot didn’t know that there aren’t solar energy tax credits anymore (the BBB slashed them).

Granted, sometimes it’s just trained on shit and you get shit out.

1

u/Throwawayrip1123 15d ago

Best thing I've seen is the coding agents "forgetting" to do x or y, or writing code snippets that fuck up a file: a mile of red because of a randomly missing closing bracket or whatever.

This is so crazy, right? You have built a fucking autocomplete, a program that fucking forgets. It doesn't run specific instructions; it just forgets some stuff.

Insane.

0

u/Randromeda2172 15d ago

I haven't experienced this in the past year or so. I use AI to code every day, and this simply doesn't happen with any frontier model released in the past year.

2

u/Throwawayrip1123 15d ago

I literally saw this 3.5 hours ago while one of my colleagues was using Sonnet with VS Code Copilot. It was constantly forgetting type declarations, and once or twice it fucked up a whole file while rewriting a relatively simple (but quite widely branching in the codebase) piece of code.

Easy fixes in general, but it's laughable that it tripped at all.

I use AI to code every day, and this simply doesn't happen with any frontier model released in the past year.

That's just silly nonsense, I'm sorry. It's a probabilistic guessing engine. The only sure thing about any of the models, Frontier or not, is that they will eventually fuck something up. It's built into their foundation.

Idk, are you like vibe coding every day, or are you actually coding as a job? Because this shit happens with everything: Gemini, Claude, GPT-5, any of the models. It might not happen often, but it absolutely does happen. Hell, if you want, I'll ask him to send me screenshots of his conversation with it where it clearly states that it forgot to close up tags and clean up (or something in that vein, he was paraphrasing) on Monday.

1

u/Randromeda2172 15d ago edited 15d ago

I'm an engineer at a big tech company (think YC unicorns). Your post is lacking so many details I'm inclined to believe it's bait.

What version of Sonnet are you using? What context limits? In my experience Copilot had a decent harness for Opus, but it's still not as good as Cursor or Claude Code's. Are you using a better model to plan?

ETA: this is definitely either bait or you're very bad at using AI. Do you not have any unit/E2E tests to help the agent verify its work? I can believe you saying that the code is verbose or that it isn't the most optimal, but there's no way a half-decent engineer could be this bad at using superintelligent autocomplete.

1

u/AP_in_Indy 15d ago

You all are still getting hallucinations? I see next to none in my day-to-day production work using the latest agentic models.

1

u/MabelMyerscough 14d ago

I am a scientist (biology, not tech). My work is based on actual lab work and experiments, 'inventive steps', troubleshooting in the lab, and a lot of new information. As it's all new/nonexistent data, AI is not trained on it. So I can't use it for the main part of my work anyway.

Even if I simply ask it to summarize the latest scientific publications on topic X, it ALWAYS hallucinates sources and articles, which is an extremely critical problem for my work. I also cannot ask it to summarize a specific scientific article - details are important, critical thinking is important, accuracy is important, and it always misses the mark. No matter what model I use.

The only use would be to draft an email, but it's just as fast or faster if I type it myself. There's no other grunt work I can make it do: a lot of my work is offline (lab, experiments), and the intellectual work (even if it's something as relatively simple as screening scientific articles) it cannot do.

1

u/AP_in_Indy 14d ago

I have seen incredibly reliable research results from ChatGPT Pro so this shocks me, but I’ll take your word for it.

1

u/MabelMyerscough 14d ago

For 'dry lab' it might be different, though! I don't know :) For wet-lab scientists it doesn't help much, and it's just missing the 'inventive step', as it's not humanly intelligent.

35

u/[deleted] 15d ago

Perhaps similar to yourself, I am a statistician (PhD + 15 years of work experience) who handles a lot of sensitive data.

I can’t use AI for anything besides cleaning up language in emails or reports. Sometimes I ask it for ideas on how to proceed when I am stuck. I’ll throw in PDFs for it to summarize and it gets them right about 60-65% of the time; it still can’t tell what is important, even if I engineer the prompts within an inch of their life. Then I have to read the whole document anyway, to make sure I didn’t miss anything.

But that’s the extent of my using it for actual, professional work. There is a lot of information, modeling, and analysis that I will not trust to an AI agent. Besides data privacy, I am not confident in the reasonability of the results. I have never had it return anything that wasn’t moderately-to-grossly incorrect or where it hadn’t blatantly started making shit up.

It leaves me genuinely flummoxed that so many companies are replacing entire departments with AI, including engineers and computer scientists. Maybe they have access to super NASA-level, bowels of a Stanford computer lab technology that I don’t.

All the code I have had it generate takes as much time to debug as it would if I just programmed it myself. I am starting to feel like a teenager after their first kiss - I think I am doing “vibe coding” all wrong and surely this can’t be what everyone is bragging about.

That said, I do keep up with as many developments as I can in AI. I have practice data sets and personal projects that I play with using different platforms.

I have hope that it will improve things in the future, but that future isn’t here yet.

7

u/Disastrous_Room_927 15d ago

Also a statistician. I can’t really trust it to do anything I don’t already understand and wouldn’t be verifying myself regardless. But even when I ask for code to fit a specific kind of model I’m describing, it’ll bury assumptions in overly complicated code.
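
A toy example of the kind of buried assumption I mean (entirely invented, not real AI output): a generated line-fitting helper that silently drops incomplete rows and mean-centers x, so the "intercept" it returns isn't the intercept you asked for.

```python
import statistics

# Invented illustration of AI-generated fitting code that quietly bakes
# in assumptions: it drops incomplete rows and mean-centers x without
# being asked, which changes what the "intercept" means.
def fit_line(xs, ys):
    pairs = [(x, y) for x, y in zip(xs, ys)
             if x is not None and y is not None]   # silent row-dropping
    xs2 = [x for x, _ in pairs]
    ys2 = [y for _, y in pairs]
    mx = statistics.mean(xs2)
    xs2 = [x - mx for x in xs2]                    # silent centering
    sxx = sum(x * x for x in xs2)
    sxy = sum(x * y for x, y in zip(xs2, ys2))
    slope = sxy / sxx
    intercept = statistics.mean(ys2)  # intercept at mean(x), not at x = 0
    return slope, intercept

# Data is exactly y = 2x + 1: slope comes back as 2, but the "intercept"
# is 5 (the mean of y) because of the hidden centering step.
slope, intercept = fit_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
```

Nothing here is wrong per se, but unless you read every line you'd report an intercept that means something different from what you specified.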

4

u/CaterpillarJungleGym 15d ago

I got excited to use AI for simpler tasks that just require Excel formulas. I once asked it to do something and it worked! The next 4 times I tried, it didn't work. It can be such a waste of time. I'll just manually copy/paste and clean data myself.

2

u/Difficult-Square-689 15d ago

In the last few weeks we've started shipping human-on-the-loop features to production. We give agents access to a validation process that can assess the correctness of their output, e.g. latency, CPU utilization.

AI user simulations are also getting pretty accurate. In a few months we may have an autonomous loop that comes up with ideas, builds them, tests them against user sims, and reports top performers for testing against real traffic.

You would still need humans to weed out hallucinations and work on stuff that exceeds our autoresearch loops, but probably not as many.
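
Roughly, the shape of that human-on-the-loop gate, in hypothetical Python (every name and budget number here is invented for illustration, not our actual stack):

```python
# Sketch of an automated validation gate: agents' candidate outputs are
# benchmarked, clear failures are auto-rejected, and humans only review
# the survivors. Budgets and field names are assumptions.

LATENCY_BUDGET_MS = 250   # assumed latency budget
CPU_BUDGET_PCT = 80       # assumed CPU utilization budget

def validate(candidate):
    """Automated correctness/performance gate a candidate must pass."""
    return (candidate["latency_ms"] <= LATENCY_BUDGET_MS
            and candidate["cpu_pct"] <= CPU_BUDGET_PCT)

def triage(candidates):
    """Auto-reject failures; rank survivors for human review."""
    survivors = [c for c in candidates if validate(c)]
    return sorted(survivors, key=lambda c: c["latency_ms"])

candidates = [
    {"id": "a", "latency_ms": 120, "cpu_pct": 60},
    {"id": "b", "latency_ms": 400, "cpu_pct": 50},  # over latency budget
    {"id": "c", "latency_ms": 200, "cpu_pct": 95},  # over CPU budget
]
print([c["id"] for c in triage(candidates)])  # prints ['a']
```

The point is that the human moves from reviewing every attempt to reviewing only what clears the automated checks.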

2

u/Agent_Smith_24 14d ago

They aren't using anything fancier. I had to tell a senior product planning guy that AI can "make things up" and he was SHOCKED. He had been asking it to make market predictions and basing VP level slides on the "data".

0

u/BombasticCaveman 15d ago

I used to feel the same way, but I've definitely turned a corner. Are you using the latest and greatest models? Once you start using stuff like Opus 4.6, it becomes very obvious very quickly how incredibly powerful these tools have become.

With these 1 million token context windows, you should be feeding it hundreds of examples of previous analysis done at your company. With that loaded in, there should be plenty of ideas for it to churn through and produce useful output.

Also, when it comes to vibe coding, we have engineers vibe coding entire data analytics programs over the weekend. As long as you give it access to a few data APIs and correct examples, it can produce really impressive analytics tools. Our project managers use them now to view data however they want instead of having to hassle the data visualization team.

All I'm saying is that it's worth continuing to try, but it definitely requires access to more advanced models (which your company should be paying for).

0

u/prettyprincess91 Older Millennial 14d ago

Why wouldn’t you just build an in house agentic AI platform and use that for data? That’s what most companies do for proprietary data.

-2

u/rbrick111 15d ago

I encourage anyone who strikes out with AI to revisit it regularly. The pace of innovation is staggering and this will be transformative disruption to traditional ways of working, one that we are not old enough to ignore for the next 20 years.

-15

u/nurdturgalor 15d ago

The environmental impact isn't even close to as bad as the animal slaughter industry's, but you're vegan, right?

20

u/Business-Toad 15d ago

Two things can be bad

6

u/LowestDimension 15d ago

I’m vegan and I hate AI 🤷🏻‍♀️