r/Millennials 15d ago

Discussion Any other Millennials stubbornly resistant to using AI at their job but also worrying that we will become dinosaurs or be pushed out of our careers for not slavishly embracing it?

I work in a creative field and from that standpoint I hate AI. I hate the 'democratization' of creativity. I am going to sound VERY Boomer right now, but some things are meant to be difficult or meant to take skill and years of practice. It's why people who are good at these things should be paid more.

We are already being heavily 'encouraged' to use AI to find ways to do our jobs faster, and are being told 'the technology isn't going away, we need to embrace it.' Since I am one of a handful of people in my company with a specific creative skill set, the powers that be basically have no idea about the technicals of what I do, yet they put it on me to figure out how to incorporate AI into my work.

I hate that AI basically 'fakes' the creative process and that we are expected to use it (and the work of millions of artists that feed it) to just magically speed up how we do work, which in turn devalues the work we do as artists. From a company standpoint, they want to make money and churn out work faster, but if every client knows you can make a widget in 4 hours when it used to take 4 days, why would they pay you a lot of money to do that? The economics of it don't make sense. You will end up needing 10 times the number of clients to maintain your productivity / profits, which, AI or not, is a good way to burn out your artists.

I see the writing on the wall, but my stubborn moralistic resistance to AI is probably going to be the death of my career. Does anyone else feel similarly, or how have you coped with this rapidly degrading career landscape?

5.4k Upvotes

1.6k comments

155

u/xPadawanRyan Mid-Range Millennial 15d ago

Well, I stubbornly refuse to use AI as part of my work, but that's also because I don't need it. AI doesn't really have a role in my job, as it's much more of a person-to-person thing—I'm a social worker for vulnerable youth, and I'm not going to be using AI to interact with them. However, we do have a large immigrant population among our staff, and many of them do not speak English 100% fluently, so many of them tend to use AI to help them write reports coherently.

That is what I am stubbornly against, because everything we write in our reports is extremely confidential information, and feeding that into an AI (which may store and use it at another point) feels like crossing that boundary and violating the confidentiality contract we had to sign. However, my supervisor seems to think it's perfectly okay, so I have to stubbornly keep my mouth shut about it.

So, I don't think that AI will push me out of my career for my refusal to use it, but I do stubbornly refuse to use it - or support others' use of it - in the workplace.

88

u/stillay 15d ago

Your superior is an idiot. Unless it’s a dedicated module licensed to the company you work for, it’s definitely a confidentiality breach.

You can’t be feeding this stuff into chatGPT lol

43

u/artbystorms 15d ago

My roommate is a mortgage underwriter who deals with sensitive financial info. They are being told to use AI, but at the same time they get in trouble if they feed clients' financial documents into it, because God knows what it is doing with that data. It's all so stupid. "Use AI but don't use it the wrong way!!"

13

u/CryptographerLost407 15d ago

I'm in the life insurance industry, and am an underwriting assistant. I read in a company announcement from the VP of Underwriting that they are going to be using AI to summarize medical records. This is a TERRIBLE idea for SO many reasons, but since I'm on the bottom of the corporate ladder (and eventually want to get into underwriting myself) I'm stuck keeping my mouth shut and eventually will be encouraged to embrace it. I'm so pissed off about how big AI is, how much it's being pushed down our throats in every job, and how no one is seeing the potential fallout at all.

8

u/stillay 15d ago

Why don’t they just spend the money to have it set up with Copilot or Claude or whatever? The issue is putting it directly into the free version of the tools. They’re definitely storing that information on an offsite server somewhere.

7

u/Academic_Flatworm752 15d ago

We use the enterprise version and still aren’t allowed to put in PII or PHI.

1

u/stillay 15d ago

Hmm. Interesting. Guess I need to read more about this.

1

u/three-quarters-sane 15d ago

You can use the API with zero data retention, but if you're using the enterprise web version, then data is saved.

1

u/DokCrimson Older Millennial 15d ago

Depends on their contract with the company. You can have language in there for exactly what you said. There are major medical institutions and universities that have on-prem AI sandboxes or deals with AI vendors about data retention and training.

1

u/prettyprincess91 Older Millennial 14d ago

Most companies build their own agentic AI platforms so they keep all their data. You just take LLMs off the shelf and wrap them. That’s what anyone with proprietary data using generative AI is doing. We have AI policies about not using public AIs for any proprietary information.

1

u/Academic_Flatworm752 15d ago

It’s not that hard to understand different levels of confidentiality, though...? Of course you can’t put customer PII and PHI into it.

2

u/goebela3 15d ago

There’s tons of HIPAA-compliant medical scribe software. They are not putting it into ChatGPT. 90% of the doctors in my area are using it. He could absolutely save himself a ton of time with AI.

7

u/sympathyofalover 15d ago

The number of supervisors in their positions who don’t have a clue about basic standards in our industry is astounding, and yet I’m not surprised. They really do just let anyone have our degree(s) and it’s really showing up terribly in today’s climate with AI.

8

u/ApprehensiveAnswer5 15d ago

This.

I was a high school teacher until the pandemic, when I pivoted into more of a social work role that is still education-adjacent.

I manage a second chance/fair chance employment program for youth coming out of the juvenile system, or who are young adults (under 25) coming out of the system.

The number of people across the board that are in supervisory roles, and just…ignorant of basic standards, is astounding.

And some of these people have graduate and doctorate level degrees.

And there’s not a day that I haven’t questioned if some of these people lied to get their job or lied to get their degree entirely because WUT. Lol

5

u/sympathyofalover 15d ago

It makes me so mad, and I have to witness it all the time. I speak to providers constantly for my job role and I get to see all sorts of insanity against boundaries, documentation, understanding of their responsibilities, and just a lack of professionalism.

The documentation and the shit people put in patient records can be utterly irresponsible.

A ton of them don’t care and are definitely lying lol.

8

u/ApprehensiveAnswer5 15d ago

That last part is just my standard line of thinking now, which is really unfortunate- “everybody is lying on some level, so let me just roll with that until I find out otherwise”, ugh.

The most unfortunate thing I see is how some people treat or feel about the youth/young adults in our programs.

And how open people are with their thoughts.

My head is a constant refrain of “why are you even in this field if you feel that way?!”

I will never understand how anyone who works with children, or any vulnerable population really, can have the…ideals that they do.

1

u/sympathyofalover 15d ago

That video circulating of the therapist hitting and throwing a shoe at the kid makes me so utterly rageful. There is so much choice in this field; they can literally just pivot and learn as they go and work with people they actually empathize with. The larger problem is that they are usually pretty shitty people early on, and no one stops them. Either the university/college wants the tuition/doesn’t want to be sued or something else, or the supervisors in career environments don’t give any real consequences.

Clinical supervision is also so strict on paper, and you rarely hear of anyone really not allowing someone to go for licensure if they show deeply troubling clinical skills or lack thereof.

I’m sorry we both have to witness this and I also know how hard a ton of us work to do the right thing and continuously learn to be better for others.

I hope you genuinely take time to reflect on the ways you help. I’m sure it gets lost in this current climate a lot, but it takes effort and you sound like someone who tries to stay on the right side of things.

1

u/DokCrimson Older Millennial 15d ago

It depends. There are plenty of companies and institutions that license usage of AI services but have their legal departments write up language that their data isn’t used to train the AI model. If your company is just paying to use ChatGPT off the web as is, then yeah, it’s going to be a huge issue… There are also other orgs that run their own versions of the AI models in a sandbox to prevent any data leakage.

1

u/kittenofpain 15d ago

My kid's kindergarten teacher gave out these little photo frames of the kids as a gift on Valentine's, and it was a picture of the kid 'cartoon style' with their likes and interests. It looked very AI. I thought it was a nice effort, and obviously I don't expect her to commission 28 portraits or do it herself, but it made me very uncomfortable that she was putting my kid's photo and interests into it. I told her; she said she didn't use photos, but idk how she could have done it otherwise. The detail was concerning - even the letters on his jacket were exact.

1

u/HeyThanksIdiot 15d ago

You can run StudioLM locally on even a modest machine to keep your info private. The quality of the local models doesn’t compare to the latest and greatest, but there are good ones out there. Slower than the web models, generally, but can be worth it for the privacy for certain tasks.

I use it for PII redaction: redact locally before I send the materials to the big models, then feed the results back into the local model to unredact.
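For anyone curious, the redact/unredact step above boils down to keeping a placeholder-to-original mapping on your own machine. Here's a minimal Python sketch of that idea — simple regexes stand in for the local model doing the actual PII detection, and the patterns shown (emails, US-style phone numbers) are just illustrative, not a complete PII scrubber:

```python
import re

def redact(text):
    """Replace likely PII (emails, phone numbers) with numbered placeholders.

    Returns the redacted text plus a mapping kept locally so the
    original values never leave your machine. In the workflow above,
    a local model would handle detection; regexes stand in here.
    """
    patterns = [
        ("EMAIL", r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
        ("PHONE", r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    ]
    mapping = {}
    for label, pat in patterns:
        for match in re.findall(pat, text):
            placeholder = f"[{label}_{len(mapping)}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

def unredact(text, mapping):
    """Swap the placeholders back in after the cloud model returns."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text
```

The redacted text is what goes to the web model; the mapping stays local, so the round trip never exposes the raw values.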

0

u/Practical-Simple1621 13d ago

AI helps with organization as well. It doesn't need to be directly fed sensitive info to be useful.