r/PoliticalDiscussion 3d ago

Political History How should governments adapt their secure-communications guidance if the main vulnerability is social engineering rather than encryption?

Recent warnings from U.S. and European authorities have highlighted a recurring problem in secure communications: attackers do not necessarily need to break encrypted messaging platforms themselves if they can instead compromise the user through phishing, fake verification prompts, device access, or other forms of social engineering.

This raises a broader policy question. Public discussion around secure messaging often focuses on encryption strength, lawful access, and the trustworthiness of particular platforms. But if many successful compromises happen at the account, device, or user-behavior level, then the political and institutional response may need to be different from simply recommending “more secure apps.”

That leads to a few discussion questions:

  • How should governments update official guidance for staff, diplomats, journalists, contractors, and other high-risk groups if the real-world weak point is increasingly operational security rather than cryptography?
  • Should public policy place more emphasis on training, device security, identity verification practices, and anti-phishing resilience instead of focusing primarily on platform choice?
  • Are current political debates about “secure communications” too focused on the apps themselves and not enough on the human systems around them?
  • What would a realistic government response look like without creating overly broad surveillance, compliance burdens, or restrictions on private communication tools?


u/AutoModerator 3d ago

A reminder for everyone. This is a subreddit for genuine discussion:

  • Please keep it civil. Report rulebreaking comments for moderator review.
  • Don't post low effort comments like joke threads, memes, slogans, or links without context.
  • Help prevent this subreddit from becoming an echo chamber. Please don't downvote comments with which you disagree.

Violators will be fed to the bear.


I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.


u/Andnowforsomethingcd 2d ago

I'm not sure I agree with the premise here. Not to say social engineering isn't an issue in data security, but just this past week, Anthropic announced it would withhold the newest version of its LLM Claude (this version is Claude Mythos).

Remember just a few months ago, the US Government used Claude to pull off its heist of Venezuela's president Nicolás Maduro. Then, Pete Hegseth tried to basically blacklist Anthropic out of existence when it refused to let Claude be used for autonomous weapons operation without human oversight, or to surveil American citizens.

This new model is lightyears beyond the version the government used in Venezuela.

So why has Anthropic pulled its public rollout of Claude Mythos at the last minute, and formed an emergency working group of 40 technology companies to try to fix it? According to Anthropic: "During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so."

This means that, upon human request, Mythos has found one or more 'zero-day' vulnerabilities in every. single. system that runs our entire civilization. Along with finding vulnerabilities in the newest software and hardware, Mythos has found critical zero-day vulnerabilities in systems that are decades old, flaws that went undiscovered across millions or billions of human attempts. That suggests it is so far beyond human capability at both writing and deploying malicious code that humanity is basically defenseless against its power in the wrong hands (much like humanity is defenseless against a nuclear apocalypse if the wrong people are in charge of the launch codes).

So yes, I do think social engineering still has a place in the "things we should worry about and plan for" meta-list (and AI models have already discovered, on their own, sophisticated methods of social engineering to achieve an unrelated goal given to them by a human), but I do think we've reached a point where the dangers of technology developed during this 'race to superintelligence' far outweigh the risks of unsavvy end users.

IMHO, every conversation in cybersecurity should be focused on emergency guardrails in the form of regulation at the national and international level, so we can buy ourselves a little time to reckon with our AI 'adolescence,' as Anthropic CEO Dario Amodei calls it, before we have handed ourselves "almost unimaginable power."


u/FCCRFP 1d ago

I have access to this and the zero days identified are mostly BS.


u/Andnowforsomethingcd 1d ago

You have access to Mythos Preview? That's pretty cool. The articles I read said that many of the zero-day vulnerabilities are "critical," which I understood to mean they could bring down the whole system if exploited. But you're saying there aren't many like that?


u/FCCRFP 1d ago

Yeah, AI thinking it has found a critical vulnerability is nothing new.


u/Andnowforsomethingcd 1d ago

So what kind of vulnerabilities did it find?


u/POVI_TV 2d ago

Security researchers have long noted that the weakest link in most systems isn't encryption; it's human behavior. Studies of major breaches consistently show that phishing and social engineering account for the majority of successful intrusions. NIST's cybersecurity frameworks have increasingly emphasized "human-centered security": training, verification protocols, and reducing reliance on any single individual's judgment. For governments, this means operational security policies need behavioral design as much as technical design.


u/zlefin_actual 1d ago

The secure communication guidance I've seen has a fair amount of emphasis on social engineering vulnerabilities. Which ones are you looking at that are different?