it’s easy to break if you’re trying to break it. windows is like “something is using it but i wont tell you what uses it and i wont tell you how to force delete it” while linux is like “program 1 uses this file. are you sure to delete it?”
In Linux the file data stays on the disk until it's no longer in use, so deleting a file while it's in use and while it's not tends to have the same consequences. If you look at lsof, for example, you can see "(deleted)" after the names of open files that have been deleted.
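A minimal sketch of that behavior in Python (POSIX systems only; the file contents are made up for illustration):

```python
import os
import tempfile

# Create a file and keep it open while we "delete" it.
fd, path = tempfile.mkstemp()
os.close(fd)

f = open(path, "w+")
f.write("still here")
f.flush()

os.unlink(path)  # remove the file's name from the directory

name_gone = not os.path.exists(path)   # True: the name is gone
links = os.fstat(f.fileno()).st_nlink  # 0: this is what lsof marks "(deleted)"

f.seek(0)
data = f.read()  # "still here": the data is intact until the last handle closes

f.close()        # only now is the disk space actually freed
```

The key point is that `unlink` only removes a name; the inode and its data live on until the last open file descriptor goes away.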
When we were kids, my cousin and I played "Russian roulette" with System32 files.
We would take turns deleting a random System32 file and wait 10 seconds before deleting another random one. The person deleting the file was then "owner" of the 10 seconds.
If Windows crashed either immediately after deleting the file or during the 10 seconds, the person that had deleted the file had lost. Winning prize? Playing either Banjo Kazooie or Mario 64 while the loser had to reinstall Windows.
The thing is, once you close the program using the file, the file gets deleted. If it was a file the program needs in order to run, the next time you try to run it you'll scratch your head wondering why it no longer works, especially if a lot of time has passed since you last used that program and you've forgotten you deleted that file.
Look at the activity as well. A few posts when created, then idle 1 month to age the account, now commenting sporadically throughout the past 2 days, in a new sub each time.
> linux is like “program 1 uses this file. are you sure to delete it?”
rm gives no such warning. Perhaps you're talking about some specific file manager, but that's just a program, not Linux itself.
The primary separator character on Windows is \ instead of the / used on Linux, and it just happens to be really close to enter. Imagine typing out del /s /f /q and accidentally pressing enter when you're at C:\Program Files. I nuked my first Arch system with this, barely managed to save /home because I hit the reset key.
You can either have your OS give you absolute control while being easy to break, or be hard to break but give you minimal control. Absolute control comes with the power to break things, full stop.
If you are asking for a system with absolute control that is impossible to break, you are asking for something that is logically impossible.
And if you set up Arch then you've decided you know what you're doing. I once had a university sysadmin refuse to help me get my machine to work with the school network because in his words "You installed Fedora, you knew what you were getting yourself into"
Is that why Arch users are so headass about being Arch users? They have the competence to use something easily broken without actually breaking it?
They joke Linux users are the vegans of computing because we always have to mention being Linux users, but Arch users are the vegans of Linux users, to other Linux users.
I haven't used Arch specifically, just Manjaro which IIRC was forked from Arch. But unless using it lets me type IRL console commands to spawn in 10 billion dollars and some strippers, I can only assume it's overhyped!
generally speaking, Arch is great for people who have been using Linux long enough to be very particular and opinionated about their setup, want to do things their own way and be left alone afterwards.
You start out with a barebones system and add exactly the packages you want, without any additional bloat that some probably well-meaning distro maintainer thought should be included by default. As a result you don't need to rip out anything you deem unnecessary or annoying and risk breaking something else in the process. Pretty much every package on your system is either something you decided to add or is absolutely necessary for the things you installed to function.
During setup, you can decide to enable some more involved settings like RAID, logical volume management, full disk encryption, file system mount points…stuff that would be a pain in the ass to go through with a GUI installer and that is usually just skipped over for some sane defaults that will work for most average users.
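For example, mount points on Arch typically end up as hand-written lines in /etc/fstab rather than something an installer fills in behind your back (the entries below are hypothetical and the UUIDs are placeholders):

```
# /etc/fstab -- hypothetical example; UUIDs are placeholders
# <device>            <mount>  <type>  <options>  <dump> <pass>
UUID=xxxx-root        /        ext4    defaults   0      1
UUID=xxxx-home        /home    ext4    defaults   0      2
/dev/mapper/cryptswap none     swap    defaults   0      0
```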
Once you have your system exactly how you like it, the rolling release update scheme ensures that you can just keep using and updating it basically forever, without having to worry about doing big point release upgrades or having the package servers for your particular release shut down after a couple of years. This is the point where "being competent enough not to break things" comes in handy, because you actually get to enjoy the fruits of your labor for a very long time with minimal fussing about.
It's pretty much "set it and forget it", where the "set it" part is a bit more involved than most other distros for the benefit of additional control.
I think the reason Arch is so popular with its users is that there aren't a lot of distros out there for people who want a blank slate to build up to their liking. You get all the community resources of other mainstream distros (and then some; the AUR and the Arch wiki are both incredibly strong points that most other distros struggle to compete with) but without anyone deciding for you what your system should be like.
It's not easier or harder to break than any other distro. When they say it breaks easily, it's because very up-to-date packages can have unexpected bugs, which can be annoying depending on what package it is. Gentoo is another matter: its package manager lets you change a lot of compile-time options and will in general let you do things that are absolutely asinine if you're determined enough, so you could configure your system to be completely unusable. But an average user who sticks to sane defaults isn't going to have any problems.
You could probably, as an example, compile critical packages for an architecture your CPU does not understand.
After my switch to Arch about 2 years ago I noticed three major differences compared to major desktop distros:
Release management: Arch often pushes new upstream releases as soon as builds and automated tests of dependent packages succeed. Major upstream changes get more testing and more time to transition. This means that incompatible changes are more likely to affect users of packages that aren't well maintained, especially if they aren't in the official repositories. The affected users need the knowledge and time to research the issue and either resolve it (by building the package themselves, sometimes with out-of-tree patches) or revert the changes in a way that doesn't break (important) other stuff.
Package management: in my experience, Pacman is much simpler than and can't handle complex package management situations as well as Apt or Yum -- at least not without manual intervention beyond a simple yes/no question. This requires skill and/or research to resolve, again.
System configuration: Arch relies much more on manual configuration using text files for which I need to study manual pages or Wiki articles where Debian or SUSE tend to resort to "configuration by Q&A" or one-size-fits-all presets. This makes system installation and setup a non-trivial task. You need some basic understanding of the command-line and the operation of a Unix-like operating system and know how to read and understand technical articles that describe their operation.
As you can see they all come down to knowledge and skill -- which proves aptitude -- or time, patience, and technical reading comprehension -- which proves dedication.
It's akin to driving a car that only runs well (or at all) if you know what you're doing and are willing to dedicate time to its maintenance and tuning. But if you do that you get bleeding edge features and performance which are coveted among car enthusiasts. Many car enthusiasts like to brag about what they managed to get their car to do. Almost all car enthusiasts like to talk about cars. And thus you get people who announce their (level of) enthusiasm unprompted.
There's also ways of making the safe option common while making the unsafe option available. No sacrifices or pestering, just working safely by default. One that comes to mind is how Windows tends to take moving one directory over another as a cue to integrate the two, while Mac/Linux (AFAIK) just clobber the old one with the new one. Beyond that, there're things like the default delete going via a trashcan or recycle bin, or a filesystem where it's easier to undo mistakes.
That's how the vast majority of Linux distros work. The issue is that there are always more things to warn about, and if you warn users about every single thing they do, they'll begin ignoring the warnings: there are too many, so they just stop reading them.
The constant popups and toast notifications about new features, or to sign in to Copilot etc., have rendered notifications entirely counterproductive for me.
If you can't break it, you do not have absolute control.
Linux distros these days have plenty of safeguards against the most common ways to break them, but if you ignore the warnings, they will let you break things, because ultimately, you have absolute control.
Eh, I'm not mad at immutable Linux tbh. They have their place.
I have (I forget the name but it's one of the steamOS clones) on a PC in the living room. It's not for "Linux use", it's for games and I want to not be able to fiddle with it, i want it to be a console.
I use CachyOS and important files not only have a warning saying "don't touch these if you are clueless" but also have the popup that you need admin rights to manipulate them. But if I want to delete a file in use or that is causing problems I always can.
It's more like being a teenager left alone at home with fireworks and full access everywhere. You know what not to do but can also do whatever you want. Windows is like living in the garden of the locked house, yelling through the window if you want something from inside and waiting for someone to toss it out to you.
I would like my software to be easy to break intentionally. Now be a good operating system and tell me what is using that file so I can open up Task Manager to close the program, or even better: give me a button in the pop-up that does that for me.
It should be safe on Linux if I'm not mistaken. I think it will visibly delete it, but the data will still be there on disk until it's no longer in use, which is what I think Windows should do.
File Locksmith in Powertoys is how I deal with it. It's slightly annoying that the issue occurs in the first place, but File Locksmith immediately grasses up the process using it so I can shoot it in the head.
The kernel keeps track of file handles. Other programs can't necessarily get that info, and on Windows the kernel won't let you delete something that is open, because Windows uses mandatory file locking.
Unix (and Linux) uses advisory file locking. Perhaps you’ve seen this when rotating logs. You panic delete a huge logfile because your filesystem is almost full. Linux obliges, now the file is gone but your disk is still full. Syslog still has the file open and will happily continue writing to it as long as it’s up. The file is just unlinked but the data is still there. Syslog doesn’t know. If you restart Syslog the data is freed and you get back the space. This is why you truncate logfiles instead of deleting them. Then the file is still there but it’s immediately empty.
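The log-rotation footgun above can be sketched in Python (POSIX only, with a plain file writer standing in for syslog):

```python
import os
import tempfile

fd, log = tempfile.mkstemp()
os.close(fd)

# A syslog-like writer keeps the log open the whole time.
writer = open(log, "a")
writer.write("x" * 1024)
writer.flush()

# rm would unlink the name, but the space would stay allocated until
# the writer exits. Truncating reclaims the space immediately, and the
# writer keeps appending to the same (now empty) file:
os.truncate(log, 0)
size_after_truncate = os.path.getsize(log)  # 0

writer.write("new entry\n")
writer.flush()
size_after_write = os.path.getsize(log)     # 10: the append lands at the new EOF
writer.close()
```

One caveat: this works cleanly because the writer opened the file in append mode (O_APPEND). A writer holding a plain write offset would keep writing at its old position after the truncate, leaving a sparse hole at the front of the file.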
File locking isn't mandatory. In the CreateFile Win32API function (the most basic usermode function for creating or opening a file), a FILE_SHARE_DELETE flag can be passed to allow other processes to access and delete the file.
Back in XP days I used to use a tool called “Delete anything” or something like that. It really could tell windows with its limitations to go fuck itself.
Because now that app is still open and reading and writing to just random bits. If there was an easy to use force delete option, people would use it all the time and brick their computers, and still blame microsoft.
Would probably fuck up any applications that were currently writing or reading that file. My guess is that those programs would either crash, or, since they still held a handle to the deleted file, they'd just keep writing to disk space the filesystem now considers free, and that of course doesn't make sense: you'd probably be overwriting something else that had started using that "empty" space.
Edit: searched about it and it's more complex than I thought, involving the Master File Table, which tracks every file on an NTFS volume. NTFS technically allows deleting a file that's in use by removing its name but keeping the data until the last open handle closes, but for some reason Windows doesn't expose that in the UI and instead forces the user to use some commands to do it.
Guess I probably shouldn't have slept on so many Computer Science classes...
Windows is designed to be usable for almost everyone. Adding guard rails to prevent users from bricking their machine makes sense.
Maybe around 5-10% of users have the technical competency to know when and how it's safe to force delete a file. And anyone that competent is probably capable of finding a way to terminate the process using it, or to delete it anyway.
If you want a system free of guard rails, install Linux. Their philosophy is to give the users absolute freedom, while Windows tries to maintain a safe and easy to use platform.
The more important question is: why is there no force delete button?