Since you forgot to add --preserve-root, it won’t go too far.
Go on then … try it.
Or don’t, because you will erase your system. (Hint: it’s in the asterisk)
as the binary is already loaded into memory
That’s not the reason why it continues. It’s because there’s still a file descriptor open to rm.
In Unix/Linux, a removed file only disappears when the last file descriptor to it is gone. As long as the file /usr/bin/rm is still held open by a process (and it is, because it is running), it will not actually be deleted from disk from the perspective of that process.
This is also why removing a log file that’s actively being written to doesn’t free up filesystem space, and why it’s more effective to truncate it instead (e.g. run > /var/log/myhugeactivelogfile.log instead of rm /var/log/myhugeactivelogfile.log), or why Linux can upgrade a package that’s currently running and the running process will just keep chugging along as the old version until restarted.
Sometimes you can even use this to recover an accidentally deleted file, if it’s still held open by a process. You can go to /proc/$PID/fd, where $PID is the process ID of the process holding the file open, find all the file descriptors it has in use, and then copy the lost content from there.
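A rough sketch of how that recovery can look (the PID, fd number and file names below are just placeholders for illustration):

    lsof | grep '(deleted)'             # find processes that still hold deleted files open
    ls -l /proc/1234/fd                 # 1234 = that PID; the lost file shows up as '... (deleted)'
    cp /proc/1234/fd/4 /tmp/recovered   # 4 = the fd number still pointing at the deleted file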
kill -9 1
Leave the poor kernel out of it, it has nothing to do with this. It’s Lennart, not Linus.
I don’t think it’s intended as a “solution”, it just lets the clobbering that is caused by the case insensitivity happen.
So git just goes along with it. If you add a third or fourth file, it would just continue: whichever file gets checked out first gets the filename, and whichever file gets checked out last gets the content.
It tells you there’s a name clash, and then it clones it anyway and you end up with the contents of README.MD in README.md as an unstaged change.
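For illustration, on a case insensitive filesystem (the default on Windows and macOS) the clone roughly plays out like this; the URL is made up and the repo is assumed to contain both README.md and README.MD:

    git clone https://example.com/some/repo.git   # git warns that the paths collide...
    cd repo
    git status                                     # ...and then shows one of the two names as
                                                   # modified: the file checked out first kept
                                                   # the name, the one checked out last
                                                   # supplied the content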
That’s some suckless level cope
Thanks, really constructive way of arguing your point…
Who really cares about some programming purity aspect?
People who create operating systems and file systems, or programs that interface with those should, because behind every computing aspect is still a physical reality of how that data is structured and stored.
What’s correct is the way that creates the least friction for the end users
Treating different characters as different characters is objectively the most correct and predictable way. Case has meaning, both in natural language as well as in almost anything computer related, so users should be allowed to express case canonically in filenames as well. If you were never exposed to a case insensitive filesystem first, you would find case sensitive the most natural way. Give end users some credit: it’s really not rocket science to understand that f and F are not the same, and most people handle this “mindblowing” concept just fine.
Also, the reason Microsoft made NTFS case insensitive by default was not because of “user friction” but because of backwards compatibility with MS-DOS/FAT16 all-uppercase 8.3 file names. However, when they created a new file system for the cloud, Azure Blob Storage, guess what: they made it case sensitive.
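As a quick illustration of how little surprise there is on a typical case sensitive Linux filesystem (ext4 here is just an example):

    touch f F
    ls        # f  F  -- two distinct files, exactly as typed
    # on a case insensitive filesystem (default NTFS or APFS) the second touch
    # would merely update the timestamp of the one existing file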
Unix was designed for mainframes
Unix was never for mainframes. It was for 16-bit minicomputers that sat below mainframes, but yes they were more advanced than the first personal computers.
It’s actually impressive how much modern/business functionality they were able to cram into that.
Absolutely, but you have to admit that it’s a less solid foundation to build a modern operating system on.
In the 80s, there were several Unices for PC too btw: AT&T, SCO, even Microsoft’s own Xenix. Most of them were prohibitively expensive though.
You’re probably joking, but in case you don’t know: LPT stands for Line Printer Terminal, and LPT1, LPT2, LPT3… referred to parallel ports which were typically (though not exclusively) used to connect a printer.
The thing is, a lot of the legacy backwards compatible stuff that’s in Linux is because a lot of things in Unix were actually pretty well thought out from the get go, unlike many of the ugly hacks that went into MSDOS and later Windows and overstayed their welcome.
Things like: long, case sensitive file names from the beginning instead of forced uppercase 8.3, a hierarchical filesystem instead of drive letters, the “everything is a file” concept, a notion of multiple users and permissions, pre-emptive multitasking, proper virtual memory management instead of “640k is enough” + XMS + EMS, and so on.
Or just name the file con. Windows 95 even used to bluescreen if you tried to refer to con\con.
If you rename a file only changing the casing, it doesn’t update properly; you need to rename it to something else and back. This is so user-friendly that I have been stumped by it multiple times.
To my great surprise, this has been fixed. I don’t know when, but I tried it on my Windows 10 VM and it just worked. Only took them 20 years or so :)
I would argue that elegance and being easy to program are virtues by themselves, because it makes code easy to understand and easy to maintain.
A one-to-one string to filename mapping is straightforward and elegant. It’s easy to understand (“a filename is a unique string of characters”), it makes file name comparisons easy (a bit level compare suffices) and as long as you consistently use the case that you intend, it doesn’t behave unexpectedly. It really is the way of the least surprise.
After all, case often does have meaning, so why shouldn’t it be treated as a meaningful part of a filename? For example: “French fries.jpg” could contain a picture of fries specifically made in France, whereas “french fries.jpg” could contain a picture of fries made anywhere. Or “November rain.mp3” could be the sound of rain falling in the month of November, whereas “November Rain.mp3” is a Guns N’ Roses song. All silly examples of course, but they’re merely to demonstrate that capitalization does have meaning, and so we should be able to express that canonically in filenames as well.
It actually seems like it even works in explorer nowadays. I’ll be damned, they fixed something…
The point is you have to take this into account, so the decision to go with a case insensitive file system has ripple effects much further down your system. You have to design around it at every step in code where a string variable results in a file being written to or read from.
It’s much more elegant if you can simply assume that a particular string will map one-to-one to a unique filename.
Even Microsoft understands this btw, their Azure Blob Storage system is case sensitive. The only reason NTFS isn’t (by default) is because of legacy. It had to be compatible with all uppercase 8.3 filenames from DOS/FAT16.
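A toy example of that ripple effect (file names made up): on a case insensitive filesystem, a case mismatch between the code path that writes and the one that reads never surfaces as an error, it just silently hits the same file:

    echo "first"  > Report.txt
    echo "second" > report.txt    # case insensitive FS: this overwrites Report.txt
    cat Report.txt                # prints "second" there; on a case sensitive FS
                                  # you'd have two separate files and "first" here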
The flag is called --no-preserve-root, but the flag wouldn’t do anything here because you’re not deleting root (/), you’re deleting all non-hidden files and directories under root (/*), and rm will just let you do it.
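You can see the hint about the asterisk harmlessly by letting echo show what the shell actually hands to rm, since the glob is expanded before rm even starts:

    echo rm -rf /*
    # prints something like: rm -rf /bin /boot /dev /etc /home /usr /var ...
    # rm never sees "/" itself, so the default --preserve-root has nothing to protect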