When Windows is using a file, it usually won't let you rename, modify, or delete that file until the program holding it lets go; on Linux and OS X, by contrast, you can generally alter or even delete a file while it is still in use. Why do the two systems treat in-use files so differently? Read on to see what is happening behind the scenes.
Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
The Question
SuperUser reader the.midget wants to know why Linux and Windows treat in-use files differently. What's happening behind the scenes that prevents him from wantonly deleting things in Windows the way he can in Linux?
The Answer
SuperUser contributors shed some light on the situation for the.midget. Amazed writes:
Whenever you open or execute a file in Windows, Windows locks the file in place (this is a simplification, but usually true). A file that is locked by a process cannot be deleted until that process releases it. This is why, whenever Windows has to update itself, you need a reboot for it to take effect.
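For the curious, here is a minimal Win32 C sketch of the behavior Amazed describes (the path is just a hypothetical example): as long as a handle opened with a share mode of zero stays open, attempts to delete the file are refused.

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* Hypothetical file name, used only for illustration. */
    const char *path = "C:\\temp\\report.txt";

    /* Open the file with a share mode of 0: no other opener may
       read, write, or delete it while this handle stays open. */
    HANDLE h = CreateFileA(path, GENERIC_READ,
                           0,                       /* dwShareMode: share nothing */
                           NULL, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "open failed: %lu\n", GetLastError());
        return 1;
    }

    /* While the handle above is open, a delete attempt -- from this
       process or any other -- is refused with a sharing violation. */
    if (!DeleteFileA(path)) {
        printf("DeleteFileA failed as expected, error %lu "
               "(ERROR_SHARING_VIOLATION is 32)\n", GetLastError());
    }

    CloseHandle(h);   /* once the handle is released, the delete succeeds */
    return 0;
}
```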
On the other hand, Unix-like operating systems like Linux and Mac OS X don't lock the file but rather the underlying disk sectors. This may seem a trivial distinction, but it means that the file's record in the filesystem's table of contents can be deleted without disturbing any program that already has the file open. So you can delete a file while it is still executing or otherwise in use, and it will continue to exist on disk as long as some process holds an open handle to it, even though its entry in the file table is gone.
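And here is the Unix-side counterpart, a minimal POSIX C sketch (again with a hypothetical path): unlink() removes the file's name from the directory, but the already-open descriptor keeps reading the data until it is closed.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical file, used only for illustration. */
    const char *path = "/tmp/demo.txt";

    int fd = open(path, O_RDONLY);           /* take an open handle */
    if (fd < 0) { perror("open"); return 1; }

    /* Remove the directory entry. Anyone using the path will no longer
       find the file, but the data stays on disk because this process
       still holds an open descriptor. */
    if (unlink(path) < 0) { perror("unlink"); return 1; }

    char buf[128];
    ssize_t n = read(fd, buf, sizeof buf);    /* still works after unlink */
    printf("read %zd bytes from a file that no longer has a name\n", n);

    close(fd);   /* last reference gone: the kernel frees the space */
    return 0;
}
```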
David Schwartz expands on the idea, highlighting how things ideally should work and how they work in practice:
A lot of old Windows code uses the C/C++ API (functions like fopen) rather than the native API (functions like CreateFile). The C/C++ API gives you no way to specify how mandatory locking will work, so you get the defaults. The default “share mode” tends to prohibit “conflicting” operations. If you open a file for writing, writes are assumed to conflict, even if you never actually write to the file. Ditto for renames.
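To make that contrast concrete, here is a small C sketch (the path is hypothetical): fopen() offers no way to state a share mode, while CreateFile() takes an explicit share-mode argument that can even permit other processes to delete or rename the file while it is open.

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    const char *path = "C:\\temp\\log.txt";   /* hypothetical file */

    /* C runtime: no way to express a share mode, so you get whatever
       defaults the runtime picks for you. */
    FILE *fp = fopen(path, "r");
    if (fp) fclose(fp);

    /* Native API: the third argument spells out exactly which
       "conflicting" operations other openers are allowed to perform.
       With FILE_SHARE_DELETE, someone else may rename or delete the
       file while we read it. */
    HANDLE h = CreateFileA(path, GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h != INVALID_HANDLE_VALUE)
        CloseHandle(h);

    return 0;
}
```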
And, here’s where it gets worse. Other than opening for read or write, the C/C++ API provides no way to specify what you intend to do with the file. So the API has to assume you are going to perform any legal operation. Since the locking is mandatory, an open that allows a conflicting operation will be refused, even if the code never intended to perform the conflicting operation but was just opening the file for another purpose.
So code that uses the C/C++ API, or uses the native API without specifically thinking about these issues, will wind up preventing the maximum set of possible operations for every file it opens, and it will be unable to open a file unless every possible operation it could perform on that file once opened is unconflicted.
In my opinion, the Windows method would work much better than the UNIX method if every program chose its share modes and open modes wisely and sanely handled failure cases. The UNIX method, however, works better if code doesn’t bother to think about these issues. Unfortunately, the basic C/C++ API doesn’t map well onto the Windows file API in a way that handles share modes and conflicting opens well. So the net result is a bit messy.
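As a rough illustration of what "sanely handling failure cases" could look like, here is a hypothetical sketch (not a prescription): a small retry loop around an open call that may fail with a sharing violation while another program holds a conflicting lock.

```c
#include <stdio.h>
#include <windows.h>

/* Try to open a file for reading, retrying a few times if another
   process currently holds a conflicting lock on it. */
static HANDLE open_with_retry(const char *path, int attempts)
{
    for (int i = 0; i < attempts; i++) {
        HANDLE h = CreateFileA(path, GENERIC_READ,
                               FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h != INVALID_HANDLE_VALUE)
            return h;                               /* success */
        if (GetLastError() != ERROR_SHARING_VIOLATION)
            break;                                  /* some other error: give up */
        Sleep(200);                                 /* back off and try again */
    }
    return INVALID_HANDLE_VALUE;
}

int main(void)
{
    HANDLE h = open_with_retry("C:\\temp\\data.bin", 5);   /* hypothetical path */
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "could not open file, error %lu\n", GetLastError());
        return 1;
    }
    CloseHandle(h);
    return 0;
}
```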
There you have it: two different approaches to file handling yield two different results.

Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.