Here is a summary of what we have learned so far about the behaviour of the fsync() system call in the presence of write-back errors on various operating systems of interest to PostgreSQL users (if our build farm is a reliable survey).
What we want to know is: when can write-back errors be forgotten and go unreported to userspace? For example, what happens if errors are detected during asynchronous write-back? What about errors that occurred before you opened the file, got a new file descriptor and called fsync()? If fsync() reports failure and you then call fsync() again, can it falsely report success? PostgreSQL's checkpointing protocol assumes that a successful call to fsync() means that *all* data for a file is on disk. Apparently that is not the case on some operating systems, leading to the potential for unreported data loss. This survey was triggered by fsyncgate 2018.
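To make the questionable pattern concrete, here is a minimal, self-contained C sketch (not PostgreSQL code; the file name "datafile" is made up) of a writer that retries fsync() after a failure. On kernels that mark buffers clean after a write-back error, the retry can report success even though the data never reached stable storage:

 /* Minimal sketch (not PostgreSQL code; the file name "datafile" is made up):
  * the retry pattern whose safety is in question.  On kernels that mark
  * buffers clean after a write-back error, the second fsync() can return 0
  * even though the data never reached stable storage. */
 #include <errno.h>
 #include <fcntl.h>
 #include <stdio.h>
 #include <string.h>
 #include <unistd.h>
 
 int main(void)
 {
     const char buf[] = "important checkpoint data\n";
     int fd = open("datafile", O_WRONLY | O_CREAT, 0600);
 
     if (fd < 0) { perror("open"); return 1; }
     if (write(fd, buf, sizeof buf - 1) != (ssize_t) (sizeof buf - 1)) {
         perror("write");
         return 1;
     }
     if (fsync(fd) != 0) {
         fprintf(stderr, "fsync failed: %s; retrying...\n", strerror(errno));
         /* The natural-looking recovery: just call fsync() again.  On the
          * kernels discussed below, the dirty page may already have been
          * marked clean or dropped, so this can "succeed" after data loss. */
         if (fsync(fd) == 0)
             fprintf(stderr, "retry reported success -- possibly falsely\n");
     }
     close(fd);
     return 0;
 }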
If you see a mistake or know something I don't, please update this document with supporting references, or ping [email protected]!
Update: As of this commit, PostgreSQL will now PANIC on fsync() failure. (Similar changes were made in InnoDB/MySQL, WiredTiger/MongoDB and no doubt other software as a result of the PR around this.)
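A minimal sketch of that approach, assuming one simply treats any fsync() failure as fatal rather than retrying (the helper name durable_fsync_or_die is invented for illustration and is not PostgreSQL's actual API):

 /* Sketch of "PANIC on fsync() failure" (durable_fsync_or_die is an invented
  * helper name, not PostgreSQL's API): instead of retrying, crash and let
  * WAL replay redo the work from a point before the failed write. */
 #include <errno.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 #include <unistd.h>
 
 static void durable_fsync_or_die(int fd, const char *path)
 {
     if (fsync(fd) != 0) {
         /* Do not retry: on several kernels the error is reported only once
          * and the dirty data may already be gone. */
         fprintf(stderr, "PANIC: could not fsync file \"%s\": %s\n",
                 path, strerror(errno));
         abort();    /* crash; recovery replays from the write-ahead log */
     }
 }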
Open source kernels:
- Darwin/macOS: buffers are invalidated, code similar to NetBSD
- DragonflyBSD: not analysed -- the source of brelse might tell us
- FreeBSD: buffers remain dirty (and from version 11.1 on, they are dropped on failure after the device goes away), so future fsync() calls will try again and presumably fail; recent testing report, 10-year-old testing report, commit from over 20 years ago fixing the issue
- Illumos: writes are retried, at least in the case of asynchronous write-back. Not yet clear to me whether failure provoked by a synchronous fsync() call leaves buffers valid and dirty.
- Linux < 4.13: fsync() errors can be lost in various ways; also buffers are marked clean after errors, so retrying fsync() can falsely report success and the modified buffer can be thrown away at any time due to memory pressure
- Linux 4.13 and 4.15: fsync() only reports write-back errors that occurred after you called open(), so our schemes for closing and opening files LRU-style and handing fsync() work off to the checkpointer process can hide write-back errors (see the sketch after this list); also, buffers are marked clean after errors, so even if you opened the file before the failure, retrying fsync() can falsely report success and the modified buffer can be thrown away at any time due to memory pressure.
- Linux 4.14 and Linux >= 4.16: the write-back error counter is initialised differently, so that somebody gets the inode's first error even if the file was closed and opened in between, but you still only get the error once (so retrying fsync() is not OK) and the error can be forgotten if the inode falls out of the inode cache (unlikely since all file descriptors referencing the inode must be closed first, and close calls fsync); buffers are still thrown away (either immediately or on memory pressure, depending on the choice of filesystem), so you might read back an older version of the page than the one you most recently wrote
- NetBSD: buffers are invalidated (see here), so future fsync() calls may return success despite data loss; there may also be other problems according to a netbsd.org bug report that was triggered by our discussion
- OpenBSD: buffers are invalidated, code similar to NetBSD; OpenBSD hackers pinged for comment (new OpenBSD hackers thread); UPDATE: a recent commit changed the behaviour, analysis needed; man page updated to say 'To guard against potential inconsistency, future calls will continue failing until all references to the file are closed.', which is good as long as someone holds the file open, but that isn't guaranteed in PostgreSQL (it probably should be)
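To illustrate the Linux 4.13/4.15 items above, here is a minimal sketch (checkpoint_fsync is a made-up helper name, not PostgreSQL code) of the "open a file just to fsync it" pattern those kernels make unsafe, since a freshly opened descriptor may never see an earlier write-back error:

 /* Sketch of the pattern that Linux 4.13/4.15 semantics make unsafe:
  * re-opening a file just to fsync it (checkpoint_fsync is a made-up helper
  * name, not PostgreSQL code).  If write-back of earlier dirty pages failed
  * before this open(), the error may never be reported on this descriptor. */
 #include <errno.h>
 #include <fcntl.h>
 #include <unistd.h>
 
 static int checkpoint_fsync(const char *path)
 {
     int fd = open(path, O_RDWR);    /* new fd: older error state may be invisible */
     int rc, saved;
 
     if (fd < 0)
         return -1;
     rc = fsync(fd);                 /* can return 0 despite an earlier write-back error */
     saved = errno;
     close(fd);
     errno = saved;
     return rc;
 }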
Closed source kernels:
- AIX: unknown
- HPUX: unknown
- Solaris: maybe the same as Illumos, but there was apparently a great VM allocator rewrite after Solaris reverted to closed source
- Windows: unknown
Note that ZFS is likely to be a special case even on Linux, because it doesn't use the regular page cache and has special handling for failures. More information needed.
Archeological notes: All BSD-derived systems probably inherited that brelse() logic from their common ancestor, but FreeBSD changed it in 1999, and DragonflyBSD forked from FreeBSD in 2003 but apparently rewrote the bio code significantly. Darwin inherited code directly from ancient BSD via NeXT, and later took more code from FreeBSD, but apparently not the behaviour discussed above. Ancient Bell UNIX conceptually had the same problem, but since it didn't have fsync(), that's somewhat moot. According to various man pages, fsync() was introduced by 4.2BSD (1983; not sure if fsync was added a bit later), developed around the same time and in the same place as POSTGRES (1986), and its man page said it was provided for building transactional facilities. Also, fsync(1) appeared in FreeBSD 4.3 (2001), a command line tool that lets you sync a named file, which probably only makes sense if you have a certain model of how I/O errors and buffering work.
Retrieved from 'https://wiki.postgresql.org/index.php?title=Fsync_Errors&oldid=34423'