From: Mathieu Desnoyers
Date: Wed, 19 May 2010 21:49:56 +0000 (-0400)
Subject: liblttd: don't fill the page cache
X-Git-Tag: 0.85~1
X-Git-Url: http://git.lttng.org/?a=commitdiff_plain;h=14af8709e68d4f2d5bf905bf516e71307d2b1315;hp=14af8709e68d4f2d5bf905bf516e71307d2b1315;p=ltt-control.git

liblttd: don't fill the page cache

* Linus Torvalds (torvalds@linux-foundation.org) wrote:
>
> On Wed, 19 May 2010, Mathieu Desnoyers wrote:
> >
> > Good point. This discard flag might do the trick and let us keep things
> > simple. The major concern here is to keep the page cache disturbance
> > relatively low. Which of new page allocation or stealing back the page
> > has the lowest overhead would have to be determined with benchmarks.
>
> We could probably make it easier somehow to do the writeback and discard
> thing, but I have had _very_ good experiences with even a rather trivial
> file writer that basically used (iirc) 8MB windows, and the logic was
> very trivial:
>
>  - before writing a new 8M window, do "start writeback"
>    (SYNC_FILE_RANGE_WRITE) on the previous window, and do
>    a wait (SYNC_FILE_RANGE_WAIT_AFTER) on the window before that.
>
> in fact, in its simplest form, you can do it like this (this is from my
> "overwrite disk images" program that I use on old disks):
>
> 	for (index = 0; index < max_index; index++) {
> 		if (write(fd, buffer, BUFSIZE) != BUFSIZE)
> 			break;
> 		/* This won't block, but will start writeout asynchronously */
> 		sync_file_range(fd, index*BUFSIZE, BUFSIZE, SYNC_FILE_RANGE_WRITE);
> 		/* This does a blocking write-and-wait on any old ranges */
> 		if (index)
> 			sync_file_range(fd, (index-1)*BUFSIZE, BUFSIZE,
> 				SYNC_FILE_RANGE_WAIT_BEFORE|SYNC_FILE_RANGE_WRITE|SYNC_FILE_RANGE_WAIT_AFTER);
> 	}
>
> and even if you don't actually do a discard (maybe we should add a
> SYNC_FILE_RANGE_DISCARD bit, right now you'd need to do a separate
> fadvise(FADV_DONTNEED) to throw it out) the system behavior is pretty
> nice, because the heavy writer gets good IO performance _and_ leaves only
> easy-to-free pages around after itself.

Great! I just implemented it in LTTng and it works very well!

I did face a small counter-intuitive fadvise behavior, though:

	posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

only seems to affect the parts of the file that already exist, so after
each splice() that appends to the file, I have to call fadvise again. I
would have expected the "0" len parameter to tell the kernel to apply the
hint to the whole file, including parts that will be added in the future.
I expect we have this behavior because fadvise() was initially designed
with read behavior in mind rather than write.

For the record, I do an fadvise + async range write after each splice().
Also, after each subbuffer write, I do a blocking write-and-wait on all
pages of the subbuffer prior to the one that has just been written,
instead of using the fixed 8MB window (a rough sketch of this pattern is
included after the sign-off).

Signed-off-by: Mathieu Desnoyers
---
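
For illustration only, here is a rough sketch of the per-subbuffer write
pattern described above. This is not the actual liblttd code: the
write_subbuffer() helper and the fd_pipe/fd_out/subbuf_size names are made
up for the example, and the real consumer has more error handling around
partial splice() results.

	#define _GNU_SOURCE	/* for splice() and sync_file_range() */
	#include <fcntl.h>
	#include <unistd.h>
	#include <stdio.h>

	/*
	 * Append one subbuffer from the relay pipe to the trace file at
	 * *offset, then apply the fadvise + sync_file_range pattern.
	 */
	static int write_subbuffer(int fd_pipe, int fd_out, loff_t *offset,
				   size_t subbuf_size)
	{
		loff_t written_at = *offset;
		ssize_t ret;

		ret = splice(fd_pipe, NULL, fd_out, offset, subbuf_size,
			     SPLICE_F_MOVE | SPLICE_F_MORE);
		if (ret < 0) {
			perror("splice");
			return -1;
		}

		/*
		 * The advice only applies to pages that already exist, so it
		 * has to be repeated after every splice() that grows the file.
		 */
		posix_fadvise(fd_out, 0, 0, POSIX_FADV_DONTNEED);

		/* Start asynchronous writeback of what was just written. */
		sync_file_range(fd_out, written_at, ret, SYNC_FILE_RANGE_WRITE);

		/*
		 * Blocking write-and-wait on the previous subbuffer, so only
		 * clean, easy-to-free pages are left behind.
		 */
		if (written_at >= (loff_t) subbuf_size)
			sync_file_range(fd_out, written_at - subbuf_size,
					subbuf_size,
					SYNC_FILE_RANGE_WAIT_BEFORE |
					SYNC_FILE_RANGE_WRITE |
					SYNC_FILE_RANGE_WAIT_AFTER);
		return 0;
	}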