Opened 18 years ago
Closed 18 years ago
#2242 closed patch (fixed)
Recordings not being deleted fast enough
| Reported by: | | Owned by: | Isaac Richards |
|---|---|---|---|
| Priority: | minor | Milestone: | unknown |
| Component: | mythtv | Version: | head |
| Severity: | medium | Keywords: | |
| Cc: | | Ticket locked: | no |
Description
I noticed my recordings partition was nearly full, so I deleted several shows to make some space. It appears the new gradual file truncation code is too gradual, because it is not deleting them fast enough.
I have one DVB channel currently recording. Originally the free space looked like this:
150G 144G 6.1G 96% /mnt/recording
So I deleted several shows, as shown in this lsof output:
mythbacke 25571 mythtv 23w REG 253,2 5399902004 67259165 /srv/media/recording/1030_20060823192900.mpg (deleted)
mythbacke 25571 mythtv 54w REG 253,2 1544553292 67109020 /srv/media/recording/1012_20060526205900.mpg (deleted)
mythbacke 25571 mythtv 55w REG 253,2 1786772492 67109041 /srv/media/recording/1012_20060616205900.mpg (deleted)
mythbacke 25571 mythtv 56w REG 253,2 1793806700 67110461 /srv/media/recording/1012_20060707205900.mpg (deleted)
mythbacke 25571 mythtv 57w REG 253,2 21568151552 67259106 /srv/media/recording/1001_20060819182900.mpg (deleted)
Over half an hour later, the free space looks like this:
150G 146G 4.3G 98% /mnt/recording
Looking at the r10235 commit, I can see that it is supposed to delete faster than the calculated recording rate, but this is clearly not working. My disk is gradually filling up and will soon be completely full.
Unfortunately I have no useful log, since I'm not running the backend with -v file, although I will next time (this happened once before, but I couldn't log in to investigate). If I stop and restart the backend with verbose logging, the files will of course be deleted immediately.
The machine is backend-only, there is plenty of free CPU, and the filesystem is XFS, so there is no reason it should not keep up.
Attachments (2)
Change History (6)
comment:1 Changed 18 years ago by
| Resolution: | → fixed |
|---|---|
| Status: | new → closed |
comment:2 Changed 18 years ago by
| Resolution: | fixed |
|---|---|
| Status: | closed → reopened |
I am reopening this ticket because I found the actual bug. The problem was that the st_blksize field in the stat buffer is not the unit for the st_blocks field; st_blocks is always in units of 512 bytes.
This explains both the overly slow deletes and the gigantic file sizes reported by the commented-out VB_FILE logging that some people have been seeing, because st_blksize is typically much larger than 512.
I changed the code to compute the rounded up size based on st_size instead of st_blocks. A patch is attached.
(Also, I am happy to report this wasn't my bug :-)
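A minimal sketch of the distinction being described, assuming a stat() result in `sb` (this is illustrative only, not the attached file-size.patch; the function names are made up for the example):

```cpp
#include <sys/stat.h>
#include <sys/types.h>

// Mistaken assumption: st_blocks counted in st_blksize-sized units.
// st_blksize is the preferred I/O block size, so this over-estimates badly.
off_t size_from_blocks_wrong(const struct stat &sb)
{
    return (off_t)sb.st_blocks * sb.st_blksize;
}

// The approach described in this comment: round st_size up to the next
// block boundary instead of relying on st_blocks at all.
off_t size_from_st_size(const struct stat &sb)
{
    const off_t blksize = sb.st_blksize;
    return ((sb.st_size + blksize - 1) / blksize) * blksize;
}
```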
Changed 18 years ago by
| Attachment: | file-size.patch added |
|---|---|
Changed 18 years ago by
| Attachment: | mythtv-ftruncate_block_size_and_overflow.patch added |
|---|---|
comment:3 Changed 18 years ago by
| Type: | defect → patch |
|---|---|
I was working on this at the same time as Boleslaw and had come up with a similar patch, but with a couple of changes. First, using size_t for fsize is unsafe because it can overflow (the overflow happened to produce a value that was "close enough" to the right value on some systems, including the ones on which it was tested), so we need to use off_t instead. Also, the file size we compute is only an estimate (one that can be off by a lot), so if we decrement the estimate before the initial truncation, that first truncation could end up being much larger than desired.
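To make the overflow concern concrete, here is a small hedged illustration (the 5.5 GiB figure echoes the example below; the wrapped value is what a 32-bit size_t would keep, while off_t with large-file support does not wrap):

```cpp
#include <cstdint>
#include <cstdio>
#include <sys/types.h>

int main()
{
    // A 5.5 GiB recording, as in the example below.
    const uint64_t real_size = 5ULL * 1024 * 1024 * 1024 + 512ULL * 1024 * 1024;

    const uint32_t wrapped = (uint32_t)real_size; // what a 32-bit size_t holds
    const off_t    correct = (off_t)real_size;    // 64-bit with large-file support

    std::printf("wrapped: %u bytes (~1.5 GiB)\n", (unsigned)wrapped);
    std::printf("correct: %llu bytes\n", (unsigned long long)correct);
    return 0;
}
```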
In my patch, I truncate the file to the estimated file size first (the file is almost certainly not written in the minimum possible number of blocks, so the estimate is low). For example, on a 5.5 GiB recording of mine, the estimated file size is 5.5 MiB smaller than the size on disk, so without truncating to the estimate first, the initial truncate would be ~9.5 MiB instead of the 4 MiB/500 ms target. There are obviously several other ways of truncating to the estimated file size first, but the approach I used seemed the least intrusive to the existing code.
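A rough sketch of that ordering, assuming hypothetical constants for the 4 MiB/500 ms target (this is the shape of the idea, not the attached patch):

```cpp
#include <unistd.h>
#include <sys/types.h>

// Assumed values matching the stated target; not taken from the patch.
static const off_t      kIncrement = 4 * 1024 * 1024; // ~4 MiB per step
static const useconds_t kSleepUs   = 500 * 1000;      // ~500 ms between steps

// Truncate to the (low) estimate first, so the first step is never larger
// than a normal increment, then walk the size down to zero.
void slow_truncate(int fd, off_t estimated_size)
{
    off_t fsize = estimated_size;
    if (ftruncate(fd, fsize) != 0)
        return;

    while (fsize > 0)
    {
        usleep(kSleepUs);
        fsize = (fsize > kIncrement) ? fsize - kIncrement : (off_t)0;
        if (ftruncate(fd, fsize) != 0)
            return;
    }
}
```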
Note that we cannot use st_blocks*512 to determine the size on disk either, as the units of st_blocks are actually undefined; only some implementations use units of 512 bytes ( http://www.opengroup.org/onlinepubs/000095399/basedefs/sys/stat.h.html ). Therefore, an estimate (like Boleslaw's or mine; same idea, different math, same result) is probably best.
Thanks to Anduin for pointing out the blatantly obvious issue that I was missing while he was helping me with the patch.
comment:4 Changed 18 years ago by
| Resolution: | → fixed |
|---|---|
| Status: | reopened → closed |
(In [10956]) Bugfix for the slow truncating delete code. Estimate the filesize using a different formula that doesn't use st_blocks.
Patch by Michael Dean. I slightly modified it to put the decrement after the truncate instead of having to have a priming increment before the while loop.
Closes #2242.
I'm not sure why this code even needs to estimate the file size. It's not going to make that much of a difference whether or not we truncate on even block boundaries, so I don't see much difference between using the real size and an estimated size (which in this case is the real size rounded to the next lowest block). We could just set fsize = buf.st_size and be good.
Essentially fixed by [10947]. Slow deletes default to off now.