Opened 6 years ago

Closed 6 years ago

#11756 closed Bug Report - General (Invalid)

Slow reads and writes

Reported by: Craig Treleaven <ctreleaven@…> Owned by:
Priority: minor Milestone: unknown
Component: Ports - OSX Version: 0.26-fixes
Severity: medium Keywords:
Cc: Ticket locked: no

Description

Recently upgraded from 0.25-fixes to 0.26-fixes. After a few days, the logs are showing thousands of warning and error messages related to slow reads and writes. The backend has high CPU usage (compared to 0.25) while recording and becomes very slow to respond to keystrokes. There are frequent pauses while trying to play back recordings, to the point of being unwatchable.

mythcommflag and, to a much lesser extent, mythfrontend logs show thousands of lines like the following:

/opt/local/var/log/mythtv.26/mythcommflag.20130816020139.12091.log:2013-08-15 22:17:31.811969 I [12091/20539] Decoder ringbuffer.cpp:1098 (WaitForAvail) - RingBuf(/Volumes/Theatre/Mythical/1411_20130816020000.mpg): Waited 8.0 seconds for data 
...
/opt/local/var/log/mythtv.26/mythcommflag.20130816020139.12091.log:2013-08-15 22:17:42.038103 W [12091/20539] Decoder ringbuffer.cpp:1035 (WaitForReadsAllowed) - RingBuf(/Volumes/Theatre/Mythical/1411_20130816020000.mpg): Taking too long to be allowed to read..
/opt/local/var/log/mythtv.26/mythcommflag.20130816020139.12091.log:2013-08-15 22:17:42.038121 E [12091/20539] Decoder ringbuffer.cpp:1040 (WaitForReadsAllowed) - RingBuf(/Volumes/Theatre/Mythical/1411_20130816020000.mpg): Took more than 10 seconds to be allowed to read, aborting.

mythbackend logs are showing thousands of write-related messages like the following:

/opt/local/var/log/mythtv.26/mythbackend.20130816124946.12592.log:2013-08-16 08:56:01.045786 W [12592/33539] TFWWrite ThreadedFileWriter.cpp:500 (DiskLoop) - TFW(/Volumes/Theatre/Mythical/1051_20130816125200.mpg:121): write(65424) cnt 527 total 37107628 -- took a long time, 4063 ms
/opt/local/var/log/mythtv.26/mythbackend.20130816124946.12592.log:2013-08-16 08:56:05.761917 W [12592/33539] TFWWrite ThreadedFileWriter.cpp:500 (DiskLoop) - TFW(/Volumes/Theatre/Mythical/1051_20130816125200.mpg:121): write(58844) cnt 406 total 29148648 -- took a long time, 4072 ms
/opt/local/var/log/mythtv.26/mythbackend.20130816124946.12592.log:2013-08-16 08:56:09.875380 W [12592/33539] TFWWrite ThreadedFileWriter.cpp:500 (DiskLoop) - TFW(/Volumes/Theatre/Mythical/1051_20130816125200.mpg:121): write(65424) cnt 501 total 35242856 -- took a long time, 4110 ms
/opt/local/var/log/mythtv.26/mythbackend.20130816124946.12592.log:2013-08-16 08:56:12.923309 W [12592/33539] TFWWrite ThreadedFileWriter.cpp:500 (DiskLoop) - TFW(/Volumes/Theatre/Mythical/1051_20130816125200.mpg:121): write(65424) cnt 576 total 40110740 -- took a long time, 2949 ms
/opt/local/var/log/mythtv.26/mythbackend.20130816124946.12592.log:2013-08-16 08:56:17.062006 W [12592/33539] TFWWrite ThreadedFileWriter.cpp:500 (DiskLoop) - TFW(/Volumes/Theatre/Mythical/1051_20130816125200.mpg:121): write(160364) cnt 470 total 31585880 -- took a long time, 4094 ms
/opt/local/var/log/mythtv.26/mythbackend.20130816124946.12592.log:2013-08-16 08:56:21.738661 W [12592/33539] TFWWrite ThreadedFileWriter.cpp:500 (DiskLoop) - TFW(/Volumes/Theatre/Mythical/1051_20130816125200.mpg:121): write(65424) cnt 357 total 24079980 -- took a long time, 4053 ms

The machine is a Mac Mini Core2 Duo running OS X 10.7.5 with a Firewire 400-connected drive for recording storage. Under 0.25 there was no noticeable issue. (Before a fix went into 0.25 in Oct 2012, however, there was much the same problem.)

Attachments (6)

mythcommflag.20130816133229.12780.log (87.8 KB) - added by Craig Treleaven <ctreleaven@…> 6 years ago.
Example mythcommflag log
mythbackend.20130816124946.12592.log (269.2 KB) - added by Craig Treleaven <ctreleaven@…> 6 years ago.
Example mythbackend log
mbe_version_2013Aug16.txt (704 bytes) - added by Craig Treleaven <ctreleaven@…> 6 years ago.
Version info
mythfrontend.20130819020352.31214.log.gz (3.1 KB) - added by tahoward@… 6 years ago.
Frontend log
mythbackend.20130819165556.12653.log.gz (12.0 KB) - added by tahoward@… 6 years ago.
mythbackend log with --nodblog set
0001-Wait-until-there-s-more-space-to-write-in-buffer.026.patch (6.4 KB) - added by JYA 6 years ago.
write in smaller blocks


Change History (27)

Changed 6 years ago by Craig Treleaven <ctreleaven@…>

Example mythcommflag log

Changed 6 years ago by Craig Treleaven <ctreleaven@…>

Example mythbackend log

Changed 6 years ago by Craig Treleaven <ctreleaven@…>

Attachment: mbe_version_2013Aug16.txt added

Version info

comment:1 Changed 6 years ago by JYA

Status: new → infoneeded_new

Which fix to 0.25 are you referring to?

comment:2 Changed 6 years ago by Craig Treleaven <ctreleaven@…>

Just trying to say the symptoms are similar; not that it is the same problem:

http://code.mythtv.org/trac/ticket/11061

I believe it was applied to 0.26 as well.

comment:3 Changed 6 years ago by JYA

Can you run a disk speed test on that firewire disk?

comment:4 Changed 6 years ago by Craig Treleaven <ctreleaven@…>

Disk Utility reports no problems. I let bonnie++ run for >15 minutes but it seemed to be stuck on "Writing intelligently..." However, I could see in Activity Monitor that it was reading at 15 MB/s, so I've got to think the disk is fine. (It has only been in service for about 5 months. This hardware was running 0.25 without problems until a couple of weeks ago.)

comment:5 Changed 6 years ago by JYA

The issue isn't on read but on write...

So you haven't even checked the write speed with a disk speed tool, and you still think the disk is fine?

comment:6 Changed 6 years ago by Craig Treleaven <ctreleaven@…>

I was not using disk speed tools. I'd installed and tried bonnie++ but it seems to be broken. Tried again with bonnie (64-bit):

bonnie -d /Volumes/Theatre/Music -m Media-Mini 
File '/Volumes/Theatre/Music/Bonnie.14734', size: 104857600
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
Media-Mi    0  19.2 38.2  32.4  6.4  32.4  5.9  87.3 100.0 1409.6 100.0  7455 18.0

Comparing that with the internal disk:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
internal    0  29.4 47.1  32.0  6.6  30.0  5.7  86.8 99.9 1302.2 100.0  1733  3.9

Looks OK to me.

comment:7 Changed 6 years ago by tahoward@…

I'm seeing the same thing on a machine I just upgraded from 0.22-svn to 0.26-fixes. This happens with recordings from a PVR-500 and from an HDHomeRun. I also see artifacts (rectangles) in the video around the times when it happens, and recordings exported with nuvexport get progressively out of sync, with audio about 30 seconds behind video after an hour. I'm using a SATA drive, and these are a few stats from hdparm and dd:

[root@motion mythtv]# hdparm -tT /dev/mapper/VolGroup00-MediaRoot?

/dev/mapper/VolGroup00-MediaRoot?:

Timing cached reads:    6968 MB in  2.00 seconds = 3487.73 MB/sec
Timing buffered disk reads:  134 MB in  3.06 seconds =   43.83 MB/sec

[root@motion mythtv]# dd if=/dev/random of=/var/lib/mythvideo/t/test.img bs=8k count=256k

[root@motion mythtv]# dd if=/dev/zero of=/var/lib/mythvideo/t/test.img bs=8k count=256k
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 54.3988 s, 39.5 MB/s

comment:8 Changed 6 years ago by JYA

tahoward... please provide a log with -v audio,playback,file --loglevel=debug

Changed 6 years ago by tahoward@…

Frontend log

comment:9 Changed 6 years ago by JYA

You do realise that your log has absolutely nothing in common with what was originally reported? There's no error or problem in that log, and you didn't even use the logging options I listed...

I'm a tad baffled as to how you would post a "me too" when there's zero in common.

If you are experiencing a problem related to this ticket, then please contribute; otherwise open another ticket.

While we try our best to help, remember that we are volunteers, and posts like this only waste our time.

comment:10 Changed 6 years ago by stuartm

Craig, is your database on the same drive as the recordings? That has always had the potential to cause I/O problems, especially with the database logging in 0.26

comment:11 Changed 6 years ago by JYA

to test Stuart's idea, you can start mythbackend with --nodblog

comment:12 Changed 6 years ago by Craig Treleaven <ctreleaven@…>

Database is on the internal drive (shown in the performance test, above) and the recordings are on an external Firewire 400 drive.

comment:13 Changed 6 years ago by Craig Treleaven <ctreleaven@…>

Tried another test: started both mythbackend and mythfrontend with -q -q; no improvement. Playing back a recording still pauses frequently while a recording is in progress, and backend CPU usage remains in the 25-35% range.

comment:14 Changed 6 years ago by stuartm

-q has no meaning to mythtv applications. Try -v none instead.

comment:15 Changed 6 years ago by stuartm

I take that back, seems -q -q should work, I just never noticed it before.

comment:16 Changed 6 years ago by tahoward@…

Sorry about the log - I was launching the frontend from a different script than the one where I set the logging. But after reading the ticket again, I'm only seeing half of the original problem. Playback on my frontend is fine - I'm seeing the TFWWrite "took a long time" warnings on the backend, and playing or nuvexporting those recordings shows problems. Recordings made before I upgraded still play and export fine, so it looks like at least my problem is only happening at record time and/or on my backend machine. My backend is a headless Core i7 and the frontend is a Pentium 4, both running Fedora.

Changed 6 years ago by tahoward@…

mythbackend log with --nodblog set

comment:17 Changed 6 years ago by Craig Treleaven <ctreleaven@…>

I believe this ticket can be closed. I was wrong; the drive is malfunctioning.

If it helps anyone else, the free Disk Speed Test from Blackmagic Design illustrated the problem. Disk write speed was only 1.5 MB/s--about 15 to 20× slower than normal. Reads were 15 MB/s, which is less than half the normal speed. Despite this, I was able to copy off the full 2 TB of recordings without a single error...although it took 40 hours. Before this, I had never heard of a disk acting in this manner.
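(For anyone without a dedicated benchmarking tool, a rough write/read check like the ones tried earlier in this ticket can be done with dd alone. This is only a sketch: it assumes GNU coreutils dd — macOS's BSD dd has no conv=fsync — and the TARGET path is a placeholder to point at the drive under test.)

```shell
# Rough write/read throughput check with dd (GNU coreutils dd assumed;
# macOS's BSD dd lacks conv=fsync). TARGET is a hypothetical path --
# point it at a file on the drive you want to test.
TARGET=/tmp/ddtest.img

# Write 64 MB and fsync before dd reports, so the figure reflects the
# disk rather than the OS write cache.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync

# Read it back. The page cache can make this look fast; a file larger
# than RAM (or re-mounting the drive first) gives a more honest number.
dd if="$TARGET" of=/dev/null bs=1M

rm -f "$TARGET"
```

A healthy Firewire 400 drive should report tens of MB/s on both passes; single-digit numbers on the write pass would match the failure described above.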

Sorry for the waste of bandwidth.

comment:18 Changed 6 years ago by JYA

Resolution: Invalid
Status: infoneeded_new → closed

comment:19 in reply to:  18 Changed 6 years ago by tahoward@…

Resolution: Invalid
Status: closed → new

Replying to jyavenard:

There definitely is an issue here - I can easily reproduce the problem on my headless backend running 0.26. It may be machine specific, as I'm not seeing it on any of my frontends. It's also not a hardware issue - I've gone back down to 0.22 for now and recordings are working fine, but I'd be happy to provide any debugging assistance if somebody can take a look at it.

Changed 6 years ago by JYA

write in smaller blocks

comment:20 Changed 6 years ago by JYA

Try the provided patch and see if it helps.

But the problem remains that your disk is too slow. Having logs stating so isn't a problem in itself and can be ignored.

comment:21 Changed 6 years ago by JYA

Resolution: Invalid
Status: new → closed

The attached mythbackend.20130819165556.12653.log.gz shows that your drive's average write speed is 618 kB/s... this is way too low, and less than what a typical DVB/ATSC device can stream at.

If all you're worried about is the amount of verbosity in the log: in mythtv/libs/libmythtv/ThreadedFileWriter.cpp, line 497, change LOG_WARNING into LOG_DEBUG.

Using floppy disks as a recording medium is strongly discouraged.
