Opened 10 years ago
Closed 7 years ago
Older Recordings Deleted After Database Connection Fails
v0.28-pre-1463-g23813a5
I use MythTV as a timeshifter, so I do not normally keep recordings for very long, and I did not notice straight away when some went missing. On Jun 6th I went into "I am sure I recorded that" mode and noticed that I did not have any recordings older than May 25th. Checking the logs, I found that at 13:15 on Jun 4th something had cleared out 25 items using DoHandleDelete3.
The nasty part seems to be that the backend lost the database connection for some reason, fell back to a storage group of /var/lib/mythtv/recordings (which is on a different partition to the actual one), and then flushed out the older recordings.
Jun 4 13:15:12 tv mythbackend: mythbackend[10065]: E ProcessRequest mythdbcon.cpp:217 (OpenDatabase) Driver error was [1/1040]:#012QMYSQL: Unable to connect#012Database error was:#012Too many connections
....
Jun 4 13:15:12 tv mythbackend: mythbackend[10065]: E ProcessRequest mythdb.cpp:183 (DBError) DB Error (StorageGroup::StorageGroup()):#012Query was:#012#012Bindings were:#012:GROUP="Default"#012No error type from QSqlError? Strange...
Jun 4 13:15:12 tv mythbackend: mythbackend[10065]: E ProcessRequest storagegroup.cpp:187 (Init) SG(LiveTV): Unable to find any Storage Group Directories. Using old 'RecordFilePrefix' value of '/var/lib/mythtv/recordings'
Jun 4 13:15:30 tv mythbackend: mythbackend[10065]: I Scheduler scheduler.cpp:2103 (HandleReschedule) Reschedule requested for CHECK 0 2542 0 DoHandleDelete3 | Orphan Black | 4/10 | Governed as It Were by Chance: Sci-fi drama series. With Cosima's help, Sarah digs into the origins of the clone experiment. The hunt takes her right into the belly of the beast. | fp.bbc.co.uk/4j63iy
Jun 4 13:15:30 tv mythbackend: mythbackend[10065]: I Scheduler scheduler.cpp:2103
# then deletes 13 more items
It might be a good idea to inhibit the housekeeper after a database connection failure.
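As a rough illustration of that suggestion, here is a minimal sketch, not MythTV's actual code: the names databaseIsHealthy and runHousekeepingCycle are made up for this example. It checks the default Qt SQL connection with a trivial query before any expire work is attempted.

#include <QSqlDatabase>
#include <QSqlQuery>
#include <QDebug>

// Hypothetical guard: only report the DB as healthy if the default
// connection can answer a trivial query, so failures such as
// "Too many connections" are caught before any expire pass starts.
static bool databaseIsHealthy()
{
    QSqlDatabase db = QSqlDatabase::database();  // default connection
    if (!db.isOpen() && !db.open())
        return false;

    QSqlQuery ping(db);
    return ping.exec("SELECT 1");
}

// Hypothetical housekeeper entry point: skip the whole cycle when the
// database is unreachable, because without it the storage groups cannot
// be read and the RecordFilePrefix fallback may point at the wrong
// partition.
void runHousekeepingCycle()
{
    if (!databaseIsHealthy())
    {
        qWarning() << "Housekeeper: database unavailable, skipping this cycle";
        return;
    }
    // ... normal auto-expire / cleanup work would go here ...
}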
Change History (10)
Component: MythTV - General → MythTV - Housekeeper
Milestone: unknown → 0.27.2
Owner: set to Raymond Wagner
Priority: minor → blocker
Severity: medium → high
Milestone: 0.27.2 → 0.27.4
Milestone: 0.27.4 → 0.27.6
Owner: Raymond Wagner deleted
Status: accepted → new
Status: new → infoneeded_new
Milestone: 0.27.6 → 0.27.7
Resolution: set to Invalid
Status: infoneeded_new → closed
This appears to be a configuration / packaging error that triggered a bug. If this is the case then fixing the configuration error would be the short term solution. Fixing the subsequent error would be for 0.28.
What is your MySQL connection limit? For the Debian / Ubuntu packaging we explicitly set it to 100 (see https://github.com/MythTV/packaging/blob/master/deb/debian/mythtv.cnf). From your other tickets it appears you may be using Mythbuntu packaging, but it's not clear whether the MySQL configuration is working as designed.
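For reference, the setting in that packaging file amounts to the following excerpt; check the MySQL configuration actually in effect on your system, since other included files can override it:

[mysqld]
max_connections = 100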
Can you identify why the auto-expirer decided to delete these 25 recordings? Was it to make room for a running recording, deleting one recording after another from the database, then seeing that there still was not enough free space and deleting the next one?
Did the auto-expirer remove only the database entries, or also the actual recording files? I'm wondering whether testing for the existence of the file before issuing the deletion request would prevent this issue: without knowledge of the storage groups it would not find the file and would skip the recording. A sketch of that check follows.
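A minimal sketch of that pre-delete check, assuming the expirer already knows the full path it expects the recording file to be at. QFile::exists is standard Qt; safeToExpire is an illustrative name, not MythTV's API.

#include <QDebug>
#include <QFile>
#include <QString>

// Hypothetical pre-delete check: if the storage-group lookup fell back to a
// wrong prefix, the computed path will not exist, so skip the recording
// rather than delete its database entry (and possibly the wrong file).
bool safeToExpire(const QString &expectedFilePath)
{
    if (!QFile::exists(expectedFilePath))
    {
        qWarning() << "Expirer: recording file not found at" << expectedFilePath
                   << "- skipping delete (possible storage group misconfiguration)";
        return false;
    }
    return true;
}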
Another idea is to remove the fallback to pre-StorageGroup configurations for good.