Opened 17 years ago
Closed 16 years ago
#2817 closed defect (wontfix)
Unknown problem with socket communications causes segfault of backends
Reported by: | | Owned by: | Isaac Richards
---|---|---|---|
Priority: | minor | Milestone: | 0.21 |
Component: | mythtv | Version: | 0.20 |
Severity: | medium | Keywords: | |
Cc: | | Ticket locked: | no
Description
I'm seeing random segfaults, on average once daily, on both of my backends, so I enabled core dumps. I had originally planned to run mythbackend under gdb until I could catch one of these, but gdb exits and kills mythbackend on roughly 1 out of every 5 channel changes for some reason, so core dumps are the only way I can get a backtrace of the problem. I have a total of 6 backtraces from my two backends, all of which I believe relate to inter-backend communications.
Attachments (11)
Change History (19)
comment:1 Changed 17 years ago by
Let me just add one other note. The first 4 backtraces were taken on my backend corbin3, which was the slave backend at the time. The last two backtraces are from my current slave backend, corbin2. Between backtraces 4 and 5 I swapped the roles of my backends in an attempt to track down this issue. This is running svn 12173.
comment:2 Changed 17 years ago by
I have been able to reproduce the crash in MainServer::customEvent() about 50% of the time with the following setup:
A remote frontend talking to a master backend with 2x HDHomeRun and 2x FireWire tuners. Start Live TV (on an HDHomeRun tuner), go to the EPG guide, and select and tune a channel that is only supported by the FireWire tuners. It will crash about 50% of the time at that point.
comment:3 Changed 17 years ago by
Added two more backtraces: bt5 on the master backend and, 4 minutes later, bt5a on my slave.
Changed 17 years ago by
Attachment: | mythbackend-r12434-crash-gdb.txt added |
---|
Backtrace of mythbackend crash on SUSE Linux 10.1 32-bit (compiled with --compile-type=debug)
Changed 17 years ago by
Attachment: | mythevent.diff added |
---|
comment:4 Changed 17 years ago by
Attached mythevent.diff to the ticket; see if this fixes the customEvent crash in the backend.
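For context, here is a minimal sketch of the general technique a patch like this might use; it is illustrative only, not the actual contents of mythevent.diff. On Qt 3.x, implicitly shared classes such as QString and QStringList are not safe to share across threads, so one common fix is to deep-copy the event payload when the event is constructed. The SafeMythEvent name and the event type value below are hypothetical:

    #include <qdeepcopy.h>
    #include <qevent.h>
    #include <qstring.h>
    #include <qstringlist.h>

    // Illustrative stand-in for MythEvent; the class name and event type
    // value are hypothetical, and this is not the actual mythevent.diff.
    class SafeMythEvent : public QCustomEvent
    {
      public:
        SafeMythEvent(const QString &msg, const QStringList &extra)
            : QCustomEvent(QEvent::User + 1000),   // arbitrary custom event type
              m_message(QDeepCopy<QString>(msg))   // detach from the sender's buffer
        {
            // Deep-copy each element so the receiving thread never shares
            // string data with the thread that posted the event.
            for (QStringList::ConstIterator it = extra.begin();
                 it != extra.end(); ++it)
            {
                m_extradata.append(QDeepCopy<QString>(*it));
            }
        }

        const QString &Message() const { return m_message; }
        const QStringList &ExtraDataList() const { return m_extradata; }

      private:
        QString m_message;
        QStringList m_extradata;
    };

The point of doing the copy at construction time is that it happens on the sending thread, before the event crosses the thread boundary, so the receiver's data can never be mutated out from under it.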
comment:5 Changed 17 years ago by
That patch didn't seem to help, though it did produce a different backtrace, which may give more clues as to what's going on. Backtrace attached.
comment:6 Changed 17 years ago by
With some debugging enabled, the following shows the last messages passing through customEvent() before the last segfault I captured in MainServer::customEvent():
2007-01-11 11:44:30.368 Finished recording Munich: channel 1502
2007-01-11 11:44:30.374 ---------> MythEvent(): empty
2007-01-11 11:44:30.387 ---------> MythEvent(): Message: RECORDING_LIST_CHANGE
2007-01-11 11:44:30.392 ---------> MythEvent(): empty
2007-01-11 11:44:30.535 ---------> MythEvent(): Message: RECORDING_LIST_CHANGE
2007-01-11 11:44:30.541 ---------> MythEvent(): empty
2007-01-11 11:44:31.959 ---------> MythEvent(): Message: QUERY_NEXT_LIVETV_DIR 5
2007-01-11 11:44:31.964 ---------> MythEvent(): empty
2007-01-11 11:44:32.042 ---------> MythEvent(): Message: RECORDING_LIST_CHANGE
2007-01-11 11:44:32.049 ---------> MythEvent(): empty
2007-01-11 11:44:32.073 ---------> MythEvent(): Message: LIVETV_CHAIN UPDATE live-pc4-2007-01-11T09:08:54
2007-01-11 11:44:32.079 ---------> MythEvent(): empty
2007-01-11 11:44:33.232 ---------> MythEvent(): Message: SIGNAL 5
The very next line would have shown each entry in extradata (or "empty"), but instead this is where the segfault occurred, in the code where I attempted to print out the entries:
for (QStringList::Iterator it = _list.begin(); it != _list.end(); ++it)
    VERBOSE(VB_IMPORTANT, QString("---------> MythEvent(): %1").arg(*it));
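If the crash really is a race on the implicitly shared QStringList (another thread mutating _list while this loop walks it), one hedged way to harden the debug loop itself is to snapshot the list into deep-copied strings before iterating. A sketch, assuming Qt 3.x and MythTV's VERBOSE/VB_IMPORTANT macros as used above; note this only narrows the race window, since the snapshot loop still reads _list, so the robust fix belongs at event construction as sketched earlier:

    #include <qdeepcopy.h>
    #include <qstringlist.h>

    // Snapshot the list so the logging pass never touches data another
    // thread may be mutating. "_list" is the extradata list from above.
    QStringList snapshot;
    for (QStringList::ConstIterator it = _list.begin(); it != _list.end(); ++it)
        snapshot.append(QDeepCopy<QString>(*it));

    for (QStringList::ConstIterator it = snapshot.begin();
         it != snapshot.end(); ++it)
    {
        VERBOSE(VB_IMPORTANT, QString("---------> MythEvent(): %1").arg(*it));
    }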
comment:7 Changed 16 years ago by
Milestone: | unknown → 0.21 |
---|
comment:8 Changed 16 years ago by
Resolution: | → wontfix |
---|---|
Status: | new → closed |
We're no longer fixing 0.20 bugs.
Backtrace 1 of my then-slave backend corbin3