Opened 11 years ago

Closed 11 years ago

#5447 closed defect (fixed)

Memory leak in backend

Reported by: wieke
Owned by: Janne Grunau
Priority: minor
Milestone: 0.22
Component: mythtv
Version: head
Severity: low
Keywords: memory leak
Cc:
Ticket locked: no

Description

Same problem as (closed) ticket 5436. I have the same symptoms on my backend (Q6600, 4 GB RAM), using the latest trunk on a fresh, fully updated Ubuntu 8.04 install. Symptoms as follows:

Every 30-60 seconds memory usage increases by approximately 500 kB to 3000 kB, typically around 1300 kB, with and without recordings (checked with atop).

Using the --noupnp flag does not change the symptoms, and neither does a distclean and/or uninstall. I have to stop/start the MythTV backend once every day, otherwise the system becomes unresponsive (memory and swap fill up, load average 30+). I can produce gdb logs, as described in the Trac help, in the next couple of days. If a valgrind log is required, I would appreciate some instructions/links.

Please include all output in bug reports.
MythTV Version : 17483
MythTV Branch : trunk
Library API : 0.22.20080612-1
Network Protocol : 40
QT Version : 4.4.0
Options compiled in:

linux release using_oss using_alsa using_arts using_jack using_backend using_dbox2 using_dvb using_firewire using_frontend using_hdhomerun using_iptv using_ivtv using_v4l using_x11 using_xrandr using_xv using_xvmc using_xvmcw using_xvmc_vld using_bindings_perl using_bindings_python using_opengl using_ffmpeg_threads using_libavc_5_3 using_live

Attachments (3)

gdb.txt.tar.gz (9.0 KB) - added by wieke 11 years ago.
gdb log file
mythbackend.log.tar.gz (3.6 KB) - added by wieke 11 years ago.
Valgrind logfile
mythbackend.log.tar.2.gz (3.0 KB) - added by wieke 11 years ago.
New Valgrind logfile

Download all attachments as: .zip

Change History (19)

Changed 11 years ago by wieke

Attachment: gdb.txt.tar.gz added

gdb log file

comment:1 Changed 11 years ago by tomimo@…

It sounds like the original reporter is using some sort of process health checker that makes requests to port 6544 every 1 or 2 minutes; this leaks memory quite a lot in the latest trunk (tested with SVN 17481).

How to reproduce:

  1. Setup a few recording schedules
  2. Run the following command: 'while true; do lynx -dump http://xxx.xxx.xxx.xxx:6544/; sleep 1; done'
  3. Watch the backend memory grow constantly

Depending on the size of the system configuration (i.e. how many recording schedules you have, etc.), this will consume all the memory of the host machine within a day.

comment:2 Changed 11 years ago by wieke

Thanks, tomimo, for your response. I have tried the lynx dump as you described and the symptoms match. Although I did not run the check for hours, I see the same growth rate in memory. The only difference is that I see it happening more than once a minute on my backend.

comment:3 Changed 11 years ago by wieke

In the last week I had to rebuild my backend computer because of a crashed disk. I did a completely new install, only importing the MythVideo metadata and the tables regarding recordings. (I also started to use the multirec capabilities.) The memory leak is still there, but at the moment at an acceptable rate: it grows by about 10 MB per day.

comment:4 Changed 11 years ago by wieke

I have finished rebuilding my computer, including Nagios, which I use to check my backend status by polling <ip-address>:6544 and reading the available information. Because of the memory leak combined with a once-a-minute check, the leaked memory was becoming huge. The leak is still there, but it is not as large in 'normal' situations.

comment:5 Changed 11 years ago by danielk

Priority: major → minor
Severity: medium → low
Status: new → infoneeded_new
Version: unknown → head

Please compile the latest head with --enable-valgrind and run the backend under valgrind until it has leaked noticeably. I changed --enable-valgrind slightly to allow data on the MPEG table parser to be collected.

comment:6 Changed 11 years ago by wieke

I have compiled with --enable-valgrind. I am not familiar with valgrind, but I am running it as follows:

valgrind --log-file=/tmp/mythbackend.log --leak-check=full mythbackend

If any other options are needed, please let me know. I have turned on the 1-minute check by Nagios at <backend-ip-address>:6544, so the memory leak will appear. This check is nothing more than an HTTP read of the presented web page.

MythTV Version : 17963
MythTV Branch : trunk
Library API : 0.22.20080725-2
Network Protocol : 40
QT Version : 4.4.0
Options compiled in:

linux release using_oss using_alsa using_arts using_jack using_backend using_dbox2 using_dvb using_firewire using_frontend using_hdhomerun using_iptv using_ivtv using_v4l using_valgrind using_x11 using_xrandr using_xv using_xvmc using_xvmcw using_xvmc_vld using_bindings_perl using_bindings_python using_opengl using_ffmpeg_threads using_libavc_5_3 using_live

I will post the log file in about 10 hours.

Changed 11 years ago by wieke

Attachment: mythbackend.log.tar.gz added

Valgrind logfile

comment:7 Changed 11 years ago by wieke

The log file is uploaded, though I am not sure it went well. My backend behaved differently with the valgrind option enabled. At first I let it run for about 8 hours with the 1-minute web page check. The result was a 'memcheck' process using several GB of memory, and my backend was almost totally unresponsive; my only option was to kill the process (screen) in which valgrind was running. That run produced a log file in which only the first 5-10 minutes were logged. I started it again without the one-minute web check, instead checking the website manually with a couple of refreshes. The memory consumption was visible almost instantly. That log file is attached. At the same time smbd was running wild (CPU and memory), which is something I have not seen before.

In the next couple of days I will do some tests with and without valgrind to make sure whether or not smbd is related to this problem.

Changed 11 years ago by wieke

Attachment: mythbackend.log.tar.2.gz added

New Valgrind logfile

comment:8 Changed 11 years ago by wieke

Attached a new valgrind log file. The smbd running wild, as mentioned in the previous comment, was caused by virtual guest systems of VMware Server, which run on the mythbackend server. I do not think the first log file is contaminated by this, but to be sure I have run the same test with Nagios and VMware Server disabled. The memory leak was substantial, around 200 MB in the few hours that I ran it. About 5 minutes after the start, when mythbackend was up and running, I viewed the web page at <mythbackend-ipaddress>:6544 and did a couple of refreshes. I did the same in the last minutes of the logging.

comment:9 Changed 11 years ago by stuartm

Status: infoneeded_new → new

comment:10 Changed 11 years ago by stuartm

Status: new → infoneeded_new

Please re-run valgrind with --show-reachable=yes

comment:11 Changed 11 years ago by Dibblah

This memory leak only appears for me when a recording is in progress. For a single recording, it is on the order of 1600 kB per view of the status page.

I will attempt to triage it soon, if someone else doesn't beat me to it.

It would be useful if someone can get the information that stuartm asked for two months ago.

comment:12 Changed 11 years ago by Janne Grunau

Status: infoneeded_new → new

The leak is caused by a missing event loop in ThreadWorker?-derived threads, which causes objects scheduled for deletion with deleteLater() to stay around forever.
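
For illustration, a minimal sketch of the pattern described above (hypothetical names, not MythTV's actual code). In Qt, deleteLater() only posts a DeferredDelete event; if the owning thread never returns to an event loop, that event is never processed and the object is never destroyed:

    #include <QThread>
    #include <QObject>

    // Hypothetical worker thread illustrating the leak: run() busy-loops and
    // never enters a Qt event loop, so the DeferredDelete events posted by
    // deleteLater() are never delivered and the objects accumulate.
    class LeakyWorkerThread : public QThread
    {
      protected:
        virtual void run(void)
        {
            while (true)
            {
                // Hypothetical per-request object, e.g. something created
                // for each hit on the port 6544 status page.
                QObject *request = new QObject();

                // ... handle the request ...

                // Posts a DeferredDelete event, but nothing in this thread
                // ever processes posted events, so 'request' is leaked.
                request->deleteLater();
            }
        }
    };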

comment:13 Changed 11 years ago by Janne Grunau

Owner: changed from Isaac Richards to Janne Grunau
Status: new → assigned

comment:14 Changed 11 years ago by Janne Grunau

Resolution: fixed
Status: assigned → closed

(In [19226]) Use an event loop in WorkerThread? to free QObject inheriting objects properly

This fixes several leaks in the HTTP status page. Fixes #5447
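
For context, the general fix pattern looks roughly like the following minimal sketch (an assumption on my part, not the actual [19226] diff): run an event loop in the worker thread so that posted DeferredDelete events are processed and deleteLater() actually frees the objects.

    #include <QThread>

    // Hypothetical worker thread showing the fix pattern: instead of a busy
    // loop, run() enters the thread's event loop, so work can be delivered
    // via queued signals/slots and deleteLater() calls are honoured.
    class WorkerThread : public QThread
    {
      protected:
        virtual void run(void)
        {
            // exec() processes posted events for objects living in this
            // thread, including DeferredDelete events, so the per-request
            // objects created for the status page no longer pile up.
            exec();
        }
    };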

comment:15 Changed 11 years ago by Nigel

Resolution: fixed
Status: closed → new

It looks like [19226] somehow causes problems in the backend. I am still trying to work out how, but if a UPnP client is playing media, both the MythTV protocol server (port 6543) and the HTTP server (port 6544) become inactive. Commands sent to the MythTV protocol port are only responded to once UPnP playback stops (within the client timeout period).
There are no backend errors. I have not tried -v upnp on mythbackend yet.

comment:16 Changed 11 years ago by stuartm

Resolution: fixed
Status: new → closed

Fixed in [19779]

Note: See TracTickets for help on using tickets.