python - How to Clean Up subprocess.Popen Instances Upon Process Termination


I have a JavaScript application running on a Python / PyQt / QtWebKit foundation that creates subprocess.Popen objects to run external processes.

The Popen objects are kept in a dictionary and referenced by an internal identifier, so that the JS app can call a Popen's methods via a pyqtSlot, such as poll() to determine whether a process is still running, or kill() to kill a rogue process.

If a process is not running any more, I want to remove its Popen object from the dictionary so it can be garbage collected.

What is the recommended approach for cleaning up the dictionary automatically to prevent a memory leak?

My ideas so far:

  • Call Popen.wait() in one thread per spawned process and perform the cleanup right upon termination.
    Pro: immediate cleanup; the threads should not cost CPU power since they are sleeping, right?
    Con: many threads, depending on spawning activity.
  • Use a single thread to call Popen.poll() on all existing processes, check the returncode of those that have terminated, and clean up in that case.
    Pro: one worker thread for all processes, lower memory usage.
    Con: periodic polling is necessary; higher CPU usage if there are many long-running processes or lots of processes spawned.

Which one would you choose, and why? Or are there better solutions?

For a platform-agnostic solution, I'd go with option #2, since the "con" of high CPU usage can be circumvented with something like...

import time

# Assuming the Popen objects are the values of the dictionary
process_dict = { ... }

def my_thread_main():
    while True:
        dead_keys = []
        for k, v in process_dict.items():
            v.poll()
            if v.returncode is not None:
                dead_keys.append(k)
        if not dead_keys:
            time.sleep(1)  # Adjust sleep time to taste
            continue
        for k in dead_keys:
            del process_dict[k]

...whereby, if no processes died on an iteration, we sleep for a bit.

So, in effect, the thread is still sleeping most of the time, and although there's a potential latency between a child process dying and its subsequent "cleanup", it's not a big deal, and this should scale better than using one thread per process.

There are better platform-dependent solutions, however.

For Windows, you should be able to use the WaitForMultipleObjects function via ctypes, as ctypes.windll.kernel32.WaitForMultipleObjects, although you'd have to investigate its feasibility.

For OS X and Linux, it's probably easiest to handle SIGCHLD asynchronously, using the signal module.

A quick 'n' dirty example...

import os
import time
import signal
import subprocess

# Map child PID to Popen object
subprocesses = {}

# Define the handler
def handle_sigchld(signum, frame):
    pid = os.wait()[0]
    print('Subprocess pid=%d ended' % pid)
    del subprocesses[pid]

# Handle SIGCHLD
signal.signal(signal.SIGCHLD, handle_sigchld)

# Spawn a couple of subprocesses
p1 = subprocess.Popen(['sleep', '1'])
subprocesses[p1.pid] = p1
p2 = subprocess.Popen(['sleep', '2'])
subprocesses[p2.pid] = p2

# Wait for all subprocesses to die
while subprocesses:
    print('tick')
    time.sleep(1)

# Done
print('All subprocesses died')
