Multiprocessing in python - processes not closing after completing
I have a process Pool in Python that starts processes as expected; however, I have just realized that these processes are not closed after completing their work (I know that they completed, since the last statement is a file write).
Below is the code, with an example function ppp:



from multiprocessing import Pool
import itertools
import time

# current_milli_time() is a small timing helper defined elsewhere in my script
def ppp(element):
    window, day = element
    print(window, day)
    time.sleep(10)

if __name__ == '__main__':  ## The line marked
    print('START')
    start_time = current_milli_time()
    days = ['0808', '0810', '0812', '0813', '0814', '0817', '0818', '0827']
    windows = [1000, 2000, 3000, 4000, 5000, 10000, 15000, 20000, 30000, 60000, 120000, 180000]
    processes_args = list(itertools.product(windows, days))
    pool = Pool(8)
    results = pool.map(ppp, processes_args)
    pool.close()
    pool.join()
    print('END', current_milli_time() - start_time)


I am working on Linux (Ubuntu 16.04). Everything was working fine before I added the marked line in the example. I am wondering whether that behavior could be related to the missing return statement. Anyway, this is what my htop looks like:
[htop screenshot: the pool's worker processes are still listed after the work is done]
As you can see, no process has been closed, but all of them have completed their work.



I found this related question: Python Multiprocessing pool.close() and join() does not close processes; however, I have not understood whether the solution to this problem is to use map_async instead of map.
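For completeness, this is how I read the map_async alternative from the linked question. It is only a minimal, self-contained sketch (dummy arguments instead of my real ones), based on the standard multiprocessing API:

from multiprocessing import Pool
import itertools
import time

def ppp(element):
    window, day = element
    print(window, day)
    time.sleep(1)

if __name__ == '__main__':
    processes_args = list(itertools.product([1000, 2000], ['0808', '0810']))
    # Leaving the with-block calls pool.terminate(), so no worker should be left behind.
    with Pool(4) as pool:
        async_result = pool.map_async(ppp, processes_args)
        results = async_result.get()  # get() blocks until every task has finished
    print('DONE')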



EDIT: real function code:



def process_day(element):
    window, day = element
    noise = 0.2
    print('Processing day:', day, ', window:', window)
    individual_files = glob.glob('datan/'+day+'/*[0-9].csv')
    individual = readDataset(individual_files)
    label_time = individual.loc[(individual['LABEL_O'] != -2) | (individual['LABEL_F'] != -2), 'TIME']
    label_time = list(np.unique(list(label_time)))
    individual = individual[individual['TIME'].isin(label_time)]
    # Saving IDs for further processing
    individual['ID'] = individual['COLLAR']
    # Time variable in seconds for aggregation and merging
    individual['TIME_S'] = individual['TIME'].copy()
    noise_x = np.random.normal(0, noise, len(individual))
    noise_y = np.random.normal(0, noise, len(individual))
    noise_z = np.random.normal(0, noise, len(individual))
    individual['X_AXIS'] = individual['X_AXIS'] + noise_x
    individual['Y_AXIS'] = individual['Y_AXIS'] + noise_y
    individual['Z_AXIS'] = individual['Z_AXIS'] + noise_z
    # Time synchronization (applying milliseconds for time series processing)
    print('Time syncronization:')
    with progressbar.ProgressBar(max_value=len(individual.groupby('ID'))) as bar:
        for baboon, df_baboon in individual.groupby('ID'):
            times = list(df_baboon['TIME'].values)
            d = Counter(times)
            result = []
            # spread rows that share a timestamp evenly across the second (milliseconds)
            for timestamp in np.unique(times):
                for i in range(0, d[timestamp]):
                    result.append(str(timestamp + i*1000/d[timestamp]))
            individual.loc[individual['ID'] == baboon, 'TIME'] = result
            bar.update(1)

    # Time series process
    ts_process = time_series_processing(window, 'TIME_S', individual, 'COLLAR', ['COLLAR', 'TIME', 'X_AXIS', 'Y_AXIS', 'Z_AXIS'])
    # Aggregation and tsfresh
    ts_process.do_process()
    individual = ts_process.get_processed_dataframe()
    individual.to_csv('noise2/processed_data/'+str(window)+'/agg/'+str(day)+'.csv', index=False)
    # Network inference process
    ni = network_inference_process(individual, 'TIME_S_mean')
    # Inference
    ni.do_process()
    final = ni.get_processed_dataframe()
    final.to_csv('noise2/processed_data/'+str(window)+'/net/'+str(day)+'.csv', index=False)
    # Saving non-aggregated ground truth
    ground_truth = final[['ID_mean', 'TIME_S_mean', 'LABEL_O_values', 'LABEL_F_values']].copy()
    # Neighbor features process
    neighbors_features_f = ni.get_neighbor_features(final, 'TIME_S_mean', 'ID_mean')
    neighbors_features_f = neighbors_features_f.drop(['LABEL_O_values_n', 'LABEL_F_values_n'], axis=1)
    neighbors_features_f.to_csv('noise2/processed_data/'+str(window)+'/net/'+str(day)+'_neigh.csv', index=False)
    # Final features dataframe
    final_neigh = pd.merge(final, neighbors_features_f, how='left', left_on=['TIME_S_mean', 'ID_mean'], right_on=['TIME_S_mean_n', 'BABOON_NODE_n'])
    final_neigh.to_csv('noise2/processed_data/'+str(window)+'/complete/'+str(day)+'.csv', index=False)
    return


So, as you can see, the last statement is a write to a file, and it is executed by all the processes; I do not actually think that the problem is inside this function.
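To double-check that the workers really are gone after the pool is joined (rather than trusting htop alone), this is a small self-contained check I could run; multiprocessing.active_children() should return an empty list once close() and join() have completed:

from multiprocessing import Pool, active_children
import os
import time

def ppp(element):
    time.sleep(1)

if __name__ == '__main__':
    pool = Pool(4)
    pool.map(ppp, range(8))
    pool.close()
    pool.join()
    # active_children() also reaps finished children; an empty list here means
    # no worker of this parent process (PID printed below) is still alive.
    print('parent PID:', os.getpid())
    print('still alive:', [p.pid for p in active_children()])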










python python-multiprocessing python-pool

asked Nov 11 at 18:36 – Guido Muscioni
edited Nov 11 at 20:58
  • It looks like the script that you're running isn't exiting (on your HPC cluster?), and your example doesn't reproduce the problem. As a general rule, mp.Pool doesn't play well with job schedulers like Slurm. I can't give you any specific advice without more information.
    – CJ59, Nov 11 at 19:14

  • Which information should I provide? Unfortunately, I cannot post the entire code; the point I want to stress is that without the marked line everything works fine. @CJ59
    – Guido Muscioni, Nov 11 at 19:31

  • Well, the marked line doesn't do anything in the context of your example, so take it out? I'd bet it doesn't work fine though, it just fails more quietly. Are you running this through a job scheduler? If it's off the command line, how do you know it completes the work?
    – CJ59, Nov 11 at 19:39

  • The last statement of each process is the creation of a CSV. Regarding the marked line, if I remove it everything is fine on Linux, but Windows requires it for compatibility. No job scheduler; it is the first time ever I get this behavior from the server.
    – Guido Muscioni, Nov 11 at 19:41

  • You're probably blocking on pool.map(), so you should double-check that all your files are there and that there aren't any hanging file handles. You could also replace .join() with .terminate(), but it shouldn't matter. Make sure you're clearing all the old processes and output files before you run it again. It looks like you have a lot of stale processes in that list.
    – CJ59, Nov 11 at 20:05
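A minimal sketch of the terminate-instead-of-join variant mentioned in the last comment, as I understand it (pool.map() has already returned at that point, so terminate() only stops the now-idle workers):

from multiprocessing import Pool
import time

def ppp(element):
    time.sleep(1)

if __name__ == '__main__':
    pool = Pool(4)
    results = pool.map(ppp, range(8))  # blocks until all tasks have completed
    pool.terminate()                   # stop the now-idle workers immediately
    pool.join()                        # wait for the worker processes to exit
    print('workers reaped')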















