Large-Scale Population Synthesis on HPC Facilities
If you haven't done so yet, export the POSYDON path environment variables. For example:
[1]:
%env PATH_TO_POSYDON=/Users/simone/Google Drive/github/POSYDON-public/
%env PATH_TO_POSYDON_DATA=/Volumes/T7/
env: PATH_TO_POSYDON=/Users/simone/Google Drive/github/POSYDON-public/
env: PATH_TO_POSYDON_DATA=/Volumes/T7/
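Before launching a long run, it can help to confirm both variables are set and point at existing directories. The following is a small sanity-check sketch (not part of POSYDON itself):

```python
import os

# Sketch: check that the POSYDON environment variables are set and
# point at directories that actually exist on this machine.
status = {}
for var in ("PATH_TO_POSYDON", "PATH_TO_POSYDON_DATA"):
    value = os.environ.get(var)
    status[var] = value is not None and os.path.isdir(value)
print(status)
```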
Creating the Initialization File to Run the Binary Population Synthesis Model
Let's copy the default population synthesis model parameter file to your working directory.
[ ]:
import os
import shutil
from posydon.config import PATH_TO_POSYDON
path_to_params = os.path.join(PATH_TO_POSYDON, "posydon/popsyn/population_params_default.ini")
shutil.copyfile(path_to_params, './population_params.ini')
Open the population_params.ini
file and make the following edits to run a large model at 8 different metallicities:
set
metallicity = [2., 1., 0.45, 0.2, 0.1, 0.01, 0.001, 0.0001]
set
number_of_binaries = 1000000
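Instead of editing the file by hand, you can apply the two changes programmatically. The sketch below uses a simple regular-expression substitution and assumes the file uses plain `key = value` lines, as the default file does; it is demonstrated on a small sample string rather than the real file:

```python
import re

def set_param(text, key, value):
    """Replace the value of a `key = ...` line in an ini-style string."""
    return re.sub(rf"^{key}\s*=.*$", f"{key} = {value}", text, flags=re.M)

# Demo on a minimal sample; for the real run, read and write
# population_params.ini instead.
sample = "metallicity = [1.]\nnumber_of_binaries = 10\n"
sample = set_param(sample, "metallicity",
                   "[2., 1., 0.45, 0.2, 0.1, 0.01, 0.001, 0.0001]")
sample = set_param(sample, "number_of_binaries", "1000000")
print(sample)
```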
Running the Population Synthesis Model
Write the binary population synthesis script to a file so that it can be submitted with SLURM on the HPC facility.
[2]:
%%writefile script.py
from posydon.popsyn.synthetic_population import SyntheticPopulation
if __name__ == "__main__":
    synth_pop = SyntheticPopulation("./population_params.ini")
    synth_pop.evolve()
Writing script.py
Run the simulation on the HPC facility using the slurm magic command.
[4]:
# Note: if you do not have the slurm magic commands installed, you can install them with the following line
# !pip install git+https://github.com/NERSC/slurm-magic.git
[5]:
%load_ext slurm_magic
The following slurm file works on the UNIGE HPC facility.
[ ]:
%%sbatch
#!/bin/bash
#SBATCH --array=0-34
#SBATCH --partition=private-astro-cpu
#SBATCH --job-name=pop_syn
#SBATCH --output=./pop_synth_%A_%a.out
#SBATCH --mail-type=FAIL
#SBATCH --mail-user=user@email.ch
#SBATCH --time=24:00:00
#SBATCH --mem-per-cpu=4G
export PATH_TO_POSYDON=/srv/beegfs/scratch/shares/astro/posydon/simone/POSYDON-public/
export PATH_TO_POSYDON_DATA=/srv/beegfs/scratch/shares/astro/posydon/POSYDON_GRIDS_v2/POSYDON_data/230914/
python ./script.py
The following slurm file works on the Northwestern HPC facility.
[ ]:
%%sbatch
#!/bin/bash
#SBATCH --account=b1119
#SBATCH --partition=posydon-priority
#SBATCH --array=0-34
#SBATCH --job-name=pop_syn
#SBATCH --output=./pop_syn.out
#SBATCH --mail-type=FAIL
#SBATCH --mail-user=user@email.ch
#SBATCH --time=24:00:00
#SBATCH --mem-per-cpu=4G
export PATH_TO_POSYDON=/projects/b1119/ssb7065/POSYDON-public/
export PATH_TO_POSYDON_DATA=/projects/b1119/POSYDON_GRIDS/POSYDON_popsynth_data/v2/230816/
python ./script.py
Combining runs into single metallicity files
The above process creates a temporary batch folder per metallicity, in which the sub-processes deposit their output. After the processes are done, the files have to be combined into one population file per metallicity. The following code performs this concatenation and is similar to the code shown in the first tutorial.
[ ]:
%%writefile concat_runs.py
from posydon.popsyn.synthetic_population import SyntheticPopulation
from posydon.config import PATH_TO_POSYDON_DATA
import os
if __name__ == "__main__":
    synth_pop = SyntheticPopulation("./population_params.ini")
    # Get the paths to the batch folders in the current directory
    x = os.listdir('.')
    path_to_batches = [i for i in x if i.endswith('_batches')]
    synth_pop.merge_parallel_runs(path_to_batches)
This script can be run manually after the job array has finished, or it can be submitted as another SLURM job that only starts once the job array is done.
For the job to wait on the job array, its job-name
has to match the job array's name and dependency=singleton
has to be set. If the job array does not finish correctly, this job will never run!
[ ]:
%%sbatch
#!/bin/bash
#SBATCH --job-name=pop_syn
#SBATCH --partition=private-astro-cpu
#SBATCH --output=population_concat.out
#SBATCH --mail-type=FAIL
#SBATCH --mail-user=user@email.ch
#SBATCH --time=01:00:00
#SBATCH --mem=4GB
#SBATCH --dependency=singleton
export PATH_TO_POSYDON=/srv/beegfs/scratch/shares/astro/posydon/simone/POSYDON-public/
export PATH_TO_POSYDON_DATA=/srv/beegfs/scratch/shares/astro/posydon/POSYDON_GRIDS_v2/POSYDON_data/230914/
python concat_runs.py
[ ]:
%%sbatch
#!/bin/bash
#SBATCH --job-name=pop_syn
#SBATCH --account=b1119
#SBATCH --partition=posydon-priority
#SBATCH --output=population_concat.out
#SBATCH --mail-type=FAIL
#SBATCH --mail-user=user@email.ch
#SBATCH --time=01:00:00
#SBATCH --mem=4G
#SBATCH --dependency=singleton
export PATH_TO_POSYDON=/projects/b1119/ssb7065/POSYDON-public/
export PATH_TO_POSYDON_DATA=/projects/b1119/POSYDON_GRIDS/POSYDON_popsynth_data/v2/230816/
python concat_runs.py
Parsing the Population Synthesis Model Output
If everything is set up correctly, the simulation will generate 8 different population synthesis models, one for each metallicity, containing 1 million binaries each. The simulation will take a few hours to complete. For convenience, we have already run the simulation and the results are available in the .../POSYDON_data/tutorials/population-synthesis/example/
folder.
[11]:
import os
from posydon.config import PATH_TO_POSYDON_DATA
path = os.path.join(PATH_TO_POSYDON_DATA, "POSYDON_data/tutorials/population-synthesis/example/")
files = sorted([f for f in os.listdir(path) if f.endswith('Zsun_population.h5')])
path_to_data = [os.path.join(path, file) for file in files]
path_to_data
[11]:
['/Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/1.00e+00_Zsun_population.h5',
'/Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/1.00e-01_Zsun_population.h5',
'/Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/1.00e-02_Zsun_population.h5',
'/Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/1.00e-03_Zsun_population.h5',
'/Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/1.00e-04_Zsun_population.h5',
'/Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/2.00e+00_Zsun_population.h5',
'/Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/2.00e-01_Zsun_population.h5',
'/Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/4.50e-01_Zsun_population.h5']
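The file names encode the metallicity in units of the solar metallicity (e.g. 1.00e-01_Zsun_population.h5 is 0.1 Z☉). If you need the numeric values, a small sketch extracting them from the names shown above:

```python
# Extract metallicities (in units of Zsun) from population file names.
files = [
    "1.00e+00_Zsun_population.h5",
    "1.00e-01_Zsun_population.h5",
    "2.00e+00_Zsun_population.h5",
]
# Everything before the "_Zsun" suffix is the metallicity in scientific notation.
metallicities = [float(f.split("_Zsun")[0]) for f in files]
print(sorted(metallicities))  # [0.1, 1.0, 2.0]
```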
Here we show how you can parse the simulation results and save the subpopulation of merging binary black holes (BBH).
[12]:
from posydon.popsyn.synthetic_population import SyntheticPopulation
pop = SyntheticPopulation(path_to_ini='./population_params.ini', verbose=True)
pop.parse(path_to_data=path_to_data, S1_state='BH', S2_state='BH', binary_state='contact', invert_S1S2=False)
pop.save_pop(os.path.join(path,'BBH_population.h5'))
pop.df.head(10)
Binary count with (S1_state, S2_state, binary_state, binary_event) equal
to (BH, BH, contact, None)
in /Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/1.00e+00_Zsun_population.h5 are 233
in /Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/1.00e-01_Zsun_population.h5 are 2643
in /Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/1.00e-02_Zsun_population.h5 are 5974
in /Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/1.00e-03_Zsun_population.h5 are 8320
in /Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/1.00e-04_Zsun_population.h5 are 9683
in /Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/2.00e+00_Zsun_population.h5 are 121
in /Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/2.00e-01_Zsun_population.h5 are 3021
in /Volumes/T7/POSYDON_data/tutorials/population-synthesis/example/4.50e-01_Zsun_population.h5 are 761
Total binaries found are 30756
/Users/simone/Google Drive/github/POSYDON-public/posydon/popsyn/synthetic_population.py:300: PerformanceWarning:
your performance may suffer as PyTables will pickle object types that it cannot
map directly to c-types [inferred_type->mixed,key->block1_values] [items->Index(['state', 'event', 'step_names', 'S1_state', 'S2_state'], dtype='object')]
self.df.to_hdf(path, key='history')
/Users/simone/Google Drive/github/POSYDON-public/posydon/popsyn/synthetic_population.py:301: PerformanceWarning:
your performance may suffer as PyTables will pickle object types that it cannot
map directly to c-types [inferred_type->mixed,key->block2_values] [items->Index(['state_i', 'event_i', 'step_names_i', 'state_f', 'event_f',
'step_names_f', 'S1_state_i', 'S1_state_f', 'S1_SN_type', 'S2_state_i',
'S2_state_f', 'S2_SN_type', 'interp_class_HMS_HMS',
'interp_class_CO_HMS_RLO', 'interp_class_CO_HeMS',
'interp_class_CO_HeMS_RLO', 'mt_history_HMS_HMS',
'mt_history_CO_HMS_RLO', 'mt_history_CO_HeMS',
'mt_history_CO_HeMS_RLO'],
dtype='object')]
self.df_oneline.to_hdf(path, key='oneline')
Population successfully saved!
[12]:
state | event | time | orbital_period | eccentricity | lg_mtransfer_rate | step_names | step_times | S1_state | S1_mass | ... | S2_co_core_radius | S2_center_h1 | S2_center_he4 | S2_surface_h1 | S2_surface_he4 | S2_surf_avg_omega_div_omega_crit | S2_spin | metallicity | simulated_mass_for_met | underlying_mass_for_met | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
binary_index | |||||||||||||||||||||
3587 | detached | ZAMS | 0.000000e+00 | 2.493602e+01 | 0.000000 | NaN | initial_cond | 0.000000 | H-rich_Core_H_burning | 70.069756 | ... | NaN | 7.155000e-01 | 2.703000e-01 | NaN | NaN | NaN | NaN | 0.0142 | 2.913438e+07 | 1.447893e+08 |
3587 | contact | oDoubleCE1 | 3.631450e+06 | 6.304073e+01 | 0.000000 | -2.989131 | step_HMS_HMS | 0.037464 | H-rich_Core_He_burning | 44.118926 | ... | 0.000000 | 0.000000e+00 | 9.828315e-01 | 4.246537e-01 | 0.561493 | 0.577444 | 0.760854 | 0.0142 | 2.913438e+07 | 1.447893e+08 |
3587 | detached | NaN | 3.631450e+06 | 2.371597e-01 | 0.000000 | NaN | step_CE | 0.000137 | stripped_He_Core_He_burning | 35.566786 | ... | 0.000000 | 0.000000e+00 | 9.828315e-01 | 1.000000e-02 | 0.975800 | NaN | NaN | 0.0142 | 2.913438e+07 | 1.447893e+08 |
3587 | detached | CC1 | 4.007490e+06 | 1.226986e+00 | 0.000000 | NaN | step_detached | 0.777758 | stripped_He_Central_C_depletion | 13.620462 | ... | 0.401483 | 1.917729e-34 | 1.605147e-02 | 9.893273e-100 | 0.247607 | 0.006843 | 0.077976 | 0.0142 | 2.913438e+07 | 1.447893e+08 |
3587 | detached | NaN | 4.007490e+06 | 1.358968e+00 | 0.101109 | NaN | step_SN | 0.152370 | BH | 13.120462 | ... | 0.401483 | 1.917729e-34 | 1.605147e-02 | 9.893273e-100 | 0.247607 | 0.006843 | 0.077976 | 0.0142 | 2.913438e+07 | 1.447893e+08 |
3587 | detached | redirect | 4.007490e+06 | 1.358968e+00 | 0.101109 | NaN | step_CO_HeMS | 0.000102 | BH | 13.120462 | ... | 0.401483 | 1.917729e-34 | 1.605147e-02 | 9.893273e-100 | 0.247607 | 0.006843 | 0.077976 | 0.0142 | 2.913438e+07 | 1.447893e+08 |
3587 | detached | CC2 | 4.027170e+06 | 1.377894e+00 | 0.101108 | NaN | step_detached | 0.452186 | BH | 13.120462 | ... | 0.130995 | 0.000000e+00 | 7.706932e-13 | 1.000000e-99 | 0.226684 | 0.015362 | 0.060340 | 0.0142 | 2.913438e+07 | 1.447893e+08 |
3587 | detached | NaN | 4.027170e+06 | 1.611464e+00 | 0.048133 | NaN | step_SN | 0.149304 | BH | 13.120462 | ... | NaN | NaN | NaN | NaN | NaN | NaN | 0.064849 | 0.0142 | 2.913438e+07 | 1.447893e+08 |
3587 | contact | CO_contact | 2.917891e+09 | 2.638756e-08 | 0.000000 | NaN | step_dco | 1.252216 | BH | 13.120462 | ... | NaN | NaN | NaN | NaN | NaN | NaN | 0.064849 | 0.0142 | 2.913438e+07 | 1.447893e+08 |
3587 | contact | END | 2.917891e+09 | 2.638756e-08 | 0.000000 | NaN | step_end | 0.000048 | BH | 13.120462 | ... | NaN | NaN | NaN | NaN | NaN | NaN | 0.064849 | 0.0142 | 2.913438e+07 | 1.447893e+08 |
10 rows × 41 columns
You can now do the same for the other subpopulations of interest. Try selecting black hole-neutron star systems (BHNS; remember to set invert_S1S2 = True
to find BHNS systems where the NS is formed first) and binary neutron star systems (BNS).
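POSYDON's parse method handles this selection internally; as a toy illustration (not POSYDON code) of what invert_S1S2 = True does, the sketch below filters a few hand-made rows with the same column names as the population table:

```python
# Toy rows mimicking the S1_state / S2_state / state columns of the population.
rows = [
    {"S1_state": "BH", "S2_state": "NS", "state": "contact"},  # BH formed first
    {"S1_state": "NS", "S2_state": "BH", "state": "contact"},  # NS formed first
    {"S1_state": "BH", "S2_state": "BH", "state": "contact"},  # BBH
]

def select(rows, S1_state, S2_state, binary_state, invert_S1S2=False):
    """Keep rows matching (S1, S2); with invert_S1S2, also match (S2, S1)."""
    out = []
    for r in rows:
        direct = r["S1_state"] == S1_state and r["S2_state"] == S2_state
        swapped = (invert_S1S2 and r["S1_state"] == S2_state
                   and r["S2_state"] == S1_state)
        if r["state"] == binary_state and (direct or swapped):
            out.append(r)
    return out

print(len(select(rows, "BH", "NS", "contact")))                    # 1
print(len(select(rows, "BH", "NS", "contact", invert_S1S2=True)))  # 2
```

Without the inverted match, systems where the neutron star formed first (and therefore sits in the S1 columns) would be missed.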
[ ]: