otwrapy.Parallelizer

Parallelizer(wrapper, backend='multiprocessing', n_cpus=-1, verbosity=True, dask_args=None, slurmcluster_kw={}, ipp_client_kw={})

Parallelize a Wrapper using one of several backends: ‘ipyparallel’, ‘joblib’, ‘pathos’, ‘multiprocessing’, ‘dask’ or ‘concurrent’.

Parameters:
wrapper : ot.Function or instance of ot.OpenTURNSPythonFunction

The openturns wrapper to be distributed.

backend : str, optional

Whether to parallelize using ‘ipyparallel’, ‘joblib’, ‘pathos’, ‘multiprocessing’, ‘dask/ssh’, ‘dask/slurm’, ‘concurrent/thread’, ‘concurrent/process’ or ‘serial’. Default is ‘multiprocessing’. The backend falls back to ‘multiprocessing’ when the corresponding third-party package cannot be imported. Usage of the distributed backends is sketched after this parameter list.

n_cpus : int, optional

Number of CPUs on which the simulations will be distributed. Needed only when using ‘joblib’, ‘pathos’ or ‘multiprocessing’ as the backend. If n_cpus = 1, the behavior is the same as ‘serial’. The default is -1, which means multiprocessing.cpu_count() / 2. Note that for remote/distributed backends this may not reflect the capabilities of the remote nodes.

verbosity : bool, optional

Whether to display a progress bar. Default is True.

dask_args : dict, optional

Dictionary of parameters used when the Dask SSH cluster backend is chosen. It must follow this form: {‘scheduler’: IP address or hostname, ‘workers’: {‘IP address or hostname’: n_cpus}, ‘remote_python’: {‘IP address or hostname’: path_to_python_binary}}. The parallelization uses the SSHCluster class of dask.distributed with one thread per worker. When a dask backend is chosen, the n_cpus argument is not used. The progress bar is enabled if verbosity is True. The Dask dashboard is enabled on port 8787.

slurmcluster_kw : dict, optional

Parameters used to instantiate the Dask SLURMCluster object. The n_cpus argument sets the default number of workers (n_workers).

ipp_client_kw : dict, optional

Parameters used to instantiate the IPython Parallel Client, such as ‘cluster_id’.
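
The following is a minimal sketch of how the distributed backends can be configured. The host names, Python paths, SLURM queue name and cluster id below are placeholders chosen for illustration, not defaults of the library:

>>> import otwrapy as otw
>>> from otwrapy.examples.beam import Wrapper
>>> # Dask SSH cluster: one scheduler host, one worker host with 4 CPUs
>>> dask_args = {'scheduler': '192.168.1.2',
...              'workers': {'192.168.1.3': 4},
...              'remote_python': {'192.168.1.2': '/usr/bin/python3',
...                                '192.168.1.3': '/usr/bin/python3'}}
>>> model = otw.Parallelizer(Wrapper(), backend='dask/ssh', dask_args=dask_args)
>>> # Dask SLURM cluster: slurmcluster_kw is passed to the SLURMCluster object
>>> model = otw.Parallelizer(Wrapper(), backend='dask/slurm', n_cpus=8,
...                          slurmcluster_kw={'queue': 'cpu_partition',
...                                           'memory': '4GB'})
>>> # ipyparallel: ipp_client_kw is passed to the IPython Parallel Client
>>> model = otw.Parallelizer(Wrapper(), backend='ipyparallel',
...                          ipp_client_kw={'cluster_id': 'my_cluster'})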

Examples

For example, in order to parallelize the beam wrapper examples.beam.Wrapper, you simply instantiate your wrapper and parallelize it as follows:

>>> from otwrapy.examples.beam import Wrapper
>>> import otwrapy as otw
>>> model = otw.Parallelizer(Wrapper(), n_cpus=-1)

model will distribute the calls to Wrapper() using the multiprocessing backend and the default number of CPUs (n_cpus = -1, i.e. multiprocessing.cpu_count() / 2, as described above).

Because Parallelizer is decorated with FunctionDecorator, model is already an ot.Function.
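
Since model is an ot.Function, it can be evaluated directly on an ot.Sample, and the evaluations are distributed over the chosen backend. A minimal sketch, where the input values are placeholders rather than a physically meaningful beam configuration:

>>> import openturns as ot
>>> # 10 identical input points, just to exercise the parallel dispatch
>>> X = ot.Sample([[3e7, 3e4, 250.0, 400.0]] * 10)
>>> Y = model(X)  # calls to Wrapper() are distributed across the CPUs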