otwrapy.Parallelizer
- Parallelizer(wrapper, backend='multiprocessing', n_cpus=-1, verbosity=True, dask_args=None)
Parallelize a Wrapper using ‘ipyparallel’, ‘joblib’, ‘pathos’, ‘multiprocessing’ or ‘dask’.
- Parameters:
- wrapper : ot.Function or instance of ot.OpenTURNSPythonFunction
openturns wrapper to be distributed
- backend : string (Optional)
Whether to parallelize using ‘ipyparallel’, ‘joblib’, ‘pathos’, ‘multiprocessing’ or ‘serial’. The ‘serial’ backend evaluates the points one at a time, with a progress bar if verbosity is True.
- n_cpus : int (Optional)
Number of CPUs on which the simulations will be distributed. Only needed when using ‘joblib’, ‘pathos’ or ‘multiprocessing’ as the backend. If n_cpus = 1, the behavior is the same as ‘serial’.
- verbosity : bool (Optional)
Verbosity parameter used by the ‘serial’, ‘joblib’, ‘multiprocessing’ and ‘dask’ backends. Default is True. For ‘joblib’, ‘multiprocessing’ and ‘serial’, a progress bar is displayed using the tqdm module. When ‘dask’ is used, the progress bar provided by dask is displayed instead.
- dask_args : dict (Optional)
Dictionary of parameters used when the backend is ‘dask’. It must follow this form: {‘scheduler’: IP address or host name, ‘workers’: {‘IP address or host name’: n_cpus}, ‘remote_python’: {‘IP address or host name’: path_to_python_binary}}. The parallelization uses the SSHCluster class of dask.distributed with 1 thread per worker. When dask is chosen, the argument n_cpus is not used. The progress bar is enabled if verbosity is True. The dask dashboard is enabled at port 8787.
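As an illustration, the dask_args dictionary described above might look like the following in practice. The host names, CPU counts and interpreter paths below are placeholders, not real machines:

```python
# Hypothetical dask_args for a scheduler and two remote workers.
# All addresses and paths are placeholders.
dask_args = {
    "scheduler": "192.168.0.1",
    "workers": {"192.168.0.2": 4, "192.168.0.3": 4},
    "remote_python": {
        "192.168.0.2": "/usr/bin/python3",
        "192.168.0.3": "/usr/bin/python3",
    },
}
```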
Examples
For example, in order to parallelize the beam wrapper examples.beam.Wrapper, you simply instantiate your wrapper and parallelize it as follows:

>>> from otwrapy.examples.beam import Wrapper
>>> import otwrapy as otw
>>> model = otw.Parallelizer(Wrapper(), n_cpus=-1)
model will distribute calls to Wrapper() using multiprocessing, with as many CPUs as you have minus one for the scheduler.
Because Parallelizer is decorated with FunctionDecorator, model is already an ot.Function.
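The core idea behind the pool-based backends can be sketched in a few lines: a pool distributes the evaluation of each point of a sample across workers and collects the results in order. This is a simplified, thread-based sketch (it uses multiprocessing.dummy so it runs anywhere without pickling concerns), not otwrapy’s actual implementation; model and parallel_evaluate are illustrative names:

```python
from multiprocessing.dummy import Pool  # thread-backed Pool with the same API as multiprocessing.Pool


def model(x):
    # Toy model: returns the sum of squares of the input point.
    return [sum(v * v for v in x)]


def parallel_evaluate(func, sample, n_cpus=2):
    """Evaluate func at every point of sample using a worker pool,
    mimicking the dispatch done by a pool-based backend."""
    with Pool(n_cpus) as pool:
        # map preserves the order of the input sample.
        return pool.map(func, sample)


result = parallel_evaluate(model, [[1.0, 2.0], [3.0, 4.0]])
print(result)  # [[5.0], [25.0]]
```

In otwrapy itself you do not write this loop: calling the parallelized model on an ot.Sample dispatches the evaluations for you through the chosen backend.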