UncertaintyCalculations

UncertaintyCalculations is the class responsible for performing the uncertainty calculations. Here we explain how the calculations are performed, as well as which options the user has to customize them. Insight into how the calculations are performed is not required to use Uncertainpy; in most cases, the default settings work fine. In addition to the customization options shown below, Uncertainpy supports implementing entirely custom uncertainty quantification and sensitivity analysis methods. This is only recommended for expert users, as knowledge of both Uncertainpy and uncertainty quantification is needed.

Quasi-Monte Carlo method

To use the quasi-Monte Carlo method, we call quantify() with method="mc", and the optional argument nr_mc_samples:

data = UQ.quantify(
    method="mc",
    nr_mc_samples=10**4,
)

By default, the quasi-Monte Carlo method quasi-randomly draws 10000 parameter samples from the joint multivariate probability distribution of the parameters \(\rho_{\boldsymbol{Q}}\) using Hammersley sampling (Hammersley, 1960). As the name indicates, the number of samples is specified by the nr_mc_samples argument. The model is evaluated for each of these parameter samples, and features are calculated for each model evaluation (when applicable). To speed up the calculations, Uncertainpy uses the multiprocess Python package (McKerns et al., 2012) to perform this step in parallel. When model and feature calculations are done, Uncertainpy calculates the mean, variance, and 5th and 95th percentile (which gives the 90% prediction interval) for the model output as well as for each feature.
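
To illustrate what happens under the hood, the following is a minimal sketch of this procedure using Chaospy and NumPy directly. The distribution, the stand-in model, and the sampling rule name are assumptions for illustration (rule names vary between Chaospy versions), and Uncertainpy's actual implementation additionally handles features and parallelization:

import chaospy as cp
import numpy as np

def model(a, b):
    # Stand-in for a real simulation: a damped exponential.
    time = np.linspace(0, 10, 100)
    return a * np.exp(-b * time)

# Hypothetical joint distribution of two uncertain parameters.
distribution = cp.J(cp.Uniform(1, 2), cp.Uniform(0.1, 0.5))

# Quasi-randomly draw parameter samples with Hammersley sampling.
samples = distribution.sample(10**4, rule="hammersley")

# Evaluate the model for each parameter sample (Uncertainpy performs
# this step in parallel with the multiprocess package).
evaluations = np.array([model(a, b) for a, b in samples.T])

# Statistical metrics for the model output.
mean = np.mean(evaluations, axis=0)
variance = np.var(evaluations, axis=0)
percentile_5 = np.percentile(evaluations, 5, axis=0)
percentile_95 = np.percentile(evaluations, 95, axis=0)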

Polynomial chaos expansions

To use polynomial chaos expansions we call quantify() with the argument method="pc", which takes a set of optional arguments (the default values are shown):

data = UQ.quantify(
    method="pc",
    pc_method="collocation",
    rosenblatt=False,
    polynomial_order=4,
    nr_collocation_nodes=None,
    quadrature_order=None,
    nr_pc_mc_samples=10**4,
)

As previously mentioned, Uncertainpy allows the user to select between point collocation (pc_method="collocation") and pseudo-spectral projection (pc_method="spectral"). The goal is to create separate polynomial chaos expansions \(\hat{U}\) for the model and each feature. In both methods, Uncertainpy creates the orthogonal polynomial \(\boldsymbol{\phi}_n\) from \(\rho_{\boldsymbol{Q}}\) using the three-term recurrence relation if available; otherwise the discretized Stieltjes method (Stieltjes, 1884) is used. By default, Uncertainpy uses a fourth order polynomial expansion, which can be changed with polynomial_order. The polynomial \(\boldsymbol{\phi}_n\) is shared between the model and all features, since they have the same uncertain input parameters and therefore the same \(\rho_{\boldsymbol{Q}}\). Only the polynomial coefficients \(c_n\) differ between the model and each feature.
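
In Chaospy, which Uncertainpy builds on, generating the orthogonal polynomials looks roughly like the sketch below. The distribution is a hypothetical stand-in, and the function name is an assumption that depends on the Chaospy version (chaospy.generate_expansion in recent versions, chaospy.orth_ttr in older ones):

import chaospy as cp

# Hypothetical joint distribution rho_Q of two uncertain parameters.
distribution = cp.J(cp.Uniform(1, 2), cp.Uniform(0.1, 0.5))

# Orthogonal polynomials phi_n from the three-term recurrence relation
# (Chaospy falls back to the discretized Stieltjes method when needed).
polynomial_order = 4
phi = cp.generate_expansion(polynomial_order, distribution,
                            rule="three_terms_recurrence")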

The two polynomial chaos methods differ in how they calculate \(c_n\). For point collocation Uncertainpy uses \(2(N_p + 1)\) collocation nodes, as recommended by Hosder et al. (2007), where \(N_p\) is the number of polynomial chaos expansion factors. The number of collocation nodes can be customized with nr_collocation_nodes, but the new number of nodes must be chosen carefully. The collocation nodes are sampled from \(\rho_{\boldsymbol{Q}}\) using Hammersley sampling (Hammersley, 1960). The model and features are calculated for each of the collocation nodes. As with the quasi-Monte Carlo method, this step is performed in parallel. The polynomial coefficients \(c_n\) are calculated from the model and feature results using Tikhonov regularization (Rifkin and Lippert, 2007).
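
Continuing the sketch above (reusing model, distribution, and phi), the point collocation step could look like this. Note that cp.fit_regression defaults to plain least squares; Uncertainpy configures it to use Tikhonov regularization, and the exact option for that depends on the Chaospy version:

# Hammersley-sample 2(N_p + 1) collocation nodes from rho_Q.
nr_collocation_nodes = 2 * (len(phi) + 1)
nodes = distribution.sample(nr_collocation_nodes, rule="hammersley")

# Evaluate the model at each collocation node
# (Uncertainpy runs this step in parallel).
evaluations = [model(a, b) for a, b in nodes.T]

# Solve the linear system for the polynomial coefficients c_n.
U_hat = cp.fit_regression(phi, nodes, evaluations)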

For the pseudo-spectral projection, Uncertainpy chooses nodes and weights using a quadrature scheme, instead of drawing nodes from \(\rho_{\boldsymbol{Q}}\). The quadrature scheme used is Leja quadrature with a Smolyak sparse grid (Narayan and Jakeman, 2014; Smolyak, 1963). By default, the quadrature order is two greater than the polynomial order, but it can be changed with quadrature_order. The model and features are calculated for each of the quadrature nodes. As before, this step is performed in parallel. The polynomial coefficients \(c_n\) are then calculated from the quadrature nodes, weights, and model and feature results.
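
The corresponding pseudo-spectral step, again continuing the sketch above, might look like this:

# Leja quadrature nodes and weights on a Smolyak sparse grid, with
# quadrature order two greater than the polynomial order.
nodes, weights = cp.generate_quadrature(polynomial_order + 2, distribution,
                                        rule="leja", sparse=True)

# Evaluate the model at each quadrature node
# (Uncertainpy runs this step in parallel).
evaluations = [model(a, b) for a, b in nodes.T]

# Compute the polynomial coefficients c_n by numerical integration.
U_hat = cp.fit_quadrature(phi, nodes, weights, evaluations)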

When Uncertainpy has derived \(\hat{U}\) for the model and features, it uses \(\hat{U}\) to compute the mean, variance, and the first and total order Sobol indices. The first and total order Sobol indices are additionally averaged, giving sobol_first_average and sobol_total_average. Finally, Uncertainpy uses \(\hat{U}\) as a surrogate model, and performs a quasi-Monte Carlo method with Hammersley sampling and nr_pc_mc_samples=10**4 samples to find the 5th and 95th percentiles.
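
With the expansion in hand, the statistical metrics follow directly from Chaospy; a sketch, reusing the names and imports from the sketches above:

import numpy as np

# Mean, variance, and first and total order Sobol indices from U_hat.
mean = cp.E(U_hat, distribution)
variance = cp.Var(U_hat, distribution)
sobol_first = cp.Sens_m(U_hat, distribution)  # one row per parameter
sobol_total = cp.Sens_t(U_hat, distribution)

# Prediction interval from quasi-Monte Carlo sampling of the surrogate.
samples = distribution.sample(10**4, rule="hammersley")
surrogate_evaluations = U_hat(*samples)
percentile_5 = np.percentile(surrogate_evaluations, 5, axis=-1)
percentile_95 = np.percentile(surrogate_evaluations, 95, axis=-1)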

If the model parameters have a dependent joint multivariate distribution, the Rosenblatt transformation must be used by setting rosenblatt=True. To perform the transformation Uncertainpy chooses \(\rho_{\boldsymbol{R}}\) to be a multivariate independent normal distribution, which is used instead of \(\rho_{\boldsymbol{Q}}\) to perform the polynomial chaos expansions. Both the point collocation method and the pseudo-spectral method are performed as described above. The only difference is that we use \(\rho_{\boldsymbol{R}}\) instead of \(\rho_{\boldsymbol{Q}}\), and use the Rosenblatt transformation to transform the selected nodes from \(\boldsymbol{R}\) to \(\boldsymbol{Q}\), before they are used in the model evaluation.
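
The node transformation itself can be sketched with Chaospy's forward and inverse Rosenblatt transformations (dist.fwd and dist.inv); the dependent distribution below is a hypothetical example:

import chaospy as cp

# Hypothetical dependent distribution rho_Q: the location of the
# second parameter depends on the first.
q0 = cp.Uniform(1, 2)
dist_Q = cp.J(q0, cp.Normal(q0, 0.1))

# Independent multivariate normal distribution rho_R of the same dimension.
dist_R = cp.J(cp.Normal(0, 1), cp.Normal(0, 1))

# Draw nodes from rho_R and map them to Q-space: the forward Rosenblatt
# transformation of R composed with the inverse transformation of Q.
nodes_R = dist_R.sample(100, rule="hammersley")
nodes_Q = dist_Q.inv(dist_R.fwd(nodes_R))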

API Reference

class uncertainpy.core.UncertaintyCalculations(model=None, parameters=None, features=None, create_PCE_custom=None, custom_uncertainty_quantification=None, CPUs=u'max', logger_level=u'info')[source]

Perform the calculations for the uncertainty quantification and sensitivity analysis.

This class performs the calculations for the uncertainty quantification and sensitivity analysis of the model and features. It implements both the quasi-Monte Carlo method and polynomial chaos expansions, using either point collocation or pseudo-spectral projection. Both polynomial chaos expansion methods support the Rosenblatt transformation to handle dependent variables.

Parameters:
  • model ({None, Model or Model subclass instance, model function}, optional) – Model to perform uncertainty quantification on. For requirements see Model.run. Default is None.

  • parameters ({dict of {name: parameter_object}, dict of {name: value or Chaospy distribution}, list of Parameter instances, list [[name, value or Chaospy distribution], …], list [[name, value, Chaospy distribution or callable that returns a Chaospy distribution], …]}, optional) – List or dictionary of the parameters that should be created, on one of the forms:

    • {name_1: parameter_object_1, name_2: parameter_object_2, ...}
    • {name_1: value_1 or Chaospy distribution, name_2: value_2 or Chaospy distribution, ...}
    • [parameter_object_1, parameter_object_2, ...]
    • [[name_1, value_1 or Chaospy distribution], ...]
    • [[name_1, value_1, Chaospy distribution or callable that returns a Chaospy distribution], ...]
  • features ({None, Features or Features subclass instance, list of feature functions}, optional) – Features to calculate from the model result. If None, no features are calculated. If list of feature functions, all will be calculated. Default is None.

  • create_PCE_custom (callable, optional) – A custom method for calculating the polynomial chaos approximation. For the requirements of the function see UncertaintyCalculations.create_PCE_custom. Overwrites existing create_PCE_custom method. Default is None.

  • custom_uncertainty_quantification (callable, optional) – A custom method for calculating uncertainties. For the requirements of the function see UncertaintyCalculations.custom_uncertainty_quantification. Overwrites existing custom_uncertainty_quantification method. Default is None.

  • CPUs ({int, None, “max”}, optional) – The number of CPUs to use when calculating the model and features. If None, no multiprocessing is used. If “max”, the maximum number of CPUs on the computer (multiprocess.cpu_count()) is used. Default is “max”.

  • logger_level ({“info”, “debug”, “warning”, “error”, “critical”, None}, optional) – Set the threshold for the logging level. Logging messages less severe than this level are ignored. If None, no logging to file is performed. Default logger level is “info”.

Variables:
  • model (Model or Model subclass) – The model to perform uncertainty quantification on.
  • parameters (Parameters) – The uncertain parameters.
  • features (Features or Features subclass) – The features of the model to perform uncertainty quantification on.
  • runmodel (RunModel) – RunModel object responsible for evaluating the model and calculating features.
analyse_PCE(U_hat, distribution, data, nr_samples=10000)[source]

Calculate the statistical metrics from the polynomial chaos approximation.

Parameters:
  • U_hat (dict) – A dictionary containing the polynomial approximations for the model and each feature as chaospy.Poly objects.
  • distribution (chaospy.Dist) – The multivariate distribution for the uncertain parameters.
  • data (Data) – A data object containing the values from the model evaluation and feature calculations.
  • nr_samples (int, optional) – Number of samples for the Monte Carlo sampling of the polynomial chaos approximation. Default is 10**4.
Returns:

data – The data parameter given as input with the statistical metrics added.

Return type:

Data

Notes

The data parameter should contain (where applicable) the following:

  1. data["model/features"].evaluations
  2. data["model/features"].time
  3. data["model/features"].labels
  4. data.model_name
  5. data.incomplete
  6. data.method
  7. data.errored

When returned, data additionally contains:

  1. data["model/features"].mean
  2. data["model/features"].variance
  3. data["model/features"].percentile_5
  4. data["model/features"].percentile_95
  5. data["model/features"].sobol_first, if more than 1 parameter
  6. data["model/features"].sobol_total, if more than 1 parameter
  7. data["model/features"].sobol_first_average, if more than 1 parameter
  8. data["model/features"].sobol_total_average, if more than 1 parameter
average_sensitivity(data, sensitivity=u'sobol_first')[source]

Calculate the average of the sensitivities for the model and all features and add them to data. Ignores any occurrences of numpy.NaN.

Parameters:
  • data (Data) – A data object with all model and feature evaluations, as well as all calculated statistical metrics.
  • sensitivity ({“sobol_first”, “first”, “sobol_total”, “total”}, optional) – The sensitivity to average. “sobol_first” and “first” are for the first order Sobol indices, while “sobol_total” and “total” are for the total order Sobol indices. Default is “sobol_first”.
Returns:

data – The data object with the average of the sensitivities for the model and all features added.

Return type:

Data

convert_uncertain_parameters(uncertain_parameters=None)[source]

Converts uncertain_parameter(s) to a list of uncertain parameter(s), and checks if it is a legal set of uncertain parameter(s).

Parameters:uncertain_parameters ({None, str, list}, optional) – The name(s) of the uncertain parameters to use. If None, a list of all uncertain parameters are returned. Default is None.
Returns:uncertain_parameters – A list with the name of all uncertain parameters.
Return type:list
Raises:ValueError – If a common multivariate distribution is given in Parameters.distribution and not all uncertain parameters are used.
create_PCE_collocation(uncertain_parameters=None, polynomial_order=4, nr_collocation_nodes=None, allow_incomplete=True)[source]

Create the polynomial approximation U_hat using point collocation.

Parameters:
  • uncertain_parameters ({None, str, list}, optional) – The uncertain parameter(s) to use when creating the polynomial approximation. If None, all uncertain parameters are used. Default is None.
  • polynomial_order (int, optional) – The polynomial order of the polynomial approximation. Default is 4.
  • nr_collocation_nodes ({int, None}, optional) – The number of collocation nodes to choose. If None, nr_collocation_nodes = 2 * number of expansion factors + 2. Default is None.
  • allow_incomplete (bool, optional) – If the polynomial approximation should be performed for features or models with incomplete evaluations. Default is True.
Returns:

  • U_hat (dict) – A dictionary containing the polynomial approximations for the model and each feature as chaospy.Poly objects.
  • distribution (chaospy.Dist) – The multivariate distribution for the uncertain parameters.
  • data (Data) – A data object containing the values from the model evaluation and feature calculations.

Raises:

ValueError – If a common multivariate distribution is given in Parameters.distribution and not all uncertain parameters are used.

Notes

The returned data should contain (where applicable) the following:

  1. data["model/features"].evaluations
  2. data["model/features"].time
  3. data["model/features"].labels
  4. data.model_name
  5. data.incomplete
  6. data.method
  7. data.errored

The model and feature do not necessarily give results for each node. The collocation method is robust towards missing values as long as the number of results that remain is high enough.

The polynomial chaos expansion method for uncertainty quantification approximates the model with a polynomial that follows specific requirements. This polynomial can be used to quickly calculate the uncertainty and sensitivity of the model.

To create the polynomial chaos expansion we first find the polynomials using the three-term recurrence relation if available; otherwise the discretized Stieltjes method is used. Then we use point collocation to find the expansion coefficients for the model and each feature of the model.

In point collocation we require the polynomial approximation to equal the model at a set of collocation nodes. This results in a set of linear equations for the polynomial coefficients, which we can solve. We choose nr_collocation_nodes collocation nodes with Hammersley sampling from the distribution. We evaluate the model and each feature in parallel, and solve the resulting set of linear equations with Tikhonov regularization.

create_PCE_collocation_rosenblatt(uncertain_parameters=None, polynomial_order=4, nr_collocation_nodes=None, allow_incomplete=True)[source]

Create the polynomial approximation U_hat using point collocation and the Rosenblatt transformation. Works for dependent uncertain parameters.

Parameters:
  • uncertain_parameters ({None, str, list}, optional) – The uncertain parameter(s) to use when creating the polynomial approximation. If None, all uncertain parameters are used. Default is None.
  • polynomial_order (int, optional) – The polynomial order of the polynomial approximation. Default is 4.
  • nr_collocation_nodes ({int, None}, optional) – The number of collocation nodes to choose. If None, nr_collocation_nodes = 2 * number of expansion factors + 2. Default is None.
  • allow_incomplete (bool, optional) – If the polynomial approximation should be performed for features or models with incomplete evaluations. Default is True.
Returns:

  • U_hat (dict) – A dictionary containing the polynomial approximations for the model and each feature as chaospy.Poly objects.
  • distribution (chaospy.Dist) – The multivariate distribution for the uncertain parameters.
  • data (Data) – A data object containing the values from the model evaluation and feature calculations.

Raises:

ValueError – If a common multivariate distribution is given in Parameters.distribution and not all uncertain parameters are used.

Notes

The returned data should contain (where applicable) the following:

  1. data["model/features"].evaluations
  2. data["model/features"].time
  3. data["model/features"].labels
  4. data.model_name
  5. data.incomplete
  6. data.method
  7. data.errored

The model and feature do not necessarily give results for each node. The collocation method is robust towards missing values as long as the number of results that remain is high enough.

The polynomial chaos expansion method for uncertainty quantification approximates the model with a polynomial that follows specific requirements. This polynomial can be used to quickly calculate the uncertainty and sensitivity of the model.

We use the Rosenblatt transformation to transform from dependent to independent variables before we create the polynomial chaos expansion. We first find the polynomials from the independent distributions using the three-term recurrence relation if available; otherwise the discretized Stieltjes method is used. Then we use point collocation with the Rosenblatt transformation to find the expansion coefficients for the model and each feature of the model.

In point collocation we require the polynomial approximation to equal the model at a set of collocation nodes. This results in a set of linear equations for the polynomial coefficients, which we can solve. We choose nr_collocation_nodes collocation nodes with Hammersley sampling from the independent distribution. We then transform the nodes using the Rosenblatt transformation and evaluate the model and each feature in parallel. We solve the resulting set of linear equations with Tikhonov regularization.

create_PCE_custom

A custom method for calculating the polynomial chaos approximation. Must follow the requirements below.

Parameters:
  • self (UncertaintyCalculation) – An explicit self is required as the first argument. self can be used inside the custom function.
  • uncertain_parameters ({None, str, list}, optional) – The uncertain parameter(s) to use when creating the polynomial approximation. If None, all uncertain parameters are used. Default is None.
  • **kwargs – Any number of optional arguments.
Returns:

  • U_hat (dict) – A dictionary containing the polynomial approximations for the model and each feature as chaospy.Poly objects.
  • distribution (chaospy.Dist) – The multivariate distribution for the uncertain parameters.
  • data (Data) – A data object containing the values from the model evaluation and feature calculations.

Raises:

ValueError – If a common multivariate distribution is given in Parameters.distribution and not all uncertain parameters are used.

Notes

This method can be implemented to create a custom method to calculate the polynomial chaos expansion. The method must calculate and return the return arguments described above; a minimal skeleton is sketched below.

The returned data should contain (where applicable) the following:

  1. data["model/features"].evaluations
  2. data["model/features"].time
  3. data["model/features"].labels
  4. data.model_name
  5. data.incomplete
  6. data.method

The method analyse_PCE is called after the polynomial approximation has been created.

Useful methods in Uncertainpy are:

  1. uncertainpy.core.UncertaintyCalculations.convert_uncertain_parameters
  2. uncertainpy.core.UncertaintyCalculations.create_distribution
  3. uncertainpy.core.RunModel.run

See also

uncertainpy.Data, uncertainpy.Parameters

uncertainpy.core.UncertaintyCalculations.convert_uncertain_parameters
Converts uncertain parameters to an allowed list
uncertainpy.core.UncertaintyCalculations.create_distribution
Creates the uncertain parameter distribution
uncertainpy.core.RunModel.run
Runs the model
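
The following is a minimal skeleton of such a custom method. The node count and the omitted expansion step are placeholders; only the signature and the return values are prescribed by Uncertainpy:

def create_PCE_custom(self, uncertain_parameters=None, **kwargs):
    # Convert the names to a legal list of uncertain parameters.
    uncertain_parameters = self.convert_uncertain_parameters(uncertain_parameters)

    # Create the joint multivariate distribution rho_Q.
    distribution = self.create_distribution(uncertain_parameters)

    # Choose nodes and evaluate the model and features for them.
    nodes = distribution.sample(100, rule="hammersley")
    data = self.runmodel.run(nodes, uncertain_parameters)

    # Build a chaospy.Poly approximation for the model and each
    # feature from the evaluations in data (omitted here).
    U_hat = {}

    return U_hat, distribution, data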
create_PCE_spectral(uncertain_parameters=None, polynomial_order=4, quadrature_order=None, allow_incomplete=True)[source]

Create the polynomial approximation U_hat using pseudo-spectral projection.

Parameters:
  • uncertain_parameters ({None, str, list}, optional) – The uncertain parameter(s) to use when creating the polynomial approximation. If None, all uncertain parameters are used. Default is None.
  • polynomial_order (int, optional) – The polynomial order of the polynomial approximation. Default is 4.
  • quadrature_order ({int, None}, optional) – The order of the Leja quadrature method. If None, quadrature_order = polynomial_order + 2. Default is None.
  • allow_incomplete (bool, optional) – If the polynomial approximation should be performed for features or models with incomplete evaluations. Default is True.
Returns:

  • U_hat (dict) – A dictionary containing the polynomial approximations for the model and each feature as chaospy.Poly objects.
  • distribution (chaospy.Dist) – The multivariate distribution for the uncertain parameters.
  • data (Data) – A data object containing the values from the model evaluation and feature calculations.

Raises:

ValueError – If a common multivariate distribution is given in Parameters.distribution and not all uncertain parameters are used.

Notes

The returned data should contain (where applicable) the following:

  1. data["model/features"].evaluations
  2. data["model/features"].time
  3. data["model/features"].labels
  4. data.model_name
  5. data.incomplete
  6. data.method
  7. data.errored

The model and feature do not necessarily give results for each node. The pseudo-spectral method is sensitive to missing values, so allow_incomplete should be used with care.

The polynomial chaos expansion method for uncertainty quantification approximates the model with a polynomial that follows specific requirements. This polynomial can be used to quickly calculate the uncertainty and sensitivity of the model.

To create the polynomial chaos expansion we first find the polynomials using the three-term recurrence relation if available; otherwise the discretized Stieltjes method is used. Then we use the pseudo-spectral projection to find the expansion coefficients for the model and each feature of the model.

Pseudo-spectral projection is based on least squares minimization and finds the expansion coefficients through numerical integration. The integration uses a quadrature scheme with weights and nodes. We use Leja quadrature with Smolyak sparse grids to reduce the number of nodes required. For each of the nodes we evaluate the model and calculate the features, and the polynomial approximation is created from these results.

create_PCE_spectral_rosenblatt(uncertain_parameters=None, polynomial_order=4, quadrature_order=None, allow_incomplete=True)[source]

Create the polynomial approximation U_hat using pseudo-spectral projection and the Rosenblatt transformation. Works for dependent uncertain parameters.

Parameters:
  • uncertain_parameters ({None, str, list}, optional) – The uncertain parameter(s) to use when creating the polynomial approximation. If None, all uncertain parameters are used. Default is None.
  • polynomial_order (int, optional) – The polynomial order of the polynomial approximation. Default is 4.
  • quadrature_order ({int, None}, optional) – The order of the Leja quadrature method. If None, quadrature_order = polynomial_order + 2. Default is None.
  • allow_incomplete (bool, optional) – If the polynomial approximation should be performed for features or models with incomplete evaluations. Default is True.
Returns:

  • U_hat (dict) – A dictionary containing the polynomial approximations for the model and each feature as chaospy.Poly objects.
  • distribution (chaospy.Dist) – The multivariate distribution for the uncertain parameters.
  • data (Data) – A data object containing the values from the model evaluation and feature calculations.

Raises:

ValueError – If a common multivariate distribution is given in Parameters.distribution and not all uncertain parameters are used.

Notes

The returned data should contain (where applicable) the following:

  1. data["model/features"].evaluations
  2. data["model/features"].time
  3. data["model/features"].labels
  4. data.model_name
  5. data.incomplete
  6. data.method
  7. data.errored

The model and feature do not necessarily give results for each node. The pseudo-spectral method is sensitive to missing values, so allow_incomplete should be used with care.

The polynomial chaos expansion method for uncertainty quantification approximates the model with a polynomial that follows specific requirements. This polynomial can be used to quickly calculate the uncertainty and sensitivity of the model.

We use the Rosenblatt transformation to transform from dependent to independent variables before we create the polynomial chaos expansion. We first find the polynomials from the independent distributions using the three-term recurrence relation if available; otherwise the discretized Stieltjes method is used. Then we use the pseudo-spectral projection with the Rosenblatt transformation to find the expansion coefficients for the model and each feature of the model.

Pseudo-spectral projection is based on least squares minimization and finds the expansion coefficients through numerical integration. The integration uses a quadrature scheme with weights and nodes. We use Leja quadrature with Smolyak sparse grids to reduce the number of nodes required. We use the Rosenblatt transformation to transform the quadrature nodes before they are sent to the model evaluation. For each of the nodes we evaluate the model and calculate the features, and the polynomial approximation is created from these results.

create_distribution(uncertain_parameters=None)[source]

Create a joint multivariate distribution for the selected parameters from univariate distributions.

Parameters:uncertain_parameters ({None, str, list}, optional) – The uncertain parameter(s) to use when creating the joint multivariate distribution. If None, the joint multivariate distribution for all uncertain parameters is created. Default is None.
Returns:distribution – The joint multivariate distribution for the given parameters.
Return type:chaospy.Dist
Raises:ValueError – If a common multivariate distribution is given in Parameters.distribution and not all uncertain parameters are used.

Notes

If a multivariate distribution is defined in the Parameters.distribution, that multivariate distribution is returned. Otherwise the joint multivariate distribution for the selected parameters is created from the univariate distributions.
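
For example, with two hypothetical parameters, Chaospy joins the univariate distributions like this:

import chaospy as cp

# Univariate distributions for each uncertain parameter...
gbar_na = cp.Uniform(100, 140)
gbar_k = cp.Uniform(30, 42)

# ...joined into one multivariate distribution.
distribution = cp.J(gbar_na, gbar_k)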

create_mask(evaluations)[source]

Create a mask for the evaluations, masking out evaluations that do not give results (i.e., that are numpy.nan or None).

Parameters:evaluations (array_like) – Evaluations for the model.
Returns:
  • masked_evaluations (list) – The evaluations that have results (not numpy.nan or None).
  • mask (boolean array) – The mask itself, used to create the masked arrays.
create_masked_evaluations(data, feature)[source]

Mask out all model and feature evaluations that do not give results (i.e., that are numpy.nan or None).

Parameters:
  • data (Data) – A Data object with evaluations for the model and each feature. Must contain data[feature].evaluations.
  • feature (str) – Name of the feature or model to mask.
Returns:

  • masked_evaluations (list) – The evaluations that have results (not numpy.nan or None).
  • mask (boolean array) – The mask itself, used to create the masked arrays.

create_masked_nodes(data, feature, nodes)[source]

Mask out all model and feature evaluations that do not give results (i.e., that are numpy.nan or None), along with the corresponding nodes.

Parameters:
  • data (Data) – A Data object with evaluations for the model and each feature. Must contain data[feature].evaluations.
  • feature (str) – Name of the feature or model to mask.
  • nodes (array_like) – The nodes used to evaluate the model.
Returns:

  • masked_evaluations (array_like) – The evaluations which have results.
  • mask (boolean array) – The mask itself, used to create the masked arrays.
  • masked_nodes (array_like) – The nodes that correspond to the evaluations with results.

create_masked_nodes_weights(data, feature, nodes, weights)[source]

Mask out all model and feature evaluations that do not give results (i.e., that are numpy.nan or None), along with the corresponding nodes and weights.

Parameters:
  • data (Data) – A Data object with evaluations for the model and each feature. Must contain data[feature].evaluations.
  • nodes (array_like) – The nodes used to evaluate the model.
  • feature (str) – Name of the feature or model to mask.
  • weights (array_like) – Weights corresponding to each node.
Returns:

  • masked_evaluations (array_like) – The evaluations which have results.
  • mask (boolean array) – The mask itself, used to create the masked arrays.
  • masked_nodes (array_like) – The nodes that correspond to the evaluations with results.
  • masked_weights (array_like) – Masked weights that correspond to evaluations with results.

custom_uncertainty_quantification

A custom uncertainty quantification method. Must follow the requirements below; a minimal skeleton is sketched below.

Parameters:
  • self (UncertaintyCalculation) – An explicit self is required as the first argument. self can be used inside the custom function.
  • **kwargs – Any number of optional arguments.
Returns:

data – A Data object with calculated uncertainties.

Return type:

Data

Notes

Useful methods in Uncertainpy are:

  1. uncertainpy.core.UncertaintyCalculations.convert_uncertain_parameters - Converts uncertain parameters to an allowed list.
  2. uncertainpy.core.UncertaintyCalculations.create_distribution - Creates the uncertain parameter distribution.
  3. uncertainpy.core.RunModel.run - Runs the model and all features.

See also

uncertainpy.Data

uncertainpy.core.UncertaintyCalculations.convert_uncertain_parameters
Converts uncertain parameters to an allowed list
uncertainpy.core.UncertaintyCalculations.create_distribution
Create uncertain parameter distribution
uncertainpy.core.RunModel.run
Runs the model
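
The following is a minimal skeleton of such a custom method; the sample count and the omitted statistics calculation are placeholders:

def custom_uncertainty_quantification(self, **kwargs):
    # Use all uncertain parameters and their joint distribution.
    uncertain_parameters = self.convert_uncertain_parameters(None)
    distribution = self.create_distribution(uncertain_parameters)

    # Evaluate the model and all features for a set of parameter samples.
    nodes = distribution.sample(10**3, rule="hammersley")
    data = self.runmodel.run(nodes, uncertain_parameters)

    # Calculate statistical metrics from the evaluations and add
    # them to data (omitted here).

    return data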
dependent(distribution)[source]

Check if a distribution is dependent or not.

Parameters:distribution (chaospy.Dist) – A Chaospy probability distribution.
Returns:dependent – True if the distribution is dependent, False if it is independent.
Return type:bool
features

Features to calculate from the model result.

Parameters:new_features ({None, Features or Features subclass instance, list of feature functions}) – Features to calculate from the model result. If None, no features are calculated. If list of feature functions, all will be calculated.
Returns:features – Features to calculate from the model result. If None, no features are calculated.
Return type:{None, Features object}
mc_calculate_sobol(evaluations, nr_uncertain_parameters, nr_samples)[source]

Calculate the Sobol indices.

Parameters:
  • evaluations (array_like) – The model evaluations, evaluated for the samples created by SALib.sample.saltelli.
  • nr_uncertain_parameters (int) – Number of uncertain parameters.
  • nr_samples (int) – Number of samples used in the Monte Carlo sampling.
Returns:

  • sobol_first (list) – The first order Sobol indices for each uncertain parameter.
  • sobol_total (list) – The total order Sobol indices for each uncertain parameter.

model

Model to perform uncertainty quantification on. For requirements see Model.run.

Parameters:new_model ({None, Model or Model subclass instance, model function}) – Model to perform uncertainty quantification on.
Returns:model – Model to perform uncertainty quantification on.
Return type:Model or Model subclass instance
monte_carlo(uncertain_parameters=None, nr_samples=10000, seed=None, allow_incomplete=True)[source]

Perform an uncertainty quantification using the quasi-Monte Carlo method.

Parameters:
  • uncertain_parameters ({None, str, list}, optional) – The uncertain parameter(s) to use when creating the polynomial approximation. If None, all uncertain parameters are used. Default is None.
  • nr_samples (int, optional) – Number of samples for the quasi-Monte Carlo sampling. Default is 10**4.
  • seed (int, optional) – Set a random seed. If None, no seed is set. Default is None.
  • allow_incomplete (bool, optional) – If the uncertainty quantification should be performed for features or models with incomplete evaluations. Default is True.
Returns:

data – A data object with all model and feature evaluations, as well as all calculated statistical metrics.

Return type:

Data

Raises:

ValueError – If a common multivariate distribution is given in Parameters.distribution and not all uncertain parameters are used.

Notes

The returned data should contain the following:

  1. data["model/features"].evaluations
  2. data["model/features"].time
  3. data["model/features"].labels
  4. data.model_name
  5. data.incomplete
  6. data.method
  7. data.errored
  8. data["model/features"].mean
  9. data["model/features"].variance
  10. data["model/features"].percentile_5
  11. data["model/features"].percentile_95
  12. data["model/features"].sobol_first, if more than 1 parameter
  13. data["model/features"].sobol_total, if more than 1 parameter
  14. data["model/features"].sobol_first_average, if more than 1 parameter
  15. data["model/features"].sobol_total_average, if more than 1 parameter

In the quasi-Monte Carlo method we quasi-randomly draw (nr_samples/2)*(nr_uncertain_parameters + 2) parameter samples (nr_samples=10**4 by default) using Saltelli’s sampling scheme ([1]). We require this number of samples to be able to calculate the Sobol indices. We evaluate the model for each of these parameter samples and calculate the features from each of the model results. This step is performed in parallel to speed up the calculations. Then we use nr_samples of the model and feature results to calculate the mean, variance, and 5th and 95th percentile for the model and each feature. Lastly, we use all calculated model and feature results to calculate the Sobol indices using Saltelli’s approach.
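
The sampling step can be sketched with SALib directly. The parameter names and bounds below are hypothetical, and Uncertainpy maps the samples through the actual parameter distributions rather than plain uniform bounds:

from SALib.sample import saltelli

# Hypothetical problem definition with two uncertain parameters.
problem = {
    "num_vars": 2,
    "names": ["gbar_na", "gbar_k"],
    "bounds": [[100, 140], [30, 42]],
}

# Saltelli's scheme expands nr_samples/2 base samples into
# (nr_samples/2)*(num_vars + 2) parameter samples.
nr_samples = 10**4
parameter_samples = saltelli.sample(problem, nr_samples // 2,
                                    calc_second_order=False)
# parameter_samples.shape == ((nr_samples//2) * (2 + 2), 2)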

References

[1] Saltelli, A., P. Annoni, I. Azzini, F. Campolongo, M. Ratto, and S. Tarantola (2010). “Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index.” Computer Physics Communications, 181(2):259-270, doi:10.1016/j.cpc.2009.09.018.
parameters

Model parameters.

Parameters:new_parameters ({None, Parameters instance, list of Parameter instances, list [[name, value, distribution], …]}) – Either None, a Parameters instance or a list of the parameters that should be created. The two lists are similar to the arguments sent to Parameters. Default is None.
Returns:parameters – Parameters of the model. If None, no parameters have been set.
Return type:{None, Parameters}
polynomial_chaos(method=u'collocation', rosenblatt=u'auto', uncertain_parameters=None, polynomial_order=4, nr_collocation_nodes=None, quadrature_order=None, nr_pc_mc_samples=10000, allow_incomplete=True, seed=None, **custom_kwargs)[source]

Perform an uncertainty quantification and sensitivity analysis using polynomial chaos expansions.

Parameters:
  • method ({“collocation”, “spectral”, “custom”}, optional) – The method to use when creating the polynomial chaos approximation. “collocation” is the point collocation method, “spectral” is pseudo-spectral projection, and “custom” is the custom polynomial method. Default is “collocation”.
  • rosenblatt ({“auto”, bool}, optional) – If the Rosenblatt transformation should be used. The Rosenblatt transformation must be used if the uncertain parameters have dependent variables. If “auto”, the Rosenblatt transformation is used if there are dependent parameters, and not used if the parameters have independent distributions. Default is “auto”.
  • uncertain_parameters ({None, str, list}, optional) – The uncertain parameter(s) to use when creating the polynomial approximation. If None, all uncertain parameters are used. Default is None.
  • polynomial_order (int, optional) – The polynomial order of the polynomial approximation. Default is 4.
  • nr_collocation_nodes ({int, None}, optional) – The number of collocation nodes to choose, if point collocation is used. If None, nr_collocation_nodes = 2 * number of expansion factors + 2. Default is None.
  • quadrature_order ({int, None}, optional) – The order of the Leja quadrature method, if pseudo-spectral projection is used. If None, quadrature_order = polynomial_order + 2. Default is None.
  • nr_pc_mc_samples (int, optional) – Number of samples for the Monte Carlo sampling of the polynomial chaos approximation. Default is 10**4.
  • allow_incomplete (bool, optional) – If the polynomial approximation should be performed for features or models with incomplete evaluations. Default is True.
  • seed (int, optional) – Set a random seed. If None, no seed is set. Default is None.
Returns:

data – A data object with all model and feature values, as well as all calculated statistical metrics.

Return type:

Data

Raises:
  • ValueError – If a common multivariate distribution is given in Parameters.distribution and not all uncertain parameters are used.
  • ValueError – If method is not one of “collocation”, “spectral” or “custom”.
  • NotImplementedError – If “custom” is chosen and the custom polynomial chaos method (create_PCE_custom) has not been implemented.

Notes

The returned data should contain the following:

  1. data["model/features"].evaluations
  2. data["model/features"].time
  3. data["model/features"].labels
  4. data.model_name
  5. data.incomplete
  6. data.method
  7. data.errored
  8. data["model/features"].mean
  9. data["model/features"].variance
  10. data["model/features"].percentile_5
  11. data["model/features"].percentile_95
  12. data["model/features"].sobol_first, if more than 1 parameter
  13. data["model/features"].sobol_total, if more than 1 parameter
  14. data["model/features"].sobol_first_average, if more than 1 parameter
  15. data["model/features"].sobol_total_average, if more than 1 parameter

The model and feature do not necessarily give results for each node. The collocation method is robust towards missing values as long as the number of results that remain is high enough. The pseudo-spectral method, on the other hand, is sensitive to missing values, so allow_incomplete should be used with care in that case.

The polynomial chaos expansion method for uncertainty quantification approximates the model with a polynomial that follows specific requirements. This polynomial can be used to quickly calculate the uncertainty and sensitivity of the model.

To create the polynomial chaos expansion we first find the polynomials using the three-term recurrence relation if available; otherwise the discretized Stieltjes method is used. Then we use point collocation or pseudo-spectral projection to find the expansion coefficients for the model and each feature of the model.

In point collocation we require the polynomial approximation to equal the model at a set of collocation nodes. This results in a set of linear equations for the polynomial coefficients, which we can solve. We choose nr_collocation_nodes collocation nodes with Hammersley sampling from the distribution. We evaluate the model and each feature in parallel, and solve the resulting set of linear equations with Tikhonov regularization.

Pseudo-spectral projection is based on least squares minimization and finds the expansion coefficients through numerical integration. The integration uses a quadrature scheme with weights and nodes. We use Leja quadrature with Smolyak sparse grids to reduce the number of nodes required. For each of the nodes we evaluate the model and calculate the features, and the polynomial approximation is created from these results.

If we have dependent uncertain parameters we must use the Rosenblatt transformation. We use the Rosenblatt transformation to transform from dependent to independent variables before we create the polynomial chaos expansion. We first find the polynomials from the independent distributions using the three-term recurrence relation if available; otherwise the discretized Stieltjes method is used.

Both pseudo-spectral projection and point collocation are performed using the independent distribution; the only difference is that we use the Rosenblatt transformation to transform the nodes from the independent distribution to the dependent distribution.

separate_output_values(evaluations, nr_uncertain_parameters, nr_samples)[source]

Separate the output from the model evaluations, evaluated for the samples created by SALib.sample.saltelli.

Parameters:
  • evaluations (array_like) – The model evaluations, evaluated for the samples created by SALib.sample.saltelli.
  • nr_uncertain_parameters (int) – Number of uncertain parameters.
  • nr_samples (int) – Number of samples used in the Monte Carlo sampling.
Returns:

  • A (array_like) – The A sample matrix from Saltelli et al. (2010).
  • B (array_like) – The B sample matrix from Saltelli et al. (2010).
  • AB (array_like) – The AB sample matrix from Saltelli et al. (2010).

Notes

Adapted from SALib/analyze/sobol.py:

https://github.com/SALib/SALib/blob/master/SALib/analyze/sobol.py