Welcome to Uncertainpy’s documentation!¶
Uncertainpy is a Python toolbox for uncertainty quantification and sensitivity analysis, tailored towards computational neuroscience.
Uncertainpy is model independent: it treats the model as a black box, so the model itself can be left unchanged. Uncertainpy implements both quasi-Monte Carlo methods and polynomial chaos expansions, using either point collocation or the pseudo-spectral method. Both polynomial chaos expansion methods support the Rosenblatt transformation to handle dependent input parameters.
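To make the idea of treating a model as a black box concrete, the sketch below propagates parameter uncertainty through a toy exponential-decay model with plain Monte Carlo sampling. This is a minimal illustration of the underlying principle only; the model, parameter ranges, and variable names are assumptions and do not use Uncertainpy's API.

```python
import numpy as np

# Toy "black box" model: y(t) = a * exp(-b * t), with uncertain a and b.
rng = np.random.default_rng(42)
n_samples = 1000

# Sample the uncertain parameters from uniform distributions
a = rng.uniform(0.9, 1.1, n_samples)
b = rng.uniform(0.1, 0.3, n_samples)

t = np.linspace(0, 10, 50)
# Evaluate the model once per parameter sample (rows: samples, cols: time)
outputs = a[:, None] * np.exp(-b[:, None] * t[None, :])

mean = outputs.mean(axis=0)      # expected model output at each time point
variance = outputs.var(axis=0)   # uncertainty at each time point
```

Polynomial chaos expansions reach comparable accuracy with far fewer model evaluations than this brute-force sampling, which is why Uncertainpy offers them as the default approach.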
Uncertainpy is feature based: where applicable, it recognizes and calculates the uncertainty in features of the model, as well as in the model output itself. Examples of features in neuroscience are spike timing and the shape of the action potential.
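A feature in this sense is simply a function that reduces a model output to a quantity of interest, whose uncertainty can then be analyzed in its own right. The sketch below extracts one such feature, a spike count from a voltage-like trace, via upward threshold crossings. The function name, threshold, and synthetic trace are illustrative assumptions, not Uncertainpy code.

```python
import numpy as np

def spike_count(voltage, threshold=0.0):
    """Count upward crossings of `threshold` in a 1D trace (a toy feature)."""
    above = voltage > threshold
    # A spike starts where the trace crosses the threshold from below
    crossings = np.flatnonzero(~above[:-1] & above[1:])
    return len(crossings)

# Synthetic trace: three oscillation cycles, each briefly exceeding 0
t = np.linspace(0, 1, 1000)
trace = np.sin(2 * np.pi * 3 * t) - 0.5

n_spikes = spike_count(trace)  # → 3
```

In an uncertainty analysis, such a feature would be evaluated on the model output for every parameter sample, yielding a distribution over spike counts rather than over raw traces.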
Uncertainpy is tailored towards neuroscience models and comes with several common neuroscience models and features built in, but new models and features can easily be implemented. Note that although Uncertainpy is tailored towards neuroscience, the implemented methods are general, so Uncertainpy can be applied to many other types of models and features in other fields.
The Uncertainpy paper can be found here: Tennøe S, Halnes G, and Einevoll GT (2018) Uncertainpy: A Python Toolbox for Uncertainty Quantification and Sensitivity Analysis in Computational Neuroscience. Front. Neuroinform. 12:49. doi: 10.3389/fninf.2018.00049.
This is a collection of examples demonstrating the use of Uncertainpy in a few different case studies.
Content of Uncertainpy¶
This section documents the contents of Uncertainpy, with instructions for how to use all classes and functions, along with their API.
Here we give an overview of the theory behind uncertainty quantification and sensitivity analysis with a focus on (quasi-)Monte Carlo methods and polynomial chaos expansions, the methods implemented in Uncertainpy.