Math¶
This submodule contains various mathematical functions. Most of them are imported directly from theano.tensor (see there for more details). Doing any kind of math with PyMC3 random variables, or defining custom likelihoods or priors, requires you to use these Theano expressions rather than NumPy or plain Python code.

Return a symbolic dot product. 

Return a TensorConstant with value x. 

Return a copy of the array collapsed into one dimension. 

Equivalent of numpy.zeros_like.
 Parameters
 model : tensor
 dtype : datatype, optional
 opt : bool
If True, return a constant instead of a graph when possible. Useful for Theano optimization, not for users building a graph, since it means model is not always in the graph.

Equivalent of numpy.ones_like.
 Parameters
 model : tensor
 dtype : datatype, optional
 opt : bool
If True, return a constant instead of a graph when possible. Useful for Theano optimization, not for users building a graph, since it means model is not always in the graph.

Stack tensors in sequence on given axis (default is 0). 



Computes the sum along the given axis(es) of a tensor input. 

Computes the product along the given axis(es) of a tensor input. 

a < b 

a > b 

a <= b 

a >= b 

a == b 

a != b 

if cond then ift else iff 

Clip x to be between min and max. 

if cond then ift else iff 

bitwise a & b 

bitwise a | b 



e^`a` 

base e logarithm of a 

cosine of a 

sine of a 

tangent of a 

hyperbolic cosine of a 

hyperbolic sine of a 

hyperbolic tangent of a 

square of a 

square root of a 

error function 

inverse error function 

Return a symbolic dot product. 

elemwise maximum. 

elemwise minimum. 

sign of a 

ceiling of a 

floor of a 

Matrix determinant. 

Computes the inverse of a matrix \(A\). 

Return specified diagonals. 

Shorthand for product between several dots. 

Returns the sum of diagonal elements of matrix X. 

Generalizes a scalar op to tensors. 



The inverse of the logit function, 1 / (1 + exp(-x)). 

 class pymc3.math.BatchedDiag¶
Fast BatchedDiag allocation
 grad(inputs, gout)¶
Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
 Parameters
 inputs : list of Variable
The input variables.
 output_grads : list of Variable
The gradients of the output variables.
 Returns
 grads : list of Variable
The gradients with respect to each Variable in inputs.
 make_node(diag)¶
Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by subclasses.
 Returns
 node: Apply
The constructed Apply node.
 perform(node, ins, outs, params=None)¶
Calculate the function on the inputs and put the variables in the output storage.
 Parameters
 node : Apply
The symbolic Apply node that represents this computation.
 inputs : Sequence
Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
 output_storage : list of list
List of mutable single-element lists (do not change the length of these lists). Each sublist corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sublists.
 params : tuple
A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such preset values were produced by a previous call to this Op's perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
 class pymc3.math.BlockDiagonalMatrix(sparse=False, format='csr')¶
 grad(inputs, gout)¶
Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
 Parameters
 inputs : list of Variable
The input variables.
 output_grads : list of Variable
The gradients of the output variables.
 Returns
 grads : list of Variable
The gradients with respect to each Variable in inputs.
 make_node(*matrices)¶
Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by subclasses.
 Returns
 node: Apply
The constructed Apply node.
 perform(node, inputs, output_storage, params=None)¶
Calculate the function on the inputs and put the variables in the output storage.
 Parameters
 node : Apply
The symbolic Apply node that represents this computation.
 inputs : Sequence
Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
 output_storage : list of list
List of mutable single-element lists (do not change the length of these lists). Each sublist corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sublists.
 params : tuple
A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such preset values were produced by a previous call to this Op's perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
 class pymc3.math.LogDet¶
Compute the logarithm of the absolute determinant of a square matrix M, log(abs(det(M))), on the CPU. Avoids det(M) overflow/underflow.
Notes
Once PR #3959 (https://github.com/Theano/Theano/pull/3959/) by harpone is merged, this must be removed.
 grad(inputs, g_outputs)¶
Construct a graph for the gradient with respect to each input variable.
Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output. If the output is not differentiable with respect to an input, then this method should return an instance of type NullType for that input.
 Parameters
 inputs : list of Variable
The input variables.
 output_grads : list of Variable
The gradients of the output variables.
 Returns
 grads : list of Variable
The gradients with respect to each Variable in inputs.
 make_node(x)¶
Construct an Apply node that represents the application of this operation to the given inputs.
This must be implemented by subclasses.
 Returns
 node: Apply
The constructed Apply node.
 perform(node, inputs, outputs, params=None)¶
Calculate the function on the inputs and put the variables in the output storage.
 Parameters
 node : Apply
The symbolic Apply node that represents this computation.
 inputs : Sequence
Immutable sequence of non-symbolic/numeric inputs. These are the values of each Variable in node.inputs.
 output_storage : list of list
List of mutable single-element lists (do not change the length of these lists). Each sublist corresponds to the value of each Variable in node.outputs. The primary purpose of this method is to set the values of these sublists.
 params : tuple
A tuple containing the values of each entry in __props__.
Notes
The output_storage list might contain data. If an element of output_storage is not None, it has to be of the right type; for instance, for a TensorVariable, it has to be a NumPy ndarray with the right number of dimensions and the correct dtype. Its shape and stride pattern can be arbitrary. It is not guaranteed that such preset values were produced by a previous call to this Op's perform; they could have been allocated by another Op's perform method. An Op is free to reuse output_storage as it sees fit, or to discard it and allocate new memory.
 pymc3.math.block_diagonal(matrices, sparse=False, format='csr')¶
See scipy.sparse.block_diag or scipy.linalg.block_diag for reference.
 Parameters
 matrices : tensors
 format : str (default 'csr')
Must be one of: 'csr', 'csc'.
 sparse : bool (default False)
If True, return the result in sparse format.
 Returns
 matrix
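For intuition, the dense case can be sketched in plain NumPy (an illustrative stand-in; the pymc3 op also accepts Theano tensors and can return a sparse result):

```python
import numpy as np

def block_diag_dense(*matrices):
    """Assemble 2-D arrays into one dense block-diagonal matrix."""
    rows = sum(m.shape[0] for m in matrices)
    cols = sum(m.shape[1] for m in matrices)
    out = np.zeros((rows, cols))
    r = c = 0
    for m in matrices:
        out[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]
        c += m.shape[1]
    return out

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0]])
C = block_diag_dense(A, B)
# C == [[1, 2, 0],
#       [3, 4, 0],
#       [0, 0, 5]]
```

scipy.linalg.block_diag produces the same dense layout.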
 pymc3.math.cartesian(*arrays)¶
Makes the Cartesian product of arrays.
 Parameters
 arrays : N-D array-like
N-D arrays where earlier arrays loop more slowly than later ones.
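The "earlier arrays loop more slowly" ordering matches itertools.product; a small check (illustrative, not pymc3's implementation):

```python
import itertools
import numpy as np

a = np.array([1, 2])
b = np.array([10, 20, 30])

# The first array varies slowest, the last array varies fastest.
cart = np.array(list(itertools.product(a, b)))
# rows: (1,10), (1,20), (1,30), (2,10), (2,20), (2,30)
```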
 pymc3.math.expand_packed_triangular(n, packed, lower=True, diagonal_only=False)¶
Convert a packed triangular matrix into a two-dimensional array.
Triangular matrices can be stored with better space efficiency by storing the non-zero values in a one-dimensional array. We number the elements by row like this (for lower or upper triangular matrices):
 [[0 - - -]     [[0 1 2 3]
  [1 2 - -]      [- 4 5 6]
  [3 4 5 -]      [- - 7 8]
  [6 7 8 9]]     [- - - 9]]
 Parameters
 n: int
The number of rows of the triangular matrix.
 packed: theano.vector
The matrix in packed format.
 lower: bool, default=True
If true, assume that the matrix is lower triangular.
 diagonal_only: bool
If true, return only the diagonal of the matrix.
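Assuming lower-triangular packing, the row-wise numbering above can be reproduced with a small NumPy sketch (a stand-in for the Theano implementation):

```python
import numpy as np

def expand_packed_lower(n, packed):
    """Unpack a 1-D array into an n x n lower-triangular matrix, row by row."""
    out = np.zeros((n, n))
    out[np.tril_indices(n)] = packed  # tril_indices walks the triangle row-major
    return out

L = expand_packed_lower(3, np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0]))
# L == [[0, 0, 0],
#      [1, 2, 0],
#      [3, 4, 5]]
```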
 pymc3.math.invlogit(x, eps=2.220446049250313e-16)¶
The inverse of the logit function, 1 / (1 + exp(-x)).
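A plain NumPy rendering of the same formula (ignoring the eps clipping in the signature above):

```python
import numpy as np

def invlogit(x):
    """Logistic sigmoid: maps the real line to the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

p = invlogit(0.0)  # 0.5: the sigmoid is centered at zero
```

Note that invlogit(x) + invlogit(-x) = 1, the symmetry used when inverting the logit.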
 pymc3.math.kron_diag(*diags)¶
Returns diagonal of a kronecker product.
 Parameters
 diags: 1D arrays
The diagonals of matrices that are to be Kroneckered
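The point of this function is that the full Kronecker product never needs to be formed; a NumPy check of the underlying identity (illustrative only):

```python
import numpy as np

d1 = np.array([1.0, 2.0])
d2 = np.array([3.0, 4.0, 5.0])

# The diagonal of a Kronecker product of diagonal matrices ...
full = np.diag(np.kron(np.diag(d1), np.diag(d2)))
# ... equals the Kronecker product of the diagonals themselves.
fast = np.kron(d1, d2)
# both are [3, 4, 5, 6, 8, 10]
```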
 pymc3.math.kron_dot(krons, m, *, op=<function dot>)¶
Apply op to krons and m in a way that reproduces
op(kronecker(*krons), m)
 Parameters
 kronslist of square 2D arraylike objects
D square matrices \([A_1, A_2, ..., A_D]\) to be Kronecker’ed \(A = A_1 \otimes A_2 \otimes ... \otimes A_D\) Product of column dimensions must be \(N\)
 mNxM array or 1D array (treated as Nx1)
Object that krons act upon
 Returns
 numpy array
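The saving comes from never materializing \(A\). With row-major flattening, \((A \otimes B)\,\mathrm{vec}(V) = \mathrm{vec}(A V B^T)\); a NumPy check of that identity (illustrative, not pymc3's code):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(3, 3))
V = rng.normal(size=(2, 3))            # vec(V) has length 2 * 3 = 6

naive = np.kron(A, B) @ V.reshape(-1)  # builds the full 6 x 6 matrix
fast = (A @ V @ B.T).reshape(-1)       # never forms the Kronecker product
```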
 pymc3.math.kron_matrix_op(krons, m, op)¶
Apply op to krons and m in a way that reproduces
op(kronecker(*krons), m)
 Parameters
 kronslist of square 2D arraylike objects
D square matrices \([A_1, A_2, ..., A_D]\) to be Kronecker’ed \(A = A_1 \otimes A_2 \otimes ... \otimes A_D\) Product of column dimensions must be \(N\)
 mNxM array or 1D array (treated as Nx1)
Object that krons act upon
 Returns
 numpy array
 pymc3.math.kron_solve_lower(krons, m, *, op=Solve{('lower_triangular', True, False, False)})¶
Apply op to krons and m in a way that reproduces
op(kronecker(*krons), m)
 Parameters
 kronslist of square 2D arraylike objects
D square matrices \([A_1, A_2, ..., A_D]\) to be Kronecker’ed \(A = A_1 \otimes A_2 \otimes ... \otimes A_D\) Product of column dimensions must be \(N\)
 mNxM array or 1D array (treated as Nx1)
Object that krons act upon
 Returns
 numpy array
 pymc3.math.kron_solve_upper(krons, m, *, op=Solve{('upper_triangular', False, False, False)})¶
Apply op to krons and m in a way that reproduces
op(kronecker(*krons), m)
 Parameters
 kronslist of square 2D arraylike objects
D square matrices \([A_1, A_2, ..., A_D]\) to be Kronecker’ed \(A = A_1 \otimes A_2 \otimes ... \otimes A_D\) Product of column dimensions must be \(N\)
 mNxM array or 1D array (treated as Nx1)
Object that krons act upon
 Returns
 numpy array
 pymc3.math.kronecker(*Ks)¶
 Return the Kronecker product of arguments:
\(K_1 \otimes K_2 \otimes ... \otimes K_D\)
 Parameters
 KsIterable of 2D arraylike
Arrays of which to take the product.
 Returns
 np.ndarray :
Block matrix Kronecker product of the argument matrices.
 pymc3.math.log1mexp(x)¶
Return log(1 - exp(-x)).
This function is numerically more stable than the naive approach.
For details, see https://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf
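The stable branch for small x can be sketched with expm1, which avoids the cancellation in 1 - exp(-x) (this mirrors the idea in the note above, not pymc3's exact implementation):

```python
import math
import numpy as np

def log1mexp_small(x):
    """log(1 - exp(-x)) for small positive x, via expm1 to keep precision."""
    return np.log(-np.expm1(-x))

x = 1e-10
stable = log1mexp_small(x)        # approximately log(x) for tiny x
naive = np.log(1.0 - np.exp(-x))  # 1 - exp(-x) cancels badly here
```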
 pymc3.math.log1mexp_numpy(x)¶
Return log(1 - exp(-x)). This function is numerically more stable than the naive approach. For details, see https://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf
 pymc3.math.log1pexp(x)¶
Return log(1 + exp(x)), also called softplus.
This function is numerically more stable than the naive approach.
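For large x the naive form overflows while the softplus value is essentially x; a NumPy illustration using logaddexp as a stable stand-in:

```python
import numpy as np

x = 1000.0
with np.errstate(over="ignore"):
    naive = np.log(1.0 + np.exp(x))  # exp(1000) overflows to inf
stable = np.logaddexp(0.0, x)        # log(e^0 + e^x), computed stably
# naive is inf; stable is 1000.0
```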
 pymc3.math.logdiffexp(a, b)¶
Return log(exp(a) - exp(b)).
 pymc3.math.logdiffexp_numpy(a, b)¶
Return log(exp(a) - exp(b)).
 pymc3.math.tround(*args, **kwargs)¶
Temporary function to silence round warning in Theano. Please remove when the warning disappears.