elicit.losses module#

elicit.losses.preprocess(elicited_statistics: Dict[str, Tensor]) → Dict[str, Tensor][source]#

Preprocess elicited statistics such that they have the required format for computing the individual losses between expert- and simulated statistics.

Parameters:
elicited_statistics : dict

dictionary including the elicited statistics.

Returns:
preprocessed_elicits : dict

dictionary including all preprocessed elicited statistics which will enter the loss function to compute the individual loss components.

Raises:
AssertionError

elicited_statistics may only have two dimensions (i.e., must be a tensor of rank 2)

elicit.losses.indiv_loss(elicit_expert: Dict[str, Tensor], elicit_training: Dict[str, Tensor], targets: List[Target]) → List[Tensor][source]#

Computes the individual loss between expert data and model-simulated data.

Parameters:
elicit_expert : dict

dictionary including all preprocessed elicited statistics

elicit_training : dict

dictionary including all preprocessed model statistics

targets : list

user input from elicit.elicit.target()

Returns:
indiv_losses : list

list of individual losses for each loss component
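Conceptually, the function pairs each expert component with its simulated counterpart and applies the component's loss function. A minimal pure-Python sketch (the real function operates on tf.Tensors and reads each component's loss function from the Target objects; the scalar components and squared-error losses below are illustrative assumptions, not the library's internals):

```python
def indiv_loss_sketch(elicit_expert, elicit_training, loss_fns):
    # One loss value per elicited-statistics component; the component
    # keys of the expert and training dictionaries must match.
    return [
        loss_fns[key](elicit_expert[key], elicit_training[key])
        for key in elicit_expert
    ]

# Illustrative squared-error "loss" on scalar components
losses = indiv_loss_sketch(
    {"quantiles": 2.0, "moments": 1.0},
    {"quantiles": 1.5, "moments": 1.0},
    {"quantiles": lambda a, b: (a - b) ** 2,
     "moments": lambda a, b: (a - b) ** 2},
)
```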

elicit.losses.total_loss(elicit_training: Dict[str, Tensor], elicit_expert: Dict[str, Tensor], targets: List[Target]) → Tuple[Tensor, List[Tensor], Dict[str, Tensor], Dict[str, Tensor]][source]#

Computes the weighted average across all individual losses between expert data and model simulations.

Parameters:
elicit_training : dict

elicited statistics simulated by the model.

elicit_expert : dict

elicited statistics as queried from the expert.

targets : list

user input from elicit.elicit.target()

Returns:
loss : float

weighted average across individual losses quantifying the discrepancy between expert data and model simulations.

individual_losses : list

list of individual losses for each loss component.

elicit_expert_prep : dict

dictionary including all preprocessed expert elicited statistics.

elicit_training_prep : dict

dictionary including all preprocessed model-simulated elicited statistics.
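The reduction from individual losses to the total loss can be sketched as a weighted average (the weights here stand in for the per-target weights from elicit.elicit.target(); their values and the exact weighting scheme are assumptions for illustration):

```python
def total_loss_sketch(indiv_losses, weights):
    # Weighted average of the individual loss components.
    return sum(w, ) if False else sum(
        w * l for w, l in zip(weights, indiv_losses)
    ) / sum(weights)

loss = total_loss_sketch([0.25, 0.75], [1.0, 1.0])  # equal weights -> plain mean
```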

elicit.losses.L2(loss_component_expert: Tensor, loss_component_training: Tensor, axis: int | None = None, ord: str | int = 'euclidean') → Tensor[source]#

Wrapper around tf.norm that computes the norm of the difference between two tensors along the specified axis. Used for the correlation loss when priors are assumed to be independent.

Parameters:
loss_component_expert : tf.Tensor

preprocessed expert-elicited data

loss_component_training : tf.Tensor

preprocessed model-simulated data

axis : int, optional

Axis along which to compute the norm of the difference.

ord : int or str

Order of the norm. Supports ‘euclidean’ and other norms supported by tf.norm. Default is ‘euclidean’.
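For the default ord='euclidean', the result is the Euclidean norm of the elementwise difference. A dependency-free sketch of that case (the actual function delegates to tf.norm and operates on tf.Tensors; flat Python lists stand in for tensors here):

```python
import math

def l2_sketch(expert, training):
    # Euclidean norm of the elementwise difference between two
    # equally shaped (here: flat) sequences of statistics.
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(expert, training)))

d = l2_sketch([1.0, 2.0, 3.0], [1.0, 0.0, 3.0])  # -> 2.0
```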

class elicit.losses.MMD2(kernel: str = 'energy', **kwargs)[source]#

Bases: object

Computes the biased, squared maximum mean discrepancy

Parameters:
kernel : str, ("energy", "gaussian")

kernel type used for computing the MMD. When using a gaussian kernel an additional ‘sigma’ argument has to be passed. The default kernel is “energy”.

**kwargs

additional keyword arguments that might be required by the different individual kernels

Raises:
ValueError

If kernel is neither 'energy' nor 'gaussian'.

If kernel="gaussian" and no sigma argument is passed.

Examples

>>> el.losses.MMD2(kernel="energy")
>>> el.losses.MMD2(kernel="gaussian", sigma=1.0)
__init__(kernel: str = 'energy', **kwargs)[source]#

Computes the biased, squared maximum mean discrepancy

Parameters:
kernel : str, ("energy", "gaussian")

kernel type used for computing the MMD. When using a gaussian kernel an additional ‘sigma’ argument has to be passed. The default kernel is “energy”.

**kwargs

additional keyword arguments that might be required by the different individual kernels

Raises:
ValueError

If kernel is neither 'energy' nor 'gaussian'.

If kernel="gaussian" and no sigma argument is passed.

Examples

>>> el.losses.MMD2(kernel="energy")
>>> el.losses.MMD2(kernel="gaussian", sigma=1.0)
__call__(x: Tensor, y: Tensor) → Tensor[source]#

Computes the biased, squared maximum mean discrepancy of two samples

Parameters:
x : tensor, shape=[B, num_stats]

preprocessed expert-elicited statistics. Preprocessing refers to broadcasting expert data to same shape as model-simulated data.

y : tensor, shape=[B, num_stats]

model-simulated statistics corresponding to expert-elicited statistics

Returns:
MMD2_mean : tensor, shape=[]

Average biased, squared maximum mean discrepancy between expert-elicited and model-simulated data.
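For a single batch of 1-D samples and the energy kernel k(a, b) = -|a - b|, the biased, squared MMD can be sketched in plain Python (the class itself operates on batched tf.Tensors; the exact kernel form used here is an assumption for illustration):

```python
def mmd2_energy_sketch(x, y):
    # Biased, squared MMD: E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    # with the (negative-distance) energy kernel k(a, b) = -|a - b|.
    def mean_kernel(a, b):
        return sum(-abs(ai - bj) for ai in a for bj in b) / (len(a) * len(b))
    return mean_kernel(x, x) + mean_kernel(y, y) - 2.0 * mean_kernel(x, y)

mmd = mmd2_energy_sketch([0.0, 1.0], [0.0, 1.0])  # identical samples -> 0.0
```

Identical samples yield a discrepancy of zero, and the value grows as the two samples drift apart.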

clip(u: Tensor) → Tensor[source]#

Clips the value u from above and below to improve numerical stability.

Parameters:
u : tf.Tensor, shape=[B, num_stats, num_stats]

result of prior computation.

Returns:
u_clipped : tf.Tensor, shape=[B, num_stats, num_stats]

clipped u value with min=1e-8 and max=1e10.
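Elementwise, the clipping is a simple min/max bound using the values stated above; a scalar sketch (the method applies this to whole tensors):

```python
def clip_sketch(u, lo=1e-8, hi=1e10):
    # Bound a value from below and above for numerical stability,
    # using the min=1e-8 and max=1e10 limits from the docstring.
    return min(max(u, lo), hi)

clip_sketch(0.0)    # lower bound kicks in
clip_sketch(1e12)   # upper bound kicks in
```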

diag(xx: Tensor) → Tensor[source]#

Gets the diagonal elements of a batch of matrices, where the first tensor dimension indexes batches and is excluded from the diagonal extraction.

Parameters:
xx : tensor, shape=[B, num_stats, num_stats]

Similarity matrices with batch dimension in axis=0.

Returns:
diag : tensor, shape=[B, num_stats]

diagonal elements of each matrix in the batch.
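The per-batch diagonal extraction can be sketched with nested lists standing in for a [B, num_stats, num_stats] tensor (the method itself works on tf.Tensors):

```python
def diag_sketch(xx):
    # xx: nested lists standing in for shape [B, num_stats, num_stats];
    # returns shape [B, num_stats] with each matrix's diagonal,
    # leaving the leading batch dimension untouched.
    return [[mat[i][i] for i in range(len(mat))] for mat in xx]

diag_sketch([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])  # -> [[1, 4], [5, 8]]
```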

kernel(u: Tensor, kernel: str) → Tensor[source]#

Kernel used in MMD to compute discrepancy between samples.

Parameters:
u : tensor, shape=[B, num_stats, num_stats]

squared distance between samples.

kernel : str, ("energy", "gaussian")

name of kernel used for computing discrepancy.

Returns:
d : tensor, shape=[B, num_stats, num_stats]

discrepancy between samples.
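Applied elementwise to the squared distances u, the two kernels can be sketched as follows. The exact formulas — negative square root of u for "energy", exp(-u / (2·sigma²)) for "gaussian" — are common choices stated here as assumptions about the implementation, not confirmed by this reference:

```python
import math

def kernel_sketch(u, kernel="energy", sigma=1.0):
    # u: squared distance between two samples (a scalar here; the
    # method applies this elementwise to a [B, n, n] tensor).
    if kernel == "energy":
        return -math.sqrt(u)  # assumed form: negative distance
    if kernel == "gaussian":
        return math.exp(-u / (2.0 * sigma ** 2))  # assumed RBF form
    raise ValueError("kernel must be 'energy' or 'gaussian'")

kernel_sketch(4.0, "energy")         # -> -2.0
kernel_sketch(0.0, "gaussian", 1.0)  # -> 1.0
```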