elicit.optimization module#
- elicit.optimization.sgd_training(expert_elicited_statistics: Dict[str, Tensor], prior_model_init: Priors, trainer: Trainer, optimizer: Dict[str, Any], model: Dict[str, Any], targets: List[Target], parameters: List[Parameter], seed: int) → Tuple[dict, dict] [source]#
Wrapper that runs the optimization algorithms for E epochs.
- Parameters:
- expert_elicited_statistics : dict
expert data or simulated data representing a prespecified ground truth.
- prior_model_init : class instance
instance of a class that initializes and samples from the prior distributions.
- trainer : dict
dictionary including settings specified with elicit.elicit.trainer()
- optimizer : dict
dictionary including settings specified with elicit.elicit.optimizer()
- model : dict
dictionary including settings specified with elicit.elicit.model()
- targets : list[dict]
list of target quantities specified with elicit.elicit.target()
- parameters : list[dict]
list of model parameters specified with elicit.elicit.parameter()
- seed : int
internally used seed for reproducible results
- Returns:
- res_ep : dict
results saved for each epoch (history)
- output_res : dict
results saved for the last epoch (results)
- Raises:
- ValueError
Raised when training is stopped because the loss value is NaN.
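A minimal usage sketch follows. The keyword arguments match the signature above, but the surrounding variables (expert_stats, prior_model, trainer_spec, optimizer_spec, model_spec, target_list, parameter_list) are hypothetical placeholders for objects assumed to have been created beforehand with the elicit.elicit helpers referenced in the parameter list; their construction is not shown here.

```python
from elicit.optimization import sgd_training

# The specification objects below are assumed to have been built with the
# elicit.elicit helpers, e.g. trainer_spec = elicit.elicit.trainer(...),
# optimizer_spec = elicit.elicit.optimizer(...), model_spec = elicit.elicit.model(...),
# target_list = [elicit.elicit.target(...)], parameter_list = [elicit.elicit.parameter(...)].

history, results = sgd_training(
    expert_elicited_statistics=expert_stats,  # dict of expert data or simulated ground truth
    prior_model_init=prior_model,             # instance that initializes and samples the priors
    trainer=trainer_spec,
    optimizer=optimizer_spec,
    model=model_spec,
    targets=target_list,
    parameters=parameter_list,
    seed=2024,
)

# `history` holds the per-epoch results; `results` holds the output of the last epoch.
```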