API Reference

Core

class bauer.core.BaseModel(paradigm, save_trialwise_n_estimates=False)[source]

Bases: object

fit_map_individual(data=None, flat_prior=True, **kwargs)[source]

Fit MLE/MAP estimates for each subject independently (no pooling).

Loops over subjects, builds a non-hierarchical model on each subject’s data alone, and returns a DataFrame of point estimates in natural (transformed) scale.

Parameters:
  • data (pd.DataFrame or None) – Trial-level data with a ‘subject’ index level. If None, uses self.paradigm.

  • flat_prior (bool) – If True (default), uses a very wide prior (sigma=100), making this effectively maximum-likelihood estimation. If False, uses the model’s default prior.

  • **kwargs – Forwarded to pm.find_MAP.

Returns:

Index = subject, columns = free parameter names (transformed scale).

Return type:

pd.DataFrame
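The expected input layout can be sketched with plain pandas (the column names `x1`, `x2`, `choice` follow the PsychometricModel docstring below; the data values here are synthetic):

```python
import numpy as np
import pandas as pd

# Hypothetical trial-level data in the layout fit_map_individual expects:
# a 'subject' index level plus the task columns (here x1, x2, choice,
# as required by PsychometricModel).
rng = np.random.default_rng(0)
n_trials, subjects = 50, [1, 2, 3]
paradigm = pd.concat(
    {
        s: pd.DataFrame(
            {
                "x1": rng.normal(10, 2, n_trials),
                "x2": rng.normal(10, 2, n_trials),
                "choice": rng.integers(0, 2, n_trials).astype(bool),
            }
        )
        for s in subjects
    },
    names=["subject", "trial"],
)

# model = bauer.models.PsychometricModel(paradigm)
# estimates = model.fit_map_individual()  # index = subject, columns = parameters
print(paradigm.index.names)
```

The `fit_map_individual` call itself is commented out above since it requires a fitted bauer environment; the sketch only illustrates the DataFrame shape.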

class bauer.core.LapseModel(paradigm, save_trialwise_n_estimates=False)[source]

Bases: BaseModel

class bauer.core.RegressionModel(regressors=None)[source]

Bases: BaseModel

Psychometric models

class bauer.models.PsychometricModel(paradigm=None)[source]

Bases: BaseModel

Psychometric model for two-alternative forced choice with sensitivity and bias parameters.

Parameters nu (discrimination sensitivity, softplus-transformed) and bias (decision criterion) describe the probability of choosing option 2 given stimuli x1 and x2. Paradigm requires columns x1, x2, and choice.
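The docstring above implies a standard signal-detection form. A minimal numpy sketch follows; the logistic link and the exact parameterisation are assumptions for illustration, not taken from the package source:

```python
import numpy as np

def softplus(x):
    # numerically stable softplus; keeps the sensitivity parameter positive
    return np.logaddexp(0.0, x)

def p_choose_2(x1, x2, nu_untransformed, bias):
    """Illustrative psychometric curve: the probability of choosing option 2
    rises with (x2 - x1), scaled by sensitivity nu and shifted by bias.
    The logistic link is an assumption, not the package's exact form."""
    nu = softplus(nu_untransformed)
    return 1.0 / (1.0 + np.exp(-nu * (x2 - x1 - bias)))

# An unbiased observer at x1 == x2 is at chance:
print(p_choose_2(10.0, 10.0, nu_untransformed=1.0, bias=0.0))  # 0.5
```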

class bauer.models.PsychometricLapseModel(paradigm=None)[source]

Bases: LapseModel, PsychometricModel

PsychometricModel extended with a lapse rate parameter.

class bauer.models.PsychometricRegressionModel(paradigm, regressors, save_trialwise_estimates=False)[source]

Bases: RegressionModel, PsychometricModel

PsychometricModel with patsy formula regression on nu and/or bias.

class bauer.models.PsychometricLapseRegressionModel(paradigm, regressors, save_trialwise_estimates=False)[source]

Bases: LapseModel, PsychometricRegressionModel

PsychometricModel with both a lapse rate and patsy formula regression.

Magnitude comparison models

class bauer.models.MagnitudeComparisonModel(paradigm=None, fit_prior=False, fit_seperate_evidence_sd=True, memory_model='independent', save_trialwise_n_estimates=False)[source]

Bases: BaseModel

Bayesian observer model for two-alternative magnitude comparison (e.g. numerosity).

Choices between quantities n1 and n2 are modelled as Bayesian inference over log-scale representations corrupted by Gaussian noise. The prior is either estimated from the stimulus distribution (fit_prior=False) or treated as free parameters.

Parameters:
  • paradigm (pd.DataFrame, optional) – Must contain columns n1, n2, and choice.

  • fit_prior (bool) – If True, fit prior_mu and prior_sd as free parameters.

  • fit_seperate_evidence_sd (bool) – If True, fit separate noise parameters for n1 and n2 (or perceptual/memory noise when memory_model='shared_perceptual_noise').

  • memory_model ({'independent', 'shared_perceptual_noise'}) – Noise structure. 'independent' fits n1_evidence_sd and n2_evidence_sd separately. 'shared_perceptual_noise' decomposes into perceptual and memory noise.
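The "Bayesian inference over log-scale representations corrupted by Gaussian noise" described above reduces to textbook conjugate-Gaussian updating. A sketch of that math (not the package's internal API):

```python
import numpy as np

def log_posterior(x_noisy, prior_mu, prior_sd, evidence_sd):
    """Conjugate-Gaussian posterior over log-magnitude.
    Standard precision weighting: the posterior mean is a reliability-
    weighted average of the prior mean and the noisy log observation."""
    prior_prec = 1.0 / prior_sd**2
    lik_prec = 1.0 / evidence_sd**2
    post_var = 1.0 / (prior_prec + lik_prec)
    post_mu = post_var * (prior_prec * prior_mu + lik_prec * x_noisy)
    return post_mu, np.sqrt(post_var)

# With a tight prior (prior_sd < evidence_sd) the posterior is pulled
# toward the prior mean; with a broad prior it tracks the observation.
mu, sd = log_posterior(np.log(20.0), prior_mu=np.log(10.0),
                       prior_sd=0.1, evidence_sd=0.3)
```

This is why a narrow stimulus prior produces regression-to-the-mean in the model's predicted choices: noisy evidence for a large magnitude is shrunk toward the prior.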

class bauer.models.MagnitudeComparisonLapseModel(paradigm=None, fit_prior=False, fit_seperate_evidence_sd=True, memory_model='independent', save_trialwise_n_estimates=False)[source]

Bases: LapseModel, MagnitudeComparisonModel

MagnitudeComparisonModel extended with a lapse rate parameter.

class bauer.models.MagnitudeComparisonRegressionModel(paradigm, regressors, fit_prior=False, fit_seperate_evidence_sd=True, memory_model='independent', save_trialwise_estimates=False)[source]

Bases: RegressionModel, MagnitudeComparisonModel

MagnitudeComparisonModel with patsy formula regression on noise/prior parameters.

class bauer.models.MagnitudeComparisonLapseRegressionModel(paradigm, regressors, fit_prior=False, fit_seperate_evidence_sd=True, memory_model='independent', save_trialwise_estimates=False)[source]

Bases: LapseModel, MagnitudeComparisonRegressionModel

MagnitudeComparisonModel with both a lapse rate and patsy formula regression.

class bauer.models.FlexibleNoiseComparisonModel(paradigm, fit_seperate_evidence_sd=True, fit_prior=False, polynomial_order=5, memory_model='independent')[source]

Bases: BaseModel

Magnitude comparison model with stimulus-dependent noise parameterised by a polynomial spline.

Unlike MagnitudeComparisonModel, evidence noise is modelled as a polynomial function of log-magnitude, allowing the noise level to vary smoothly with stimulus size.

Parameters:
  • paradigm (pd.DataFrame) – Must contain columns n1, n2, and choice.

  • polynomial_order (int or tuple of int) – Order(s) of the polynomial for the noise curve (one per prospect when fit_seperate_evidence_sd=True).

  • memory_model ({'independent', 'shared_perceptual_noise'}) – Noise decomposition; see MagnitudeComparisonModel.

class bauer.models.FlexibleNoiseComparisonRegressionModel(paradigm, regressors, fit_seperate_evidence_sd=True, fit_prior=False, polynomial_order=5, memory_model='independent')[source]

Bases: RegressionModel, FlexibleNoiseComparisonModel

FlexibleNoiseComparisonModel with patsy formula regression on noise spline coefficients.

Risky choice models

class bauer.models.RiskModel(paradigm=None, prior_estimate='objective', fit_seperate_evidence_sd=True, incorporate_probability='after_inference', save_trialwise_n_estimates=False, memory_model='independent', n_prospects=2)[source]

Bases: BaseModel

Bayesian observer model for risky choice between two monetary lotteries.

Each lottery is characterised by a magnitude (n) and probability (p). The model infers posterior beliefs about magnitudes and computes the probability of choosing the second option via a Bayesian threshold comparison.

Parameters:
  • paradigm (pd.DataFrame, optional) – Must contain columns n1, n2, p1, p2, and choice.

  • prior_estimate ({'objective', 'shared', 'different', 'full', 'full_normed', 'klw', 'fix_prior_sd'}) – Strategy for estimating the magnitude prior.

  • fit_seperate_evidence_sd (bool) – Whether to fit separate noise for n1 and n2.

  • incorporate_probability ({'after_inference', 'before_inference'}) – Whether probabilities enter the comparison before or after Bayesian inference.

  • memory_model ({'independent', 'shared_perceptual_noise'}) – Noise structure; see MagnitudeComparisonModel.

class bauer.models.RiskLapseModel(paradigm=None, prior_estimate='objective', fit_seperate_evidence_sd=True, incorporate_probability='after_inference', save_trialwise_n_estimates=False, memory_model='independent', n_prospects=2)[source]

Bases: LapseModel, RiskModel

RiskModel extended with a lapse rate parameter.

class bauer.models.RiskRegressionModel(paradigm, regressors, prior_estimate='objective', fit_seperate_evidence_sd=True, incorporate_probability='after_inference', save_trialwise_n_estimates=False, memory_model='independent')[source]

Bases: RegressionModel, RiskModel

RiskModel with patsy formula regression on noise, prior, or bias parameters.

class bauer.models.RiskLapseRegressionModel(paradigm, regressors, prior_estimate='objective', fit_seperate_evidence_sd=True, incorporate_probability='after_inference', save_trialwise_n_estimates=False, memory_model='independent')[source]

Bases: LapseModel, RiskRegressionModel

RiskModel with both a lapse rate and patsy formula regression.

class bauer.models.ProspectTheoryModel(paradigm, save_trialwise_n_estimates=False)[source]

Bases: BaseModel

Classic Prospect Theory model for mixed (gain/loss) gambles.

Utility function: p * gain^alpha - (1-p) * lambda * loss^beta. Free parameters: alpha (gain sensitivity), beta (loss sensitivity), lambda (loss aversion coefficient). Paradigm requires columns gain, loss, prob_gain, and choice.
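The stated utility function translates directly into numpy (a sketch; parameter transforms used internally by the model are not documented here):

```python
import numpy as np

def pt_utility(gain, loss, prob_gain, alpha, beta, lam):
    """Subjective value of a mixed gamble under the utility function stated
    above: p * gain**alpha - (1 - p) * lambda * loss**beta."""
    return prob_gain * gain**alpha - (1.0 - prob_gain) * lam * loss**beta

# A loss-averse agent (lambda > 1) assigns negative value to a
# symmetric 50/50 mixed gamble:
u = pt_utility(gain=10.0, loss=10.0, prob_gain=0.5, alpha=1.0, beta=1.0, lam=2.0)
print(u)  # -5.0
```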

class bauer.models.LossAversionModel(paradigm=None, save_trialwise_n_estimates=False, magnitude_grid=None, ev_diff_grid=None, lapse_rate=0.01, normalize_likelihoods=True, paradigm_type='mixed_vs_mixed', fix_prior_sds=True)[source]

Bases: BaseModel

Bayesian observer model for risky choices with separate gain and loss representations.

Models perceptual noise and prior beliefs over gains and losses independently, integrating over a discrete grid of possible values to compute choice probabilities. Supports 'mixed_vs_mixed' (two lotteries) and 'mixed_vs_0' (lottery vs. sure zero) paradigm types.

class bauer.models.LossAversionRegressionModel(paradigm=None, save_trialwise_n_estimates=False, magnitude_grid=None, ev_diff_grid=None, lapse_rate=0.01, normalize_likelihoods=True, paradigm_type='mixed_vs_mixed', fix_prior_sds=True, regressors=None)[source]

Bases: RegressionModel, LossAversionModel

LossAversionModel with patsy formula regression on noise/prior parameters.

class bauer.models.RiskModelProbabilityDistortion(paradigm=None, magnitude_prior_estimate='objective', save_trialwise_n_estimates=False, n_prospects=2, p_grid_size=20, lapse_rate=0.01, distort_magnitudes=True, distort_probabilities=True, fix_magnitude_prior_sd=False, fix_probabiliy_prior_sd=False, estimate_magnitude_prior_mu=False)[source]

Bases: BaseModel

Risky choice model with Bayesian distortion of magnitudes and/or probabilities.

Computes the probability of choosing option 2 by integrating over posterior distributions of magnitudes and probabilities in log-odds space. Paradigm requires columns n1, n2, p1, p2, and choice.

class bauer.models.RNPModel(paradigm, risk_neutral_p=0.55)[source]

Bases: BaseModel

Risk Neutral Point model for risky choice based on an indifference probability.

Parameterises risk attitude through an rnp (risk neutral point) parameter that sets the probability threshold at which the agent is indifferent between a risky and a safe option, scaled by a slope parameter gamma.

class bauer.models.RNPRegressionModel(paradigm, regressors, risk_neutral_p=0.55)[source]

Bases: RegressionModel, RNPModel

RNPModel with patsy formula regression on rnp or gamma.

class bauer.models.FlexibleNoiseRiskModel(paradigm, prior_estimate='full', fit_seperate_evidence_sd=True, save_trialwise_n_estimates=False, polynomial_order=5, representational_noise='payoff', memory_model='independent')[source]

Bases: FlexibleNoiseComparisonModel, RiskModel

Risky choice model combining flexible (polynomial) noise with Bayesian magnitude inference.

class bauer.models.FlexibleNoiseRiskRegressionModel(paradigm, regressors, prior_estimate='full', fit_seperate_evidence_sd=True, save_trialwise_n_estimates=False, polynomial_order=5, representational_noise='payoff', memory_model='independent')[source]

Bases: RegressionModel, FlexibleNoiseRiskModel

FlexibleNoiseRiskModel with patsy formula regression on noise spline coefficients.

class bauer.models.ExpectedUtilityRiskModel(paradigm, save_trialwise_eu=False, probability_distortion=False, n_outcomes=1)[source]

Bases: BaseModel

Expected utility model for risky choice with optional probability distortion.

Computes expected utility for each lottery and converts the utility difference to a choice probability. Supports a single-outcome paradigm (n_outcomes=1) and a multi-outcome extension. Paradigm requires columns n1, n2, p1, p2, and choice.
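For the single-outcome case, the description above suggests the following structure. The power-utility form and the logistic link are assumptions for illustration; the source only states that a utility difference is converted to a choice probability:

```python
import numpy as np

def p_choose_2_eu(n1, p1, n2, p2, alpha, sensitivity):
    """Illustrative expected-utility comparison for single-outcome lotteries:
    power utility EU = p * n**alpha, with a logistic link on the EU
    difference (both the utility form and the link are assumptions)."""
    eu1 = p1 * n1**alpha
    eu2 = p2 * n2**alpha
    return 1.0 / (1.0 + np.exp(-sensitivity * (eu2 - eu1)))

# A risk-neutral agent (alpha = 1) facing equal expected values is at chance:
print(p_choose_2_eu(n1=10.0, p1=0.5, n2=5.0, p2=1.0, alpha=1.0, sensitivity=1.0))  # 0.5
```

With alpha < 1 (concave utility), the same agent would prefer the sure option (n2 = 5 at p2 = 1) over the gamble.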

class bauer.models.ExpectedUtilityRiskRegressionModel(paradigm, save_trialwise_eu, probability_distortion, regressors)[source]

Bases: RegressionModel, ExpectedUtilityRiskModel

ExpectedUtilityRiskModel with patsy formula regression on utility or noise parameters.

Utilities

bauer.utils.data.load_garcia2022(task='magnitude', remove_non_responses=True)[source]

Return behavioral data from Barreto Garcia et al. (2022) as a multi-indexed DataFrame.

Parameters:
  • task ({'magnitude', 'risk'}) – Which task dataset to load.

  • remove_non_responses (bool) – If True, drop trials with missing choices and cast the choice column to bool.

bauer.utils.data.load_dehollander2024(task='dotcloud', sessions=None, bids_folder='/data/ds-risk', symbolic_folder='/data/ds-symbolicrisk', remove_non_responses=True)[source]

Return behavioral data from de Hollander et al. (2024) as a multi-indexed DataFrame.

Parameters:
  • task ({'dotcloud', 'symbolic'}) – Which task dataset to load. 'dotcloud' uses the fMRI dot-cloud gamble task (ds-risk); 'symbolic' uses the Arabic-numeral behavioural task (ds-symbolicrisk).

  • sessions (list of str or None) – Sessions to include for the dotcloud task (default ['3t2', '7t2']). Ignored for the symbolic task.

  • bids_folder (str) – Root of the ds-risk BIDS dataset.

  • symbolic_folder (str) – Root of the ds-symbolicrisk dataset.

  • remove_non_responses (bool) – If True, drop trials with missing choices and cast the choice column to bool.

bauer.utils.bayes.get_posterior(mu1, sd1, mu2, sd2)[source]
bauer.utils.bayes.get_posterior_np(mu1, sd1, mu2, sd2)[source]
bauer.utils.bayes.get_diff_dist(mu1, sd1, mu2, sd2)[source]
bauer.utils.bayes.get_diff_dist_np(mu1, sd1, mu2, sd2)[source]
bauer.utils.bayes.cumulative_normal(x, mu, sd, s=sqrt(2.0))[source]
bauer.utils.bayes.summarize_ppc(ppc, groupby=None)[source]

Single-step PPC summary (legacy). Prefer summarize_ppc_group for group-level PPCs.
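How `get_diff_dist` and `cumulative_normal` compose can be sketched with standard Gaussian algebra: for independent Gaussians, means subtract and variances add, and the probability that one sample exceeds the other is the standard normal CDF of the standardised mean difference. The sign convention below is an assumption, not taken from the package source:

```python
import math

def p_greater(mu1, sd1, mu2, sd2):
    """P(sample from N(mu2, sd2) exceeds sample from N(mu1, sd1)),
    i.e. the cumulative normal of the difference distribution evaluated
    at zero. Sketch only; sign conventions are assumed."""
    diff_mu = mu2 - mu1
    diff_sd = math.sqrt(sd1**2 + sd2**2)
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(diff_mu / (diff_sd * math.sqrt(2.0))))

print(p_greater(0.0, 1.0, 0.0, 1.0))  # 0.5
```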

bauer.utils.math.logistic(x)[source]
bauer.utils.math.logistic_np(x)[source]
bauer.utils.math.softplus_np(x)[source]
bauer.utils.math.inverse_softplus_np(x)[source]
bauer.utils.math.logit(p)[source]
bauer.utils.math.logit_np(p)[source]
bauer.utils.math.logit_derivative(p)[source]
bauer.utils.math.gaussian_pdf(x, mean, std)[source]
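The `*_np` helpers above have standard definitions; the sketches below show those textbook forms (the package's exact numerical implementations may differ):

```python
import numpy as np

def logistic_np(x):
    return 1.0 / (1.0 + np.exp(-x))      # inverse of logit

def softplus_np(x):
    return np.logaddexp(0.0, x)          # smooth map from R to (0, inf)

def inverse_softplus_np(y):
    return y + np.log1p(-np.exp(-y))     # maps (0, inf) back to R

def logit_np(p):
    return np.log(p) - np.log1p(-p)      # log-odds

# softplus and its inverse round-trip:
x = 1.5
print(np.allclose(inverse_softplus_np(softplus_np(x)), x))  # True
```

These transforms are what keeps, e.g., evidence-sd parameters positive (softplus) and lapse rates in (0, 1) (logistic) when the models report estimates on the "transformed scale".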
bauer.utils.plotting.plot_ppc(df, ppc, exp_type='magnitude', plot_type=1, var_name='p', level='subject', col_wrap=5, n_clusters=13)[source]
bauer.utils.plotting.plot_subjectwise_parameters(idata, parameter, transform=None, sort_subjects=True, plot_group_mean=True, hdi_prob=0.94, color='steelblue', ax=None, label=None, **kwargs)[source]

Plot subject-level posterior estimates as a sorted point-plot with HDI error bars.

Parameters:
  • idata (arviz.InferenceData) – Posterior samples from a fitted bauer model.

  • parameter (str) – Name of the subject-level parameter (e.g. 'n1_evidence_sd').

  • transform (str or None) – Optional transform applied to samples before plotting. One of 'softplus', 'logistic', or None.

  • sort_subjects (bool) – If True (default) subjects are sorted by their posterior mean on the x-axis. If False, subjects appear in their original order.

  • plot_group_mean (bool) – If True (default) and a {parameter}_mu variable exists in idata, draw a dashed horizontal line at the group-mean posterior mean.

  • hdi_prob (float) – Posterior mass for the HDI interval shown as error bars (default 0.94).

  • color (str) – Colour for the points and error bars.

  • ax (matplotlib.axes.Axes or None) – Axes to plot on. If None, the current axes are used.

  • label (str or None) – Legend label for the series.

Return type:

matplotlib.axes.Axes

bauer.utils.plotting.plot_prediction(data, x, color, y='p_predicted', alpha=0.25, **kwargs)[source]
bauer.utils.plotting.cluster_offers(d, n=6, key='log(risky/safe)')[source]
bauer.utils.plotting.get_hdi(d)[source]