smefit package

Subpackages

Submodules

smefit.basis_rotation module

Implement the corrections table basis rotation

smefit.basis_rotation.rotate_to_fit_basis(lin_dict, quad_dict, rotation_matrix_path)[source]

Rotate to fitting basis

Parameters:
  • lin_dict (dict) – theory dictionary with linear operator corrections in the table basis

  • quad_dict (dict) – theory dictionary with quadratic operator corrections in the table basis, empty if quadratic corrections are not used

  • rotation_matrix (pandas.DataFrame) – rotation matrix from tables basis to fitting basis

Returns:

  • lin_dict_fit_basis (dict) – theory dictionary with linear operator corrections in the fit basis

  • quad_dict_fit_basis (dict) – theory dictionary with quadratic operator corrections in the fit basis, empty if quadratic corrections are not used

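As an illustration, here is a minimal sketch of how such a rotation acts on a linear correction dictionary. It assumes each dictionary value is a numpy array of corrections per data point and that the rotation matrix rows are labelled by table-basis operators and its columns by fitting-basis operators; operator names, values and the labelling convention are hypothetical, not the library's internal one.

    import numpy as np
    import pandas as pd

    # Table-basis linear corrections for two data points (illustrative values).
    lin_dict = {"O1": np.array([0.1, 0.2]), "O2": np.array([0.3, 0.4])}

    # Hypothetical rotation matrix: rows = table basis, columns = fitting basis.
    rot = pd.DataFrame(
        [[1.0, 0.5], [0.0, 1.0]], index=["O1", "O2"], columns=["Of1", "Of2"]
    )

    # Each fitting-basis correction is the corresponding linear combination
    # of the table-basis corrections.
    lin_dict_fit_basis = {
        fit_op: sum(rot.loc[tab_op, fit_op] * corr for tab_op, corr in lin_dict.items())
        for fit_op in rot.columns
    }
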
smefit.chi2 module

Module for the computation of chi-squared values.

class smefit.chi2.Scanner(run_card, n_replica)[source]

Bases: object

Class to compute and plot the individual \(\chi^2\) scan.

Parameters:
  • run_card (dict) – run card dictionary

  • n_replica (int) – number of replicas to use

compute_bounds()[source]

Compute individual bounds by solving \(\chi^2 - 2 = 0\).

compute_scan()[source]

Compute the individual \(\chi^2\) scan for each replica and coefficient.

plot_scan()[source]

Plot and save the \(\chi^2\) scan for each coefficient.

regularized_chi2_func(coeff, xs, use_replica)[source]

Individual \(\chi^2\) wrapper over a series of values.

Parameters:
  • coeff (smefit.coefficients.Coefficient) – coefficient to switch on.

  • xs (numpy.array) – coefficient values.

  • use_replica (bool) – if True compute the \(\chi^2\) on MC replicas.

Returns:

individual reduced \(\chi^2\) for each x value.

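A minimal usage sketch based only on the methods listed above; the run card dictionary is a placeholder and must follow the standard runcard conventions.

    from smefit.chi2 import Scanner

    run_card = {}  # placeholder: the run card dictionary of an existing fit setup
    scanner = Scanner(run_card, n_replica=0)

    scanner.compute_scan()    # individual chi^2 scan for each replica and coefficient
    scanner.compute_bounds()  # bounds from solving chi^2 - 2 = 0
    scanner.plot_scan()       # save the chi^2 scan plot for each coefficient
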
smefit.chi2.compute_chi2(dataset, coefficients_values, use_quad, use_multiplicative_prescription, use_replica=False)[source]

Compute the \(\chi^2\).

Parameters:
  • dataset (DataTuple) – dataset tuple

  • coefficients_values (numpy.ndarray) – EFT coefficients values

  • use_multiplicative_prescription (bool) – if True add the EFT contribution as a k-factor

  • use_quad (bool) – if True include also HO corrections

  • rgemat (numpy.ndarray) – solution matrix of the RGE

Returns:

chi2_total – \(\chi^2\) value

Return type:

float

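A hedged usage sketch: dataset is assumed to be a DataTuple returned by smefit.loader.load_datasets (see the loader example further below), and the all-zero coefficient vector corresponds to the SM point.

    import numpy as np
    from smefit.chi2 import compute_chi2

    # `dataset` is a DataTuple from smefit.loader.load_datasets
    # (see the loader example further below).
    coefficients_values = np.zeros(len(dataset.OperatorsNames))  # SM point: all c_i = 0
    chi2_total = compute_chi2(
        dataset,
        coefficients_values,
        use_quad=False,
        use_multiplicative_prescription=False,
    )
    chi2_per_point = chi2_total / len(dataset.Commondata)
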
smefit.coefficients module

class smefit.coefficients.Coefficient(name, minimum, maximum, value=None, constrain=False)[source]

Bases: object

Coefficient object

Parameters:
  • name (str) – name of the operator corresponding to the Wilson coefficient

  • minimum (float) – minimum value

  • maximum (float) – maximum value

  • value (float, optional) – best value. If None set to random between minimum and maximum

  • constrain (dict, bool) –

    • if False, the parameter is free, default option

    • if True, the parameter is fixed to the given value

    • if dict the parameter is fixed to a function of other coefficients

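For illustration, a sketch of the three constrain options. Operator names and numerical values are placeholders; the two-element form of the constraint dictionary follows the {'c1': a1, 'c2': [a2, n2]} pattern documented for build_additive_factor_dict below, where the second entry is read here, as an assumption, as a power.

    from smefit.coefficients import Coefficient

    # Free parameter: sampled between the given bounds.
    c_free = Coefficient("Op1", minimum=-2.0, maximum=2.0)

    # Fixed parameter: constrained to the given value.
    c_fixed = Coefficient("Op2", minimum=-2.0, maximum=2.0, value=0.5, constrain=True)

    # Parameter fixed to a function of other coefficients,
    # here (assumed reading) c_Op3 = 1.5 * c_Op1 + 0.1 * c_Op2**2.
    c_constr = Coefficient(
        "Op3", minimum=-2.0, maximum=2.0, constrain={"Op1": 1.5, "Op2": [0.1, 2]}
    )
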
static build_additive_factor_dict(constrain)[source]

Build the dictionary for each additive factor appearing in the constrain:

Parameters:

constrain (dict) – dict object with the form {‘c1’: a1, ‘c2’: [a2,n2], …}

Returns:

factor_dict – dict object with the form {‘c1’: [a1,1], ‘c2’: [a2,n2], …}

Return type:

dict

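For example (operator names and values are placeholders; the output values may be stored as arrays rather than plain lists):

    from smefit.coefficients import Coefficient

    constrain = {"Op1": 1.5, "Op2": [0.1, 2]}      # {'c1': a1, 'c2': [a2, n2]}
    factor_dict = Coefficient.build_additive_factor_dict(constrain)
    # expected form: {"Op1": [1.5, 1], "Op2": [0.1, 2]}
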
update_constrain(inv_rotation)[source]

Update the constraint when a new basis is chosen. Only linear constraints are supported.

Parameters:

inv_rotation (pd.DataFrame) – rotation matrix from the original basis to the new basis

class smefit.coefficients.CoefficientManager(input_array)[source]

Bases: object

Coefficient objects manager

Parameters:

input_array (np.ndarray or list) – list of smefit.coefficients.Coefficient instances

property free_parameters

Returns the table containing only free parameters

classmethod from_dict(coefficient_config)[source]

Create a CoefficientManager from a dictionary

Parameters:

coefficient_config (dict) – coefficients configuration dictionary

Returns:

coefficient_manager – instance of the class

Return type:

smefit.coefficients.CoefficientManager

property maximum
property minimum
property name
set_constraints()[source]

Sets constraints between coefficients according to the coefficient.constrain information.

set_free_parameters(value)[source]

Set the values of the free parameters

property size
update_constrain(inv_rotation)[source]

Update the constraints according to the rotation matrix. Only linear constraints are supported.

Parameters:

inv_rotation (pd.DataFrame) – rotation matrix from the original basis to the new basis

property value

smefit.compute_theory module

Module for the generation of theory predictions

smefit.compute_theory.flatten(quad_mat, axis=0)[source]

Delete lower triangular part of a quadratic matrix and flatten it into an array

Parameters:
  • quad_mat (numpy.ndarray) – tensor to flatten

  • axis (int) – axis along which the triangular part is selected

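A minimal numpy sketch of the operation described above, assuming the upper-triangular part (including the diagonal) is kept and flattened row by row:

    import numpy as np

    quad_mat = np.arange(9.0).reshape(3, 3)               # quadratic correction matrix
    upper = quad_mat[np.triu_indices(quad_mat.shape[0])]  # drop the lower triangle
    # upper == array([0., 1., 2., 4., 5., 8.])
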
smefit.compute_theory.make_predictions(dataset, coefficients_values, use_quad, use_multiplicative_prescription)[source]

Generate the corrected theory predictions for dataset given a set of SMEFT coefficients.

Parameters:
  • dataset (DataTuple) – dataset tuple

  • coefficients_values (numpy.ndarray) – EFT coefficients values

  • use_quad (bool) – if True include also HO corrections

  • use_multiplicative_prescription (bool) – if True add the EFT contribution as a k-factor

Returns:

corrected_theory – SM + EFT theory predictions

Return type:

numpy.ndarray

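Schematically, denoting the SM prediction by \(T_{\rm SM}\), the linear corrections by \(\sigma_i\) and the quadratic ones by \(\sigma_{ij}\) (standard SMEFT convention; the exact normalisation used internally is not spelled out here), the additive prescription gives

\(T = T_{\rm SM} + \sum_i c_i \sigma_i + \sum_{i \le j} c_i c_j \sigma_{ij}\,,\)

while with use_multiplicative_prescription=True the EFT part multiplies the SM prediction as a k-factor,

\(T = T_{\rm SM}\left(1 + \sum_i c_i \hat\sigma_i + \sum_{i \le j} c_i c_j \hat\sigma_{ij}\right).\)
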
smefit.covmat module

Module containing computation of covariance matrix. Based on https://github.com/NNPDF/nnpdf/tree/master/validphys2/src/validphys/covmats_utils.py https://github.com/NNPDF/nnpdf/tree/master/validphys2/src/validphys/covmats.py

smefit.covmat.construct_covmat(stat_errors: array, sys_errors: DataFrame)[source]

Basic function to construct a covariance matrix (covmat), given the statistical error and a dataframe of systematics. Errors with name UNCORR or THEORYUNCORR are added in quadrature with the statistical error to the diagonal of the covmat. Other systematics are treated as correlated; their covmat contribution is found by multiplying them by their transpose.

Parameters:
  • stat_errors (numpy.ndarray) – a 1-D array of statistical uncertainties

  • sys_errors (pandas.DataFrame) – a dataframe with shape (N_data * N_sys) and systematic name as the column headers. The uncertainties should be in the same units as the data.

Returns:

cov_mat – Covariance matrix

Return type:

numpy.ndarray

Notes

This function doesn’t contain any logic to ignore certain contributions to the covmat. If you want to exclude a particular systematic or set of systematics, e.g. all uncertainties with MULT errors, filter those out of sys_errors before passing it to this function.

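As an illustration, a minimal re-implementation of the logic described above (not the library code): UNCORR and THEORYUNCORR systematics add in quadrature to the diagonal together with the statistical errors, while the remaining, correlated systematics contribute through their outer product.

    import numpy as np
    import pandas as pd

    def construct_covmat_sketch(stat_errors: np.ndarray, sys_errors: pd.DataFrame) -> np.ndarray:
        """Illustrative sketch of the covmat construction described above."""
        is_uncorr = sys_errors.columns.isin(["UNCORR", "THEORYUNCORR"])
        uncorr = sys_errors.loc[:, is_uncorr].to_numpy()
        corr = sys_errors.loc[:, ~is_uncorr].to_numpy()
        diagonal = stat_errors**2 + (uncorr**2).sum(axis=1)
        return np.diag(diagonal) + corr @ corr.T
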
smefit.covmat.covmat_from_systematics(stat_errors: list, sys_errors: list)[source]

Given two lists containing the statistic and systematic errors, construct the full covariance matrix.

This is similar to construct_covmat() except that special corr systematics are concatenated across all datasets before being multiplied by their transpose to give off block-diagonal contributions. The other systematics contribute to the block diagonal in the same way as construct_covmat().

Parameters:
  • stat_errors (list[(stat_errors: np.array)]) – list of stat_errors for each dataset.

  • sys_errors (list[(sys_errors: pd.DataFrame)]) – list of sys_errors for each dataset.

Returns:

cov_mat – Numpy array which is N_dat x N_dat (where N_dat is the number of data points) containing uncertainty and correlation information.

Return type:

np.array

smefit.fit_manager module

class smefit.fit_manager.FitManager(path, name, label=None)[source]

Bases: object

Class to collect all the fit information, load the results, compute best theory predictions.

path

path to fit location

Type:

pathlib.Path

name

fit name

Type:

str

label

fit label, if any; otherwise it is guessed from the name

Type:

str, optional

config

configuration dictionary

Type:

dict

has_posterior

True if the fit contains the full posterior distribution, False if only CL bounds are stored (external fits used for benchmarking)

Type:

bool

results

fit results; they need to be loaded by load_results

Type:

pandas.DataFrame

Parameters:
  • path (pathlib.Path) – path to fit location

  • name (str) – fit name

  • label (str, optional) – fit label, if any; otherwise it is guessed from the name

property coefficients

coefficient manager

load_configuration()[source]

Load configuration yaml card.

Returns:

configuration card

Return type:

dict

load_datasets()[source]

Load all datasets.

load_results()[source]

Load the posterior distribution of a fit. If the fit was produced by an external source, load its results instead. Results are stored in a class attribute.

property n_replica

Number of replicas

property smeft_predictions

Compute SMEFT predictions for each replica.

Returns:

SMEFT predictions for each replica

Return type:

np.ndarray

property smeft_predictions_best_fit

Compute SMEFT predictions for the best fit point.

Returns:

SMEFT predictions for the best fit

Return type:

np.ndarray

smefit.loader module

class smefit.loader.DataTuple(Commondata, SMTheory, OperatorsNames, LinearCorrections, QuadraticCorrections, ExpNames, NdataExp, InvCovMat, ThCovMat, Luminosity, Replica)

Bases: tuple

Commondata

Alias for field number 0

ExpNames

Alias for field number 5

InvCovMat

Alias for field number 7

LinearCorrections

Alias for field number 3

Luminosity

Alias for field number 9

NdataExp

Alias for field number 6

OperatorsNames

Alias for field number 2

QuadraticCorrections

Alias for field number 4

Replica

Alias for field number 10

SMTheory

Alias for field number 1

ThCovMat

Alias for field number 8

class smefit.loader.Loader(setname, operators_to_keep, order, use_quad, use_theory_covmat, use_multiplicative_prescription, rot_to_fit_basis, cutoff_scale)[source]

Bases: object

Class to check, load commondata and corresponding theory predictions.

Parameters:
  • setname (str) – dataset name to load

  • operators_to_keep (list) – list of operators for which corrections are loaded

  • order ("LO", "NLO") – EFT perturbative order

  • use_quad (bool) – if True loads also HO corrections

  • use_theory_covmat (bool) – if True add the theory covariance matrix to the experimental one

  • rot_to_fit_basis (dict, None) – matrix rotation to fit basis or None

apply_cutoff_mask(cutoff_scale)[source]

Updates previously loaded theory and datasets by filtering out points with scales above the cutoff scale

Parameters:

cutoff_scale (float) – Value of the cutoff scale as specified in the runcard

property central_values

Central values

Returns:

central_values – experimental central values

Return type:

numpy.ndarray

commondata_path = PosixPath('.')

path to commondata folder, commondata excluded

property covmat

Experimental covariance matrix

Returns:

covmat – experimental covariance matrix of a single dataset

Return type:

numpy.ndarray

property lin_corrections

NHO corrections

Returns:

lin_corrections – dictionary with operator names and NHO corrections

Return type:

dict

load_experimental_data()[source]

Load experimental data with corresponding uncertainties

Returns:

  • central_values (numpy.ndarray) – experimental central values

  • covmat (numpy.ndarray) – experimental covariance matrix

static load_theory(setname, operators_to_keep, order, use_quad, use_theory_covmat, use_multiplicative_prescription, rotation_matrix=None)[source]

Load theory predictions

Parameters:
  • operators_to_keep (list) – list of operators to keep

  • order ("LO", "NLO") – EFT perturbative order

  • use_quad (bool) – if True returns also HO corrections

  • use_theory_covmat (bool) – if True add the theory covariance matrix to the experimental one

  • rotation_matrix (numpy.ndarray) – rotation matrix from tables basis to fitting basis

Returns:

  • sm (numpy.ndarray) – SM predictions

  • lin_dict (dict) – dictionary with NHO corrections

  • quad_dict (dict) – dictionary with HO corrections, empty if not use_quad

  • scales (list) – list of energy scales for the theory predictions

property lumi

Integrated luminosity of the dataset in fb^-1

Returns:

lumi – Integrated luminosity of the dataset in fb^-1

Return type:

float

property n_data

Number of data

Returns:

n_data – number of experimental data

Return type:

int

property quad_corrections

HO corrections

Returns:

quad_corrections – dictionary with operator names and HO corrections

Return type:

dict

property sm_prediction

SM prediction for the dataset

Returns:

SM_predictions – best SM prediction

Return type:

numpy.ndarray

property stat_error

Statistical errors

Returns:

stat_error – statistical errors of the dataset

Return type:

np.array

property sys_error

Systematic errors

Returns:

sys_error – systematic errors of the dataset

Return type:

pd.DataFrame

property sys_error_t0

Systematic errors modified according to t0 prescription

Returns:

sys_error_t0 – t0 systematic errors of the dataset

Return type:

pd.DataFrame

property theory_covmat

Theory covariance matrix

Returns:

theory covmat – theory covariance matrix of a single dataset

Return type:

numpy.ndarray

theory_path = PosixPath('.')

path to theory folder, theory excluded. By default it is assumed to be the same as commondata_path

smefit.loader.check_file(path)[source]

Check if path exists.

smefit.loader.check_missing_operators(loaded_corrections, coeff_config)[source]

Check if all the coefficients in the runcard are also present inside the theory tables.

smefit.loader.construct_corrections_matrix_linear(corrections_list, n_data_tot, sorted_keys, rgemat=None)[source]

Constructs the linear EFT corrections.

Parameters:
  • corrections_list (list(dict)) – list containing corrections per experiment

  • n_data_tot (int) – total number of experimental data points

  • sorted_keys (numpy.ndarray) – list of sorted operator corrections

  • rgemat (numpy.ndarray, optional) – solution matrix of the RGE, shape (k, l, m) with k the number of datapoints, l the number of generated coefficients and m the number of original EFT coefficients specified in the runcard.

Returns:

corr_values – matrix with correction values (n_data_tot, sorted_keys.size)

Return type:

np.ndarray

smefit.loader.construct_corrections_matrix_quadratic(corrections_list, n_data_tot, sorted_keys, rgemat=None)[source]

Constructs quadratic EFT corrections.

Parameters:
  • corrections_list (list(dict)) – list containing per experiment the number of datapoints and corresponding corrections

  • n_data_tot (int) – total number of experimental data points

  • sorted_keys (numpy.ndarray) – list of sorted operator corrections, shape=(n RG-generated coefficients,) or shape=(n original coefficients,) in the absence of rgemat

Returns:

corr_values – matrix with correction values (n_data_tot, sorted_keys.size, sorted_keys.size)

Return type:

np.ndarray

smefit.loader.get_dataset(datasets, data_name)[source]
smefit.loader.load_datasets(commondata_path, datasets, operators_to_keep, use_quad, use_theory_covmat, use_t0, use_multiplicative_prescription, default_order='LO', theory_path=None, rot_to_fit_basis=None, has_uv_couplings=False, has_external_chi2=False, rgemat=None, cutoff_scale=None)[source]

Loads experimental data, theory and SMEFT corrections into a namedtuple

Parameters:
  • commondata_path (str, pathlib.Path) – path to commondata folder, commondata excluded

  • datasets (list) – List of datasets to be loaded

  • operators_to_keep (list) – list of operators for which corrections are loaded

  • default_order (str) – Default perturbative order of the theory predictions

  • use_quad (bool) – if True loads also HO corrections

  • use_theory_covmat (bool) – if True add the theory covariance matrix to the experimental one

  • theory_path (str, pathlib.Path, optional) – path to theory folder, theory excluded. By default it is assumed to be the same as commondata_path

  • rot_to_fit_basis (dict, optional) – matrix rotation to fit basis or None

  • has_uv_couplings (bool, optional) – True for UV fits

  • has_external_chi2 (bool, optional) – True in the presence of external chi2 modules

  • rgemat (numpy.ndarray, optional) – solution matrix of the RGE, shape=(k, l, m) with k the number of datapoints, l the number of generated coefficients under the RG and m the number of original EFT coefficients specified in the runcard.

  • cutoff_scale (float, optional) – kinematic cutoff scale

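A hedged usage sketch with only the documented arguments; paths, dataset names and operator names are placeholders and must match the installed commondata and theory tables.

    from smefit.loader import load_datasets

    dataset = load_datasets(
        commondata_path="path/to/commondata",
        datasets=["my_dataset"],
        operators_to_keep=["Op1", "Op2"],
        use_quad=False,
        use_theory_covmat=True,
        use_t0=False,
        use_multiplicative_prescription=False,
        theory_path="path/to/theory",
    )
    # `dataset` is a DataTuple, e.g. dataset.Commondata, dataset.SMTheory,
    # dataset.LinearCorrections, dataset.InvCovMat, ...
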
smefit.log module

smefit.log.print_banner()[source]
smefit.log.setup_console(logfile)[source]

smefit.rge module

class smefit.rge.RGE(wc_names, init_scale, accuracy='integrate', adm_QCD=False, yukawa='top')[source]

Bases: object

Class to compute the RGE matrix for the SMEFT Wilson coefficients. The RGE matrix is computed at the initial scale init_scale and evolved to the scale of interest.

Parameters:
  • wc_names (list) – list of Wilson coefficient names to be included in the RGE matrix

  • init_scale (float) – initial scale of the Wilson coefficients

  • accuracy (str) – accuracy of the RGE integration. Options: “leadinglog” or “integrate”. Default is ‘integrate’. Inherited behaviour from wilson package.

  • adm_QCD (bool) – if True, only the QCD anomalous dimension is used. Default is False.

  • yukawa (str) – Yukawa parameterization to be used. Options: “top”, “none” or “full”. Default is “top”.

property RGEbasis

Returns the RGE basis translated from smefit to Warsaw.

RGEevolve(wcs, scale)[source]

Evolve the Wilson coefficients from the initial scale to the scale of interest.

RGEmatrix(scale)[source]

Compute the RGE solution at the scale scale and return it as a pandas DataFrame.

RGEmatrix_dict(scale)[source]

Compute the RGE solution at the scale scale and return it as a dictionary.

property all_ops
map_to_smefit(wc_final_vals, scale)[source]

Map the Wilson coefficients from the Warsaw basis to the SMEFiT basis.

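A minimal usage sketch; the Wilson coefficient names are placeholders and must be operators known to the SMEFiT basis.

    from smefit.rge import RGE

    # Placeholder coefficient names; init_scale is the scale at which the
    # Wilson coefficients are defined.
    rge = RGE(wc_names=["Op1", "Op2"], init_scale=1000.0, accuracy="integrate")

    # RGE solution matrix evolved down to the scale of interest
    # (here 250, in the same units as init_scale), as a pandas DataFrame.
    rge_matrix = rge.RGEmatrix(scale=250.0)
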
smefit.rge.beta_wrapper(C, HIGHSCALE=inf, *args, **kwargs)[source]
smefit.rge.evolve_gs(scale)[source]
smefit.rge.load_rge_matrix(rge_dict, coeff_list, datasets=None, theory_path=None, cutoff_scale=None)[source]

Load the RGE matrix for the SMEFT Wilson coefficients.

Parameters:
  • rge_dict (dict) – dictionary with the RGE input parameter options

  • coeff_list (list) – list of Wilson coefficients to be included in the RGE matrix

  • datasets (list) – list of datasets

  • theory_path (str) – path to the theory files

Returns:

rgemat – RGE matrix

Return type:

numpy.ndarray

smefit.rge.load_scales(datasets, theory_path, default_scale=1000.0, cutoff_scale=None, scale_variation=1.0)[source]

Load the energy scales for the datasets.

Parameters:
  • datasets (list) – list of datasets

  • theory_path (str) – path to the theory files

  • default_scale (float) – default scale to use if the dataset does not have a scale

  • cutoff_scale (float) – cutoff scale to use for the scales

Returns:

scales – list of scales for the datasets

Return type:

list

smefit.runner module

class smefit.runner.Runner(run_card, single_parameter_fits, pairwise_fits, runcard_file=None)[source]

Bases: object

Container for all the possible SMEFiT run methods.

Init the root path of the package where tables, results, plot config and reports are stored.

Parameters:
  • run_card (dict) – run card dictionary

  • single_parameter_fits (bool) – True for single parameter fits

  • pairwise_fits (bool) – True for pairwise parameter fits

  • runcard_file (pathlib.Path, None) – path to runcard if already present

analytic(config)[source]

Sample the analytic linear solution.

chi2_scan(n_replica, compute_bounds)[source]

Run an individual \(\chi^2\) scan.

Parameters:
  • n_replica (int) – number of replicas to use. If 0, only the \(\chi^2\) of the experimental data will be computed.

  • compute_bounds (bool) – if True compute and save the \(\chi^2\) bounds.

classmethod from_file(runcard_file, replica=None)[source]

Create Runner from a runcard file

Parameters:
  • runcard_file (pathlib.Path, str) – path to runcard

  • replica (int) – replica number, optional; used only for MC

Returns:

runner – instance of class Runner

Return type:

smefit.runner.Runner

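A minimal usage sketch; the runcard path is a placeholder.

    from smefit.runner import Runner

    # The runcard defines datasets, operators and the fit setup.
    runner = Runner.from_file("path/to/runcard.yaml")
    runner.run_analysis("NS")  # run the fit with the Nested Sampling optimizer
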
get_optimizer(optimizer)[source]

Return the selected optimizer.

global_analysis(optimizer)[source]

Run a global fit using the selected optimizer.

Parameters:

optimizer (string) – optimizer to be used (NS, MC or A)

mc(config)[source]

Run a fit with MC.

pairwise_analysis(optimizer)[source]

Run a series of pairwise parameter fits for all the operators specified in the runcard.

Parameters:

optimizer (string) – optimizer to be used; only NS is supported

rotate_to_pca()[source]

Rotate to PCA basis.

run_analysis(optimizer)[source]

Run either the global analysis or a series of single parameter fits using the selected optimizer.

Parameters:

optimizer (string) – optimizer to be used (NS, MC or A)

setup_result_folder()[source]

Create result folder and copy the runcard there

single_parameter_analysis(optimizer)[source]

Run a series of single parameter fits for all the operators specified in the runcard.

Parameters:

optimizer (string) – optimizer to be used (NS, MC or A)

ultranest(config)[source]

Run a fit with UltraNest.

smefit.wcxf module