Glossary#

The following definitions are specific to the Arthur platform, though in most cases they are applicable to ML more broadly.

Arthur Inference#

Container class for inferences uploaded to the Arthur platform. An inference is composed of input features, prediction values, and (optionally) ground truth values and any Non-Input data.

Example:

ground_truth = {
    "Consumer Credit Score": 652.0
}
inference = arthur_model.get_inference(external_id)
inference.update(ground_truth)

Related terms: inference, ArthurModel

Arthur Model#

Model object used for sending and retrieving data pertinent to a deployed ML system. The ArthurModel object is separate from the underlying model that is trained and which makes predictions; it serves as a wrapper for the underlying model to access Arthur platform functionality.

An ArthurModel contains at least a name, an InputType, and an OutputType.

Examples:

arthur_model = connection.model(name="New_Model",
                               input_type=InputType.Tabular,
                               model_type=OutputType.Regression)
arthur_model = connection.get(model_id)
arthur_model.send_inference(...)

Arthur Model Group#

Arthur Model Groups are an organizational construct the Arthur platform uses to track different versions of an Arthur Model. Every Arthur Model is a version of one Model Group, and a Model Group always has at least one Arthur Model. The Model Group for an Arthur Model can only be specified during onboarding, and once the Arthur Model is saved, its group cannot be changed. If an Arthur Model is created without specifying a Model Group, a new Model Group is created automatically with the new model as its single version.

When a model is added to a Model Group, it is assigned a unique, incrementing Version Sequence Number (starting at 1) corresponding to the order in which it was added to the group. Additionally, you can provide a Version Label to store a custom version string alongside the Version Sequence Number.

Example:

# retrieve the first version of a model
arthur_model_v1 = connection.get(model_id)
model_group = arthur_model_v1.model_group

# create the new version of the model
arthur_model_v2 = connection.model(name="Model_V2",
                                   input_type=InputType.Tabular,
                                   model_type=OutputType.Regression)

# add the new model to the model group
model_group.add_version(arthur_model_v2, label="2.0.1")
arthur_model_v2.save()

Related terms: Version Label, Version Sequence Number

Attribute#

A variable associated with a model. An attribute can be an input, a prediction, ground truth, or ancillary information (these groupings are known as Stages in the Arthur platform), and it can be categorical or continuous.

Example:

The attribute age is an input to the model, whereas the attribute creditworthy is the target for the model.

Synonyms: variable, {predictor, input}, {output, target}, prediction.

Related terms: input, stage, prediction, ground truth

Bias#

While bias is an overloaded term in statistics and ML, here we refer specifically to settings where a model’s outcomes have the potential to differentially harm certain subgroups of a population.

Example:

This credit approval model tends to lead to biased outcomes: men are approved for loans at a rate 50% higher than women are.

Related terms: bias detection, bias mitigation, disparate impact

Bias Detection#

The detection and quantification of algorithmic bias in an ML system, typically as evaluated on a model’s outputs (predictions) across different populations of a sensitive attribute. Many definitions of algorithmic bias have been proposed, including group fairness and individual fairness definitions. Group fairness definitions are often defined by comparing group-conditional statistics about the model’s predictions. In the definitions below, the group membership feature is indicated by \(G\) and a particular group membership value is indicated by \(g\).

Example:

Common metrics for group fairness include Demographic Parity, Equalized Odds, and Equality of Opportunity.

Related terms: bias mitigation

Demographic Parity#

A fairness metric which compares group-conditional selection rates. The quantity being compared is:

\[ \begin{align*} \mathbb P(\hat Y = 1 | G = g) \end{align*} \]

There is not necessarily a normative ideal relationship between the selection rates for each group: in some situations, such as the allocation of resources, it may be important to minimize the disparity in selection rates across groups; in others, metrics based on group-conditional accuracy may be more relevant. However, even in the latter case, understanding group-conditional selection rates, especially when compared against the original training data, can be useful contextualization for the model and its task as a whole.
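
As a minimal sketch (not Arthur’s implementation), group-conditional selection rates can be computed directly from binary predictions and group labels with numpy; the function name and data are hypothetical:

import numpy as np

def selection_rates(y_pred, groups):
    """Return P(Y_hat = 1 | G = g) for each group value g."""
    y_pred, groups = np.asarray(y_pred), np.asarray(groups)
    return {g: y_pred[groups == g].mean() for g in np.unique(groups)}

# selection_rates([1, 0, 1, 1], ["a", "a", "b", "b"]) gives a selection rate of 0.5 for "a" and 1.0 for "b"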

Related term: disparate impact

Equal Opportunity#

A fairness metric which compares group-conditional true positive rates. The quantity being compared is:

\[ \begin{align*} \mathbb P(\hat Y = 1 | Y = 1, G = g) \end{align*} \]

For all groups, a true positive rate closer to 1 is better.
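
A minimal sketch of computing group-conditional true positive rates with numpy, assuming binary labels and predictions (illustrative only, not Arthur’s implementation):

import numpy as np

def true_positive_rates(y_true, y_pred, groups):
    """Return P(Y_hat = 1 | Y = 1, G = g) for each group value g."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {}
    for g in np.unique(groups):
        actual_positives = (groups == g) & (y_true == 1)
        rates[g] = y_pred[actual_positives].mean()  # fraction of actual positives predicted positive
    return rates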

Equalized Odds#

A fairness metric which incorporates both group-conditional true positive rates and false positive rates, or, equivalently, true positive rates and true negative rates. There are a variety of implementations (due to the fact that some quadrants of the confusion matrix are complements of one another); here is one possible quantity to compare across groups:

\[ \begin{align*} \mathbb P (\hat Y = 1 | Y = 1, G = g) + \mathbb P(\hat Y = 0 | Y = 0, G = g) \end{align*} \]

In this implementation, this quantity should be as close to 2 as possible for all groups.
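
A minimal sketch of the quantity above, computed per group with numpy (illustrative only; a group with no positives or no negatives would need special handling):

import numpy as np

def tpr_plus_tnr(y_true, y_pred, groups):
    """Return TPR(g) + TNR(g) for each group; values closer to 2 are better."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    scores = {}
    for g in np.unique(groups):
        mask = groups == g
        tpr = y_pred[mask & (y_true == 1)].mean()        # P(Y_hat = 1 | Y = 1, G = g)
        tnr = (1 - y_pred[mask & (y_true == 0)]).mean()  # P(Y_hat = 0 | Y = 0, G = g)
        scores[g] = tpr + tnr
    return scores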

Bias Mitigation#

Automated techniques for mitigating bias in a discriminatory model. These can be characterized by where the technique sits in the model lifecycle:

  • Pre-Processing: Techniques that analyze datasets and often modify/resample training datasets so that the learned classifier is less discriminatory.

  • In-Processing: Techniques for training a fairness-aware classifier (or regressor) that explicitly trades off optimizing for accuracy against maintaining fairness across sensitive groups.

  • Post-Processing: Techniques that only adjust the output predictions from a discriminatory classifier, without modifying the training data or the classifier.

Related terms: bias detection

Binary Classification#

A modeling task where the target variable belongs to a discrete set with two possible outcomes.

Example:

This binary classifier will predict whether or not a person is likely to default on their credit card.

Related terms: output type, classification, multilabel classification

Categorical Attribute#

An attribute whose value is taken from a discrete set of possibilities.

Example:

A person’s blood type is a categorical attribute: it can only be A, B, AB, or O.

Synonyms: discrete attribute

Related terms: attribute, continuous, classification

Classification#

A modeling task where the target variable belongs to a discrete set with a fixed number of possible outcomes.

Example:

This classification model will determine whether an input image is of a cat, a dog, or fish.

Related terms: output type, binary classification, multilabel classification

Continuous Attribute#

An attribute whose value is taken from an ordered continuum, which can be bounded or unbounded.

Example:

A person’s height, weight, income, and IQ can all be thought of as continuous attributes.

Synonyms: numeric attribute

Related terms: attribute, categorical, regression

Custom Role#

Custom Roles are a resource in the Arthur platform used for managing access control policies when SSO authentication has been enabled. For more instructions on configuring Custom Roles, view the custom authorization documentation section.

Data Drift#

Refers to the problem that arises when, after a trained model is deployed, changes in the external world degrade model performance and the model becomes stale. Detecting data drift provides a leading indicator of data stability and integrity.

Data drift can be quantified with respect to a specific reference set (e.g. the model’s training data), or more generally over any temporal shifts in a variable with respect to past time windows.

Your project can query data drift metrics through the Arthur API. This section provides an overview of the data drift metrics available in Arthur’s query service.

Related terms: out of distribution

Definitions#

P and Q#

We first establish some notation for the metrics below. Let \(P\) be the reference distribution and \(Q\) be the target distribution. These are both probability distributions that can be approximated by binning the underlying reference and target sets. Generally, \(P\) is an older dataset and \(Q\) is a new dataset of interest. We’d like to quantify how much the distributions differ, to see whether the reference set has gone stale and algorithms trained on it should not be used to perform inferences on the target dataset.

Entropy#

Let \(\text{H}(P)\) be the entropy of distribution \(P\). It is interpreted as the expected (i.e. average) number of bits (if log base 2) or nats (if log base \(e\)) required to encode a datapoint drawn from distribution \(P\). Arthur applications use log base \(e\), so the interpretation is in nats.

\[ \begin{align*} \text{H}(P) = -\sum_{k=1}^K P(x_k)*\text{log}P(x_k) = -\text{E}_P[\text{log}P(x_k)] \end{align*} \]
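
For intuition, a minimal numpy sketch of this quantity on a binned distribution (illustrative only, not Arthur’s internal implementation):

import numpy as np

def entropy(p):
    """Entropy H(P) in nats for a binned distribution p (probabilities summing to 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # empty bins contribute 0, since 0 * log(0) is taken as 0
    return -np.sum(p * np.log(p))

# entropy([0.5, 0.5]) is about 0.693 nats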

KL Divergence#

Let \(\text{D}(P \parallel Q)\) be the Kullback-Leibler (KL) Divergence from \(P\) to \(Q\). It is interpreted as the nats of information we expect to lose in using \(Q\) instead of \(P\) for modeling data \(X\), discretized over probability space \(K\). KL Divergence is not symmetrical, i.e. \(\text{D}(P \parallel Q) \neq \text{D}(Q \parallel P)\), and should not be used as a distance metric.

\[\begin{split} \begin{align*} \text{D}(P||Q) = \sum_{k=1}^K P(x_k)*(\text{log}P(x_k)-\text{log}Q(x_k)) \\ = \text{E}_P[\text{log}P(x)-\text{log}Q(x)] \end{align*} \end{split}\]
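
A minimal numpy sketch on binned distributions of equal length (illustrative only; the eps smoothing of empty target bins is an assumption, not Arthur’s implementation):

import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D(P || Q) in nats for binned distributions p and q."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float) + eps   # avoid log(0) when a target bin is empty
    mask = p > 0                           # bins with P(x_k) = 0 contribute nothing
    return np.sum(p[mask] * (np.log(p[mask]) - np.log(q[mask])))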

Population Stability Index (PSI)#

Let \(\text{PSI}(P,Q)\) be the Population Stability Index (PSI) between \(P\) and \(Q\). It is interpreted as the round-trip information loss, in nats: the loss from using \(Q\) in place of \(P\) plus the loss from using \(P\) in place of \(Q\). PSI smooths out KL Divergence since the return-trip information loss is included, and the metric is popular in financial applications.

\[\begin{split} \begin{align*}\text{PSI}(P,Q) = \text{D}(P||Q) + \text{D}(Q||P) \\ = \sum_{k=1}^K (P(x_k)-Q(x_k))*(\text{log}P(x_k)-\text{log}Q(x_k)) \\ = \text{E}_P[\text{log}P(x)-\text{log}Q(x)]+\text{E}_Q[\text{log}Q(x)-\text{log}P(x)] \end{align*} \end{split}\]
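
A minimal numpy sketch following the formula above (illustrative only; the eps smoothing of empty bins is an assumption):

import numpy as np

def psi(p, q, eps=1e-12):
    """Population Stability Index: D(P || Q) + D(Q || P) over shared bins."""
    p = np.asarray(p, dtype=float) + eps   # smooth empty bins so both logs are defined
    q = np.asarray(q, dtype=float) + eps
    return np.sum((p - q) * (np.log(p) - np.log(q)))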

JS Divergence#

Let \(\text{JSD}(P,Q)\) be the Jensen-Shannon (JS) Divergence between \(P\) and \(Q\). It smooths out KL divergence using a mixture of the base and target distributions and is interpreted as the entropy of the mixture \(M=\frac{P+Q}{2}\) minus the average of the entropies of the individual distributions.

\[\begin{split} \begin{align*}\text{JSD}(P,Q) = \frac{1}{2}\text{D}(P||M) + \frac{1}{2}\text{D}(Q||M) \\ = \text{H}(\frac{P+Q}{2})-\frac{\text{H}(P)+H(Q)}{2} \end{align*} \end{split}\]
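
A minimal, self-contained numpy sketch of this quantity (illustrative only):

import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between binned distributions p and q, in nats."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = (p + q) / 2                        # mixture distribution

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * (np.log(a[mask]) - np.log(b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)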

Hellinger Distance#

Let \(\text{HE}(P,Q)\) be the Hellinger Distance between \(P\) and \(Q\). It is interpreted as the Euclidean norm of the difference of the square root distributions of \(P\) and \(Q\).

\[\begin{split} \begin{align*} \text{HE}(P,Q) = {\frac {1}{\sqrt {2}}}{\bigl \|}{\sqrt {P}}-{\sqrt {Q}}{\bigr \|}_{2} \\ = {\frac {1}{\sqrt {2}}}{\sqrt {\sum _{k=1}^{K}\left({\sqrt {P(x_k)}}-{\sqrt {Q(x_k)}}\right)^{2}}} \end{align*} \end{split}\]
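
A minimal numpy sketch following the formula above (illustrative only):

import numpy as np

def hellinger(p, q):
    """Hellinger distance between binned distributions p and q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2)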

Hypothesis Test#

Hypothesis testing uses different tests depending on whether a feature is categorical or continuous.

For categorical features, let \(\chi_{\text{K}-1}^2(P,Q)\) be the chi-squared test statistic for \(P\) and \(Q\), with \(\text{K}\) being the number of categories of the feature, i.e. \(\text{K}-1\) degrees of freedom. Let \(\text{N}_{Pk}\) and \(\text{N}_{Qk}\) be the counts of occurrences of the feature taking value \(k\), with \(1\leq k \leq K\), for \(P\) and \(Q\) respectively. The chi-squared test statistic is the sum of the standardized differences of counts between \(P\) and \(Q\).

\[\begin{split} \begin{align*} \chi_{K-1}^2(P,Q) = \sum_{k=1}^K \frac{(\text{N}_{Qk}-\text{N}_{Pk})^2}{\text{N}_{Pk}}\\ \end{align*} \end{split}\]

For continuous features, let \(\text{KS}(P, Q)\) be the Kolmogorov-Smirnov test statistic for \(P\) and \(Q\). Let \(F_P\) and \(F_Q\) be the empirical cumulative distribution functions for \(P\) and \(Q\), respectively. The Kolmogorov-Smirnov test is a nonparametric, i.e. distribution-free, test that compares the empirical cumulative distribution functions of \(P\) and \(Q\).

\[\begin{split} \begin{align*} \text{KS}(P,Q) = \sup_x \lvert F_P(x) - F_Q(x) \rvert \end{align*} \end{split}\]

The returned test statistic is then compared to cutoffs for significance. A higher test statistic indicates more data drift. We’ve abstracted the calculations away for you in our query endpoint.

For HypothesisTest, the returned value is transformed as \(-\log_{10}(\text{p-value})\) to maintain directional parity with the other data drift metrics. That is, a lower p-value is more significant and implies data drift, which is reflected in a higher \(-\log_{10}(\text{p-value})\).
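
Although Arthur computes these statistics for you in the query endpoint, the sketch below shows the same quantities computed locally with scipy; the counts and samples are hypothetical:

import numpy as np
from scipy import stats

# categorical feature: chi-squared statistic with the reference counts as expected counts
ref_counts = np.array([400, 350, 250])        # N_Pk, hypothetical reference counts
tgt_counts = np.array([380, 330, 290])        # N_Qk, hypothetical target counts
chi2_stat = np.sum((tgt_counts - ref_counts) ** 2 / ref_counts)
chi2_p = stats.chi2.sf(chi2_stat, df=len(ref_counts) - 1)

# continuous feature: two-sample Kolmogorov-Smirnov test on the raw values
ref_values = np.random.normal(0.0, 1.0, 1000)
tgt_values = np.random.normal(0.2, 1.0, 1000)
ks_stat, ks_p = stats.ks_2samp(ref_values, tgt_values)

# transform to -log10(p-value) so that higher always means more drift
drift_scores = {"chi_squared": -np.log10(chi2_p), "ks": -np.log10(ks_p)}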

Multivariate#

Arthur also offers a multivariate Anomaly Score which you can configure via the steps detailed here. See here for an explanation of how these scores are calculated.

Disparate Impact#

Legal terminology originally from Fair Lending case law. This constraint is strictly harder than Disparate Treatment and asserts that model outcomes must not be discriminatory across protected groups. That is, the outcome of a decisioning process should not be substantially higher (or lower) for one group of a protected class over another.

While there does not exist a single threshold for establishing the presence or absence of disparate impact, the so-called “80% rule” is commonly referenced. However, we strongly recommend against adopting this rule-of-thumb, as these analyses should be grounded in use-case specific analysis and the legal framework pertinent to a given industry.

Example:

Even though the model didn’t take gender as input, it still results in disparate impact when we compare outcomes for males and females.
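
As an illustration of the comparison such an analysis typically involves (hypothetical data, not a legal test), the group-conditional selection rates and their ratio can be computed directly:

import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])            # hypothetical loan approvals
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])

rates = {g: y_pred[gender == g].mean() for g in np.unique(gender)}
impact_ratio = rates["f"] / rates["m"]                  # ratio of selection rates
# rates are 0.25 for "f" and 0.75 for "m"; impact_ratio is about 0.33, well below the "80% rule" threshold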

Related terms: bias, disparate treatment

Disparate Treatment#

Legal terminology originally from Fair Lending case law. Disparate Treatment asserts that you are not allowed to consider protected variables (e.g. race, age, gender) when approving or denying an applicant for a credit card loan. In practical terms, this means that a data scientist cannot include these attributes as inputs to a credit decisioning model.

Adherence to Disparate Treatment is not a sufficient condition for actually achieving a fair model (see proxy and bias detection). “Fairness through unawareness” is not good enough.

Related terms: bias, disparate impact

Enrichment#

Generally used to describe data or metrics added to raw data after ingestion. Arthur provides various enrichments such as Anomaly Detection and Explainability. See Enrichments for details around using enrichments within Arthur.

Feature#

An individual attribute that is an input to a model.

Example:

The credit scoring model has features like “home_value”, “zip_code”, “height”.

Ground Truth#

The true label or target-variable (Y) corresponding to inputs (X) for a dataset.

Examples:

pred = sklearn_model.predict_proba(X)
arthur_model.send_inference(
  model_pipeline_input=X,
  predicted_values={1:pred, 0: 1-pred})

Related terms: prediction

Image Data#

Imagery data commonly used for computer vision models.

Related terms: attribute, output type, Stage

Inference#

One row of a dataset. An inference refers to passing a single input into a model and computing the model’s prediction. Data associated with that inference might include (1) the input data, (2) the model’s prediction, and (3) the corresponding ground truth. With respect to the Arthur platform, the term inference denotes any and all of these related components of data for a single input and prediction.

Related terms: ArthurInference, stage

Input#

A single instance of data, upon which a model can calculate an output prediction. The input consists of all relevant features together.

Example:

The input features for the credit scoring model consist of “home_value”, “zip_code”, “height”.

Related terms: feature, model

Input Type#

For an ArthurModel, this field declares what kind of input datatype will be flowing into the system.

Allowable values are defined in the InputType enum.

Example:

arthur_model = connection.model(name="New_Model",
                               input_type=InputType.Tabular,
                               model_type=OutputType.Regression)

Related terms: output type, tabular data, nlp data

Model Health Score#

On the UI dashboard, you will see a model health score between 0 and 100 for each of your models. The score is an aggregation of the following metrics: performance, drift, and ingestion. Each metric for a model is computed every hour, combined, and then aggregated by taking the average over a 30-day window. The thresholds for model health are 0–32 (Red), 33–65 (Yellow), and 66–100 (Green).

  • Performance:

    • Regression: 1 - Normalized MAE

    • Classification: F1 Score

  • Drift

    • 1 - Average Anomaly Score

  • Ingestion

    • Variance of normalized time periods between ingestion events

    • Variance of normalized volume differences between ingestion events

You can extract the health score via an API call as well.
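
The exact aggregation is internal to the platform, but the following hedged sketch shows how components like these could roll up into a 0–100 score (hypothetical data and a simple average, not Arthur’s implementation):

import numpy as np

rng = np.random.default_rng(0)
hours = 24 * 30                                    # 30-day window of hourly metrics

performance = rng.uniform(0.7, 0.9, hours)         # e.g. F1 score for a classifier
drift = 1 - rng.uniform(0.1, 0.3, hours)           # 1 - average anomaly score
ingestion = 1 - rng.uniform(0.0, 0.2, hours)       # stability of ingestion cadence and volume

health_score = 100 * np.mean((performance + drift + ingestion) / 3)
status = "Green" if health_score >= 66 else "Yellow" if health_score >= 33 else "Red"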

Model Onboarding#

Model onboarding refers to the process of defining an ArthurModel, preparing it with the necessary reference dataset, passing it through a validation check, and saving it to the Arthur system.

Once your model is onboarded onto Arthur, you can use the Arthur system to track the model and view all its performance and analytics in your online Arthur dashboard.

Related terms: ArthurModel, reference dataset

Multiclass Classification#

A modeling task where each input is associated with one label, from a fixed set of possible labels. Often this is a binary classifier (the output is either 0 or 1), but the output can also have more than 2 possible labels.

Example:

This NLP model applies the most relevant tag to news articles. The model is trained on example articles which are tagged with a topic like Congress.

Related terms: multilabel classification, output type

Multilabel Classification#

A modeling task where each input is associated with two or more labels, from a fixed set of possible labels.

Example:

This NLP model applies relevant tags to news articles. The model is trained on example articles which are tagged with multiple topics like Politics, Elections, Congress.

Related terms: output type, multiclass classification

NLP Data#

Unstructured text sequences commonly used for Natural Language Processing models.

Related terms: attribute, output type, Stage

Non-Input Attribute#

A non-input attribute is an attribute that an ArthurModel will track that does not actually enter the model as an input.

Common non-input attributes are protected class attributes such as age, race, or sex. By sending such non-input attributes to Arthur, you can track model performance based on these groups in your data to evaluate model bias and fairness.

Related terms: attribute, bias

Object Detection#

The OutputType for computer vision models with the purpose of detecting an object within an image and outputting a box which bounds the object.

This bounding box is used to identify where the object resides in the image.

Related terms: image

Organization#

An Organization is a structural entity utilized by the Arthur platform to organize and manage access to resources that exist on the platform. Users are added to Arthur Organizations with a given role that provides them with Read and/or Write access to some subset of that Organization’s resources, as defined by the User’s role.

Each Organization has its own users, roles, and resources.

Out of Distribution Detection#

Refers to the challenge of detecting when an input (or set of inputs) is substantially different from the distribution of a larger set of reference inferences. This term commonly arises in the context of data drift, where we want to detect if new inputs are different from the training data (and distribution thereof) for a particular model. OOD Detection is a relevant challenge for Tabular data as well as unstructured data such as images and sequences.

Related terms: Data Drift

Output Type#

For an ArthurModel, this field declares what kind of output predictions will be flowing out of the system.

Allowable values are defined in the OutputType enum:

  • Regression

    • appropriate for continuous-valued targets

  • Multiclass

    • appropriate for both binary classifiers and multiclass classifiers

  • Multilabel

    • appropriate for multilabel classifiers

  • ObjectDetection

    • only available for computer vision models

Example:

arthur_model = connection.model(name="New_Model",
                               input_type=InputType.Tabular,
                               output_type=OutputType.Regression)

Related terms: input type

Prediction#

The output prediction (y_hat) of a trained model for any input.

Related terms: ground truth

Protected Attribute#

An attribute of an inference that is considered sensitive with respect to model bias. Common examples include race, age, and gender. The term “protected” comes from the Civil Rights Act of 1964.

Synonyms: sensitive attribute

Related terms: bias, proxy

Proxy#

An input attribute in a model (or combination thereof) that is highly correlated with a protected attribute such as race, age, or gender. The presence of proxies in a dataset makes it difficult to rely only on Disparate Treatment as a standard for fair ML.

Example:

In most US cities, zip code is a strong proxy for race. Therefore, one must be cautious when using zip code as an input to a model.

Related terms: bias, disparate impact, disparate treatment

Reference#

The dataset used as a baseline reference for an ArthurModel.

A reference dataset must include a sample of the input features a model receives.

A reference dataset can optionally include a sample of model outputs, ground truth values, and other non-input attributes as metadata.

The reference dataset for a model is used to compute drift: the distribution of input features in the reference dataset makes up the baseline against which future inferences are compared to compute anomaly scores.

Related terms: inference

Regression#

A modeling task (or model) where the target variable is a continuous variable.

Example:

This regression model predicts what the stock price of $AAPL will be tomorrow.

Related terms: output type

Sensitive Attribute#

See protected attribute

Service Account#

Service Accounts are entities within the Arthur Platform that provide access to Arthur APIs for automated systems (machine-to-machine communication). They represent both an identity as well as a set of permissions for that identity. Each Service Account has an access token that includes a specific role, which grants access to the Arthur APIs. The roles used in Service Accounts provide access within a single Arthur Organization.

Stage#

Taxonomy used by the Arthur platform to delineate how attributes contribute to the model computations. Allowable values are defined in the Stage enum:

  • ModelPipelineInput: Input to the entire model pipeline. This will most commonly be the Stage used to represent all model inputs. It will contain base input features that are familiar to the data scientist: categorical and continuous columns of a tabular dataset.

  • PredictedValue: The predictions coming out of the model.

  • GroundTruth: The ground truth (or target) attribute for a model. Must be one-hot encoded for classifiers.

  • GroundTruthClass: The ground truth class for classification models, not one-hot encoded.

  • NonInput: Ancillary data that can be associated with each inference, but not necessarily a direct input to the model. For example, sensitive attributes like age, sex, or race might not be direct model inputs, but will be useful to associate with each prediction.

Tabular Data#

Data type for model inputs where the data can be thought of as a table (or spreadsheet) composed of rows and columns. Each column represents an input attribute for the model and each row represents a separate record. In supervised learning, exactly one of the columns acts as the target.

Example:

This credit scoring model is trained on tabular data. The input attributes are income, country, and age and the target is FICO score.

Related terms: Attribute, output type, Stage

Tag#

A tag is a custom string that you can attach to your Arthur Model. Tags can be used as custom identifiers to create custom groupings of models or denote additional metadata fields.

Examples:

“Lending Model”, “Spark”

Version Label#

A Version Label is a string that can represent a custom version for your Arthur Model within its Arthur Model Group. Version Labels are not required and the platform will default to using the Version Sequence Number when not provided.

Example:

# retrieve the model group
model_group = connection.get_model_group(model_group_id)

# create the new version of the model
arthur_model_v2 = connection.model(name="Model_V2",
                                   input_type=InputType.Tabular,
                                   model_type=OutputType.Regression)

# add the new model to the model group
model_group.add_version(arthur_model_v2, label="2.0.1")
label = arthur_model_v2.version_label
arthur_model_v2.save()
# label == "2.0.1"

Related terms: Arthur Model, Arthur Model Group, Version Sequence Number

Version Sequence Number#

A Version Sequence Number is a unique, auto-incrementing (starting at 1) integer that is assigned to Arthur Models in an Arthur Model Group. This number uniquely represents an Arthur Model’s Version within the Model Group. In the case a Version Label is not provided, the platform will show the Version Sequence Number instead.

Example:

# retrieve the first version of a model
arthur_model_v1 = connection.get(model_id)
num = arthur_model_v1.version_sequence_num
# num == 1

# retrieve the second version of a model
model_group = arthur_model_v1.model_group
arthur_model_v2 = model_group.get_version(sequence_num=2)
num = arthur_model_v2.version_sequence_num
# num == 2

Related terms: Arthur Model, Arthur Model Group, Version Label