# Meters¶

Meters are used to accumulate values over time or over a batch, and generally provide some statistical measure of your process.

## APMeter¶

class pywick.meters.apmeter.APMeter[source]

The APMeter measures the average precision per class.

The APMeter is designed to operate on NxK Tensors output and target, and optionally a Nx1 Tensor weight where (1) the output contains model output scores for N examples and K classes that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function); (2) the target contains only values 0 (for negative examples) and 1 (for positive examples); and (3) the weight ( > 0) represents weight for each sample.

add(output, target, weight=None)[source]

Args:
    output (Tensor): NxK tensor that for each of the N examples indicates the probability of the example belonging to each of the K classes, according to the model. The probabilities should sum to one over all classes.
    target (Tensor): binary NxK tensor that encodes which of the K classes are associated with the N-th input (e.g. a row [0, 1, 0, 1] indicates that the example is associated with classes 2 and 4).
    weight (Tensor, optional): Nx1 tensor representing the weight for each example (each weight > 0).
reset()[source]

Resets the meter with empty member variables

value()[source]

Returns the model’s average precision for each class

Return:
ap (FloatTensor): 1xK tensor, with avg precision for each class k
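
The per-class statistic described above can be illustrated with a minimal pure-Python sketch for a single class (this is an illustration of the average-precision computation, not the pywick implementation itself):

```python
def average_precision(scores, targets):
    """Average precision for one class: the mean of the precision values
    taken at the rank of each positively-labeled example, with examples
    ranked in descending score order."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    hits, precision_sum = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if targets[i] == 1:
            hits += 1
            precision_sum += hits / rank  # precision at this positive's rank
    return precision_sum / hits if hits else 0.0
```

The APMeter applies this computation column-by-column over the accumulated NxK output and target tensors, optionally weighting each example.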

## AUCMeter¶

class pywick.meters.aucmeter.AUCMeter[source]

The AUCMeter measures the area under the receiver-operating characteristic (ROC) curve for binary classification problems. The area under the curve (AUC) can be interpreted as the probability that, given a randomly selected positive example and a randomly selected negative example, the positive example is assigned a higher score by the classification model than the negative example.

The AUCMeter is designed to operate on one-dimensional Tensors output and target, where (1) the output contains model output scores that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function); and (2) the target contains only values 0 (for negative examples) and 1 (for positive examples).

add(output, target)[source]
reset()[source]
value()[source]
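
The probabilistic interpretation of AUC given above can be computed directly by comparing every positive/negative score pair. The following is a minimal pure-Python sketch of that statistic (the meter itself works on the accumulated ROC curve, but the result is the same quantity):

```python
def auc(scores, targets):
    """Probability that a randomly chosen positive example is scored
    higher than a randomly chosen negative one (ties count as half)."""
    pos = [s for s, t in zip(scores, targets) if t == 1]
    neg = [s for s, t in zip(scores, targets) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```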

## AverageMeter¶

class pywick.meters.averagemeter.AverageMeter[source]

Bases: object

Computes and stores the average and current value

reset()[source]
update(val, n=1)[source]
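
A minimal sketch of the running-average bookkeeping this meter performs (written from the signatures above; the actual pywick class may differ in detail):

```python
class AverageMeter:
    """Stores the most recent value and a running average.
    update(val, n) records n observations whose mean is val."""
    def __init__(self):
        self.reset()

    def reset(self):
        self.val, self.sum, self.count, self.avg = 0.0, 0.0, 0, 0.0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count
```

A typical use is tracking per-batch loss: `update(batch_loss, n=batch_size)` each iteration, then read `avg` at the end of the epoch.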

## AverageValueMeter¶

class pywick.meters.averagevaluemeter.AverageValueMeter[source]

Keeps track of mean and standard deviation for some value.

add(value, n=1)[source]
reset()[source]
value()[source]
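
The mean/standard-deviation tracking can be sketched with running sums, as below. This is an illustration of the statistic, not the pywick implementation; in particular, returning NaN for the standard deviation of a single value is an assumption here:

```python
import math

class AverageValueMeter:
    """Tracks the mean and sample standard deviation of added values."""
    def __init__(self):
        self.reset()

    def reset(self):
        self.n, self.sum, self.sumsq = 0, 0.0, 0.0

    def add(self, value, n=1):
        self.sum += value * n
        self.sumsq += value * value * n
        self.n += n

    def value(self):
        mean = self.sum / self.n
        if self.n == 1:
            return mean, float('nan')
        var = (self.sumsq - self.n * mean * mean) / (self.n - 1)
        return mean, math.sqrt(max(var, 0.0))  # guard tiny negative rounding
```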

## ClassErrorMeter¶

class pywick.meters.classerrormeter.ClassErrorMeter(topk=None, accuracy=False)[source]
add(output, target)[source]
reset()[source]
value(k=-1)[source]
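
The top-k error/accuracy statistic this meter reports can be sketched in plain Python as follows (an illustration written from the constructor arguments above, not the pywick code):

```python
def class_error(outputs, targets, topk=(1,), accuracy=False):
    """For each k, the percentage of examples whose true class is missing
    from the k highest-scored classes (or present, when accuracy=True)."""
    results = {}
    for k in topk:
        correct = 0
        for scores, t in zip(outputs, targets):
            top = sorted(range(len(scores)),
                         key=lambda c: scores[c], reverse=True)[:k]
            correct += t in top
        frac = correct / len(targets)
        results[k] = 100.0 * (frac if accuracy else 1.0 - frac)
    return results
```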

## ConfusionMeter¶

class pywick.meters.confusionmeter.ConfusionMeter(k, normalized=False)[source]

Maintains a confusion matrix for a given classification problem.

The ConfusionMeter constructs a confusion matrix for a multi-class classification problem. It does not support multi-label, multi-class problems: for such problems, please use MultiLabelConfusionMeter.

Parameters: k (int) – number of classes in the classification problem; normalized (boolean) – determines whether the confusion matrix is normalized
add(predicted, target)[source]

Computes the K x K confusion matrix, where K is the number of classes

Parameters: predicted (tensor) – can be an N x K tensor of predicted scores obtained from the model for N examples and K classes, or an N-tensor of integer values between 0 and K-1; target (tensor) – can be an N-tensor of integer values between 0 and K-1, or an N x K tensor where targets are provided as one-hot vectors
reset()[source]
value()[source]
Returns:
Confusion matrix of K rows and K columns, where rows correspond to ground-truth targets and columns correspond to predicted targets.
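
The row/column convention described above (rows = ground truth, columns = predictions) can be sketched for the integer-label case as follows; this is an illustration only, and omits the score-tensor input and normalization handling:

```python
def confusion_matrix(predicted, target, k):
    """K x K matrix: entry [t][p] counts examples of true class t
    that the model predicted as class p."""
    matrix = [[0] * k for _ in range(k)]
    for p, t in zip(predicted, target):
        matrix[t][p] += 1
    return matrix
```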

## MAPMeter¶

class pywick.meters.mapmeter.mAPMeter[source]

The mAPMeter measures the mean average precision over all classes.

The mAPMeter is designed to operate on NxK Tensors output and target, and optionally a Nx1 Tensor weight where (1) the output contains model output scores for N examples and K classes that ought to be higher when the model is more convinced that the example should be positively labeled, and smaller when the model believes the example should be negatively labeled (for instance, the output of a sigmoid function); (2) the target contains only values 0 (for negative examples) and 1 (for positive examples); and (3) the weight ( > 0) represents weight for each sample.

add(output, target, weight=None)[source]
reset()[source]
value()[source]

## Meter¶

class pywick.meters.meter.Meter[source]

Bases: object

Abstract meter class from which all other meters inherit

add()[source]
reset()[source]
value()[source]

## MovingAverageValueMeter¶

class pywick.meters.movingaveragevaluemeter.MovingAverageValueMeter(windowsize)[source]

Keeps track of mean and standard deviation of some value for a given window.

add(value)[source]
reset()[source]
value()[source]
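
A minimal sketch of the windowed statistic, using a bounded deque (an illustration written from the description above; the pywick class uses its own circular buffer):

```python
from collections import deque
import math

class MovingAverageValueMeter:
    """Mean and sample standard deviation over the last
    `windowsize` added values."""
    def __init__(self, windowsize):
        self.window = deque(maxlen=windowsize)

    def reset(self):
        self.window.clear()

    def add(self, value):
        self.window.append(value)  # oldest value drops out automatically

    def value(self):
        vals = list(self.window)
        mean = sum(vals) / len(vals)
        if len(vals) == 1:
            return mean, float('nan')
        var = sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)
        return mean, math.sqrt(var)
```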

## MSEMeter¶

class pywick.meters.msemeter.MSEMeter(root=False)[source]
add(output, target)[source]
reset()[source]
value()[source]
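
The accumulated mean-squared-error statistic, with the `root` flag turning the result into RMSE, can be sketched as follows (an illustration of the computation, not the pywick implementation):

```python
import math

class MSEMeter:
    """Accumulates squared error over added (output, target) pairs;
    value() returns the MSE, or the RMSE when root=True."""
    def __init__(self, root=False):
        self.root = root
        self.reset()

    def reset(self):
        self.n, self.sq_error = 0, 0.0

    def add(self, output, target):
        for o, t in zip(output, target):
            self.sq_error += (o - t) ** 2
            self.n += 1

    def value(self):
        mse = self.sq_error / self.n
        return math.sqrt(mse) if self.root else mse
```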

## TimeMeter¶

class pywick.meters.timemeter.TimeMeter(unit)[source]

This meter is designed to measure the time between events and can be used to measure, for instance, the average processing time per batch of data. It is different from most other meters in terms of the methods it provides:

Methods:

• reset() resets the timer, setting the timer and unit counter to zero.
• value() returns the time passed since the last reset(); divided by the counter value when unit=true.
reset()[source]
value()[source]
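
The timer-plus-unit-counter behavior described above can be sketched like this (an illustration only; the counting method shown here as `add()` is an assumption, and the real class may increment its counter differently):

```python
import time

class TimeMeter:
    """Elapsed wall-clock seconds since reset(); when unit=True,
    the elapsed time is divided by a unit counter, giving e.g.
    average seconds per batch."""
    def __init__(self, unit=False):
        self.unit = unit
        self.reset()

    def reset(self):
        self.start = time.time()
        self.n = 0

    def add(self, n=1):
        self.n += n  # count processed units, e.g. batches

    def value(self):
        elapsed = time.time() - self.start
        return elapsed / self.n if self.unit and self.n else elapsed
```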