Package nltk :: Package metrics

Package metrics


Classes and methods for scoring processing modules.

Submodules

Classes

ConfusionMatrix
The confusion matrix between a list of reference values and a corresponding list of test values.
AnnotationTask
Represents an annotation task, i.e. people assign labels to items.
TrigramAssocMeasures
A collection of trigram association measures.
NgramAssocMeasures
An abstract class defining a collection of generic association measures.
ContingencyMeasures
Wraps NgramAssocMeasures classes such that the arguments of association measures are contingency table values rather than marginals.
BigramAssocMeasures
A collection of bigram association measures.
Functions

log_likelihood(reference, test)
Given a list of reference values and a corresponding list of test probability distributions, return the average log likelihood of the reference values, given the probability distributions.
recall(reference, test)
Given a set of reference values and a set of test values, return the fraction of reference values that appear in the test set.
Returns: float or None
precision(reference, test)
Given a set of reference values and a set of test values, return the fraction of test values that appear in the reference set.
Returns: float or None
approxrand(a, b, **kwargs)
Returns an approximate significance level between two lists of independently generated test values.
Returns: tuple
f_measure(reference, test, alpha=0.5)
Given a set of reference values and a set of test values, return the f-measure of the test values, when compared against the reference values.
Returns: float or None
accuracy(reference, test)
Given a list of reference values and a corresponding list of test values, return the fraction of corresponding values that are equal.
windowdiff(seg1, seg2, k, boundary='1')
Compute the windowdiff score for a pair of segmentations.
Returns: int
presence(label)
Higher-order function to test presence of a given label.
edit_distance(s1, s2)
Calculate the Levenshtein edit-distance between two strings.
interval_distance(label1, label2)
Krippendorff's interval distance metric.
custom_distance(file)
fractional_presence(label)
jaccard_distance(label1, label2)
Distance metric comparing set-similarity.
binary_distance(label1, label2)
Simple equality test.
masi_distance(label1, label2)
Distance metric that takes into account partial agreement when multiple labels are assigned.
ranks_from_sequence(seq)
Given a sequence, yields each element with an increasing rank, suitable for use as an argument to spearman_correlation.
spearman_correlation(ranks1, ranks2)
Returns the Spearman correlation coefficient for two rankings, which should be dicts or sequences of (key, rank).
ranks_from_scores(scores, rank_gap=1e-15)
Given a sequence of (key, score) tuples, yields each key with an increasing rank, tying with previous key's rank if the difference between their scores is less than rank_gap.
Function Details

log_likelihood(reference, test)

Given a list of reference values and a corresponding list of test probability distributions, return the average log likelihood of the reference values, given the probability distributions.

Parameters:
  • reference (list) - A list of reference values
  • test (list of ProbDistI) - A list of probability distributions over values to compare against the corresponding reference values.
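A sketch of the computation, using plain dicts as stand-ins for NLTK's ProbDistI objects (the dict stand-in and the base-2 logarithm are assumptions for illustration; the real objects would be queried via their logprob method):

```python
import math

def avg_log_likelihood(reference, test):
    # Average base-2 log likelihood of each reference value under the
    # corresponding distribution.  Plain dicts stand in for ProbDistI here.
    total = sum(math.log2(dist[value]) for value, dist in zip(reference, test))
    return total / len(reference)

reference = ["a", "b"]
test = [{"a": 0.5, "b": 0.5}, {"a": 0.25, "b": 0.75}]
avg = avg_log_likelihood(reference, test)  # (log2(0.5) + log2(0.75)) / 2
```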

recall(reference, test)

Given a set of reference values and a set of test values, return the fraction of reference values that appear in the test set. In particular, return |reference ∩ test| / |reference|. If reference is empty, then return None.

Parameters:
  • reference (Set) - A set of reference values.
  • test (Set) - A set of values to compare against the reference set.
Returns: float or None

precision(reference, test)

Given a set of reference values and a set of test values, return the fraction of test values that appear in the reference set. In particular, return |reference ∩ test| / |test|. If test is empty, then return None.

Parameters:
  • reference (Set) - A set of reference values.
  • test (Set) - A set of values to compare against the reference set.
Returns: float or None
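Both recall and precision reduce to a line of set arithmetic. A minimal re-implementation sketch (illustrative only; the real functions are importable from nltk.metrics):

```python
def precision(reference, test):
    # |reference & test| / |test|; None when test is empty.
    if len(test) == 0:
        return None
    return len(reference & test) / len(test)

def recall(reference, test):
    # |reference & test| / |reference|; None when reference is empty.
    if len(reference) == 0:
        return None
    return len(reference & test) / len(reference)

ref = {"DT", "NN", "VB"}
hyp = {"DT", "NN", "JJ", "RB"}
p = precision(ref, hyp)  # 2 shared values out of 4 proposed -> 0.5
r = recall(ref, hyp)     # 2 shared values out of 3 reference -> 2/3
```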

approxrand(a, b, **kwargs)

Returns an approximate significance level between two lists of independently generated test values.

Approximate randomization calculates significance by randomly drawing from a sample of the possible permutations. In the limit of the number of possible permutations, the significance level is exact. The approximate significance level is the fraction of shuffles in which the statistic of the permuted lists differs from the actual statistic of the unpermuted argument lists.

Parameters:
  • a (list) - a list of test values
  • b (list) - another list of independently generated test values
Returns: tuple
a tuple containing an approximate significance level, the count of the number of times the pseudo-statistic varied from the actual statistic, and the number of shuffles
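The procedure can be sketched as follows, assuming a difference-of-means test statistic and hypothetical shuffles/statistic/seed keyword names (the real function's keyword arguments and smoothing may differ):

```python
import random

def approxrand(a, b, shuffles=1000, statistic=None, seed=0):
    # Approximate randomization test: pool both samples, re-split them at
    # random, and count how often the permuted statistic is at least as
    # extreme as the observed one.
    if statistic is None:
        statistic = lambda xs, ys: abs(sum(xs) / len(xs) - sum(ys) / len(ys))
    rng = random.Random(seed)
    actual = statistic(a, b)
    pooled = list(a) + list(b)
    count = 0
    for _ in range(shuffles):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if statistic(pa, pb) >= actual:
            count += 1
    # Add-one smoothing of the significance level is an assumption of
    # this sketch.
    significance = (count + 1) / (shuffles + 1)
    return significance, count, shuffles

sig, count, n = approxrand([5.0, 6.0, 7.0, 8.0, 9.0],
                           [1.0, 2.0, 3.0, 1.0, 2.0], shuffles=999)
```

Here the two samples are as different as the pooled values allow, so the estimated significance level is small.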

f_measure(reference, test, alpha=0.5)

Given a set of reference values and a set of test values, return the f-measure of the test values, when compared against the reference values. The f-measure is the harmonic mean of the precision and recall, weighted by alpha. In particular, given the precision p and recall r defined by:

  • p = |reference ∩ test| / |test|
  • r = |reference ∩ test| / |reference|

The f-measure is:

  • 1/(alpha/p + (1-alpha)/r)

If either reference or test is empty, then f_measure returns None.

Parameters:
  • reference (Set) - A set of reference values.
  • test (Set) - A set of values to compare against the reference set.
Returns: float or None
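The formula above can be sketched directly (the zero-overlap guard returning 0.0 is an assumption of this sketch):

```python
def f_measure(reference, test, alpha=0.5):
    # Weighted harmonic mean of precision and recall:
    #   1 / (alpha/p + (1-alpha)/r)
    if len(reference) == 0 or len(test) == 0:
        return None
    overlap = len(reference & test)
    if overlap == 0:
        return 0.0  # guard against division by zero (sketch assumption)
    p = overlap / len(test)
    r = overlap / len(reference)
    return 1.0 / (alpha / p + (1 - alpha) / r)

ref = {"DT", "NN", "VB"}
hyp = {"DT", "NN", "JJ", "RB"}
f = f_measure(ref, hyp)  # p = 0.5, r = 2/3 -> 4/7
```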

accuracy(reference, test)

Given a list of reference values and a corresponding list of test values, return the fraction of corresponding values that are equal. In particular, return the fraction of indices 0 <= i < len(test) such that test[i] == reference[i].

Parameters:
  • reference (list) - An ordered list of reference values.
  • test (list) - A list of values to compare against the corresponding reference values.
Raises:
  • ValueError - If reference and test do not have the same length.
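A minimal sketch of the computation (illustrative; the real function is importable from nltk.metrics):

```python
def accuracy(reference, test):
    # Fraction of positions i where test[i] == reference[i].
    if len(reference) != len(test):
        raise ValueError("Lists must have the same length.")
    return sum(1 for x, y in zip(reference, test) if x == y) / len(reference)

ref = ["DT", "NN", "VB", "IN"]
hyp = ["DT", "NN", "NN", "IN"]
acc = accuracy(ref, hyp)  # 3 of 4 positions match -> 0.75
```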

windowdiff(seg1, seg2, k, boundary='1')

Compute the windowdiff score for a pair of segmentations. A segmentation is any sequence over a vocabulary of two items (e.g. "0", "1"), where the specified boundary value is used to mark the edge of a segmentation.

>>> s1 = "00000010000000001000000"
>>> s2 = "00000001000000010000000"
>>> s3 = "00010000000000000001000"
>>> windowdiff(s1, s1, 3)
0
>>> windowdiff(s1, s2, 3)
4
>>> windowdiff(s2, s3, 3)
16
Parameters:
  • seg1 (string or list) - a segmentation
  • seg2 (string or list) - a segmentation
  • k (int) - window width
  • boundary (string or int or bool) - boundary value
Returns: int
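A sketch that reproduces the doctest above: slide a window over both segmentations and count positions where they disagree on the number of boundaries in view (note that in this formulation each comparison window spans k+1 positions):

```python
def windowdiff(seg1, seg2, k, boundary="1"):
    # Count windows in which the two segmentations contain a different
    # number of boundary symbols.
    if len(seg1) != len(seg2):
        raise ValueError("Segmentations have unequal length")
    wd = 0
    for i in range(len(seg1) - k):
        wd += (seg1[i:i + k + 1].count(boundary)
               != seg2[i:i + k + 1].count(boundary))
    return wd

s1 = "00000010000000001000000"
s2 = "00000001000000010000000"
s3 = "00010000000000000001000"
scores = (windowdiff(s1, s1, 3), windowdiff(s1, s2, 3), windowdiff(s2, s3, 3))
```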

edit_distance(s1, s2)

Calculate the Levenshtein edit-distance between two strings. The edit distance is the number of characters that need to be substituted, inserted, or deleted, to transform s1 into s2. For example, transforming "rain" to "shine" requires three steps, consisting of two substitutions and one insertion: "rain" -> "sain" -> "shin" -> "shine". These operations could have been done in other orders, but at least three steps are needed.

Parameters:
  • s1 (string) - The first string to be analysed.
  • s2 (string) - The second string to be analysed.
Returns: int
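The standard dynamic-programming formulation, as a sketch:

```python
def edit_distance(s1, s2):
    # Levenshtein distance: substitution, insertion, and deletion each
    # cost 1.  dist[i][j] = distance between s1[:i] and s2[:j].
    m, n = len(s1), len(s2)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i  # delete all of s1[:i]
    for j in range(n + 1):
        dist[0][j] = j  # insert all of s2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[m][n]

d = edit_distance("rain", "shine")  # two substitutions + one insertion -> 3
```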

interval_distance(label1, label2)

Krippendorff's interval distance metric.

>>> interval_distance(1,10)
81

Krippendorff 1980, Content Analysis: An Introduction to its Methodology
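The metric is simply the squared difference between the two interval-scale labels, as a one-line sketch consistent with the doctest above:

```python
def interval_distance(label1, label2):
    # Squared difference between two interval-scale labels.
    return pow(label1 - label2, 2)
```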

binary_distance(label1, label2)

Simple equality test.

0.0 if the labels are identical, 1.0 if they are different.

>>> binary_distance(1,1)
0.0
>>> binary_distance(1,3)
1.0

masi_distance(label1, label2)

Distance metric that takes into account partial agreement when multiple labels are assigned.

>>> masi_distance(set([1,2]),set([1,2,3,4]))
0.5

Passonneau 2005, Measuring Agreement on Set-Valued Items (MASI) for Semantic and Pragmatic Annotation.
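One formulation consistent with the doctest above is one minus the Jaccard overlap ratio; treat this as a sketch, since later NLTK versions weight the overlap further by the kind of partial agreement:

```python
def masi_distance(label1, label2):
    # 1 minus the set-overlap (Jaccard) ratio between the two label sets.
    return 1 - len(label1 & label2) / len(label1 | label2)
```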

spearman_correlation(ranks1, ranks2)

Returns the Spearman correlation coefficient for two rankings, which should be dicts or sequences of (key, rank). The coefficient ranges from -1.0 (ranks are opposite) to 1.0 (ranks are identical), and is only calculated for keys in both rankings (for meaningful results, remove keys present in only one list before ranking).
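A sketch using the difference-of-ranks formula 1 - 6·Σd²/(n(n²-1)), which assumes no tied ranks:

```python
def spearman_correlation(ranks1, ranks2):
    # Spearman's rho over the keys common to both rankings.
    ranks1, ranks2 = dict(ranks1), dict(ranks2)
    common = set(ranks1) & set(ranks2)
    n = len(common)
    if n < 2:
        raise ValueError("Need at least two shared keys")
    d_squared = sum((ranks1[k] - ranks2[k]) ** 2 for k in common)
    return 1.0 - 6.0 * d_squared / (n * (n * n - 1))

rho = spearman_correlation(
    [("a", 0), ("b", 1), ("c", 2)],
    [("a", 2), ("b", 1), ("c", 0)],
)  # perfectly reversed ranking -> -1.0
```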

ranks_from_scores(scores, rank_gap=1e-15)

Given a sequence of (key, score) tuples, yields each key with an increasing rank, tying with previous key's rank if the difference between their scores is less than rank_gap. Suitable for use as an argument to spearman_correlation.
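The tying behaviour can be sketched with a generator, assuming the (key, score) pairs arrive sorted from best score to worst:

```python
def ranks_from_scores(scores, rank_gap=1e-15):
    # Yield (key, rank) pairs; a key whose score is within rank_gap of the
    # previous key's score shares that key's rank.
    prev_score = None
    rank = 0
    for i, (key, score) in enumerate(scores):
        if prev_score is None or abs(score - prev_score) >= rank_gap:
            rank = i
        prev_score = score
        yield key, rank

ranked = list(ranks_from_scores([("a", 0.9), ("b", 0.9), ("c", 0.5)]))
# "a" and "b" tie at rank 0; "c" gets rank 2
```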