
Quality Measures

There are two fundamental classes of measures for evaluating the effectiveness of a retrieval system, depending on whether the result set D_q returned in response to the query q is unranked or ranked.

Evaluation of Unranked Result Sets
In this scenario, given the query q the IR system considers any document d_i\in D either as relevant or not relevant to q (i.e., binary relevance).
Specifically, let Rel_q be the set of all the documents that are actually relevant to q, which is assumed to be known in advance. Then, D_q = \{d_1,\ldots,d_{|D_q|}\} is the set of |D_q| documents that the system retrieves and considers relevant to q.
The two measures that are used most frequently to evaluate these IR systems are Precision and Recall, which are computed as follows:
Precision (P) is the fraction of results retrieved by the IR system that are actually relevant:

P = \frac{|D_q\cap Rel_q|}{|D_q|}.

On the other hand, Recall (R) is the fraction of the actually relevant results that the IR system was able to retrieve:

R = \frac{|D_q\cap Rel_q|}{|Rel_q|}.

The above quantities can be better explained using the so-called confusion matrix (or contingency table), as described below:

Confusion Matrix

                    relevant                  not relevant
  retrieved         true positives (tp)       false positives (fp)
  not retrieved     false negatives (fn)      true negatives (tn)

The main diagonal of this matrix specifies the two cases where the IR system performs correctly whereas the anti-diagonal indicates the two situations where the system behaves wrongly.
Concretely, true positives (tp) indicate the number of relevant documents that the IR system correctly retrieves (i.e., hits). Similarly, true negatives (tn) represent the number of not-relevant documents that are not retrieved by the system (i.e., correct rejections).
On the other hand, false positives (fp) refer to those documents that the IR system retrieves as relevant but which in fact are not (a.k.a. false alarms or Type I errors). Finally, false negatives (fn) represent actually relevant documents that the system failed to retrieve (a.k.a. misses or Type II errors).
Precision, which is also called positive predictive value (PPV), and Recall, which is also referred to as sensitivity (as well as true positive rate (TPR) or hit rate), can thus be defined in terms of the above indicators, as follows:

P = \frac{tp}{tp+fp};

R = \frac{tp}{tp+fn}.

In addition, the following measures can be derived from the confusion matrix above:
specificity (or true negative rate (TNR)) computes the fraction of documents correctly not retrieved among all the documents that are actually not relevant:
TNR = \frac{tn}{tn+fp}.
negative predictive value (NPV) measures the fraction of documents correctly not retrieved among all the documents that were not retrieved:
NPV = \frac{tn}{tn+fn}.
false positive rate (FPR) computes the fraction of documents that the IR system mistakenly retrieved among all the documents that are actually not relevant:
FPR = \frac{fp}{tn+fp}.
false discovery rate (FDR) computes the fraction of documents that the IR system mistakenly retrieved among all the documents that were retrieved:
FDR = \frac{fp}{tp+fp}.
accuracy (ACC) measures the overall ability of the IR system to correctly retrieve relevant documents and reject not-relevant ones:
ACC = \frac{tp+tn}{tp+fp+tn+fn}.
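
As a quick sanity check, here is a minimal Python sketch that derives all of the measures above from the four confusion-matrix counts. The function name and the sample counts are purely illustrative, not taken from any particular library:

    def confusion_metrics(tp, fp, fn, tn):
        # Derive the measures defined above from the four confusion-matrix counts.
        return {
            "precision (PPV)":   tp / (tp + fp),
            "recall (TPR)":      tp / (tp + fn),
            "specificity (TNR)": tn / (tn + fp),
            "NPV":               tn / (tn + fn),
            "FPR":               fp / (tn + fp),
            "FDR":               fp / (tp + fp),
            "accuracy (ACC)":    (tp + tn) / (tp + fp + tn + fn),
        }

    # Hypothetical counts: 3 hits, 2 false alarms, 2 misses, 993 correct rejections.
    print(confusion_metrics(tp=3, fp=2, fn=2, tn=993))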

One might be tempted to use accuracy as the right indicator of the effectiveness of an IR system. However, most of the time the distribution of relevance across documents is very skewed, meaning that the vast majority of documents turn out to be not relevant and very few are relevant. Let us see this with an example.
Suppose we have a collection of 1,000 documents (i.e., |D|=1,000). Assume also that, for a given query q, 995 documents are not relevant (i.e., 99.5\%) and only 5 documents are relevant to q (i.e., 0.5\%). In such a case, an IR system that does not retrieve any document at all in response to q (i.e., |D_q|=0, because the system considers all the documents as not relevant) has an accuracy of 99.5\%, which is a remarkably high value.
Unfortunately, this high accuracy value is obtained without retrieving any result document in response to the query, which is of no use to the user interacting with the IR system! In fact, users typically tolerate seeing some not-relevant results (i.e., false positives), provided that they eventually find interesting information (i.e., true positives). That is why it is useful to have precision and recall, which both focus on true positives.
The advantage of having two separate numbers for precision and recall is that, in many circumstances, one is more important than the other. For instance, in some cases one might prefer high precision (e.g., web search results should contain relevant documents in the first page), whereas in other cases it is preferable to get as high a recall as possible (e.g., hard disk searches). Anyway, the two indicators clearly trade off against one another: because recall is a non-decreasing function of the number of documents retrieved (i.e., |D_q|), it is always possible to design an IR system with recall equal to 1 (but with a very low precision!) by simply returning the whole document collection D in response to any query. Of course, this solution is impractical as the number of documents grows increasingly large.
Finally, in general we want to get some amount of recall while tolerating only a certain percentage of false positives. A single measure that trades off precision versus recall is the F-measure, which is the weighted harmonic mean of precision and recall:

F = \frac{1}{\alpha\frac{1}{P}+(1-\alpha)\frac{1}{R}} = \frac{(\beta^2 + 1)PR}{\beta^2 P + R}, where \beta^2 = \frac{1-\alpha}{\alpha}.

The default balanced F-measure equally weights precision and recall, which means making \alpha = 1/2 or \beta=1. It is commonly written as F_1 (or F1), which is the shorthand for F_{\beta=1}.

F_1 = \frac{2PR}{P+R}.
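
A minimal Python sketch of the F-measure, using hypothetical precision and recall values; \beta > 1 weighs recall more heavily, \beta < 1 favors precision:

    def f_measure(p, r, beta=1.0):
        # Weighted harmonic mean of precision and recall (F_beta).
        return (beta ** 2 + 1) * p * r / (beta ** 2 * p + r)

    # Hypothetical values: P = 0.6, R = 0.75.
    print(f_measure(0.6, 0.75))           # balanced F1, about 0.667
    print(f_measure(0.6, 0.75, beta=2))   # F2, about 0.714 (recall weighs more)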

Evaluation of Ranked Result Sets
When the result set returned by an IR system is a ranked list of documents, it is often the case that we want to compare the performance of two retrieval systems X and Y on the ranked lists they retrieve in response to the same query q, i.e., D^X_q and D^Y_q, respectively.
To this end, there exist distinct measures depending on the grade of relevance, i.e., binary vs. multi-graded.
Binary relevance, which we already introduced above, simply states whether a document d_i\in D is relevant or not relevant to q. This means that each document is assigned either 1 to indicate that it is relevant or 0 (sometimes -1) if it is not.
On the other hand, multi-graded relevance may assign each document a discrete value in the range \{0,1,\ldots,K-1\} or, equivalently, \{1,2,\ldots,K\}.
In the following, we examine those two situations separately.

Binary Relevance
If documents are simply considered either as relevant or not, the following measures can be used to compare any two ranked result lists in response to the query q:

Precision@n (P@n)
Consider the following ranked result list containing 8 documents. Relevant results are shown in green, while non-relevant ones are shown in red:
[Ranked list: the results at ranks 1, 3, 5, 6, and 8 are relevant (green); those at ranks 2, 4, and 7 are not relevant (red)]
Precision@n is simply the precision computed over the top-n retrieved results only. According to this definition, we can compute P@n for n=\{1,2,\ldots,8\}, as follows:
– P@1 = How many relevant documents out of top-1 retrieved documents? = 1/1=1.00
– P@2 = How many relevant documents out of top-2 retrieved documents? = 1/2=.500
– P@3 = How many relevant documents out of top-3 retrieved documents? = 2/3\approx.667
– …
– P@8 = How many relevant documents out of top-8 retrieved documents? = 5/8=.625
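
The computation above can be reproduced with a short Python sketch, where the relevance of the example ranked list is encoded as a binary vector (relevant at ranks 1, 3, 5, 6, and 8); names are illustrative only:

    def precision_at(n, relevance):
        # relevance[i] is 1 if the result at rank i+1 is relevant, 0 otherwise.
        return sum(relevance[:n]) / n

    ranked = [1, 0, 1, 0, 1, 1, 0, 1]   # the example list above
    for n in range(1, len(ranked) + 1):
        print(f"P@{n} = {precision_at(n, ranked):.3f}")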

Mean Average Precision (MAP)
Consider the ranked result list above, and focus only on the rank positions of the relevant documents, i.e., 1\leq n_1 < n_2 < \ldots< n_r\leq |D_q| (where |D_q| is the total number of retrieved documents for q). For instance, in the example above we have n_1 = 1, n_2 = 3, n_3 = 5, n_4 = 6, n_5 = 8.
Then, we compute the Precision@n_i for each i=\{1,\ldots,r\}, and take their average:
AvgP(q,n) = \frac{1}{r} \sum_{i=1}^r P@n_i.
AvgP(q,n) is the average precision, and in the example above is:
\frac{1}{5} (\frac{1}{1} + \frac{2}{3} + \frac{3}{5} + \frac{4}{6} + \frac{5}{8}) \approx .712
Now, assume we want to measure the Mean Average Precision (MAP) of an IR system over a set of queries Q (and/or over multiple rankings given the same query). We simply average the per-query average precisions:
MAP = \frac{1}{|Q|} \sum_{q\in Q} AvgP(q,n) .
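
A sketch of average precision and MAP in Python, assuming each ranking is given as a binary relevance vector as before (function names are illustrative):

    def average_precision(relevance):
        # Average of P@n_i over the ranks n_i of the relevant results.
        hits, total = 0, 0.0
        for i, rel in enumerate(relevance, start=1):
            if rel:
                hits += 1
                total += hits / i      # this is exactly P@n_i
        return total / hits if hits else 0.0

    def mean_average_precision(rankings):
        # rankings: one binary relevance vector per query (or per ranking).
        return sum(average_precision(r) for r in rankings) / len(rankings)

    print(average_precision([1, 0, 1, 0, 1, 1, 0, 1]))   # about 0.712, as in the example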

Mean Reciprocal Rank (MRR)
Given a query q and a list of results, we can define rank(q) as the rank of the (first) result that is relevant to q, i.e., the position in the list indicating the (first) relevant result. The reciprocal of this number is called reciprocal rank for the query q, and is computed as \frac{1}{rank(q)}.
The Mean Reciprocal Rank (MRR) is thus the average of the reciprocal ranks of results for a sample of queries Q:
MRR = \frac{1}{|Q|} \sum_{q\in Q} \frac{1}{rank(q)} .
Some issues arise either when no relevant result appears in the returned list (in which case the reciprocal rank is taken to be 0) or when multiple relevant results are returned (in which case MAP is a better choice).
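
A minimal MRR sketch in Python, using hypothetical rankings for three queries whose first relevant result sits at rank 1, at rank 3, and nowhere, respectively:

    def reciprocal_rank(relevance):
        # 1 / rank of the first relevant result, or 0 if no relevant result is returned.
        for i, rel in enumerate(relevance, start=1):
            if rel:
                return 1.0 / i
        return 0.0

    def mean_reciprocal_rank(rankings):
        return sum(reciprocal_rank(r) for r in rankings) / len(rankings)

    print(mean_reciprocal_rank([[1, 0, 0], [0, 0, 1], [0, 0, 0]]))   # (1 + 1/3 + 0) / 3, about 0.444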

Multi-graded Relevance
Instead of using a binary graded relevance score, where a document d_i\in D is considered either as relevant or not, we use multi-graded relevance.
The rationale of this choice is that a document can be only marginally relevant in some circumstances. For instance, even a highly relevant document can be redundant if other documents in the result list already contain the same information.
We still assume D_q = \{d_1, \ldots, d_{|D_q|}\} is the set of the retrieved results (i.e., documents) that are supposed to be relevant to the query q. In addition, we denote by rel_q(i) \in \{0,\ldots,K-1\},~i\in\{1,\ldots,|D_q|\} the relevance function which maps each document to the corresponding relevance score. Note that the binary graded relevance score seen above is a special case of this definition of relevance function, where K=2.
In the following, we present 3 measures which apply in the case of multi-graded relevance.

Cumulative Gain (CG)
Cumulative Gain (CG) is the predecessor of DCG (see below) and does not include the position of a result in the consideration of the usefulness of a result set. Concretely, it is the sum of the graded relevance scores of all results in a search result list.
The CG with respect to a query q at a particular rank position p, 1\leq p \leq|D_q| is defined as:

CG(q,p) = \sum_{i=1}^p rel_q(i) .

It turns out that the value computed with the CG function is unaffected by changes in the ordering of search results. That is, moving a highly relevant document d_i above a higher ranked, less relevant, document d_j does not change the computed value for CG.
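
This order-invariance is easy to see with a tiny Python sketch (names and scores are illustrative):

    def cg(rels, p):
        # Cumulative gain at rank p: plain sum of the graded relevance scores.
        return sum(rels[:p])

    print(cg([3, 2, 3, 0], 4), cg([3, 2, 0, 3], 4))   # same value (8): the ordering does not matter
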
However, we might want the following:
– Highly relevant documents are more useful when appearing earlier in a search engine result list (have higher ranks)
– Highly relevant documents are more useful than marginally relevant documents, which are in turn more useful than irrelevant documents.
In order to deal with the two assumptions above, Discounted Cumulative Gain (DCG) is used instead.

Discounted Cumulative Gain (DCG)
The rationale of DCG is that highly relevant documents appearing lower in a search result list should be penalized, as the graded relevance value is reduced logarithmically proportionally to the position of the result. Typically, this means that the relevance score at rank position i\geq 2 is multiplied by a discount factor equal to \frac{1}{log_2(i)}, while the top-ranked result is not discounted (since log_2(1)=0).
Overall, the DCG with respect to a query q accumulated at a particular rank position p, 1\leq p \leq|D_q|, is thus defined as:

DCG(q,p) = rel_q(1) + \sum_{i=2}^p \frac{rel_q(i)}{log_2(i)} .

Alternatively, if we want to put more emphasis on highly relevant results, we can compute DCG(q,p) as follows:

DCG(q,p) = \sum_{i=1}^p \frac{2^{rel_q(i)}-1}{log_2(i+1)} .

Note that there is no restriction on the base of the logarithm to be used; choosing the base affects the weight of the discount. For instance, using the typical log_2, the discount at rank i=4 is \frac{1}{log_2(4)} = \frac{1}{2}. Of course, using another logarithmic base would give a stronger or a weaker penalization.
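
Both formulations can be sketched in a few lines of Python (log base 2, as in the formulas above); the function name and the sample scores are illustrative, and the result matches the worked example at the end of this post:

    import math

    def dcg(rels, p, alternative=False):
        # rels[i] is the graded relevance of the result at rank i + 1.
        if alternative:
            # variant that puts more emphasis on highly relevant results
            return sum((2 ** rels[i] - 1) / math.log2(i + 2) for i in range(p))
        # classic form: the top-ranked result is not discounted
        return rels[0] + sum(rels[i] / math.log2(i + 1) for i in range(1, p))

    print(dcg([3, 2, 3, 0, 0, 1, 2, 2, 3, 0], 10))   # about 9.61, as in the example below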

Normalized Discounted Cumulative Gain (nDCG)
Search result lists vary in length depending on the query q. Comparing a search engine’s performance from one query to the next cannot be consistently achieved using DCG alone, so the cumulative gain at each position for a chosen value of p should be normalized across queries. This is done by sorting the documents of a result list by relevance, producing the maximum possible DCG up to position p, also called Ideal DCG (IDCG) at that position.
For a query q, the normalized discounted cumulative gain (nDCG) is therefore computed as:

nDCG(q,p) = \frac{DCG(q,p)}{IDCG(q,p)} .

If we want a measure of the average performance of a search engine’s ranking algorithm on a set of queries Q, the nDCG values for all queries q \in Q can be averaged.
Note also that for a perfect ranking algorithm, DCG(q,p) will be the same as IDCG(q,p), producing an nDCG of 1. It turns out that 0\leq nDCG(q,p)\leq 1,~\forall q\in Q, so nDCG values are comparable across queries.
The main difficulty encountered in using nDCG is the unavailability of an ideal ordering of results when only partial relevance feedback is available.
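
A self-contained Python sketch of nDCG, reusing the classic DCG form from the previous sketch; it reproduces the value computed in the example below:

    import math

    def dcg(rels, p):
        return rels[0] + sum(rels[i] / math.log2(i + 1) for i in range(1, p))

    def ndcg(rels, p):
        # Normalize by the DCG of the ideal (relevance-sorted) ordering, i.e., the IDCG.
        ideal = sorted(rels, reverse=True)
        return dcg(rels, p) / dcg(ideal, p)

    print(ndcg([3, 2, 3, 0, 0, 1, 2, 2, 3, 0], 10))   # about 0.88, as in the example below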

Example
Suppose we have the following ranked list of documents D_q, each of them scored on the scale \{0,\ldots,3\}, where 0 means totally not relevant, 3 means completely relevant, and 1 and 2 mean “somewhere in between”:
D_q = 3, 2, 3, 0, 0, 1, 2, 2, 3, 0 .
This means that the first-ranked document has relevance score 3, the second-ranked document has relevance score 2, etc.
Let’s compute the CG for this list, namely CG(q,p=10). By the definition above it follows that:

CG(q,10) = \sum_{i=1}^{10} rel_q(i) = 3+2+3+0+0+1+2+2+3+0 = 16 .

As we said before, CG is not affected by the ordering of the results in the ranked list. For instance, if we swap the third document (scored 3) with the fourth (scored 0), we still obtain the same value CG = 16, although the ranked list is different.
DCG is used to emphasize highly relevant documents appearing early in the result list. Using the logarithmic scale for reduction, the DCG for each result in order is:

i     rel_q(i)    log_2(i)    rel_q(i) / log_2(i)
1     3           0           N/A (not discounted)
2     2           1           2
3     3           1.58        1.89
4     0           2           0.00
5     0           2.32        0.00
6     1           2.58        0.39
7     2           2.81        0.71
8     2           3           0.67
9     3           3.17        0.95
10    0           3.32        0.00

Therefore, DCG(q,10) can be computed as follows:

DCG(q,10) = rel_q(1) + \sum_{i=2}^{10} \frac{rel_q(i)}{log_2(i)} =
= 3 + (2+1.89+0+0+0.39+0.71+0.67+0.95+0) = 9.61 .

Now a switch of the third and fourth documents results in a reduced DCG because a less relevant document is placed higher in the ranking, i.e., a more relevant document is discounted more by being placed in a lower rank.
The performance on this query cannot be compared to the performance on another query in this form, since the other query may have more results, resulting in a larger overall DCG that is not necessarily better. In order to compare them, the DCG values must be normalized.
To normalize DCG values, an ideal ordering for the given query is needed. For this example, that ordering would be the monotonically decreasing sort of the available relevance judgments, which is:

D_q = 3, 3, 3, 2, 2, 2, 1, 0, 0, 0 .

The DCG of the ideal ordering above, or IDCG, is then:

IDCG(q,10) = 3 + (3 + 1.89 + 1 + 0.86 + 0.78 + 0.36 + 0 + 0 + 0) =
= 10.89 .

Therefore, the nDCG for this query is given as:

nDCG(q,10) = \frac{DCG(q,10)}{IDCG(q,10)} = \frac{9.61}{10.89} = 0.88 .

