
tf*idf

From Wikipedia, the free encyclopedia

The tf*idf weight (term frequency–inverse document frequency) is a numerical statistic that reflects how important a word is to a document in a collection or corpus. It is often used as a weighting factor in information retrieval and text mining. The tf*idf value increases proportionally with the number of times a word appears in the document, but is offset by the frequency of the word in the corpus, which helps to control for the fact that some words are generally more common than others.

Variations of the tf*idf weighting scheme are often used by search engines as a central tool in scoring and ranking a document's relevance given a user query. tf*idf can also be used successfully for stop-word filtering in various subject fields, including text summarization and classification.[1]

One of the simplest ranking functions is computed by summing the tf*idf for each query term; many more sophisticated ranking functions are variants of this simple model.
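
As a minimal illustration of this simple ranking scheme, the Python sketch below sums tf*idf weights over the query terms; the weights and the document names doc1 and doc2 are invented placeholders, not values from any real corpus:

    # Toy ranking sketch: score each document by summing the tf*idf
    # of every query term. The weights below are invented for illustration.
    tfidf_weights = {
        ("the", "doc1"): 0.0, ("brown", "doc1"): 0.12, ("cow", "doc1"): 0.20,
        ("the", "doc2"): 0.0, ("brown", "doc2"): 0.05, ("cow", "doc2"): 0.0,
    }

    def score(query_terms, doc):
        return sum(tfidf_weights.get((term, doc), 0.0) for term in query_terms)

    for doc in ("doc1", "doc2"):
        print(doc, score(["the", "brown", "cow"], doc))
    # doc1 scores 0.32, doc2 scores 0.05, so doc1 ranks higher.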

Motivation

Suppose we have a set of English text documents and wish to determine which document is most relevant to the query "the brown cow". A simple way to start out is by eliminating documents that do not contain all three words "the", "brown", and "cow", but this still leaves many documents. To further distinguish them, we might count the number of times each term occurs in each document and sum them all together; the number of times a term occurs in a document is called its term frequency.
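
To make the counting step concrete, here is a small Python sketch; the two toy documents are invented for illustration:

    from collections import Counter

    # Raw term counts for a toy two-document collection.
    docs = {
        "doc1": "the brown cow and the brown dog".split(),
        "doc2": "the the the the cow".split(),
    }
    for name, words in docs.items():
        print(name, Counter(words))
    # doc1 -> {'the': 2, 'brown': 2, 'cow': 1, 'and': 1, 'dog': 1}
    # doc2 -> {'the': 4, 'cow': 1}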

However, because the term "the" is so common, this will tend to incorrectly emphasize documents which happen to use the word "the" more frequently, without giving enough weight to the more meaningful terms "brown" and "cow". The term "the" is not a good keyword to distinguish relevant and non-relevant documents and terms, unlike the less common words "brown" and "cow". Hence an inverse document frequency factor is incorporated which diminishes the weight of terms that occur very frequently in the collection and increases the weight of terms that occur rarely.

Mathematical details

The term count in the given document is simply the number of times a given term appears in that document. This count is usually normalized to prevent a bias towards longer documents (which may have a higher term count regardless of the actual importance of that term in the document) to give a measure of the importance of the term t within the particular document d. Thus we have the term frequency \mathrm{tf}(t,d). (Many variants have been suggested; see e.g. Manning, Raghavan and Schütze, p. 118.)
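
One common variant, for instance, normalizes the raw count f(t,d) by the total number of terms in the document (this is also the variant used in the example below):

 \mathrm{tf}(t,d) = \frac{f(t,d)}{\sum_{t' \in d} f(t',d)}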

The inverse document frequency is a measure of whether the term is common or rare across all documents. It is obtained by dividing the total number of documents by the number of documents containing the term, and then taking the logarithm of that quotient.

 \mathrm{idf}(t, D) =  \log \frac{|D|}{|\{d \in D: t \in d\}|}

with

  •  |D| : cardinality of D, or the total number of documents in the corpus
  •  |\{d \in D: t \in d\}|  : number of documents where the term  t appears (i.e.,  \mathrm{tf}(t,d) \neq 0). If the term is not in the corpus, this will lead to a division-by-zero. It is therefore common to adjust the formula to 1 + |\{d \in D: t \in d\}|.

Mathematically, the base of the logarithm does not matter: changing it multiplies the overall result by a constant factor.
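
As a sketch, the adjusted idf from the list above might be computed as follows; the three-document corpus is invented for illustration:

    import math

    # idf with the +1 adjustment to the document count, so a term that
    # appears in no document does not cause a division by zero.
    def idf(term, docs):
        n_containing = sum(1 for doc in docs if term in doc)
        return math.log(len(docs) / (1 + n_containing))

    docs = [{"the", "brown", "cow"}, {"the", "dog"}, {"the", "cat"}]
    print(idf("the", docs))  # log(3/4) < 0: appears in every document
    print(idf("cow", docs))  # log(3/2) > 0: rare in the corpus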

Then the tf*idf is calculated as

\mathrm{tfidf}(t,d,D) = \mathrm{tf}(t,d) \times \mathrm{idf}(t, D)

A high tf*idf weight is reached by a high term frequency (in the given document) and a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms. Since the ratio inside the idf's log function is always greater than or equal to 1, the value of idf (and of tf*idf) is greater than or equal to 0. As a term appears in more documents, the ratio inside the logarithm approaches 1, bringing the idf (and tf*idf) closer to 0. If a 1 is added to the denominator, a term that appears in all documents will have a negative idf, and a term that occurs in all but one document will have an idf equal to zero.
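
Putting the pieces together, here is a minimal sketch of the unadjusted formula; it assumes every term looked up occurs in at least one document, and the toy corpus is invented for illustration:

    import math
    from collections import Counter

    def tf(term, doc):
        # Term count normalized by document length (one common variant).
        return Counter(doc)[term] / len(doc)

    def idf(term, docs):
        # Unadjusted idf; assumes the term appears in at least one document.
        n_containing = sum(1 for doc in docs if term in doc)
        return math.log(len(docs) / n_containing)

    def tfidf(term, doc, docs):
        return tf(term, doc) * idf(term, docs)

    docs = ["the brown cow".split(), "the brown dog".split(), "the black cat".split()]
    print(tfidf("the", docs[0], docs))  # 0.0: "the" is in every document
    print(tfidf("cow", docs[0], docs))  # > 0: "cow" is rare in the corpus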

Various (mathematical) forms of the tf*idf term weight can be derived from a probabilistic retrieval model that mimics human relevance decision making.

Example

Consider a document containing 100 words wherein the word cow appears 3 times. Following the previously defined formulas, the term frequency (tf) for cow is (3 / 100) = 0.03. Now, assume we have 10 million documents and cow appears in one thousand of these. Then the inverse document frequency (idf) is calculated as log(10,000,000 / 1,000) = 4, using a base-10 logarithm. The tf*idf score is the product of these quantities: 0.03 × 4 = 0.12.
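
The same arithmetic can be checked in Python; note that the example assumes a base-10 logarithm:

    import math

    tf = 3 / 100                          # 0.03
    idf = math.log10(10_000_000 / 1_000)  # 4.0
    print(tf * idf)                       # 0.12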
