WhitespaceAnalyzer: splits only on whitespace. It does not lowercase characters, does not support Chinese, and keeps punctuation such as the dash in the original text; everything between two spaces becomes one token.

SimpleAnalyzer: more aggressive than WhitespaceAnalyzer. It lowercases all characters and treats every non-letter character as a token boundary; it keeps stop words and does not support Chinese.

StopAnalyzer: goes beyond SimpleAnalyzer by adding stop-word removal on top of it; it still does not support Chinese.

StandardAnalyzer: handles English the same way as StopAnalyzer, but keeps words of the form XY&Z and preserves email addresses. It supports Chinese by splitting it into single characters.

The four analyzers can be illustrated with the following example.

Input string: XY&Z mail is - xyz@sohu.com

===== WhitespaceAnalyzer =====
Method: split on whitespace
XY&Z
mail
is
-
xyz@sohu.com

===== SimpleAnalyzer =====
Method: split on whitespace and all non-letter symbols
xy
z
mail
is
xyz
sohu
com

===== StopAnalyzer =====
Method: split on whitespace and symbols, then remove stop words; stop words include is, are, in, on, the and other words that carry no real meaning
xy
z
mail
xyz
sohu
com

===== StandardAnalyzer =====
Method: mixed tokenization, including stop-word removal; Chinese is supported
xy&z
mail
xyz@sohu.com
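The comparison above is easy to reproduce with a small driver. Here is a minimal sketch, assuming the same Lucene 1.4-era API (Token, TokenStream.next()) that the code later in this article uses:

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class AnalyzerDemo {
  public static void main(String[] args) throws IOException {
    String input = "XY&Z mail is - xyz@sohu.com";
    Analyzer[] analyzers = {
        new WhitespaceAnalyzer(), new SimpleAnalyzer(),
        new StopAnalyzer(), new StandardAnalyzer()
    };
    for (int i = 0; i < analyzers.length; i++) {
      System.out.println("===== " + analyzers[i].getClass().getName() + " =====");
      // Each analyzer builds its own TokenStream over the same input.
      TokenStream ts = analyzers[i].tokenStream("dummy", new StringReader(input));
      Token token;
      while ((token = ts.next()) != null) {
        System.out.println(token.termText());
      }
    }
  }
}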
ChineseAnalyzer: comes from the Lucene sandbox. Its behavior is similar to StandardAnalyzer (single-character splitting for Chinese); its drawback is that it cannot tokenize mixed Chinese and English text.

CJKAnalyzer: written by chedong. Its English handling is the same as StandardAnalyzer's, but for Chinese it uses bigram (two-character) splitting, and it cannot filter out punctuation; a sketch contrasting it with ChineseAnalyzer follows below.

TjuChineseAnalyzer: our own custom analyzer, and by far the most capable of the three. For Chinese segmentation it calls the Java interface of ICTCLAS, so its Chinese performance matches ICTCLAS itself. For English it takes the approach of Lucene's StopAnalyzer: it removes stop words, ignores case, and filters out punctuation of all kinds.
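To make the single-character versus bigram difference concrete, here is a small sketch that runs the same Chinese string through both analyzers; it assumes the sandbox packages (org.apache.lucene.analysis.cn and org.apache.lucene.analysis.cjk) are on the classpath:

import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.cjk.CJKAnalyzer;
import org.apache.lucene.analysis.cn.ChineseAnalyzer;

public class CjkVsChineseDemo {
  public static void main(String[] args) throws Exception {
    String input = "我爱天津大学";
    // ChineseAnalyzer emits one token per character: 我 爱 天 津 大 学
    // CJKAnalyzer emits overlapping bigrams: 我爱 爱天 天津 津大 大学
    Analyzer[] analyzers = { new ChineseAnalyzer(), new CJKAnalyzer() };
    for (int i = 0; i < analyzers.length; i++) {
      System.out.print(analyzers[i].getClass().getName() + ": ");
      TokenStream ts = analyzers[i].tokenStream("dummy", new StringReader(input));
      Token token;
      while ((token = ts.next()) != null) {
        System.out.print(token.termText() + " ");
      }
      System.out.println();
    }
  }
}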
Now that each analyzer's features have been compared, it is time to learn to write one. How do we DIY our own analyzer?

How to DIY an Analyzer

Let's write an Analyzer with the following requirements:
(1) It handles both Chinese and English: Chinese is split into single characters, English is split on whitespace.
(2) The English part is lowercased.
(3) It filters stop words, with a user-supplied StopWords list; if none is supplied, the system falls back on a default list.
(4) It applies Porter stemming to the English part.

The code is as follows:

import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Set;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.PorterStemFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public final class DiyAnalyzer extends Analyzer {

  private Set stopWords;

  // Default stop-word list: common English stop words plus a few Chinese ones.
  public static final String[] CHINESE_ENGLISH_STOP_WORDS = {
      "a", "an", "and", "are", "as", "at", "be", "but", "by",
      "for", "if", "in", "into", "is", "it",
      "no", "not", "of", "on", "or", "s", "such",
      "t", "that", "the", "their", "then", "there", "these",
      "they", "this", "to", "was", "will", "with",
      "我", "我们"
  };

  /** Builds an analyzer with the default stop-word list. */
  public DiyAnalyzer() {
    this.stopWords = StopFilter.makeStopSet(CHINESE_ENGLISH_STOP_WORDS);
  }

  /** Builds an analyzer with a user-supplied stop-word list. */
  public DiyAnalyzer(String[] stopWordList) {
    this.stopWords = StopFilter.makeStopSet(stopWordList);
  }

  public TokenStream tokenStream(String fieldName, Reader reader) {
    // StandardTokenizer already splits Chinese into single characters
    // and English on whitespace and punctuation.
    TokenStream result = new StandardTokenizer(reader);
    result = new LowerCaseFilter(result);       // requirement (2)
    result = new StopFilter(result, stopWords); // requirement (3)
    result = new PorterStemFilter(result);      // requirement (4)
    return result;
  }

  public static void main(String[] args) {
    // It seems StandardAnalyzer cannot recognize the English
    // sentence-ending punctuation in this string.
    String string = "我爱中国,我爱天津大学!I love China!Tianjin is a City";
    Analyzer analyzer = new DiyAnalyzer();
    TokenStream ts = analyzer.tokenStream("dummy", new StringReader(string));
    Token token;
    try {
      while ((token = ts.next()) != null) {
        System.out.println(token.toString());
      }
    } catch (IOException ioe) {
      ioe.printStackTrace();
    }
  }
}

Running it produces the following result:

Token's (termText,startOffset,endOffset,type,positionIncrement) is:(爱,1,2,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(中,2,3,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(国,3,4,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(爱,6,7,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(天,7,8,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(津,8,9,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(大,9,10,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(学,10,11,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(i,12,13,<ALPHANUM>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(love,14,18,<ALPHANUM>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(china,19,24,<ALPHANUM>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(tianjin,25,32,<ALPHANUM>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(citi,39,43,<ALPHANUM>,1)

That completes this simple but capable analyzer. Next, let's try to write an even more powerful one.

How to DIY a more powerful Analyzer

Suppose you have a dictionary and have written a segmentation method based on forward or backward maximum matching, and you want to use it in Lucene. It is simple: you only need to wrap it as a Lucene TokenStream (a sketch of such a wrapper follows the TjuChineseTokenizer below). Here I will demonstrate by calling the ICTCLAS interface written by the Chinese Academy of Sciences. You can get a free version of this interface from their website; if you have the money, you can of course buy the full version.

Conveniently, after ICTCLAS segments a paragraph, its Java output separates the words with two spaces. Too easy: we simply extend Lucene's WhitespaceTokenizer. So TjuChineseTokenizer looks like this:

public class TjuChineseTokenizer extends WhitespaceTokenizer {
  public TjuChineseTokenizer(Reader readerInput) {
    super(readerInput);
  }
}
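Before moving on to the ICTCLAS-based analyzer, here is the promised sketch of wrapping a dictionary-based segmenter directly, for the case where your segmenter returns a word list instead of space-separated text. FmmSegmenter and its segment(String) method are hypothetical stand-ins for your own forward-maximum-matching code, not a real library API; the Tokenizer subclassing follows the Lucene 1.4-era conventions used throughout this article.

import java.io.IOException;
import java.io.Reader;
import java.util.Iterator;
import java.util.List;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.Tokenizer;

/** Hypothetical interface: put your dictionary-based FMM code behind it. */
interface FmmSegmenter {
  List segment(String text); // the words of the text, in order
}

/** Wraps a custom segmenter as a Lucene Tokenizer. */
public class FmmTokenizer extends Tokenizer {
  private Iterator words;  // the segmented words, in order
  private int offset = 0;  // running character offset for token positions

  public FmmTokenizer(Reader reader, FmmSegmenter segmenter) throws IOException {
    super(reader);
    // Read the whole input and segment it up front (fine for short fields).
    StringBuffer sb = new StringBuffer();
    char[] buf = new char[1024];
    int n;
    while ((n = reader.read(buf)) != -1) {
      sb.append(buf, 0, n);
    }
    words = segmenter.segment(sb.toString()).iterator();
  }

  public Token next() throws IOException {
    if (!words.hasNext()) {
      return null; // end of stream
    }
    String word = (String) words.next();
    // Offsets assume the segmenter keeps every input character.
    Token token = new Token(word, offset, offset + word.length());
    offset += word.length();
    return token;
  }
}

An Analyzer built on top of this tokenizer would chain LowerCaseFilter, StopFilter and so on in its tokenStream method, exactly as DiyAnalyzer does above.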
And TjuChineseAnalyzer looks like this:

public final class TjuChineseAnalyzer extends Analyzer {

  private Set stopWords;

  /** Builds an analyzer which removes words in the default stop-word list. */
  public TjuChineseAnalyzer() {
    stopWords = StopFilter.makeStopSet(StopWords.SMART_CHINESE_ENGLISH_STOP_WORDS);
  }

  /** Builds an analyzer which removes words in the provided array. */
  public TjuChineseAnalyzer(String[] stopWords) {
    this.stopWords = StopFilter.makeStopSet(stopWords);
  }

  public TokenStream tokenStream(String fieldName, Reader reader) {
    try {
      ICTCLAS splitWord = new ICTCLAS();
      String inputString = FileIO.readerToString(reader);
      // ICTCLAS inserts spaces between the segmented words.
      String resultString = splitWord.paragraphProcess(inputString);
      System.out.println(resultString);
      TokenStream result = new TjuChineseTokenizer(new StringReader(resultString));
      result = new LowerCaseFilter(result);
      // Filter against the stop-word set.
      result = new StopFilter(result, stopWords);
      // Apply Porter stemming to the English part.
      result = new PorterStemFilter(result);
      return result;
    } catch (IOException e) {
      System.out.println("reader conversion failed");
      return null;
    }
  }

  public static void main(String[] args) {
    String string = "我爱中国人民";
    Analyzer analyzer = new TjuChineseAnalyzer();
    TokenStream ts = analyzer.tokenStream("dummy", new StringReader(string));
    Token token;
    System.out.println("Tokens:");
    try {
      int n = 0;
      while ((token = ts.next()) != null) {
        System.out.println((n++) + "->" + token.toString());
      }
    } catch (IOException ioe) {
      ioe.printStackTrace();
    }
  }
}

(StopWords, ICTCLAS and FileIO are our own helper classes: StopWords.SMART_CHINESE_ENGLISH_STOP_WORDS is the default stop-word list, ICTCLAS wraps the ICTCLAS Java interface, and FileIO.readerToString drains a Reader into a String.)

Take a look at this program's output:

0->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(爱,3,4,word,1)
1->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(中国,6,8,word,1)
2->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(人民,10,12,word,1)
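Once written, a custom analyzer plugs into indexing and searching exactly like the built-in ones. Here is a minimal sketch, assuming the Lucene 1.4-era IndexWriter/QueryParser APIs; the index directory name "index" is just an example:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class TjuChineseAnalyzerUsage {
  public static void main(String[] args) throws Exception {
    Analyzer analyzer = new TjuChineseAnalyzer();

    // Index one document with the custom analyzer.
    IndexWriter writer = new IndexWriter("index", analyzer, true);
    Document doc = new Document();
    doc.add(Field.Text("content", "我爱中国人民"));
    writer.addDocument(doc);
    writer.close();

    // Search with the same analyzer, so the query is segmented
    // the same way as the indexed text.
    IndexSearcher searcher = new IndexSearcher("index");
    Query query = QueryParser.parse("中国", "content", analyzer);
    Hits hits = searcher.search(query);
    System.out.println("hits: " + hits.length());
    searcher.close();
  }
}

Using the same analyzer at index time and at query time matters: otherwise the query string would be tokenized differently from the indexed text and nothing would match.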
OK, after this walkthrough you should have a fairly good grasp of Lucene's analysis package. If you want to understand it even better, read the source code carefully; heh, the source explains everything!