Lucene and Search Engine Technology
TjuAILab windshow 2005.11.11
The Analysis Package
Algorithms and data structures:
The Analysis package is fairly simple, so we will not dwell on it.
Algorithms: mechanical segmentation with 1-grams and 2-grams, plus HMM if the ICTCLAS interface is used
Data structures: parts of the source use Set, Hashtable and HashMap
Understanding Token
The analysis package in Lucene is responsible for tokenizing the text that will be indexed, and Token is one of the most important concepts in Lucene.
Let's look at its source:
public final class Token {
  String termText;                   // the text of the term
  int startOffset;                   // start offset in the source text
  int endOffset;                     // end offset in the source text
  String type = "word";              // lexical type
  private int positionIncrement = 1;

  // Constructors and accessors (method bodies omitted here):
  public Token(String text, int start, int end)
  public Token(String text, int start, int end, String typ)
  public void setPositionIncrement(int positionIncrement)
  public int getPositionIncrement() { return positionIncrement; }
  public final String termText() { return termText; }
  public final int startOffset() { return startOffset; }
  public void setStartOffset(int givenStartOffset)
  public final int endOffset() { return endOffset; }
  public void setEndOffset(int givenEndOffset)
  public final String type() { return type; }
  public String toString()
}
Let's write a short program to see Token in action:
TestToken.java
package org.apache.lucene.analysis.test;
import org.apache.lucene.analysis.*;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import java.io.*;
public class TestToken
{
  public static void main(String[] args)
  {
    String string = "我愛天大,但我更愛中國";
    Analyzer analyzer = new StandardAnalyzer();
    //Analyzer analyzer = new StopAnalyzer();
    //Analyzer analyzer = new TjuChineseAnalyzer();
    TokenStream ts = analyzer.tokenStream("dummy", new StringReader(string));
    Token token;
    try
    {
      int n = 0;
      while ((token = ts.next()) != null)
      {
        System.out.println((n++) + "->" + token.toString());
      }
    }
    catch (IOException ioe)
    {
      ioe.printStackTrace();
    }
  }
}
With StandardAnalyzer active, the output is as follows:
0->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(我,0,1,<CJK>,1)
1->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(愛,1,2,<CJK>,1)
2->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(天,2,3,<CJK>,1)
3->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(大,3,4,<CJK>,1)
4->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(但,5,6,<CJK>,1)
5->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(我,6,7,<CJK>,1)
6->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(更,7,8,<CJK>,1)
7->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(愛,8,9,<CJK>,1)
8->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(中,9,10,<CJK>,1)
9->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(國,10,11,<CJK>,1)
Note: the "," was filtered out by StandardAnalyzer, which is why token 4 (但) has a startOffset that jumps straight to 5.
If we switch to StopAnalyzer() instead:
0->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(我愛天大,0,4,word,1)
1->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(但我更愛中國,5,11,word,1)
And with TjuChineseAnalyzer (written by me; how to write it is covered later):
0->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(愛,3,4,word,1)
1->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(天大,6,8,word,1)
2->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(更,19,20,word,1)
3->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(愛,22,23,word,1)
4->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(中國,25,27,word,1)
Now that Token is clear, let's look at the other core classes.
A TokenStream is an iterator used to step through Tokens.
Its source code:
public abstract class TokenStream {
  public abstract Token next() throws IOException;
  public void close() throws IOException {}
}
A Tokenizer is-a TokenStream (it extends TokenStream) whose input is a Reader.
Its source code:
public abstract class Tokenizer extends TokenStream {
  protected Reader input;

  protected Tokenizer() {}

  protected Tokenizer(Reader input) {
    this.input = input;
  }

  public void close() throws IOException {
    input.close();
  }
}
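To make the Tokenizer contract concrete, here is a minimal sketch of a custom Tokenizer (it is not part of Lucene; the class name and the comma-splitting rule are invented for illustration). It reads characters from the Reader and turns everything between commas into a Token, filling in the start and end offsets by hand:

import java.io.IOException;
import java.io.Reader;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.Tokenizer;

public class CommaTokenizer extends Tokenizer
{
  private int offset = 0;                       // current position in the source text

  public CommaTokenizer(Reader input)
  {
    super(input);
  }

  public Token next() throws IOException
  {
    StringBuffer sb = new StringBuffer();
    int start = offset;
    int c;
    while ((c = input.read()) != -1)
    {
      offset++;
      if (c == ',')
      {
        if (sb.length() > 0) break;             // token finished
        start = offset;                         // skip empty segments / leading commas
      }
      else
      {
        sb.append((char) c);
      }
    }
    if (sb.length() == 0) return null;          // end of stream
    return new Token(sb.toString(), start, start + sb.length());
  }
}

Feeding it new StringReader("ab,cd") would produce the tokens (ab,0,2) and (cd,3,5).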
A TokenFilter is-a TokenStream (it extends TokenStream); as its name suggests, it filters another TokenStream, for example removing stop words or lowercasing tokens.
Its source code:
public abstract class TokenFilter extends TokenStream {
  protected TokenStream input;

  protected TokenFilter() {}

  protected TokenFilter(TokenStream input) {
    this.input = input;
  }

  public void close() throws IOException {
    input.close();
  }
}
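As a concrete illustration of this wrapping pattern, here is a minimal sketch (again not part of Lucene; the class name and the length rule are invented for the example) of a TokenFilter that drops every token shorter than a given length:

import java.io.IOException;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

public class MinLengthFilter extends TokenFilter
{
  private final int minLength;

  public MinLengthFilter(TokenStream input, int minLength)
  {
    super(input);                     // the wrapped stream ends up in the protected field "input"
    this.minLength = minLength;
  }

  public Token next() throws IOException
  {
    Token token;
    // Pull tokens from the wrapped stream and skip the ones that are too short.
    while ((token = input.next()) != null)
    {
      if (token.termText().length() >= minLength)
      {
        return token;
      }
    }
    return null;                      // end of stream
  }
}

Used as new MinLengthFilter(new WhitespaceTokenizer(reader), 3), it would let "love" and "China" through but drop "I" and "is".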
An Analyzer is essentially a factory for TokenStreams.
Its source code:
public abstract class Analyzer {
  public TokenStream tokenStream(String fieldName, Reader reader) {
    return tokenStream(reader);
  }

  public TokenStream tokenStream(Reader reader) {
    return tokenStream(null, reader);
  }
}
Good. Now let's run through what each class file under Lucene's analysis package does, in dictionary order.
Source files in the analysis package
Analyzer.java: covered above.
CharTokenizer.java: a simple abstract base class for character-based tokenizers.
LetterTokenizer.java: treats the run of characters between two non-letter characters as a token (for example, English words are separated by whitespace, so the text between two spaces becomes one token). Note: this works very well for most European languages, but very poorly for Asian languages that do not separate words with whitespace, such as Chinese, Japanese and Korean.
LowerCaseFilter.java: is-a TokenFilter; lowercases letters.
LowerCaseTokenizer: is-a Tokenizer; functionally equivalent to LetterTokenizer + LowerCaseFilter.
PerFieldAnalyzerWrapper: an Analyzer (it extends Analyzer) that comes in handy when different fields need different analyzers. The addAnalyzer method registers a non-default analyzer for a particular field. Rarely used; see the short sketch after this list.
PorterStemFilter.java: applies stemming to every token in the stream.
PorterStemmer.java: the well-known Porter stemming algorithm.
SimpleAnalyzer.java: wraps a LowerCaseTokenizer.
StopAnalyzer.java: adds stop-word removal.
StopFilter.java: a TokenFilter that removes stop words from a token stream.
Token.java: covered above.
TokenFilter.java: covered above.
Tokenizer.java: covered above.
TokenStream.java: covered above.
WhitespaceAnalyzer.java: wraps a WhitespaceTokenizer.
WhitespaceTokenizer.java: splits tokens on whitespace only.
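Since PerFieldAnalyzerWrapper is easiest to understand from a usage example, here is a minimal sketch; the field name "title" and the sample text are placeholders chosen for illustration:

import java.io.StringReader;
import org.apache.lucene.analysis.*;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class PerFieldDemo
{
  public static void main(String[] args) throws Exception
  {
    // StandardAnalyzer is the default; the "title" field is given SimpleAnalyzer instead.
    PerFieldAnalyzerWrapper wrapper = new PerFieldAnalyzerWrapper(new StandardAnalyzer());
    wrapper.addAnalyzer("title", new SimpleAnalyzer());

    // Fields registered with addAnalyzer use their own analyzer; every other field uses the default.
    TokenStream ts = wrapper.tokenStream("title", new StringReader("Lucene IN ACTION"));
    Token token;
    while ((token = ts.next()) != null)
    {
      System.out.println(token.termText());   // lucene, in, action (SimpleAnalyzer lowercases)
    }
  }
}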
StandardAnalyzer, in the standard sub-package under Lucene's analysis package, is quite powerful and supports CJK tokenization, so it deserves a brief mention.
The files in that package are generated from StandardTokenizer.jj by JavaCC. Because the code is machine-generated it can be hard to read; if you want to understand it, study the StandardTokenizer.jj grammar file itself and things become much clearer.
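You do not need to read the generated tokenizer to use it, though. The filter chain that StandardAnalyzer builds in its tokenStream method looks roughly like this (a sketch following the Lucene 1.4 sources; stopTable is the analyzer's internal stop-word set):

public TokenStream tokenStream(String fieldName, Reader reader)
{
  TokenStream result = new StandardTokenizer(reader);
  result = new StandardFilter(result);          // strips possessive 's and dots in acronyms
  result = new LowerCaseFilter(result);
  result = new StopFilter(result, stopTable);   // removes the built-in English stop words
  return result;
}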
Overview of the commonly used Lucene Analyzers
WhitespaceAnalyzer: only splits on whitespace; does not lowercase; no Chinese support.
SimpleAnalyzer: more capable than WhitespaceAnalyzer; filters out everything that is not a letter and lowercases all characters; no Chinese support.
StopAnalyzer: goes beyond SimpleAnalyzer by adding stop-word removal on top of it; no Chinese support.
StandardAnalyzer: English handling comparable to StopAnalyzer; supports Chinese using single-character (unigram) segmentation.
ChineseAnalyzer: from the Lucene sandbox; behaves much like StandardAnalyzer; its drawback is that it cannot handle mixed Chinese/English text.
CJKAnalyzer: written by chedong; its English handling is the same as StandardAnalyzer, but for Chinese it uses bigram (2-gram) segmentation and does not filter out punctuation.
TjuChineseAnalyzer: written by me, and the most capable of the lot. For Chinese segmentation it calls the Java interface of ICTCLAS, so its Chinese performance matches ICTCLAS. On the English side it follows Lucene's StopAnalyzer approach: it removes stop words, ignores case, and filters out punctuation of all kinds.
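To see these differences concretely, a small driver like the following can run the same text through several analyzers side by side (a sketch that sticks to the analyzers shipped with the Lucene core; the class name and the mixed Chinese/English sample sentence are arbitrary):

import java.io.StringReader;
import org.apache.lucene.analysis.*;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class CompareAnalyzers
{
  public static void main(String[] args) throws Exception
  {
    String text = "我愛中國 I love China";    // arbitrary mixed Chinese/English sample
    Analyzer[] analyzers = {
      new WhitespaceAnalyzer(),   // keeps 我愛中國 as one token, keeps case
      new SimpleAnalyzer(),       // letters only, lowercased
      new StopAnalyzer(),         // SimpleAnalyzer behaviour + English stop-word removal
      new StandardAnalyzer()      // splits the Chinese into single characters
    };
    for (int i = 0; i < analyzers.length; i++)
    {
      System.out.println(analyzers[i].getClass().getName() + ":");
      TokenStream ts = analyzers[i].tokenStream("dummy", new StringReader(text));
      Token token;
      while ((token = ts.next()) != null)
      {
        System.out.print("[" + token.termText() + "] ");
      }
      System.out.println();
    }
  }
}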
The individual Analyzers have now been introduced, so it is time to learn to write one. How do you DIY your own analyzer?
How to DIY an Analyzer
Let's write an Analyzer with the following features:
(1) It handles both Chinese and English: Chinese is split into single characters, English into words separated by whitespace.
(2) The English part is lowercased.
(3) It filters stop words against a user-supplied stop-word list; if none is supplied, a default list is used.
(4) The English part is stemmed with the Porter stemming algorithm.
The code, with its imports, is as follows:
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Set;
import org.apache.lucene.analysis.*;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public final class DiyAnalyzer extends Analyzer
{
private Set stopWords;
public static final String[] CHINESE_ENGLISH_STOP_WORDS =
{
"a", "an", "and", "are", "as", "at", "be", "but", "by",
"for", "if", "in", "into", "is", "it",
"no", "not", "of", "on", "or", "s", "such",
"t", "that", "the", "their", "then", "there", "these",
"they", "this", "to", "was", "will", "with",
"我", "我們"
};
public DiyAnalyzer()
{
this.stopWords=StopFilter.makeStopSet(CHINESE_ENGLISH_STOP_WORDS);
}
public DiyAnalyzer(String[] stopWordList)
{
this.stopWords=StopFilter.makeStopSet(stopWordList);
}
public TokenStream tokenStream(String fieldName, Reader reader)
{
TokenStream result = new StandardTokenizer(reader);
result = new LowerCaseFilter(result);
result = new StopFilter(result, stopWords);
result = new PorterStemFilter(result);
return result;
}
public static void main(String[] args)
{
//Note: StandardTokenizer seems to drop English sentence punctuation such as '.' and ','
String string = "我愛中國,我愛天津大學(xué)!I love China!Tianjin is a City";
Analyzer analyzer = new DiyAnalyzer();
TokenStream ts = analyzer.tokenStream("dummy", new StringReader(string));
Token token;
try
{
while ( (token = ts.next()) != null)
{
System.out.println(token.toString());
}
}
catch (IOException ioe)
{
ioe.printStackTrace();
}
}
}
The output is as follows:
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(愛,1,2,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(中,2,3,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(國,3,4,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(愛,6,7,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(天,7,8,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(津,8,9,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(大,9,10,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(學(xué),10,11,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(i,12,13,<ALPHANUM>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(love,14,18,<ALPHANUM>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(china,19,24,<ALPHANUM>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(tianjin,25,32,<ALPHANUM>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(citi,39,43,<ALPHANUM>,1)
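With the analyzer working, it helps to see where it plugs in. The following sketch indexes one document and runs one query with DiyAnalyzer (the index directory "index", the field name "contents" and the sample texts are arbitrary choices for the example); the key point is to use the same analyzer for indexing and for query parsing:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class DiyAnalyzerDemo
{
  public static void main(String[] args) throws Exception
  {
    Analyzer analyzer = new DiyAnalyzer();

    // Index a single document with the custom analyzer.
    IndexWriter writer = new IndexWriter("index", analyzer, true);
    Document doc = new Document();
    doc.add(Field.Text("contents", "我愛中國,我愛天津大學(xué)!I love China!"));
    writer.addDocument(doc);
    writer.close();

    // Search with the same analyzer so the query is tokenized the same way as the documents.
    IndexSearcher searcher = new IndexSearcher("index");
    Query query = QueryParser.parse("中國", "contents", analyzer);
    Hits hits = searcher.search(query);
    System.out.println("hits: " + hits.length());
    searcher.close();
  }
}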
That completes this simple but quite capable analyzer. Next, let's try to write an even more powerful one.
How to DIY a more powerful Analyzer
Suppose you have a dictionary and have already written a segmentation routine based on forward or reverse maximum matching, and you want to use it inside Lucene. That is easy: just wrap it as a Lucene TokenStream. Below I demonstrate this by calling the ICTCLAS interface from the Chinese Academy of Sciences; a free version of the interface can be downloaded from their website (or, if you have the money, you can buy the commercial one).
Conveniently, when ICTCLAS segments text, its Java output separates the words with two spaces, so we can simply extend Lucene's WhitespaceTokenizer.
So TjuChineseTokenizer looks like this:
public class TjuChineseTokenizer extends WhitespaceTokenizer
{
public TjuChineseTokenizer(Reader readerInput)
{
super(readerInput);
}
}
And TjuChineseAnalyzer looks like this:
public final class TjuChineseAnalyzer
extends Analyzer
{
private Set stopWords;
/** An array containing some common English words that are not usually useful
for searching. */
/*
public static final String[] CHINESE_ENGLISH_STOP_WORDS =
{
"a", "an", "and", "are", "as", "at", "be", "but", "by",
"for", "if", "in", "into", "is", "it",
"no", "not", "of", "on", "or", "s", "such",
"t", "that", "the", "their", "then", "there", "these",
"they", "this", "to", "was", "will", "with",
"我", "我們"
};
*/
/** Builds an analyzer which removes the words in StopWords.SMART_CHINESE_ENGLISH_STOP_WORDS. */
public TjuChineseAnalyzer()
{
stopWords = StopFilter.makeStopSet(StopWords.SMART_CHINESE_ENGLISH_STOP_WORDS);
}
/** Builds an analyzer which removes words in the provided array. */
//supply your own stop-word list
public TjuChineseAnalyzer(String[] stopWords)
{
this.stopWords = StopFilter.makeStopSet(stopWords);
}
/** Segments with ICTCLAS + TjuChineseTokenizer, then filters with LowerCaseFilter, StopFilter and PorterStemFilter. */
public TokenStream tokenStream(String fieldName, Reader reader)
{
try
{
ICTCLAS splitWord = new ICTCLAS();
String inputString = FileIO.readerToString(reader);
//ICTCLAS inserts spaces between the segmented words
String resultString = splitWord.paragraphProcess(inputString);
System.out.println(resultString);
TokenStream result = new TjuChineseTokenizer(new StringReader(resultString));
result = new LowerCaseFilter(result);
//filter with the stop-word set
result = new StopFilter(result, stopWords);
//filter with the Porter stemming algorithm
result = new PorterStemFilter(result);
return result;
}
catch (IOException e)
{
System.out.println("轉(zhuǎn)換出錯");
return null;
}
}
public static void main(String[] args)
{
String string = "我愛中國人民";
Analyzer analyzer = new TjuChineseAnalyzer();
TokenStream ts = analyzer.tokenStream("dummy", new StringReader(string));
Token token;
System.out.println("Tokens:");
try
{
int n=0;
while ( (token = ts.next()) != null)
{
System.out.println((n++)+"->"+token.toString());
}
}
catch (IOException ioe)
{
ioe.printStackTrace();
}
}
}
The output of this program looks like this:
0->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(愛,3,4,word,1)
1->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(中國,6,8,word,1)
2->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(人民,10,12,word,1)
OK. After this walkthrough you should have a fairly good understanding of Lucene's analysis package. If you want to understand it even better, read the source code carefully.
The source explains everything!