
On Lucene Tokenization (3)

2012-10-16 

With that, our simple but surprisingly capable tokenizer is finished. Next, let's try writing an even more powerful one.

How to DIY a more powerful Analyzer

Suppose you have a dictionary and have written a segmentation method based on forward or reverse maximum matching, and now you want to use it in Lucene. It's simple: just wrap it as a Lucene TokenStream. Below I'll demonstrate by calling the ICTCLAS interface from the Chinese Academy of Sciences. You can get a free version of the interface from their website (the free one, since we're broke; if you have money, you can buy the full version, haha).
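As an aside, here is what such a dictionary-based method might look like: a minimal sketch of forward maximum matching. The FmmSegmenter class, its dictionary, and maxLen are my own hypothetical names, not code from this series; join its output with spaces and it can be wrapped exactly the same way as the ICTCLAS output below.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical illustration of forward maximum matching, not part of this article's code.
public class FmmSegmenter
{
  private final Set<String> dict;  // the word dictionary
  private final int maxLen;        // length of the longest dictionary word

  public FmmSegmenter(Set<String> dict, int maxLen)
  {
    this.dict = dict;
    this.maxLen = maxLen;
  }

  /** Greedily match the longest dictionary word starting at each position. */
  public List<String> segment(String text)
  {
    List<String> words = new ArrayList<String>();
    int i = 0;
    while (i < text.length())
    {
      int end = Math.min(i + maxLen, text.length());
      String word = null;
      // Try the longest candidate first, then shrink the window.
      for (int j = end; j > i; j--)
      {
        String cand = text.substring(i, j);
        if (dict.contains(cand))
        {
          word = cand;
          break;
        }
      }
      if (word == null)
      {
        word = text.substring(i, i + 1); // unknown character: emit it alone
      }
      words.add(word);
      i += word.length();
    }
    return words;
  }
}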

Now, since ICTCLAS output in Java separates the words with two spaces, this is too easy: we can simply extend Lucene's WhitespaceTokenizer.

So TjuChineseTokenizer looks like this:

import java.io.Reader;

import org.apache.lucene.analysis.WhitespaceTokenizer;

/** ICTCLAS output is already space-delimited, so splitting on whitespace is enough. */
public class TjuChineseTokenizer extends WhitespaceTokenizer
{
  public TjuChineseTokenizer(Reader readerInput)
  {
    super(readerInput);
  }
}
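Before building the full Analyzer, a quick sanity check of the tokenizer on its own can help. This TokenizerCheck class is my own illustration, using the same old-style TokenStream.next()/Token API that the article's main method below uses:

import java.io.StringReader;

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;

public class TokenizerCheck
{
  public static void main(String[] args) throws Exception
  {
    // Pre-segmented, space-delimited text, as ICTCLAS would produce it.
    TokenStream ts = new TjuChineseTokenizer(new StringReader("我 爱 中国 人民"));
    Token token;
    while ((token = ts.next()) != null)
    {
      System.out.println(token.termText()); // one word per line: 我, 爱, 中国, 人民
    }
  }
}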

And TjuChineseAnalyzer looks like this:

import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Set;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.PorterStemFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;

// ICTCLAS, FileIO and StopWords are helper classes from this project,
// not part of Lucene itself.
public final class TjuChineseAnalyzer extends Analyzer
{
  private Set stopWords;

  /** An array containing some common English words that are not usually useful
      for searching. */
  /*
  public static final String[] CHINESE_ENGLISH_STOP_WORDS =
  {
    "a", "an", "and", "are", "as", "at", "be", "but", "by",
    "for", "if", "in", "into", "is", "it",
    "no", "not", "of", "on", "or", "s", "such",
    "t", "that", "the", "their", "then", "there", "these",
    "they", "this", "to", "was", "will", "with",
    "我", "我们"
  };
  */

  /** Builds an analyzer which removes words in SMART_CHINESE_ENGLISH_STOP_WORDS. */
  public TjuChineseAnalyzer()
  {
    stopWords = StopFilter.makeStopSet(StopWords.SMART_CHINESE_ENGLISH_STOP_WORDS);
  }

  /** Builds an analyzer which removes words in the provided array. */
  public TjuChineseAnalyzer(String[] stopWords)
  {
    this.stopWords = StopFilter.makeStopSet(stopWords);
  }

  /** Segments the input with ICTCLAS, then applies LowerCaseFilter,
      StopFilter and PorterStemFilter. */
  public TokenStream tokenStream(String fieldName, Reader reader)
  {
    try
    {
      ICTCLAS splitWord = new ICTCLAS();
      String inputString = FileIO.readerToString(reader);
      // ICTCLAS inserts spaces between the words it finds
      String resultString = splitWord.paragraphProcess(inputString);
      System.out.println(resultString);
      TokenStream result = new TjuChineseTokenizer(new StringReader(resultString));

      result = new LowerCaseFilter(result);
      // filter with the stop-word set
      result = new StopFilter(result, stopWords);
      // filter with the Porter stemming algorithm
      result = new PorterStemFilter(result);
      return result;
    }
    catch (IOException e)
    {
      System.out.println("Conversion error");
      return null;
    }
  }

  public static void main(String[] args)
  {
    String string = "我爱中国人民";
    Analyzer analyzer = new TjuChineseAnalyzer();
    TokenStream ts = analyzer.tokenStream("dummy", new StringReader(string));
    Token token;
    System.out.println("Tokens:");
    try
    {
      int n = 0;
      while ((token = ts.next()) != null)
      {
        System.out.println((n++) + "->" + token.toString());
      }
    }
    catch (IOException ioe)
    {
      ioe.printStackTrace();
    }
  }
}


The output of this program looks like this (note that 我 has been removed by the stop filter):

0->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(爱,3,4,word,1)

1->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(中国,6,8,word,1)

2->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(人民,10,12,word,1)
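Once the analyzer works, it plugs into indexing like any built-in analyzer. Here is a usage sketch of my own, assuming the same Lucene 1.9/2.x-era API as the code above; the index path and field name are arbitrary:

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class IndexDemo
{
  public static void main(String[] args) throws Exception
  {
    // Create a new index in a local directory (path is arbitrary).
    IndexWriter writer = new IndexWriter("/tmp/tju_index", new TjuChineseAnalyzer(), true);
    Document doc = new Document();
    // The field text is segmented by TjuChineseAnalyzer at index time.
    doc.add(new Field("content", "我爱中国人民", Field.Store.YES, Field.Index.TOKENIZED));
    writer.addDocument(doc);
    writer.close();
  }
}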


OK, after this walkthrough you should have a fairly good grasp of Lucene's analysis package. Of course, if you want to understand it more deeply, nothing beats reading the source carefully; the source code explains everything!
