Lucene ships with analyzer packages for a great many languages, covering most countries and regions. I (散仙) have recently been working on multilingual analysis, mainly for Spanish, Portuguese, German, French and Italian. These languages are all quite similar to English in that words are delimited by spaces.
First, what do lemmatization and stemming mean for search? Before that, the two concepts:
Lemmatization reduces a word from any inflected form to its base form (which carries the full meaning), while stemming extracts the stem or root of a word (which does not necessarily carry the full meaning). Both are important word-form normalization techniques that effectively merge variant forms of a word; they are related but distinct.
For a detailed comparison, see the article linked in the original post.
In e-commerce search, stemming and singular/plural normalization matter a great deal (mainly for nouns), because they directly affect precision and recall. What happens if the analyzer does nothing with these forms? Consider the following example.
Sentence: i have two cats
If the analyzer does nothing at all:
A search for cat gets no hits; only a search for cats matches the document. Yet cat and cats are the same thing in different forms. Without this processing, both precision and recall drop and the search experience suffers, which is why the stemming step is crucial in many analysis scenarios.
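To make this concrete outside of Lucene, here is a toy sketch (the class name PluralDemo and its single-rule stemmer are invented for illustration; real stemmers such as the Porter algorithm handle far more cases). If both the indexed term and the query term are reduced to the same form, a query for cat matches a document containing cats:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of why stemming helps recall: "cat" and "cats" are
// indexed and searched as the same term. This one-rule stripper is a
// stand-in for a real stemmer, not Lucene's implementation.
public class PluralDemo {
    public static String stem(String term) {
        // naive rule: strip a trailing "s" from words of 4+ letters
        // (but not "ss", so "glass" stays intact)
        if (term.length() >= 4 && term.endsWith("s") && !term.endsWith("ss")) {
            return term.substring(0, term.length() - 1);
        }
        return term;
    }

    public static void main(String[] args) {
        Map<String, Integer> index = new HashMap<>();
        for (String token : "i have two cats".split(" ")) {
            index.merge(stem(token), 1, Integer::sum);
        }
        // the query term "cat" now hits the document that contained "cats"
        System.out.println(index.containsKey(stem("cat"))); // true
    }
}
```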
In this post I will walk through the source to see how the German analyzer does its stemming. First, constructing the analyzer:
List<String> list = new ArrayList<String>();
list.add("player"); // words in this set are exempt from stemming and lemmatization
CharArraySet ar = new CharArraySet(Version.LUCENE_43, list, true);
// the analyzer's second argument is the stopword set; the third is the
// exclusion set of words that should not be stemmed or singularized
GermanAnalyzer sa = new GermanAnalyzer(Version.LUCENE_43, null, ar);
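The effect of that exclusion set can be sketched in plain Java. This is an illustrative sketch, not Lucene's implementation: in the real chain, SetKeywordMarkerFilter marks excluded terms with a KeywordAttribute and the downstream stem filter skips marked tokens. The names below (ExclusionDemo, stemUnlessExcluded) are invented:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of keyword exclusion: terms in the exclusion set bypass stemming,
// everything else goes through a (here deliberately trivial) stemmer.
public class ExclusionDemo {
    static final Set<String> EXCLUDED = new HashSet<>(Arrays.asList("player"));

    public static String stemUnlessExcluded(String term) {
        if (EXCLUDED.contains(term)) {
            return term; // marked as a keyword: left untouched
        }
        // stand-in for the real stemmer: strip a trailing "er"
        return term.endsWith("er") ? term.substring(0, term.length() - 2) : term;
    }

    public static void main(String[] args) {
        System.out.println(stemUnlessExcluded("player")); // player (protected)
        System.out.println(stemUnlessExcluded("singer")); // sing   (stemmed)
    }
}
```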
Next, let's look at exactly which filtering stages the German analyzer chains together:
protected TokenStreamComponents createComponents(String fieldName,
    Reader reader) {
  // standard tokenization
  final Tokenizer source = new StandardTokenizer(matchVersion, reader);
  TokenStream result = new StandardFilter(matchVersion, source);
  // lowercase filter
  result = new LowerCaseFilter(matchVersion, result);
  // stopword filter
  result = new StopFilter(matchVersion, result, stopwords);
  // keyword-exclusion filter
  result = new SetKeywordMarkerFilter(result, exclusionSet);
  if (matchVersion.onOrAfter(Version.LUCENE_36)) {
    // from Lucene 3.6 on, the following filters are used:
    // normalization, mapping special German characters to plain Latin letters
    result = new GermanNormalizationFilter(result);
    // light stemming / word-form reduction
    result = new GermanLightStemFilter(result);
  } else if (matchVersion.onOrAfter(Version.LUCENE_31)) {
    // Lucene 3.1 through 3.6 use a SnowballFilter
    result = new SnowballFilter(result, new German2Stemmer());
  } else {
    // before Lucene 3.1, the legacy GermanStemFilter is used for compatibility
    result = new GermanStemFilter(result);
  }
  return new TokenStreamComponents(source, result);
}
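Conceptually, createComponents just composes a pipeline of token transformations, and the order matters: for instance, lowercasing must run before stopword matching, and normalization before stemming. A plain-Java sketch of that idea, with simplified stand-ins for the real filters (all names here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.UnaryOperator;

// Sketch of an analyzer chain as function composition. The stages below are
// simplified stand-ins for LowerCaseFilter and StopFilter, not Lucene code.
public class ChainDemo {
    // apply each stage in order, like filters wrapping a TokenStream
    public static List<String> analyze(List<String> tokens,
                                       List<UnaryOperator<List<String>>> stages) {
        for (UnaryOperator<List<String>> stage : stages) {
            tokens = stage.apply(tokens);
        }
        return tokens;
    }

    public static List<String> demo() {
        List<String> stop = Arrays.asList("der", "die", "das");
        UnaryOperator<List<String>> lower = ts -> {
            List<String> out = new ArrayList<>();
            for (String t : ts) out.add(t.toLowerCase());
            return out;
        };
        UnaryOperator<List<String>> stopFilter = ts -> {
            List<String> out = new ArrayList<>();
            for (String t : ts) if (!stop.contains(t)) out.add(t);
            return out;
        };
        // order matters: lowercasing first lets "Der" match the stop list
        return analyze(Arrays.asList("Der", "Hund", "bellt"),
                       Arrays.asList(lower, stopFilter));
    }

    public static void main(String[] args) {
        System.out.println(demo()); // [hund, bellt]
    }
}
```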
OK, the source shows that Lucene 4.x keeps the German analysis chain both backward and forward compatible. From here on we focus on how the current branch (Lucene 3.6 and later) transforms word forms, i.e. on these two filters:
result = new GermanNormalizationFilter(result);
result = new GermanLightStemFilter(result);
Here is what each class does.
GermanNormalizationFilter:

package org.apache.lucene.analysis.de;

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.StemmerUtil;

/**
 * Normalizes German characters according to the heuristics
 * of the <a href="http://snowball.tartarus.org/algorithms/german2/stemmer.html">
 * German2 snowball algorithm</a>.
 * It allows for the fact that ä, ö and ü are sometimes written as ae, oe and ue.
 *
 * <li> 'ß' is replaced by 'ss'
 * <li> 'ä', 'ö', 'ü' are replaced by 'a', 'o', 'u', respectively.
 * <li> 'ae' and 'oe' are replaced by 'a', and 'o', respectively.
 * <li> 'ue' is replaced by 'u', when not following a vowel or q.
 *
 * <p>
 * This is useful if you want this normalization without using
 * the German2 stemmer, or perhaps no stemming at all.
 * (As the Javadoc above makes clear, this filter maps special German
 * characters to their plain-Latin equivalents.)
 */
public final class GermanNormalizationFilter extends TokenFilter {
  // FSM with 3 states:
  private static final int N = 0; /* ordinary state */
  private static final int V = 1; /* stops 'u' from entering umlaut state */
  private static final int U = 2; /* umlaut state, allows e-deletion */

  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public GermanNormalizationFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (input.incrementToken()) {
      int state = N;
      char buffer[] = termAtt.buffer();
      int length = termAtt.length();
      for (int i = 0; i < length; i++) {
        final char c = buffer[i];
        switch(c) {
          case 'a':
          case 'o':
            state = U;
            break;
          case 'u':
            state = (state == N) ? U : V;
            break;
          case 'e':
            if (state == U)
              length = StemmerUtil.delete(buffer, i--, length);
            state = V;
            break;
          case 'i':
          case 'q':
          case 'y':
            state = V;
            break;
          case 'ä':
            buffer[i] = 'a';
            state = V;
            break;
          case 'ö':
            buffer[i] = 'o';
            state = V;
            break;
          case 'ü':
            buffer[i] = 'u';
            state = V;
            break;
          case 'ß':
            buffer[i++] = 's';
            buffer = termAtt.resizeBuffer(1+length);
            if (i < length)
              System.arraycopy(buffer, i, buffer, i+1, (length-i));
            buffer[i] = 's';
            length++;
            state = N;
            break;
          default:
            state = N;
        }
      }
      termAtt.setLength(length);
      return true;
    } else {
      return false;
    }
  }
}
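To see what the state machine does, we can replay the same loop on a plain String. This is a sketch that mirrors the filter's logic but is not the Lucene class itself (NormalizeDemo is an invented name):

```java
// Plain-Java replay of GermanNormalizationFilter's state machine on a String.
// N = ordinary state, U = just saw a vowel that may start an "ae"/"oe"/"ue"
// digraph (so a following 'e' is deleted), V = a following 'e' must be kept.
public class NormalizeDemo {
    public static String normalize(String s) {
        final int N = 0, V = 1, U = 2;
        StringBuilder out = new StringBuilder();
        int state = N;
        for (char c : s.toCharArray()) {
            switch (c) {
                case 'a': case 'o': out.append(c); state = U; break;
                case 'u': out.append(c); state = (state == N) ? U : V; break;
                case 'e':
                    if (state != U) out.append(c); // in state U, the 'e' is dropped
                    state = V; break;
                case 'i': case 'q': case 'y': out.append(c); state = V; break;
                case 'ä': out.append('a'); state = V; break;
                case 'ö': out.append('o'); state = V; break;
                case 'ü': out.append('u'); state = V; break;
                case 'ß': out.append("ss"); state = N; break;
                default: out.append(c); state = N;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(normalize("haeuser")); // hauser   (ae -> a)
        System.out.println(normalize("quelle"));  // quelle   (ue kept after q)
        System.out.println(normalize("fußball")); // fussball (ß -> ss)
    }
}
```

Note how the V state earns its keep: in "quelle" the 'q' forces state V, so the following "ue" is not collapsed to "u".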
GermanLightStemFilter:

package org.apache.lucene.analysis.de;

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.SetKeywordMarkerFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.KeywordAttribute;

/**
 * A {@link TokenFilter} that applies {@link GermanLightStemmer} to stem German
 * words.
 * <p>
 * To prevent terms from being stemmed use an instance of
 * {@link SetKeywordMarkerFilter} or a custom {@link TokenFilter} that sets
 * the {@link KeywordAttribute} before this {@link TokenStream}.
 * (This class only drives the stemming; the real work happens in
 * GermanLightStemmer, which we look at next.)
 */
public final class GermanLightStemFilter extends TokenFilter {
  private final GermanLightStemmer stemmer = new GermanLightStemmer();
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final KeywordAttribute keywordAttr = addAttribute(KeywordAttribute.class);

  public GermanLightStemFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (input.incrementToken()) {
      if (!keywordAttr.isKeyword()) {
        final int newlen = stemmer.stem(termAtt.buffer(), termAtt.length());
        termAtt.setLength(newlen);
      }
      return true;
    } else {
      return false;
    }
  }
}
Now let's see how GermanLightStemmer does the actual stemming. The source:
package org.apache.lucene.analysis.de;

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

/*
 * This algorithm is updated based on code located at:
 * http://members.unine.ch/jacques.savoy/clef/
 *
 * Full copyright for that code follows:
 */

/*
 * Copyright (c) 2005, Jacques Savoy
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * Redistributions of source code must retain the above copyright notice, this
 * list of conditions and the following disclaimer. Redistributions in binary
 * form must reproduce the above copyright notice, this list of conditions and
 * the following disclaimer in the documentation and/or other materials
 * provided with the distribution. Neither the name of the author nor the names
 * of its contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

/**
 * Light Stemmer for German.
 * <p>
 * This stemmer implements the "UniNE" algorithm in:
 * <i>Light Stemming Approaches for the French, Portuguese, German and Hungarian Languages</i>
 * Jacques Savoy
 */
public class GermanLightStemmer {

  // map accented characters to their base vowels
  public int stem(char s[], int len) {
    for (int i = 0; i < len; i++)
      switch(s[i]) {
        case 'ä':
        case 'à':
        case 'á':
        case 'â': s[i] = 'a'; break;
        case 'ö':
        case 'ò':
        case 'ó':
        case 'ô': s[i] = 'o'; break;
        case 'ï':
        case 'ì':
        case 'í':
        case 'î': s[i] = 'i'; break;
        case 'ü':
        case 'ù':
        case 'ú':
        case 'û': s[i] = 'u'; break;
      }

    len = step1(s, len);
    return step2(s, len);
  }


  private boolean stEnding(char ch) {
    switch(ch) {
      case 'b':
      case 'd':
      case 'f':
      case 'g':
      case 'h':
      case 'k':
      case 'l':
      case 'm':
      case 'n':
      case 't': return true;
      default: return false;
    }
  }

  // step 1: strip the suffixes ern / em,en,er,es / e / <consonant>+s
  private int step1(char s[], int len) {
    if (len > 5 && s[len-3] == 'e' && s[len-2] == 'r' && s[len-1] == 'n')
      return len - 3;

    if (len > 4 && s[len-2] == 'e')
      switch(s[len-1]) {
        case 'm':
        case 'n':
        case 'r':
        case 's': return len - 2;
      }

    if (len > 3 && s[len-1] == 'e')
      return len - 1;

    if (len > 3 && s[len-1] == 's' && stEnding(s[len-2]))
      return len - 1;

    return len;
  }

  // step 2: strip the suffixes est / er,en / <consonant>+st
  private int step2(char s[], int len) {
    if (len > 5 && s[len-3] == 'e' && s[len-2] == 's' && s[len-1] == 't')
      return len - 3;

    if (len > 4 && s[len-2] == 'e' && (s[len-1] == 'r' || s[len-1] == 'n'))
      return len - 2;

    if (len > 4 && s[len-2] == 's' && s[len-1] == 't' && stEnding(s[len-3]))
      return len - 2;

    return len;
  }
}
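To watch the two steps work, the stemmer's logic can be replayed on plain Strings. This is again a sketch mirroring the code above (the accent-mapping loop is assumed to have already run, and StemDemo is an invented name), not the Lucene class itself:

```java
// Plain-String replay of GermanLightStemmer's two suffix-stripping steps.
public class StemDemo {
    // same consonant set the Lucene code tests in stEnding()
    static boolean stEnding(char ch) {
        return "bdfghklmnt".indexOf(ch) >= 0;
    }

    static String step1(String s) {
        int len = s.length();
        if (len > 5 && s.endsWith("ern")) return s.substring(0, len - 3);
        if (len > 4 && (s.endsWith("em") || s.endsWith("en")
                || s.endsWith("er") || s.endsWith("es")))
            return s.substring(0, len - 2);
        if (len > 3 && s.endsWith("e")) return s.substring(0, len - 1);
        if (len > 3 && s.endsWith("s") && stEnding(s.charAt(len - 2)))
            return s.substring(0, len - 1);
        return s;
    }

    static String step2(String s) {
        int len = s.length();
        if (len > 5 && s.endsWith("est")) return s.substring(0, len - 3);
        if (len > 4 && (s.endsWith("er") || s.endsWith("en")))
            return s.substring(0, len - 2);
        if (len > 4 && s.endsWith("st") && stEnding(s.charAt(len - 3)))
            return s.substring(0, len - 2);
        return s;
    }

    public static String stem(String s) {
        return step2(step1(s));
    }

    public static void main(String[] args) {
        System.out.println(stem("kindern"));  // kind  ("ern" stripped in step 1)
        System.out.println(stem("hunden"));   // hund  ("en" stripped in step 1)
        System.out.println(stem("schonste")); // schon ("e" in step 1, "st" in step 2)
    }
}
```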
To summarize the analysis:
0. Some special German characters are first replaced by their plain-Latin equivalents.
1. Accented stem vowels are restored to a, o, i and u.
step1 (the rules are tried in order; the first match returns):
2. Words longer than 5 letters ending in ern: the last three letters are dropped.
3. Words longer than 4 letters ending in em, en, er or es: the last two letters are dropped.
4. Words longer than 3 letters ending in e: the e is dropped.
5. Words longer than 3 letters ending in bs, ds, fs, gs, hs, ks, ls, ms, ns or ts: the trailing s is dropped.
step2 (the rules are tried in order; the first match returns):
6. Words longer than 5 letters ending in est: the last three letters are dropped.
7. Words longer than 4 letters ending in er or en: the last two letters are dropped.
8. Words longer than 4 letters ending in bst, dst, fst, gst, hst, kst, lst, mst, nst or tst: the trailing st is dropped.
Finally, judging from material available online, the rules for the endings er, en, e and s mainly handle singular/plural normalization, while the remaining rules strip suffixes from non-noun word forms.