Proj1_Language_detector

Language_detector based on ML


Project workflow and steps

This is a supervised text classification problem.

  1. Read in the file and preprocess it (cleaning, tokenization)

  2. Vectorize the text (TF-IDF, BOW, word2vec, word embeddings, ELMo, ...)

  3. Build the model (machine learning or deep learning methods)

  4. Package the model for later use

  5. Deploy the project to a web framework (based on Flask)

Data preprocessing (cleaning, tokenization)

Read in and inspect the data

The dataset is Twitter data covering six languages: English, French, German, Spanish, Italian, and Dutch.

!head -5 data.csv

1 december wereld aids dag voorlichting in zuidafrika over bieten taboes en optimisme,nl
1 millón de afectados ante las inundaciones en sri lanka unicef está distribuyendo ayuda de emergencia srilanka,es
1 millón de fans en facebook antes del 14 de febrero y paty miki dani y berta se tiran en paracaídas qué harías tú porunmillondefans,es
1 satellite galileo sottoposto ai test presso lesaestec nl galileo navigation space in inglese,it
10 der welt sind bei,de

# read the raw CSV: each line is "<text>,<label>"
with open('data.csv') as in_f:
    lines = in_f.readlines()
# strip the trailing ",<lang>" to split each line into (text, label)
dataset = [(line.strip()[:-3], line.strip()[-2:]) for line in lines]
dataset[:5]

[('1 december wereld aids dag voorlichting in zuidafrika over bieten taboes en optimisme',
  'nl'),
 ('1 millón de afectados ante las inundaciones en sri lanka unicef está distribuyendo ayuda de emergencia srilanka',
  'es'),
 ('1 millón de fans en facebook antes del 14 de febrero y paty miki dani y berta se tiran en paracaídas qué harías tú porunmillondefans',
  'es'),
 ('1 satellite galileo sottoposto ai test presso lesaestec nl galileo navigation space in inglese',
  'it'),
 ('10 der welt sind bei', 'de')]
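
Before splitting, it is worth checking that the six classes are reasonably balanced. A quick sketch (not part of the original notebook) using collections.Counter:

from collections import Counter

_, labels = zip(*dataset)
print(Counter(labels))  # number of examples per language code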

Splitting into training and test sets

from sklearn.model_selection import train_test_split

x, y = zip(*dataset)  # unzip the (text, label) pairs into two parallel tuples
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=1)  # random_state fixes the random seed
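
By default train_test_split holds out 25% of the data for testing. If the language classes were imbalanced, one could additionally pass stratify=y to preserve the label proportions in both splits (an optional variant, not in the original):

x_train, x_test, y_train, y_test = train_test_split(
    x, y, random_state=1, test_size=0.25, stratify=y
)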

Data cleaning

Use regular expressions to denoise the data, mainly by removing URLs, @mentions, and #hashtags.

import re

def remove_noise(document):
    # strip URLs, @mentions, and #hashtags
    noise_pattern = re.compile("|".join([r"http\S+", r"@\w+", r"#\w+"]))
    clean_text = re.sub(noise_pattern, "", document)
    return clean_text.strip()

remove_noise("Trump images are now more popular than cat gifs. @trump #trends http://www.trumptrends.html")

Vectorizing the text

(TF-IDF, BOW, word2vec, word embeddings, ELMo, ...)

Count-based vectorization

from sklearn.feature_extraction.text import CountVectorizer

vec = CountVectorizer(
    lowercase=True,      # lowercase all text
    analyzer='char_wb',  # analyze character by character, within word boundaries
    ngram_range=(1, 3),  # character unigrams, bigrams, and trigrams with their counts
    # trump images are now... => 1gram = t,r,u,m,p... 2gram = tr,ru,um,mp...
    max_features=1000,   # keep the most common 1000 ngrams
    preprocessor=remove_noise
)
vec.fit(x_train)

def get_features(x):
    return vec.transform(x)
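
To sanity-check what the vectorizer learned, one can inspect the fitted vocabulary (a sketch; get_feature_names_out requires scikit-learn >= 1.0, older versions expose get_feature_names instead):

print(len(vec.vocabulary_))              # at most 1000, per max_features
print(vec.get_feature_names_out()[:20])  # a sample of the learned character n-grams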

Importing the classifier

Note that the classifier is fit on the transformed features, so the texts must first be passed through vec.transform.

from sklearn.naive_bayes import MultinomialNB  # multinomial Naive Bayes classifier
classifier = MultinomialNB()
classifier.fit(vec.transform(x_train), y_train)

Evaluating the classifier

classifier.score(vec.transform(x_test), y_test)
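
The single accuracy score hides per-language behavior. For a per-class breakdown, a classification report can be printed (a sketch, not in the original):

from sklearn.metrics import classification_report

y_pred = classifier.predict(vec.transform(x_test))
print(classification_report(y_test, y_pred))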

Modeling (machine learning / deep learning methods)

Saving the model

Here language_detector is a trained instance of the LanguageDetector class defined in the packaging section below.

model_path = "model/language_detector.model"
language_detector.save_model(model_path)

Loading the model

new_language_detector = LanguageDetector()
new_language_detector.load_model(model_path)

Predicting with the loaded model

new_language_detector.predict("10 der welt sind bei")
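
Since this sentence appears in the data labeled de, the loaded model should return something like array(['de'], dtype='<U2').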

Packaging the model for later use

import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from joblib import dump, load


class LanguageDetector():
    # constructor: defaults to a multinomial Naive Bayes classifier
    def __init__(self, classifier=MultinomialNB()):
        self.classifier = classifier
        self.vectorizer = CountVectorizer(ngram_range=(1, 2), max_features=1000, preprocessor=self._remove_noise)

    # private helper: data cleaning (URLs, @mentions, #hashtags)
    def _remove_noise(self, document):
        noise_pattern = re.compile("|".join([r"http\S+", r"@\w+", r"#\w+"]))
        clean_text = re.sub(noise_pattern, "", document)
        return clean_text

    # feature construction
    def features(self, X):
        return self.vectorizer.transform(X)

    # fit the vectorizer and the classifier on the training data
    def fit(self, X, y):
        self.vectorizer.fit(X)
        self.classifier.fit(self.features(X), y)

    # predict the language of a single text
    def predict(self, x):
        return self.classifier.predict(self.features([x]))

    # accuracy on a held-out set
    def score(self, X, y):
        return self.classifier.score(self.features(X), y)

    # persist the fitted model to disk
    def save_model(self, path):
        dump((self.classifier, self.vectorizer), path)

    # load a previously saved model
    def load_model(self, path):
        self.classifier, self.vectorizer = load(path)
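
Putting the pieces together, a typical end-to-end run with this class might look like the following sketch (it assumes the x_train/x_test splits from earlier, and that the model/ directory already exists):

language_detector = LanguageDetector()
language_detector.fit(x_train, y_train)
print(language_detector.score(x_test, y_test))
language_detector.save_model("model/language_detector.model")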

Deploying the project to a web framework (based on Flask)

Flask tooling

Deployment reference:

Flask部署机器学习.pdf
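
The referenced PDF is not reproduced here, but a minimal sketch of such a Flask service might look like the following (the /predict route, port, and JSON field names are illustrative assumptions, not taken from the deployment document):

from flask import Flask, request, jsonify

app = Flask(__name__)

# load the persisted model once at startup
language_detector = LanguageDetector()
language_detector.load_model("model/language_detector.model")

@app.route("/predict", methods=["POST"])
def predict():
    # expects a JSON body like {"text": "10 der welt sind bei"}
    text = request.get_json()["text"]
    lang = str(language_detector.predict(text)[0])
    return jsonify({"language": lang})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)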