diff --git a/README.md b/README.md
index b0764d0c..094b233f 100644
--- a/README.md
+++ b/README.md
@@ -14,7 +14,7 @@
 
 * 合作or侵权,请联系: `apachecn@163.com`
 * **我们不是 Apache 的官方组织/机构/团体,只是 Apache 技术栈(以及 AI)的爱好者!**
-* **ApacheCN - 学习机器学习群【629470233】ApacheCN - 学习机器学习群[629470233]**
+* **ApacheCN - 学习机器学习群【915394271】ApacheCN - 学习机器学习群[915394271]**
 
 > **欢迎任何人参与和完善:一个人可以走的很快,但是一群人却可以走的更远**
 
@@ -243,6 +243,21 @@
 * [实战项目 3 优化 过拟合和欠拟合](docs/TensorFlow2.x/实战项目_3_优化_过拟合和欠拟合.md)
 * [实战项目 4 古诗词自动生成](docs/TensorFlow2.x/实战项目_4_古诗词自动生成.md)
 
+切分(分词)
+
+词性标注
+
+命名实体识别
+
+句法分析
+
+WordNet可以被看作是一个同义词词典
+
+词干提取(stemming)与词形还原(lemmatization)
+
+* https://www.biaodianfu.com/nltk.html/amp
+
+
 ## 3.自然语言处理
 
 学习过程中-内心复杂的变化!!!
@@ -512,55 +527,6 @@ mage字幕是为给定图像生成文本描述的任务。
 * [@那伊抹微笑](https://github.com/wangyangting)
 * [@LAMDA-健忘症]() 永久留任-非常感谢对群的贡献
 
-> Ml 第一届 (2017-09-01)
-
-* [@易漠]()
-* [@Mike](https://github.com/mikechengwei)
-* [@Books]()
-* [@李孟禹]()
-* [@张假飞]()
-* [@Glassy]()
-* [@红色石头]()
-* [@微光同尘]()
-
-> Ml 第二届 (2018-07-04)
-
-* [@张假飞]()
-* [@李孟禹]()
-* [@小明教主]()
-* [@平淡的天]()
-* [@凌少skierゞ]()
-* [@じ☆νЁ坐看云起]()
-* [古柳-DesertsX]()
-* [woodchuck]()
-* [自由精灵]()
-* [楚盟]()
-* [99杆清台]()
-* [时空守望者@]()
-* [只想发论文的渣渣]()
-* [目标: ml劝退专家]()
-
-> Ml 第三届 (2019-01-01)
-
-* [只会喊666的存在]()
-* [codefun007.xyz]()
-* [荼靡]()
-* [大鱼]()
-* [青鸟]()
-* [古柳-DesertsX]()
-* [Edge]()
-* [Alluka]()
-* [不发篇paper不改名片]()
-* [FontTian]()
-* [Bigjing]()
-* [仁 礼 智 爱]()
-* [可啪的小乖受]()
-* [老古董]()
-* [时空守望者]()
-* [我好菜啊]()
-* [Messi 19]()
-* [萌Jay小公举]()
-
 > Ml 第四届 (2019-06-01)
 
 * [佛学爱好者]()
@@ -584,6 +550,55 @@
 * [zhiqing]()
 * [SrL.z]()
 
+> Ml 第三届 (2019-01-01)
+
+* [只会喊666的存在]()
+* [codefun007.xyz]()
+* [荼靡]()
+* [大鱼]()
+* [青鸟]()
+* [古柳-DesertsX]()
+* [Edge]()
+* [Alluka]()
+* [不发篇paper不改名片]()
+* [FontTian]()
+* [Bigjing]()
+* [仁 礼 智 爱]()
+* [可啪的小乖受]()
+* [老古董]()
+* [时空守望者]()
+* [我好菜啊]()
+* [Messi 19]()
+* [萌Jay小公举]()
+
+> Ml 第二届 (2018-07-04)
+
+* [@张假飞]()
+* [@李孟禹]()
+* [@小明教主]()
+* [@平淡的天]()
+* [@凌少skierゞ]()
+* [@じ☆νЁ坐看云起]()
+* [古柳-DesertsX]()
+* [woodchuck]()
+* [自由精灵]()
+* [楚盟]()
+* [99杆清台]()
+* [时空守望者@]()
+* [只想发论文的渣渣]()
+* [目标: ml劝退专家]()
+
+> Ml 第一届 (2017-09-01)
+
+* [@易漠]()
+* [@Mike](https://github.com/mikechengwei)
+* [@Books]()
+* [@李孟禹]()
+* [@张假飞]()
+* [@Glassy]()
+* [@红色石头]()
+* [@微光同尘]()
+
 **欢迎贡献者不断的追加**
 
 ## 免责声明 - 【只供学习参考】
diff --git a/src/py3.x/tensorflow2.x/EmotionData.xlsx b/src/py3.x/tensorflow2.x/Emotion/EmotionData.xlsx
similarity index 100%
rename from src/py3.x/tensorflow2.x/EmotionData.xlsx
rename to src/py3.x/tensorflow2.x/Emotion/EmotionData.xlsx
diff --git a/src/py3.x/tensorflow2.x/EmotionData的副本.xlsx b/src/py3.x/tensorflow2.x/Emotion/EmotionData的副本.xlsx
similarity index 100%
rename from src/py3.x/tensorflow2.x/EmotionData的副本.xlsx
rename to src/py3.x/tensorflow2.x/Emotion/EmotionData的副本.xlsx
diff --git a/src/py3.x/tensorflow2.x/Emotion_acc.png b/src/py3.x/tensorflow2.x/Emotion/Emotion_acc.png
similarity index 100%
rename from src/py3.x/tensorflow2.x/Emotion_acc.png
rename to src/py3.x/tensorflow2.x/Emotion/Emotion_acc.png
diff --git a/src/py3.x/tensorflow2.x/Emotion_loss.png b/src/py3.x/tensorflow2.x/Emotion/Emotion_loss.png
similarity index 100%
rename from src/py3.x/tensorflow2.x/Emotion_loss.png
rename to src/py3.x/tensorflow2.x/Emotion/Emotion_loss.png
diff --git a/src/py3.x/tensorflow2.x/config.py b/src/py3.x/tensorflow2.x/config.py
index 7a7322f0..5f03420a 100644
--- a/src/py3.x/tensorflow2.x/config.py
+++ b/src/py3.x/tensorflow2.x/config.py
@@ -8,10 +8,10 @@
 class Config(object):
     poetry_file = 'poetry.txt'
     weight_file = 'poetry_model.h5'
-    data_file = 'EmotionData.xlsx'
-    model_file = 'EmotionModel.h5'
-    vocab_list = 'vocal_list.pkl'
-    word_index = 'word_index.pkl'
+    data_file = 'Emotion/EmotionData.xlsx'
+    model_file = 'Emotion/EmotionModel.h5'
+    vocab_list = 'Emotion/vocal_list.pkl'
+    word_index = 'Emotion/word_index.pkl'
     # 根据前六个字预测第七个字
     max_len = 6
     batch_size = 512
diff --git a/src/py3.x/tensorflow2.x/text_Emotion.py b/src/py3.x/tensorflow2.x/text_Emotion.py
index b8437a76..7c5034e0 100644
--- a/src/py3.x/tensorflow2.x/text_Emotion.py
+++ b/src/py3.x/tensorflow2.x/text_Emotion.py
@@ -56,7 +56,7 @@ def loadMyWord2Vec(outfile):
 def load_embeding():
     # 训练词向量(用空格隔开的文本)
     infile = "./CarCommentAll_cut.csv"
-    outfile = "gensim_word2vec_60/Word60.model"
+    outfile = "/opt/data/nlp/开源词向量/gensim_word2vec_60/Word60.model"
     # trainWord2Vec(infile, outfile)
     # 加载词向量
     Word2VecModel = loadMyWord2Vec(outfile)
diff --git a/src/py3.x/tensorflow2.x/text_gru.py b/src/py3.x/tensorflow2.x/text_PoetryModel.py
similarity index 99%
rename from src/py3.x/tensorflow2.x/text_gru.py
rename to src/py3.x/tensorflow2.x/text_PoetryModel.py
index 5479e1a2..ff0ca79d 100644
--- a/src/py3.x/tensorflow2.x/text_gru.py
+++ b/src/py3.x/tensorflow2.x/text_PoetryModel.py
@@ -2,6 +2,7 @@
 '''
 代码参考: https://github.com/ioiogoo/poetry_generator_Keras
 做了一定的简化,作者 @ioiogoo 协议是 MIT
+目标: 自动生成歌词的
 '''
 import re
 import os
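
The README hunk above adds a short list of NLP preprocessing topics (tokenization, part-of-speech tagging, named-entity recognition, syntactic parsing, WordNet as a thesaurus, stemming vs. lemmatization) plus an NLTK tutorial link. A minimal NLTK sketch of those steps, assuming the required NLTK data packages have already been downloaded; the sample sentence and all names below are illustrative only and are not taken from the repo:

```python
import nltk
from nltk import word_tokenize, pos_tag, ne_chunk
from nltk.corpus import wordnet
from nltk.stem import PorterStemmer, WordNetLemmatizer

# First run only: fetch the corpora these calls rely on.
# for pkg in ('punkt', 'averaged_perceptron_tagger', 'maxent_ne_chunker', 'words', 'wordnet'):
#     nltk.download(pkg)

sentence = "The London startup is building better language models."

tokens = word_tokenize(sentence)   # tokenization (切分/分词)
tagged = pos_tag(tokens)           # part-of-speech tagging (词性标注)
entities = ne_chunk(tagged)        # shallow named-entity recognition (命名实体识别)
print(tokens)
print(tagged)
print(entities)

# Full syntactic parsing (句法分析) needs a grammar or an external parser and is omitted here.

# WordNet as a thesaurus: synonyms are grouped into synsets.
print(wordnet.synsets('car')[0].lemma_names())

# Stemming chops suffixes; lemmatization maps to a dictionary form.
print(PorterStemmer().stem('studies'))           # 'studi'
print(WordNetLemmatizer().lemmatize('studies'))  # 'study'
```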
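
In `config.py` the change is pure path bookkeeping: the four Emotion-related files now live in an `Emotion/` subfolder, and the paths stay relative, so they are resolved against the working directory of whatever imports `Config`. A tiny hedged sketch of how the relocated spreadsheet might be opened through the config; importing `config` from the same folder and reading the file with pandas are assumptions, not something shown in this diff:

```python
import os
import pandas as pd
from config import Config  # src/py3.x/tensorflow2.x/config.py

# Run from src/py3.x/tensorflow2.x/ so the relative 'Emotion/...' paths resolve.
assert os.path.exists(Config.data_file), f"missing {Config.data_file}"
df = pd.read_excel(Config.data_file)  # Emotion/EmotionData.xlsx
print(df.shape)
```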
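
The `text_Emotion.py` hunk only swaps the relative word2vec path for an absolute one under `/opt/data/nlp/开源词向量/`; `loadMyWord2Vec` itself is outside this diff. A hedged sketch of how such a saved gensim model is typically loaded and queried, assuming it was persisted with `Word2Vec.save()`; the helper name and the query word are placeholders:

```python
from gensim.models import Word2Vec

def load_word2vec(outfile):
    # Load a model previously written with Word2Vec.save(outfile).
    return Word2Vec.load(outfile)

if __name__ == '__main__':
    w2v = load_word2vec('/opt/data/nlp/开源词向量/gensim_word2vec_60/Word60.model')
    print(w2v.wv.vector_size)                  # embedding dimensionality (60, judging by the file name)
    print(w2v.wv.most_similar('舒服', topn=5))  # nearest neighbours of a sample in-vocabulary word
```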