Reposted from AINLP: recommending an NLP-related project on GitHub: msgi/nlp-journey
Project address (participation and stars welcome): https://github.com/msgi/nlp-journey. The project's author is 慢时光, a member of the AINLP discussion group. The project collects NLP-related code, including word embeddings (Word Embedding), named entity recognition (NER), text classification (Text Classification), text generation, and text similarity (Text Similarity) computation, built on Keras and TensorFlow. It also collects links to related books, papers, blog posts, algorithms, and projects, all carefully categorized. The following is taken from the project's introduction page.
Papers:
- EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks
- A Neural Probabilistic Language Model
- Transformer
- Transformer-XL
- Convolutional Neural Networks for Sentence Classification
- Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification
- A Question-Focused Multi-Factor Attention Network for Question Answering
- AutoCross: Automatic Feature Crossing for Tabular Data in Real-World Applications
- GloVe: Global Vectors for Word Representation
- A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation
- The Design and Implementation of XiaoIce, an Empathetic Social Chatbot
- A Knowledge-Grounded Neural Conversation Model
- Neural Generative Question Answering
- A Sensitivity Analysis of (and Practitioners' Guide to) Convolutional Neural Networks for Sentence Classification
- ImageNet Classification with Deep Convolutional Neural Networks
- Network In Network
- Long Short-Term Memory
- Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation
- Get To The Point: Summarization with Pointer-Generator Networks
- Generative Adversarial Text to Image Synthesis
- Image-to-Image Translation with Conditional Adversarial Networks
- Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
- Unsupervised Learning of Visual Structure using Predictive Generative Networks
- Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks
- Event Extraction via Dynamic Multi-Pooling Convolutional Neural Networks
- Low-Memory Neural Network Training: A Technical Report
- Language Models are Unsupervised Multitask Learners
- Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
Blog posts:
- The Illustrated Transformer
- Attention-based-model
- KL divergence
- Building Autoencoders in Keras
- Modern Deep Learning Techniques Applied to Natural Language Processing
- Node2vec embeddings for graph data
- Bert解读 (a BERT walkthrough)
- Unbelievable! LSTMs and GRUs have never been explained so clearly (animations + video)
Word embeddings: fastText (skip-gram + CBOW), gensim (word2vec). Data augmentation: EDA.
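Two of the four EDA operations mentioned above (random swap and random deletion) can be sketched in pure Python. This is an illustrative sketch of the technique, not the repo's actual implementation; the function names and parameters are assumptions:

```python
import random

def random_swap(words, n=1):
    # Randomly swap two word positions, n times (one of the four EDA operations).
    words = words[:]
    for _ in range(n):
        if len(words) < 2:
            break
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p=0.1):
    # Drop each word independently with probability p; keep at least one word.
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]

sentence = "the quick brown fox jumps over the lazy dog".split()
print(" ".join(random_swap(sentence, n=2)))
print(" ".join(random_deletion(sentence, p=0.2)))
```

The other two EDA operations (synonym replacement and random insertion) need a synonym dictionary such as WordNet, so they are omitted here.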
Classifiers: SVM, fastText, TextCNN, BiLSTM + Attention, RCNN, HAN.
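As a sketch of one of the listed classifiers, here is a minimal TextCNN (Kim 2014) in Keras, matching the repo's Keras/TensorFlow stack. The hyperparameters and function name are illustrative assumptions, not the repo's actual settings:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_textcnn(vocab_size=5000, maxlen=100, embed_dim=128,
                  num_classes=2, kernel_sizes=(3, 4, 5), filters=100):
    # TextCNN: parallel 1-D convolutions of several widths over word
    # embeddings, global max-pooling, concatenation, then softmax.
    inputs = layers.Input(shape=(maxlen,))
    x = layers.Embedding(vocab_size, embed_dim)(inputs)
    pooled = []
    for k in kernel_sizes:
        conv = layers.Conv1D(filters, k, activation="relu")(x)
        pooled.append(layers.GlobalMaxPooling1D()(conv))
    x = layers.Concatenate()(pooled)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_textcnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

The different kernel widths act like n-gram detectors of different sizes; max-pooling keeps only the strongest match per filter, so the classifier is insensitive to where in the sentence the n-gram occurs.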
Recommended blogs: 莫坠青云志, 彗双智能 (Keras source-code analysis), 机器之心, colah, wildml, ZHPMATRIX, 徐阿衡, 零基础入门深度学习.
Conferences:
- ACL: Association for Computational Linguistics
- EMNLP: Empirical Methods in Natural Language Processing
- COLING: International Conference on Computational Linguistics
- NIPS: Neural Information Processing Systems
- AAAI: AAAI Conference on Artificial Intelligence
- IJCAI: International Joint Conference on Artificial Intelligence
- ICML: International Conference on Machine Learning