Archive: 2019


Meta-Learning: Learning to Learn

Meta-learning, also known as “learning to learn”, aims to design models that can learn new skills or adapt to new environments rapidly from only a few training examples. There are three common approaches: 1) learn an efficient distance metric (metric-based); 2) use a (recurrent) network with external or internal memory (model-based); 3) optimize the model parameters explicitly for fast learning (optimization-based).
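As a sketch of the metric-based family, a prototypical-network-style classifier embeds the few support examples, averages them into one prototype per class, and labels each query by its nearest prototype. The toy data and 2-D "embedding space" below are assumptions for illustration, not from any particular paper:

```python
import numpy as np

def prototypes(support_x, support_y, num_classes):
    """Average the embedded support examples of each class into one prototype."""
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(num_classes)])

def classify(query_x, protos):
    """Assign each query to the class with the nearest prototype (Euclidean)."""
    dists = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way, 2-shot episode in a hand-crafted 2-D embedding space.
support_x = np.array([[0.0, 0.0], [0.2, 0.0],   # class 0
                      [5.0, 5.0], [5.2, 5.0]])  # class 1
support_y = np.array([0, 0, 1, 1])
protos = prototypes(support_x, support_y, num_classes=2)
pred = classify(np.array([[0.1, 0.1], [4.9, 5.1]]), protos)
print(pred)  # [0 1]
```

In a real system the embeddings come from a learned encoder trained across many such episodes; only the nearest-prototype rule is shown here.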


Neural Response Generation with Meta-Words

This paper proposes meta-words to represent the relationship between an input and its response. Within this meta-word-based framework, popular problems such as emotional dialogue generation and personalized dialogue generation can all be addressed. ACL 2019 · paper link
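The paper's exact architecture is not reproduced here; a minimal sketch of the general idea is conditioning a decoder on an explicit attribute vector describing the desired response. The specific attributes (length bucket, has-question, sentiment) are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical meta-word: explicit response attributes, e.g.
# (length bucket, has-question, sentiment score).
meta_word = np.array([1.0, 0.0, 0.5])

embed_dim = 8
token_emb = rng.normal(size=embed_dim)              # current decoder input token
dec_input = np.concatenate([token_emb, meta_word])  # condition every decoding step
print(dec_input.shape)  # (11,)
```

Because the meta-word is an explicit, interpretable vector, swapping in a sentiment attribute yields emotional dialogue generation and swapping in persona attributes yields personalized generation, which is why one framework covers both.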


Multi-Level Memory for Task Oriented Dialogs

This paper proposes a dialogue generation model based on multi-level memory networks. Its novelty lies in separating the context memory from the KB memory and representing the KB memory hierarchically, matching the natural hierarchy of KB results. This lets the model support non-contiguous dialogue (where the user refers back to KB results mentioned earlier in the history). It substantially outperforms previous models (e.g., Mem2Seq) on entity F1 and BLEU. paper link · code link
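A sketch of the hierarchical KB lookup, under assumed encodings: attend first over KB results, then over the attribute-value cells within each result, and multiply the two distributions to score individual cells. The random vectors stand in for learned encodings:

```python
import numpy as np

def attend(query, keys):
    """Softmax attention weights of a query over a set of key vectors."""
    scores = keys @ query
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Hypothetical hierarchical KB memory: one key per KB result, and under each
# result a set of attribute-value cells; the encodings here are random stand-ins.
rng = np.random.default_rng(1)
d = 4
result_keys = rng.normal(size=(3, d))    # 3 KB results
cell_keys = rng.normal(size=(3, 2, d))   # 2 attribute cells per result
query = rng.normal(size=d)               # encoded dialogue state

p_result = attend(query, result_keys)                 # which result is relevant
p_cell = np.stack([attend(query, ck) for ck in cell_keys])
p_joint = p_result[:, None] * p_cell                  # distribution over all cells
print(np.isclose(p_joint.sum(), 1.0))  # True
```

Keeping this KB memory separate from a flat context memory is what lets the model re-attend to a result mentioned several turns earlier without entangling it with the running dialogue history.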