NiuTrans / mtbookv2 · Commits

Commit d9c7e8d9, authored Nov 09, 2020 by zengxin
parent 2b0aa7b3
Showing 3 changed files with 185 additions and 99 deletions.

Chapter10/chapter10.tex   +1   -1
Chapter12/chapter12.tex   +1   -1
bibliography.bib          +183 -97
Chapter10/chapter10.tex · View file @ d9c7e8d9
@@ -1257,7 +1257,7 @@ L(\mathbi{Y},\widehat{\mathbi{Y}}) = \sum_{j=1}^n L_{\textrm{ce}}(\mathbi{y}_j,\
\vspace{0.5em}
\item The use of attention mechanisms is one of the key factors behind the recent success of machine translation, and of natural language processing as a whole \upcite{bahdanau2014neural,DBLP:journals/corr/LuongPM15}. Early on, some researchers tried to unify the attention mechanism with the word alignments of statistical machine translation \upcite{WangNeural,He2016ImprovedNM,li-etal-2019-word}. More recently, a large body of work has improved the attention mechanism, for example by building translation models on self-attention \upcite{vaswani2017attention}. Improving attention models has itself become one of the hot topics in natural language processing; {\chapterfifteen} discusses the different attention models used in machine translation in more depth.
\vspace{0.5em}
-\item In general, the computation performed by neural machine translation involves no human intervention, and the translation process cannot be explained directly with human knowledge, so an interesting direction is to introduce prior knowledge into neural machine translation and make its behavior more ``human-like''. For example, syntax trees can be used to inject human linguistic knowledge \upcite{Yang2017TowardsBH,Wang2019TreeTI}, and syntax-based neural machine translation likewise involves a great deal of tree-structured neural network modeling \upcite{DBLP:journals/corr/abs-1809-01854,DBLP:journals/corr/abs-1808-09374}. In addition, user-defined dictionaries or translation memories can be incorporated into the translation process \upcite{DBLP:journals/corr/ZhangZ16c,zhang-etal-2017-prior,duan-etal-2020-bilingual,cao-xiong-2018-encoding}, so that user constraints are reflected directly in the translation output. Many other kinds of prior knowledge, such as word alignments \upcite{li-etal-2019-word} and document-level information \upcite{Werlen2018DocumentLevelNM,DBLP:journals/corr/abs-1805-10163}, can also be exploited in neural machine translation.
+\item In general, the computation performed by neural machine translation involves no human intervention, and the translation process cannot be explained directly with human knowledge, so an interesting direction is to introduce prior knowledge into neural machine translation and make its behavior more ``human-like''. For example, syntax trees can be used to inject human linguistic knowledge \upcite{Yang2017TowardsBH,Wang2019TreeTI}, and syntax-based neural machine translation likewise involves a great deal of tree-structured neural network modeling \upcite{DBLP:journals/corr/abs-1809-01854,DBLP:journals/corr/abs-1808-09374}. In addition, user-defined dictionaries or translation memories can be incorporated into the translation process \upcite{DBLP:journals/corr/ZhangZ16c,zhang-etal-2017-prior,duan-etal-2020-bilingual,cao-xiong-2018-encoding}, so that user constraints are reflected directly in the translation output. Many other kinds of prior knowledge, such as word alignments \upcite{li-etal-2019-word,DBLP:conf/emnlp/MiWI16,DBLP:conf/coling/LiuUFS16} and document-level information \upcite{Werlen2018DocumentLevelNM,DBLP:journals/corr/abs-1805-10163,DBLP:conf/acl/LiLWJXZLL20}, can also be exploited in neural machine translation.
\end{itemize}
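The supervised-attention references added on the + line above (DBLP:conf/emnlp/MiWI16, DBLP:conf/coling/LiuUFS16) train attention weights against externally produced word alignments. Purely as an illustration of that idea, and not code from the book or from those papers, a minimal NumPy sketch with made-up shapes and a toy reference alignment could look like this:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def alignment_supervision_loss(scores, ref_align, eps=1e-9):
    """Cross-entropy between the model's attention and a reference alignment.

    scores:    [tgt_len, src_len] unnormalized attention scores from the decoder
    ref_align: [tgt_len, src_len] reference alignment distribution, rows sum to 1
    """
    attn = softmax(scores, axis=-1)          # attention distribution per target word
    return -np.mean(np.sum(ref_align * np.log(attn + eps), axis=-1))

# Toy example: 3 target words attending over 4 source words.
rng = np.random.default_rng(0)
scores = rng.normal(size=(3, 4))             # hypothetical decoder attention scores
ref_align = np.array([[1., 0., 0., 0.],      # e.g. alignments from an external aligner
                      [0., 0., 1., 0.],
                      [0., 1., 0., 0.]])

align_loss = alignment_supervision_loss(scores, ref_align)
# During training this term is added to the usual translation loss with a small weight:
# total_loss = translation_loss + lambda_align * align_loss
print(f"alignment supervision loss: {align_loss:.4f}")
```

Because the supervision only touches the training objective, inference is left unchanged.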
Chapter12/chapter12.tex · View file @ d9c7e8d9
@@ -587,7 +587,7 @@ Transformer Deep(48层) & 30.2 & 43.1 & 194$\times 10^
\begin{itemize}
\vspace{0.5em}
-\item In the past couple of years, studies have found that attention mechanisms can capture certain linguistic phenomena \upcite{DBLP:journals/corr/abs-1905-09418}. For example, in the multi-head attention of the Transformer, different heads tend to capture different kinds of information: some heads are more sensitive to low-frequency words, some are better suited to word sense disambiguation, and some can even capture syntactic information. Moreover, attention adds to model complexity, and as the number of layers grows neural machine translation models also contain a great deal of redundancy, so developing lightweight attention models is a direction of practical value \upcite{Xiao2019SharingAW,DBLP:journals/corr/abs-1805-00631,Lin2020WeightDT}.
+\item In the past couple of years, studies have found that attention mechanisms can capture certain linguistic phenomena \upcite{DBLP:journals/corr/abs-1905-09418}. For example, in the multi-head attention of the Transformer, different heads tend to capture different kinds of information: some heads are more sensitive to low-frequency words, some are better suited to word sense disambiguation, and some can even capture syntactic information. Moreover, attention adds to model complexity, and as the number of layers grows neural machine translation models also contain a great deal of redundancy, so developing lightweight attention models is a direction of practical value \upcite{Xiao2019SharingAW,DBLP:journals/corr/abs-1805-00631,Lin2020WeightDT,DBLP:conf/iclr/WuLLLH20,Kitaev2020ReformerTE,DBLP:journals/corr/abs-2005-00743,dai-etal-2019-transformer,DBLP:journals/corr/abs-2004-05150,DBLP:conf/iclr/RaePJHL20}.
\vspace{0.5em}
\item Neural machine translation relies on relatively expensive GPU hardware, so pruning and accelerating models is also a direction that interests many system developers. For example, from an engineering point of view one can reduce the computational load by computing with low-precision floating-point numbers \upcite{Ott2018ScalingNM} or with integers \upcite{DBLP:journals/corr/abs-1906-00532,Lin2020TowardsF8}, or introduce caching to speed up model inference \upcite{Vaswani2018Tensor2TensorFN}; the size of the whole model can also be reduced by pruning its parameter matrices \upcite{DBLP:journals/corr/SeeLM16}; another approach is knowledge distillation \upcite{Hinton2015Distilling,kim-rush-2016-sequence}, in which a large model is used to train a small one, often giving better results than training the small model on its own \upcite{DBLP:journals/corr/ChenLCL17}.
\vspace{0.5em}
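Knowledge distillation is only named in passing in the item above (Hinton2015Distilling, kim-rush-2016-sequence): a small student model is trained to match both the reference translations and the output distribution of a large teacher. The following word-level sketch in NumPy uses hypothetical logits and a toy vocabulary; it illustrates the general technique and is not the implementation of any of the cited works:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, gold_ids,
                      alpha=0.5, temperature=2.0, eps=1e-9):
    """Interpolate hard-label cross-entropy with cross-entropy against the teacher.

    student_logits, teacher_logits: [num_positions, vocab_size]
    gold_ids: [num_positions] reference token ids
    """
    # Hard loss: negative log-likelihood of the reference tokens.
    p_student = softmax(student_logits)
    hard = -np.mean(np.log(p_student[np.arange(len(gold_ids)), gold_ids] + eps))

    # Soft loss: match the teacher's temperature-smoothed distribution.
    student_soft = softmax(student_logits / temperature)
    teacher_soft = softmax(teacher_logits / temperature)
    soft = -np.mean(np.sum(teacher_soft * np.log(student_soft + eps), axis=-1))

    return alpha * hard + (1.0 - alpha) * soft

# Toy example: 4 target positions over a vocabulary of 10 subwords.
rng = np.random.default_rng(0)
student_logits = rng.normal(size=(4, 10))
teacher_logits = rng.normal(size=(4, 10))
gold_ids = np.array([1, 7, 3, 0])
print(f"distillation loss: {distillation_loss(student_logits, teacher_logits, gold_ids):.4f}")
```

Sequence-level distillation (kim-rush-2016-sequence) goes one step further and trains the student directly on full translations decoded by the teacher.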
bibliography.bib · View file @ d9c7e8d9
@@ -4337,6 +4337,43 @@ year = {2012}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% chapter 10------------------------------------------------------
+@inproceedings{DBLP:conf/acl/LiLWJXZLL20,
+author = {Bei Li and
+Hui Liu and
+Ziyang Wang and
+Yufan Jiang and
+Tong Xiao and
+Jingbo Zhu and
+Tongran Liu and
+Changliang Li},
+title = {Does Multi-Encoder Help? {A} Case Study on Context-Aware Neural Machine
+Translation},
+pages = {3512--3518},
+publisher = {Association for Computational Linguistics},
+year = {2020}
+}
+@inproceedings{DBLP:conf/emnlp/MiWI16,
+author = {Haitao Mi and
+Zhiguo Wang and
+Abe Ittycheriah},
+title = {Supervised Attentions for Neural Machine Translation},
+pages = {2283--2288},
+publisher = {The Association for Computational Linguistics},
+year = {2016}
+}
+@inproceedings{DBLP:conf/coling/LiuUFS16,
+author = {Lemao Liu and
+Masao Utiyama and
+Andrew M. Finch and
+Eiichiro Sumita},
+title = {Neural Machine Translation with Supervised Attention},
+pages = {3093--3102},
+publisher = {The Association for Computational Linguistics},
+year = {2016}
+}
@inproceedings{devlin-etal-2014-fast,
author = {Jacob Devlin and
Rabih Zbib and
@@ -4378,13 +4415,15 @@ year = {2012}
year = {1998},
}
@article{BENGIO1994Learning,
-author ={Y. {Bengio} and P. {Simard} and P. {Frasconi}},
-journal ={IEEE Transactions on Neural Networks},
-title ={Learning long-term dependencies with gradient descent is difficult},
-year ={1994},
-volume ={5},
-number ={2},
-pages ={157-166},
+author = {Yoshua Bengio and
+Patrice Y. Simard and
+Paolo Frasconi},
+title = {Learning long-term dependencies with gradient descent is difficult},
+journal = {Institute of Electrical and Electronics Engineers},
+volume = {5},
+number = {2},
+pages = {157--166},
+year = {1994}
}
@inproceedings{NIPS2017_7181,
author = {Ashish Vaswani and
@@ -4460,7 +4499,7 @@ pages ={157-166},
title = {Learning Deep Transformer Models for Machine Translation},
pages = {1810--1822},
publisher = {Association for Computational Linguistics},
-year = {2019},
+year = {2019}
}
@article{Li2020NeuralMT,
author = {Yanyang Li and
@@ -4860,21 +4899,24 @@ pages ={157-166},
year={2018}
}
@inproceedings{Lin2020TowardsF8,
-title={Towards Fully 8-bit Integer Inference for the Transformer Model},
-author={Y. Lin and Yanyang Li and Tengbo Liu and Tong Xiao and T. Liu and Jingbo Zhu},
-publisher={International Joint Conference on Artificial Intelligence},
-year={2020}
+author = {Ye Lin and
+Yanyang Li and
+Tengbo Liu and
+Tong Xiao and
+Tongran Liu and
+Jingbo Zhu},
+title = {Towards Fully 8-bit Integer Inference for the Transformer Model},
+pages = {3759--3765},
+publisher = {International Joint Conference on Artificial Intelligence},
+year = {2020}
}
@inproceedings{kim-rush-2016-sequence,
title = "Sequence-Level Knowledge Distillation",
author = "Kim, Yoon and
Rush, Alexander M.",
publisher = "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2016",
//address = "Austin, Texas",
//publisher = "Association for Computational Linguistics",
pages = "1317--1327",
author = {Yoon Kim and
Alexander M. Rush},
title = {Sequence-Level Knowledge Distillation},
pages = {1317--1327},
publisher = {The Association for Computational Linguistics},
year = {2016}
}
@article{Akaike1969autoregressive,
author = {Hirotugu Akaike},
@@ -4914,16 +4956,14 @@ pages ={157-166},
year={2018}
}
@inproceedings{cho-etal-2014-properties,
title = "On the Properties of Neural Machine Translation: Encoder--Decoder Approaches",
author = {Cho, Kyunghyun and
van Merri{\"e}nboer, Bart and
Bahdanau, Dzmitry and
Bengio, Yoshua},
month = oct,
year = "2014",
address = "Doha, Qatar",
publisher = "Association for Computational Linguistics",
pages = "103--111",
author = {Kyunghyun Cho and
Bart van Merrienboer and
Dzmitry Bahdanau and
Yoshua Bengio},
title = {On the Properties of Neural Machine Translation: Encoder-Decoder Approaches},
pages = {103--111},
publisher = {Association for Computational Linguistics},
year = {2014}
}
@inproceedings{DBLP:conf/acl/JeanCMB15,
@@ -4948,10 +4988,14 @@ pages ={157-166},
year = {2015}
}
@inproceedings{He2016ImprovedNM,
-title={Improved Neural Machine Translation with SMT Features},
-author={W. He and Zhongjun He and Hua Wu and H. Wang},
-booktitle={AAAI Conference on Artificial Intelligence},
-year={2016}
+author = {Wei He and
+Zhongjun He and
+Hua Wu and
+Haifeng Wang},
+title = {Improved Neural Machine Translation with {SMT} Features},
+pages = {151--157},
+publisher = {the Association for the Advance of Artificial Intelligence},
+year = {2016}
}
@inproceedings{zhang-etal-2017-prior,
title = {Prior Knowledge Integration for Neural Machine Translation using Posterior Regularization},
@@ -4966,45 +5010,40 @@ pages ={157-166},
}
@inproceedings{duan-etal-2020-bilingual,
title = "Bilingual Dictionary Based Neural Machine Translation without Using Parallel Sentences",
author = "Duan, Xiangyu and
Ji, Baijun and
Jia, Hao and
Tan, Min and
Zhang, Min and
Chen, Boxing and
Luo, Weihua and
Zhang, Yue",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
pages = "1570--1579",
author = {Xiangyu Duan and
Baijun Ji and
Hao Jia and
Min Tan and
Min Zhang and
Boxing Chen and
Weihua Luo and
Yue Zhang},
title = {Bilingual Dictionary Based Neural Machine Translation without Using
Parallel Sentences},
pages = {1570--1579},
publisher = {Association for Computational Linguistics},
year = {2020}
}
@inproceedings{cao-xiong-2018-encoding,
title = "Encoding Gated Translation Memory into Neural Machine Translation",
author = "Cao, Qian and
Xiong, Deyi",
month = oct,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
pages = "3042--3047",
author = {Qian Cao and
Deyi Xiong},
title = {Encoding Gated Translation Memory into Neural Machine Translation},
pages = {3042--3047},
publisher = {Association for Computational Linguistics},
year = {2018}
}
@inproceedings{yang-etal-2016-hierarchical,
title = "Hierarchical Attention Networks for Document Classification",
author = "Yang, Zichao and
Yang, Diyi and
Dyer, Chris and
He, Xiaodong and
Smola, Alex and
Hovy, Eduard",
month = jun,
year = "2016",
address = "San Diego, California",
publisher = "Association for Computational Linguistics",
pages = "1480--1489",
author = {Zichao Yang and
Diyi Yang and
Chris Dyer and
Xiaodong He and
Alexander J. Smola and
Eduard H. Hovy},
title = {Hierarchical Attention Networks for Document Classification},
pages = {1480--1489},
publisher = {The Association for Computational Linguistics},
year = {2016}
}
%%%%% chapter 10------------------------------------------------------
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -5014,9 +5053,6 @@ pages ={157-166},
@inproceedings{DBLP:conf/naacl/Johnson015,
author = {Rie Johnson and
Tong Zhang},
-editor = {Rada Mihalcea and
-Joyce Yue Chai and
-Anoop Sarkar},
title = {Effective Use of Word Order for Text Categorization with Convolutional
Neural Networks},
pages = {103--112},
@@ -5027,10 +5063,6 @@ pages ={157-166},
@inproceedings{DBLP:conf/naacl/NguyenG15,
author = {Thien Huu Nguyen and
Ralph Grishman},
-editor = {Phil Blunsom and
-Shay B. Cohen and
-Paramveer S. Dhillon and
-Percy Liang},
title = {Relation Extraction: Perspective from Convolutional Neural Networks},
pages = {39--48},
publisher = {The Association for Computational Linguistics},
@@ -5411,6 +5443,51 @@ pages ={157-166},
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%% chapter 12------------------------------------------------------
+@inproceedings{DBLP:conf/iclr/RaePJHL20,
+author = {Jack W. Rae and
+Anna Potapenko and
+Siddhant M. Jayakumar and
+Chloe Hillier and
+Timothy P. Lillicrap},
+title = {Compressive Transformers for Long-Range Sequence Modelling},
+publisher = {OpenReview.net},
+year = {2020}
+}
+@article{DBLP:journals/corr/abs-2004-05150,
+author = {Iz Beltagy and
+Matthew E. Peters and
+Arman Cohan},
+title = {Longformer: The Long-Document Transformer},
+journal = {CoRR},
+volume = {abs/2004.05150},
+year = {2020}
+}
+@article{DBLP:journals/corr/abs-2005-00743,
+author = {Yi Tay and
+Dara Bahri and
+Donald Metzler and
+Da-Cheng Juan and
+Zhe Zhao and
+Che Zheng},
+title = {Synthesizer: Rethinking Self-Attention in Transformer Models},
+journal = {CoRR},
+volume = {abs/2005.00743},
+year = {2020}
+}
+@inproceedings{DBLP:conf/iclr/WuLLLH20,
+author = {Zhanghao Wu and
+Zhijian Liu and
+Ji Lin and
+Yujun Lin and
+Song Han},
+title = {Lite Transformer with Long-Short Range Attention},
+publisher = {OpenReview.net},
+year = {2020}
+}
@inproceedings{DBLP:journals/corr/abs-1905-09418,
author = {Elena Voita and
David Talbot and
@@ -5506,18 +5583,16 @@ pages ={157-166},
}
@inproceedings{dai-etal-2019-transformer,
title = "Transformer-{XL}: Attentive Language Models beyond a Fixed-Length Context",
author = "Dai, Zihang and
Yang, Zhilin and
Yang, Yiming and
Carbonell, Jaime and
Le, Quoc and
Salakhutdinov, Ruslan",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
pages = "2978--2988",
author = {Zihang Dai and
Zhilin Yang and
Yiming Yang and
Jaime G. Carbonell and
Quoc Viet Le and
Ruslan Salakhutdinov},
title = {Transformer-XL: Attentive Language Models beyond a Fixed-Length Context},
pages = {2978--2988},
publisher = {Association for Computational Linguistics},
year = {2019}
}
@article{Liu2020LearningTE,
title={Learning to Encode Position for Transformer with Continuous Dynamical Model},
@@ -5563,10 +5638,15 @@ pages ={157-166},
year={2018}
}
@inproceedings{Dou2018ExploitingDR,
-title={Exploiting Deep Representations for Neural Machine Translation},
-author={Zi-Yi Dou and Zhaopeng Tu and Xing Wang and Shuming Shi and T. Zhang},
-publisher={Conference on Empirical Methods in Natural Language Processing},
-year={2018}
+author = {Zi-Yi Dou and
+Zhaopeng Tu and
+Xing Wang and
+Shuming Shi and
+Tong Zhang},
+title = {Exploiting Deep Representations for Neural Machine Translation},
+pages = {4253--4262},
+publisher = {Association for Computational Linguistics},
+year = {2018}
}
@inproceedings{Wang2019ExploitingSC,
title={Exploiting Sentential Context for Neural Machine Translation},
@@ -5576,10 +5656,16 @@ pages ={157-166},
}
@inproceedings{Dou2019DynamicLA,
-title={Dynamic Layer Aggregation for Neural Machine Translation},
-author={Zi-Yi Dou and Zhaopeng Tu and Xing Wang and Longyue Wang and Shuming Shi and T. Zhang},
-publisher={AAAI Conference on Artificial Intelligence},
-year={2019}
+author = {Zi-Yi Dou and
+Zhaopeng Tu and
+Xing Wang and
+Longyue Wang and
+Shuming Shi and
+Tong Zhang},
+title = {Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement},
+pages = {86--93},
+publisher = {the Association for the Advance of Artificial Intelligence},
+year = {2019}
}
@inproceedings{Wei2020MultiscaleCD,
title={Multiscale Collaborative Deep Models for Neural Machine Translation},
@@ -5614,7 +5700,7 @@ pages ={157-166},
@article{li2020shallow,
title={Shallow-to-Deep Training for Neural Machine Translation},
author={Li, Bei and Wang, Ziyang and Liu, Hui and Jiang, Yufan and Du, Quan and Xiao, Tong and Wang, Huizhen and Zhu, Jingbo},
-journal={arXiv preprint arXiv:2010.03737},
+publisher={Conference on Empirical Methods in Natural Language Processing},
year={2020}
}
%%%%% chapter 12------------------------------------------------------