NiuTrans / Toy-MT-Introduction · Commits

Commit 1f839aed, authored Apr 30, 2020 by xiaotong

updates of section 4

parent 035ff955
Showing 4 changed files with 598 additions and 14 deletions
Book/Chapter4/chapter4.tex  +9 -4
Book/mt-book-xelatex.idx  +4 -4
Book/mt-book-xelatex.ptc  +579 -0
Book/mt-book-xelatex.tex  +6 -6
Book/Chapter4/chapter4.tex
View file @ 1f839aed
@@ -2245,7 +2245,7 @@ d_1 = {d'} \circ {r_5}
\parinterval There are two common ways to deal with this problem:
\begin{itemize}
\vspace{0.3em}
\item Restrict the grammar. For example, one can limit the number of variables in a rule, or forbid consecutive variables; rules satisfying such constraints are said to be in {\small\bfnew{Lexicalized Norm Form}} (LNF). Hierarchical phrase rules, for instance, are LNF rules. Since the words (terminals) in an LNF rule can serve as anchors, the span matched by every variable is fixed during rule matching;
\vspace{0.3em}
\item Binarize the rules and decode with the CYK algorithm, a strategy also common in syntactic parsing. Rule binarization converts each rule into rules containing at most two variables or contiguous word strings (string-to-tree rules). For example, for the following rule:
\begin{eqnarray}
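The binarization step described in this hunk can be sketched concretely. The following is a minimal, hypothetical illustration (the rule encoding and names are mine, not the book's): a hierarchical rule whose right-hand side mixes words and variables is rewritten, via virtual nonterminals, into rules with at most two right-hand-side symbols, the form a CYK-style decoder can process.

```python
# Sketch of source-side rule binarization for CYK-style decoding.
# A rule's right-hand side is a list of symbols; "X1", "X2", ... are
# variables (nonterminals), everything else is a word. Binarization
# rewrites the rule so every right-hand side has at most two symbols,
# introducing virtual nonterminals ("X~V1", ...) along the way.
# All names here are illustrative, not taken from any real decoder.

def binarize(lhs, rhs):
    """Left-branching binarization: returns a list of (lhs, rhs) rules
    whose right-hand sides contain at most two symbols."""
    if len(rhs) <= 2:
        return [(lhs, tuple(rhs))]
    rules = []
    prev = rhs[0]
    for i, sym in enumerate(rhs[1:-1], start=1):
        virt = f"{lhs}~V{i}"              # virtual nonterminal
        rules.append((virt, (prev, sym)))
        prev = virt
    rules.append((lhs, (prev, rhs[-1])))
    return rules

for l, r in binarize("X", ["X1", "saw", "X2", "with", "X3"]):
    print(l, "->", " ".join(r))
```

Left-branching is only one choice of bracketing; a synchronous grammar also has to keep the target side and the variable covers consistent, which this sketch omits.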
@@ -2268,11 +2268,16 @@ d_1 = {d'} \circ {r_5}
\parinterval Statistical machine translation is one of the major milestones of natural language processing in the past three decades, and its statistical modeling philosophy has had a lasting influence on NLP research. Phrase-based models, hierarchical phrase-based models, and linguistically syntax-based models all try to answer the same question: what kind of knowledge should be used to statistically model machine translation? There is still no definitive answer. What is clear, however, is that statistical machine translation gave the field a paradigm: letting computers describe translation with probabilistic ``knowledge''. This ``knowledge'' consists of the parameters of a statistical model, which can be learned automatically from large amounts of bilingual and monolingual data. This modeling philosophy remains ubiquitous in machine translation research today.
-\parinterval This chapter introduced the classic models of statistical machine translation, from early phrase-based models to hierarchical phrase models and the more complex linguistically syntax-based models, and tried to explain their different modeling ideas. Statistical machine translation is a very rich subject, however, and a few dozen pages of text cannot cover everything. Many directions are worth the reader's further study:
+\parinterval This chapter introduced the classic models of statistical machine translation, from early phrase-based models to hierarchical phrase models and the more complex linguistically syntax-based models, and tried to explain their different modeling ideas. Statistical machine translation is a very rich subject, however, and a single chapter cannot cover everything. Many directions are worth the reader's further study:
\begin{itemize}
-\item Much of the success of statistical machine translation comes from the ability of discriminative models to incorporate arbitrary features. In the SMT era, a great deal of work therefore focused on designing new features, e.g., translation features based on various statistics and prior knowledge \cite{och2004smorgasbord,Chiang200911,gildea2003loosely}, or large-scale sparse features in the style of many classification tasks \cite{chiang2008online}. Model training and feature-weight tuning are another important problem in statistical machine translation, with many methods beyond minimum error rate training. Over the past decade, researchers have proposed many effective methods for learning the feature values and weights of modern SMT systems: maximum likelihood training \cite{koehn2003statistical,Peter1993The}, discriminative training \cite{Blunsom2008A}, Bayesian training \cite{Blunsom2009A,Cohn2009A}, minimum error rate training (MERT) \cite{och2003minimum}, minimum risk training \cite{smith2006minimum,li2009first-}, margin-based training \cite{watanabe2007online,Chiang200911}, and pairwise ranking optimization (PRO) \cite{Hopkins2011Tuning,dreyer2015apro}. Although widely used, these methods still do not resolve the mismatch between training and decoding; most systems stick to relative-frequency estimation plus MERT/PRO. In fact, training and decoding in SMT are themselves inconsistent: feature values are obtained by maximum likelihood estimation on bilingual data (without pruning), whereas decoding uses beam pruning and the model aims to maximize an MT evaluation metric. This problem can be alleviated by adjusting the training objective \cite{XiaoA};
-\vspace{0.3em}
-\item Another fundamental problem in statistical machine translation is how to represent and acquire translation units (such as phrases), a problem that arises in all three models introduced in this chapter. Traditionally, researchers mostly relied on structured information such as word alignments or syntax trees and acquired phrases or rules heuristically. The biggest weakness of such methods is that errors in upstream systems (e.g., word alignment or syntactic parsing) propagate downstream. Many researchers have therefore tried to guide unit acquisition with more diverse alignments or parses. For example, one can bypass word alignment and align phrases directly \cite{koehn2003statistical}; one can also use multiple parse trees or parse forests to cover more syntactic phenomena and thereby raise the recall of rule extraction \cite{mi2008forest,xiao2010empirical}. Another interesting direction is to represent more diverse translation hypotheses more compactly, e.g., encoding translations as finite-state automata and decoding over a larger search space \cite{de2010hierarchical,Casacuberta2004Machine};
+\item Much of the success of statistical machine translation comes from the ability of discriminative models to incorporate arbitrary features. In the SMT era, a great deal of work therefore focused on designing new features, e.g., translation features based on various statistics and prior knowledge \cite{och2004smorgasbord,Chiang200911,gildea2003loosely}, or large-scale sparse features in the style of classification tasks \cite{chiang2008online}. Model training and feature-weight tuning are another important problem in statistical machine translation, with many methods beyond minimum error rate training. Over the past decade, researchers have proposed many effective methods for learning the feature values and weights of modern SMT systems, such as maximum likelihood estimation \cite{koehn2003statistical,Peter1993The}, discriminative methods \cite{Blunsom2008A}, Bayesian methods \cite{Blunsom2009A,Cohn2009A}, minimum risk training \cite{smith2006minimum,li2009first-}, margin-based methods \cite{watanabe2007online,Chiang200911}, and ranking-based methods (PRO) \cite{Hopkins2011Tuning,dreyer2015apro}. In fact, training and decoding in SMT are themselves inconsistent: feature values are obtained by maximum likelihood estimation on bilingual data (without pruning), whereas decoding uses beam pruning and the model aims to maximize an MT evaluation metric. This problem can be alleviated by adjusting the training objective \cite{XiaoA} ({\red References still needed! The SEARN method! Practical structured learning techniques for natural language processing});
+\vspace{0.3em}
+\item Another fundamental problem in statistical machine translation is how to represent and acquire translation units (such as phrases). Traditionally, researchers mostly relied on structured information such as word alignments or syntax trees and acquired phrases and translation rules heuristically. The biggest weakness of such methods is that errors in upstream systems (e.g., word alignment or syntactic parsing) propagate downstream. Many researchers have therefore tried to guide unit acquisition with more diverse alignments or parses. For example, one can bypass word alignment and align phrases directly \cite{koehn2003statistical} ({\red Do not use this reference; use Phrase Alignment Models for Statistical Machine Translation instead}); one can also use multiple parse trees or parse forests to cover more syntactic phenomena and thereby raise the recall of rule extraction \cite{mi2008forest,xiao2010empirical}. Another interesting direction is to represent more diverse translation hypotheses more compactly, e.g., encoding translations as finite-state automata and decoding over a larger search space \cite{de2010hierarchical,Casacuberta2004Machine};
\vspace{0.3em}
\item System combination is a research direction characteristic of the SMT era. In a sense, it took off with the machine translation evaluation campaigns of the early 2000s, where combining multiple translation engines was one of the main ways to boost performance. Its premise is that diverse translation candidates help produce better output. There are many approaches: a simple one is hypothesis selection, i.e., directly choosing one translation from the outputs of multiple systems \cite{bangalore2001computing,rosti2007combining,xiao2013bagging}; another builds a decoding lattice or confusion network from the outputs of multiple systems, from which new translations can be generated \cite{Yang2009Lattice,He2008Indirect,Li2009Incremental}; different models can also be fused dynamically during decoding \cite{Yang2009Joint,Mu2009Collaborative}. Researchers have also explored making different models complement each other within a single system rather than simply combining them, e.g., controlling the degree to which syntax is used so that the syntactic model and the hierarchical phrase model each handle what they do best \cite{Tong2016Syntactic};
\vspace{0.3em}
\item The language model is an important feature of statistical machine translation systems. Even with an $n$-gram language model, however, systems still produce grammatically incorrect translations, sometimes with completely wrong structure. To address this, researchers have tried syntax-based language models. Early explorations include the work of Charniak et al. \cite{charniak2001immediate} and Och et al. \cite{och2004smorgasbord}, though the results at the time did not show that syntax-based language models could significantly improve translation quality. Later, the research team at BBN proposed a dependency-tree language model \cite{shen2008a} that significantly improves hierarchical phrase models; it was this technique that placed BBN's systems among the top entries in several MT evaluations and drew wide attention. There is also work on language models based on structures such as tree-substitution grammars \cite{xiao2011language}. In fact, tree-to-tree and string-to-tree models can themselves be viewed as measures of the syntactic well-formedness of the target language, with target-side syntax implicit in the translation rules; features can then be designed on those rules to obtain the effect of a target-side syntactic language model.
\vspace{0.3em}
\end{itemize}
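The pairwise ranking optimization (PRO) cited in the further-reading items above lends itself to a small sketch. This is an illustration under simplifying assumptions, not the cited algorithm itself: a perceptron on feature differences stands in for the binary classifier used in the PRO work, and the feature names (`tm`, `lm`) and the toy n-best list are invented.

```python
from itertools import combinations

# Minimal PRO-style tuning sketch (illustrative only): candidates in an
# n-best list carry a feature vector and a metric score (e.g. sentence
# BLEU); we learn feature weights so the model ranking agrees with the
# metric ranking, by looking at candidate pairs.

def dot(w, f):
    return sum(w.get(k, 0.0) * v for k, v in f.items())

def pro_tune(nbest, epochs=20, lr=0.1):
    """nbest: list of (features: dict, metric_score: float)."""
    w = {}
    for _ in range(epochs):
        for (fa, ma), (fb, mb) in combinations(nbest, 2):
            if ma == mb:
                continue
            if ma < mb:                   # let (fa, ma) be metric-better
                (fa, ma), (fb, mb) = (fb, mb), (fa, ma)
            if dot(w, fa) <= dot(w, fb):  # model disagrees with metric
                for k in set(fa) | set(fb):
                    w[k] = w.get(k, 0.0) + lr * (fa.get(k, 0.0) - fb.get(k, 0.0))
    return w

# toy 3-best list with two features ("tm", "lm")
nbest = [({"tm": 2.0, "lm": 0.5}, 0.9),
         ({"tm": 1.0, "lm": 2.0}, 0.4),
         ({"tm": 0.5, "lm": 1.0}, 0.2)]
w = pro_tune(nbest)
best = max(nbest, key=lambda c: dot(w, c[0]))
print(best[1])   # → 0.9  (the metric-best candidate now scores highest)
```

The actual PRO method samples pairs from much larger n-best lists and fits a regularized binary classifier to the feature differences; the deterministic pair sweep here only keeps the toy example reproducible.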
Book/mt-book-xelatex.idx
View file @ 1f839aed
@@ -46,10 +46,10 @@
\indexentry{Chapter4.4.3.5|hyperpage}{65}
\indexentry{Chapter4.4.4|hyperpage}{66}
\indexentry{Chapter4.4.4.1|hyperpage}{67}
-\indexentry{Chapter4.4.4.2|hyperpage}{68}
+\indexentry{Chapter4.4.4.2|hyperpage}{67}
-\indexentry{Chapter4.4.5|hyperpage}{70}
+\indexentry{Chapter4.4.5|hyperpage}{68}
\indexentry{Chapter4.4.5|hyperpage}{71}
\indexentry{Chapter4.4.7|hyperpage}{73}
-\indexentry{Chapter4.4.7.1|hyperpage}{75}
+\indexentry{Chapter4.4.7.1|hyperpage}{74}
\indexentry{Chapter4.4.7.2|hyperpage}{76}
-\indexentry{Chapter4.5|hyperpage}{78}
+\indexentry{Chapter4.5|hyperpage}{77}
Book/mt-book-xelatex.ptc  0 → 100644
View file @ 1f839aed
\boolfalse {citerequest}\boolfalse {citetracker}\boolfalse {pagetracker}\boolfalse {backtracker}\relax
\defcounter {refsection}{0}\relax
\select@language {english}
\defcounter {refsection}{0}\relax
\contentsline {part}{\@mypartnumtocformat {I}{Foundations of Machine Translation}}{11}{part.1}
\ttl@starttoc {default@1}
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {1}Introduction to Machine Translation}{13}{chapter.1}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.1}The Concept of Machine Translation}{13}{section.1.1}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.2}A Brief History of Machine Translation}{16}{section.1.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.2.1}Human Translation}{16}{subsection.1.2.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.2.2}The Birth of Machine Translation}{17}{subsection.1.2.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.2.3}Setbacks for Machine Translation}{18}{subsection.1.2.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.2.4}The Rapid Growth of Machine Translation}{19}{subsection.1.2.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.2.5}The Boom of Machine Translation}{20}{subsection.1.2.5}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.3}The State of Machine Translation}{21}{section.1.3}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.4}Machine Translation Approaches}{22}{section.1.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.4.1}Rule-based Machine Translation}{22}{subsection.1.4.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.4.2}Example-based Machine Translation}{24}{subsection.1.4.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.4.3}Statistical Machine Translation}{25}{subsection.1.4.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.4.4}Neural Machine Translation}{26}{subsection.1.4.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.4.5}Comparative Analysis}{27}{subsection.1.4.5}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.5}Translation Quality Evaluation}{28}{section.1.5}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.5.1}Human Evaluation}{28}{subsection.1.5.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.5.2}Automatic Evaluation}{29}{subsection.1.5.2}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{BLEU}{29}{section*.15}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{TER}{31}{section*.16}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Checkpoint-based Evaluation}{31}{section*.17}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.6}Machine Translation Applications}{32}{section.1.6}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.7}Open-Source Projects and Evaluation Campaigns}{34}{section.1.7}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.7.1}Open-Source Machine Translation Systems}{34}{subsection.1.7.1}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Open-Source Statistical Machine Translation Systems}{34}{section*.19}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Open-Source Neural Machine Translation Systems}{36}{section*.20}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.7.2}Common Datasets and Public Evaluation Tasks}{38}{subsection.1.7.2}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.8}Recommended Learning Resources}{40}{section.1.8}
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {2}Foundations of Lexical Analysis, Syntax, and Statistical Modeling}{45}{chapter.2}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.1}Problem Overview}{46}{section.2.1}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.2}Foundations of Probability Theory}{47}{section.2.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.2.1}Random Variables and Probability}{47}{subsection.2.2.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.2.2}Joint, Conditional, and Marginal Probability}{49}{subsection.2.2.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.2.3}The Chain Rule}{50}{subsection.2.2.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.2.4}Bayes' Rule}{51}{subsection.2.2.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.2.5}KL Distance and Entropy}{53}{subsection.2.2.5}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Information Entropy}{53}{section*.27}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{KL Distance}{54}{section*.29}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Cross-Entropy}{54}{section*.30}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.3}Chinese Word Segmentation}{55}{section.2.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.3.1}Dictionary-based Segmentation}{56}{subsection.2.3.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.3.2}Statistical Segmentation}{57}{subsection.2.3.2}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Learning and Inference with Statistical Models}{57}{section*.34}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{A Dice Game}{58}{section*.36}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Full-Probability Segmentation}{60}{section*.40}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.4}$n$-gram Language Models }{62}{section.2.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.4.1}Modeling}{63}{subsection.2.4.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.4.2}Out-of-Vocabulary Words and Smoothing}{65}{subsection.2.4.2}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Additive Smoothing}{66}{section*.46}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Good-Turing Estimation}{67}{section*.48}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Kneser-Ney Smoothing}{68}{section*.50}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.5}Syntactic Parsing (Phrase-Structure Parsing)}{70}{section.2.5}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.5.1}Syntax-Tree Representations of Sentences}{70}{subsection.2.5.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.5.2}Context-Free Grammars}{72}{subsection.2.5.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.5.3}Probabilities of Rules and Derivations}{76}{subsection.2.5.3}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.6}Summary and Further Reading}{78}{section.2.6}
\defcounter {refsection}{0}\relax
\contentsline {part}{\@mypartnumtocformat {II}{Statistical Machine Translation}}{81}{part.2}
\ttl@stoptoc {default@1}
\ttl@starttoc {default@2}
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {3}Word-based Machine Translation Models}{83}{chapter.3}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.1}What Is a Word-based Translation Model}{83}{section.3.1}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.2}Building a Simple Machine Translation System}{85}{section.3.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.2.1}How to Translate?}{85}{subsection.3.2.1}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The Machine Translation Pipeline}{86}{section*.63}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Human Translation vs. Machine Translation}{87}{section*.65}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.2.2}Basic Framework}{87}{subsection.3.2.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.2.3}Word Translation Probabilities}{88}{subsection.3.2.3}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{What Is a Word Translation Probability?}{88}{section*.67}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{How to Learn from a Single Bilingual Parallel Example?}{88}{section*.69}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{How to Learn from Large Amounts of Bilingual Parallel Data?}{90}{section*.70}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.2.4}Sentence-Level Translation Models}{91}{subsection.3.2.4}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The Basic Model}{91}{section*.72}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Generating Fluent Translations}{93}{section*.74}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.2.5}Decoding}{95}{subsection.3.2.5}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.3}Word-based Translation Modeling}{98}{section.3.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.3.1}The Noisy Channel Model}{98}{subsection.3.3.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.3.2}Three Fundamental Problems of Statistical Machine Translation}{100}{subsection.3.3.2}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Word Alignment}{101}{section*.83}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Translation Models Based on Word Alignment}{101}{section*.86}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{A Translation Example Based on Word Alignment}{103}{section*.88}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.4}IBM Models 1-2}{104}{section.3.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.4.1}IBM Model 1}{104}{subsection.3.4.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.4.2}IBM Model 2}{106}{subsection.3.4.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.4.3}Decoding and Computational Optimization}{107}{subsection.3.4.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.4.4}Training}{108}{subsection.3.4.4}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The Objective Function}{108}{section*.93}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Optimization}{109}{section*.95}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.5}IBM Models 3-5 and the Hidden Markov Model}{115}{section.3.5}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.5.1}Fertility-based Translation Models}{115}{subsection.3.5.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.5.2}IBM Model 3}{118}{subsection.3.5.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.5.3}IBM Model 4}{119}{subsection.3.5.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.5.4} IBM Model 5}{121}{subsection.3.5.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.5.5}The Hidden Markov Model}{122}{subsection.3.5.5}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Hidden Markov Models}{123}{section*.107}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The Word Alignment Model}{124}{section*.109}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.5.6}Decoding and Training}{125}{subsection.3.5.6}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.6}Problem Analysis}{125}{section.3.6}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.6.1}Word Alignment and Symmetrization}{125}{subsection.3.6.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.6.2}Deficiency}{126}{subsection.3.6.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.6.3}Sentence Length}{127}{subsection.3.6.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.6.4}Other Issues}{128}{subsection.3.6.4}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.7}Summary and Further Reading}{128}{section.3.7}
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {4}Phrase-based and Syntax-based Machine Translation Models}{131}{chapter.4}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.1}Structural Information in Translation}{131}{section.4.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.1.1}Larger-Granularity Translation Units}{132}{subsection.4.1.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.1.2}Structural Information of Sentences}{134}{subsection.4.1.2}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.2}Phrase-based Translation Models}{136}{section.4.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.1}Phrases in Machine Translation}{136}{subsection.4.2.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.2}Mathematical Modeling and Discriminative Models}{139}{subsection.4.2.2}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Modeling Based on Translation Derivations}{139}{section*.121}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The Log-Linear Model}{140}{section*.122}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The Basic Workflow of Building a Model}{141}{section*.123}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.3}Phrase Extraction}{142}{subsection.4.2.3}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Phrases Consistent with Word Alignment}{143}{section*.126}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Obtaining Word Alignments}{144}{section*.130}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Measuring Bilingual Phrase Quality}{145}{section*.132}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.4}Reordering}{146}{subsection.4.2.4}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Distance-based Reordering}{146}{section*.136}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Orientation-based Reordering}{147}{section*.138}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Classification-based Reordering}{149}{section*.141}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.5}Features}{149}{subsection.4.2.5}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.6}Minimum Error Rate Training}{150}{subsection.4.2.6}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.7}Stack Decoding}{153}{subsection.4.2.7}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Matching Translation Candidates}{154}{section*.146}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Expanding Translation Hypotheses}{154}{section*.148}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Pruning}{155}{section*.150}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Stack Structures in Decoding}{156}{section*.152}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.3}Hierarchical Phrase-based Models}{157}{section.4.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.3.1}Synchronous Context-Free Grammars}{160}{subsection.4.3.1}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Grammar Definition}{161}{section*.157}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Derivations}{162}{section*.158}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Glue Rules}{163}{section*.159}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The Processing Pipeline}{164}{section*.160}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.3.2}Hierarchical Phrase Rule Extraction}{164}{subsection.4.3.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.3.3}Translation Model and Features}{165}{subsection.4.3.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.3.4}CYK Decoding}{166}{subsection.4.3.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.3.5}Cube Pruning}{170}{subsection.4.3.5}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.4}Models Based on Linguistic Syntax}{173}{section.4.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.1}A Taxonomy of Syntax-based Translation Models}{175}{subsection.4.4.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.2}Tree-Structured Grammars}{175}{subsection.4.4.2}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Tree-to-Tree Translation Rules}{177}{section*.176}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Tree-Structured Translation Derivations}{179}{section*.178}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Tree-to-String Translation Rules}{181}{section*.181}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.3}Tree-to-String Rule Extraction}{182}{subsection.4.4.3}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Tree Cutting and Minimal Rules}{183}{section*.183}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Handling Null Alignments}{186}{section*.189}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Composed Rules}{187}{section*.191}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{SPMT Rules}{188}{section*.193}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Syntax-Tree Binarization}{189}{section*.195}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.4}Tree-to-Tree Rule Extraction}{190}{subsection.4.4.4}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Rule Extraction Based on Node Alignment}{191}{section*.199}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Rule Extraction Based on Alignment Matrices}{191}{section*.202}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.5}Features of Syntactic Translation Models}{192}{subsection.4.4.5}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.6}Hypergraph Representations of the Derivation Space}{195}{subsection.4.4.6}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.7}Tree-based Decoding vs String-based Decoding}{197}{subsection.4.4.7}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Tree-based Decoding}{198}{section*.209}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{String-based Decoding}{200}{section*.212}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.5}Summary and Further Reading}{201}{section.4.5}
\defcounter {refsection}{0}\relax
\contentsline {part}{\@mypartnumtocformat {III}{Neural Machine Translation}}{205}{part.3}
\ttl@stoptoc {default@2}
\ttl@starttoc {default@3}
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {5}Artificial Neural Networks and Neural Language Modeling}{207}{chapter.5}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.1}Deep Learning and Artificial Neural Networks}{208}{section.5.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.1.1}A Brief History}{208}{subsection.5.1.1}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Early Neural Networks and the First Winter}{208}{section*.214}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The Second Wave and the Second Winter of Neural Networks}{209}{section*.215}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The Rise of Deep Learning and Neural Network Methods}{210}{section*.216}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.1.2}Why Deep Learning Is Needed}{211}{subsection.5.1.2}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{End-to-End Learning and Representation Learning}{211}{section*.218}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The Effectiveness of Deep Learning}{212}{section*.220}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.2}Neural Network Fundamentals}{212}{section.5.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.2.1}Linear Algebra Basics}{212}{subsection.5.2.1}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Scalars, Vectors, and Matrices}{213}{section*.222}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Matrix Transpose}{214}{section*.223}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Matrix Addition and Scalar Multiplication}{214}{section*.224}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Matrix Multiplication and Element-wise Multiplication}{215}{section*.225}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Linear Maps}{216}{section*.226}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Norms}{217}{section*.227}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.2.2}Artificial Neurons and the Perceptron}{218}{subsection.5.2.2}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The Perceptron\ \raisebox {0.5mm}{------}\ the Simplest Artificial Neuron Model}{219}{section*.230}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Weights inside a Neuron}{220}{section*.233}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Neuron Inputs\ \raisebox {0.5mm}{------}\ Discrete vs Continuous}{221}{section*.235}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Learning the Parameters inside a Neuron}{221}{section*.237}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.2.3}Multi-Layer Neural Networks}{222}{subsection.5.2.3}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Linear Transformations and Activation Functions}{222}{section*.239}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Single-Layer Networks $\rightarrow $ Multi-Layer Networks}{224}{section*.246}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.2.4}Function Approximation Capacity}{225}{subsection.5.2.4}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.3}Tensor Implementation of Neural Networks}{229}{section.5.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.3.1} Tensors and Tensor Computation}{230}{subsection.5.3.1}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Tensors}{230}{section*.256}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Matrix Multiplication of Tensors}{232}{section*.259}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Element-wise Tensor Operations}{233}{section*.261}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.3.2}Physical Storage of Tensors}{234}{subsection.5.3.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.3.3}Tensor Computation with Open-Source Frameworks}{234}{subsection.5.3.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.3.4}Forward Propagation and Computation Graphs}{236}{subsection.5.3.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.3.5}A Neural Network Example}{239}{subsection.5.3.5}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.4}Training Neural Network Parameters}{240}{section.5.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.4.1}Loss Functions}{241}{subsection.5.4.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.4.2}Gradient-based Parameter Optimization}{241}{subsection.5.4.2}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Gradient Descent}{242}{section*.279}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Obtaining Gradients}{244}{section*.281}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Variants and Improvements of Gradient-based Methods}{247}{section*.285}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.4.3}Parallelization Strategies for Parameter Updates}{250}{subsection.5.4.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.4.4}Vanishing Gradients, Exploding Gradients, and Training Stability}{252}{subsection.5.4.4}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Activation Functions That Are Easy to Optimize}{252}{section*.288}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Gradient Clipping}{253}{section*.292}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Stability Training}{254}{section*.293}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.4.5}Overfitting}{255}{subsection.5.4.5}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.4.6}Back-Propagation}{256}{subsection.5.4.6}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Back-Propagation at the Output Layer}{257}{section*.296}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Back-Propagation at Hidden Layers}{259}{section*.300}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Implementation}{260}{section*.303}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.5}Neural Language Models}{261}{section.5.5}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.5.1}Language Modeling with Neural Networks}{262}{subsection.5.5.1}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Feed-Forward Neural Language Models}{263}{section*.306}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Recurrent Neural Language Models}{265}{section*.309}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Self-Attention Language Models}{266}{section*.311}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Evaluating Language Models}{267}{section*.313}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.5.2}Word Representation Models}{268}{subsection.5.5.2}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{One-hot Encoding}{268}{section*.314}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Distributed Representations}{268}{section*.316}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.5.3}Sentence Representation Models and Pre-training}{270}{subsection.5.5.3}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Simple Context Representation Models}{270}{section*.320}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The ELMO Model}{272}{section*.323}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The GPT Model}{272}{section*.325}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{The BERT Model}{273}{section*.327}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Why Pre-train?}{274}{section*.329}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.6}Summary and Further Reading}{275}{section.5.6}
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {6}Neural Machine Translation Models}{277}{chapter.6}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {6.1}A Brief History of Neural Machine Translation}{277}{section.6.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.1.1}The Origins of Neural Machine Translation}{279}{subsection.6.1.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.1.2}The Quality of Neural Machine Translation}{281}{subsection.6.1.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.1.3}The Advantages of Neural Machine Translation}{284}{subsection.6.1.3}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {6.2}The Encoder-Decoder Framework}{286}{section.6.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.2.1}Framework Structure}{286}{subsection.6.2.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.2.2}Representation Learning}{287}{subsection.6.2.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.2.3}A Simple Running Example}{288}{subsection.6.2.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.2.4}Comparing Machine Translation Paradigms}{289}{subsection.6.2.4}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {6.3}RNN-based Translation Models and the Attention Mechanism}{290}{section.6.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.1}Modeling}{291}{subsection.6.3.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.2}Input (Word Embeddings) and Output (Softmax)}{294}{subsection.6.3.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.3}Recurrent Neural Network Architectures}{297}{subsection.6.3.3}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Recurrent Units (RNN)}{297}{section*.351}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Long Short-Term Memory Networks (LSTM)}{298}{section*.352}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Gated Recurrent Units (GRU)}{299}{section*.355}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Bidirectional Models}{301}{section*.357}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Multi-Layer Recurrent Neural Networks}{302}{section*.359}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.4}The Attention Mechanism}{302}{subsection.6.3.4}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Attention in Translation}{304}{section*.362}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Computing the Context Vector}{305}{section*.365}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Interpreting the Attention Mechanism}{308}{section*.370}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.5}Training}{310}{subsection.6.3.5}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{损失函数}{310}{section*.373}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{参数初始化}{311}{section*.374}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{优化策略}{311}{section*.375}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{梯度裁剪}{312}{section*.377}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{学习率策略}{312}{section*.378}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{并行训练}{313}{section*.381}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.6}推断}{315}{subsection.6.3.6}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{贪婪搜索}{317}{section*.385}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{束搜索}{318}{section*.388}
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{长度惩罚}{318}{section*.390}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.7}实例-GNMT}{320}{subsection.6.3.7}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {6.4}Transformer}{320}{section.6.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.1}自注意力模型}{322}{subsection.6.4.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.2}Transformer架构}{324}{subsection.6.4.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.3}位置编码}{326}{subsection.6.4.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.4}基于点乘的注意力机制}{328}{subsection.6.4.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.5}掩码操作}{330}{subsection.6.4.5}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.6}多头注意力}{331}{subsection.6.4.6}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.7}残差网络和层正则化}{332}{subsection.6.4.7}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.8}前馈全连接网络子层}{333}{subsection.6.4.8}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.9}训练}{334}{subsection.6.4.9}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.10}推断}{337}{subsection.6.4.10}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {6.5}序列到序列问题及应用}{337}{section.6.5}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.5.1}自动问答}{338}{subsection.6.5.1}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.5.2}自动文摘}{338}{subsection.6.5.2}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.5.3}文言文翻译}{339}{subsection.6.5.3}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.5.4}对联生成}{339}{subsection.6.5.4}
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.5.5}古诗生成}{340}{subsection.6.5.5}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {6.6}小结及深入阅读}{341}{section.6.6}
\defcounter {refsection}{0}\relax
\contentsline {part}{\@mypartnumtocformat {IV}{附录}}{343}{part.4}
\ttl@stoptoc {default@3}
\ttl@starttoc {default@4}
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {A}附录A}{345}{Appendix.1.A}
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {B}附录B}{347}{Appendix.2.B}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {B.1}IBM模型3训练方法}{347}{section.2.B.1}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {B.2}IBM模型4训练方法}{349}{section.2.B.2}
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {B.3}IBM模型5训练方法}{351}{section.2.B.3}
\contentsfinish
Book/mt-book-xelatex.tex (view file @ 1f839aed)
@@ -110,14 +110,14 @@
 % CHAPTERS
 %----------------------------------------------------------------------------------------
-\include{Chapter1/chapter1}
+%\include{Chapter1/chapter1}
-\include{Chapter2/chapter2}
+%\include{Chapter2/chapter2}
-\include{Chapter3/chapter3}
+%\include{Chapter3/chapter3}
 \include{Chapter4/chapter4}
-\include{Chapter5/chapter5}
+%\include{Chapter5/chapter5}
-\include{Chapter6/chapter6}
+%\include{Chapter6/chapter6}
 %\include{Chapter7/chapter7}
-\include{ChapterAppend/chapterappend}
+%\include{ChapterAppend/chapterappend}
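The diff above comments out every chapter except Chapter 4, presumably to shorten build times while section 4 is being revised. As a side note, the same effect can be obtained without touching the `\include` lines at all: LaTeX's `\includeonly` restricts compilation to the listed files while reusing the `.aux` data of the others, so page numbers and cross-references from a previous full build stay intact. The following is a minimal sketch (not part of the repository), assuming the standard chapter layout used in `mt-book-xelatex.tex`:

```latex
\documentclass{book}
% Compile only Chapter 4; the other \include'd files are skipped,
% but their .aux files from an earlier full build are still read,
% so numbering and \ref targets remain stable.
\includeonly{Chapter4/chapter4}
\begin{document}
\include{Chapter1/chapter1}
\include{Chapter2/chapter2}
\include{Chapter3/chapter3}
\include{Chapter4/chapter4}
\include{Chapter5/chapter5}
\include{Chapter6/chapter6}
\end{document}
```

Removing the `\includeonly` line (or emptying its argument list) restores a full build without further edits.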