NiuTrans / Toy-MT-Introduction / Commits

Commit bd87e7ad, authored Nov 24, 2021 by zengxin

    Merge branch 'master' into 'zengxin'

    See merge request !295

Parents: e10fac8e, dee6cf16

Showing 17 changed files with 1071 additions and 1074 deletions
Book/Chapter3/Figures/figure-human-translation.tex                         +2  -2
Book/Chapter3/Figures/figure-noise-channel-model.tex                       +2  -1
Book/Chapter3/Figures/figure-process-of-machine-translation.tex            +2  -2
Book/Chapter3/Figures/greedy-mt-decoding-process-1.tex                     +6  -6
Book/Chapter3/Figures/greedy-mt-decoding-process-3.tex                     +6  -6
Book/Chapter3/chapter3.tex                                                 +12 -12
Book/Chapter4/Figures/grid-search-2.tex                                    +0  -43
Book/Chapter4/Figures/grid-search.tex                                      +44 -2
Book/Chapter4/Figures/search-space-representation-of-feature-weight-1.tex  +0  -30
Book/Chapter4/Figures/search-space-representation-of-feature-weight-2.tex  +0  -41
Book/Chapter4/Figures/search-space-representation-of-feature-weight.tex    +69 -2
Book/Chapter4/chapter4.tex                                                 +2  -6
Book/ChapterAppend/chapterappend.tex                                       +20 -20
Book/mt-book-xelatex.idx                                                   +526 -526
Book/mt-book-xelatex.ptc                                                   +359 -354
Section03-Word-Based-Models/section03.tex                                  +19 -19
Section05-Neural-Networks-and-Language-Modeling/section05.tex              +2  -2
Book/Chapter3/Figures/figure-human-translation.tex (view file @ bd87e7ad)

@@ -8,7 +8,7 @@
 \node [anchor=west] (s1) at (0,0) {{我}};
 \node [anchor=west] (s2) at ([xshift=2em]s1.east) {{对}};
 \node [anchor=west] (s3) at ([xshift=2em]s2.east) {{你}};
-\node [anchor=west] (s4) at ([xshift=2em]s3.east) {{表示}};
+\node [anchor=west] (s4) at ([xshift=2em]s3.east) {{感到}};
 \node [anchor=west] (s5) at ([xshift=2em]s4.east) {{满意}};
 \node [anchor=south west] (sentlabel) at ([yshift=-0.5em]s1.north west) {\scriptsize{\sffamily\bfseries{\color{red}{待翻译句子(已经分词):}}}};
@@ -38,7 +38,7 @@
 \node [anchor=north west,inner sep=1pt,fill=black] (tl31) at (t31.north west) {\tiny{{\color{white}\textbf{3}}}};
 \node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3em] (t41) at ([yshift=-1em]s4.south) {$\phi$};
-\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3em] (t42) at ([yshift=-0.2em]t41.south) {show};
+\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3em] (t42) at ([yshift=-0.2em]t41.south) {feel};
 \node [anchor=north west,inner sep=1pt,fill=black] (tl41) at (t41.north west) {\tiny{{\color{white}\textbf{4}}}};
 \node [anchor=north west,inner sep=1pt,fill=black] (tl42) at (t42.north west) {\tiny{{\color{white}\textbf{4}}}};
Book/Chapter3/Figures/figure-noise-channel-model.tex (view file @ bd87e7ad)

@@ -9,7 +9,8 @@
 \node [draw,red,fill=red!10,thick,anchor=center,circle,inner sep=3.5pt] (s) at (0,0) {\black{$\mathbf{s}$}};
 \node [draw,ublue,fill=blue!10,thick,anchor=center,circle,inner sep=3.3pt] (t) at ([xshift=1.5in]s.east) {\black{$\mathbf{t}$}};
-\draw [<->,thick,] (s.east) -- (t.west) node [pos=0.5,draw,fill=white] {噪声信道};
+\draw [->,thick,] (s.east) -- (t.west) node [pos=0.5,draw,fill=white] {噪声信道};
+\draw [->,thick] (s.east) -- ([xshift=2.2em]s.east);
 \node [anchor=east] at (s.west) {\scriptsize{信宿}};
 \node [anchor=west] at (t.east) {\scriptsize{信源}};
Book/Chapter3/Figures/figure-process-of-machine-translation.tex (view file @ bd87e7ad)

@@ -5,7 +5,7 @@
 \node [anchor=west] (s1) at (0,0) {{我}};
 \node [anchor=west] (s2) at ([xshift=2em]s1.east) {{对}};
 \node [anchor=west] (s3) at ([xshift=2em]s2.east) {{你}};
-\node [anchor=west] (s4) at ([xshift=2em]s3.east) {{表示}};
+\node [anchor=west] (s4) at ([xshift=2em]s3.east) {{感到}};
 \node [anchor=west] (s5) at ([xshift=2em]s4.east) {{满意}};
 \node [anchor=south west] (sentlabel) at ([yshift=-0.5em]s1.north west) {\scriptsize{{\color{red}{待翻译句子(已经分词):}}}};
@@ -35,7 +35,7 @@
 \node [anchor=north west,inner sep=1pt,fill=black] (tl31) at (t31.north west) {\tiny{{\color{white}\textbf{3}}}};
 \node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3em] (t41) at ([yshift=-1em]s4.south) {$\phi$};
-\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3em] (t42) at ([yshift=-0.2em]t41.south) {show};
+\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3em] (t42) at ([yshift=-0.2em]t41.south) {feel};
 \node [anchor=north west,inner sep=1pt,fill=black] (tl41) at (t41.north west) {\tiny{{\color{white}\textbf{4}}}};
 \node [anchor=north west,inner sep=1pt,fill=black] (tl42) at (t42.north west) {\tiny{{\color{white}\textbf{4}}}};
Book/Chapter3/Figures/greedy-mt-decoding-process-1.tex (view file @ bd87e7ad)

@@ -16,7 +16,7 @@
 \node [anchor=west] (s1) at (0,0) {{我}};
 \node [anchor=west] (s2) at ([xshift=3em]s1.east) {{对}};
 \node [anchor=west] (s3) at ([xshift=3em]s2.east) {{你}};
-\node [anchor=west] (s4) at ([xshift=2.5em]s3.east) {{表示}};
+\node [anchor=west] (s4) at ([xshift=2.5em]s3.east) {{感到}};
 \node [anchor=west] (s5) at ([xshift=2.5em]s4.east) {{满意}};
 \node [anchor=south west,inner sep=1pt] (sentlabel) at ([yshift=0.3em]s1.north west) {\scriptsize{{输入: 待翻译句子(已经分词)}}};
@@ -53,8 +53,8 @@
 {
 \node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t41) at ([yshift=-1.3em]s4.south) {$\phi$};
-\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t42) at ([yshift=-0.2em]t41.south) {show};
-\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t43) at ([yshift=-0.2em]t42.south) {shows};
+\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t42) at ([yshift=-0.2em]t41.south) {feel};
+\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t43) at ([yshift=-0.2em]t42.south) {feels};
 }
 {
@@ -121,7 +121,7 @@
 \node [anchor=west] (s1) at (0,0) {{我}};
 \node [anchor=west] (s2) at ([xshift=3em]s1.east) {{对}};
 \node [anchor=west] (s3) at ([xshift=3em]s2.east) {{你}};
-\node [anchor=west] (s4) at ([xshift=2.5em]s3.east) {{表示}};
+\node [anchor=west] (s4) at ([xshift=2.5em]s3.east) {{感到}};
 \node [anchor=west] (s5) at ([xshift=2.5em]s4.east) {{满意}};
 \node [anchor=south west,inner sep=1pt] (sentlabel) at ([yshift=0.3em]s1.north west) {\scriptsize{{输入: 待翻译句子(已经分词)}}};
@@ -160,8 +160,8 @@
 {
 \node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t41) at ([yshift=-1.3em]s4.south) {$\phi$};
-\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t42) at ([yshift=-0.2em]t41.south) {show};
-\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t43) at ([yshift=-0.2em]t42.south) {shows};
+\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t42) at ([yshift=-0.2em]t41.south) {feel};
+\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t43) at ([yshift=-0.2em]t42.south) {feels};
 }
Book/Chapter3/Figures/greedy-mt-decoding-process-3.tex (view file @ bd87e7ad)

@@ -11,7 +11,7 @@
 \node [anchor=west] (s1) at (0,0) {{我}};
 \node [anchor=west] (s2) at ([xshift=3em]s1.east) {{对}};
 \node [anchor=west] (s3) at ([xshift=3em]s2.east) {{你}};
-\node [anchor=west] (s4) at ([xshift=2.5em]s3.east) {{表示}};
+\node [anchor=west] (s4) at ([xshift=2.5em]s3.east) {{感到}};
 \node [anchor=west] (s5) at ([xshift=2.5em]s4.east) {{满意}};
 \node [anchor=south west,inner sep=1pt] (sentlabel) at ([yshift=0.3em]s1.north west) {\scriptsize{{输入: 待翻译句子(已经分词)}}};
@@ -50,8 +50,8 @@
 {
 \node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t41) at ([yshift=-1.3em]s4.south) {$\phi$};
-\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t42) at ([yshift=-0.2em]t41.south) {show};
-\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t43) at ([yshift=-0.2em]t42.south) {shows};
+\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t42) at ([yshift=-0.2em]t41.south) {feel};
+\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t43) at ([yshift=-0.2em]t42.south) {feels};
 }
@@ -176,7 +176,7 @@
 \node [anchor=west] (s1) at (0,0) {{我}};
 \node [anchor=west] (s2) at ([xshift=3em]s1.east) {{对}};
 \node [anchor=west] (s3) at ([xshift=3em]s2.east) {{你}};
-\node [anchor=west] (s4) at ([xshift=2.5em]s3.east) {{表示}};
+\node [anchor=west] (s4) at ([xshift=2.5em]s3.east) {{感到}};
 \node [anchor=west] (s5) at ([xshift=2.5em]s4.east) {{满意}};
 \node [anchor=south west,inner sep=1pt] (sentlabel) at ([yshift=0.3em]s1.north west) {\scriptsize{{输入: 待翻译句子(已经分词)}}};
@@ -215,8 +215,8 @@
 {
 \node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t41) at ([yshift=-1.3em]s4.south) {$\phi$};
-\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t42) at ([yshift=-0.2em]t41.south) {show};
-\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t43) at ([yshift=-0.2em]t42.south) {shows};
+\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t42) at ([yshift=-0.2em]t41.south) {feel};
+\node [anchor=north,inner sep=2pt,fill=orange!20,minimum height=1.5em,minimum width=3.5em] (t43) at ([yshift=-0.2em]t42.south) {feels};
 }
Book/Chapter3/chapter3.tex (view file @ bd87e7ad)

@@ -111,7 +111,7 @@
 %----------------------------------------------
 \vspace{-0.2em}
-\parinterval 图\ref{fig:3-3}展示了人在翻译``我 对 你 表示 满意''时可能会思考的内容。具体来说,有如下两方面内容。
+\parinterval 图\ref{fig:3-3}展示了人在翻译``我\;对\;你\;感到\;满意''时可能会思考的内容。具体来说,有如下两方面内容。
 \begin{itemize}
 \vspace{0.5em}
@@ -243,9 +243,9 @@
 \begin{example}
 一个汉英互译的句对
-\qquad\qquad\quad $\mathbf{s}$ = 机器\quad {\color{red}翻译}\;就\;是\;用\;计算机\;来\;进行\;{\color{red}翻译}
+\qquad\qquad\quad $\mathbf{s}$ = 机器\quad {\color{red}翻译}\;就\;是\;用\;计算机\;来\;生成\;{\color{red}翻译}\;的\;过程
-\qquad\qquad\quad $\mathbf{t}$ = machine\;{\color{red}translation}\;is\;just\;{\color{red}translation}\;by\;computer
+\qquad\qquad\quad $\mathbf{t}$ = machine\;{\color{red}translation}\;is\;a\;process\;of\;generating\;a\;{\color{red}translation}\;by\;computer
 \label{eg:3-1}
 \end{example}
@@ -253,14 +253,14 @@
 \begin{eqnarray}
 \textrm{P}(\text{``翻译''},\text{``translation''};\mathbf{s},\mathbf{t}) & = & \frac{c(\textrm{``翻译''},\textrm{``translation''};\mathbf{s},\mathbf{t})}{\sum_{x',y'} c(x',y';\mathbf{s},\mathbf{t})} \nonumber \\
 & = & \frac{4}{|\mathbf{s}| \times |\mathbf{t}|} \nonumber \\
-& = & \frac{4}{63}
+& = & \frac{4}{121}
 \label{eq:3-2}
 \end{eqnarray}
 \noindent 这里运算$|\cdot|$表示句子长度。类似的,可以得到``机器''和``translation''、``机器''和``look''的单词翻译概率:
 \begin{eqnarray}
-\textrm{P}(\text{``机器''},\text{``translation''};\mathbf{s},\mathbf{t}) & = & \frac{2}{63} \\
-\textrm{P}(\text{``机器''},\text{``look''};\mathbf{s},\mathbf{t}) & = & \frac{0}{63}
+\textrm{P}(\text{``机器''},\text{``translation''};\mathbf{s},\mathbf{t}) & = & \frac{2}{121} \\
+\textrm{P}(\text{``机器''},\text{``look''};\mathbf{s},\mathbf{t}) & = & \frac{0}{121}
 \label{eq:3-3}
 \end{eqnarray}
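As a quick cross-check of the new constants in this hunk (a verification sketch added for the reader, not part of the commit): the revised example sentence $\mathbf{s}$ = ``机器 翻译 就 是 用 计算机 来 生成 翻译 的 过程'' segments into 11 tokens, and $\mathbf{t}$ = ``machine translation is a process of generating a translation by computer'' into 11 tokens, so

\begin{eqnarray}
% token counts taken from the updated example eg:3-1 above
\frac{4}{|\mathbf{s}| \times |\mathbf{t}|} & = & \frac{4}{11 \times 11} \nonumber \\
& = & \frac{4}{121} \nonumber
\end{eqnarray}

The numerator stays 4 because ``翻译'' and ``translation'' still occur twice each ($2 \times 2 = 4$); the $2/121$ and $0/121$ values in eq:3-3 follow the same way.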
@@ -283,13 +283,13 @@
 \begin{example}
 两个汉英互译的句对
-\qquad\qquad\; $\mathbf{s}^1$ = 机器\quad {\color{red}翻译}\;就\;是\;用\;计算机\;来\;进行\;{\color{red}翻译}
+\qquad\qquad\; $\mathbf{s}^{[1]}$ = 机器\quad {\color{red}翻译}\;就\;是\;用\;计算机\;来\;生成\;{\color{red}翻译}\;的\;过程
-\qquad\qquad\; $\mathbf{s}^1$ = Machine\;{\color{red}translation}\;is\;just\;{\color{red}translation}\;by\;computer
+\qquad\qquad\; $\mathbf{t}^{[1]}$ = machine\;{\color{red}translation}\;is\;a\;process\;of\;generating\;a\;{\color{red}translation}\;by\;computer
-\qquad\qquad\; $\mathbf{s}^2$ = 那\quad 人工\quad {\color{red}翻译}\quad 呢\quad ?
+\qquad\qquad\; $\mathbf{s}^{[2]}$ = 那\quad 人工\quad {\color{red}翻译}\quad 呢\quad ?
-\qquad\qquad\; $\mathbf{t}^2$ = So\;,\;what\;is\;human\;{\color{red}translation}\;?
+\qquad\qquad\; $\mathbf{t}^{[2]}$ = So\;,\;what\;is\;human\;{\color{red}translation}\;?
 \label{eg:3-2}
 \end{example}
@@ -298,8 +298,8 @@
 \begin{eqnarray}
 {\textrm{P}(\textrm{``翻译''},\textrm{``translation''})} & = & {\frac{c(\textrm{``翻译''},\textrm{``translation''};\mathbf{s}^{[1]},\mathbf{t}^{[1]})+c(\textrm{``翻译''},\textrm{``translation''};\mathbf{s}^{[2]},\mathbf{t}^{[2]})}{\sum_{x',y'} c(x',y';\mathbf{s}^{[1]},\mathbf{t}^{[1]}) + \sum_{x',y'} c(x',y';\mathbf{s}^{[2]},\mathbf{t}^{[2]})}} \nonumber \\
 & = & \frac{4 + 1}{|\mathbf{s}^{[1]}| \times |\mathbf{t}^{[1]}| + |\mathbf{s}^{[2]}| \times |\mathbf{t}^{[2]}|} \nonumber \\
-& = & \frac{4 + 1}{9 \times 7 + 5 \times 7} \nonumber \\
-& = & \frac{5}{98}
+& = & \frac{4 + 1}{11 \times 11 + 5 \times 7} \nonumber \\
+& = & \frac{5}{156}
 \label{eq:3-5}
 \end{eqnarray}
 }
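The same token counting verifies the two-sentence estimate updated in the hunk above (again a verification sketch, not part of the commit): with $|\mathbf{s}^{[1]}| = |\mathbf{t}^{[1]}| = 11$, $|\mathbf{s}^{[2]}| = 5$ and $|\mathbf{t}^{[2]}| = 7$,

\begin{eqnarray}
% 翻译/translation co-occur 2x2=4 times in the first pair, 1x1=1 in the second
\frac{4+1}{|\mathbf{s}^{[1]}| \times |\mathbf{t}^{[1]}| + |\mathbf{s}^{[2]}| \times |\mathbf{t}^{[2]}|} & = & \frac{5}{121 + 35} \nonumber \\
& = & \frac{5}{156} \nonumber
\end{eqnarray}

which replaces the old $5/98$, computed from the previous lengths as $9 \times 7 + 5 \times 7$.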
Book/Chapter4/Figures/grid-search-2.tex (deleted, 100644 → 0; view file @ e10fac8e)

-\begin{tikzpicture}
-\begin{scope}[scale=0.62]
-{
-\tiny
-\draw [step=1,help lines,color=black] (0,0) grid (4,4);
-\node [anchor=north] (y2) at ([xshift=-3.3em,yshift=0em]n1.north) {0.01};
-\node [anchor=north] (y1) at ([xshift=0em,yshift=-3.3em]y2.south) {0.00};
-\node [anchor=north] (y3) at ([xshift=0em,yshift=4.5em]y2.north) {0.02};
-\node [anchor=north] (y4) at ([xshift=0em,yshift=6.6em]y3.north) {$\vdots$};
-\node [anchor=north] (y5) at ([xshift=0em,yshift=2em]y4.north) {1.00};
-\node [anchor=north] (x1) at ([xshift=2em,yshift=-3em]n1.south) {$\lambda_1$};
-\node [anchor=north] (x2) at ([xshift=4.5em,yshift=0em]x1.north) {$\lambda_2$};
-\node [anchor=north] (x3) at ([xshift=4em,yshift=-1em]x2.north) {$...$};
-\node [anchor=north] (x4) at ([xshift=5em,yshift=1em]x3.north) {$\lambda_{M-1}$};
-\node [anchor=north] (x5) at ([xshift=5em,yshift=0em]x4.north) {$\lambda_M$};
-\draw [-](n1) (0,4) -- (0,4.4);
-\draw [-](n2) (1,4) -- (1,4.4);
-\draw [-](n3) (2,4) -- (2,4.4);
-\draw [-](n4) (3,4) -- (3,4.4);
-\draw [-](n5) (4,4) -- (4,4.4);
-\node [anchor=center,draw,circle,inner sep=1.5pt,red!30,fill=red!30] (r31) at (2,4) {};
-\node [anchor=center,draw,circle,inner sep=1.5pt,red!30,fill=red!30] (r32) at (2,0) {};
-\node [anchor=center,draw,circle,inner sep=1.5pt,red!30,fill=red!30] (r33) at (2,2) {};
-\node [anchor=center,draw,circle,inner sep=1.5pt,red!30,fill=red!30] (r35) at (2,1) {};
-\node [anchor=center,draw,circle,inner sep=1.5pt,ugreen!50,fill=ugreen!50] (r34) at (2,3) {};
-\draw [-,very thick,red!50, dashed] (1,2) -- (2,4) -- (3,2) -- (2,3) -- (1,2) -- (3,2) -- (2,1) -- (1,2) -- (2,0) -- (3,2);
-\draw [-,very thick,blue!50] (0,1) -- (1,2);
-\draw [-,very thick,blue!50] (3,2) -- (4,4);
-\draw [-,very thick,ugreen!50, dashed] (1,2) -- (2,3) -- (3,2);
-\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r11) at (0,1) {};
-\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r12) at (1,2) {};
-\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r14) at (3,2) {};
-\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r15) at (4,4) {};
-}
-\end{scope}
-\end{tikzpicture}
\ No newline at end of file
Book/Chapter4/Figures/grid-search-1.tex → Book/Chapter4/Figures/grid-search.tex (view file @ bd87e7ad)

@@ -3,13 +3,13 @@
 {
 \tiny
 \draw [step=1,help lines,color=black] (0,0) grid (4,4);
-\node [anchor=north] (y2) at ([xshift=-3.3em,yshift=0em]n1.north) {0.01};
+\node [anchor=north] (y2) at (-5.3em,1.5) {0.01};
 \node [anchor=north] (y1) at ([xshift=0em,yshift=-3.3em]y2.south) {0.00};
 \node [anchor=north] (y3) at ([xshift=0em,yshift=4.5em]y2.north) {0.02};
 \node [anchor=north] (y4) at ([xshift=0em,yshift=6.6em]y3.north) {$\vdots$};
 \node [anchor=north] (y5) at ([xshift=0em,yshift=2em]y4.north) {1.00};
-\node [anchor=north] (x1) at ([xshift=2em,yshift=-3em]n1.south) {$\lambda_1$};
+\node [anchor=north] (x1) at (1em,-3em) {$\lambda_1$};
 \node [anchor=north] (x2) at ([xshift=4.5em,yshift=0em]x1.north) {$\lambda_2$};
 \node [anchor=north] (x3) at ([xshift=4em,yshift=-1em]x2.north) {$...$};
 \node [anchor=north] (x4) at ([xshift=5em,yshift=1em]x3.north) {$\lambda_{M-1}$};
@@ -44,4 +44,45 @@
 \node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r15) at (4,4) {};
 }
 \end{scope}
+\begin{scope}[scale=0.62,xshift=3in]
+{
+\tiny
+\draw [step=1,help lines,color=black] (0,0) grid (4,4);
+\node [anchor=north] (y2) at (-5.3em,1.5) {0.01};
+\node [anchor=north] (y1) at ([xshift=0em,yshift=-3.3em]y2.south) {0.00};
+\node [anchor=north] (y3) at ([xshift=0em,yshift=4.5em]y2.north) {0.02};
+\node [anchor=north] (y4) at ([xshift=0em,yshift=6.6em]y3.north) {$\vdots$};
+\node [anchor=north] (y5) at ([xshift=0em,yshift=2em]y4.north) {1.00};
+\node [anchor=north] (x1) at (1em,-3em) {$\lambda_1$};
+\node [anchor=north] (x2) at ([xshift=4.5em,yshift=0em]x1.north) {$\lambda_2$};
+\node [anchor=north] (x3) at ([xshift=4em,yshift=-1em]x2.north) {$...$};
+\node [anchor=north] (x4) at ([xshift=5em,yshift=1em]x3.north) {$\lambda_{M-1}$};
+\node [anchor=north] (x5) at ([xshift=5em,yshift=0em]x4.north) {$\lambda_M$};
+\draw [-](n1) (0,4) -- (0,4.4);
+\draw [-](n2) (1,4) -- (1,4.4);
+\draw [-](n3) (2,4) -- (2,4.4);
+\draw [-](n4) (3,4) -- (3,4.4);
+\draw [-](n5) (4,4) -- (4,4.4);
+\node [anchor=center,draw,circle,inner sep=1.5pt,red!30,fill=red!30] (r31) at (2,4) {};
+\node [anchor=center,draw,circle,inner sep=1.5pt,red!30,fill=red!30] (r32) at (2,0) {};
+\node [anchor=center,draw,circle,inner sep=1.5pt,red!30,fill=red!30] (r33) at (2,2) {};
+\node [anchor=center,draw,circle,inner sep=1.5pt,red!30,fill=red!30] (r35) at (2,1) {};
+\node [anchor=center,draw,circle,inner sep=1.5pt,ugreen!50,fill=ugreen!50] (r34) at (2,3) {};
+\draw [-,very thick,red!50, dashed] (1,2) -- (2,4) -- (3,2) -- (2,3) -- (1,2) -- (3,2) -- (2,1) -- (1,2) -- (2,0) -- (3,2);
+\draw [-,very thick,blue!50] (0,1) -- (1,2);
+\draw [-,very thick,blue!50] (3,2) -- (4,4);
+\draw [-,very thick,ugreen!50, dashed] (1,2) -- (2,3) -- (3,2);
+\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r11) at (0,1) {};
+\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r12) at (1,2) {};
+\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r14) at (3,2) {};
+\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r15) at (4,4) {};
+}
+\end{scope}
 \end{tikzpicture}
\ No newline at end of file
Book/Chapter4/Figures/search-space-representation-of-feature-weight-1.tex (deleted, 100644 → 0; view file @ e10fac8e)

-\begin{tikzpicture}
-\begin{scope}[scale=0.55]
-{
-\tiny
-\draw [step=1,help lines,color=black] grid (4,4);
-\node [anchor=north] (y2) at ([xshift=-3.3em,yshift=0em]n1.north) {0.01};
-\node [anchor=north] (y1) at ([xshift=0em,yshift=-3.3em]y2.south) {0.00};
-\node [anchor=north] (y3) at ([xshift=0em,yshift=4.5em]y2.north) {0.02};
-\node [anchor=north] (y4) at ([xshift=0em,yshift=6.6em]y3.north) {$\vdots$};
-\node [anchor=north] (y5) at ([xshift=0em,yshift=2em]y4.north) {1.00};
-\node [anchor=north] (x1) at ([xshift=2em,yshift=-3em]n1.south) {$\lambda_1$};
-\node [anchor=north] (x2) at ([xshift=4.5em,yshift=0em]x1.north) {$\lambda_2$};
-\node [anchor=north] (x3) at ([xshift=4em,yshift=-1em]x2.north) {$...$};
-\node [anchor=north] (x4) at ([xshift=5em,yshift=1em]x3.north) {$\lambda_{M-1}$};
-\node [anchor=north] (x5) at ([xshift=5em,yshift=0em]x4.north) {$\lambda_M$};
-\draw [-](n1) (0,4) -- (0,4.4);
-\draw [-](n2) (1,4) -- (1,4.4);
-\draw [-](n3) (2,4) -- (2,4.4);
-\draw [-](n4) (3,4) -- (3,4.4);
-\draw [-](n5) (4,4) -- (4,4.4);
-\draw [decorate,decoration={brace}] (0,4.7) --(4,4.7) node [xshift=-4em,yshift=1.5em,align=center](label1) {M dimensions};
-\draw [decorate,decoration={brace}] (4.5,4.3) --(4.5,0) node [xshift=2.3em,yshift=5.8em,align=center](label2) {Values};
-}
-\end{scope}
-\end{tikzpicture}
\ No newline at end of file
Book/Chapter4/Figures/search-space-representation-of-feature-weight-2.tex (deleted, 100644 → 0; view file @ e10fac8e)

-\begin{tikzpicture}
-\begin{scope}[scale=0.55]
-{
-\tiny
-\draw [step=1,help lines,color=black] grid (4,4);
-\node [anchor=north] (y2) at ([xshift=-3.3em,yshift=0em]n1.north) {0.01};
-\node [anchor=north] (y1) at ([xshift=0em,yshift=-3.3em]y2.south) {0.00};
-\node [anchor=north] (y3) at ([xshift=0em,yshift=4.5em]y2.north) {0.02};
-\node [anchor=north] (y4) at ([xshift=0em,yshift=6.6em]y3.north) {$\vdots$};
-\node [anchor=north] (y5) at ([xshift=0em,yshift=2em]y4.north) {1.00};
-\node [anchor=north] (x1) at ([xshift=2em,yshift=-3em]n1.south) {$\lambda_1$};
-\node [anchor=north] (x2) at ([xshift=4.5em,yshift=0em]x1.north) {$\lambda_2$};
-\node [anchor=north] (x3) at ([xshift=4em,yshift=-1em]x2.north) {$...$};
-\node [anchor=north] (x4) at ([xshift=5em,yshift=1em]x3.north) {$\lambda_{M-1}$};
-\node [anchor=north] (x5) at ([xshift=5em,yshift=0em]x4.north) {$\lambda_M$};
-\draw [-](n1) (0,4) -- (0,4.4);
-\draw [-](n2) (1,4) -- (1,4.4);
-\draw [-](n3) (2,4) -- (2,4.4);
-\draw [-](n4) (3,4) -- (3,4.4);
-\draw [-](n5) (4,4) -- (4,4.4);
-\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r11) at (0,1) {};
-\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r12) at (1,2) {};
-\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r13) at (2,1) {};
-\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r14) at (3,2) {};
-\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r15) at (4,4) {};
-\draw [-,very thick,blue!50] (0,1) -- (1,2) -- (2,1) -- (3,2) -- (4,4);
-\node [anchor=north] (p1) at ([xshift=5em,yshift=13em]n5.north) {\scriptsize{$\leftarrow$ \textbf{path}:}};
-\node [anchor=north] (e1) at ([xshift=0,yshift=-0.4em]p1.south) {$w_1=0.01$};
-\node [anchor=north] (e2) at ([xshift=0,yshift=-0.8em]e1.south) {$w_2=0.02$};
-\node [anchor=north] (e3) at ([xshift=0,yshift=0.4em]e2.south) {$\vdots$};
-\node [anchor=north] (e4) at ([xshift=0,yshift=-0.2em]e3.south) {$w_M=1.00$};
-}
-\end{scope}
-\end{tikzpicture}
\ No newline at end of file
Book/Chapter4/Figures/search-space-representation-of-feature-weight-3.tex → Book/Chapter4/Figures/search-space-representation-of-feature-weight.tex (view file @ bd87e7ad)

@@ -3,13 +3,80 @@
 {
 \tiny
 \draw [step=1,help lines,color=black] grid (4,4);
-\node [anchor=north] (y2) at ([xshift=-3.3em,yshift=0em]n1.north) {0.01};
+\draw [-](n1) (0,4) -- (0,4.4);
+\draw [-](n2) (1,4) -- (1,4.4);
+\draw [-](n3) (2,4) -- (2,4.4);
+\draw [-](n4) (3,4) -- (3,4.4);
+\draw [-](n5) (4,4) -- (4,4.4);
+\node [anchor=north] (y2) at (-5.3em,1.5) {0.01};
 \node [anchor=north] (y1) at ([xshift=0em,yshift=-3.3em]y2.south) {0.00};
 \node [anchor=north] (y3) at ([xshift=0em,yshift=4.5em]y2.north) {0.02};
 \node [anchor=north] (y4) at ([xshift=0em,yshift=6.6em]y3.north) {$\vdots$};
 \node [anchor=north] (y5) at ([xshift=0em,yshift=2em]y4.north) {1.00};
+\node [anchor=north] (x1) at (1em,-3em) {$\lambda_1$};
 \node [anchor=north] (x2) at ([xshift=4.5em,yshift=0em]x1.north) {$\lambda_2$};
 \node [anchor=north] (x3) at ([xshift=4em,yshift=-1em]x2.north) {$...$};
 \node [anchor=north] (x4) at ([xshift=5em,yshift=1em]x3.north) {$\lambda_{M-1}$};
 \node [anchor=north] (x5) at ([xshift=5em,yshift=0em]x4.north) {$\lambda_M$};
 \draw [decorate,decoration={brace}] (0,4.7) --(4,4.7) node [xshift=-4em,yshift=1.5em,align=center](label1) {M dimensions};
 \draw [decorate,decoration={brace}] (4.5,4.3) --(4.5,0) node [xshift=2.3em,yshift=5.8em,align=center](label2) {Values};
 }
 \end{scope}
+\begin{scope}[scale=0.55,xshift=3.2in]
+{
+\tiny
+\draw [step=1,help lines,color=black] grid (4,4);
+\node [anchor=north] (y2) at (-5.3em,1.5) {0.01};
+\node [anchor=north] (y1) at ([xshift=0em,yshift=-3.3em]y2.south) {0.00};
+\node [anchor=north] (y3) at ([xshift=0em,yshift=4.5em]y2.north) {0.02};
+\node [anchor=north] (y4) at ([xshift=0em,yshift=6.6em]y3.north) {$\vdots$};
+\node [anchor=north] (y5) at ([xshift=0em,yshift=2em]y4.north) {1.00};
+\node [anchor=north] (x1) at (1em,-3em) {$\lambda_1$};
+\node [anchor=north] (x2) at ([xshift=4.5em,yshift=0em]x1.north) {$\lambda_2$};
+\node [anchor=north] (x3) at ([xshift=4em,yshift=-1em]x2.north) {$...$};
+\node [anchor=north] (x4) at ([xshift=5em,yshift=1em]x3.north) {$\lambda_{M-1}$};
+\node [anchor=north] (x5) at ([xshift=5em,yshift=0em]x4.north) {$\lambda_M$};
+\draw [-](n1) (0,4) -- (0,4.4);
+\draw [-](n2) (1,4) -- (1,4.4);
+\draw [-](n3) (2,4) -- (2,4.4);
+\draw [-](n4) (3,4) -- (3,4.4);
+\draw [-](n5) (4,4) -- (4,4.4);
+\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r11) at (0,1) {};
+\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r12) at (1,2) {};
+\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r13) at (2,1) {};
+\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r14) at (3,2) {};
+\node [anchor=center,draw,circle,inner sep=1.5pt,blue!30,fill=blue!30] (r15) at (4,4) {};
+\draw [-,very thick,blue!50] (0,1) -- (1,2) -- (2,1) -- (3,2) -- (4,4);
+\node [anchor=north] (p1) at (5.7,4.3) {\scriptsize{$\leftarrow$ \textbf{path}:}};
+\node [anchor=north] (e1) at ([xshift=0,yshift=-0.4em]p1.south) {$w_1=0.01$};
+\node [anchor=north] (e2) at ([xshift=0,yshift=-0.8em]e1.south) {$w_2=0.02$};
+\node [anchor=north] (e3) at ([xshift=0,yshift=0.4em]e2.south) {$\vdots$};
+\node [anchor=north] (e4) at ([xshift=0,yshift=-0.2em]e3.south) {$w_M=1.00$};
+}
+\end{scope}
+\begin{scope}[scale=0.55,xshift=6.8in]
+{
+\tiny
+\draw [step=1,help lines,color=black] grid (4,4);
+\node [anchor=north] (y2) at (-5.3em,1.5) {0.01};
+\node [anchor=north] (y1) at ([xshift=0em,yshift=-3.3em]y2.south) {0.00};
+\node [anchor=north] (y3) at ([xshift=0em,yshift=4.5em]y2.north) {0.02};
+\node [anchor=north] (y4) at ([xshift=0em,yshift=6.6em]y3.north) {$\vdots$};
+\node [anchor=north] (y5) at ([xshift=0em,yshift=2em]y4.north) {1.00};
-\node [anchor=north] (x1) at ([xshift=2em,yshift=-3em]n1.south) {$\lambda_1$};
+\node [anchor=north] (x1) at (1em,-3em) {$\lambda_1$};
+\node [anchor=north] (x2) at ([xshift=4.5em,yshift=0em]x1.north) {$\lambda_2$};
+\node [anchor=north] (x3) at ([xshift=4em,yshift=-1em]x2.north) {$...$};
+\node [anchor=north] (x4) at ([xshift=5em,yshift=1em]x3.north) {$\lambda_{M-1}$};
…
Book/Chapter4/chapter4.tex (view file @ bd87e7ad)

@@ -701,9 +701,7 @@ dr = start_i-end_{i-1}-1
 %----------------------------------------------
 \begin{figure}[htp]
 \centering
-\begin{tabular}{l l l}
-& \subfigure{\input{./Chapter4/Figures/search-space-representation-of-feature-weight-1}} \subfigure{\input{./Chapter4/Figures/search-space-representation-of-feature-weight-2}} \subfigure{\input{./Chapter4/Figures/search-space-representation-of-feature-weight-3}} & \\
-\end{tabular}
+\input{./Chapter4/Figures/search-space-representation-of-feature-weight}
 \caption{特征权重的搜索空间表示}
 \label{fig:4-23}
 \end{figure}
@@ -716,9 +714,7 @@ dr = start_i-end_{i-1}-1
 %----------------------------------------------
 \begin{figure}[htp]
 \centering
-\begin{tabular}{l l}
-\subfigure{\input{./Chapter4/Figures/grid-search-1}} & \subfigure{\input{./Chapter4/Figures/grid-search-2}} \\
-\end{tabular}
+\input{./Chapter4/Figures/grid-search}
 \caption{格搜索(左侧:所有点都访问(蓝色);右侧:避开无效点(绿色))}
 \label{fig:4-24}
 \end{figure}
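Both chapter4.tex hunks apply the same refactoring: the \subfigure tabular that stitched two or three separate figure files together is replaced by a single \input of a consolidated file, and, as the grid-search.tex and search-space-representation-of-feature-weight.tex diffs above show, the consolidated file lays its panels out as sibling scope environments offset with xshift. A minimal compilable sketch of that layout idiom (the panel contents here are placeholders, not taken from the commit):

\begin{tikzpicture}
% left panel
\begin{scope}[scale=0.62]
\draw [step=1,help lines,color=black] (0,0) grid (4,4);
\node at (2,4.5) {\tiny panel 1};
\end{scope}
% right panel: same local coordinates, shifted right so the grids sit side by side
\begin{scope}[scale=0.62,xshift=3in]
\draw [step=1,help lines,color=black] (0,0) grid (4,4);
\node at (2,4.5) {\tiny panel 2};
\end{scope}
\end{tikzpicture}

Keeping each panel in its own scope lets the panels share one coordinate recipe while being positioned independently, which is exactly how the merged files above duplicate their axis labels per panel.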
Book/ChapterAppend/chapterappend.tex (view file @ bd87e7ad)

@@ -173,7 +173,7 @@
 %----------------------------------------------------------------------------------------
 \section{IBM模型3训练方法}
-\parinterval 模型3的参数估计与模型1和模型2采用相同的方法。这里直接给出辅助函数。
+\parinterval IBM模型3的参数估计与模型1和模型2采用相同的方法。这里直接给出辅助函数。
 \begin{eqnarray}
 h(t,d,n,p,\lambda,\mu,\nu,\zeta) & = & \textrm{P}_{\theta}(\mathbf{s}|\mathbf{t})-\sum_{t}\lambda_{t}\big(\sum_{s}t(s|t)-1\big) \nonumber \\
 & & -\sum_{i}\mu_{iml}\big(\sum_{j}d(j|i,m,l)-1\big) \nonumber \\
@@ -181,7 +181,7 @@ h(t,d,n,p, \lambda,\mu, \nu, \zeta) & = & \textrm{P}_{\theta}(\mathbf{s}|\mathb
 \label{eq:1.1}
 \end{eqnarray}
-\parinterval 由于篇幅所限这里略去了推导步骤直接给出一些用于参数估计的等式。
+\parinterval 由于篇幅所限这里略去了推导步骤直接给出具体公式。
 \begin{eqnarray}
 c(s|t,\mathbf{s},\mathbf{t}) & = & \sum_{\mathbf{a}}\big[\textrm{P}_{\theta}(\mathbf{s},\mathbf{a}|\mathbf{t}) \times \sum_{j=1}^{m} (\delta(s_j,s) \cdot \delta(t_{a_{j}},t))\big] \label{eq:1.2} \\
 c(j|i,m,l;\mathbf{s},\mathbf{t}) & = & \sum_{\mathbf{a}}\big[\textrm{P}_{\theta}(\mathbf{s},\mathbf{a}|\mathbf{t}) \times \delta(i,a_j)\big] \label{eq:1.3} \\
@@ -202,9 +202,9 @@ n(\varphi|t) & = & \nu_{t}^{-1} \times \sum_{s=1}^{K}c(\varphi |t;\mathbf{s}^{[k
 p_x & = & \zeta^{-1} \sum_{k=1}^{K}c(x;\mathbf{s}^{[k]},\mathbf{t}^{[k]}) \label{eq:1.10}
 \end{eqnarray}
-\parinterval 在模型3中,因为产出率的引入,并不能像模型1和模型2那样,在保证正确性的情况下加速参数估计的过程。这就使得每次迭代过程中,都不得不面对大小为$(l+1)^m$的词对齐空间。遍历所有$(l+1)^m$个词对齐所带来的高时间复杂度显然是不能被接受的。因此就要考虑能否仅利用词对齐空间中的部分词对齐对这些参数进行估计。比较简单且直接的方法就是仅利用Viterbi对齐来进行参数估计\footnote{Viterbi词对齐可以被简单的看作搜索到的最好词对齐。}。 遗憾的是,在模型3中并没有方法直接获得Viterbi对齐。这样只能采用一种折中的策略,即仅考虑那些使得$\textrm{P}_{\theta}(\mathbf{s},\mathbf{a}|\mathbf{t})$达到较高值的词对齐。这里把这部分词对齐组成的集合记为$S$。式\ref{eq:1.2}可以被修改为:
+\parinterval 在模型3中,因为繁衍率的引入,并不能像模型1和模型2那样,在保证正确性的情况下加速参数估计的过程。这就使得每次迭代过程中,都不得不面对大小为$(l+1)^m$的词对齐空间。遍历所有$(l+1)^m$个词对齐所带来的高时间复杂度显然是不能被接受的。因此就要考虑能否仅利用词对齐空间中的部分词对齐对这些参数进行估计。比较简单的方法是仅使用Viterbi对齐来进行参数估计,这里Viterbi 词对齐可以被简单的看作搜索到的最好词对齐。遗憾的是,在模型3中并没有方法直接获得Viterbi对齐。这样只能采用一种折中的策略,即仅考虑那些使得$\textrm{P}_{\theta}(\mathbf{s},\mathbf{a}|\mathbf{t})$达到较高值的词对齐。这里把这部分词对齐组成的集合记为$S$。式\ref{eq:1.2}可以被修改为:
 \begin{eqnarray}
-c(s|t,\mathbf{s},\mathbf{t}) \approx \sum_{\mathbf{a} \in \mathbf{S}}\big[\textrm{P}_{\theta}(\mathbf{s},\mathbf{a}|\mathbf{t}) \times \sum_{j=1}^{m}(\delta(s_j,\mathbf{s}) \cdot \delta(t_{a_{j}},\mathbf{t})) \big]
+c(s|t,\mathbf{s},\mathbf{t}) \approx \sum_{\mathbf{a} \in S}\big[\textrm{P}_{\theta}(\mathbf{s},\mathbf{a}|\mathbf{t}) \times \sum_{j=1}^{m}(\delta(s_j,\mathbf{s}) \cdot \delta(t_{a_{j}},\mathbf{t})) \big]
 \label{eq:1.11}
 \end{eqnarray}
@@ -222,7 +222,7 @@ S = N(b^{\infty}(V(\mathbf{s}|\mathbf{t};2))) \cup (\mathop{\cup}\limits_{ij} N(
 \end{itemize}
 \vspace{0.5em}
-\parinterval 公式\ref{eq:1.12}中,$b^{\infty}(V(\mathbf{s}|\mathbf{t};2))$和$b_{i \leftrightarrow j}^{\infty}(V_{i \leftrightarrow j}(\mathbf{s}|\mathbf{t},2))$分别是对$V(\mathbf{s}|\mathbf{t};3)$和$V_{i \leftrightarrow j}(\mathbf{s}|\mathbf{t},3)$的估计。在计算$S$的过程中,需要知道一个对齐$\bf{a}$的邻居$\bf{a}^{'}$的概率,即通过$\textrm{P}_{\theta}(\mathbf{a},\mathbf{s}|\mathbf{t})$计算$\textrm{p}_{\theta}(\mathbf{a}',\mathbf{s}|\mathbf{t})$。在模型3中,如果$\bf{a}$和$\bf{a}'$仅区别于某个源语单词对齐到的目标位置上($a_j \neq a_{j}'$),那么
+\parinterval 公式\ref{eq:1.12}中,$b^{\infty}(V(\mathbf{s}|\mathbf{t};2))$和$b_{i \leftrightarrow j}^{\infty}(V_{i \leftrightarrow j}(\mathbf{s}|\mathbf{t},2))$分别是对$V(\mathbf{s}|\mathbf{t};3)$和$V_{i \leftrightarrow j}(\mathbf{s}|\mathbf{t},3)$的估计。在计算$S$的过程中,需要知道一个对齐$\bf{a}$的邻居$\bf{a}^{'}$的概率,即通过$\textrm{P}_{\theta}(\mathbf{a},\mathbf{s}|\mathbf{t})$计算$\textrm{P}_{\theta}(\mathbf{a}',\mathbf{s}|\mathbf{t})$。在模型3中,如果$\bf{a}$和$\bf{a}'$仅区别于某个源语单词对齐到的目标位置上($a_j \neq a_{j}'$),那么
 \begin{eqnarray}
 \textrm{P}_{\theta}(\mathbf{a}',\mathbf{s}|\mathbf{t}) & = & \textrm{P}_{\theta}(\mathbf{a},\mathbf{s}|\mathbf{t}) \cdot \nonumber \\
@@ -247,7 +247,7 @@ S = N(b^{\infty}(V(\mathbf{s}|\mathbf{t};2))) \cup (\mathop{\cup}\limits_{ij} N(
 \parinterval 模型4的参数估计基本与模型3一致。需要修改的是扭曲度的估计公式,对于目标语第$i$个cept.生成的第一单词,可以得到(假设有$K$个训练样本):
 \begin{eqnarray}
-d_1(\Delta_j|ca,cb;\mathbf{s},\mathbf{t}) = \mu_{1cacb}^{-1} \times \sum_{k=1}^{K}c_1(\Delta_j|ca,cb;\mathbf{s}^{[k]},\mathbf{t}^{[k]})
+d_1(\Delta_j|ca,cb) = \mu_{1cacb}^{-1} \times \sum_{k=1}^{K}c_1(\Delta_j|ca,cb;\mathbf{s}^{[k]},\mathbf{t}^{[k]})
 \label{eq:1.15}
 \end{eqnarray}
@@ -255,7 +255,7 @@ d_1(\Delta_j|ca,cb;\mathbf{s},\mathbf{t}) = \mu_{1cacb}^{-1} \times \sum_{k=1}^{
 \begin{eqnarray}
 c_1(\Delta_j|ca,cb;\mathbf{s},\mathbf{t}) & = & \sum_{\mathbf{a}}\big[\textrm{P}_{\theta}(\mathbf{s},\mathbf{a}|\mathbf{t}) \times s_1(\Delta_j|ca,cb;\mathbf{a},\mathbf{s},\mathbf{t})\big] \label{eq:1.2} \\
-s_1(\Delta_j|ca,cb;\rm{a},\mathbf{s},\mathbf{t}) & = & \sum_{i=1}^l \big[\varepsilon(\phi_i) \cdot \delta(\pi_{i1}-\odot_{i},\Delta_j) \cdot \nonumber \\
+s_1(\Delta_j|ca,cb;\rm{a},\mathbf{s},\mathbf{t}) & = & \sum_{i=1}^l \big[\varepsilon(\varphi_i) \cdot \delta(\pi_{i1}-\odot_{i},\Delta_j) \cdot \nonumber \\
 & & \delta(A(t_{i-1}),ca) \cdot \delta(B(\tau_{i1}),cb) \big] \label{eq:1.17}
 \end{eqnarray}
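A recurring one-character fix in this file swaps \phi for \varphi in the fertility equations (eq:1.17 here, and eq:1.20, eq:1.25, eq:1.28 below), matching the 产出率 → 繁衍率 (fertility) renaming above. The two commands typeset distinct glyphs, so mixing them makes one variable look like two; a minimal comparison snippet (illustrative only, not part of the commit):

% \phi and \varphi are different glyph forms of the same Greek letter;
% the fertility variable is now written consistently as \varphi_i.
$\phi_i$ (\verb|\phi|, closed form) vs. $\varphi_i$ (\verb|\varphi|, open form)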
@@ -272,7 +272,7 @@ s_1(\Delta_j|ca,cb;\rm{a},\mathbf{s},\mathbf{t}) & = & \sum_{i=1}^l \big[\vareps
 对于目标语第$i$个cept.生成的其他单词(非第一个单词),可以得到:
 \begin{eqnarray}
-d_{>1}(\Delta_j|cb;\mathbf{s},\mathbf{t}) = \mu_{>1cb}^{-1} \times \sum_{k=1}^{K}c_{>1}(\Delta_j|cb;\mathbf{s}^{[k]},\mathbf{t}^{[k]})
+d_{>1}(\Delta_j|cb) = \mu_{>1cb}^{-1} \times \sum_{k=1}^{K}c_{>1}(\Delta_j|cb;\mathbf{s}^{[k]},\mathbf{t}^{[k]})
 \label{eq:1.18}
 \end{eqnarray}
@@ -280,7 +280,7 @@ d_{>1}(\Delta_j|cb;\mathbf{s},\mathbf{t}) = \mu_{>1cb}^{-1} \times \sum_{k=1}^{K
 \begin{eqnarray}
 c_{>1}(\Delta_j|cb;\mathbf{s},\mathbf{t}) & = & \sum_{\mathbf{a}}\big[\textrm{p}_{\theta}(\mathbf{s},\mathbf{a}|\mathbf{t}) \times s_{>1}(\Delta_j|cb;\mathbf{a},\mathbf{s},\mathbf{t}) \big] \label{eq:1.19} \\
-s_{>1}(\Delta_j|cb;\mathbf{a},\mathbf{s},\mathbf{t}) & = & \sum_{i=1}^l \big[\varepsilon(\phi_i-1)\sum_{k=2}^{\phi_i}\delta(\pi_{[i]k}-\pi_{[i]k-1},\Delta_j) \cdot \nonumber ß \\
+s_{>1}(\Delta_j|cb;\mathbf{a},\mathbf{s},\mathbf{t}) & = & \sum_{i=1}^l \big[\varepsilon(\varphi_i-1)\sum_{k=2}^{\varphi_i}\delta(\pi_{[i]k}-\pi_{[i]k-1},\Delta_j) \cdot \nonumber ß \\
 & & \delta(B(\tau_{[i]k}),cb) \big] \label{eq:1.20}
 \end{eqnarray}
@@ -291,7 +291,7 @@ s_{>1}(\Delta_j|cb;\mathbf{a},\mathbf{s},\mathbf{t}) & = & \sum_{i=1}^l \big[\va
 \label{eq:1.22}
 \end{eqnarray}
-\parinterval 对于一个对齐$\mathbf{a}$,可用模型3对它的邻居进行排名,即按$\textrm{P}_{\theta}(b(\mathbf{a})|\mathbf{s},\mathbf{t};3)$排序,其中$b(\mathbf{a})$表示$\mathbf{a}$的邻居。$\tilde{b}(\mathbf{a})$表示这个排名表中满足$\textrm{P}_{\theta}(\mathbf{a}'|\mathbf{s},\mathbf{t};4) > \textrm{P}_{\theta}(\mathbf{a}|\mathbf{s},\mathbf{t};4)$的最高排名的$\mathbf{a}'$。同理可知$\tilde{b}_{i \leftrightarrow j}^{\infty}(\mathbf{a})$的意义。这里之所以不用模型3中采用的方法直接利用$b^{\infty}(\mathbf{a})$得到模型4中高概率的对齐,是因为模型4中,要想获得某个对齐$\mathbf{a}$的邻居$\mathbf{a}'$,必须做很大调整,比如:调整$\tau_{[i]1}$和$\odot_{i}$等等。这个过程要比模型3的相应过程复杂得多。因此在模型4中只能借助于模型3的中间步骤来进行参数估计。
+\parinterval 对于一个对齐$\mathbf{a}$,可用模型3对它的邻居进行排名,即按$\textrm{P}_{\theta}(b(\mathbf{a})|\mathbf{s},\mathbf{t};3)$排序,其中$b(\mathbf{a})$表示$\mathbf{a}$的邻居。$\tilde{b}(\mathbf{a})$表示这个排名表中满足$\textrm{P}_{\theta}(\mathbf{a}'|\mathbf{s},\mathbf{t};4) > \textrm{P}_{\theta}(\mathbf{a}|\mathbf{s},\mathbf{t};4)$的最高排名的$\mathbf{a}'$。同理可知$\tilde{b}_{i \leftrightarrow j}^{\infty}(\mathbf{a})$的意义。这里之所以不用模型3中采用的方法直接利用$b^{\infty}(\mathbf{a})$得到模型4中高概率的对齐,是因为模型4中要想获得某个对齐$\mathbf{a}$的邻居$\mathbf{a}'$必须做很大调整,比如:调整$\tau_{[i]1}$和$\odot_{i}$等等。这个过程要比模型3的相应过程复杂得多。因此在模型4中只能借助于模型3的中间步骤来进行参数估计。
 \setlength{\belowdisplayskip}{3pt} %调整空白大小
 %----------------------------------------------------------------------------------------
@@ -299,10 +299,10 @@ s_{>1}(\Delta_j|cb;\mathbf{a},\mathbf{s},\mathbf{t}) & = & \sum_{i=1}^l \big[\va
 %----------------------------------------------------------------------------------------
 \section{IBM模型5训练方法}
-\parinterval 模型5的参数估计过程也与模型3的过程基本一致,二者的区别在于扭曲度的估计公式。在模型5中,对于目标语第$i$个cept.生成的第一单词,可以得到(假设有$K$个训练样本):
+\parinterval 模型5的参数估计过程也模型4的过程基本一致,二者的区别在于扭曲度的估计公式。在模型5中,对于目标语第$i$个cept.生成的第一单词,可以得到(假设有$K$个训练样本):
 \begin{eqnarray}
-d_1(\Delta_j|cb;\mathbf{s},\mathbf{t}) = \mu_{1cb}^{-1} \times \sum_{k=1}^{K}c_1(\Delta_j|cb;\mathbf{s}^{[k]},\mathbf{t}^{[k]})
+d_1(\Delta_j|cb) = \mu_{1cb}^{-1} \times \sum_{k=1}^{K}c_1(\Delta_j|cb;\mathbf{s}^{[k]},\mathbf{t}^{[k]})
 \label{eq:1.23}
 \end{eqnarray}
@@ -310,15 +310,15 @@ d_1(\Delta_j|cb;\mathbf{s},\mathbf{t}) = \mu_{1cb}^{-1} \times \sum_{k=1}^{K}c_1
 \begin{eqnarray}
 c_1(\Delta_j|cb,v_x,v_y;\mathbf{s},\mathbf{t}) & = & \sum_{\mathbf{a}}\Big[ \textrm{P}(\mathbf{s},\mathbf{a}|\mathbf{t}) \times s_1(\Delta_j|cb,v_x,v_y;\mathbf{a},\mathbf{s},\mathbf{t}) \Big] \label{eq:1.24} \\
-s_1(\Delta_j|cb,v_x,v_y;\mathbf{a},\mathbf{s},\mathbf{t}) & = & \sum_{i=1}^l \Big[\varepsilon(\phi_i) \cdot \delta(v_{\pi_{i1}},\Delta_j) \cdot \delta(v_{\odot_{i-1}},v_x) \nonumber \\
-& & \cdot \delta(v_m-\phi_i+1,v_y) \cdot \delta(v_{\pi_{i1}},v_{\pi_{i1}-1}) \Big] \label{eq:1.25}
+s_1(\Delta_j|cb,v_x,v_y;\mathbf{a},\mathbf{s},\mathbf{t}) & = & \sum_{i=1}^l \Big[\varepsilon(\varphi_i) \cdot \delta(v_{\pi_{i1}},\Delta_j) \cdot \delta(v_{\odot_{i-1}},v_x) \nonumber \\
+& & \cdot \delta(v_m-\varphi_i+1,v_y) \cdot \delta(v_{\pi_{i1}},v_{\pi_{i1}-1}) \Big] \label{eq:1.25}
 \end{eqnarray}
 对于目标语第$i$个cept.生成的其他单词(非第一个单词),可以得到:
 \begin{eqnarray}
-d_{>1}(\Delta_j|cb,v;\mathbf{s},\mathbf{t}) = \mu_{>1cb}^{-1} \times \sum_{k=1}^{K}c_{>1}(\Delta_j|cb,v;\mathbf{s}^{[k]},\mathbf{t}^{[k]})
+d_{>1}(\Delta_j|cb,v) = \mu_{>1cb}^{-1} \times \sum_{k=1}^{K}c_{>1}(\Delta_j|cb,v;\mathbf{s}^{[k]},\mathbf{t}^{[k]})
 \label{eq:1.26}
 \end{eqnarray}
@@ -326,18 +326,18 @@ d_{>1}(\Delta_j|cb,v;\mathbf{s},\mathbf{t}) = \mu_{>1cb}^{-1} \times \sum_{k=1}^
 \begin{eqnarray}
 c_{>1}(\Delta_j|cb,v;\mathbf{s},\mathbf{t}) & = & \sum_{\mathbf{a}}\Big[\textrm{P}(\mathbf{a},\mathbf{s}|\mathbf{t}) \times s_{>1}(\Delta_j|cb,v;\mathbf{a},\mathbf{s},\mathbf{t}) \Big] \label{eq:1.27} \\
-s_{>1}(\Delta_j|cb,v;\mathbf{a},\mathbf{s},\mathbf{t}) & = & \sum_{i=1}^l \Big[\varepsilon(\phi_i-1)\sum_{k=2}^{\phi_i} \big[\delta(v_{\pi_{ik}}-v_{\pi_{[i]k}-1},\Delta_j) \nonumber \\
-& & \cdot \delta(B(\tau_{[i]k}) ,cb) \cdot \delta(v_m-v_{\pi_{i(k-1)}}-\phi_i+k,v) \nonumber \\
+s_{>1}(\Delta_j|cb,v;\mathbf{a},\mathbf{s},\mathbf{t}) & = & \sum_{i=1}^l \Big[\varepsilon(\varphi_i-1)\sum_{k=2}^{\varphi_i} \big[\delta(v_{\pi_{ik}}-v_{\pi_{[i]k}-1},\Delta_j) \nonumber \\
+& & \cdot \delta(B(\tau_{[i]k}) ,cb) \cdot \delta(v_m-v_{\pi_{i(k-1)}}-\varphi_i+k,v) \nonumber \\
 & & \cdot \delta(v_{\pi_{i1}},v_{\pi_{i1}-1}) \big] \Big] \label{eq:1.28}
 \end{eqnarray}
 \vspace{0.5em}
-\parinterval 从式(\ref{eq:1.24})中可以看出因子$\delta(v_{\pi_{i1}},v_{\pi_{i1}-1})$保证了,即使对齐$\mathbf{a}$不合理(一个源语位置对应多个目标语位置)也可以避免在这个不合理的对齐上计算结果。需要注意的是因子$\delta(v_{\pi_{p1}},v_{\pi_{p1-1}})$,确保了$\mathbf{a}$中不合理的部分不产生坏的影响,而$\mathbf{a}$中其他正确的部分仍会参与迭代。
+\parinterval 从式\ref{eq:1.24}中可以看出因子$\delta(v_{\pi_{i1}},v_{\pi_{i1}-1})$保证了,即使对齐$\mathbf{a}$不合理(一个源语言位置对应多个目标语言位置)也可以避免在这个不合理的对齐上计算结果。需要注意的是因子$\delta(v_{\pi_{p1}},v_{\pi_{p1-1}})$,确保了$\mathbf{a}$中不合理的部分不产生坏的影响,而$\mathbf{a}$中其他正确的部分仍会参与迭代。
 \parinterval 不过上面的参数估计过程与IBM前4个模型的参数估计过程并不完全一样。IBM前4个模型在每次迭代中,可以在给定$\mathbf{s}$、$\mathbf{t}$和一个对齐$\mathbf{a}$的情况下直接计算并更新参数。但是在模型5的参数估计过程中(如公式\ref{eq:1.24}),需要模拟出由$\mathbf{t}$生成$\mathbf{s}$的过程才能得到正确的结果,因为从$\mathbf{t}$、$\mathbf{s}$和$\mathbf{a}$中是不能直接得到 的正确结果的。具体说,就是要从目标语言句子的第一个单词开始到最后一个单词结束,依次生成每个目标语言单词对应的源语言单词,每处理完一个目标语言单词就要暂停,然后才能计算式\ref{eq:1.24}中求和符号里面的内容。这也就是说即使给定了$\mathbf{s}$、$\mathbf{t}$和一个对齐$\mathbf{a}$,也不能直接在它们上进行计算,必须重新模拟$\mathbf{t}$到$\mathbf{s}$的生成过程。
-\parinterval 从前面的分析可以看出,虽然模型5比模型4更精确,但是模型5过于复杂以至于给参数估计增加了计算量(对于每组$\mathbf{t}$、$\mathbf{s}$和$\mathbf{a}$都要模拟$\mathbf{t}$生成$\mathbf{s}$的翻译过程)。因此模型5的开发对于系统实现是一个挑战。
+\parinterval 从前面的分析可以看出,虽然模型5比模型4更精确,但是模型5过于复杂以至于给参数估计增加了计算量(对于每组$\mathbf{t}$、$\mathbf{s}$和$\mathbf{a}$都要模拟$\mathbf{t}$生成$\mathbf{s}$的翻译过程)。因此模型5的系统实现是一个挑战。
 \parinterval 在模型5中同样需要定义一个词对齐集合$S$,使得每次迭代都在$S$上进行。可以对$S$进行如下定义
 \begin{eqnarray}
@@ -346,7 +346,7 @@ s_{>1}(\Delta_j|cb,v;\mathbf{a},\mathbf{s},\mathbf{t}) & = & \sum_{i=1}^l\Big[\v
 \end{eqnarray}
 \vspace{0.5em}
-\parinterval 这里$\tilde{\tilde{b}}(\mathbf{a})$借用了模型4中$\tilde{b}(\mathbf{a})$的概念。不过$\tilde{\tilde{b}}(\mathbf{a})$表示在利用模型3进行排名的列表中满足$\textrm{P}_{\theta}(\mathbf{a}'|\mathbf{s},\mathbf{t};5)$的最高排名的词对齐。
+\noindent 其中,$\tilde{\tilde{b}}(\mathbf{a})$借用了模型4中$\tilde{b}(\mathbf{a})$的概念。不过$\tilde{\tilde{b}}(\mathbf{a})$表示在利用模型3进行排名的列表中满足$\textrm{P}_{\theta}(\mathbf{a}'|\mathbf{s},\mathbf{t};5)$的最高排名的词对齐,这里$\mathbf{a}'$表示$\mathbf{a}$的邻居。
 \end{appendices}
Book/mt-book-xelatex.idx
查看文件 @
bd87e7ad
...
...
@@ -7,14 +7,14 @@
\indexentry{数据驱动|hyperpage}{23}
\indexentry{Data-Driven|hyperpage}{23}
\indexentry{编码器-解码器|hyperpage}{30}
\indexentry{
encoder-d
ecoder|hyperpage}{30}
\indexentry{
Encoder-D
ecoder|hyperpage}{30}
\indexentry{质量评价|hyperpage}{32}
\indexentry{Quality Evaluation|hyperpage}{32}
\indexentry{无参考答案的评价|hyperpage}{32}
\indexentry{Quality Estimation|hyperpage}{32}
\indexentry{$n$元语法单元|hyperpage}{33}
\indexentry{$n$-gram准确率|hyperpage}{3
4
}
\indexentry{$n$-gram Precision|hyperpage}{3
4
}
\indexentry{$n$-gram准确率|hyperpage}{3
3
}
\indexentry{$n$-gram Precision|hyperpage}{3
3
}
\indexentry{短句惩罚因子|hyperpage}{34}
\indexentry{Brevity Penalty|hyperpage}{34}
\indexentry{分词|hyperpage}{50}
...
...
@@ -115,10 +115,10 @@
\indexentry{Disambiguation|hyperpage}{79}
\indexentry{最左优先推导|hyperpage}{79}
\indexentry{Left-most Derivation|hyperpage}{79}
\indexentry{概率上下文无关文法|hyperpage}{8
1
}
\indexentry{Probabilistic Context-Free Grammar|hyperpage}{8
1
}
\indexentry{树库|hyperpage}{8
2
}
\indexentry{Treebank|hyperpage}{8
2
}
\indexentry{概率上下文无关文法|hyperpage}{8
0
}
\indexentry{Probabilistic Context-Free Grammar|hyperpage}{8
0
}
\indexentry{树库|hyperpage}{8
1
}
\indexentry{Treebank|hyperpage}{8
1
}
\indexentry{生成模型|hyperpage}{83}
\indexentry{Generative Model|hyperpage}{83}
\indexentry{判别模型|hyperpage}{83}
...
...
@@ -153,525 +153,525 @@
\indexentry{The Lagrange Multiplier Method|hyperpage}{113}
\indexentry{期望最大化|hyperpage}{115}
\indexentry{Expectation Maximization|hyperpage}{115}
\indexentry{期望频次|hyperpage}{11
6
}
\indexentry{Expected Count|hyperpage}{11
6
}
\indexentry{产出率|hyperpage}{11
9
}
\indexentry{繁衍率|hyperpage}{11
9
}
\indexentry{Fertility|hyperpage}{11
9
}
\indexentry{扭曲度|hyperpage}{12
1
}
\indexentry{Distortion|hyperpage}{12
1
}
\indexentry{概念单元|hyperpage}{12
3
}
\indexentry{概念|hyperpage}{12
3
}
\indexentry{Concept|hyperpage}{12
3
}
\indexentry{缺陷|hyperpage}{12
5
}
\indexentry{Deficiency|hyperpage}{12
5
}
\indexentry{凸函数|hyperpage}{12
9
}
\indexentry{Convex function|hyperpage}{12
9
}
\indexentry{对称化|hyperpage}{1
30
}
\indexentry{Symmetrization|hyperpage}{1
30
}
\indexentry{系统偏置|hyperpage}{13
1
}
\indexentry{System Bias|hyperpage}{13
1
}
\indexentry{组合性翻译|hyperpage}{13
6
}
\indexentry{Compositional Translation|hyperpage}{13
6
}
\indexentry{短语|hyperpage}{13
6
}
\indexentry{短语切分|hyperpage}{1
41
}
\indexentry{Phrasal Segmentation|hyperpage}{1
41
}
\indexentry{短语对|hyperpage}{1
41
}
\indexentry{推导|hyperpage}{14
1
}
\indexentry{Derivation|hyperpage}{14
1
}
\indexentry{生成式模型|hyperpage}{14
4
}
\indexentry{Generative Model|hyperpage}{14
4
}
\indexentry{判别式模型|hyperpage}{14
4
}
\indexentry{Discriminative Model|hyperpage}{14
4
}
\indexentry{对数线性模型|hyperpage}{14
5
}
\indexentry{Log-linear Model|hyperpage}{14
5
}
\indexentry{短语抽取|hyperpage}{14
6
}
\indexentry{Phrase Extraction|hyperpage}{14
6
}
\indexentry{词汇化翻译概率|hyperpage}{14
9
}
\indexentry{Lexical Translation Probability|hyperpage}{14
9
}
\indexentry{短语表|hyperpage}{1
50
}
\indexentry{Phrase Table|hyperpage}{1
50
}
\indexentry{调序|hyperpage}{1
50
}
\indexentry{Reordering|hyperpage}{1
50
}
\indexentry{模型训练|hyperpage}{15
4
}
\indexentry{Model Training|hyperpage}{15
4
}
\indexentry{权重调优|hyperpage}{15
4
}
\indexentry{Weight Tuning|hyperpage}{15
4
}
\indexentry{最小错误率训练|hyperpage}{15
4
}
\indexentry{Minimum Error Rate Training|hyperpage}{15
4
}
\indexentry{调优集合|hyperpage}{15
4
}
\indexentry{Tuning Set|hyperpage}{15
4
}
\indexentry{线搜索|hyperpage}{15
5
}
\indexentry{Line Search|hyperpage}{15
5
}
\indexentry{格搜索|hyperpage}{15
6
}
\indexentry{Grid Search|hyperpage}{15
6
}
\indexentry{覆盖度模型|hyperpage}{15
8
}
\indexentry{Coverage Model|hyperpage}{15
8
}
\indexentry{翻译候选|hyperpage}{15
8
}
\indexentry{Translation Candidate|hyperpage}{15
8
}
\indexentry{翻译假设|hyperpage}{15
9
}
\indexentry{Translation Hypothesis|hyperpage}{15
9
}
\indexentry{剪枝|hyperpage}{1
60
}
\indexentry{Pruning|hyperpage}{1
60
}
\indexentry{束剪枝|hyperpage}{1
60
}
\indexentry{Beam Pruning|hyperpage}{1
60
}
\indexentry{直方图剪枝|hyperpage}{1
60
}
\indexentry{Histogram Pruning|hyperpage}{1
60
}
\indexentry{阈值剪枝|hyperpage}{1
60
}
\indexentry{Threshold Pruning|hyperpage}{1
60
}
\indexentry{假设重组|hyperpage}{1
60
}
\indexentry{Hypothesis Recombination|hyperpage}{1
60
}
\indexentry{基于层次短语的模型|hyperpage}{16
4
}
\indexentry{Hierarchical Phrase-based Model|hyperpage}{16
4
}
\indexentry{同步上下文无关文法|hyperpage}{16
5
}
\indexentry{Synchronous Context-free Grammar|hyperpage}{16
5
}
\indexentry{基于层次短语的文法|hyperpage}{16
6
}
\indexentry{Hierarchical Phrase-based Grammar|hyperpage}{16
6
}
\indexentry{推导|hyperpage}{16
7
}
\indexentry{Derivation|hyperpage}{16
7
}
\indexentry{胶水规则|hyperpage}{16
7
}
\indexentry{Glue Rule|hyperpage}{16
7
}
\indexentry{乔姆斯基范式|hyperpage}{1
71
}
\indexentry{Chomsky Normal Form|hyperpage}{1
71
}
\indexentry{跨度|hyperpage}{1
71
}
\indexentry{Span|hyperpage}{1
71
}
\indexentry{自下而上的分析|hyperpage}{17
2
}
\indexentry{Top-down Parsing|hyperpage}{17
2
}
\indexentry{束剪枝|hyperpage}{17
4
}
\indexentry{Beam Pruning|hyperpage}{17
4
}
\indexentry{立方剪枝|hyperpage}{17
6
}
\indexentry{Cube Pruning|hyperpage}{17
6
}
\indexentry{期望频次|hyperpage}{11
5
}
\indexentry{Expected Count|hyperpage}{11
5
}
\indexentry{产出率|hyperpage}{11
8
}
\indexentry{繁衍率|hyperpage}{11
8
}
\indexentry{Fertility|hyperpage}{11
8
}
\indexentry{扭曲度|hyperpage}{12
0
}
\indexentry{Distortion|hyperpage}{12
0
}
\indexentry{概念单元|hyperpage}{12
2
}
\indexentry{概念|hyperpage}{12
2
}
\indexentry{Concept|hyperpage}{12
2
}
\indexentry{缺陷|hyperpage}{12
4
}
\indexentry{Deficiency|hyperpage}{12
4
}
\indexentry{凸函数|hyperpage}{12
8
}
\indexentry{Convex function|hyperpage}{12
8
}
\indexentry{对称化|hyperpage}{1
29
}
\indexentry{Symmetrization|hyperpage}{1
29
}
\indexentry{系统偏置|hyperpage}{13
0
}
\indexentry{System Bias|hyperpage}{13
0
}
\indexentry{组合性翻译|hyperpage}{13
4
}
\indexentry{Compositional Translation|hyperpage}{13
4
}
\indexentry{短语|hyperpage}{13
4
}
\indexentry{短语切分|hyperpage}{1
39
}
\indexentry{Phrasal Segmentation|hyperpage}{1
39
}
\indexentry{短语对|hyperpage}{1
39
}
\indexentry{推导|hyperpage}{14
0
}
\indexentry{Derivation|hyperpage}{14
0
}
\indexentry{生成式模型|hyperpage}{14
2
}
\indexentry{Generative Model|hyperpage}{14
2
}
\indexentry{判别式模型|hyperpage}{14
3
}
\indexentry{Discriminative Model|hyperpage}{14
3
}
\indexentry{对数线性模型|hyperpage}{14
3
}
\indexentry{Log-linear Model|hyperpage}{14
3
}
\indexentry{短语抽取|hyperpage}{14
4
}
\indexentry{Phrase Extraction|hyperpage}{14
4
}
\indexentry{词汇化翻译概率|hyperpage}{14
7
}
\indexentry{Lexical Translation Probability|hyperpage}{14
7
}
\indexentry{短语表|hyperpage}{1
48
}
\indexentry{Phrase Table|hyperpage}{1
48
}
\indexentry{调序|hyperpage}{1
48
}
\indexentry{Reordering|hyperpage}{1
48
}
\indexentry{模型训练|hyperpage}{15
2
}
\indexentry{Model Training|hyperpage}{15
2
}
\indexentry{权重调优|hyperpage}{15
2
}
\indexentry{Weight Tuning|hyperpage}{15
2
}
\indexentry{最小错误率训练|hyperpage}{15
2
}
\indexentry{Minimum Error Rate Training|hyperpage}{15
2
}
\indexentry{调优集合|hyperpage}{15
2
}
\indexentry{Tuning Set|hyperpage}{15
2
}
\indexentry{线搜索|hyperpage}{15
3
}
\indexentry{Line Search|hyperpage}{15
3
}
\indexentry{格搜索|hyperpage}{15
4
}
\indexentry{Grid Search|hyperpage}{15
4
}
\indexentry{覆盖度模型|hyperpage}{15
6
}
\indexentry{Coverage Model|hyperpage}{15
6
}
\indexentry{翻译候选|hyperpage}{15
6
}
\indexentry{Translation Candidate|hyperpage}{15
6
}
\indexentry{翻译假设|hyperpage}{15
7
}
\indexentry{Translation Hypothesis|hyperpage}{15
7
}
\indexentry{剪枝|hyperpage}{1
58
}
\indexentry{Pruning|hyperpage}{1
58
}
\indexentry{束剪枝|hyperpage}{1
58
}
\indexentry{Beam Pruning|hyperpage}{1
58
}
\indexentry{直方图剪枝|hyperpage}{1
58
}
\indexentry{Histogram Pruning|hyperpage}{1
58
}
\indexentry{阈值剪枝|hyperpage}{1
58
}
\indexentry{Threshold Pruning|hyperpage}{1
58
}
\indexentry{假设重组|hyperpage}{1
58
}
\indexentry{Hypothesis Recombination|hyperpage}{1
58
}
\indexentry{基于层次短语的模型|hyperpage}{16
3
}
\indexentry{Hierarchical Phrase-based Model|hyperpage}{16
3
}
\indexentry{同步上下文无关文法|hyperpage}{16
3
}
\indexentry{Synchronous Context-free Grammar|hyperpage}{16
3
}
\indexentry{基于层次短语的文法|hyperpage}{16
4
}
\indexentry{Hierarchical Phrase-based Grammar|hyperpage}{16
4
}
\indexentry{推导|hyperpage}{16
5
}
\indexentry{Derivation|hyperpage}{16
5
}
\indexentry{胶水规则|hyperpage}{16
5
}
\indexentry{Glue Rule|hyperpage}{16
5
}
\indexentry{乔姆斯基范式|hyperpage}{1
69
}
\indexentry{Chomsky Normal Form|hyperpage}{1
69
}
\indexentry{跨度|hyperpage}{1
69
}
\indexentry{Span|hyperpage}{1
69
}
\indexentry{自下而上的分析|hyperpage}{17
0
}
\indexentry{Top-down Parsing|hyperpage}{17
0
}
\indexentry{束剪枝|hyperpage}{17
2
}
\indexentry{Beam Pruning|hyperpage}{17
2
}
\indexentry{立方剪枝|hyperpage}{17
4
}
\indexentry{Cube Pruning|hyperpage}{17
4
}
\indexentry{序列化|hyperpage}{179}
\indexentry{线性化|hyperpage}{179}
\indexentry{Linearization|hyperpage}{179}
\indexentry{树到串翻译规则|hyperpage}{1
81
}
\indexentry{Tree-to-String Translation Rule|hyperpage}{1
81
}
\indexentry{树到树翻译规则|hyperpage}{1
81
}
\indexentry{Tree-to-Tree Translation Rule|hyperpage}{1
81
}
\indexentry{树片段|hyperpage}{18
2
}
\indexentry{Tree Fragment|hyperpage}{18
2
}
\indexentry{同步树替换文法规则|hyperpage}{18
3
}
\indexentry{Synchronous Tree Substitution Grammar Rule|hyperpage}{18
3
}
\indexentry{边缘集合|hyperpage}{18
9
}
\indexentry{Frontier Set|hyperpage}{18
9
}
\indexentry{最小规则|hyperpage}{1
90
}
\indexentry{Minimal Rules|hyperpage}{1
90
}
\indexentry{二叉化|hyperpage}{194}
\indexentry{Binarization|hyperpage}{194}
\indexentry{基于短语的特征|hyperpage}{198}
\indexentry{基于句法的特征|hyperpage}{198}
\indexentry{有向超图|hyperpage}{199}
\indexentry{Directed Hyper-graph|hyperpage}{199}
\indexentry{超边|hyperpage}{199}
\indexentry{Hyper-edge|hyperpage}{199}
\indexentry{半环分析|hyperpage}{200}
\indexentry{Semi-ring Parsing|hyperpage}{200}
\indexentry{组合|hyperpage}{201}
\indexentry{Composition|hyperpage}{201}
\indexentry{基于串的解码|hyperpage}{201}
\indexentry{String-based Decoding|hyperpage}{201}
\indexentry{基于树的解码|hyperpage}{201}
\indexentry{Tree-based Decoding|hyperpage}{201}
\indexentry{Lexicalized Norm Form|hyperpage}{205}
\indexentry{人工神经网络|hyperpage}{211}
\indexentry{Artificial Neural Networks|hyperpage}{211}
\indexentry{神经网络|hyperpage}{211}
\indexentry{Neural Networks|hyperpage}{211}
\indexentry{深度学习|hyperpage}{212}
\indexentry{Deep Learning|hyperpage}{212}
\indexentry{连接主义|hyperpage}{213}
\indexentry{Connectionism|hyperpage}{213}
\indexentry{分布式表示|hyperpage}{213}
\indexentry{Distributed representation|hyperpage}{213}
\indexentry{符号主义|hyperpage}{213}
\indexentry{Symbolicism|hyperpage}{213}
\indexentry{端到端学习|hyperpage}{215}
\indexentry{End-to-End Learning|hyperpage}{215}
\indexentry{表示学习|hyperpage}{215}
\indexentry{Representation Learning|hyperpage}{215}
\indexentry{分布式表示|hyperpage}{216}
\indexentry{Distributed Representation|hyperpage}{216}
\indexentry{标量|hyperpage}{217}
\indexentry{Scalar|hyperpage}{217}
\indexentry{向量|hyperpage}{217}
\indexentry{Vector|hyperpage}{217}
\indexentry{矩阵|hyperpage}{217}
\indexentry{Matrix|hyperpage}{217}
\indexentry{转置|hyperpage}{218}
\indexentry{Transpose|hyperpage}{218}
\indexentry{按元素加法|hyperpage}{218}
\indexentry{Element-wise Addition|hyperpage}{218}
\indexentry{数乘|hyperpage}{219}
\indexentry{Scalar Multiplication|hyperpage}{219}
\indexentry{按元素乘积|hyperpage}{220}
\indexentry{Element-wise Product|hyperpage}{220}
\indexentry{线性映射|hyperpage}{220}
\indexentry{Linear Mapping|hyperpage}{220}
\indexentry{线性变换|hyperpage}{220}
\indexentry{Linear Transformation|hyperpage}{220}
\indexentry{范数|hyperpage}{221}
\indexentry{Norm|hyperpage}{221}
\indexentry{欧几里得范数|hyperpage}{222}
\indexentry{Euclidean Norm|hyperpage}{222}
\indexentry{Frobenius 范数|hyperpage}{222}
\indexentry{Frobenius Norm|hyperpage}{222}
\indexentry{权重|hyperpage}{223}
\indexentry{weight|hyperpage}{223}
\indexentry{张量|hyperpage}{233}
\indexentry{Tensor|hyperpage}{233}
\indexentry{阶|hyperpage}{233}
\indexentry{Rank|hyperpage}{233}
\indexentry{广播机制|hyperpage}{237}
\indexentry{向量化|hyperpage}{237}
\indexentry{Vectorization|hyperpage}{237}
\indexentry{前向传播|hyperpage}{241}
\indexentry{计算图|hyperpage}{242}
\indexentry{Computation Graph|hyperpage}{242}
\indexentry{模型参数|hyperpage}{243}
\indexentry{Model Parameters|hyperpage}{243}
\indexentry{训练|hyperpage}{243}
\indexentry{Training|hyperpage}{243}
\indexentry{有标注数据|hyperpage}{243}
\indexentry{Annotated Data/Labeled Data|hyperpage}{243}
\indexentry{有指导的训练|hyperpage}{243}
\indexentry{有监督的训练|hyperpage}{243}
\indexentry{Supervised Training|hyperpage}{243}
\indexentry{训练数据集合|hyperpage}{244}
\indexentry{Training Data Set|hyperpage}{244}
\indexentry{损失函数|hyperpage}{244}
\indexentry{Loss Function|hyperpage}{244}
\indexentry{目标函数|hyperpage}{244}
\indexentry{Objective Function|hyperpage}{244}
\indexentry{代价函数|hyperpage}{246}
\indexentry{Cost Function|hyperpage}{246}
\indexentry{梯度下降方法|hyperpage}{246}
\indexentry{Gradient Descent Method|hyperpage}{246}
\indexentry{参数更新的规则|hyperpage}{246}
\indexentry{Update Rule|hyperpage}{246}
\indexentry{学习率|hyperpage}{246}
\indexentry{Learning Rate|hyperpage}{246}
\indexentry{基于梯度的方法|hyperpage}{246}
\indexentry{Gradient-based Method|hyperpage}{246}
\indexentry{批量梯度下降|hyperpage}{247}
\indexentry{Batch Gradient Descent|hyperpage}{247}
\indexentry{随机梯度下降|hyperpage}{247}
\indexentry{Stochastic Gradient Descent|hyperpage}{247}
\indexentry{小批量梯度下降|hyperpage}{247}
\indexentry{Mini-Batch Gradient Descent|hyperpage}{247}
\indexentry{数值微分|hyperpage}{248}
\indexentry{Numerical Differentiation|hyperpage}{248}
\indexentry{截断误差|hyperpage}{248}
\indexentry{Truncation Error|hyperpage}{248}
\indexentry{舍入误差|hyperpage}{248}
\indexentry{Round-off Error|hyperpage}{248}
\indexentry{符号微分|hyperpage}{249}
\indexentry{Symbolic Differentiation|hyperpage}{249}
\indexentry{表达式膨胀|hyperpage}{249}
\indexentry{Expression Swell|hyperpage}{249}
\indexentry{自动微分|hyperpage}{249}
\indexentry{Automatic Differentiation|hyperpage}{249}
\indexentry{反向模式|hyperpage}{250}
\indexentry{Backward Mode|hyperpage}{250}
\indexentry{学习率|hyperpage}{251}
\indexentry{Learning Rate|hyperpage}{251}
\indexentry{Momentum|hyperpage}{251}
\indexentry{AdaGrad|hyperpage}{252}
\indexentry{衰减|hyperpage}{252}
\indexentry{Decay|hyperpage}{252}
\indexentry{RMSprop|hyperpage}{252}
\indexentry{Adam|hyperpage}{253}
\indexentry{数据并行|hyperpage}{253}
\indexentry{同步更新|hyperpage}{254}
\indexentry{Synchronous Update|hyperpage}{254}
\indexentry{异步更新|hyperpage}{254}
\indexentry{Asynchronous Update|hyperpage}{254}
\indexentry{参数服务器|hyperpage}{254}
\indexentry{Parameter Server|hyperpage}{254}
\indexentry{梯度消失|hyperpage}{255}
\indexentry{Gradient Vanishing|hyperpage}{255}
\indexentry{梯度爆炸|hyperpage}{255}
\indexentry{Gradient Explosion|hyperpage}{255}
\indexentry{梯度裁剪|hyperpage}{256}
\indexentry{Gradient Clipping|hyperpage}{256}
\indexentry{批量归一化|hyperpage}{257}
\indexentry{Batch Normalization|hyperpage}{257}
\indexentry{层归一化|hyperpage}{257}
\indexentry{Layer Normalization|hyperpage}{257}
\indexentry{残差网络|hyperpage}{257}
\indexentry{Residual Networks|hyperpage}{257}
\indexentry{跳接|hyperpage}{257}
\indexentry{Shortcut Connection|hyperpage}{257}
\indexentry{过拟合|hyperpage}{258}
\indexentry{Overfitting|hyperpage}{258}
\indexentry{正则化|hyperpage}{258}
\indexentry{Regularization|hyperpage}{258}
\indexentry{反向传播|hyperpage}{259}
\indexentry{back propagation|hyperpage}{259}
\indexentry{神经语言模型|hyperpage}{265}
\indexentry{Neural Language Model|hyperpage}{265}
\indexentry{前馈神经网络语言模型|hyperpage}{266}
\indexentry{Feed-forward Neural Network Language Model|hyperpage}{266}
\indexentry{循环神经网络|hyperpage}{268}
\indexentry{Recurrent Neural Network|hyperpage}{268}
\indexentry{循环神经网络语言模型|hyperpage}{268}
\indexentry{RNNLM|hyperpage}{268}
\indexentry{循环单元|hyperpage}{268}
\indexentry{RNN Cell|hyperpage}{268}
\indexentry{自注意力机制|hyperpage}{270}
\indexentry{Self-Attention Mechanism|hyperpage}{270}
\indexentry{注意力权重|hyperpage}{270}
\indexentry{Attention Weight|hyperpage}{270}
\indexentry{困惑度|hyperpage}{271}
\indexentry{Perplexity|hyperpage}{271}
\indexentry{One-hot编码|hyperpage}{271}
\indexentry{独热编码|hyperpage}{271}
\indexentry{分布式表示|hyperpage}{272}
\indexentry{Distributed Representation|hyperpage}{272}
\indexentry{词嵌入|hyperpage}{272}
\indexentry{Word Embedding|hyperpage}{272}
\indexentry{句子表示模型|hyperpage}{274}
\indexentry{句子的表示|hyperpage}{274}
\indexentry{表示学习|hyperpage}{274}
\indexentry{Representation Learning|hyperpage}{274}
\indexentry{可解释机器学习|hyperpage}{278}
\indexentry{Explainable Machine Learning|hyperpage}{278}
\indexentry{神经机器翻译|hyperpage}{281}
\indexentry{Neural Machine Translation|hyperpage}{281}
\indexentry{分布式表示|hyperpage}{283}
\indexentry{Distributed Representation|hyperpage}{283}
\indexentry{特征工程|hyperpage}{289}
\indexentry{Feature Engineering|hyperpage}{289}
\indexentry{编码器-解码器模型|hyperpage}{290}
\indexentry{Encoder-Decoder Paradigm|hyperpage}{290}
\indexentry{编码器-解码器框架|hyperpage}{290}
\indexentry{循环神经网络|hyperpage}{295}
\indexentry{Recurrent Neural Network, RNN|hyperpage}{295}
\indexentry{词嵌入|hyperpage}{297}
\indexentry{Word Embedding|hyperpage}{297}
\indexentry{表示学习|hyperpage}{297}
\indexentry{Representation Learning|hyperpage}{297}
\indexentry{生成|hyperpage}{297}
\indexentry{Generation|hyperpage}{297}
\indexentry{长短时记忆|hyperpage}{302}
\indexentry{Long Short-Term Memory|hyperpage}{302}
\indexentry{遗忘|hyperpage}{302}
\indexentry{记忆更新|hyperpage}{303}
\indexentry{输出|hyperpage}{303}
\indexentry{门循环单元|hyperpage}{304}
\indexentry{Gated Recurrent Unit,GRU|hyperpage}{304}
\indexentry{注意力权重|hyperpage}{309}
\indexentry{Attention Weight|hyperpage}{309}
\indexentry{一阶矩估计|hyperpage}{315}
\indexentry{First Moment Estimation|hyperpage}{315}
\indexentry{二阶矩估计|hyperpage}{315}
\indexentry{Second Moment Estimation|hyperpage}{315}
\indexentry{学习率|hyperpage}{316}
\indexentry{Learning Rate|hyperpage}{316}
\indexentry{逐渐预热|hyperpage}{316}
\indexentry{Gradual Warmup|hyperpage}{316}
\indexentry{分段常数衰减|hyperpage}{317}
\indexentry{Piecewise Constant Decay|hyperpage}{317}
\indexentry{数据并行|hyperpage}{318}
\indexentry{模型并行|hyperpage}{318}
\indexentry{全搜索|hyperpage}{320}
\indexentry{Full Search|hyperpage}{320}
\indexentry{贪婪搜索|hyperpage}{320}
\indexentry{Greedy Search|hyperpage}{320}
\indexentry{束搜索|hyperpage}{320}
\indexentry{Beam Search|hyperpage}{320}
\indexentry{自回归模型|hyperpage}{320}
\indexentry{Autoregressive Model|hyperpage}{321}
\indexentry{非自回归模型|hyperpage}{321}
\indexentry{Non-autoregressive Model|hyperpage}{321}
\indexentry{自注意力机制|hyperpage}{326}
\indexentry{Self-Attention|hyperpage}{326}
\indexentry{特征提取|hyperpage}{327}
\indexentry{自注意力子层|hyperpage}{328}
\indexentry{Self-attention Sub-layer|hyperpage}{328}
\indexentry{前馈神经网络子层|hyperpage}{328}
\indexentry{Feed-forward Sub-layer|hyperpage}{328}
\indexentry{残差连接|hyperpage}{328}
\indexentry{Residual Connection|hyperpage}{328}
\indexentry{层正则化|hyperpage}{328}
\indexentry{Layer Normalization|hyperpage}{328}
\indexentry{编码-解码注意力子层|hyperpage}{329}
\indexentry{Encoder-decoder Attention Sub-layer|hyperpage}{329}
\indexentry{词嵌入|hyperpage}{329}
\indexentry{Word Embedding|hyperpage}{329}
\indexentry{位置编码|hyperpage}{329}
\indexentry{Position Embedding|hyperpage}{329}
\indexentry{点乘注意力|hyperpage}{333}
\indexentry{Scaled Dot-Product Attention|hyperpage}{333}
\indexentry{多头注意力|hyperpage}{335}
\indexentry{Multi-head Attention|hyperpage}{335}
\indexentry{残差连接|hyperpage}{336}
\indexentry{短连接|hyperpage}{337}
\indexentry{Short-cut Connection|hyperpage}{337}
\indexentry{后正则化|hyperpage}{338}
\indexentry{Post-norm|hyperpage}{338}
\indexentry{前正则化|hyperpage}{338}
\indexentry{Pre-norm|hyperpage}{338}
\indexentry{交叉熵损失|hyperpage}{339}
\indexentry{Cross Entropy Loss|hyperpage}{339}
\indexentry{预热|hyperpage}{339}
\indexentry{Warmup|hyperpage}{339}
\indexentry{小批量训练|hyperpage}{340}
\indexentry{Mini-batch Training|hyperpage}{340}
\indexentry{Dropout|hyperpage}{340}
\indexentry{过拟合|hyperpage}{340}
\indexentry{Over fitting|hyperpage}{340}
\indexentry{标签平滑|hyperpage}{340}
\indexentry{Label Smoothing|hyperpage}{340}
\indexentry{序列到序列的转换/生成问题|hyperpage}{342}
\indexentry{Sequence-to-Sequence Problem|hyperpage}{342}
\indexentry{未登录词|hyperpage}{353}
\indexentry{Out of Vocabulary Word,OOV Word|hyperpage}{353}
\indexentry{子词切分|hyperpage}{353}
\indexentry{Sub-word Segmentation|hyperpage}{353}
\indexentry{标准化|hyperpage}{353}
\indexentry{Normalization|hyperpage}{353}
\indexentry{数据清洗|hyperpage}{353}
\indexentry{Dada Cleaning|hyperpage}{353}
\indexentry{数据选择|hyperpage}{355}
\indexentry{Data Selection|hyperpage}{355}
\indexentry{数据过滤|hyperpage}{355}
\indexentry{Data Filtering|hyperpage}{355}
\indexentry{开放词表|hyperpage}{358}
\indexentry{Open-Vocabulary|hyperpage}{358}
\indexentry{子词|hyperpage}{359}
\indexentry{Sub-word|hyperpage}{359}
\indexentry{字节对编码|hyperpage}{359}
\indexentry{双字节编码|hyperpage}{359}
\indexentry{Byte Pair Encoding,BPE|hyperpage}{359}
\indexentry{正则化|hyperpage}{362}
\indexentry{Regularization|hyperpage}{362}
\indexentry{过拟合问题|hyperpage}{362}
\indexentry{Overfitting Problem|hyperpage}{362}
\indexentry{反问题|hyperpage}{362}
\indexentry{Inverse Problem|hyperpage}{362}
\indexentry{适定的|hyperpage}{363}
\indexentry{Well-posed|hyperpage}{363}
\indexentry{不适定问题|hyperpage}{363}
\indexentry{Ill-posed Problem|hyperpage}{363}
\indexentry{降噪|hyperpage}{363}
\indexentry{Denoising|hyperpage}{363}
\indexentry{泛化|hyperpage}{364}
\indexentry{Generalization|hyperpage}{364}
\indexentry{标签平滑|hyperpage}{365}
\indexentry{Label Smoothing|hyperpage}{365}
\indexentry{相互适应|hyperpage}{366}
\indexentry{Co-Adaptation|hyperpage}{366}
\indexentry{集成学习|hyperpage}{368}
\indexentry{Ensemble Learning|hyperpage}{368}
\indexentry{容量|hyperpage}{369}
\indexentry{Capacity|hyperpage}{369}
\indexentry{宽残差网络|hyperpage}{369}
\indexentry{Wide Residual Network|hyperpage}{369}
\indexentry{探测任务|hyperpage}{371}
\indexentry{Probing Task|hyperpage}{371}
\indexentry{表面信息|hyperpage}{371}
\indexentry{Surface Information|hyperpage}{371}
\indexentry{语法信息|hyperpage}{371}
\indexentry{Syntactic Information|hyperpage}{371}
\indexentry{语义信息|hyperpage}{371}
\indexentry{Semantic Information|hyperpage}{371}
\indexentry{词嵌入|hyperpage}{371}
\indexentry{Embedding|hyperpage}{371}
\indexentry{数据并行|hyperpage}{372}
\indexentry{Data Parallelism|hyperpage}{372}
\indexentry{模型并行|hyperpage}{372}
\indexentry{Model Parallelism|hyperpage}{372}
\indexentry{小批量训练|hyperpage}{372}
\indexentry{Mini-batch Training|hyperpage}{372}
\indexentry{课程学习|hyperpage}{374}
\indexentry{Curriculum Learning|hyperpage}{374}
\indexentry{推断|hyperpage}{375}
\indexentry{Inference|hyperpage}{375}
\indexentry{解码|hyperpage}{375}
\indexentry{Decoding|hyperpage}{375}
\indexentry{准确性|hyperpage}{375}
\indexentry{Accuracy|hyperpage}{375}
\indexentry{时延|hyperpage}{375}
\indexentry{Latency|hyperpage}{375}
\indexentry{时延|hyperpage}{375}
\indexentry{Memory|hyperpage}{375}
\indexentry{搜索错误|hyperpage}{375}
\indexentry{Search Error|hyperpage}{375}
\indexentry{模型错误|hyperpage}{375}
\indexentry{Modeling Error|hyperpage}{375}
\indexentry{重排序|hyperpage}{377}
\indexentry{Re-ranking|hyperpage}{377}
\indexentry{双向推断|hyperpage}{377}
\indexentry{Bidirectional Inference|hyperpage}{377}
\indexentry{批量推断|hyperpage}{381}
\indexentry{Batch Inference|hyperpage}{381}
\indexentry{批量处理|hyperpage}{381}
\indexentry{Batching|hyperpage}{381}
\indexentry{二值网络|hyperpage}{383}
\indexentry{Binarized Neural Networks|hyperpage}{383}
\indexentry{自回归翻译|hyperpage}{383}
\indexentry{Autoregressive Translation|hyperpage}{383}
\indexentry{非自回归翻译|hyperpage}{383}
\indexentry{Non-Autoregressive Translation|hyperpage}{383}
\indexentry{繁衍率|hyperpage}{383}
\indexentry{Fertility|hyperpage}{383}
\indexentry{偏置|hyperpage}{385}
\indexentry{Bias|hyperpage}{385}
\indexentry{退化|hyperpage}{385}
\indexentry{Degenerate|hyperpage}{385}
\indexentry{过翻译|hyperpage}{386}
\indexentry{Over Translation|hyperpage}{386}
\indexentry{欠翻译|hyperpage}{386}
\indexentry{Under Translation|hyperpage}{386}
\indexentry{充分性|hyperpage}{387}
\indexentry{Adequacy|hyperpage}{387}
\indexentry{系统融合|hyperpage}{388}
\indexentry{System Combination|hyperpage}{388}
\indexentry{假设选择|hyperpage}{388}
\indexentry{Hypothesis Selection|hyperpage}{388}
\indexentry{多样性|hyperpage}{388}
\indexentry{Diversity|hyperpage}{388}
\indexentry{重排序|hyperpage}{389}
\indexentry{Re-ranking|hyperpage}{389}
\indexentry{混淆网络|hyperpage}{390}
\indexentry{Confusion Network|hyperpage}{390}
\indexentry{动态线性层聚合方法|hyperpage}{394}
\indexentry{Dynamic Linear Combination of Layers,DLCL|hyperpage}{394}
\indexentry{相互适应|hyperpage}{398}
\indexentry{Co-adaptation|hyperpage}{398}
\indexentry{数据增强|hyperpage}{401}
\indexentry{Data Augmentation|hyperpage}{401}
\indexentry{回译|hyperpage}{401}
\indexentry{Back Translation|hyperpage}{401}
\indexentry{迭代式回译|hyperpage}{401}
\indexentry{Iterative Back Translation|hyperpage}{401}
\indexentry{前向翻译|hyperpage}{402}
\indexentry{Forward Translation|hyperpage}{402}
\indexentry{预训练|hyperpage}{402}
\indexentry{Pre-training|hyperpage}{402}
\indexentry{微调|hyperpage}{402}
\indexentry{Fine-tuning|hyperpage}{402}
\indexentry{多任务学习|hyperpage}{404}
\indexentry{Multitask Learning|hyperpage}{404}
\indexentry{模型压缩|hyperpage}{405}
\indexentry{Model Compression|hyperpage}{405}
\indexentry{学习难度|hyperpage}{405}
\indexentry{Learning Difficulty|hyperpage}{406}
\indexentry{教师模型|hyperpage}{406}
\indexentry{Teacher Model|hyperpage}{406}
\indexentry{学生模型|hyperpage}{406}
\indexentry{Student Model|hyperpage}{406}
\indexentry{基于单词的知识精炼|hyperpage}{406}
\indexentry{Word-level Knowledge Distillation|hyperpage}{406}
\indexentry{基于序列的知识精炼|hyperpage}{407}
\indexentry{Sequence-level Knowledge Distillation|hyperpage}{407}
\indexentry{中间层输出|hyperpage}{408}
\indexentry{Hint-based Knowledge Transfer|hyperpage}{408}
\indexentry{注意力分布|hyperpage}{408}
\indexentry{Attention To Attention Transfer|hyperpage}{408}
\indexentry{循环一致性|hyperpage}{410}
\indexentry{Circle Consistency|hyperpage}{410}
\indexentry{翻译中回译|hyperpage}{411}
\indexentry{On-the-fly Back-translation|hyperpage}{411}
\indexentry{网络结构搜索技术|hyperpage}{414}
\indexentry{Neural Architecture Search;NAS|hyperpage}{414}
\indexentry{树到串翻译规则|hyperpage}{179}
\indexentry{Tree-to-String Translation Rule|hyperpage}{179}
\indexentry{树到树翻译规则|hyperpage}{179}
\indexentry{Tree-to-Tree Translation Rule|hyperpage}{179}
\indexentry{树片段|hyperpage}{180}
\indexentry{Tree Fragment|hyperpage}{180}
\indexentry{同步树替换文法规则|hyperpage}{181}
\indexentry{Synchronous Tree Substitution Grammar Rule|hyperpage}{181}
\indexentry{边缘集合|hyperpage}{187}
\indexentry{Frontier Set|hyperpage}{187}
\indexentry{最小规则|hyperpage}{188}
\indexentry{Minimal Rules|hyperpage}{188}
\indexentry{二叉化|hyperpage}{191}
\indexentry{Binarization|hyperpage}{191}
\indexentry{基于短语的特征|hyperpage}{195}
\indexentry{基于句法的特征|hyperpage}{195}
\indexentry{有向超图|hyperpage}{196}
\indexentry{Directed Hyper-graph|hyperpage}{196}
\indexentry{超边|hyperpage}{196}
\indexentry{Hyper-edge|hyperpage}{196}
\indexentry{半环分析|hyperpage}{197}
\indexentry{Semi-ring Parsing|hyperpage}{197}
\indexentry{组合|hyperpage}{198}
\indexentry{Composition|hyperpage}{198}
\indexentry{基于串的解码|hyperpage}{199}
\indexentry{String-based Decoding|hyperpage}{199}
\indexentry{基于树的解码|hyperpage}{199}
\indexentry{Tree-based Decoding|hyperpage}{199}
\indexentry{Lexicalized Norm Form|hyperpage}{202}
\indexentry{人工神经网络|hyperpage}{207}
\indexentry{Artificial Neural Networks|hyperpage}{207}
\indexentry{神经网络|hyperpage}{207}
\indexentry{Neural Networks|hyperpage}{207}
\indexentry{深度学习|hyperpage}{208}
\indexentry{Deep Learning|hyperpage}{208}
\indexentry{连接主义|hyperpage}{209}
\indexentry{Connectionism|hyperpage}{209}
\indexentry{分布式表示|hyperpage}{209}
\indexentry{Distributed representation|hyperpage}{209}
\indexentry{符号主义|hyperpage}{209}
\indexentry{Symbolicism|hyperpage}{209}
\indexentry{端到端学习|hyperpage}{211}
\indexentry{End-to-End Learning|hyperpage}{211}
\indexentry{表示学习|hyperpage}{211}
\indexentry{Representation Learning|hyperpage}{211}
\indexentry{分布式表示|hyperpage}{212}
\indexentry{Distributed Representation|hyperpage}{212}
\indexentry{标量|hyperpage}{213}
\indexentry{Scalar|hyperpage}{213}
\indexentry{向量|hyperpage}{213}
\indexentry{Vector|hyperpage}{213}
\indexentry{矩阵|hyperpage}{213}
\indexentry{Matrix|hyperpage}{213}
\indexentry{转置|hyperpage}{214}
\indexentry{Transpose|hyperpage}{214}
\indexentry{按元素加法|hyperpage}{214}
\indexentry{Element-wise Addition|hyperpage}{214}
\indexentry{数乘|hyperpage}{215}
\indexentry{Scalar Multiplication|hyperpage}{215}
\indexentry{按元素乘积|hyperpage}{216}
\indexentry{Element-wise Product|hyperpage}{216}
\indexentry{线性映射|hyperpage}{216}
\indexentry{Linear Mapping|hyperpage}{216}
\indexentry{线性变换|hyperpage}{216}
\indexentry{Linear Transformation|hyperpage}{216}
\indexentry{范数|hyperpage}{217}
\indexentry{Norm|hyperpage}{217}
\indexentry{欧几里得范数|hyperpage}{218}
\indexentry{Euclidean Norm|hyperpage}{218}
\indexentry{Frobenius 范数|hyperpage}{218}
\indexentry{Frobenius Norm|hyperpage}{218}
\indexentry{权重|hyperpage}{219}
\indexentry{weight|hyperpage}{219}
\indexentry{张量|hyperpage}{229}
\indexentry{Tensor|hyperpage}{229}
\indexentry{阶|hyperpage}{229}
\indexentry{Rank|hyperpage}{229}
\indexentry{广播机制|hyperpage}{233}
\indexentry{向量化|hyperpage}{233}
\indexentry{Vectorization|hyperpage}{233}
\indexentry{前向传播|hyperpage}{237}
\indexentry{计算图|hyperpage}{238}
\indexentry{Computation Graph|hyperpage}{238}
\indexentry{模型参数|hyperpage}{239}
\indexentry{Model Parameters|hyperpage}{239}
\indexentry{训练|hyperpage}{239}
\indexentry{Training|hyperpage}{239}
\indexentry{有标注数据|hyperpage}{239}
\indexentry{Annotated Data/Labeled Data|hyperpage}{239}
\indexentry{有指导的训练|hyperpage}{239}
\indexentry{有监督的训练|hyperpage}{239}
\indexentry{Supervised Training|hyperpage}{239}
\indexentry{训练数据集合|hyperpage}{240}
\indexentry{Training Data Set|hyperpage}{240}
\indexentry{损失函数|hyperpage}{240}
\indexentry{Loss Function|hyperpage}{240}
\indexentry{目标函数|hyperpage}{240}
\indexentry{Objective Function|hyperpage}{240}
\indexentry{代价函数|hyperpage}{242}
\indexentry{Cost Function|hyperpage}{242}
\indexentry{梯度下降方法|hyperpage}{242}
\indexentry{Gradient Descent Method|hyperpage}{242}
\indexentry{参数更新的规则|hyperpage}{242}
\indexentry{Update Rule|hyperpage}{242}
\indexentry{学习率|hyperpage}{242}
\indexentry{Learning Rate|hyperpage}{242}
\indexentry{基于梯度的方法|hyperpage}{242}
\indexentry{Gradient-based Method|hyperpage}{242}
\indexentry{批量梯度下降|hyperpage}{243}
\indexentry{Batch Gradient Descent|hyperpage}{243}
\indexentry{随机梯度下降|hyperpage}{243}
\indexentry{Stochastic Gradient Descent|hyperpage}{243}
\indexentry{小批量梯度下降|hyperpage}{243}
\indexentry{Mini-Batch Gradient Descent|hyperpage}{243}
\indexentry{数值微分|hyperpage}{244}
\indexentry{Numerical Differentiation|hyperpage}{244}
\indexentry{截断误差|hyperpage}{244}
\indexentry{Truncation Error|hyperpage}{244}
\indexentry{舍入误差|hyperpage}{244}
\indexentry{Round-off Error|hyperpage}{244}
\indexentry{符号微分|hyperpage}{245}
\indexentry{Symbolic Differentiation|hyperpage}{245}
\indexentry{表达式膨胀|hyperpage}{245}
\indexentry{Expression Swell|hyperpage}{245}
\indexentry{自动微分|hyperpage}{245}
\indexentry{Automatic Differentiation|hyperpage}{245}
\indexentry{反向模式|hyperpage}{246}
\indexentry{Backward Mode|hyperpage}{246}
\indexentry{学习率|hyperpage}{247}
\indexentry{Learning Rate|hyperpage}{247}
\indexentry{Momentum|hyperpage}{247}
\indexentry{AdaGrad|hyperpage}{248}
\indexentry{衰减|hyperpage}{248}
\indexentry{Decay|hyperpage}{248}
\indexentry{RMSprop|hyperpage}{248}
\indexentry{Adam|hyperpage}{249}
\indexentry{数据并行|hyperpage}{249}
\indexentry{同步更新|hyperpage}{250}
\indexentry{Synchronous Update|hyperpage}{250}
\indexentry{异步更新|hyperpage}{250}
\indexentry{Asynchronous Update|hyperpage}{250}
\indexentry{参数服务器|hyperpage}{250}
\indexentry{Parameter Server|hyperpage}{250}
\indexentry{梯度消失|hyperpage}{251}
\indexentry{Gradient Vanishing|hyperpage}{251}
\indexentry{梯度爆炸|hyperpage}{251}
\indexentry{Gradient Explosion|hyperpage}{251}
\indexentry{梯度裁剪|hyperpage}{252}
\indexentry{Gradient Clipping|hyperpage}{252}
\indexentry{批量归一化|hyperpage}{253}
\indexentry{Batch Normalization|hyperpage}{253}
\indexentry{层归一化|hyperpage}{253}
\indexentry{Layer Normalization|hyperpage}{253}
\indexentry{残差网络|hyperpage}{253}
\indexentry{Residual Networks|hyperpage}{253}
\indexentry{跳接|hyperpage}{253}
\indexentry{Shortcut Connection|hyperpage}{253}
\indexentry{过拟合|hyperpage}{254}
\indexentry{Overfitting|hyperpage}{254}
\indexentry{正则化|hyperpage}{254}
\indexentry{Regularization|hyperpage}{254}
\indexentry{反向传播|hyperpage}{255}
\indexentry{back propagation|hyperpage}{255}
\indexentry{神经语言模型|hyperpage}{261}
\indexentry{Neural Language Model|hyperpage}{261}
\indexentry{前馈神经网络语言模型|hyperpage}{262}
\indexentry{Feed-forward Neural Network Language Model|hyperpage}{262}
\indexentry{循环神经网络|hyperpage}{264}
\indexentry{Recurrent Neural Network|hyperpage}{264}
\indexentry{循环神经网络语言模型|hyperpage}{264}
\indexentry{RNNLM|hyperpage}{264}
\indexentry{循环单元|hyperpage}{264}
\indexentry{RNN Cell|hyperpage}{264}
\indexentry{自注意力机制|hyperpage}{266}
\indexentry{Self-Attention Mechanism|hyperpage}{266}
\indexentry{注意力权重|hyperpage}{266}
\indexentry{Attention Weight|hyperpage}{266}
\indexentry{困惑度|hyperpage}{267}
\indexentry{Perplexity|hyperpage}{267}
\indexentry{One-hot编码|hyperpage}{267}
\indexentry{独热编码|hyperpage}{267}
\indexentry{分布式表示|hyperpage}{268}
\indexentry{Distributed Representation|hyperpage}{268}
\indexentry{词嵌入|hyperpage}{268}
\indexentry{Word Embedding|hyperpage}{268}
\indexentry{句子表示模型|hyperpage}{270}
\indexentry{句子的表示|hyperpage}{270}
\indexentry{表示学习|hyperpage}{270}
\indexentry{Representation Learning|hyperpage}{270}
\indexentry{可解释机器学习|hyperpage}{274}
\indexentry{Explainable Machine Learning|hyperpage}{274}
\indexentry{神经机器翻译|hyperpage}{275}
\indexentry{Neural Machine Translation|hyperpage}{275}
\indexentry{分布式表示|hyperpage}{277}
\indexentry{Distributed Representation|hyperpage}{277}
\indexentry{特征工程|hyperpage}{283}
\indexentry{Feature Engineering|hyperpage}{283}
\indexentry{编码器-解码器模型|hyperpage}{283}
\indexentry{Encoder-Decoder Paradigm|hyperpage}{283}
\indexentry{编码器-解码器框架|hyperpage}{283}
\indexentry{循环神经网络|hyperpage}{289}
\indexentry{Recurrent Neural Network, RNN|hyperpage}{289}
\indexentry{词嵌入|hyperpage}{291}
\indexentry{Word Embedding|hyperpage}{291}
\indexentry{表示学习|hyperpage}{291}
\indexentry{Representation Learning|hyperpage}{291}
\indexentry{生成|hyperpage}{291}
\indexentry{Generation|hyperpage}{291}
\indexentry{长短时记忆|hyperpage}{295}
\indexentry{Long Short-Term Memory|hyperpage}{295}
\indexentry{遗忘|hyperpage}{296}
\indexentry{记忆更新|hyperpage}{296}
\indexentry{输出|hyperpage}{297}
\indexentry{门循环单元|hyperpage}{298}
\indexentry{Gated Recurrent Unit,GRU|hyperpage}{298}
\indexentry{注意力权重|hyperpage}{303}
\indexentry{Attention Weight|hyperpage}{303}
\indexentry{一阶矩估计|hyperpage}{309}
\indexentry{First Moment Estimation|hyperpage}{309}
\indexentry{二阶矩估计|hyperpage}{309}
\indexentry{Second Moment Estimation|hyperpage}{309}
\indexentry{学习率|hyperpage}{309}
\indexentry{Learning Rate|hyperpage}{309}
\indexentry{逐渐预热|hyperpage}{310}
\indexentry{Gradual Warmup|hyperpage}{310}
\indexentry{分段常数衰减|hyperpage}{311}
\indexentry{Piecewise Constant Decay|hyperpage}{311}
\indexentry{数据并行|hyperpage}{311}
\indexentry{模型并行|hyperpage}{312}
\indexentry{全搜索|hyperpage}{313}
\indexentry{Full Search|hyperpage}{313}
\indexentry{贪婪搜索|hyperpage}{313}
\indexentry{Greedy Search|hyperpage}{313}
\indexentry{束搜索|hyperpage}{314}
\indexentry{Beam Search|hyperpage}{314}
\indexentry{自回归模型|hyperpage}{314}
\indexentry{Autoregressive Model|hyperpage}{314}
\indexentry{非自回归模型|hyperpage}{314}
\indexentry{Non-autoregressive Model|hyperpage}{314}
\indexentry{自注意力机制|hyperpage}{320}
\indexentry{Self-Attention|hyperpage}{320}
\indexentry{特征提取|hyperpage}{321}
\indexentry{自注意力子层|hyperpage}{321}
\indexentry{Self-attention Sub-layer|hyperpage}{321}
\indexentry{前馈神经网络子层|hyperpage}{322}
\indexentry{Feed-forward Sub-layer|hyperpage}{322}
\indexentry{残差连接|hyperpage}{322}
\indexentry{Residual Connection|hyperpage}{322}
\indexentry{层正则化|hyperpage}{322}
\indexentry{Layer Normalization|hyperpage}{322}
\indexentry{编码-解码注意力子层|hyperpage}{322}
\indexentry{Encoder-decoder Attention Sub-layer|hyperpage}{322}
\indexentry{词嵌入|hyperpage}{323}
\indexentry{Word Embedding|hyperpage}{323}
\indexentry{位置编码|hyperpage}{323}
\indexentry{Position Embedding|hyperpage}{323}
\indexentry{点乘注意力|hyperpage}{326}
\indexentry{Scaled Dot-Product Attention|hyperpage}{326}
\indexentry{多头注意力|hyperpage}{328}
\indexentry{Multi-head Attention|hyperpage}{328}
\indexentry{残差连接|hyperpage}{329}
\indexentry{短连接|hyperpage}{330}
\indexentry{Short-cut Connection|hyperpage}{330}
\indexentry{后正则化|hyperpage}{331}
\indexentry{Post-norm|hyperpage}{331}
\indexentry{前正则化|hyperpage}{331}
\indexentry{Pre-norm|hyperpage}{331}
\indexentry{交叉熵损失|hyperpage}{332}
\indexentry{Cross Entropy Loss|hyperpage}{332}
\indexentry{预热|hyperpage}{332}
\indexentry{Warmup|hyperpage}{332}
\indexentry{小批量训练|hyperpage}{333}
\indexentry{Mini-batch Training|hyperpage}{333}
\indexentry{Dropout|hyperpage}{333}
\indexentry{过拟合|hyperpage}{333}
\indexentry{Over fitting|hyperpage}{333}
\indexentry{标签平滑|hyperpage}{333}
\indexentry{Label Smoothing|hyperpage}{333}
\indexentry{序列到序列的转换/生成问题|hyperpage}{335}
\indexentry{Sequence-to-Sequence Problem|hyperpage}{335}
\indexentry{未登录词|hyperpage}{345}
\indexentry{Out of Vocabulary Word,OOV Word|hyperpage}{345}
\indexentry{子词切分|hyperpage}{345}
\indexentry{Sub-word Segmentation|hyperpage}{345}
\indexentry{标准化|hyperpage}{345}
\indexentry{Normalization|hyperpage}{345}
\indexentry{数据清洗|hyperpage}{345}
\indexentry{Dada Cleaning|hyperpage}{345}
\indexentry{数据选择|hyperpage}{347}
\indexentry{Data Selection|hyperpage}{347}
\indexentry{数据过滤|hyperpage}{347}
\indexentry{Data Filtering|hyperpage}{347}
\indexentry{开放词表|hyperpage}{350}
\indexentry{Open-Vocabulary|hyperpage}{350}
\indexentry{子词|hyperpage}{351}
\indexentry{Sub-word|hyperpage}{351}
\indexentry{字节对编码|hyperpage}{351}
\indexentry{双字节编码|hyperpage}{351}
\indexentry{Byte Pair Encoding,BPE|hyperpage}{351}
\indexentry{正则化|hyperpage}{354}
\indexentry{Regularization|hyperpage}{354}
\indexentry{过拟合问题|hyperpage}{354}
\indexentry{Overfitting Problem|hyperpage}{354}
\indexentry{反问题|hyperpage}{354}
\indexentry{Inverse Problem|hyperpage}{354}
\indexentry{适定的|hyperpage}{354}
\indexentry{Well-posed|hyperpage}{354}
\indexentry{不适定问题|hyperpage}{354}
\indexentry{Ill-posed Problem|hyperpage}{354}
\indexentry{降噪|hyperpage}{355}
\indexentry{Denoising|hyperpage}{355}
\indexentry{泛化|hyperpage}{355}
\indexentry{Generalization|hyperpage}{355}
\indexentry{标签平滑|hyperpage}{357}
\indexentry{Label Smoothing|hyperpage}{357}
\indexentry{相互适应|hyperpage}{358}
\indexentry{Co-Adaptation|hyperpage}{358}
\indexentry{集成学习|hyperpage}{359}
\indexentry{Ensemble Learning|hyperpage}{359}
\indexentry{容量|hyperpage}{360}
\indexentry{Capacity|hyperpage}{360}
\indexentry{宽残差网络|hyperpage}{361}
\indexentry{Wide Residual Network|hyperpage}{361}
\indexentry{探测任务|hyperpage}{362}
\indexentry{Probing Task|hyperpage}{362}
\indexentry{表面信息|hyperpage}{362}
\indexentry{Surface Information|hyperpage}{362}
\indexentry{语法信息|hyperpage}{362}
\indexentry{Syntactic Information|hyperpage}{362}
\indexentry{语义信息|hyperpage}{362}
\indexentry{Semantic Information|hyperpage}{362}
\indexentry{词嵌入|hyperpage}{362}
\indexentry{Embedding|hyperpage}{362}
\indexentry{数据并行|hyperpage}{363}
\indexentry{Data Parallelism|hyperpage}{363}
\indexentry{模型并行|hyperpage}{363}
\indexentry{Model Parallelism|hyperpage}{363}
\indexentry{小批量训练|hyperpage}{363}
\indexentry{Mini-batch Training|hyperpage}{363}
\indexentry{课程学习|hyperpage}{365}
\indexentry{Curriculum Learning|hyperpage}{365}
\indexentry{推断|hyperpage}{366}
\indexentry{Inference|hyperpage}{366}
\indexentry{解码|hyperpage}{366}
\indexentry{Decoding|hyperpage}{366}
\indexentry{准确性|hyperpage}{366}
\indexentry{Accuracy|hyperpage}{366}
\indexentry{时延|hyperpage}{366}
\indexentry{Latency|hyperpage}{366}
\indexentry{时延|hyperpage}{366}
\indexentry{Memory|hyperpage}{366}
\indexentry{搜索错误|hyperpage}{366}
\indexentry{Search Error|hyperpage}{366}
\indexentry{模型错误|hyperpage}{366}
\indexentry{Modeling Error|hyperpage}{366}
\indexentry{重排序|hyperpage}{368}
\indexentry{Re-ranking|hyperpage}{368}
\indexentry{双向推断|hyperpage}{368}
\indexentry{Bidirectional Inference|hyperpage}{368}
\indexentry{批量推断|hyperpage}{371}
\indexentry{Batch Inference|hyperpage}{371}
\indexentry{批量处理|hyperpage}{371}
\indexentry{Batching|hyperpage}{371}
\indexentry{二值网络|hyperpage}{373}
\indexentry{Binarized Neural Networks|hyperpage}{373}
\indexentry{自回归翻译|hyperpage}{373}
\indexentry{Autoregressive Translation|hyperpage}{373}
\indexentry{非自回归翻译|hyperpage}{374}
\indexentry{Non-Autoregressive Translation|hyperpage}{374}
\indexentry{繁衍率|hyperpage}{374}
\indexentry{Fertility|hyperpage}{374}
\indexentry{偏置|hyperpage}{375}
\indexentry{Bias|hyperpage}{375}
\indexentry{退化|hyperpage}{375}
\indexentry{Degenerate|hyperpage}{375}
\indexentry{过翻译|hyperpage}{377}
\indexentry{Over Translation|hyperpage}{377}
\indexentry{欠翻译|hyperpage}{377}
\indexentry{Under Translation|hyperpage}{377}
\indexentry{充分性|hyperpage}{377}
\indexentry{Adequacy|hyperpage}{377}
\indexentry{系统融合|hyperpage}{378}
\indexentry{System Combination|hyperpage}{378}
\indexentry{假设选择|hyperpage}{379}
\indexentry{Hypothesis Selection|hyperpage}{379}
\indexentry{多样性|hyperpage}{379}
\indexentry{Diversity|hyperpage}{379}
\indexentry{重排序|hyperpage}{379}
\indexentry{Re-ranking|hyperpage}{379}
\indexentry{混淆网络|hyperpage}{381}
\indexentry{Confusion Network|hyperpage}{381}
\indexentry{动态线性层聚合方法|hyperpage}{385}
\indexentry{Dynamic Linear Combination of Layers,DLCL|hyperpage}{385}
\indexentry{相互适应|hyperpage}{389}
\indexentry{Co-adaptation|hyperpage}{389}
\indexentry{数据增强|hyperpage}{391}
\indexentry{Data Augmentation|hyperpage}{391}
\indexentry{回译|hyperpage}{391}
\indexentry{Back Translation|hyperpage}{391}
\indexentry{迭代式回译|hyperpage}{392}
\indexentry{Iterative Back Translation|hyperpage}{392}
\indexentry{前向翻译|hyperpage}{392}
\indexentry{Forward Translation|hyperpage}{392}
\indexentry{预训练|hyperpage}{393}
\indexentry{Pre-training|hyperpage}{393}
\indexentry{微调|hyperpage}{393}
\indexentry{Fine-tuning|hyperpage}{393}
\indexentry{多任务学习|hyperpage}{394}
\indexentry{Multitask Learning|hyperpage}{394}
\indexentry{模型压缩|hyperpage}{396}
\indexentry{Model Compression|hyperpage}{396}
\indexentry{学习难度|hyperpage}{396}
\indexentry{Learning Difficulty|hyperpage}{396}
\indexentry{教师模型|hyperpage}{396}
\indexentry{Teacher Model|hyperpage}{396}
\indexentry{学生模型|hyperpage}{397}
\indexentry{Student Model|hyperpage}{397}
\indexentry{基于单词的知识精炼|hyperpage}{397}
\indexentry{Word-level Knowledge Distillation|hyperpage}{397}
\indexentry{基于序列的知识精炼|hyperpage}{397}
\indexentry{Sequence-level Knowledge Distillation|hyperpage}{397}
\indexentry{中间层输出|hyperpage}{398}
\indexentry{Hint-based Knowledge Transfer|hyperpage}{398}
\indexentry{注意力分布|hyperpage}{398}
\indexentry{Attention To Attention Transfer|hyperpage}{398}
\indexentry{循环一致性|hyperpage}{401}
\indexentry{Circle Consistency|hyperpage}{401}
\indexentry{翻译中回译|hyperpage}{402}
\indexentry{On-the-fly Back-translation|hyperpage}{402}
\indexentry{网络结构搜索技术|hyperpage}{404}
\indexentry{Neural Architecture Search;NAS|hyperpage}{404}
Book/mt-book-xelatex.ptc
查看文件 @ bd87e7ad
\boolfalse {citerequest}\boolfalse {citetracker}\boolfalse {pagetracker}\boolfalse {backtracker}\relax
\babel@toc {english}{}
\defcounter {refsection}{0}\relax
\select@language {english}
\defcounter {refsection}{0}\relax
\contentsline {part}{\@mypartnumtocformat {I}{机器翻译基础}}{15}{part.1}
\contentsline {part}{\@mypartnumtocformat {I}{机器翻译基础}}{15}{part.1}%
\ttl@starttoc {default@1}
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {1}机器翻译简介}{17}{chapter.1}
\contentsline {chapter}{\numberline {1}机器翻译简介}{17}{chapter.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.1}机器翻译的概念}{17}{section.1.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.2}机器翻译简史}{20}{section.1.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.2.1}人工翻译}{20}{subsection.1.2.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.1}机器翻译的概念}{17}{section.1.1}
\contentsline {subsection}{\numberline {1.2.2}机器翻译的萌芽}{21}{subsection.1.2.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.2}机器翻译简史}{20}{section.1.2}
\contentsline {subsection}{\numberline {1.2.3}机器翻译的受挫}{22}{subsection.1.2.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.2.1}人工翻译}{20}{subsection.1.2.1}
\contentsline {subsection}{\numberline {1.2.4}机器翻译的快速成长}{23}{subsection.1.2.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.2.2}机器翻译的萌芽}{21}{subsection.1.2.2}
\contentsline {subsection}{\numberline {1.2.5}机器翻译的爆发}{24}{subsection.1.2.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.2.3}机器翻译的受挫}{22}{subsection.1.2.3}
\contentsline {section}{\numberline {1.3}机器翻译现状}{25}{section.1.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.2.4}机器翻译的快速成长}{23}{subsection.1.2.4}
\contentsline {section}{\numberline {1.4}机器翻译方法}{27}{section.1.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.2.5}机器翻译的爆发}{24}{subsection.1.2.5}
\contentsline {subsection}{\numberline {1.4.1}基于规则的机器翻译}{27}{subsection.1.4.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.3}机器翻译现状}{25}{section.1.3}
\contentsline {subsection}{\numberline {1.4.2}基于实例的机器翻译}{28}{subsection.1.4.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.4}机器翻译方法}{27}{section.1.4}
\contentsline {subsection}{\numberline {1.4.3}统计机器翻译}{29}{subsection.1.4.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.4.1}基于规则的机器翻译}{27}{subsection.1.4.1}
\contentsline {subsection}{\numberline {1.4.4}神经机器翻译}{30}{subsection.1.4.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.4.2}基于实例的机器翻译}{28}{subsection.1.4.2}
\contentsline {subsection}{\numberline {1.4.5}对比分析}{31}{subsection.1.4.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.4.3}统计机器翻译}{29}{subsection.1.4.3}
\contentsline {section}{\numberline {1.5}翻译质量评价}{32}{section.1.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.4.4}神经机器翻译}{30}{subsection.1.4.4}
\contentsline {subsection}{\numberline {1.5.1}人工评价}{32}{subsection.1.5.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.4.5}对比分析}{31}{subsection.1.4.5}
\contentsline {subsection}{\numberline {1.5.2}自动评价}{33}{subsection.1.5.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.5}翻译质量评价}{32}{section.1.5}
\contentsline {subsubsection}{BLEU}{33}{section*.17}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.5.1}人工评价}{32}{subsection.1.5.1}
\contentsline {subsubsection}{TER}{34}{section*.18}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.5.2}自动评价}{33}{subsection.1.5.2}
\contentsline {subsubsection}{基于检测点的评价}{35}{section*.19}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{BLEU}{33}{section*.17}
\contentsline {section}{\numberline {1.6}机器翻译应用}{36}{section.1.6}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{TER}{35}{section*.18}
\contentsline {section}{\numberline {1.7}开源项目与评测}{38}{section.1.7}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于检测点的评价}{35}{section*.19}
\contentsline {subsection}{\numberline {1.7.1}开源机器翻译系统}{38}{subsection.1.7.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.6}机器翻译应用}{36}{section.1.6}
\contentsline {subsubsection}{统计机器翻译开源系统}{38}{section*.21}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.7}开源项目与评测}{38}{section.1.7}
\contentsline {subsubsection}{神经机器翻译开源系统}{40}{section*.22}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.7.1}开源机器翻译系统}{38}{subsection.1.7.1}
\contentsline {subsection}{\numberline {1.7.2}常用数据集及公开评测任务}{42}{subsection.1.7.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{统计机器翻译开源系统}{39}{section*.21}
\contentsline {section}{\numberline {1.8}推荐学习资源}{44}{section.1.8}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{神经机器翻译开源系统}{40}{section*.22}
\contentsline {subsection}{\numberline {1.8.1}经典书籍}{44}{subsection.1.8.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {1.7.2}常用数据集及公开评测任务}{42}{subsection.1.7.2}
\contentsline {subsection}{\numberline {1.8.2}网络资源}{45}{subsection.1.8.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {1.8}推荐学习资源}{44}{section.1.8}
\contentsline {subsection}{\numberline {1.8.3}专业组织和会议}{45}{subsection.1.8.3}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {2}词法、语法及统计建模基础}{49}{chapter.2}
\contentsline {chapter}{\numberline {2}词法、语法及统计建模基础}{49}{chapter.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.1}问题概述 }{50}{section.2.1}
\contentsline {section}{\numberline {2.1}问题概述 }{50}{section.2.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.2}概率论基础}{51}{section.2.2}
\contentsline {section}{\numberline {2.2}概率论基础}{51}{section.2.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.2.1}随机变量和概率}{52}{subsection.2.2.1}
\contentsline {subsection}{\numberline {2.2.1}随机变量和概率}{52}{subsection.2.2.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.2.2}联合概率、条件概率和边缘概率}{53}{subsection.2.2.2}
\contentsline {subsection}{\numberline {2.2.2}联合概率、条件概率和边缘概率}{53}{subsection.2.2.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.2.3}链式法则}{54}{subsection.2.2.3}
\contentsline {subsection}{\numberline {2.2.3}链式法则}{54}{subsection.2.2.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.2.4}贝叶斯法则}{55}{subsection.2.2.4}
\contentsline {subsection}{\numberline {2.2.4}贝叶斯法则}{55}{subsection.2.2.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.2.5}KL距离和熵}{57}{subsection.2.2.5}
\contentsline {subsection}{\numberline {2.2.5}KL距离和熵}{57}{subsection.2.2.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{信息熵}{57}{section*.29}
\contentsline {subsubsection}{信息熵}{57}{section*.29}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{KL距离}{58}{section*.31}
\contentsline {subsubsection}{KL距离}{58}{section*.31}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{交叉熵}{58}{section*.32}
\contentsline {subsubsection}{交叉熵}{58}{section*.32}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.3}中文分词}{59}{section.2.3}
\contentsline {section}{\numberline {2.3}中文分词}{59}{section.2.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.3.1}基于词典的分词方法}{60}{subsection.2.3.1}
\contentsline {subsection}{\numberline {2.3.1}基于词典的分词方法}{60}{subsection.2.3.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.3.2}基于统计的分词方法}{61}{subsection.2.3.2}
\contentsline {subsection}{\numberline {2.3.2}基于统计的分词方法}{61}{subsection.2.3.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{统计模型的学习与推断}{61}{section*.36}
\contentsline {subsubsection}{统计模型的学习与推断}{61}{section*.36}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{掷骰子游戏}{62}{section*.38}
\contentsline {subsubsection}{掷骰子游戏}{62}{section*.38}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{全概率分词方法}{64}{section*.42}
\contentsline {subsubsection}{全概率分词方法}{64}{section*.42}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.4}$n$-gram语言模型 }{66}{section.2.4}
\contentsline {section}{\numberline {2.4}$n$-gram语言模型 }{66}{section.2.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.4.1}建模}{67}{subsection.2.4.1}
\contentsline {subsection}{\numberline {2.4.1}建模}{67}{subsection.2.4.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.4.2}未登录词和平滑算法}{69}{subsection.2.4.2}
\contentsline {subsection}{\numberline {2.4.2}未登录词和平滑算法}{69}{subsection.2.4.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{加法平滑方法}{70}{section*.48}
\contentsline {subsubsection}{加法平滑方法}{70}{section*.48}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{古德-图灵估计法}{71}{section*.50}
\contentsline {subsubsection}{古德-图灵估计法}{71}{section*.50}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Kneser-Ney平滑方法}{72}{section*.52}
\contentsline {subsubsection}{Kneser-Ney平滑方法}{72}{section*.52}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.5}句法分析(短语结构分析)}{74}{section.2.5}
\contentsline {section}{\numberline {2.5}句法分析(短语结构分析)}{74}{section.2.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.5.1}句子的句法树表示}{74}{subsection.2.5.1}
\contentsline {subsection}{\numberline {2.5.1}句子的句法树表示}{74}{subsection.2.5.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.5.2}上下文无关文法}{76}{subsection.2.5.2}
\contentsline {subsection}{\numberline {2.5.2}上下文无关文法}{76}{subsection.2.5.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {2.5.3}规则和推导的概率}{81}{subsection.2.5.3}
\contentsline {subsection}{\numberline {2.5.3}规则和推导的概率}{80}{subsection.2.5.3}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {2.6}小结及深入阅读}{83}{section.2.6}
\contentsline {section}{\numberline {2.6}小结及深入阅读}{82}{section.2.6}%
\defcounter {refsection}{0}\relax
\contentsline {part}{\@mypartnumtocformat {II}{统计机器翻译}}{85}{part.2}
\contentsline {part}{\@mypartnumtocformat {II}{统计机器翻译}}{85}{part.2}%
\ttl@stoptoc {default@1}
\ttl@starttoc {default@2}
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {3}基于词的机器翻译模型}{87}{chapter.3}
\contentsline {chapter}{\numberline {3}基于词的机器翻译模型}{87}{chapter.3}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.1}什么是基于词的翻译模型}{87}{section.3.1}
\contentsline {section}{\numberline {3.1}什么是基于词的翻译模型}{87}{section.3.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.2}构建一个简单的机器翻译系统}{89}{section.3.2}
\contentsline {section}{\numberline {3.2}构建一个简单的机器翻译系统}{89}{section.3.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.2.1}如何进行翻译?}{89}{subsection.3.2.1}
\contentsline {subsection}{\numberline {3.2.1}如何进行翻译?}{89}{subsection.3.2.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{机器翻译流程}{90}{section*.65}
\contentsline {subsubsection}{机器翻译流程}{90}{section*.65}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{人工翻译 vs. 机器翻译}{91}{section*.67}
\contentsline {subsubsection}{人工翻译 vs. 机器翻译}{91}{section*.67}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.2.2}基本框架}{91}{subsection.3.2.2}
\contentsline {subsection}{\numberline {3.2.2}基本框架}{91}{subsection.3.2.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.2.3}单词翻译概率}{92}{subsection.3.2.3}
\contentsline {subsection}{\numberline {3.2.3}单词翻译概率}{92}{subsection.3.2.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{什么是单词翻译概率?}{92}{section*.69}
\contentsline {subsubsection}{什么是单词翻译概率?}{92}{section*.69}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{如何从一个双语平行数据中学习?}{93}{section*.71}
\contentsline {subsubsection}{如何从一个双语平行数据中学习?}{93}{section*.71}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{如何从大量的双语平行数据中学习?}{94}{section*.72}
\contentsline {subsubsection}{如何从大量的双语平行数据中学习?}{94}{section*.72}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.2.4}句子级翻译模型}{95}{subsection.3.2.4}
\contentsline {subsection}{\numberline {3.2.4}句子级翻译模型}{95}{subsection.3.2.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基础模型}{95}{section*.74}
\contentsline {subsubsection}{基础模型}{95}{section*.74}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{生成流畅的译文}{97}{section*.76}
\contentsline {subsubsection}{生成流畅的译文}{97}{section*.76}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.2.5}解码}{99}{subsection.3.2.5}
\contentsline {subsection}{\numberline {3.2.5}解码}{99}{subsection.3.2.5}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.3}基于词的翻译建模}{101}{section.3.3}
\contentsline {section}{\numberline {3.3}基于词的翻译建模}{101}{section.3.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.3.1}噪声信道模型}{101}{subsection.3.3.1}
\contentsline {subsection}{\numberline {3.3.1}噪声信道模型}{101}{subsection.3.3.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.3.2}统计机器翻译的三个基本问题}{104}{subsection.3.3.2}
\contentsline {subsection}{\numberline {3.3.2}统计机器翻译的三个基本问题}{104}{subsection.3.3.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{词对齐}{104}{section*.86}
\contentsline {subsubsection}{词对齐}{104}{section*.86}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于词对齐的翻译模型}{105}{section*.89}
\contentsline {subsubsection}{基于词对齐的翻译模型}{105}{section*.89}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于词对齐的翻译实例}{107}{section*.91}
\contentsline {subsubsection}{基于词对齐的翻译实例}{107}{section*.91}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.4}IBM模型1-2}{107}{section.3.4}
\contentsline {section}{\numberline {3.4}IBM模型1-2}{107}{section.3.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.4.1}IBM模型1}{108}{subsection.3.4.1}
\contentsline {subsection}{\numberline {3.4.1}IBM模型1}{108}{subsection.3.4.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.4.2}IBM模型2}{109}{subsection.3.4.2}
\contentsline {subsection}{\numberline {3.4.2}IBM模型2}{109}{subsection.3.4.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.4.3}解码及计算优化}{110}{subsection.3.4.3}
\contentsline {subsection}{\numberline {3.4.3}解码及计算优化}{110}{subsection.3.4.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.4.4}训练}{112}{subsection.3.4.4}
\contentsline {subsection}{\numberline {3.4.4}训练}{112}{subsection.3.4.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{目标函数}{112}{section*.96}
\contentsline {subsubsection}{目标函数}{112}{section*.96}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{优化}{113}{section*.98}
\contentsline {subsubsection}{优化}{113}{section*.98}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.5}IBM模型3-5及隐马尔可夫模型}{119}{section.3.5}
\contentsline {section}{\numberline {3.5}IBM模型3-5及隐马尔可夫模型}{118}{section.3.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.5.1}基于产出率的翻译模型}{119}{subsection.3.5.1}
\contentsline {subsection}{\numberline {3.5.1}基于产出率的翻译模型}{118}{subsection.3.5.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.5.2}IBM 模型3}{122}{subsection.3.5.2}
\contentsline {subsection}{\numberline {3.5.2}IBM 模型3}{120}{subsection.3.5.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.5.3}IBM 模型4}{123}{subsection.3.5.3}
\contentsline {subsection}{\numberline {3.5.3}IBM 模型4}{122}{subsection.3.5.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.5.4} IBM 模型5}{125}{subsection.3.5.4}
\contentsline {subsection}{\numberline {3.5.4} IBM 模型5}{124}{subsection.3.5.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.5.5}隐马尔可夫模型}{126}{subsection.3.5.5}
\contentsline {subsection}{\numberline {3.5.5}隐马尔可夫模型}{125}{subsection.3.5.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{隐马尔可夫模型}{126}{section*.110}
\contentsline {subsubsection}{隐马尔可夫模型}{125}{section*.110}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{词对齐模型}{127}{section*.112}
\contentsline {subsubsection}{词对齐模型}{126}{section*.112}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.5.6}解码和训练}{128}{subsection.3.5.6}
\contentsline {subsection}{\numberline {3.5.6}解码和训练}{127}{subsection.3.5.6}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.6}问题分析}{129}{section.3.6}
\contentsline {section}{\numberline {3.6}问题分析}{128}{section.3.6}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.6.1}词对齐及对称化}{129}{subsection.3.6.1}
\contentsline {subsection}{\numberline {3.6.1}词对齐及对称化}{128}{subsection.3.6.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.6.2}Deficiency}{130}{subsection.3.6.2}
\contentsline {subsection}{\numberline {3.6.2}Deficiency}{129}{subsection.3.6.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.6.3}句子长度}{131}{subsection.3.6.3}
\contentsline {subsection}{\numberline {3.6.3}句子长度}{130}{subsection.3.6.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {3.6.4}其他问题}{131}{subsection.3.6.4}
\contentsline {subsection}{\numberline {3.6.4}其他问题}{130}{subsection.3.6.4}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {3.7}小结及深入阅读}{132}{section.3.7}
\contentsline {section}{\numberline {3.7}小结及深入阅读}{131}{section.3.7}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {4}基于短语和句法的机器翻译模型}{135}{chapter.4}
\contentsline {chapter}{\numberline {4}基于短语和句法的机器翻译模型}{133}{chapter.4}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.1}翻译中的结构信息}{135}{section.4.1}
\contentsline {section}{\numberline {4.1}翻译中的结构信息}{133}{section.4.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.1.1}更大粒度的翻译单元}{136}{subsection.4.1.1}
\contentsline {subsection}{\numberline {4.1.1}更大粒度的翻译单元}{134}{subsection.4.1.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.1.2}句子的结构信息}{138}{subsection.4.1.2}
\contentsline {subsection}{\numberline {4.1.2}句子的结构信息}{136}{subsection.4.1.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.2}基于短语的翻译模型}{140}{section.4.2}
\contentsline {section}{\numberline {4.2}基于短语的翻译模型}{138}{section.4.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.1}机器翻译中的短语}{140}{subsection.4.2.1}
\contentsline {subsection}{\numberline {4.2.1}机器翻译中的短语}{138}{subsection.4.2.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.2}数学建模及判别式模型}{143}{subsection.4.2.2}
\contentsline {subsection}{\numberline {4.2.2}数学建模及判别式模型}{141}{subsection.4.2.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于翻译推导的建模}{143}{section*.124}
\contentsline {subsubsection}{基于翻译推导的建模}{141}{section*.124}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{对数线性模型}{144}{section*.125}
\contentsline {subsubsection}{对数线性模型}{142}{section*.125}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{搭建模型的基本流程}{145}{section*.126}
\contentsline {subsubsection}{搭建模型的基本流程}{143}{section*.126}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.3}短语抽取}{146}{subsection.4.2.3}
\contentsline {subsection}{\numberline {4.2.3}短语抽取}{144}{subsection.4.2.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{与词对齐一致的短语}{147}{section*.129}
\contentsline {subsubsection}{与词对齐一致的短语}{145}{section*.129}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{获取词对齐}{148}{section*.133}
\contentsline {subsubsection}{获取词对齐}{146}{section*.133}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{度量双语短语质量}{149}{section*.135}
\contentsline {subsubsection}{度量双语短语质量}{147}{section*.135}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.4}调序}{150}{subsection.4.2.4}
\contentsline {subsection}{\numberline {4.2.4}调序}{148}{subsection.4.2.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于距离的调序}{151}{section*.139}
\contentsline {subsubsection}{基于距离的调序}{149}{section*.139}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于方向的调序}{151}{section*.141}
\contentsline {subsubsection}{基于方向的调序}{149}{section*.141}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于分类的调序}{152}{section*.144}
\contentsline {subsubsection}{基于分类的调序}{151}{section*.144}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.5}特征}{153}{subsection.4.2.5}
\contentsline {subsection}{\numberline {4.2.5}特征}{151}{subsection.4.2.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.6}最小错误率训练}{154}{subsection.4.2.6}
\contentsline {subsection}{\numberline {4.2.6}最小错误率训练}{152}{subsection.4.2.6}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.2.7}栈解码}{157}{subsection.4.2.7}
\contentsline {subsection}{\numberline {4.2.7}栈解码}{155}{subsection.4.2.7}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{翻译候选匹配}{158}{section*.149}
\contentsline {subsubsection}{翻译候选匹配}{156}{section*.149}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{翻译假设扩展}{159}{section*.151}
\contentsline {subsubsection}{翻译假设扩展}{157}{section*.151}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{剪枝}{160}{section*.153}
\contentsline {subsubsection}{剪枝}{158}{section*.153}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{解码中的栈结构}{161}{section*.155}
\contentsline {subsubsection}{解码中的栈结构}{159}{section*.155}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.3}基于层次短语的模型}{162}{section.4.3}
\contentsline {section}{\numberline {4.3}基于层次短语的模型}{160}{section.4.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.3.1}同步上下文无关文法}{164}{subsection.4.3.1}
\contentsline {subsection}{\numberline {4.3.1}同步上下文无关文法}{163}{subsection.4.3.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{文法定义}{165}{section*.160}
\contentsline {subsubsection}{文法定义}{163}{section*.160}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{推导}{166}{section*.161}
\contentsline {subsubsection}{推导}{164}{section*.161}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{胶水规则}{167}{section*.162}
\contentsline {subsubsection}{胶水规则}{165}{section*.162}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{处理流程}{168}{section*.163}
\contentsline {subsubsection}{处理流程}{166}{section*.163}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.3.2}层次短语规则抽取}{168}{subsection.4.3.2}
\contentsline {subsection}{\numberline {4.3.2}层次短语规则抽取}{166}{subsection.4.3.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.3.3}翻译模型及特征}{170}{subsection.4.3.3}
\contentsline {subsection}{\numberline {4.3.3}翻译模型及特征}{168}{subsection.4.3.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.3.4}CKY解码}{171}{subsection.4.3.4}
\contentsline {subsection}{\numberline {4.3.4}CKY解码}{169}{subsection.4.3.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.3.5}立方剪枝}{174}{subsection.4.3.5}
\contentsline {subsection}{\numberline {4.3.5}立方剪枝}{172}{subsection.4.3.5}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.4}基于语言学句法的模型}{177}{section.4.4}
\contentsline {section}{\numberline {4.4}基于语言学句法的模型}{175}{section.4.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.1}基于句法的翻译模型分类}{179}{subsection.4.4.1}
\contentsline {subsection}{\numberline {4.4.1}基于句法的翻译模型分类}{177}{subsection.4.4.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.2}基于树结构的文法}{179}{subsection.4.4.2}
\contentsline {subsection}{\numberline {4.4.2}基于树结构的文法}{179}{subsection.4.4.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{树到树翻译规则}{182}{section*.179}
\contentsline {subsubsection}{树到树翻译规则}{180}{section*.180}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于树结构的翻译推导}{183}{section*.181}
\contentsline {subsubsection}{基于树结构的翻译推导}{181}{section*.182}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{树到串翻译规则}{185}{section*.184}
\contentsline {subsubsection}{树到串翻译规则}{183}{section*.185}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.3}树到串翻译规则抽取}{186}{subsection.4.4.3}
\contentsline {subsection}{\numberline {4.4.3}树到串翻译规则抽取}{184}{subsection.4.4.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{树的切割与最小规则}{186}{section*.186}
\contentsline {subsubsection}{树的切割与最小规则}{184}{section*.187}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{空对齐处理}{190}{section*.192}
\contentsline {subsubsection}{空对齐处理}{188}{section*.193}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{组合规则}{191}{section*.194}
\contentsline {subsubsection}{组合规则}{189}{section*.195}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{SPMT规则}{191}{section*.196}
\contentsline {subsubsection}{SPMT规则}{190}{section*.197}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{句法树二叉化}{192}{section*.198}
\contentsline {subsubsection}{句法树二叉化}{191}{section*.199}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.4}树到树翻译规则抽取}{194}{subsection.4.4.4}
\contentsline {subsection}{\numberline {4.4.4}树到树翻译规则抽取}{192}{subsection.4.4.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于节点对齐的规则抽取}{195}{section*.202}
\contentsline {subsubsection}{基于节点对齐的规则抽取}{192}{section*.203}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于对齐矩阵的规则抽取}{195}{section*.205}
\contentsline {subsubsection}{基于对齐矩阵的规则抽取}{194}{section*.206}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.5}句法翻译模型的特征}{196}{subsection.4.4.5}
\contentsline {subsection}{\numberline {4.4.5}句法翻译模型的特征}{195}{subsection.4.4.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.6}基于超图的推导空间表示}{199}{subsection.4.4.6}
\contentsline {subsection}{\numberline {4.4.6}基于超图的推导空间表示}{196}{subsection.4.4.6}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {4.4.7}基于树的解码 vs 基于串的解码}{201}{subsection.4.4.7}
\contentsline {subsection}{\numberline {4.4.7}基于树的解码 vs 基于串的解码}{199}{subsection.4.4.7}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于树的解码}{202}{section*.213}
\contentsline {subsubsection}{基于树的解码}{200}{section*.214}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于串的解码}{204}{section*.216}
\contentsline {subsubsection}{基于串的解码}{201}{section*.217}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {4.5}小结及深入阅读}{206}{section.4.5}
\contentsline {section}{\numberline {4.5}小结及深入阅读}{203}{section.4.5}%
\defcounter {refsection}{0}\relax
\contentsline {part}{\@mypartnumtocformat {III}{神经机器翻译}}{209}{part.3}
\contentsline {part}{\@mypartnumtocformat {III}{神经机器翻译}}{205}{part.3}%
\ttl@stoptoc {default@2}
\ttl@starttoc {default@3}
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {5}人工神经网络和神经语言建模}{211}{chapter.5}
\contentsline {chapter}{\numberline {5}人工神经网络和神经语言建模}{207}{chapter.5}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.1}深度学习与人工神经网络}{212}{section.5.1}
\contentsline {section}{\numberline {5.1}深度学习与人工神经网络}{208}{section.5.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.1.1}发展简史}{212}{subsection.5.1.1}
\contentsline {subsection}{\numberline {5.1.1}发展简史}{208}{subsection.5.1.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{早期的人工神经网络和第一次寒冬}{212}{section*.218}
\contentsline {subsubsection}{早期的人工神经网络和第一次寒冬}{2
08}{section*.219}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{神经网络的第二次高潮和第二次寒冬}{2
13}{section*.219}
\contentsline {subsubsection}{神经网络的第二次高潮和第二次寒冬}{2
09}{section*.220}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{深度学习和神经网络方法的崛起}{21
4}{section*.220}
\contentsline {subsubsection}{深度学习和神经网络方法的崛起}{21
0}{section*.221}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.1.2}为什么需要深度学习}{21
5}{subsection.5.1.2}
\contentsline {subsection}{\numberline {5.1.2}为什么需要深度学习}{21
1}{subsection.5.1.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{端到端学习和表示学习}{21
5}{section*.222}
\contentsline {subsubsection}{端到端学习和表示学习}{21
1}{section*.223}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{深度学习的效果}{21
6}{section*.224}
\contentsline {subsubsection}{深度学习的效果}{21
2}{section*.225}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.2}神经网络基础}{21
6}{section.5.2}
\contentsline {section}{\numberline {5.2}神经网络基础}{21
2}{section.5.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.2.1}线性代数基础}{21
6}{subsection.5.2.1}
\contentsline {subsection}{\numberline {5.2.1}线性代数基础}{21
2}{subsection.5.2.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{标量、向量和矩阵}{21
7}{section*.226}
\contentsline {subsubsection}{标量、向量和矩阵}{21
3}{section*.227}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{矩阵的转置}{21
8}{section*.227}
\contentsline {subsubsection}{矩阵的转置}{21
4}{section*.228}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{矩阵加法和数乘}{21
8}{section*.228}
\contentsline {subsubsection}{矩阵加法和数乘}{21
4}{section*.229}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{矩阵乘法和矩阵点乘}{21
9}{section*.229}
\contentsline {subsubsection}{矩阵乘法和矩阵点乘}{21
5}{section*.230}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{线性映射}{2
20}{section*.230}
\contentsline {subsubsection}{线性映射}{2
16}{section*.231}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{范数}{2
21}{section*.231}
\contentsline {subsubsection}{范数}{2
17}{section*.232}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.2.2}人工神经元和感知机}{2
22}{subsection.5.2.2}
\contentsline {subsection}{\numberline {5.2.2}人工神经元和感知机}{2
18}{subsection.5.2.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{感知机\ \raisebox {0.5mm}{------}\ 最简单的人工神经元模型}{2
23}{section*.234}
\contentsline {subsubsection}{感知机\ \raisebox {0.5mm}{------}\ 最简单的人工神经元模型}{2
19}{section*.235}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{神经元内部权重}{22
4}{section*.237}
\contentsline {subsubsection}{神经元内部权重}{22
0}{section*.238}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{神经元的输入\ \raisebox {0.5mm}{------}\ 离散 vs 连续}{22
5}{section*.239}
\contentsline {subsubsection}{神经元的输入\ \raisebox {0.5mm}{------}\ 离散 vs 连续}{22
1}{section*.240}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{神经元内部的参数学习}{22
5}{section*.241}
\contentsline {subsubsection}{神经元内部的参数学习}{22
1}{section*.242}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.2.3}多层神经网络}{22
6}{subsection.5.2.3}
\contentsline {subsection}{\numberline {5.2.3}多层神经网络}{22
2}{subsection.5.2.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{线性变换和激活函数}{22
6}{section*.243}
\contentsline {subsubsection}{线性变换和激活函数}{22
2}{section*.244}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{单层神经网络$\rightarrow $多层神经网络}{22
9}{section*.250}
\contentsline {subsubsection}{单层神经网络$\rightarrow $多层神经网络}{22
5}{section*.251}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.2.4}函数拟合能力}{2
30}{subsection.5.2.4}
\contentsline {subsection}{\numberline {5.2.4}函数拟合能力}{2
26}{subsection.5.2.4}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.3}神经网络的张量实现}{2
33}{section.5.3}
\contentsline {section}{\numberline {5.3}神经网络的张量实现}{2
29}{section.5.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.3.1} 张量及其计算}{2
33}{subsection.5.3.1}
\contentsline {subsection}{\numberline {5.3.1} 张量及其计算}{2
29}{subsection.5.3.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{\ 张量}{2
33}{section*.259}
\contentsline {subsubsection}{\ 张量}{2
29}{section*.260}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{张量的矩阵乘法}{23
5}{section*.262}
\contentsline {subsubsection}{张量的矩阵乘法}{23
1}{section*.263}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{张量的单元操作}{23
6}{section*.264}
\contentsline {subsubsection}{张量的单元操作}{23
2}{section*.265}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.3.2}张量的物理存储形式}{23
7}{subsection.5.3.2}
\contentsline {subsection}{\numberline {5.3.2}张量的物理存储形式}{23
3}{subsection.5.3.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.3.3}使用开源框架实现张量计算}{23
8}{subsection.5.3.3}
\contentsline {subsection}{\numberline {5.3.3}使用开源框架实现张量计算}{23
4}{subsection.5.3.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.3.4}前向传播与计算图}{2
41}{subsection.5.3.4}
\contentsline {subsection}{\numberline {5.3.4}前向传播与计算图}{2
37}{subsection.5.3.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.3.5}神经网络实例}{2
42}{subsection.5.3.5}
\contentsline {subsection}{\numberline {5.3.5}神经网络实例}{2
38}{subsection.5.3.5}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.4}神经网络的参数训练}{2
43}{section.5.4}
\contentsline {section}{\numberline {5.4}神经网络的参数训练}{2
39}{section.5.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.4.1}损失函数}{24
4}{subsection.5.4.1}
\contentsline {subsection}{\numberline {5.4.1}损失函数}{24
0}{subsection.5.4.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.4.2}基于梯度的参数优化}{24
5}{subsection.5.4.2}
\contentsline {subsection}{\numberline {5.4.2}基于梯度的参数优化}{24
1}{subsection.5.4.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{梯度下降}{24
6}{section*.278}
\contentsline {subsubsection}{梯度下降}{24
2}{section*.279}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{梯度获取}{24
8}{section*.280}
\contentsline {subsubsection}{梯度获取}{24
4}{section*.281}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于梯度的方法的变种和改进}{2
50}{section*.284}
\contentsline {subsubsection}{基于梯度的方法的变种和改进}{2
46}{section*.285}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.4.3}参数更新的并行化策略}{2
53}{subsection.5.4.3}
\contentsline {subsection}{\numberline {5.4.3}参数更新的并行化策略}{2
49}{subsection.5.4.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.4.4}梯度消失、梯度爆炸和稳定性训练}{25
5}{subsection.5.4.4}
\contentsline {subsection}{\numberline {5.4.4}梯度消失、梯度爆炸和稳定性训练}{25
1}{subsection.5.4.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{易于优化的激活函数}{25
5}{section*.287}
\contentsline {subsubsection}{易于优化的激活函数}{25
1}{section*.288}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{梯度裁剪}{25
6}{section*.291}
\contentsline {subsubsection}{梯度裁剪}{25
2}{section*.292}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{稳定性训练}{25
7}{section*.292}
\contentsline {subsubsection}{稳定性训练}{25
3}{section*.293}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.4.5}过拟合}{25
8}{subsection.5.4.5}
\contentsline {subsection}{\numberline {5.4.5}过拟合}{25
4}{subsection.5.4.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.4.6}反向传播}{25
9}{subsection.5.4.6}
\contentsline {subsection}{\numberline {5.4.6}反向传播}{25
5}{subsection.5.4.6}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{输出层的反向传播}{2
60}{section*.295}
\contentsline {subsubsection}{输出层的反向传播}{2
56}{section*.296}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{隐藏层的反向传播}{2
62}{section*.299}
\contentsline {subsubsection}{隐藏层的反向传播}{2
58}{section*.300}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{程序实现}{26
4}{section*.302}
\contentsline {subsubsection}{程序实现}{26
0}{section*.303}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.5}神经语言模型}{26
5}{section.5.5}
\contentsline {section}{\numberline {5.5}神经语言模型}{26
1}{section.5.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.5.1}基于神经网络的语言建模}{26
5}{subsection.5.5.1}
\contentsline {subsection}{\numberline {5.5.1}基于神经网络的语言建模}{26
1}{subsection.5.5.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于前馈神经网络的语言模型}{26
6}{section*.305}
\contentsline {subsubsection}{基于前馈神经网络的语言模型}{26
2}{section*.306}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于循环神经网络的语言模型}{26
8}{section*.308}
\contentsline {subsubsection}{基于循环神经网络的语言模型}{26
4}{section*.309}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{基于自注意力机制的语言模型}{26
9}{section*.310}
\contentsline {subsubsection}{基于自注意力机制的语言模型}{26
5}{section*.311}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{语言模型的评价}{2
71}{section*.312}
\contentsline {subsubsection}{语言模型的评价}{2
67}{section*.313}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.5.2}单词表示模型}{2
71}{subsection.5.5.2}
\contentsline {subsection}{\numberline {5.5.2}单词表示模型}{2
67}{subsection.5.5.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{One-hot编码}{2
71}{section*.313}
\contentsline {subsubsection}{One-hot编码}{2
67}{section*.314}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{分布式表示}{2
72}{section*.315}
\contentsline {subsubsection}{分布式表示}{2
68}{section*.316}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {5.5.3}句子表示模型及预训练}{2
73}{subsection.5.5.3}
\contentsline {subsection}{\numberline {5.5.3}句子表示模型及预训练}{2
69}{subsection.5.5.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{简单的上下文表示模型}{27
4}{section*.319}
\contentsline {subsubsection}{简单的上下文表示模型}{27
0}{section*.320}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{ELMO模型}{27
5}{section*.322}
\contentsline {subsubsection}{ELMO模型}{27
1}{section*.323}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{GPT模型}{27
5}{section*.324}
\contentsline {subsubsection}{GPT模型}{27
1}{section*.325}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{BERT模型}{27
6}{section*.326}
\contentsline {subsubsection}{BERT模型}{27
2}{section*.327}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{为什么要预训练?}{27
7}{section*.328}
\contentsline {subsubsection}{为什么要预训练?}{27
3}{section*.329}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {5.6}小结及深入阅读}{27
8}{section.5.6}
\contentsline {section}{\numberline {5.6}小结及深入阅读}{27
4}{section.5.6}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {6}神经机器翻译模型}{2
81}{chapter.6}
\contentsline {chapter}{\numberline {6}神经机器翻译模型}{2
75}{chapter.6}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {6.1}神经机器翻译的发展简史}{2
81}{section.6.1}
\contentsline {section}{\numberline {6.1}神经机器翻译的发展简史}{2
75}{section.6.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.1.1}神经机器翻译的起源}{2
83}{subsection.6.1.1}
\contentsline {subsection}{\numberline {6.1.1}神经机器翻译的起源}{2
77}{subsection.6.1.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.1.2}神经机器翻译的品质 }{2
85}{subsection.6.1.2}
\contentsline {subsection}{\numberline {6.1.2}神经机器翻译的品质 }{2
79}{subsection.6.1.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.1.3}神经机器翻译的优势 }{28
8}{subsection.6.1.3}
\contentsline {subsection}{\numberline {6.1.3}神经机器翻译的优势 }{28
2}{subsection.6.1.3}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {6.2}编码器-解码器框架}{2
90}{section.6.2}
\contentsline {section}{\numberline {6.2}编码器-解码器框架}{2
83}{section.6.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.2.1}框架结构}{2
90}{subsection.6.2.1}
\contentsline {subsection}{\numberline {6.2.1}框架结构}{2
84}{subsection.6.2.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.2.2}表示学习}{2
91}{subsection.6.2.2}
\contentsline {subsection}{\numberline {6.2.2}表示学习}{2
85}{subsection.6.2.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.2.3}简单的运行实例}{2
92}{subsection.6.2.3}
\contentsline {subsection}{\numberline {6.2.3}简单的运行实例}{2
86}{subsection.6.2.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.2.4}机器翻译范式的对比}{2
93}{subsection.6.2.4}
\contentsline {subsection}{\numberline {6.2.4}机器翻译范式的对比}{2
87}{subsection.6.2.4}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {6.3}基于循环神经网络的翻译模型及注意力机制}{2
94}{section.6.3}
\contentsline {section}{\numberline {6.3}基于循环神经网络的翻译模型及注意力机制}{2
88}{section.6.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.1}建模}{2
95}{subsection.6.3.1}
\contentsline {subsection}{\numberline {6.3.1}建模}{2
89}{subsection.6.3.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.2}输入(词嵌入)及输出(Softmax)}{29
8}{subsection.6.3.2}
\contentsline {subsection}{\numberline {6.3.2}输入(词嵌入)及输出(Softmax)}{29
2}{subsection.6.3.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.3}循环神经网络结构}{
301}{subsection.6.3.3}
\contentsline {subsection}{\numberline {6.3.3}循环神经网络结构}{
295}{subsection.6.3.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{循环神经单元(RNN)}{
301}{section*.351}
\contentsline {subsubsection}{循环神经单元(RNN)}{
295}{section*.352}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{长短时记忆网络(LSTM)}{
302}{section*.352}
\contentsline {subsubsection}{长短时记忆网络(LSTM)}{
295}{section*.353}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{门控循环单元(GRU)}{
304}{section*.355}
\contentsline {subsubsection}{门控循环单元(GRU)}{
298}{section*.356}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{双向模型}{
305}{section*.357}
\contentsline {subsubsection}{双向模型}{
299}{section*.358}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{多层循环神经网络}{
306}{section*.359}
\contentsline {subsubsection}{多层循环神经网络}{
299}{section*.360}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.4}注意力机制}{30
6}{subsection.6.3.4}
\contentsline {subsection}{\numberline {6.3.4}注意力机制}{30
0}{subsection.6.3.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{翻译中的注意力机制}{30
7}{section*.362}
\contentsline {subsubsection}{翻译中的注意力机制}{30
1}{section*.363}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{上下文向量的计算}{30
9}{section*.365}
\contentsline {subsubsection}{上下文向量的计算}{30
2}{section*.366}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{注意力机制的解读}{3
12}{section*.370}
\contentsline {subsubsection}{注意力机制的解读}{3
05}{section*.371}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.5}训练}{3
13}{subsection.6.3.5}
\contentsline {subsection}{\numberline {6.3.5}训练}{3
07}{subsection.6.3.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{损失函数}{3
14}{section*.373}
\contentsline {subsubsection}{损失函数}{3
07}{section*.374}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{参数初始化}{3
14}{section*.374}
\contentsline {subsubsection}{参数初始化}{3
08}{section*.375}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{优化策略}{3
15}{section*.375}
\contentsline {subsubsection}{优化策略}{3
09}{section*.376}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{梯度裁剪}{3
15}{section*.377}
\contentsline {subsubsection}{梯度裁剪}{3
09}{section*.378}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{学习率策略}{3
16}{section*.378}
\contentsline {subsubsection}{学习率策略}{3
09}{section*.379}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{并行训练}{31
7}{section*.381}
\contentsline {subsubsection}{并行训练}{31
1}{section*.382}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.6}推断}{3
20}{subsection.6.3.6}
\contentsline {subsection}{\numberline {6.3.6}推断}{3
13}{subsection.6.3.6}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{贪婪搜索}{3
21}{section*.386}
\contentsline {subsubsection}{贪婪搜索}{3
14}{section*.387}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{束搜索}{3
21}{section*.389}
\contentsline {subsubsection}{束搜索}{3
15}{section*.390}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{长度惩罚}{3
23}{section*.391}
\contentsline {subsubsection}{长度惩罚}{3
16}{section*.392}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.3.7}实例-GNMT}{3
24}{subsection.6.3.7}
\contentsline {subsection}{\numberline {6.3.7}实例-GNMT}{3
17}{subsection.6.3.7}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {6.4}Transformer}{3
24}{section.6.4}
\contentsline {section}{\numberline {6.4}Transformer}{3
18}{section.6.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.1}自注意力模型}{3
26}{subsection.6.4.1}
\contentsline {subsection}{\numberline {6.4.1}自注意力模型}{3
19}{subsection.6.4.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.2}Transformer架构}{32
8}{subsection.6.4.2}
\contentsline {subsection}{\numberline {6.4.2}Transformer架构}{32
1}{subsection.6.4.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.3}位置编码}{3
30}{subsection.6.4.3}
\contentsline {subsection}{\numberline {6.4.3}位置编码}{3
23}{subsection.6.4.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.4}基于点乘的注意力机制}{3
32}{subsection.6.4.4}
\contentsline {subsection}{\numberline {6.4.4}基于点乘的注意力机制}{3
25}{subsection.6.4.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.5}掩码操作}{3
34}{subsection.6.4.5}
\contentsline {subsection}{\numberline {6.4.5}掩码操作}{3
27}{subsection.6.4.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.6}多头注意力}{3
35}{subsection.6.4.6}
\contentsline {subsection}{\numberline {6.4.6}多头注意力}{3
28}{subsection.6.4.6}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.7}残差网络和层正则化}{3
36}{subsection.6.4.7}
\contentsline {subsection}{\numberline {6.4.7}残差网络和层正则化}{3
29}{subsection.6.4.7}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.8}前馈全连接网络子层}{33
8}{subsection.6.4.8}
\contentsline {subsection}{\numberline {6.4.8}前馈全连接网络子层}{33
1}{subsection.6.4.8}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.9}训练}{33
9}{subsection.6.4.9}
\contentsline {subsection}{\numberline {6.4.9}训练}{33
2}{subsection.6.4.9}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.4.10}推断}{3
41}{subsection.6.4.10}
\contentsline {subsection}{\numberline {6.4.10}推断}{3
34}{subsection.6.4.10}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {6.5}序列到序列问题及应用}{3
42}{section.6.5}
\contentsline {section}{\numberline {6.5}序列到序列问题及应用}{3
35}{section.6.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.5.1}自动问答}{3
42}{subsection.6.5.1}
\contentsline {subsection}{\numberline {6.5.1}自动问答}{3
35}{subsection.6.5.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.5.2}自动文摘}{3
43}{subsection.6.5.2}
\contentsline {subsection}{\numberline {6.5.2}自动文摘}{3
35}{subsection.6.5.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.5.3}文言文翻译}{3
43}{subsection.6.5.3}
\contentsline {subsection}{\numberline {6.5.3}文言文翻译}{3
36}{subsection.6.5.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.5.4}对联生成}{3
44}{subsection.6.5.4}
\contentsline {subsection}{\numberline {6.5.4}对联生成}{3
37}{subsection.6.5.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {6.5.5}古诗生成}{3
44}{subsection.6.5.5}
\contentsline {subsection}{\numberline {6.5.5}古诗生成}{3
38}{subsection.6.5.5}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {6.6}小结及深入阅读}{3
46}{section.6.6}
\contentsline {section}{\numberline {6.6}小结及深入阅读}{3
38}{section.6.6}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {7}神经机器翻译实战
\ \raisebox {0.5mm}{------}\ 参加一次比赛}{349}{chapter.7}
\contentsline {chapter}{\numberline {7}神经机器翻译实战
}{341}{chapter.7}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {7.1}神经机器翻译并不简单}{34
9}{section.7.1}
\contentsline {section}{\numberline {7.1}神经机器翻译并不简单}{34
1}{section.7.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.1.1}影响神经机器翻译性能的因素}{3
50}{subsection.7.1.1}
\contentsline {subsection}{\numberline {7.1.1}影响神经机器翻译性能的因素}{3
42}{subsection.7.1.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.1.2}搭建神经机器翻译系统的步骤 }{3
51}{subsection.7.1.2}
\contentsline {subsection}{\numberline {7.1.2}搭建神经机器翻译系统的步骤 }{3
43}{subsection.7.1.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.1.3}架构选择 }{3
52}{subsection.7.1.3}
\contentsline {subsection}{\numberline {7.1.3}架构选择 }{3
44}{subsection.7.1.3}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {7.2}数据处理}{3
52}{section.7.2}
\contentsline {section}{\numberline {7.2}数据处理}{3
44}{section.7.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.2.1}分词}{3
53}{subsection.7.2.1}
\contentsline {subsection}{\numberline {7.2.1}分词}{3
45}{subsection.7.2.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.2.2}标准化}{3
54}{subsection.7.2.2}
\contentsline {subsection}{\numberline {7.2.2}标准化}{3
46}{subsection.7.2.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.2.3}数据清洗}{3
55}{subsection.7.2.3}
\contentsline {subsection}{\numberline {7.2.3}数据清洗}{3
47}{subsection.7.2.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.2.4}子词切分}{3
57}{subsection.7.2.4}
\contentsline {subsection}{\numberline {7.2.4}子词切分}{3
49}{subsection.7.2.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{大词表和OOV问题}{35
8}{section*.428}
\contentsline {subsubsection}{大词表和OOV问题}{35
0}{section*.429}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{子词}{35
8}{section*.430}
\contentsline {subsubsection}{子词}{35
0}{section*.431}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{双字节编码(BPE)}{35
9}{section*.432}
\contentsline {subsubsection}{双字节编码(BPE)}{35
1}{section*.433}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{其他方法}{3
62}{section*.435}
\contentsline {subsubsection}{其他方法}{3
54}{section*.436}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {7.3}建模与训练}{3
62}{section.7.3}
\contentsline {section}{\numberline {7.3}建模与训练}{3
54}{section.7.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.3.1}正则化}{3
62}{subsection.7.3.1}
\contentsline {subsection}{\numberline {7.3.1}正则化}{3
54}{subsection.7.3.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{L1/L2正则化}{3
64}{section*.437}
\contentsline {subsubsection}{L1/L2正则化}{3
56}{section*.438}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{标签平滑}{3
65}{section*.438}
\contentsline {subsubsection}{标签平滑}{3
57}{section*.439}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Dropout}{3
66}{section*.440}
\contentsline {subsubsection}{Dropout}{3
57}{section*.441}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Layer Dropout}{3
68}{section*.443}
\contentsline {subsubsection}{Layer Dropout}{3
59}{section*.444}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.3.2}增大模型容量}{36
9}{subsection.7.3.2}
\contentsline {subsection}{\numberline {7.3.2}增大模型容量}{36
0}{subsection.7.3.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{宽网络}{36
9}{section*.445}
\contentsline {subsubsection}{宽网络}{36
0}{section*.446}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{深网络}{3
70}{section*.447}
\contentsline {subsubsection}{深网络}{3
61}{section*.448}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{增大输入层和输出层表示能力}{3
71}{section*.449}
\contentsline {subsubsection}{增大输入层和输出层表示能力}{3
62}{section*.450}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{大模型的分布式计算}{3
72}{section*.450}
\contentsline {subsubsection}{大模型的分布式计算}{3
63}{section*.451}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.3.3}大批量训练}{3
72}{subsection.7.3.3}
\contentsline {subsection}{\numberline {7.3.3}大批量训练}{3
63}{subsection.7.3.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{为什么需要大批量训练}{3
72}{section*.451}
\contentsline {subsubsection}{为什么需要大批量训练}{3
63}{section*.452}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{如何构建批次}{3
73}{section*.454}
\contentsline {subsubsection}{如何构建批次}{3
65}{section*.455}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {7.4}推断}{3
75}{section.7.4}
\contentsline {section}{\numberline {7.4}推断}{3
66}{section.7.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.4.1}推断优化}{3
75}{subsection.7.4.1}
\contentsline {subsection}{\numberline {7.4.1}推断优化}{3
66}{subsection.7.4.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{推断系统的架构}{3
75}{section*.456}
\contentsline {subsubsection}{推断系统的架构}{3
66}{section*.457}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{自左向右推断 vs 自右向左推断}{3
76}{section*.458}
\contentsline {subsubsection}{自左向右推断 vs 自右向左推断}{3
67}{section*.459}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{推断加速}{3
77}{section*.459}
\contentsline {subsubsection}{推断加速}{3
68}{section*.460}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.4.2}译文长度控制}{3
84}{subsection.7.4.2}
\contentsline {subsection}{\numberline {7.4.2}译文长度控制}{3
75}{subsection.7.4.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{长度惩罚因子}{3
85}{section*.465}
\contentsline {subsubsection}{长度惩罚因子}{3
76}{section*.466}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{译文长度范围约束}{3
86}{section*.467}
\contentsline {subsubsection}{译文长度范围约束}{3
76}{section*.468}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{覆盖度模型}{3
86}{section*.468}
\contentsline {subsubsection}{覆盖度模型}{3
77}{section*.469}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.4.3}多模型集成}{3
87}{subsection.7.4.3}
\contentsline {subsection}{\numberline {7.4.3}多模型集成}{3
78}{subsection.7.4.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{假设选择}{3
88}{section*.469}
\contentsline {subsubsection}{假设选择}{3
79}{section*.470}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{局部预测融合}{38
9}{section*.471}
\contentsline {subsubsection}{局部预测融合}{38
0}{section*.472}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{译文重组}{3
90}{section*.473}
\contentsline {subsubsection}{译文重组}{3
81}{section*.474}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {7.5}进阶技术}{3
91}{section.7.5}
\contentsline {section}{\numberline {7.5}进阶技术}{3
82}{section.7.5}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.5.1}深层模型}{3
91}{subsection.7.5.1}
\contentsline {subsection}{\numberline {7.5.1}深层模型}{3
82}{subsection.7.5.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{Post-Norm vs Pre-Norm}{3
92}{section*.476}
\contentsline {subsubsection}{Post-Norm vs Pre-Norm}{3
82}{section*.477}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{层聚合}{3
94}{section*.479}
\contentsline {subsubsection}{层聚合}{3
84}{section*.480}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{深层模型的训练加速}{3
95}{section*.481}
\contentsline {subsubsection}{深层模型的训练加速}{3
86}{section*.482}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{渐进式训练}{3
95}{section*.482}
\contentsline {subsubsection}{渐进式训练}{3
86}{section*.483}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{分组稠密连接}{3
96}{section*.484}
\contentsline {subsubsection}{分组稠密连接}{3
87}{section*.485}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{学习率重置策略}{3
97}{section*.486}
\contentsline {subsubsection}{学习率重置策略}{3
87}{section*.487}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{深层模型的鲁棒性训练}{3
98}{section*.488}
\contentsline {subsubsection}{深层模型的鲁棒性训练}{3
89}{section*.489}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.5.2}单语数据的使用}{
400}{subsection.7.5.2}
\contentsline {subsection}{\numberline {7.5.2}单语数据的使用}{
390}{subsection.7.5.2}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{伪数据}{
401}{section*.492}
\contentsline {subsubsection}{伪数据}{
391}{section*.493}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{预训练}{
402}{section*.495}
\contentsline {subsubsection}{预训练}{
393}{section*.496}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{联合训练}{
404}{section*.498}
\contentsline {subsubsection}{联合训练}{
394}{section*.499}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.5.3}知识精炼}{
404}{subsection.7.5.3}
\contentsline {subsection}{\numberline {7.5.3}知识精炼}{
395}{subsection.7.5.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{什么是知识精炼}{
405}{section*.500}
\contentsline {subsubsection}{什么是知识精炼}{
396}{section*.501}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{知识精炼的基本方法}{
406}{section*.501}
\contentsline {subsubsection}{知识精炼的基本方法}{
397}{section*.502}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{机器翻译中的知识精炼}{
408}{section*.503}
\contentsline {subsubsection}{机器翻译中的知识精炼}{
398}{section*.504}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {7.5.4}双向训练}{
408}{subsection.7.5.4}
\contentsline {subsection}{\numberline {7.5.4}双向训练}{
399}{subsection.7.5.4}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{有监督对偶学习}{4
10}{section*.505}
\contentsline {subsubsection}{有监督对偶学习}{4
00}{section*.506}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{无监督对偶学习}{4
10}{section*.506}
\contentsline {subsubsection}{无监督对偶学习}{4
01}{section*.507}%
\defcounter {refsection}{0}\relax
\contentsline {subsubsection}{翻译中回译}{4
11}{section*.508}
\contentsline {subsubsection}{翻译中回译}{4
02}{section*.509}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {7.6}小结及深入阅读}{4
12}{section.7.6}
\contentsline {section}{\numberline {7.6}小结及深入阅读}{4
02}{section.7.6}%
\defcounter {refsection}{0}\relax
\contentsline {part}{\@mypartnumtocformat {IV}{附录}}{4
17}{part.4}
\contentsline {part}{\@mypartnumtocformat {IV}{附录}}{4
05}{part.4}%
\ttl@stoptoc {default@3}
\ttl@starttoc {default@4}
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {A}附录A}{4
19}{Appendix.1.A}
\contentsline {chapter}{\numberline {A}附录A}{4
07}{appendix.1.A}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {A.1}基准数据集}{4
19}{section.1.A.1}
\contentsline {section}{\numberline {A.1}基准数据集}{4
07}{section.1.A.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {A.2}平行语料}{4
20}{section.1.A.2}
\contentsline {section}{\numberline {A.2}平行语料}{4
08}{section.1.A.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {A.3}相关工具}{4
21}{section.1.A.3}
\contentsline {section}{\numberline {A.3}相关工具}{4
09}{section.1.A.3}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {A.3.1}数据预处理工具}{4
21}{subsection.1.A.3.1}
\contentsline {subsection}{\numberline {A.3.1}数据预处理工具}{4
09}{subsection.1.A.3.1}%
\defcounter {refsection}{0}\relax
\contentsline {subsection}{\numberline {A.3.2}评价工具}{4
22}{subsection.1.A.3.2}
\contentsline {subsection}{\numberline {A.3.2}评价工具}{4
10}{subsection.1.A.3.2}%
\defcounter {refsection}{0}\relax
\contentsline {chapter}{\numberline {B}附录B}{4
23}{Appendix.2.B}
\contentsline {chapter}{\numberline {B}附录B}{4
11}{appendix.2.B}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {B.1}IBM模型3训练方法}{4
23}{section.2.B.1}
\contentsline {section}{\numberline {B.1}IBM模型3训练方法}{4
11}{section.2.B.1}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {B.2}IBM模型4训练方法}{4
25}{section.2.B.2}
\contentsline {section}{\numberline {B.2}IBM模型4训练方法}{4
13}{section.2.B.2}%
\defcounter {refsection}{0}\relax
\contentsline {section}{\numberline {B.3}IBM模型5训练方法}{4
27}{section.2.B.3}
\contentsline {section}{\numberline {B.3}IBM模型5训练方法}{4
15}{section.2.B.3}%
\contentsfinish
Section03-Word-Based-Models/section03.tex (view file @ bd87e7ad)
...
...
@@ -909,12 +909,12 @@
 \begin{itemize}
-\item 很多时候,我们有多个互译句对$(\mathbf{s}^{[1]},\mathbf{t}^{[1]}),...,(\mathbf{s}^{[n]},\mathbf{t}^{[n]})$,称之为\alert{双语平行数据(语料)}。翻译概率可以被定义为
+\item 如果有多个互译句对$\{(\mathbf{s}^{[1]},\mathbf{t}^{[1]}),...,(\mathbf{s}^{[K]},\mathbf{t}^{[K]})\}$,称之为\alert{双语平行数据(语料)}。翻译概率可以被定义为
 \vspace{-1em}
 \begin{eqnarray}
-\textrm{P}(x,y) & = & \frac{\sum_{i=1}^{n} c(x,y;\mathbf{s}^{[i]},\mathbf{t}^{[i]})}{\sum_{i=1}^{n} \sum_{x',y'} c(x',y';\mathbf{s}^{[i]},\mathbf{t}^{[i]})} \nonumber
+\textrm{P}(x,y) & = & \frac{\sum_{k=1}^{K} c(x,y;\mathbf{s}^{[k]},\mathbf{t}^{[k]})}{\sum_{k=1}^{K} \sum_{x',y'} c(x',y';\mathbf{s}^{[k]},\mathbf{t}^{[k]})} \nonumber
 \end{eqnarray}
 \item <2-> 说白了就是计算$(x,y)$的频次时,在每个句子上累加
...
...
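The formula above is plain relative-frequency estimation: accumulate the co-occurrence counts c(x, y; s^[k], t^[k]) over all K sentence pairs, then normalize over all word pairs. A minimal Python sketch of that computation (the function name and toy corpus are illustrative, not from the slides):

from collections import defaultdict

def cooccurrence_prob(pairs):
    """Relative-frequency estimate of P(x, y): sum the counts
    c(x, y; s^[k], t^[k]) over all K sentence pairs, then normalize."""
    counts = defaultdict(float)
    for src, tgt in pairs:             # one (s^[k], t^[k]) pair per iteration
        for x in src:
            for y in tgt:
                counts[(x, y)] += 1.0  # c(x, y; s^[k], t^[k])
    total = sum(counts.values())       # sum over all (x', y') and all k
    return {xy: c / total for xy, c in counts.items()}

# toy corpus with K = 2 tokenized sentence pairs
pairs = [(["我", "满意"], ["I", "am", "satisfied"]),
         (["我", "同意"], ["I", "agree"])]
print(cooccurrence_prob(pairs)[("我", "I")])  # 2/10 = 0.2 on this toy corpus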
@@ -1414,7 +1414,7 @@ $m$ & $n$ & $n^m \cdot m!$ \\ \hline
 \node [anchor=north west,inner sep=2pt,align=left] (line4) at ([yshift=-1pt]line3.south west) {\textrm{3: \textbf{for} $i$ in $[1,m]$ \textbf{do}}};
 \node [anchor=north west,inner sep=2pt,align=left] (line5) at ([yshift=-1pt]line4.south west) {\textrm{4: \hspace{1em} $h = \phi$}};
 \node [anchor=north west,inner sep=2pt,align=left] (line6) at ([yshift=-1pt]line5.south west) {\textrm{5: \hspace{1em} \textbf{foreach} $j$ in $[1,m]$ \textbf{do}}};
-\node [anchor=north west,inner sep=2pt,align=left] (line7) at ([yshift=-1pt]line6.south west) {\textrm{6: \hspace{2em} \textbf{if} $used[j]=$ \textbf{true} \textbf{then}}};
+\node [anchor=north west,inner sep=2pt,align=left] (line7) at ([yshift=-1pt]line6.south west) {\textrm{6: \hspace{2em} \textbf{if} $used[j]=$ \textbf{false} \textbf{then}}};
 \node [anchor=north west,inner sep=2pt,align=left] (line8) at ([yshift=-1pt]line7.south west) {\textrm{7: \hspace{3em} $h = h \cup \textrm{\textsc{Join}}(best,\pi [j])$}};
 \node [anchor=north west,inner sep=2pt,align=left] (line9) at ([yshift=-1pt]line8.south west) {\textrm{8: \hspace{1em} $best = \textrm{\textsc{PruneForTop1}}(h)$}};
 \node [anchor=north west,inner sep=2pt,align=left] (line10) at ([yshift=-1pt]line9.south west) {\textrm{9: \hspace{1em} $used[best.j] = \textrm{\textsc{\textbf{true}}}$}};
...
...
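The TikZ listing above renders greedy decoding pseudocode: repeatedly extend the current best hypothesis with one still-unused source word, then prune to the single best candidate. A runnable Python sketch of the same loop; Hypothesis, translate_options, and score are hypothetical stand-ins for the slide's Join, pi, and pruning criterion:

from collections import namedtuple

Hypothesis = namedtuple("Hypothesis", ["words", "score", "j"])

def greedy_decode(src_words, translate_options, score):
    """Greedy word-based decoding as in the pseudocode: keep only the
    top-1 hypothesis after extending it with each unused source word."""
    used = [False] * len(src_words)
    best = Hypothesis(words=[], score=0.0, j=-1)
    for _ in src_words:                           # 3: for i in [1, m] do
        h = []                                    # 4: h = phi
        for j, w in enumerate(src_words):         # 5: foreach j in [1, m] do
            if not used[j]:                       # 6: if used[j] = false then
                cand = best.words + [translate_options[w]]  # 7: Join(best, pi[j])
                h.append(Hypothesis(cand, score(cand), j))
        best = max(h, key=lambda hyp: hyp.score)  # 8: PruneForTop1(h)
        used[best.j] = True                       # 9: used[best.j] = true
    return best.words

# toy usage with a hypothetical one-word translation table and a dummy score
opts = {"在": "at", "桌子": "table", "上": "top"}
print(greedy_decode(["在", "桌子", "上"], opts, score=lambda ws: -len(ws)))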
@@ -2395,7 +2395,7 @@ $m$ & $n$ & $n^m \cdot m!$ \\ \hline
 \item \textbf{翻译模型参数估计} - 计算$\textrm{P}(\mathbf{s}|\mathbf{t})$所需的参数
 \end{itemize}
 \vspace{0.5em}
-\item <2-> \textbf{IBM模型的假设}:$\mathbf{s}=s_1...s_m$和$\mathbf{t}=t_1...t_n$之间有单词一级的对应,称作\alert{单词对齐}或者\alert{词对齐}。此外:
+\item <2-> \textbf{IBM模型的假设}:$\mathbf{s}=s_1...s_m$和$\mathbf{t}=t_1...t_l$之间有单词一级的对应,称作\alert{单词对齐}或者\alert{词对齐}。此外:
 \begin{itemize}
 \item \textbf{约束}:一个源语言单词只能对应一个目标语单词
 \vspace{0.5em}
...
...
@@ -2792,11 +2792,11 @@ $\mathbf{s}$ = 在 桌子 上 \ \ \ \ \ $\mathbf{t}$ = $t_0$ on the table \ \ \
 \textrm{P}(\mathbf{s},\mathbf{a}|\mathbf{t}) & = & \textrm{P}(m|\mathbf{t}) \prod\limits_{j=1}^{m} \textrm{P}(a_j|a_{1}^{j-1},s_{1}^{j-1},m,\mathbf{t}) \textrm{P}(s_j|a_{1}^{j},s_{1}^{j-1},m,\mathbf{t}) \nonumber \\
 & \visible<2->{=} & \visible<2->{\textrm{P}(m=3 \mid \textrm{'$t_0$ on the table'})} \visible<3->{\times} \nonumber \\
 & & \visible<3->{\textrm{P}(a_1=0 \mid \phi,\phi,3,\textrm{'$t_0$ on the table'})} \visible<4->{\times} \nonumber \\
-& & \visible<4->{\textrm{P}(f_1=\textrm{在} \mid \textrm{\{1-0\}},\phi,3,\textrm{'$t_0$ on the table'})} \visible<5->{\times} \nonumber \\
+& & \visible<4->{\textrm{P}(s_1=\textrm{在} \mid \textrm{\{1-0\}},\phi,3,\textrm{'$t_0$ on the table'})} \visible<5->{\times} \nonumber \\
 & & \visible<5->{\textrm{P}(a_2=3 \mid \textrm{\{1-0\}},\textrm{'在'},3,\textrm{'$t_0$ on the table'})} \visible<6->{\times} \nonumber \\
-& & \visible<6->{\textrm{P}(f_2=\textrm{桌子} \mid \textrm{\{1-0,2-3\}},\textrm{'在'},3,\textrm{'$t_0$ on the table'})} \visible<7->{\times} \nonumber \\
+& & \visible<6->{\textrm{P}(s_2=\textrm{桌子} \mid \textrm{\{1-0,2-3\}},\textrm{'在'},3,\textrm{'$t_0$ on the table'})} \visible<7->{\times} \nonumber \\
 & & \visible<7->{\textrm{P}(a_3=1 \mid \textrm{\{1-0,2-3\}},\textrm{'在 桌子'},3,\textrm{'$t_0$ on the table'})} \visible<8->{\times} \nonumber \\
-& & \visible<8->{\textrm{P}(f_3=\textrm{上} \mid \textrm{\{1-0,2-3,3-1\}},\textrm{'在 桌子'},3,\textrm{'$t_0$ on the table'})} \nonumber
+& & \visible<8->{\textrm{P}(s_3=\textrm{上} \mid \textrm{\{1-0,2-3,3-1\}},\textrm{'在 桌子'},3,\textrm{'$t_0$ on the table'})} \nonumber
 \end{eqnarray}
 }
...
...
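Under IBM Model 1's simplifying assumptions this chain-rule expansion collapses to P(s, a | t) = ε/(l+1)^m · Π_j f(s_j | t_{a_j}), the form the later Lagrangian in this file also uses. A sketch that evaluates this product on the slide's example; the epsilon default and the f values below are made up for illustration:

def ibm1_joint(src, tgt, align, f, epsilon=1.0):
    """P(s, a | t) under IBM Model 1: uniform length/alignment terms
    epsilon/(l+1)^m times the lexical probabilities f(s_j | t_{a_j})."""
    m, l = len(src), len(tgt)
    p = epsilon / (l + 1) ** m
    for j, s_word in enumerate(src):
        # a_j = 0 aligns s_j to the empty word t_0 (here the token "NULL")
        t_word = "NULL" if align[j] == 0 else tgt[align[j] - 1]
        p *= f.get((s_word, t_word), 1e-9)
    return p

# s = 在 桌子 上, t = t_0 on the table, a = (0, 3, 1) as in the slide
f = {("在", "NULL"): 0.2, ("桌子", "table"): 0.5, ("上", "on"): 0.4}
print(ibm1_joint(["在", "桌子", "上"], ["on", "the", "table"], [0, 3, 1], f))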
@@ -3730,7 +3730,7 @@ $\mathbf{s}$ = 在 桌子 上 \ \ \ \ \ $\mathbf{t}$ = $t_0$ on the table \ \ \
 {\small
 \begin{eqnarray}
-L(f,\lambda) & = & \frac{\epsilon}{(l+1)^{m}} \prod\limits_{j=1}^{m} \sum\limits_{i=0}^{l} \prod\limits_{j=1}^{m} f(s_j|t_i) - \nonumber \\
+L(f,\lambda) & = & \frac{\epsilon}{(l+1)^{m}} \prod\limits_{j=1}^{m} \sum\limits_{i=0}^{l} f(s_j|t_i) - \nonumber \\
 & & \sum_{t_y} \lambda_{t_y} (\sum_{s_x} f(s_x|t_y) -1) \nonumber
 \end{eqnarray}
 }
...
...
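Setting the partial derivative of L(f, λ) to zero yields the fixed-point form that the next slides in this file start from (f(s_u|t_v) = λ_{t_v}^{-1} · P(s|t) · ...). A sketch of that intermediate step in the notation of the surrounding formulas, multiplying both sides by f(s_u|t_v) in the second line:

\begin{eqnarray}
\frac{\partial L(f,\lambda)}{\partial f(s_u|t_v)} & = & \textrm{P}(\mathbf{s}|\mathbf{t}) \cdot \frac{\sum_{j=1}^{m} \delta(s_j,s_u) \sum_{i=0}^{l} \delta(t_i,t_v)}{\sum_{i=0}^{l} f(s_u|t_i)} - \lambda_{t_v} \; = \; 0 \nonumber \\
f(s_u|t_v) & = & \lambda_{t_v}^{-1} \cdot \textrm{P}(\mathbf{s}|\mathbf{t}) \cdot \frac{\sum_{j=1}^{m} \delta(s_j,s_u) \sum_{i=0}^{l} \delta(t_i,t_v) \cdot f(s_u|t_v)}{\sum_{i=0}^{l} f(s_u|t_i)} \nonumber
\end{eqnarray}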
@@ -4190,9 +4190,9 @@ f(s_u|t_v) & = & \lambda_{t_v}^{-1} \cdot \textrm{P}(\mathbf{s}|\mathbf{t}) \cdo
 %%% scale it up to the full corpus
 \begin{frame}{在整个数据集上计算}
 \begin{itemize}
-\item \textbf{更真实的情况}:我们拥有一系列互译的句对(称作\alert{平行语料}),记为$\{(\mathbf{s}^{[1]},\mathbf{t}^{[1]}),(\mathbf{s}^{[2]},\mathbf{t}^{[2]}),...,(\mathbf{s}^{[N]},\mathbf{t}^{[N]})\}$。对于这$N$个训练用句对,定义$f(s_u|t_v)$的期望频次为
+\item \textbf{更真实的情况}:我们拥有一系列互译的句对(称作\alert{平行语料}),记为$\{(\mathbf{s}^{[1]},\mathbf{t}^{[1]}),(\mathbf{s}^{[2]},\mathbf{t}^{[2]}),...,(\mathbf{s}^{[K]},\mathbf{t}^{[K]})\}$。对于这$K$个训练用句对,定义$f(s_u|t_v)$的期望频次为
 \begin{displaymath}
-c_{\mathbb{E}}(s_u|t_v) = \sum_{i=1}^{N} c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[i]},\mathbf{t}^{[i]})
+c_{\mathbb{E}}(s_u|t_v) = \sum_{k=1}^{K} c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[k]},\mathbf{t}^{[k]})
 \end{displaymath}
 \item <2-> \textbf{于是}
 \begin{center}
...
...
@@ -4200,8 +4200,8 @@ f(s_u|t_v) & = & \lambda_{t_v}^{-1} \cdot \textrm{P}(\mathbf{s}|\mathbf{t}) \cdo
 \node [anchor=west,inner sep=2pt] (eq1) at (0,0) {$f(s_u|t_v)$};
 \node [anchor=west] (eq2) at (eq1.east) {$=$ \ };
 \draw [-] ([xshift=0.3em]eq2.east) -- ([xshift=11.6em]eq2.east);
-\node [anchor=south west] (eq3) at ([xshift=1em]eq2.east) {$\sum_{i=1}^{N} c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[i]},\mathbf{t}^{[i]})$};
-\node [anchor=north west] (eq4) at (eq2.east) {$\sum_{s_u} \sum_{i=1}^{N} c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[i]},\mathbf{t}^{[i]})$};
+\node [anchor=south west] (eq3) at ([xshift=1em]eq2.east) {$\sum_{k=1}^{K} c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[k]},\mathbf{t}^{[k]})$};
+\node [anchor=north west] (eq4) at (eq2.east) {$\sum_{s_u} \sum_{k=1}^{K} c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[k]},\mathbf{t}^{[k]})$};
 \visible<4->{
 \node [anchor=south] (label1) at ([yshift=-6em,xshift=3em]eq1.north west) {利用这个公式计算};
...
...
@@ -4250,16 +4250,16 @@ f(s_u|t_v) & = & \lambda_{t_v}^{-1} \cdot \textrm{P}(\mathbf{s}|\mathbf{t}) \cdo
 \label{ibmtraining}
 \begin{beamerboxesrounded}[upper=uppercolblue,lower=lowercolblue,shadow=true]{IBM模型1的训练(EM算法)}
-输入: 平行语料$\{(\mathbf{s}^{[1]},\mathbf{t}^{[1]}),...,(\mathbf{s}^{[N]},\mathbf{t}^{[N]})\}$ \\
+输入: 平行语料$\{(\mathbf{s}^{[1]},\mathbf{t}^{[1]}),...,(\mathbf{s}^{[K]},\mathbf{t}^{[K]})\}$ \\
 输出:参数$f(\cdot|\cdot)$的最优值 \\
-1: \textbf{Function} \textsc{TrainItWithEM}($\{(\mathbf{s}^{[1]},\mathbf{t}^{[1]}),...,(\mathbf{s}^{[N]},\mathbf{t}^{[N]})\}$) \\
+1: \textbf{Function} \textsc{TrainItWithEM}($\{(\mathbf{s}^{[1]},\mathbf{t}^{[1]}),...,(\mathbf{s}^{[K]},\mathbf{t}^{[K]})\}$) \\
 2: \ \ Initialize $f(\cdot|\cdot)$ \hspace{5em} $\rhd$ 比如给$f(\cdot|\cdot)$一个均匀分布 \\
 3: \ \ Loop until $f(\cdot|\cdot)$ converges \\
-4: \ \ \ \ \textbf{foreach} $k = 1$ to $N$ \textbf{do} \\
+4: \ \ \ \ \textbf{foreach} $k = 1$ to $K$ \textbf{do} \\
 5: \ \ \ \ \ \ \ \footnotesize{$c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[k]},\mathbf{t}^{[k]}) = \sum\limits_{j=1}^{|\mathbf{s}^{[k]}|} \delta(s_j,s_u) \sum\limits_{i=0}^{|\mathbf{t}^{[k]}|} \delta(t_i,t_v) \cdot \frac{f(s_u|t_v)}{\sum_{i=0}^{l} f(s_u|t_i)}$}\normalsize{} \\
-6: \ \ \ \ \textbf{foreach} $t_v$ appears at least one of $\{\mathbf{t}^{[1]},...,\mathbf{t}^{[N]}\}$ \textbf{do} \\
+6: \ \ \ \ \textbf{foreach} $t_v$ appears at least one of $\{\mathbf{t}^{[1]},...,\mathbf{t}^{[K]}\}$ \textbf{do} \\
 7: \ \ \ \ \ \ \ $\lambda_{t_v}^{'} = \sum_{s_u} \sum_{k=1}^{N} c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[k]},\mathbf{t}^{[k]})$ \\
-8: \ \ \ \ \ \ \ \textbf{foreach} $s_u$ appears at least one of $\{\mathbf{s}^{[1]},...,\mathbf{s}^{[N]}\}$ \textbf{do} \\
+8: \ \ \ \ \ \ \ \textbf{foreach} $s_u$ appears at least one of $\{\mathbf{s}^{[1]},...,\mathbf{s}^{[K]}\}$ \textbf{do} \\
 9: \ \ \ \ \ \ \ \ \ $f(s_u|t_v) = \sum_{k=1}^{N} c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[k]},\mathbf{t}^{[k]}) \cdot (\lambda_{t_v}^{'})^{-1}$ \\
 10: \ \textbf{return} $f(\cdot|\cdot)$
 \end{beamerboxesrounded}
...
...
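The boxed pseudocode translates almost line for line into Python. A runnable sketch of IBM Model 1 EM training; the fixed iteration count standing in for the convergence test on line 3 and the "NULL" token for the empty word t_0 are implementation choices, not from the box:

from collections import defaultdict

def train_ibm1_em(pairs, iterations=10):
    """EM training of f(.|.) on a parallel corpus
    {(s^[1], t^[1]), ..., (s^[K], t^[K])} of tokenized sentence pairs."""
    src_vocab = {w for s, _ in pairs for w in s}
    tgt_vocab = {w for _, t in pairs for w in t} | {"NULL"}
    f = {(su, tv): 1.0 / len(src_vocab)           # 2: uniform initialization
         for su in src_vocab for tv in tgt_vocab}
    for _ in range(iterations):                   # 3: loop "until convergence"
        counts = defaultdict(float)               # c_E(s_u|t_v), summed over k
        totals = defaultdict(float)               # lambda'_{t_v}
        for s, t in pairs:                        # 4: foreach sentence pair k
            t = ["NULL"] + t                      # prepend the empty word t_0
            for su in s:                          # 5: expected counts
                norm = sum(f[(su, ti)] for ti in t)
                for tv in t:
                    c = f[(su, tv)] / norm
                    counts[(su, tv)] += c
                    totals[tv] += c               # 7: accumulate lambda'_{t_v}
        for (su, tv), c in counts.items():        # 8-9: renormalize
            f[(su, tv)] = c / totals[tv]
    return f                                      # 10: return f(.|.)

pairs = [(["我", "满意"], ["I", "am", "satisfied"]),
         (["我", "同意"], ["I", "agree"])]
print(train_ibm1_em(pairs)[("我", "I")])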
@@ -4287,8 +4287,8 @@ c_{\mathbb{E}}(i|j,m,l;\mathbf{s},\mathbf{t}) & = & \frac{f(s_j|t_i)a(i|j,m,l)}{
 \end{eqnarray}
 \item \textbf{M-Step}
 \begin{eqnarray}
-f(s_u|t_v) & = & \frac{\sum_{k=0}^{K} c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[k]},\mathbf{t}^{[k]})}{\sum_{s_u} \sum_{k=0}^{K} c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[k]},\mathbf{t}^{[k]})} \nonumber \\
-a(i|j,m,l) & = & \frac{\sum_{k=0}^{K} c_{\mathbb{E}}(i|j;\mathbf{s}^{[k]},\mathbf{t}^{[k]})}{\sum_{i} \sum_{k=0}^{K} c_{\mathbb{E}}(i|j;\mathbf{s}^{[k]},\mathbf{t}^{[k]})} \nonumber
+f(s_u|t_v) & = & \frac{\sum_{k=1}^{K} c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[k]},\mathbf{t}^{[k]})}{\sum_{s_u} \sum_{k=1}^{K} c_{\mathbb{E}}(s_u|t_v;\mathbf{s}^{[k]},\mathbf{t}^{[k]})} \nonumber \\
+a(i|j,m,l) & = & \frac{\sum_{k=1}^{K} c_{\mathbb{E}}(i|j;\mathbf{s}^{[k]},\mathbf{t}^{[k]})}{\sum_{i} \sum_{k=1}^{K} c_{\mathbb{E}}(i|j;\mathbf{s}^{[k]},\mathbf{t}^{[k]})} \nonumber
 \end{eqnarray}
 \end{enumerate}
 \end{frame}
...
...
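These E-step and M-step formulas extend Model 1 with the alignment distribution a(i|j,m,l). A sketch of one EM iteration under two assumptions: f and a are plain dicts already populated for every key that will be queried, and the truncated E-step denominator is the usual normalizer over i:

from collections import defaultdict

def ibm2_em_step(pairs, f, a):
    """One E+M step of IBM Model 2: f maps (s_u, t_v) to lexical
    probabilities, a maps (i, j, m, l) to alignment probabilities."""
    cf, cf_tot = defaultdict(float), defaultdict(float)
    ca, ca_tot = defaultdict(float), defaultdict(float)
    for s, t in pairs:                          # k = 1 ... K
        m, l = len(s), len(t)
        t = ["NULL"] + t                        # t_0, the empty word (i = 0)
        for j, sj in enumerate(s, start=1):
            norm = sum(f[(sj, t[i])] * a[(i, j, m, l)] for i in range(l + 1))
            for i in range(l + 1):              # E-step: c_E(i|j,m,l; s, t)
                c = f[(sj, t[i])] * a[(i, j, m, l)] / norm
                cf[(sj, t[i])] += c; cf_tot[t[i]] += c
                ca[(i, j, m, l)] += c; ca_tot[(j, m, l)] += c
    for (su, tv) in cf:                         # M-step: renormalize f over s_u
        f[(su, tv)] = cf[(su, tv)] / cf_tot[tv]
    for (i, j, m, l) in ca:                     # M-step: renormalize a over i
        a[(i, j, m, l)] = ca[(i, j, m, l)] / ca_tot[(j, m, l)]
    return f, a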
Section05-Neural-Networks-and-Language-Modeling/section05.tex (view file @ bd87e7ad)
...
...
@@ -541,7 +541,7 @@ GPT-2 (Transformer) & Radford et al. & 2019 & \alert{35.7}
 \end{itemize}
 \item <2-> \textbf{当然},你是一个勇于实践的人
 \begin{itemize}
-\item 方法很简单:不断地尝试,根据结构不断地调整权重
+\item 方法很简单:不断地尝试,根据结果不断地调整权重
 \item <10-> 在进行了很多次实验后,发现了相对好的一组权重
 \end{itemize}
 \end{itemize}
...
...
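The corrected slide text describes tuning by trial and error: keep trying variations of the weights and adjust them according to the results. A minimal random-search sketch of that idea; evaluate is a hypothetical black-box scoring function (higher is better):

import random

def tune_weights_by_trial(evaluate, dim, trials=1000, step=0.1):
    """Perturb the current weights; keep a change whenever it improves."""
    w = [random.uniform(-1, 1) for _ in range(dim)]
    best = evaluate(w)
    for _ in range(trials):
        cand = [wi + random.gauss(0, step) for wi in w]  # try a variation
        score = evaluate(cand)
        if score > best:                                 # adjust by the result
            w, best = cand, score
    return w, best

# toy usage: recover weights close to a hidden target by trial and error
target = [0.3, -0.7, 0.5]
w, score = tune_weights_by_trial(
    lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target)), dim=3)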
@@ -1034,7 +1034,7 @@ T(\alpha \textbf{a}) & = & \alpha T(\textbf{a}) \nonumber
 \visible<3->{
 \node [anchor=center,fill=green!20] (w2) at (w) {\Large{$\textbf{w}$}};
-\node [anchor=north,inner sep=1pt] (wlabel) at ([yshift=-0.7em]w.south) {\small{旋转(rotation)}};
+\node [anchor=north,inner sep=1pt] (wlabel) at ([yshift=-0.7em]w.south) {\small{旋转(rotation)、扩张(dilation)、挤压(squeeze)等}};
 \draw [<-] ([yshift=-0.2em]w2.south) -- (wlabel.north);
 \tikzstyle{neuron} = [rectangle,draw,thick,fill=red!30,red!35,minimum height=2em,minimum width=2em,font=\small]
...
编写
预览
Markdown
格式
0%
重试
或
添加新文件
添加附件
取消
您添加了
0
人
到此讨论。请谨慎行事。
请先完成此评论的编辑!
取消
请
注册
或者
登录
后发表评论