Commit ddc285f7 by xiaotong

Merge branch 'master' of 47.105.50.196:NiuTrans/mtbookv2

parents a3758daa 625a1644
......@@ -471,17 +471,15 @@ p_0+p_1 & = & 1 \label{eq:6-21}
\sectionnewpage
\section{小结及深入阅读}
{\color{red}TODO: the discussion of fertility needs to be expanded}
\parinterval This chapter gave a comprehensive introduction to and discussion of the IBM models. Starting from a simple word-based translation model, it described statistical machine translation along the dimensions of modeling, decoding, and training, touching on important concepts such as word alignment and optimization along the way. The IBM framework comprises five models, which treat the translation problem with progressively greater depth and, correspondingly, greater complexity. As the ``rite of passage'' into statistical machine translation, the ideas behind the IBM models still influence machine translation today. Although the IBM models are now rarely used on their own, and many researchers working on neural machine translation and other frontier topics have gradually forgotten them, it cannot be denied that they marked the beginning of an era. In a sense, whenever the machine translation problem is described by the formula $\hat{\vectorn{t}} = \argmax_{\vectorn{t}} \funp{P}(\vectorn{t}|\vectorn{s})$, one is, to a greater or lesser extent, following the same line of thought as the IBM models.
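\parinterval As a reminder of how this formula connects to the IBM models, the noisy-channel decomposition behind them can be spelled out (a standard derivation via Bayes' rule; since the denominator $\funp{P}(\vectorn{s})$ does not depend on $\vectorn{t}$, it can be dropped from the $\argmax$):

```latex
% Noisy-channel view underlying the IBM models: the translation model
% P(s|t) and the language model P(t) are modeled separately and
% combined at decoding time.
\begin{eqnarray}
\hat{\vectorn{t}} & = & \argmax_{\vectorn{t}} \funp{P}(\vectorn{t}|\vectorn{s}) \nonumber \\
                  & = & \argmax_{\vectorn{t}} \frac{\funp{P}(\vectorn{s}|\vectorn{t})\funp{P}(\vectorn{t})}{\funp{P}(\vectorn{s})} \nonumber \\
                  & = & \argmax_{\vectorn{t}} \funp{P}(\vectorn{s}|\vectorn{t})\funp{P}(\vectorn{t}) \nonumber
\end{eqnarray}
```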
\parinterval Of course, this book cannot cover everything the IBM models contain; much is left for interested readers to study and explore further. Several directions are worth considering:
\parinterval {\color{red}TODO: a concluding paragraph is still missing here}
\begin{itemize}
\vspace{0.5em}
\item In the decade and more after their introduction, the IBM models received sustained attention from the research community. A representative outcome is GIZA++ (\url{https://github.com/moses-smt/giza-pp}), which integrates the IBM models and the hidden Markov model and implements the training of these models. For a long time afterwards, GIZA++ was standard equipment in machine translation research, used to obtain word-level alignments on bilingual parallel data. Researchers have also analyzed the IBM models extensively, providing a solid basis for later work on statistical machine translation \upcite{och2004alignment}. Although the IBM models are rarely used on their own, and decoders based directly on them are now uncommon, they often participate in translation modeling as components of other models; this will be discussed in the next chapter on phrase-based and syntax-based models \upcite{koehn2003statistical}. In addition, the IBM models provide a very convenient way to score how well bilingual word strings correspond, so they are widely used to measure the strength of bilingual correspondence and serve as a common feature in natural language processing.
\vspace{0.5em}
\item Distortion is a classic concept in machine translation. Broadly speaking, any change in the position of objects can be described by distortion; in physical imaging systems, for example, distortion models help with lens correction \upcite{1966Decentering,ClausF05}. In machine translation, distortion essentially describes the deviation in word order between the source and target languages, and this deviation can be used to model reordering. The use of distortion can therefore be seen as one way of describing the reordering problem, which is also one of the main factors distinguishing machine translation from tasks such as speech recognition. Early statistical machine translation systems such as Pharaoh \upcite{DBLP:conf/amta/Koehn04} made heavy use of distortion. Although more sophisticated reordering models have since been proposed \upcite{Gros2008MSD,xiong2006maximum,och2004alignment,DBLP:conf/naacl/KumarB05,li-etal-2014-neural,vaswani2017attention}, the thinking about reordering that distortion triggered runs deep, and it remains one of the greatest contributions of the IBM models.
\vspace{0.5em}
\item Beyond their pioneering work on machine translation modeling, another important contribution of the IBM models was to establish the basic models of statistical word alignment. When training an IBM model, one obtains not only the model parameters but also word alignments over the bilingual data; that is, word alignment annotation is a by-product of IBM model training. This made the IBM models an important method for automatic word alignment. Much work, including GIZA++, is in fact used more for the automatic word alignment task than simply for training IBM model parameters. As the notion of word alignment matured, the task gradually became an important branch of natural language processing: for example, the outputs of the IBM models can be symmetrized \upcite{och2003systematic}, discriminative classification models can be applied directly to word alignment \upcite{ittycheriah2005maximum}, and the idea of alignment can even be applied to bilingual correspondence between phrases and syntactic structures \upcite{xiao2013unsupervised}. Besides GIZA++, researchers have developed many excellent automatic word alignment tools, such as FastAlign (\url{https://github.com/clab/fast_align}) and Berkeley Aligner (\url{https://github.com/mhajiloo/berkeleyaligner}), which are still widely used today.
\vspace{0.5em}
\item Another contribution of the IBM models was to introduce the concept of fertility into machine translation. Fertility is essentially a model of translation length: in the IBM models, the length of a whole sentence can be obtained by computing the fertility of each word. Note that in machine translation, the length of the output has a crucial impact on translation quality. Although many machine translation models do not use fertility directly, almost all modern machine translation systems include a module that controls output length; for example, both statistical and neural machine translation use the number of target words as a feature for generating translations of reasonable length \upcite{Koehn2007Moses,ChiangLMMRS05,bahdanau2014neural}. In neural machine translation, fertility models are also used to predict translation length in non-autoregressive decoding ({\color{red}references to be added}).
\vspace{0.5em}
\end{itemize}
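\parinterval As a small illustration of the length modeling mentioned in the last item (a sketch in the standard IBM Model 3 notation, where $\varphi_i$ denotes the fertility of the target word $t_i$ and $\varphi_0$ counts the words generated by the empty word $t_0$):

```latex
% In the fertility-based IBM models, every target word t_i generates
% phi_i source words and the empty word generates phi_0 more, so the
% source sentence length m is fully determined by the fertilities:
\begin{eqnarray}
m & = & \varphi_0 + \sum_{i=1}^{l} \varphi_i \nonumber
\end{eqnarray}
```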
......
......@@ -23,18 +23,18 @@
{\scriptsize
\node[anchor=west] (h1) at ([xshift=1em,yshift=-12em]q2.west) {{Span[0,3]下的翻译假设:}};
\node[anchor=west] (h2) at ([xshift=0em,yshift=-1.3em]h1.west) {{X: imports and exports}};
\node[anchor=west] (h6) at ([xshift=0em,yshift=-1.3em]h2.west) {{S: the import and export}};
\node[anchor=west] (h2) at ([xshift=0em,yshift=-1.3em]h1.west) {{Ximports and exports}};
\node[anchor=west] (h6) at ([xshift=0em,yshift=-1.3em]h2.west) {{Sthe import and export}};
}
{\scriptsize
\node[anchor=west] (h21) at ([xshift=9em,yshift=2em]h1.east) {{替换$\textrm{X}_1$后生成的翻译假设:}};
\node[anchor=west] (h22) at ([xshift=0em,yshift=-1.3em]h21.west) {{X: imports and exports have drastically fallen}};
\node[anchor=west] (h23) at ([xshift=0em,yshift=-1.3em]h22.west) {{X: the import and export have drastically fallen}};
\node[anchor=west] (h24) at ([xshift=0em,yshift=-1.3em]h23.west) {{X: imports and exports have drastically fallen}};
\node[anchor=west] (h25) at ([xshift=0em,yshift=-1.3em]h24.west) {{X: the import and export have drastically fallen}};
\node[anchor=west] (h26) at ([xshift=0em,yshift=-1.3em]h25.west) {{X: imports and exports has drastically fallen}};
\node[anchor=west] (h27) at ([xshift=0em,yshift=-1.3em]h26.west) {{X: the import and export has drastically fallen}};
\node[anchor=west] (h22) at ([xshift=0em,yshift=-1.3em]h21.west) {{Ximports and exports have drastically fallen}};
\node[anchor=west] (h23) at ([xshift=0em,yshift=-1.3em]h22.west) {{Xthe import and export have drastically fallen}};
\node[anchor=west] (h24) at ([xshift=0em,yshift=-1.3em]h23.west) {{Ximports and exports have drastically fallen}};
\node[anchor=west] (h25) at ([xshift=0em,yshift=-1.3em]h24.west) {{Xthe import and export have drastically fallen}};
\node[anchor=west] (h26) at ([xshift=0em,yshift=-1.3em]h25.west) {{Ximports and exports has drastically fallen}};
\node[anchor=west] (h27) at ([xshift=0em,yshift=-1.3em]h26.west) {{Xthe import and export has drastically fallen}};
}
\node [rectangle,inner sep=0.1em,rounded corners=1pt,draw] [fit = (h1) (h5) (h6)] (gl1) {};
......
......@@ -7,13 +7,13 @@
{\scriptsize
\node[anchor=west] (ref) at (0,0) {{\sffamily\bfseries{参考答案:}} The Chinese star performance troupe presented a wonderful Peking opera as well as singing and dancing };
\node[anchor=west] (ref) at (0,0) {{\sffamily\bfseries{参考答案}} The Chinese star performance troupe presented a wonderful Peking opera as well as singing and dancing };
\node[anchor=north west] (ref2) at (ref.south west) {{\color{white} \sffamily\bfseries{Reference:}} performance to Hong Kong audience .};
\node[anchor=north west] (hifst) at (ref2.south west) {{\sffamily\bfseries{层次短语系统:}} Star troupe of China, highlights of Peking opera and dance show to the audience of Hong Kong .};
\node[anchor=north west] (hifst) at (ref2.south west) {{\sffamily\bfseries{层次短语系统}} Star troupe of China, highlights of Peking opera and dance show to the audience of Hong Kong .};
\node[anchor=north west] (synhifst) at (hifst.south west) {{\sffamily\bfseries{句法系统:}} Chinese star troupe};
\node[anchor=north west] (synhifst) at (hifst.south west) {{\sffamily\bfseries{句法系统}} Chinese star troupe};
\node[anchor=west, fill=green!20!white, inner sep=0.25em] (synhifstpart2) at (synhifst.east) {presented};
......@@ -25,7 +25,7 @@
\node[anchor=west] (synhifstpart6) at (synhifstpart5.east) {.};
\node[anchor=north west] (input) at ([yshift=-6.5em]synhifst.south west) {\sffamily\bfseries{源语句法树:}};
\node[anchor=north west] (input) at ([yshift=-6.5em]synhifst.south west) {\sffamily\bfseries{源语句法树}};
\begin{scope}[scale = 0.9, grow'=up, sibling distance=5pt, level distance=30pt, xshift=3.49in, yshift=-3.1in]
......
......@@ -17,19 +17,19 @@
{\scriptsize
\node[anchor=west] (h1) at ([xshift=1em,yshift=-7em]q2.west) {{Span[0,3]下的翻译假设:}};
\node[anchor=west] (h2) at ([xshift=0em,yshift=-1.3em]h1.west) {{X: the imports and exports}};
\node[anchor=west] (h3) at ([xshift=0em,yshift=-1.3em]h2.west) {{X: imports and exports}};
\node[anchor=west] (h4) at ([xshift=0em,yshift=-1.3em]h3.west) {{X: exports and imports}};
\node[anchor=west] (h5) at ([xshift=0em,yshift=-1.3em]h4.west) {{X: the imports and the exports}};
\node[anchor=west] (h6) at ([xshift=0em,yshift=-1.3em]h5.west) {{S: the import and export}};
\node[anchor=west] (h2) at ([xshift=0em,yshift=-1.3em]h1.west) {{Xthe imports and exports}};
\node[anchor=west] (h3) at ([xshift=0em,yshift=-1.3em]h2.west) {{Ximports and exports}};
\node[anchor=west] (h4) at ([xshift=0em,yshift=-1.3em]h3.west) {{Xexports and imports}};
\node[anchor=west] (h5) at ([xshift=0em,yshift=-1.3em]h4.west) {{Xthe imports and the exports}};
\node[anchor=west] (h6) at ([xshift=0em,yshift=-1.3em]h5.west) {{Sthe import and export}};
}
{\scriptsize
\node[anchor=west] (h21) at ([xshift=9em,yshift=0em]h1.east) {{替换$\textrm{X}_1$后生成的翻译假设:}};
\node[anchor=west] (h22) at ([xshift=0em,yshift=-1.3em]h21.west) {{X: the imports and exports have drastically fallen}};
\node[anchor=west] (h23) at ([xshift=0em,yshift=-1.3em]h22.west) {{X: imports and exports have drastically fallen}};
\node[anchor=west] (h24) at ([xshift=0em,yshift=-1.3em]h23.west) {{X: exports and imports have drastically fallen}};
\node[anchor=west] (h25) at ([xshift=0em,yshift=-1.3em]h24.west) {{X: the imports and the exports have drastically fallen}};
\node[anchor=west] (h22) at ([xshift=0em,yshift=-1.3em]h21.west) {{Xthe imports and exports have drastically fallen}};
\node[anchor=west] (h23) at ([xshift=0em,yshift=-1.3em]h22.west) {{Ximports and exports have drastically fallen}};
\node[anchor=west] (h24) at ([xshift=0em,yshift=-1.3em]h23.west) {{Xexports and imports have drastically fallen}};
\node[anchor=west] (h25) at ([xshift=0em,yshift=-1.3em]h24.west) {{Xthe imports and the exports have drastically fallen}};
}
\node [rectangle,inner sep=0.1em,rounded corners=1pt,draw] [fit = (h1) (h5) (h6)] (gl1) {};
......
......@@ -17,13 +17,13 @@
]
\node [anchor=west] (target) at ([xshift=1em]bsw3.east) {Cats like eating fish};
\node [anchor=north,inner sep=3pt] (cap1) at ([yshift=-1em]target.south west) {(a) 基于树的解码};
\node [anchor=north,inner sep=3pt] (cap1) at ([xshift=-1.0em,yshift=-1em]target.south west) {(a) 基于树的解码};
\draw [->,thick] (bsw3.east) -- (target.west);
\node [anchor=west] (sourcelabel) at ([xshift=6em,yshift=-1em]bsn0.east) {显式输入的结构};
\node [anchor=west] (source2) at ([xshift=3.3em]target.east) {$\ \ \;$喜欢$\ \;$\ };
\node [anchor=west] (source2) at ([xshift=3.3em,yshift=0.0em]target.east) {$\ \ \;$喜欢$\ \;$\ };
\node [anchor=west] (target2) at ([xshift=1em]source2.east) {Cats like eating fish};
\node [anchor=north,inner sep=3pt] (cap2) at ([xshift=1.1em,yshift=-1em]target2.south west) {(b) 基于串的解码};
\node [anchor=north,inner sep=3pt] (cap2) at ([xshift=-1.5em,yshift=-1em]target2.south west) {(b) 基于串的解码};
\draw [->,thick] (source2.east) -- (target2.west);
\begin{pgfonlayer}{background}
......@@ -32,7 +32,7 @@
}
\end{pgfonlayer}
\begin{scope}[xshift=3.18in,yshift=-0em,sibling distance=10pt]
\begin{scope}[xshift=3.18in,yshift=-0.28em,sibling distance=10pt]
\Tree[.\node(bsn0){IP};
[.\node(bsn1){NP};
[.\node(bsn2){NN}; ]
......@@ -44,7 +44,7 @@
]
\begin{pgfonlayer}{background}
\node [draw,dashed,rectangle,inner sep=1em,thick,red,rounded corners=5pt] (box) [fit = (bsn0) (bsn1) (bsn2) (bsn3) (bsn4) (bsn5)] {};
\node [draw,dashed,rectangle,inner sep=0.7em,thick,red,rounded corners=5pt] (box) [fit = (bsn0) (bsn1) (bsn2) (bsn3) (bsn4) (bsn5)] {};
\node [anchor=north west] (boxlabel) at ([xshift=2em,yshift=-2em]box.north east) {隐含结构};
\end{pgfonlayer}
......
......@@ -764,7 +764,6 @@
author = {Philipp Koehn},
title = {Pharaoh: {A} Beam Search Decoder for Phrase-Based Statistical Machine
Translation Models},
//series = {Lecture Notes in Computer Science},
volume = {3265},
pages = {115--124},
publisher = {Springer},
......@@ -2076,6 +2075,36 @@
pages = {836--841},
year = {1996},
}
@inproceedings{1966Decentering,
  author    = {Duane C. Brown},
title = {Decentering Distortion of Lenses},
publisher = {Photogrammetric Engineering},
volume = {32},
pages = {444--462},
year = {1966}
}
@inproceedings{ClausF05,
author = {David Claus and
Andrew W. Fitzgibbon},
title = {A Rational Function Lens Distortion Model for General Cameras},
pages = {213--219},
publisher = {{IEEE} Computer Society Conference on Computer Vision and Pattern
Recognition},
year = {2005},
}
@inproceedings{ChiangLMMRS05,
author = {David Chiang and
Adam Lopez and
Nitin Madnani and
Christof Monz and
Philip Resnik and
Michael Subotin},
title = {The Hiero Machine Translation System: Extensions, Evaluation, and
Analysis},
pages = {779--786},
publisher = {Annual Meeting of the Association for Computational Linguistics},
year = {2005},
}
%%%%% chapter 6------------------------------------------------------
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
......@@ -2213,6 +2242,14 @@
publisher = {Annual Meeting of the Association for Computational Linguistics},
year = {2011}
}
@inproceedings{DBLP:conf/acl/KleinM03,
author = {Dan Klein and
Christopher D. Manning},
title = {Accurate Unlexicalized Parsing},
pages = {423--430},
publisher = {Annual Meeting of the Association for Computational Linguistics},
year = {2003}
}
%%%%% chapter 7------------------------------------------------------
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
......