Commit 69c835b3 by 曹润柘

Merge branch 'caorunzhe' into 'master'

Caorunzhe

See merge request !133
parents 90ac3c90 850a0f4a
@@ -113,16 +113,7 @@
\parinterval 早在17世纪,如Descartes、Leibniz、Cave\ Beck、Athanasius\ Kircher和Johann\ Joachim\ Becher等很多学者就提出采用机器词典(电子词典)来克服语言障碍的想法\upcite{knowlson1975universal},这种想法在当时是很超前的。随着语言学、计算机科学等学科的发展,在20世纪30年代使用计算模型进行自动翻译的思想开始萌芽,如当时法国科学家Georges Artsrouni就提出用机器来进行翻译的想法。只是那时依然没有合适的实现手段,所以这种想法的合理性无法被证实。
\parinterval 随着第二次世界大战爆发,对文字进行加密和解密成为重要的军事需求,这也使得数学和密码学变得相当发达。在战争结束一年后,世界上第一台通用电子数字计算机于1946年研制成功,至此使用机器进行翻译有了真正实现的可能。
%----------------------------------------------
\begin{figure}[htp]
\centering
\includegraphics[scale=0.4]{./Chapter1/Figures/figure-eniac.jpeg}
\caption{世界上第一台通用电子数字计算机“埃尼阿克”(ENIAC)}
\label{fig:1-4}
\end{figure}
%-------------------------------------------
\parinterval 基于战时密码学领域与通讯领域的研究,Claude Elwood Shannon在1948年提出使用“噪声信道”描述语言的传输过程,并借用热力学中的“{\small\bfnew{熵}}\index{熵}”(Entropy)\index{Entropy}来刻画消息中的信息量\upcite{DBLP:journals/bstj/Shannon48}。次年,Shannon与Warren Weaver更是合著了著名的\emph{The Mathematical Theory of Communication}\upcite{shannon1949the},这些工作都为后期的统计机器翻译打下了理论基础。
@@ -265,7 +256,7 @@
\subsection{规则的定义与层次}
\parinterval 规则就像语言中的“If-then”语句,如果满足条件,则执行相应的语义动作。比如,可以将待翻译句子中的某个词,使用目标语言单词进行替换,但是这种替换并非随意的,而是在语言学知识的指导下进行的。
\parinterval 图\ref{fig:1-9}展示了一个使用转换法进行翻译的实例。这里,利用一个简单的汉译英规则库完成对句子“我对你感到满意”的翻译。当翻译“我”时,从规则库中找到规则1,该规则表示遇到单词“我”就翻译为“I”;类似地,也可以从规则库中找到规则4,该规则表示翻译调序,即将单词“you”放到“be satisfied with”后面。这种通过规则表示单词之间的对应关系也为统计机器翻译方法提供了思路。如统计机器翻译中,基于短语的翻译模型使用短语对对原文进行替换,详细描述可以参考{\chapterseven}。
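\parinterval 为了更直观地理解上述过程,下面给出一段极简的示意代码,模拟“先逐词替换、再按规则调序”的转换法翻译流程。需要说明的是,这里的规则内容与函数名均为本书为举例而假设的,并非任何真实系统的实现:

\begin{verbatim}
# 一个假设的汉译英词汇规则库(空串表示该词不单独翻译)
lexical_rules = {"我": "I", "对": "", "你": "you",
                 "感到满意": "am satisfied with"}

def rule_based_translate(src_words):
    # 第一步:按词汇规则逐词替换
    tgt = [lexical_rules[w] for w in src_words if lexical_rules.get(w)]
    # 第二步:调序规则,把“you”移动到“am satisfied with”之后
    if "you" in tgt and "am satisfied with" in tgt:
        tgt.remove("you")
        tgt.insert(tgt.index("am satisfied with") + 1, "you")
    return " ".join(tgt)

print(rule_based_translate(["我", "对", "你", "感到满意"]))
# 输出:I am satisfied with you
\end{verbatim}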
@@ -353,15 +344,15 @@
\parinterval 在基于规则的机器翻译时代,机器翻译技术研究有一个特点就是{\small\bfnew{语法}}\index{语法}(Grammar)\index{Grammar}和{\small\bfnew{算法}}\index{算法}(Algorithm)\index{Algorithm}分开,相当于是把语言分析和程序设计分开。传统方式使用程序代码来实现翻译规则,并把所谓的翻译规则隐含在程序代码实现中。其中最大问题是一旦翻译规则发生修改,程序代码也需要进行相应修改,导致维护代价非常高。此外,书写翻译规则的语言学家与编写代码的程序员沟通代价也非常高,有时候会出现鸡同鸭讲的感觉。把语法和算法分开,对于基于规则的机器翻译技术来说,最大好处就是可以将语言学家和程序员的工作分开,各自发挥自己的优势。
\parinterval 这种语言分析和程序设计分开的实现方式也使得基于人工书写翻译规则的机器翻译方法非常直观,语言学家可以很容易地将翻译知识利用规则的方法表达出来,并且不需要修改系统代码。例如:1991年,东北大学自然语言处理实验室王宝库教授提出的规则描述语言(CTRDL)\upcite{王宝库1991机器翻译系统中一种规则描述语言},以及1995年,同为东北大学自然语言处理实验室的姚天顺教授提出的词汇语义驱动算法\upcite{唐泓英1995基于搭配词典的词汇语义驱动算法},都是在这种思想上对机器翻译方法的一种改进。此外,使用规则本身就具有一定的优势:
\begin{itemize}
\vspace{0.5em}
\item 翻译规则的书写颗粒度具有很大的可伸缩性。
\vspace{0.5em}
\item 较大颗粒度的翻译规则有很强的概括能力,较小颗粒度的翻译规则具有精细的描述能力。
\vspace{0.5em}
\item 翻译规则便于处理复杂的句法结构和进行深层次的语义理解,比如解决翻译过程中的长距离依赖问题。
\vspace{0.5em}
\end{itemize}
@@ -395,15 +386,15 @@
\end{figure}
%-------------------------------------------
\parinterval 当然,基于实例的机器翻译也并不完美:
\begin{itemize}
\vspace{0.5em}
\item 这种方法对翻译实例的精确度要求非常高,一个实例的错误可能会导致一个句型都无法翻译正确。
\vspace{0.5em}
\item 实例维护较为困难,实例库的构建通常需要单词级对齐的标注,而保证词对齐的质量是非常困难的工作,这也大大增加了实例库维护的难度。
\vspace{0.5em}
\item 尽管可以通过实例或者模板进行翻译,但是其覆盖度仍然有限。在实际应用中,很多句子无法找到可以匹配的实例或者模板。
\vspace{0.5em}
\end{itemize}
@@ -447,14 +438,14 @@
\end{figure}
%-------------------------------------------
\parinterval 与统计机器翻译相比,神经机器翻译的优势体现在其不需要特征工程,所有信息由神经网络自动从原始输入中提取。而且,相比于统计机器翻译中所使用的离散化的表示,神经机器翻译中词和句子的分布式连续空间表示可以为建模提供更为丰富的信息,同时可以使用相对成熟的基于梯度的方法优化模型。此外,神经网络的存储需求较小,天然适合小设备上的应用。当然,神经机器翻译也存在问题:
\begin{itemize}
\vspace{0.5em}
\item 虽然脱离了特征工程,但神经网络的结构需要人工设计,即使设计好结构,系统的调优、超参数的设置等仍然依赖大量的实验。
\vspace{0.5em}
\item 神经机器翻译现在缺乏可解释性,其过程和人的认知差异很大,通过人的先验知识干预的程度差。
\vspace{0.5em}
\item 神经机器翻译对数据的依赖很大,数据规模、质量对性能都有很大影响,特别是在数据稀缺的情况下,充分训练神经网络很有挑战性。
\vspace{0.5em}
\end{itemize}
...
@@ -114,13 +114,13 @@ F(x)=\int_{-\infty}^x f(x)\textrm{d}x
\label{eq:2-4}
\end{eqnarray}
\parinterval 为了更好地区分条件概率、边缘概率和联合概率,这里用一个图形面积的计算来举例说明。如图\ref{fig:2-2}所示,矩形$A$代表事件$X$发生所对应的所有可能状态,矩形$B$代表事件$Y$发生所对应的所有可能状态,矩形$C$代表$A$和$B$的交集,则:
\begin{itemize}
\vspace{0.5em}
\item 边缘概率:矩形$A$或者矩形$B$的面积;
\vspace{0.5em}
\item 联合概率:矩形$C$的面积;
\vspace{0.5em}
\item 条件概率:联合概率/对应的边缘概率,如:$\funp{P}(A \mid B)$=矩形$C$的面积/矩形$B$的面积。
\vspace{0.5em}
@@ -186,7 +186,7 @@ F(x)=\int_{-\infty}^x f(x)\textrm{d}x
\subsection{贝叶斯法则}\label{sec:2.2.3}
\parinterval 首先介绍一下全概率公式:{\small\bfnew{全概率公式}}\index{全概率公式}(Law of Total Probability)\index{Law of Total Probability}是概率论中重要的公式,它可以将一个复杂事件发生的概率分解成不同情况的小事件发生概率的和。这里先介绍一个概念——划分。集合$\Sigma$的一个划分事件为$\{B_1, \ldots ,B_n\}$是指它们满足$\bigcup_{i=1}^n B_i=\Sigma \textrm{且} B_iB_j=\varnothing , i,j=1, \ldots ,n,i\neq j$。此时事件$A$的全概率公式可以被描述为:
\begin{eqnarray}
\funp{P}(A)=\sum_{k=1}^n \funp{P}(A \mid B_k)\funp{P}(B_k)
\label{eq:2-9}
@@ -221,7 +221,7 @@ F(x)=\int_{-\infty}^x f(x)\textrm{d}x
\label{eq:2-11}
\end{eqnarray}
\noindent 其中,等式右端的分母部分使用了全概率公式。进一步,令$\bar{B}$表示事件$B$不发生的情况,由上式,也可以得到贝叶斯公式的另外一种写法:
\begin{eqnarray}
\funp{P}(B \mid A) & = & \frac { \funp{P}(A \mid B)\funp{P}(B) } {\funp{P}(A)} \nonumber \\
& = & \frac { \funp{P}(A \mid B)\funp{P}(B) } {\funp{P}(A \mid B)\funp{P}(B)+\funp{P}(A \mid \bar{B}) \funp{P}(\bar{B})}
@@ -272,8 +272,8 @@ F(x)=\int_{-\infty}^x f(x)\textrm{d}x
\parinterval 自信息处理的是变量单一取值的情况。若要量化整个概率分布中的不确定性或信息量,可以用信息熵,记为$\funp{H}(x)$。其公式如下:
\begin{eqnarray}
\funp{H}(x) & = & \sum_{x \in X}[ \funp{P}(x) \funp{I}(x)] \nonumber \\
& = & - \sum_{x \in X } [\funp{P}(x)\log(\funp{P}(x)) ]
\label{eq:2-14}
\end{eqnarray}
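\parinterval 为了更直观地理解信息熵的计算,这里给出一段示意代码,其中的分布取值仅为演示而假设:

\begin{verbatim}
import math

# 一个假设的离散分布,仅用于演示
P = {"a": 0.5, "b": 0.25, "c": 0.25}

# H(x) = - sum_x P(x) * log P(x),取以2为底的对数,单位为比特
H = -sum(p * math.log2(p) for p in P.values())
print(H)  # 1.5
\end{verbatim}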
@@ -287,8 +287,8 @@ F(x)=\int_{-\infty}^x f(x)\textrm{d}x
\parinterval 如果同一个随机变量$X$上有两个概率分布$\funp{P}(x)$和$\funp{Q}(x)$,那么可以使用{\small\bfnew{Kullback-Leibler距离}}\index{Kullback-Leibler距离}或{\small\bfnew{KL距离}}\index{KL距离}(KL Distance\index{KL Distance})来衡量这两个分布的不同(也称作KL散度),这种度量就是{\small\bfnew{相对熵}}\index{相对熵}(Relative Entropy)\index{Relative Entropy}。其公式如下:
\begin{eqnarray}
\funp{D}_{\textrm{KL}}(\funp{P}\parallel \funp{Q}) & = & \sum_{x \in X} [ \funp{P}(x)\log \frac{\funp{P}(x) }{ \funp{Q}(x) } ] \nonumber \\
& = & \sum_{x \in X }[ \funp{P}(x)(\log \funp{P}(x)-\log \funp{Q}(x))]
\label{eq:2-15}
\end{eqnarray}
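\parinterval 相对熵的计算同样可以用一段示意代码表达。注意KL距离不具有对称性,交换$\funp{P}$和$\funp{Q}$会得到不同的结果(分布取值仍为演示而假设):

\begin{verbatim}
import math

# 两个假设的离散分布,仅用于演示
P = {"a": 0.5, "b": 0.5}
Q = {"a": 0.75, "b": 0.25}

def kl(P, Q):
    # D_KL(P||Q) = sum_x P(x) * (log P(x) - log Q(x))
    return sum(P[x] * (math.log2(P[x]) - math.log2(Q[x])) for x in P)

print(kl(P, Q))  # 约0.2075
print(kl(Q, P))  # 约0.1887,与上面的结果不相等
\end{verbatim}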
@@ -310,7 +310,7 @@ F(x)=\int_{-\infty}^x f(x)\textrm{d}x
\parinterval {\small\bfnew{交叉熵}}\index{交叉熵}(Cross-entropy)\index{Cross-entropy}是一个与KL距离密切相关的概念,它的公式是:
\begin{eqnarray}
\funp{H}(\funp{P},\funp{Q})=-\sum_{x \in X} [\funp{P}(x) \log \funp{Q}(x) ]
\label{eq:2-16}
\end{eqnarray}
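\parinterval 交叉熵、信息熵与KL距离之间满足$\funp{H}(\funp{P},\funp{Q})=\funp{H}(\funp{P})+\funp{D}_{\textrm{KL}}(\funp{P}\parallel \funp{Q})$。下面的示意代码验证了这一关系(分布取值沿用上面假设的例子):

\begin{verbatim}
import math

P = {"a": 0.5, "b": 0.5}
Q = {"a": 0.75, "b": 0.25}

# H(P,Q) = - sum_x P(x) * log Q(x)
H_PQ = -sum(P[x] * math.log2(Q[x]) for x in P)
# H(P)与D_KL(P||Q)
H_P = -sum(p * math.log2(p) for p in P.values())
D_PQ = sum(P[x] * (math.log2(P[x]) - math.log2(Q[x])) for x in P)

print(H_PQ)        # 约1.2075
print(H_P + D_PQ)  # 与H_PQ相等
\end{verbatim}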
@@ -523,7 +523,7 @@ F(x)=\int_{-\infty}^x f(x)\textrm{d}x
\begin{itemize}
\vspace{0.5em}
\item {\small\bfnew{基于频次的方法}}\index{基于频次的方法}。直接利用词序列在训练数据中出现的频次计算出$\funp{P}(w_m|w_{m-n+1}$\\$ \ldots w_{m-1})$:
\begin{eqnarray}
\funp{P}(w_m|w_{m-n+1} \ldots w_{m-1})=\frac{\textrm{count}(w_{m-n+1} \ldots w_m)}{\textrm{count}(w_{m-n+1} \ldots w_{m-1})}
\label{eq:2-24}
@@ -597,7 +597,7 @@ F(x)=\int_{-\infty}^x f(x)\textrm{d}x
\noindent 其中,$V$表示词表,$|V|$为词表中单词的个数,$w$为词表中的一个词。有时候,加法平滑方法会将$\theta$取1,这时称之为加一平滑或是拉普拉斯平滑。这种方法比较容易理解,也比较简单,因此也往往被用于系统的快速原型开发中。
\parinterval 举一个例子。假设在一个英语文档中随机采样一些单词(词表大小$|V|=20$),各个单词出现的次数为:“look”出现4次,“people”出现3次,“am”出现2次,“what”出现1次,“want”出现1次,“do”出现1次。图\ref{fig:2-12} 给出了在平滑之前和平滑之后的概率分布。
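\parinterval 按照上述例子,可以用一段示意代码比较平滑前后的概率(加法平滑公式沿用上文的定义,$\theta$取1即加一平滑;代码中的“the”代表词表中任何一个未出现过的词):

\begin{verbatim}
# 上文例子中的词频统计,词表大小|V|=20(其余词出现0次)
counts = {"look": 4, "people": 3, "am": 2,
          "what": 1, "want": 1, "do": 1}
V, N, theta = 20, sum(counts.values()), 1.0

def p_mle(w):
    # 平滑前:直接使用相对频次,未出现的词概率为0
    return counts.get(w, 0) / N

def p_add(w):
    # 加法平滑:P(w) = (count(w) + theta) / (N + theta * |V|)
    return (counts.get(w, 0) + theta) / (N + theta * V)

print(p_mle("look"), p_mle("the"))  # 0.3333...  0.0
print(p_add("look"), p_add("the"))  # 0.15625    0.03125
\end{verbatim}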
%----------------------------------------------
\begin{figure}[htp]
@@ -769,7 +769,7 @@ c_{\textrm{KN}}(\cdot) = \left\{\begin{array}{ll}
\begin{itemize}
\vspace{0.5em}
\item {\small\bfnew{训练}}\index{训练}(Training\index{Training}):从训练数据上估计出语言模型的参数。
\vspace{0.5em}
\item {\small\bfnew{预测}}\index{预测}(Prediction\index{Prediction}):用训练好的语言模型对新输入的句子进行概率评估,或者生成新的句子。
\vspace{0.5em}
@@ -823,7 +823,7 @@ c_{\textrm{KN}}(\cdot) = \left\{\begin{array}{ll}
\parinterval 在序列生成任务中,最简单的策略就是对词表中的词汇进行任意组合,通过这种枚举的方式得到全部可能的序列。但是,很多时候生成序列的长度是无法预先知道的。比如,机器翻译中目标语序列的长度是任意的。那么怎样判断一个序列何时完成了生成过程呢?这里借用人类书写中文和英文的过程:句子的生成首先从一片空白开始,然后从左到右逐词生成,除了第一个单词,所有单词的生成都依赖于前面已经生成的单词。为了方便计算机实现,通常定义单词序列从一个特殊的符号<sos>后开始生成。同样地,一个单词序列的结束也用一个特殊的符号<eos>来表示。
\parinterval 对于一个序列$<$sos$>$\ I\ agree\ $<$eos$>$,图\ref{fig:2-13}展示了语言模型视角下该序列的生成过程。该过程通过在序列的末尾不断附加词表中的单词来逐渐扩展序列,直到这段序列结束。这种生成单词序列的过程被称作{\small\bfnew{自左向右生成}}\index{自左向右生成}(Left-to-right Generation)\index{Left-to-right Generation}。注意,这种序列生成策略与$n$-gram的思想天然契合,因为$n$-gram语言模型中,每个词的生成概率依赖前面(左侧)若干词,因此$n$-gram语言模型也是一种自左向右的计算模型。
%----------------------------------------------
\begin{figure}[htp]
@@ -836,17 +836,17 @@ c_{\textrm{KN}}(\cdot) = \left\{\begin{array}{ll}
\parinterval 在这种序列生成方式的基础上,实现搜索通常有两种方法\ \dash\ 深度优先遍历和宽度优先遍历\upcite{DBLP:books/mg/CormenLR89}。在深度优先遍历中,每次从词表中可重复地选择一个单词,然后从左至右地生成序列,直到<eos>被选择,此时一个完整的单词序列被生成出来。然后从<eos>回退到上一个单词,选择之前词表中未被选择到的候选单词代替<eos>,并继续挑选下一个单词直到<eos>被选到,如果上一个单词的所有可能都被枚举过,那么回退到上上一个单词继续枚举,直到回退到<sos>,这时候枚举结束。在宽度优先遍历中,每次不是只选择一个单词,而是枚举所有单词。
这里举一个简单的例子。假设词表只含两个单词$\{a, b\}$,从<sos>开始枚举所有候选,有三种可能:
\begin{eqnarray}
\{\text{<sos>}\ a, \text{<sos>}\ b, \text{<sos>}\ \text{<eos>}\} \nonumber
\end{eqnarray}
\noindent 其中可以划分成长度为0的完整的单词序列集合$\{\text{<sos>}\ \text{<eos>}\}$和长度为1的未结束的单词序列片段集合$\{\text{<sos>}\ a, \text{<sos>}\ b\}$,然后下一步对未结束的单词序列枚举词表中的所有单词,可以生成:
\begin{eqnarray}
\{\text{<sos>}\ a\ a, \text{<sos>}\ a\ b, \text{<sos>}\ a\ \text{<eos>}, \text{<sos>}\ b\ a, \text{<sos>}\ b\ b, \text{<sos>}\ b\ \text{<eos>}\} \nonumber
\end{eqnarray}
\parinterval 此时可以划分出长度为1的完整单词序列集合$\{\text{<sos>}\ a\ \text{<eos>}, \text{<sos>}\ b\ \text{<eos>}\}$,以及长度为2的未结束单词序列片段集合$\{\text{<sos>}\ a\ a, \text{<sos>}\ a\ b, \text{<sos>}\ b\ a, \text{<sos>}\ b\ b\}$。以此类推,继续生成未结束序列,直到单词序列的长度达到所允许的最大长度。
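\parinterval 上述宽度优先的枚举过程可以用下面的示意代码来模拟,其中词表与最大长度均沿用上文的假设:

\begin{verbatim}
# 宽度优先地枚举所有单词序列:词表{a, b},外加结束符<eos>
vocab, max_len = ["a", "b"], 3

unfinished = [["<sos>"]]  # 未结束的序列片段
finished = []             # 以<eos>结尾的完整序列

while unfinished:
    new_unfinished = []
    for seq in unfinished:
        for w in vocab + ["<eos>"]:  # 对每个片段枚举所有候选
            if w == "<eos>":
                finished.append(seq + [w])
            elif len(seq) < max_len:  # 达到最大长度的片段不再扩展
                new_unfinished.append(seq + [w])
    unfinished = new_unfinished

print(len(finished))  # 1 + 2 + 4 = 7个完整序列
\end{verbatim}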
\parinterval 对于这两种搜索算法,通常可以从以下四个方面评价:
@@ -874,8 +874,8 @@ c_{\textrm{KN}}(\cdot) = \left\{\begin{array}{ll}
{
\begin{tabular}{c|c|c}
\rule{0pt}{10pt} & 时间复杂度 & 空间复杂度\\ \hline
\rule{0pt}{10pt} 深度优先 & $O({(|V|+1)}^{m-1})$ & $O(m)$ \\
\rule{0pt}{10pt} 宽度优先 & $O({(|V|+1)}^{m-1})$ & $O({(|V|+1)}^{m})$ \\
\end{tabular}
\label{tab:2-3}
}
@@ -883,7 +883,7 @@ c_{\textrm{KN}}(\cdot) = \left\{\begin{array}{ll}
}\end{table}
%------------------------------------------------------
\parinterval 那么是否有比枚举策略更高效的方法呢?答案是肯定的。一种直观的方法是将搜索的过程表示成树型结构,称为解空间树。它包含了搜索过程中可生成的全部序列。该树的根节点恒为<sos>,代表序列均从<sos> 开始。该树结构中非叶子节点的兄弟节点有$|V|$个,由词表和结束符号<eos>构成。从图\ref{fig:2-14}可以看到,对于一个最大长度为4的序列的搜索过程,生成某个单词序列的过程实际上就是访问解空间树中从根节点<sos> 开始一直到叶子节点<eos>结束的某条路径,而这条路径上的节点按顺序组成了一段独特的单词序列。此时对所有可能单词序列的枚举就变成了对解空间树的遍历。并且枚举的过程与语言模型打分的过程也是一致的,每枚举一个词$i$也就是在图中选择$w_i$一列的一个节点,语言模型就可以为当前的树节点$w_i$给出一个分值,即$\funp{P}(w_i | w_1 w_2 \ldots w_{i-1})$。对于$n$-gram语言模型,这个分值为$\funp{P}(w_i | w_1 w_2 \ldots w_{i-1})=\funp{P}(w_i | w_{i-n+1} \ldots w_{i-1})$。
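\parinterval 下面用一段示意代码说明如何用$n$-gram语言模型为解空间树中的一条路径打分,其中的二元文法概率是为演示而假设的:

\begin{verbatim}
import math

# 假设的二元文法(bigram)概率,仅用于演示
bigram = {("<sos>", "I"): 0.5, ("I", "agree"): 0.4,
          ("agree", "<eos>"): 0.6}

def score(path):
    # 路径得分:对数概率之和 sum_i log P(w_i | w_{i-1})
    return sum(math.log(bigram[(u, v)])
               for u, v in zip(path, path[1:]))

print(score(["<sos>", "I", "agree", "<eos>"]))
# = log0.5 + log0.4 + log0.6
\end{verbatim}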
%----------------------------------------------
\begin{figure}[htp]
@@ -993,7 +993,7 @@ c_{\textrm{KN}}(\cdot) = \left\{\begin{array}{ll}
\subsubsection{1.贪婪搜索}
\parinterval {\small\bfnew{贪婪搜索}}\index{贪婪搜索}(Greedy Search)\index{Greedy Search}基于一种思想:当一个问题可以拆分为多个子问题时,如果一直选择子问题的最优解就能得到原问题的最优解,那么就可以不必遍历原始的解空间,而是使用这种“贪婪”的策略进行搜索。基于这种思想,它每次都优先挑选得分最高的词进行扩展,这一点与改进过的深度优先搜索类似。但是它们的区别在于,贪婪搜索在搜索到一个完整的序列(也就是搜索到<eos>)时即停止,而改进的深度优先搜索会遍历整个解空间。因此贪婪搜索非常高效,其时间和空间复杂度仅为$O(m)$,这里$m$为单词序列的长度。
\parinterval 由于贪婪搜索并没有遍历整个解空间,所以该方法不保证一定能找到最优解。比如对于如图\ref{fig:2-18}所示的一个搜索结构,贪婪搜索将选择红线所示的序列,该序列的最终得分是-1.7。但是,对比图\ref{fig:2-16}可以发现,在另一条路径上有得分更高的序列“<sos>\ I\ agree\ <eos>”,它的得分为-1.5。此时贪婪搜索并没有找到最优解,这是由于贪婪搜索选择的单词虽然是当前步骤得分最高的,但最后生成的单词序列的得分还取决于它未生成部分的得分。因此当得分最高的单词的子树中未生成部分的得分远远小于其他子树时,贪婪搜索提供的解的质量会非常差。同样的问题可以出现在使用贪婪搜索的任意时刻。但是,即使是这样,凭借其简单的思想以及在真实问题上的效果,贪婪搜索在很多场景中仍然得到了深入应用。
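\parinterval 贪婪搜索的过程可以用下面的示意代码概括。这里的词表和打分函数沿用上文假设的二元文法,仅用于演示:

\begin{verbatim}
# 假设的二元文法概率,仅用于演示
bigram = {("<sos>", "I"): 0.5, ("<sos>", "agree"): 0.1,
          ("I", "agree"): 0.4, ("I", "<eos>"): 0.1,
          ("agree", "I"): 0.1, ("agree", "<eos>"): 0.6}
vocab = ["I", "agree", "<eos>"]

def greedy_search(max_len=10):
    seq = ["<sos>"]
    while seq[-1] != "<eos>" and len(seq) < max_len:
        # 每步只保留当前得分最高的一个词,搜索到<eos>即停止
        seq.append(max(vocab,
                       key=lambda w: bigram.get((seq[-1], w), 0.0)))
    return seq

print(greedy_search())  # ['<sos>', 'I', 'agree', '<eos>']
\end{verbatim}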
@@ -1014,7 +1014,7 @@ c_{\textrm{KN}}(\cdot) = \left\{\begin{array}{ll}
\parinterval 贪婪搜索会产生质量比较差的解是由于当前单词的错误选择造成的。既然每次只挑选一个单词可能会产生错误,那么可以通过同时考虑更多候选单词来缓解这个问题,也就是对于一个位置,可以同时将其扩展到若干个节点。这样就扩大了搜索的范围,进而使得优质解被找到的概率增大。
\parinterval 常见的做法是每一次生成新单词的时候都挑选得分最高的前$B$个单词,然后扩展这$B$个单词的$T$个孩子节点,得到$BT$条新路径,最后保留其中得分最高的$B$条路径。从另外一个角度理解,它相当于比贪婪搜索看到了更多的路径,因而它更有可能找到好的解。这个方法通常被称为{\small\bfnew{束搜索}}\index{束搜索}(Beam Search)\index{Beam Search}。图\ref{fig:2-19}展示了一个束大小为3的例子,其中束大小代表每次选择单词时保留的词数。比起贪婪搜索,束搜索在实际表现中非常优秀,它的时间、空间复杂度仅为贪婪搜索的常数倍,也就是$O(Bm)$。
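\parinterval 束搜索的核心步骤同样可以用示意代码刻画。这里取束大小$B=2$,二元文法概率仍为上文假设的演示数据:

\begin{verbatim}
import math

# 假设的二元文法概率,仅用于演示
bigram = {("<sos>", "I"): 0.5, ("<sos>", "agree"): 0.1,
          ("I", "agree"): 0.4, ("I", "<eos>"): 0.1,
          ("agree", "I"): 0.1, ("agree", "<eos>"): 0.6}
vocab = ["I", "agree", "<eos>"]

def beam_search(B=2, max_len=5):
    beams = [(["<sos>"], 0.0)]  # (路径, 对数概率得分)
    for _ in range(max_len):
        cands = []
        for seq, s in beams:
            if seq[-1] == "<eos>":  # 已结束的路径直接保留
                cands.append((seq, s))
                continue
            for w in vocab:         # 扩展所有孩子节点
                p = bigram.get((seq[-1], w), 1e-9)
                cands.append((seq + [w], s + math.log(p)))
        # 只保留得分最高的B条路径
        beams = sorted(cands, key=lambda x: x[1], reverse=True)[:B]
    return beams

print(beam_search()[0][0])  # ['<sos>', 'I', 'agree', '<eos>']
\end{verbatim}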
%----------------------------------------------
\begin{figure}[htp]
...
@@ -10,39 +10,39 @@
}
{\scriptsize
\node [anchor=west,minimum height=2.5em,minimum width=5.0em] (sf1) at ([xshift=1em]st.east) {};
\node [rectangle,draw,anchor=west,line width=1pt,minimum height=2.5em,minimum width=5.0em,fill=green!30,drop shadow] (s1) at ([xshift=2.5em]sf1.east) {科学家};
\node [rectangle,draw,anchor=west,line width=1pt,minimum height=2.5em,minimum width=5.0em,fill=green!30,drop shadow] (s2) at ([xshift=2.5em]s1.east) {们};
\node [rectangle,draw,anchor=west,line width=1pt,minimum height=2.5em,minimum width=5.0em,fill=green!30,drop shadow] (s3) at ([xshift=2.5em]s2.east) {并不};
\node [rectangle,draw,anchor=west,line width=1pt,minimum height=2.5em,minimum width=5.0em,fill=green!30,drop shadow] (s4) at ([xshift=2.5em]s3.east) {知道};
}
{\scriptsize
\node [anchor=west] (tau11) at ([xshift=1.24em]taut.east) {$\tau_0$\; \tiny{1.NULL}};
\begin{pgfonlayer}{background}
\node [rounded rectangle,draw,line width=1pt,minimum height=3.0em,minimum width=7.0em,fill=red!30,drop shadow] (tau1) [fit = (tau11)] {};
\end{pgfonlayer}
\node [anchor=west] (tau21) at ([xshift=1.575em]tau1.east) {$\tau_1$\;};
\node [anchor=west] (tau22) at ([yshift=-0.2em,xshift=-0.5em]tau21.north east) {\tiny{1.科学家}};
\node [anchor=west] (tau23) at ([yshift=0.2em,xshift=-0.5em]tau21.south east) {\tiny{2.们}};
\begin{pgfonlayer}{background}
\node [rounded rectangle,draw,line width=1pt,minimum height=3.0em,minimum width=7.0em,fill=red!30,drop shadow] (tau2)[fit = (tau21) (tau22) (tau23)] {};
\end{pgfonlayer}
\node [anchor=west] (tau31) at ([xshift=1.997em]tau2.east) {$\tau_2$\; \tiny{1.NULL}};
\begin{pgfonlayer}{background}
\node [rounded rectangle,draw,line width=1pt,minimum height=3.0em,minimum width=7.0em,fill=red!30,drop shadow] (tau3) [fit = (tau31)] {};
\end{pgfonlayer}
\node [anchor=west] (tau41) at ([xshift=2.153em]tau3.east) {$\tau_3$\; \tiny{1.并不}};
\begin{pgfonlayer}{background}
\node [rounded rectangle,draw,line width=1pt,minimum height=3.0em,minimum width=7.0em,fill=red!30,drop shadow] (tau4) [fit = (tau41)] {};
\end{pgfonlayer}
\node [anchor=west] (tau51) at ([xshift=2.1525em]tau4.east) {$\tau_4$\; \tiny{1.知道}};
\begin{pgfonlayer}{background}
\node [rounded rectangle,draw,line width=1pt,minimum height=3.0em,minimum width=7.0em,fill=red!30,drop shadow] (tau5) [fit = (tau51)] {};
\end{pgfonlayer}
}
@@ -73,37 +73,37 @@
{\scriptsize
\node [anchor=west,minimum height=2.5em,minimum width=5.0em] (sf12) at ([yshift=-15.0em,xshift=1em]st.east) {};
\node [rectangle,draw,anchor=west,line width=1pt,minimum height=2.5em,minimum width=5.0em,fill=green!30,drop shadow] (s12) at ([xshift=2.5em]sf12.east) {科学家};
\node [rectangle,draw,anchor=west,line width=1pt,minimum height=2.5em,minimum width=5.0em,fill=green!30,drop shadow] (s22) at ([xshift=2.5em]s12.east) {们};
\node [rectangle,draw,anchor=west,line width=1pt,minimum height=2.5em,minimum width=5.0em,fill=green!30,drop shadow] (s32) at ([xshift=2.5em]s22.east) {并不};
\node [rectangle,draw,anchor=west,line width=1pt,minimum height=2.5em,minimum width=5.0em,fill=green!30,drop shadow] (s42) at ([xshift=2.5em]s32.east) {知道};
}
{\scriptsize
\node [anchor=west] (tau112) at ([yshift=-15.0em,xshift=1.24em]taut.east) {$\tau_0$\; \tiny{1.NULL}};
\begin{pgfonlayer}{background}
\node [rounded rectangle,draw,line width=1pt,minimum height=3.0em,minimum width=6.8em,fill=red!30,drop shadow] (tau12) [fit = (tau112)] {};
\end{pgfonlayer}
\node [anchor=west] (tau212) at ([xshift=1.6762em]tau12.east) {$\tau_1$\;};
\node [anchor=west] (tau222) at ([yshift=-0.2em,xshift=-0.5em]tau212.north east) {\tiny{1.们}};
\node [anchor=west] (tau232) at ([yshift=0.2em,xshift=-0.5em]tau212.south east) {\tiny{2.科学家}};
\begin{pgfonlayer}{background}
\node [rounded rectangle,draw,line width=1pt,minimum height=3.0em,minimum width=6.8em,fill=yellow!30,drop shadow] (tau22)[fit = (tau212) (tau222) (tau232)] {};
\end{pgfonlayer}
\node [anchor=west] (tau312) at ([xshift=1.997em]tau22.east) {$\tau_2$\; \tiny{1.NULL}};
\begin{pgfonlayer}{background}
\node [rounded rectangle,draw,line width=1pt,minimum height=3.0em,minimum width=6.8em,fill=red!30,drop shadow] (tau32) [fit = (tau312)] {};
\end{pgfonlayer}
\node [anchor=west] (tau412) at ([xshift=1.9555em]tau32.east) {$\tau_3$\; \tiny{1.并不}};
\begin{pgfonlayer}{background}
\node [rounded rectangle,draw,line width=1pt,minimum height=3.0em,minimum width=6.8em,fill=red!30,drop shadow] (tau42) [fit = (tau412)] {};
\end{pgfonlayer}
\node [anchor=west] (tau512) at ([xshift=2.2525em]tau42.east) {$\tau_4$\; \tiny{1.知道}};
\begin{pgfonlayer}{background}
\node [rounded rectangle,draw,line width=1pt,minimum height=3.0em,minimum width=6.8em,fill=red!30,drop shadow] (tau52) [fit = (tau512)] {};
\end{pgfonlayer}
@@ -131,6 +131,7 @@
\draw [->,thick] (d42.north) -- ([yshift=-4.45em]s32.south);
\draw [->,thick] (d52.north) -- ([yshift=-4.45em]s42.south);
%\end{scope}
\end{tikzpicture}
...
@@ -11,10 +11,10 @@
year={1983},
}
@article{2019cns,
title={2019中国语言服务行业发展报告},
author={中国翻译协会},
journal={中国翻译协会},
year={2019}
}
@@ -74,7 +74,7 @@
author = {Satoshi Sato and
Makoto Nagao},
title = {Toward Memory-based Translation},
//booktitle = {13th International Conference on Computational Linguistics, {COLING}
1990, University of Helsinki, Finland, August 20-25, 1990},
pages = {247--252},
year = {1990}
@@ -213,7 +213,7 @@
Daniele Pighin and
Yuval Marton},
title = {Effective Approaches to Attention-based Neural Machine Translation},
//booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural
Language Processing, {EMNLP} 2015, Lisbon, Portugal, September 17-21,
2015},
pages = {1412--1421},
@@ -230,7 +230,7 @@
//editor = {Doina Precup and
Yee Whye Teh},
title = {Convolutional Sequence to Sequence Learning},
//booktitle = {Proceedings of the 34th International Conference on Machine Learning,
{ICML} 2017, Sydney, NSW, Australia, 6-11 August 2017},
series = {Proceedings of Machine Learning Research},
volume = {70},
@@ -246,7 +246,7 @@
//editor = {Yoshua Bengio and
Yann LeCun},
title = {Neural Machine Translation by Jointly Learning to Align and Translate},
//booktitle = {3rd International Conference on Learning Representations, {ICLR} 2015,
San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings},
year = {2015}
}
@@ -261,7 +261,7 @@
Neil D. Lawrence and
Kilian Q. Weinberger},
title = {Sequence to Sequence Learning with Neural Networks},
//booktitle = {Advances in Neural Information Processing Systems 27: Annual Conference
on Neural Information Processing Systems 2014, December 8-13 2014,
Montreal, Quebec, Canada},
pages = {3104--3112},
@@ -397,7 +397,7 @@
author = {Reinhard Kneser and
Hermann Ney},
title = {Improved backing-off for M-gram language modeling},
//booktitle = {1995 International Conference on Acoustics, Speech, and Signal Processing,
{ICASSP} '95, Detroit, Michigan, USA, May 08-12, 1995},
pages = {181--184},
publisher = {{IEEE} Computer Society},
@@ -407,7 +407,7 @@
@inproceedings{ney1991smoothing,
title={On smoothing techniques for bigram-based natural language modelling},
author={Ney, Hermann and Essen, Ute},
//booktitle={Acoustics, Speech, and Signal Processing, IEEE International Conference on},
pages={825--828},
year={1991},
organization={IEEE Computer Society}
@@ -418,7 +418,7 @@
//editor = {John H. L. Hansen and
Bryan L. Pellom},
title = {{SRILM} - an extensible language modeling toolkit},
//booktitle = {7th International Conference on Spoken Language Processing, {ICSLP2002}
- {INTERSPEECH} 2002, Denver, Colorado, USA, September 16-20, 2002},
publisher = {{ISCA}},
year = {2002}
@@ -553,7 +553,7 @@
Mingbo Ma},
title = {When to Finish? Optimal Beam Search for Neural Text Generation (modulo
beam size)},
//booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural
Language Processing, {EMNLP} 2017, Copenhagen, Denmark, September
9-11, 2017},
pages = {2134--2139},
@@ -571,7 +571,7 @@
Jun'ichi Tsujii},
title = {Breaking the Beam Search Curse: {A} Study of (Re-)Scoring Methods
and Stopping Criteria for Neural Machine Translation},
//booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural
Language Processing, Brussels, Belgium, October 31 - November 4, 2018},
pages = {3054--3059},
publisher = {Association for Computational Linguistics},
@@ -626,7 +626,7 @@
@inproceedings{kirchhoff2005improved,
title={Improved Language Modeling for Statistical Machine Translation},
author={Katrin {Kirchhoff} and Mei {Yang}},
//booktitle={Proceedings of the ACL Workshop on Building and Using Parallel Texts},
pages={125--128},
year={2005}
}
@@ -634,7 +634,7 @@
@inproceedings{koehn2007factored,
title={Factored Translation Models},
author={Philipp {Koehn} and Hieu {Hoang}},
//booktitle={Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)},
pages={868--876},
year={2007}
}
@@ -642,7 +642,7 @@
@inproceedings{sarikaya2007joint,
title={Joint Morphological-Lexical Language Modeling for Machine Translation},
author={Ruhi {Sarikaya} and Yonggang {Deng}},
//booktitle={Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers},
pages={145--148},
year={2007}
}
@@ -650,7 +650,7 @@
@inproceedings{heafield2011kenlm,
title={KenLM: Faster and Smaller Language Model Queries},
author={Kenneth {Heafield}},
//booktitle={Proceedings of the Sixth Workshop on Statistical Machine Translation},
pages={187--197},
year={2011}
}
@@ -658,7 +658,7 @@
@inproceedings{federico2006how,
title={How Many Bits Are Needed To Store Probabilities for Phrase-Based Translation?},
author={Marcello {Federico} and Nicola {Bertoldi}},
//booktitle={Proceedings on the Workshop on Statistical Machine Translation},
pages={94--101},
year={2006}
}
@@ -666,7 +666,7 @@
@inproceedings{federico2007efficient,
title={Efficient Handling of N-gram Language Models for Statistical Machine Translation},
author={Marcello {Federico} and Mauro {Cettolo}},
//booktitle={Proceedings of the Second Workshop on Statistical Machine Translation},
pages={88--95},
year={2007}
}
@@ -674,7 +674,7 @@
@inproceedings{talbot2007randomised,
title={Randomised Language Modelling for Statistical Machine Translation},
author={David {Talbot} and Miles {Osborne}},
//booktitle={Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics},
pages={512--519},
year={2007}
}
@@ -682,7 +682,7 @@
@inproceedings{talbot2007smoothed,
title={Smoothed Bloom Filter Language Models: Tera-Scale LMs on the Cheap},
author={David {Talbot} and Miles {Osborne}},
//booktitle={Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)},
pages={468--476},
year={2007}
}
@@ -714,7 +714,7 @@
Keikichi Hirose and
Satoshi Nakamura},
title = {Recurrent neural network based language model},
//booktitle = {{INTERSPEECH} 2010, 11th Annual Conference of the International Speech
Communication Association, Makuhari, Chiba, Japan, September 26-30,
2010},
pages = {1045--1048},
@@ -725,7 +725,7 @@
@inproceedings{sundermeyer2012lstm,
title={LSTM Neural Networks for Language Modeling.},
author={Martin {Sundermeyer} and Ralf {Schlüter} and Hermann {Ney}},
//booktitle={INTERSPEECH},
pages={194--197},
year={2012}
}
@@ -733,7 +733,7 @@
@inproceedings{vaswani2017attention,
title={Attention is All You Need},
author={Ashish {Vaswani} and Noam {Shazeer} and Niki {Parmar} and Jakob {Uszkoreit} and Llion {Jones} and Aidan N. {Gomez} and Lukasz {Kaiser} and Illia {Polosukhin}},
//booktitle={Proceedings of the 31st International Conference on Neural Information Processing Systems},
pages={5998--6008},
year={2017}
}
@@ -741,7 +741,7 @@
@inproceedings{tillmann1997a,
title={A DP-based Search Using Monotone Alignments in Statistical Translation},
author={Christoph {Tillmann} and Stephan {Vogel} and Hermann {Ney} and Alex {Zubiaga}},
//booktitle={Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics},
pages={289--296},
year={1997}
}
@@ -752,7 +752,7 @@
//editor = {Philip R. Cohen and
Wolfgang Wahlster},
title = {Decoding Algorithm in Statistical Machine Translation},
//booktitle = {35th Annual Meeting of the Association for Computational Linguistics
and 8th Conference of the European Chapter of the Association for
Computational Linguistics, Proceedings of the Conference, 7-12 July
1997, Universidad Nacional de Educaci{\'{o}}n a Distancia (UNED),
@@ -767,7 +767,7 @@
Nicola Ueffing and
Hermann Ney},
title = {An Efficient A* Search Algorithm for Statistical Machine Translation},
//booktitle = {Proceedings of the {ACL} Workshop on Data-Driven Methods in Machine
Translation, Toulouse, France, July 7, 2001},
year = {2001}
}
@@ -775,7 +775,7 @@
@inproceedings{germann2001fast,
title={Fast Decoding and Optimal Decoding for Machine Translation},
author={Ulrich {Germann} and Michael {Jahr} and Kevin {Knight} and Daniel {Marcu} and Kenji {Yamada}},
//booktitle={Proceedings of 39th Annual Meeting of the Association for Computational Linguistics},
pages={228--235},
year={2001}
}
@@ -783,7 +783,7 @@
@inproceedings{germann2003greedy,
title={Greedy decoding for statistical machine translation in almost linear time},
author={Ulrich {Germann}},
//booktitle={NAACL '03 Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1},
pages={1--8},
year={2003}
}
@@ -807,7 +807,7 @@
Antal van den Bosch and
Annie Zaenen},
title = {Moses: Open Source Toolkit for Statistical Machine Translation},
//booktitle = {{ACL} 2007, Proceedings of the 45th Annual Meeting of the Association
for Computational Linguistics, June 23-30, 2007, Prague, Czech Republic},
publisher = {The Association for Computational Linguistics},
year = {2007}
@@ -819,7 +819,7 @@
Kathryn Taylor},
title = {Pharaoh: {A} Beam Search Decoder for Phrase-Based Statistical Machine
Translation Models},
//booktitle = {Machine Translation: From Real Users to Research, 6th Conference of
the Association for Machine Translation in the Americas, {AMTA} 2004,
Washington, DC, USA, September 28-October 2, 2004, Proceedings},
series = {Lecture Notes in Computer Science},
@@ -832,7 +832,7 @@
@inproceedings{bangalore2001a,
title={A finite-state approach to machine translation},
author={S. {Bangalore} and G. {Riccardi}},
//booktitle={IEEE Workshop on Automatic Speech Recognition and Understanding, 2001. ASRU '01.},
pages={381--388},
year={2001}
}
@@ -840,7 +840,7 @@
@inproceedings{bangalore2000stochastic,
title={Stochastic finite-state models for spoken language machine translation},
author={Srinivas {Bangalore} and Giuseppe {Riccardi}},
//booktitle={NAACL-ANLP-EMTS '00 Proceedings of the 2000 NAACL-ANLP Workshop on Embedded machine translation systems - Volume 5},
pages={52--59},
year={2000}
}
@@ -848,7 +848,7 @@
@inproceedings{venugopal2007an,
title={An Efficient Two-Pass Approach to Synchronous-CFG Driven Statistical MT},
author={Ashish {Venugopal} and Andreas {Zollmann} and Vogel {Stephan}},
//booktitle={Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference},
pages={500--507},
year={2007}
}
@@ -864,7 +864,7 @@
Christof Monz},
title = {The Syntax Augmented {MT} {(SAMT)} System at the Shared Task for the
2007 {ACL} Workshop on Statistical Machine Translation},
//booktitle = {Proceedings of the Second Workshop on Statistical Machine Translation,
WMT@ACL 2007, Prague, Czech Republic, June 23, 2007},
pages = {216--219},
publisher = {Association for Computational Linguistics},
@@ -879,7 +879,7 @@
Claire Cardie and
Pierre Isabelle},
title = {Tree-to-String Alignment Template for Statistical Machine Translation},
//booktitle = {{ACL} 2006, 21st International Conference on Computational Linguistics
and 44th Annual Meeting of the Association for Computational Linguistics,
Proceedings of the Conference, Sydney, Australia, 17-21 July 2006},
publisher = {The Association for Computer Linguistics},
@@ -899,7 +899,7 @@
Pierre Isabelle},
title = {Scalable Inference and Training of Context-Rich Syntactic Translation
Models},
//booktitle = {{ACL} 2006, 21st International Conference on Computational Linguistics
and 44th Annual Meeting of the Association for Computational Linguistics,
Proceedings of the Conference, Sydney, Australia, 17-21 July 2006},
publisher = {The Association for Computer Linguistics},
@@ -912,7 +912,7 @@
Hwee Tou Ng and
Kemal Oflazer},
title = {A Hierarchical Phrase-Based Model for Statistical Machine Translation},
//booktitle = {{ACL} 2005, 43rd Annual Meeting of the Association for Computational
Linguistics, Proceedings of the Conference, 25-30 June 2005, University
of Michigan, {USA}},
pages = {263--270},
...
@@ -685,5 +685,5 @@ addtohook={%
\newcommand\chapterseventeen{第十七章}
\newcommand\chaptereighteen{第十八章}%*
\newcommand\funp{}%函数P等使用,置空则P为斜体,改为\textrm则P为正体
\newcommand\vectorn{\mathbf}%向量N等使用