Commit 89091baa by 曹润柘

Merge branch 'caorunzhe' into 'master'

Caorunzhe

View merge request !417
parents d34cb762 9b45c1c2
......@@ -449,7 +449,7 @@ Joint training for neural machine translation models with monolingual data
\subsection{无监督词典归纳}
\parinterval Bilingual Dictionary Induction (BDI), also called dictionary inference, is the task of producing word-level translations between languages. In statistical machine translation, dictionary induction is a core task: it mines mutually translated word pairs from bilingual parallel corpora and is a major source of translation knowledge \cite{黄书剑0统计机器翻译中的词对齐研究}. In end-to-end neural machine translation, dictionary induction is typically used as a sub-task in unsupervised machine translation, multilingual machine translation, and similar applications. Because neural machine translation represents words as continuous vectors, the vocabulary is distributed over a high-dimensional space, and observations of embedding spaces show that continuous word-embedding spaces exhibit similar structures across languages. This makes it possible to induce bilingual dictionaries directly from embeddings: first project the embeddings of the two languages into a shared embedding space, then induce a dictionary in that shared space. Among the many attempts along these lines, early work used a seed dictionary of several thousand word pairs as anchors to learn a linear mapping from the source-language embedding space to the target-language one; once the two vocabularies are projected into the shared space, an alignment algorithm yields a bilingual dictionary \cite{DBLP:journals/corr/MikolovLS13}. More recent studies show that dictionaries can be induced under much weaker supervision, such as small dictionaries of a few hundred pairs \cite{DBLP:conf/acl/VulicK16}, identical strings \cite{DBLP:conf/iclr/SmithTHH17}, or even just shared digits \cite{DBLP:conf/acl/ArtetxeLA17}.
\parinterval More recently, fully unsupervised dictionary-induction methods have been proposed; they require no seed dictionary at all. We describe them below.
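The seed-dictionary approach described above reduces to a simple least-squares problem: given embeddings of the seed pairs, fit a linear map from the source space to the target space. A minimal sketch, using NumPy with toy random vectors standing in for real word embeddings (all names and data here are illustrative, not from any particular system):

```python
import numpy as np

# Toy stand-ins for real embeddings: n seed word pairs in d dimensions.
rng = np.random.default_rng(0)
d, n = 4, 50
X = rng.normal(size=(n, d))        # source embeddings of the seed pairs (rows)
W_true = rng.normal(size=(d, d))   # hidden "gold" linear map, for checking only
Y = X @ W_true                     # target embeddings of the seed pairs

# Learn the mapping by least squares: minimize ||X W - Y||_F over W.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# X @ W now projects source-language vectors into the target space,
# where an alignment algorithm can read off translation pairs.
```

With noiseless toy data the fit is exact; with real embeddings the residual is nonzero, which is why later work constrains W (e.g. to be orthogonal) and iterates.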
......@@ -480,7 +480,7 @@ Joint training for neural machine translation models with monolingual data
\begin{itemize}
\vspace{0.5em}
\item For the two monolingual embeddings X and Y in figure A, which lie in different spaces, an unsupervised matching method, based on the assumption that the two spaces are approximately isomorphic, produces a rough linear mapping W, as shown in figure B.
\vspace{0.5em}
\item Using the mapping W, an alignment algorithm can be run to induce a seed dictionary, as shown in figure C.
\vspace{0.5em}
......@@ -492,9 +492,9 @@ Joint training for neural machine translation models with monolingual data
\begin{itemize}
\vspace{0.5em}
\item GAN-based methods \cite{DBLP:conf/iclr/LampleCRDJ18,DBLP:conf/acl/ZhangLLS17,DBLP:conf/emnlp/XuYOW18,DBLP:conf/naacl/MohiuddinJ19}. GANs are widely used for unsupervised learning problems. Here a generator produces the mapping W, while a discriminator tries to distinguish randomly sampled elements of WX from those of Y; jointly optimizing the two to convergence yields the mapping W.
\vspace{0.5em}
\item Gromov-Wasserstein-based methods \cite{DBLP:conf/emnlp/Alvarez-MelisJ18,DBLP:conf/lrec/GarneauGBDL20,DBLP:journals/corr/abs-1811-01124,DBLP:conf/emnlp/XuYOW18}. The Wasserstein distance measures the distance between two probability distributions over a metric space; here it quantifies the similarity between word pairs across languages. Exploiting the approximate isomorphism of the two spaces, one can define objective functions whose optimization likewise yields the mapping W.
\vspace{0.5em}
\end{itemize}
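Whichever family is used, the output is the same object, a linear map W, and the seed dictionary is then read off by nearest-neighbor search in the shared space. A minimal synthetic sketch of that induction step (NumPy; toy random data, with the identity map standing in for a learned W):

```python
import numpy as np

# Toy setup: source embeddings are a hidden permutation of the target
# embeddings, so the gold dictionary is known and W can be the identity.
rng = np.random.default_rng(1)
d, n = 8, 20
Y = rng.normal(size=(n, d))       # target-language embeddings (rows)
gold = rng.permutation(n)         # hidden gold word-to-word alignment
X = Y[gold]                       # source embeddings: permuted copies of Y
W = np.eye(d)                     # stand-in for the learned mapping

# Cosine similarity between mapped source words (WX) and target words (Y).
WX = X @ W.T
sim = (WX / np.linalg.norm(WX, axis=1, keepdims=True)) @ (
    Y / np.linalg.norm(Y, axis=1, keepdims=True)).T

# induced[i] = index of the target word chosen as the translation of
# source word i: its cosine-nearest neighbor in the shared space.
induced = sim.argmax(axis=1)
```

Plain nearest-neighbor retrieval suffers from hubness in high dimensions, which is why practical systems often replace it with corrected criteria such as CSLS; this sketch keeps the simplest variant.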
......@@ -507,20 +507,29 @@ W^{\star}=\underset{W \in O_{d}(\mathbb{R})}{\operatorname{argmin}}\|W X-Y\|_{\m
\end{eqnarray}
\parinterval In the equation above, the rows of Y and X are aligned according to the current dictionary D. Solving it via SVD gives a new W, from which a new D can be induced; iterating this refinement eventually produces a converged D.
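The closed-form step in this iteration is the orthogonal Procrustes solution: for row-aligned matrices X and Y, the optimal orthogonal map is $W^{\star} = UV^{\mathrm{T}}$, where $U\Sigma V^{\mathrm{T}}$ is the SVD of $Y^{\mathrm{T}}X$. A sketch with synthetic data (NumPy; a random orthogonal matrix plays the hidden gold map):

```python
import numpy as np

# Toy data: rows of X and Y are paired, as if aligned by a dictionary D.
rng = np.random.default_rng(2)
d, n = 4, 50
X = rng.normal(size=(n, d))
W_true, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal map
Y = X @ W_true.T                                    # aligned target embeddings

# Orthogonal Procrustes: minimize ||W X^T - Y^T||_F over orthogonal W.
U, _, Vt = np.linalg.svd(Y.T @ X)
W = U @ Vt                                          # the closed-form solution
```

In the full pipeline one would alternate this step with re-inducing D by nearest-neighbor search under the new W until D stops changing.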
\parinterval Current work on unsupervised dictionary induction falls mainly into two directions: one improves performance with new models or refinements of the two-stage method above, while the other analyzes or improves its robustness. Related work includes:
\begin{itemize}
\vspace{0.5em}
\item Improving the performance of dictionary induction. Examples include methods based on variational autoencoders (VAEs) \cite{DBLP:conf/emnlp/DouZH18}; on PCA \cite{DBLP:conf/emnlp/HoshenW18}; on language models and denoising autoencoders \cite{DBLP:conf/emnlp/KimGN18}; on mutual information \cite{DBLP:conf/emnlp/MukherjeeYH18}; on GANs \cite{DBLP:conf/iclr/LampleCRDJ18}; on Gromov-Wasserstein matching \cite{DBLP:conf/emnlp/Alvarez-MelisJ18}; multilingual unsupervised dictionary induction \cite{DBLP:conf/emnlp/ChenC18,DBLP:conf/emnlp/TaitelbaumCG19,DBLP:journals/corr/abs-1811-01124,DBLP:conf/naacl/HeymanVVM19}; methods based on the Sinkhorn distance and back-translation \cite{DBLP:conf/emnlp/XuYOW18}; improved nearest-neighbor metrics for the induction stage \cite{DBLP:conf/acl/HuangQC19}; adversarial autoencoders \cite{DBLP:conf/naacl/MohiuddinJ19}; morphology-aware methods \cite{DBLP:conf/acl/YangLCLS19}; methods based on unsupervised machine translation \cite{DBLP:conf/acl/ArtetxeLA19a}; and post-processing of embeddings \cite{DBLP:conf/rep4nlp/VulicKG20}.
\item Analyzing or improving the robustness of unsupervised dictionary induction. This includes analyses of its limitations \cite{DBLP:conf/acl/SogaardVR18,DBLP:conf/acl/OrmazabalALSA19,DBLP:conf/emnlp/VulicGRK19}; new initialization methods and improved iteration stages \cite{DBLP:conf/lrec/GarneauGBDL20}; improved optimization objectives \cite{DBLP:conf/emnlp/JoulinBMJG18}; dimensionality reduction to improve the initialization stage \cite{A2020Li}; analyses of the stability of GAN-based methods \cite{hartmann2018empirical}; comparative analyses of unsupervised methods \cite{DBLP:conf/nips/HartmannKS19}; analyses of why unsupervised alignment is hard \cite{DBLP:conf/emnlp/HartmannKS18}; and experimental evidence that current benchmark datasets are problematic \cite{DBLP:conf/emnlp/Kementchedjhieva19}.
\vspace{0.5em}
\end{itemize}
%----------------------------------------------------------------------------------------
% NEW SUB-SECTION
%----------------------------------------------------------------------------------------
\subsubsection{2. Robustness Issues}
\parinterval Many unsupervised dictionary-induction methods now achieve good results on closely related language pairs such as English-French and English-German, yet on distant pairs such as English-Chinese and English-Japanese performance remains poor, often even zero \cite{DBLP:conf/emnlp/VulicGRK19,A2020Li}. The robustness of unsupervised dictionary induction thus remains a major challenge, for reasons at several levels:
\begin{itemize}
\vspace{0.5em}
\item First, dictionary induction depends on embeddings trained on large monolingual corpora, and these embeddings are affected by the domain and size of the monolingual data, the embedding-training algorithm, hyperparameter settings, and other factors. Any of these can invalidate the underlying assumptions and cause the method to fail.
\vspace{0.5em}
\item Dictionary induction relies heavily on the assumption that the embedding spaces are approximately isomorphic, but for many language pairs the inherent differences between the languages make this assumption weak. Since unsupervised systems typically follow a two-stage pipeline, the initial stage, lacking any supervision signal to guide it, fails easily, which in turn prevents the later stages from working effectively \cite{DBLP:conf/acl/SogaardVR18,A2020Li}.
\vspace{0.5em}
\item Owing to representational limitations of the embeddings themselves, such models cannot produce many-to-many word alignments, and they also struggle to align similar words and named entities.
......@@ -529,6 +538,7 @@ W^{\star}=\underset{W \in O_{d}(\mathbb{R})}{\operatorname{argmin}}\|W X-Y\|_{\m
\parinterval The robustness of unsupervised methods is a hard problem. For dictionary induction in particular, whether a fully unsupervised setting is even necessary is debatable: as a low-level task it can exploit not only embeddings but also monolingual and even bilingual information. Moreover, weakly supervised approaches are cheap, requiring only a few thousand dictionary entries, and with such supervision signals the robustness problem can be alleviated to some extent.
%----------------------------------------------------------------------------------------
% NEW SUB-SECTION
%----------------------------------------------------------------------------------------
......
......@@ -6655,6 +6655,247 @@ author = {Yoshua Bengio and
pages = {60},
year = {2019}
}
@inproceedings{DBLP:conf/naacl/MohiuddinJ19,
author = {Tasnim Mohiuddin and
Shafiq R. Joty},
title = {Revisiting Adversarial Autoencoder for Unsupervised Word Translation
with Cycle Consistency and Improved Training},
pages = {3857--3867},
publisher = {Annual Conference of the North American Chapter of the Association for Computational Linguistics},
year = {2019}
}
@inproceedings{DBLP:conf/acl/HuangQC19,
author = {Jiaji Huang and
Qiang Qiu and
Kenneth Church},
title = {Hubless Nearest Neighbor Search for Bilingual Lexicon Induction},
pages = {4072--4080},
publisher = {Annual Meeting of the Association for Computational Linguistics},
year = {2019}
}
@article{DBLP:journals/corr/abs-1811-01124,
author = {Jean Alaux and
Edouard Grave and
Marco Cuturi and
Armand Joulin},
title = {Unsupervised Hyperalignment for Multilingual Word Embeddings},
journal = {CoRR},
volume = {abs/1811.01124},
year = {2018}
}
@inproceedings{DBLP:conf/emnlp/XuYOW18,
author = {Ruochen Xu and
Yiming Yang and
Naoki Otani and
Yuexin Wu},
title = {Unsupervised Cross-lingual Transfer of Word Embedding Spaces},
pages = {2465--2474},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2018}
}
@inproceedings{DBLP:conf/emnlp/DouZH18,
author = {Zi-Yi Dou and
Zhi-Hao Zhou and
Shujian Huang},
title = {Unsupervised Bilingual Lexicon Induction via Latent Variable Models},
pages = {621--626},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2018}
}
@inproceedings{DBLP:conf/emnlp/HoshenW18,
author = {Yedid Hoshen and
Lior Wolf},
title = {Non-Adversarial Unsupervised Word Translation},
pages = {469--478},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2018}
}
@inproceedings{DBLP:conf/emnlp/KimGN18,
author = {Yunsu Kim and
Jiahui Geng and
Hermann Ney},
title = {Improving Unsupervised Word-by-Word Translation with Language Model
and Denoising Autoencoder},
pages = {862--868},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2018}
}
@inproceedings{DBLP:conf/emnlp/MukherjeeYH18,
author = {Tanmoy Mukherjee and
Makoto Yamada and
Timothy M. Hospedales},
title = {Learning Unsupervised Word Translations Without Adversaries},
pages = {627--632},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2018}
}
@inproceedings{DBLP:conf/emnlp/JoulinBMJG18,
author = {Armand Joulin and
Piotr Bojanowski and
Tomas Mikolov and
Herv{\'{e}} J{\'{e}}gou and
Edouard Grave},
title = {Loss in Translation: Learning Bilingual Word Mapping with a Retrieval
Criterion},
pages = {2979--2984},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2018}
}
@inproceedings{DBLP:conf/emnlp/ChenC18,
author = {Xilun Chen and
Claire Cardie},
title = {Unsupervised Multilingual Word Embeddings},
pages = {261--270},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2018}
}
@inproceedings{DBLP:conf/emnlp/TaitelbaumCG19,
author = {Hagai Taitelbaum and
Gal Chechik and
Jacob Goldberger},
title = {Multilingual word translation using auxiliary languages},
pages = {1330--1335},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2019}
}
@inproceedings{DBLP:conf/acl/YangLCLS19,
author = {Pengcheng Yang and
Fuli Luo and
Peng Chen and
Tianyu Liu and
Xu Sun},
title = {{MAAM:} {A} Morphology-Aware Alignment Model for Unsupervised Bilingual
Lexicon Induction},
pages = {3190--3196},
publisher = {Annual Meeting of the Association for Computational Linguistics},
year = {2019}
}
@inproceedings{DBLP:conf/acl/OrmazabalALSA19,
author = {Aitor Ormazabal and
Mikel Artetxe and
Gorka Labaka and
Aitor Soroa and
Eneko Agirre},
title = {Analyzing the Limitations of Cross-lingual Word Embedding Mappings},
pages = {4990--4995},
publisher = {Annual Meeting of the Association for Computational Linguistics},
year = {2019}
}
@inproceedings{DBLP:conf/acl/ArtetxeLA19a,
author = {Mikel Artetxe and
Gorka Labaka and
Eneko Agirre},
title = {Bilingual Lexicon Induction through Unsupervised Machine Translation},
pages = {5002--5007},
publisher = {Annual Meeting of the Association for Computational Linguistics},
year = {2019}
}
@inproceedings{DBLP:conf/rep4nlp/VulicKG20,
author = {Ivan Vulic and
Anna Korhonen and
Goran Glavas},
title = {Improving Bilingual Lexicon Induction with Unsupervised Post-Processing
of Monolingual Word Vector Spaces},
pages = {45--54},
publisher = {Annual Meeting of the Association for Computational Linguistics},
year = {2020}
}
@article{hartmann2018empirical,
title={Empirical observations on the instability of aligning word vector spaces with GANs},
author={Hartmann, Mareike and Kementchedjhieva, Yova and S{\o}gaard, Anders},
year={2018}
}
@inproceedings{DBLP:conf/emnlp/Kementchedjhieva19,
author = {Yova Kementchedjhieva and
Mareike Hartmann and
Anders S{\o}gaard},
title = {Lost in Evaluation: Misleading Benchmarks for Bilingual Dictionary
Induction},
pages = {3334--3339},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2019}
}
@inproceedings{DBLP:conf/nips/HartmannKS19,
author = {Mareike Hartmann and
Yova Kementchedjhieva and
Anders S{\o}gaard},
title = {Comparing Unsupervised Word Translation Methods Step by Step},
pages = {6031--6041},
publisher = {Conference on Neural Information Processing Systems},
year = {2019}
}
@inproceedings{DBLP:conf/emnlp/HartmannKS18,
author = {Mareike Hartmann and
Yova Kementchedjhieva and
Anders S{\o}gaard},
title = {Why is unsupervised alignment of English embeddings from different
algorithms so hard?},
pages = {582--586},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2018}
}
@inproceedings{DBLP:conf/emnlp/VulicGRK19,
author = {Ivan Vulic and
Goran Glavas and
Roi Reichart and
Anna Korhonen},
title = {Do We Really Need Fully Unsupervised Cross-Lingual Embeddings?},
pages = {4406--4417},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2019}
}
@inproceedings{DBLP:conf/acl/SogaardVR18,
author = {Anders S{\o}gaard and
Sebastian Ruder and
Ivan Vulic},
title = {On the Limitations of Unsupervised Bilingual Dictionary Induction},
pages = {778--788},
publisher = {Annual Meeting of the Association for Computational Linguistics},
year = {2018}
}
@inproceedings{DBLP:conf/naacl/HeymanVVM19,
author = {Geert Heyman and
Bregt Verreet and
Ivan Vulic and
Marie-Francine Moens},
title = {Learning Unsupervised Multilingual Word Embeddings with Incremental
Multilingual Hubs},
pages = {1890--1902},
publisher = {Annual Conference of the North American Chapter of the Association for Computational Linguistics},
year = {2019}
}
%%%%% chapter 16------------------------------------------------------
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
......