\item Language-model cross-entropy difference (CED) (Domain Adaptation Via Pseudo In-Domain Data Selection;Data Selection With Fewer Words;Instance Weighting for Neural Machine Translation Domain Adaptation;Combining translation and language model scoring for domain-specific data filtering). This method trains one language model on the target-domain data and another on the general-domain data, scores each sentence with both models, and takes the difference; the lower the score, the more relevant the sentence is to the target domain (a sketch of the score is given after this list).
\item Text-classification-based methods (Semi-supervised Convolutional Networks for Translation Adaptation with Tiny Amount of In-domain Data;Bilingual Methods for Adaptive Training Data Selection for Machine Translation;Cost Weighting for Neural Machine Translation Domain Adaptation;Automatic Threshold Detection for Data Selection in Machine Translation). These methods cast the problem as text classification: a classifier is trained on domain data, each candidate sentence is then classified by domain, and the output probability is used as its score.
\item Feature decay algorithms (FDA) (Instance selection for machine translation using feature decay algorithms;Feature decay algorithms for neural machine translation;Selecting Backtranslated Data from Multiple Sources for Improved Neural Machine Translation;Data Selection with Feature Decay Algorithms Using an Approximated Target Side). FDA is based on feature matching: it tries to extract from the source domain a set of sentences that maximizes the coverage of the target domain's linguistic features.
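\parinterval As a concrete illustration of the first criterion above, the cross-entropy difference score can be written as follows (the notation here is ours; the cited papers differ in details):
\begin{equation}
\textrm{CED}(x) = H_{\textrm{in}}(x) - H_{\textrm{gen}}(x)
\end{equation}
where $H_{\textrm{in}}(x)$ and $H_{\textrm{gen}}(x)$ denote the per-word cross-entropy of sentence $x$ under the in-domain and general-domain language models, respectively. A lower (more negative) value means that $x$ is explained much better by the in-domain model than by the general model, and is therefore considered more relevant to the target domain.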
\parinterval Although these methods differ, they all aim to measure the relevance between a sample and a domain, and the resulting scores ultimately serve the sample learning strategy used during training. Learning strategies come in two flavors, static and dynamic. Early work focused on designing scoring functions and generally adopted static strategies: the source-domain data is scored and ranked by the scoring function once, a fixed amount of it is selected and merged into the target-domain dataset, and the model is trained on the union (Domain Adaptation Via Pseudo In-Domain Data Selection;Data Selection With Fewer Words;Bilingual Methods for Adaptive Training Data Selection for Machine Translation;Instance selection for machine translation using feature decay algorithms;Semi-supervised Convolutional Networks for Translation Adaptation with Tiny Amount of In-domain Data); a minimal sketch of this procedure is given after the list below. This effectively enlarges the target-domain data, so the model's gains come mainly from the added data. With practice, however, two drawbacks of static methods became apparent:
\item Compared with training on the complete source-domain data pool, training on a selected subset lowers vocabulary coverage and aggravates the long-tail distribution of words, which can significantly hurt the performance of the translation system. (Data Selection With Fewer Words;Dynamic Data Selection for Neural Machine Translation)
\item A static method can be viewed as a data filtering technique whose decisions are all-or-nothing: each sample is either accepted or rejected. Such hard decisions are sensitive to the scoring function, and the rejected data may still be useful for training; moreover, its usefulness may change as training progresses. (Denoising Neural Machine Translation Training with Trusted Data and Online Data Selection)
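\parinterval To make the contrast with the dynamic strategies below concrete, here is a minimal Python-style sketch of the static procedure (the scoring function \texttt{score} and the cutoff \texttt{k} are hypothetical placeholders, not an implementation from the cited papers):
\begin{verbatim}
def static_selection(general_data, in_domain_data, score, k):
    # Score every general-domain pair once, before training starts
    # (for CED-style scores, lower means more in-domain-like).
    ranked = sorted(general_data, key=lambda pair: score(pair))
    # Hard, one-shot decision: keep the k best-ranked pairs and
    # reject everything else for the entire training run.
    selected = ranked[:k]
    return in_domain_data + selected
\end{verbatim}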
\parinterval To address these problems, researchers proposed dynamic learning strategies, where "dynamic" means that during training the source-domain and target-domain data are organized according to some evolving policy. The basic idea is not to discard low-relevance sentences entirely, but to make the model pay more attention to highly relevant ones. There are two main implementations, both sketched below: one expresses the domain similarity of sentences as a probability distribution and dynamically samples training data from that distribution (Dynamic Data Selection for Neural Machine Translation;Dynamic Sentence Sampling for Efficient Training of Neural Machine Translation); the other weights each sentence's contribution to the loss function by its domain similarity (Instance Weighting for Neural Machine Translation Domain Adaptation;Cost Weighting for Neural Machine Translation Domain Adaptation). Compared with static binary selection, dynamic methods perform a "soft" selection that gives the model a chance to use the remaining data and increases the diversity of the training data, so they usually perform better.
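\parinterval In rough terms (the notation is ours; concrete forms vary across the cited papers), the two dynamic variants can be written as follows. The sampling variant draws a sentence pair $(x,y)$ with a probability that grows with its domain-relevance score $s(x,y)$:
\begin{equation}
p(x,y) = \frac{\exp\big(s(x,y)/\tau\big)}{\sum_{(x',y')}\exp\big(s(x',y')/\tau\big)}
\end{equation}
where the temperature $\tau$ controls how strongly sampling favors in-domain data. The weighting variant instead scales each sentence's usual training loss $\ell(x,y)$ by a weight derived from the same score:
\begin{equation}
L = \sum_{(x,y)} w(x,y)\,\ell(x,y)
\end{equation}
Static selection is the limiting case in which $w(x,y)$ only takes the values $0$ or $1$.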
\parinterval Besides domain mismatch, another common form of training-set bias is label noise. Machine translation training data is mostly crawled from the web, which inevitably introduces noise such as misaligned sentences, mixed languages within a sentence, and missing words. Studies have shown that neural machine translation is sensitive to noisy data: when there is too much noise, model performance degrades significantly (On the impact of various types of noise on neural machine translation). Whether for model robustness or for training efficiency, data denoising is therefore worthwhile. In fact, there was already considerable work on data denoising in the statistical machine translation era (Dealing with Input Noise in Statistical Machine Translation;Bilingual Data Cleaning for SMT using Graph-based Random Walk;Learning from Noisy Data in Statistical Machine Translation), and WMT 2018 introduced a shared task on parallel corpus filtering, a sign that data denoising is attracting growing attention.
\parinterval Since noisy translation data usually exhibits fairly distinctive features, heuristic features such as the sentence length ratio, the word alignment rate, and the length of the longest consecutive unaligned sequence can be combined into an overall score (MT Detection in Web-Scraped Parallel Corpora;Parallel Corpus Refinement as an Outlier Detection Algorithm;Zipporah: a Fast and Scalable Data Cleaning System for Noisy Web-Crawled Parallel Corpora); the problem can also be cast as text classification or cross-lingual textual entailment (Detecting Cross-Lingual Semantic Divergence for Neural Machine Translation;Identifying Semantic Divergences in Parallel Text without Annotations). Moreover, in a sense data denoising is itself a kind of domain data selection, since its goal is to select highly trustworthy samples: one can manually build a small trusted dataset and select data based on the contrast between it and the general dataset (Denoising Neural Machine Translation Training with Trusted Data and Online Data Selection).
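\parinterval A minimal sketch of such heuristic scoring is shown below. The three features follow the ones named above, but the particular weights and the assumption that \texttt{align} is a set of source-target index pairs are ours, not taken from the cited systems:
\begin{verbatim}
def noise_score(src, tgt, align):
    # Length ratio: very unbalanced sentence pairs are suspicious.
    length_ratio = min(len(src), len(tgt)) / max(len(src), len(tgt))
    # Alignment rate: fraction of source words with an alignment link.
    aligned = {i for (i, j) in align}
    align_rate = len(aligned) / len(src)
    # Length of the longest run of consecutive unaligned source words.
    longest_gap, gap = 0, 0
    for i in range(len(src)):
        gap = gap + 1 if i not in aligned else 0
        longest_gap = max(longest_gap, gap)
    # Linear combination; higher = cleaner (weights are ad hoc).
    return (0.4 * length_ratio + 0.4 * align_rate
            - 0.2 * longest_gap / len(src))
\end{verbatim}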
\parinterval Early work mostly focused on filtering methods; robust training in the presence of noise and the exploitation of noisy samples received less attention. In fact, noise comes in degrees: some noisy data may be valuable to the model, and its value may change with the state of the model (Denoising Neural Machine Translation Training with Trusted Data and Online Data Selection). An example is shown in Figure \ref{fig:13-51}.
\parinterval In the figure, part of the translation is missing from the Chinese sentence, yet both sentences are fluent. Simple methods based on length or on a bilingual dictionary can easily filter this pair out, but intuitively it is still useful for training an NMT model, especially when data is scarce, because the first halves of the Chinese and English sentences are still a translation pair. This illustrates the subtlety of noisy data: it is not a simple binary classification problem. Some training samples may be partially useful, and their usefulness may change as training progresses. Simple filtering is therefore not a very good solution; a reasonable learning strategy should exploit such data without letting it harm the model. Intuitively this is a dynamic process: while the model is still weak (e.g., early in training), such data can help it, and vice versa. Inspired by curriculum learning (covered in more detail in the next section) and fine-tuning, researchers have proposed learning strategies along these lines. Their main idea is to anneal the noise level of the training batches so that the model is trained on cleaner and cleaner batches (Denoising Neural Machine Translation Training with Trusted Data and Online Data Selection;Dynamically Composing Domain-Data Selection with Clean-Data Selection by “Co-Curricular Learning” for Neural Machine Translation). Viewed macroscopically, the whole training run becomes a process of continual fine-tuning, which is essentially the idea behind fine-tuning itself. Such a strategy makes full use of the training data while shielding the model from the negative effects of noise, and it has achieved good results.
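\parinterval One simple way to realize this annealing (a generic sketch; the cited papers define their own schedules and noise measures) is to rank all training pairs by a cleanness score and, at training step $t$, build batches only from the cleanest fraction of the data:
\begin{equation}
f(t) = f_0 \cdot \lambda^{t/T}
\end{equation}
where $f_0$ is the initial fraction, $0 < \lambda < 1$ controls the decay, and $T$ is a scaling constant. Early batches are drawn from nearly all of the data, so partially useful noisy pairs still contribute, while late batches come from an increasingly clean subset.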
\item Uncertainty sampling (An analysis of active learning strategies for sequence labeling tasks;Query learning with large margin classifiers;Less is more: Active learning with support vector machines). These methods select the samples about whose classification the current base classifier is least certain; uncertainty is typically measured by least confidence, margin sampling, or entropy (the three measures are written out after this list). Geometrically, such methods prefer samples close to the classification boundary;
\item Query-by-committee (Query by committee;Machine Learning;Query learning strategies using boosting and bagging;Employing em and pool-based active learning for text classification). These methods select the samples whose labeling would shrink the version space the most; classifier ensemble algorithms such as Bagging and AdaBoost can generate a committee from the version space, and the samples on which the committee's hypotheses disagree the most are selected;
\item Other classic strategies: the expected gradient length (EGL) strategy, which prioritizes the unlabeled samples that would influence the current model the most (Histograms of oriented gradients for human detection;Gradient-based learning applied to document recognition); the variance reduction (VR) strategy, which selects the samples that reduce the model variance the most (Optimum Experimental Designs, with SAS;A variance minimization criterion to active learning on graphs); and methods that incorporate generative adversarial networks (Generative adversarial active learning;Active decision boundary annotation with deep generative models;Adversarial sampling for active learning), among others.
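\parinterval For reference, the three uncertainty measures mentioned above are usually written as follows (standard definitions; $\hat{y}_1$ and $\hat{y}_2$ denote the most and second most probable labels under the current model $\theta$):
\begin{eqnarray}
x^{*}_{\textrm{LC}} &=& \mathop{\arg\max}_{x}\ \big(1 - P(\hat{y}_1 \mid x;\theta)\big) \\
x^{*}_{\textrm{M}} &=& \mathop{\arg\min}_{x}\ \big(P(\hat{y}_1 \mid x;\theta) - P(\hat{y}_2 \mid x;\theta)\big) \\
x^{*}_{\textrm{H}} &=& \mathop{\arg\max}_{x}\ \Big(-\sum_{y} P(y \mid x;\theta)\log P(y \mid x;\theta)\Big)
\end{eqnarray}
All three prefer inputs on which the model's predictive distribution is flat, i.e., samples near the decision boundary.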
\parinterval Active learning is particularly suitable for specialized domains such as medicine, finance, and law, where annotation tends to be expensive. In fact, active learning has not been used much in neural machine translation, mainly because all it does is pick out the valuable monolingual data and hand it to human annotators, a process that requires human involvement, whereas neural machine translation has many other ways of exploiting monolingual data, for example combining a language model on the target side (On using very large target vocabulary for neural machine translation;On using monolingual corpora in neural machine translation), back-translation (Improving neural machine translation models with monolingual data;Joint training for neural machine translation models with monolingual data;Iterative backtranslation for neural machine translation), multilingual or transfer learning (Exploiting Out-of-Domain Parallel Data through Multilingual Transfer Learning for Low-Resource Neural Machine Translation;Zero-Resource Neural Machine Translation with Monolingual Pivot Data;Pivot-based transfer learning for neural machine translation between non-english languages), and unsupervised machine translation (Unsupervised machine translation using monolingual corpora only;Unsupervised neural machine translation). Still, in certain scenarios active learning plays an important role. In low-resource or domain-specific neural machine translation, it can greatly reduce the cost of human annotation (Learning to Actively Learn Neural Machine Translation;Active Learning Approaches to Enhancing Neural Machine Translation); in interactive or incremental machine translation (Active Learning for Interactive Neural Machine Translation of Data Streams;Continuous learning from human post-edits for neural machine translation;Online learning for effort reduction in interactive neural machine translation), it lets the model keep benefiting from external feedback.
\parinterval This matches intuition. Imagine someone with no background in mathematics: studying basic arithmetic and advanced mathematics at the same time is naturally inefficient. Following the normal order instead, first basic arithmetic, then functions of various kinds, and finally advanced mathematics, each stage builds on the previous one, so learning is more efficient. Curriculum learning has in fact attracted great attention from researchers ever since it was proposed, not only because the idea itself is interesting, but also because, as a model-agnostic training strategy, it is plug-and-play and can be widely applied in all kinds of computation-intensive areas to improve efficiency, such as computer vision (CV)\upcite{DBLP:conf/eccv/GuoHZZDSH18,DBLP:conf/mm/JiangMMH14}, natural language processing (NLP)\upcite{DBLP:conf/naacl/PlataniosSNPM19,DBLP:conf/acl/TayWLFPYRHZ19}, and neural architecture search (NAS)\upcite{DBLP:conf/icml/GuoCZZ0HT20}. Neural machine translation is an NLP task particularly well suited to curriculum learning: it usually requires large-scale parallel corpora, so training is expensive, and using curriculum learning to speed up convergence is a natural idea.
\parinterval How sample difficulty is assessed depends on the task. In neural machine translation there are many options. One can use linguistically motivated difficulty criteria such as sentence length, average word frequency, or parse tree depth (Competence-based curriculum learning for neural machine translation;Curriculum Learning and Minibatch Bucketing in Neural Machine Translation). These criteria are essentially human prior knowledge: they match human intuition, but not necessarily the model, since sentences that are simple for humans are not always easy for the model. Researchers have therefore also proposed automatic, model-based assessments, for example using language models (Dynamically Composing Domain-Data Selection with Clean-Data Selection by “Co-Curricular Learning” for Neural Machine Translation;Curriculum Learning for Domain Adaptation in Neural Machine Translation) or neural machine translation models themselves (An empirical exploration of curriculum learning for neural machine translation;Dynamic Curriculum Learning for Low-Resource Neural Machine Translation). Note that NMT-based scoring comes in a static and a dynamic flavor: the static approach scores with a smaller NMT model trained on a small dataset (An empirical exploration of curriculum learning for neural machine translation), while the dynamic approach scores with the current state of the model being trained, using for example the training loss or its rate of change (Dynamic Curriculum Learning for Low-Resource Neural Machine Translation); in a broad sense the latter is also called self-paced learning.
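\parinterval As one concrete schedule, the competence-based curriculum cited above first normalizes each sample's difficulty into $[0,1]$ via its cumulative distribution, and then, at training step $t$, admits into a batch only samples whose difficulty does not exceed the current model competence, which grows as
\begin{equation}
c(t) = \min\left(1,\ \sqrt{t\,\frac{1-c_0^{2}}{T} + c_0^{2}}\right)
\end{equation}
where $c_0$ is the initial competence and $T$ is the length of the curriculum; after $T$ steps the model trains on the full dataset.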
\parinterval Broadly speaking, most curriculum learning methods follow the easy-to-hard principle. In practice, however, curriculum learning has gradually been given richer meaning, well beyond its original definition. On the one hand, curriculum learning can be combined with many tasks, in which case the ordering criterion is not necessarily sample difficulty but depends on the task: in multi-task learning\upcite{DBLP:conf/cvpr/PentinaSL15,DBLP:conf/iccvw/SarafianosGNK17} it refers to the difficulty or relatedness of the tasks; in domain adaptation\upcite{DBLP:conf/naacl/ZhangSKMCD19} it refers to the similarity between the data and the domain; in noisy-data settings it refers to the trustworthiness of the samples\upcite{DBLP:conf/acl/WangCC19}. On the other hand, on some tasks or datasets easy-to-hard is not always effective, and hard-first sometimes works better\upcite{zhang2018empirical} (Curriculum learning with deep convolutional neural networks). This runs against intuition; a reasonable explanation is that curriculum learning suits settings with label noise, many outliers, or a difficult target task, where it improves robustness and convergence speed, whereas hard-first suits clean datasets, where it makes stochastic gradient descent (SGD) faster and more stable\upcite{DBLP:conf/nips/ChangLM17}. This steadily richer meaning has given curriculum learning an increasingly wide range of applications.
\parinterval To some extent, multi-domain and multilingual machine translation are themselves continual learning scenarios, and catastrophic forgetting is one of the main problems they face. In multi-domain neural machine translation, we want a model that retains general-domain performance while also doing well on specific domains; in reality, because of catastrophic forgetting, adapting to a specific domain often comes at the expense of general-domain performance (Overcoming Catastrophic Forgetting During Domain Adaptation of Neural Machine Translation;Investigating Catastrophic Forgetting During Continual Training for Neural Machine Translation). Most existing solutions fall into the three categories above; see Section 16.5 (Domain Adaptation) for details. In multilingual neural machine translation, the ideal is a single model that translates between many languages, but because the data distributions differ greatly, the reality is that multilingual models improve translation between low-resource language pairs while also degrading high-resource pairs. How to let the model keep benefiting from multilingual training data is therefore a key question, discussed in detail in Section 16.3 (Multilingual Translation Models). In addition, catastrophic forgetting also arises in incremental model optimization; see Section 18.2 (Incremental Model Optimization).
title = {Exploiting Out-of-Domain Parallel Data through Multilingual Transfer
Learning for Low-Resource Neural Machine Translation},
pages = {128--139},
publisher = {Machine Translation Summit},
year = {2019}
}
@inproceedings{DBLP:conf/emnlp/CurreyH19,
author = {Anna Currey and
Kenneth Heafield},
title = {Zero-Resource Neural Machine Translation with Monolingual Pivot Data},
pages = {99--107},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2019}
}
@inproceedings{DBLP:conf/emnlp/KimPPKN19,
author = {Yunsu Kim and
Petre Petrov and
Pavel Petrushkov and
Shahram Khadivi and
Hermann Ney},
title = {Pivot-based Transfer Learning for Neural Machine Translation between
Non-English Languages},
pages = {866--876},
publisher = {Annual Meeting of the Association for Computational Linguistics},
year = {2019}
}
@inproceedings{DBLP:conf/iclr/LampleCDR18,
author = {Guillaume Lample and
Alexis Conneau and
Ludovic Denoyer and
Marc'Aurelio Ranzato},
title = {Unsupervised Machine Translation Using Monolingual Corpora Only},
publisher = {International Conference on Learning Representations},
year = {2018}
}
@inproceedings{DBLP:conf/iclr/ArtetxeLAC18,
author = {Mikel Artetxe and
Gorka Labaka and
Eneko Agirre and
Kyunghyun Cho},
title = {Unsupervised Neural Machine Translation},
publisher = {International Conference on Learning Representations},
year = {2018}
}
@inproceedings{DBLP:conf/conll/LiuBH18,
author = {Ming Liu and
Wray L. Buntine and
Gholamreza Haffari},
title = {Learning to Actively Learn Neural Machine Translation},
pages = {334--344},
publisher = {The SIGNLL Conference on Computational Natural Language Learning},
year = {2018}
}
@inproceedings{DBLP:conf/emnlp/ZhaoZZZ20,
author = {Yuekai Zhao and
Haoran Zhang and
Shuchang Zhou and
Zhihua Zhang},
title = {Active Learning Approaches to Enhancing Neural Machine Translation:
An Empirical Study},
pages = {1796--1806},
publisher = {Conference on Empirical Methods in Natural Language Processing},
year = {2020}
}
@inproceedings{Peris2018ActiveLF,
title={Active Learning for Interactive Neural Machine Translation of Data Streams},
author={{\'A}lvaro Peris and Francisco Casacuberta},
publisher={The SIGNLL Conference on Computational Natural Language Learning},
pages={151--160},
year={2018}
}
@inproceedings{DBLP:journals/pbml/TurchiNFF17,
author = {Marco Turchi and
Matteo Negri and
M. Amin Farajian and
Marcello Federico},
title = {Continuous Learning from Human Post-Edits for Neural Machine Translation},
publisher = {The Prague Bulletin of Mathematical Linguistics},
volume = {108},
pages = {233--244},
year = {2017}
}
@inproceedings{DBLP:journals/csl/PerisC19,
author = {{\'{A}}lvaro Peris and
Francisco Casacuberta},
title = {Online learning for effort reduction in interactive neural machine
translation},
publisher = {Computer Speech and Language},
volume = {58},
pages = {98--126},
year = {2019}
}
@inproceedings{DBLP:conf/eccv/GuoHZZDSH18,
author = {Sheng Guo and
Weilin Huang and
Haozhi Zhang and
Chenfan Zhuang and
Dengke Dong and
Matthew R. Scott and
Dinglong Huang},
title = {CurriculumNet: Weakly Supervised Learning from Large-Scale Web Images},
series = {Lecture Notes in Computer Science},
volume = {11214},
pages = {139--154},
publisher = {European Conference on Computer Vision},
year = {2018}
}
@inproceedings{DBLP:conf/mm/JiangMMH14,
author = {Lu Jiang and
Deyu Meng and
Teruko Mitamura and
Alexander G. Hauptmann},
title = {Easy Samples First: Self-paced Reranking for Zero-Example Multimedia
Search},
pages = {547--556},
publisher = {ACM International Conference on Multimedia},
year = {2014}
}
@inproceedings{DBLP:conf/naacl/PlataniosSNPM19,
author = {Emmanouil Antonios Platanios and
Otilia Stretcu and
Graham Neubig and
Barnab{\'{a}}s P{\'{o}}czos and
Tom M. Mitchell},
title = {Competence-based Curriculum Learning for Neural Machine Translation},
pages = {1162--1172},
publisher = {Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year = {2019}
}
@inproceedings{DBLP:conf/acl/TayWLFPYRHZ19,
author = {Yi Tay and
Shuohang Wang and
Anh Tuan Luu and
Jie Fu and
Minh C. Phan and
Xingdi Yuan and
Jinfeng Rao and
Siu Cheung Hui and
Aston Zhang},
title = {Simple and Effective Curriculum Pointer-Generator Networks for Reading
Comprehension over Long Narratives},
pages = {4922--4931},
publisher = {Annual Meeting of the Association for Computational Linguistics},
year = {2019}
}
@inproceedings{DBLP:conf/icml/GuoCZZ0HT20,
author = {Yong Guo and
Yaofo Chen and
Yin Zheng and
Peilin Zhao and
Jian Chen and
Junzhou Huang and
Mingkui Tan},
title = {Breaking the Curse of Space Explosion: Towards Efficient {NAS} with
Curriculum Search},
series = {Proceedings of Machine Learning Research},
volume = {119},
pages = {3822--3831},
publisher = {International Conference on Machine Learning},
year = {2020}
}
@inproceedings{DBLP:conf/ranlp/KocmiB17,
author = {Tom Kocmi and
Ondrej Bojar},
title = {Curriculum Learning and Minibatch Bucketing in Neural Machine Translation},
pages = {379--386},
publisher = {International Conference Recent Advances in Natural Language Processing},
year = {2017}
}
@inproceedings{DBLP:conf/naacl/ZhangSKMCD19,
author = {Xuan Zhang and
Pamela Shapiro and
Gaurav Kumar and
Paul McNamee and
Marine Carpuat and
Kevin Duh},
title = {Curriculum Learning for Domain Adaptation in Neural Machine Translation},
pages = {1903--1915},
publisher = {Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year = {2019}
}
@inproceedings{zhang2018empirical,
title={An empirical exploration of curriculum learning for neural machine translation},
author={Zhang, Xuan and Kumar, Gaurav and Khayrallah, Huda and Murray, Kenton and Gwinnup, Jeremy and Martindale, Marianna J and McNamee, Paul and Duh, Kevin and Carpuat, Marine},
publisher={arXiv preprint arXiv:1811.00739},
year={2018}
}
@inproceedings{DBLP:conf/coling/XuHJFWHJXZ20,
author = {Chen Xu and
Bojie Hu and
Yufan Jiang and
Kai Feng and
Zeyang Wang and
Shen Huang and
Qi Ju and
Tong Xiao and
Jingbo Zhu},
title = {Dynamic Curriculum Learning for Low-Resource Neural Machine Translation},
pages = {3977--3989},
publisher = {International Committee on Computational Linguistics},
year = {2020}
}
@inproceedings{DBLP:conf/acl/ZhouYWWC20,
author = {Yikai Zhou and
Baosong Yang and
Derek F. Wong and
Yu Wan and
Lidia S. Chao},
title = {Uncertainty-Aware Curriculum Learning for Neural Machine Translation},
pages = {6934--6944},
publisher = {Annual Meeting of the Association for Computational Linguistics},
year = {2020}
}
@inproceedings{DBLP:conf/aaai/ZhaoWNW20,
author = {Mingjun Zhao and
Haijiang Wu and
Di Niu and
Xiaoli Wang},
title = {Reinforced Curriculum Learning on Pre-Trained Neural Machine Translation
Models},
pages = {9652--9659},
publisher = {AAAI Conference on Artificial Intelligence},
year = {2020}
}
@inproceedings{DBLP:conf/cvpr/PentinaSL15,
author = {Anastasia Pentina and
Viktoriia Sharmanska and
Christoph H. Lampert},
title = {Curriculum learning of multiple tasks},
pages = {5492--5500},
publisher = {IEEE Conference on Computer Vision and Pattern Recognition},
year = {2015}
}
@inproceedings{DBLP:conf/iccvw/SarafianosGNK17,
author = {Nikolaos Sarafianos and
Theodore Giannakopoulos and
Christophoros Nikou and
Ioannis A. Kakadiaris},
title = {Curriculum Learning for Multi-task Classification of Visual Attributes},
pages = {2608--2615},
publisher = {IEEE International Conference on Computer Vision},
year = {2017}
}
@inproceedings{DBLP:conf/nips/ChangLM17,
author = {Haw{-}Shiuan Chang and
Erik G. Learned{-}Miller and
Andrew McCallum},
title = {Active Bias: Training More Accurate Neural Networks by Emphasizing
High Variance Samples},
publisher = {Conference and Workshop on Neural Information Processing Systems},
pages = {1002--1012},
year = {2017}
}
@inproceedings{DBLP:journals/pami/LiH18a,
author = {Zhizhong Li and
Derek Hoiem},
title = {Learning without Forgetting},
publisher = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
volume = {40},
number = {12},
pages = {2935--2947},
year = {2018}
}
@inproceedings{rusu2016progressive,
title={Progressive neural networks},
author={Rusu, Andrei A and Rabinowitz, Neil C and Desjardins, Guillaume and Soyer, Hubert and Kirkpatrick, James and Kavukcuoglu, Koray and Pascanu, Razvan and Hadsell, Raia},