Configuration files in this directory:

- base.yaml
- base_postnorm.yaml
- basis.yaml
- big.yaml
- big_postnorm.yaml
- deep.yaml
- deep_ctc.yaml
- dlcl.yaml
- inter.yaml
- rpr.yaml
A few issues still puzzle me:

1. Whether to scale the embeddings at the input layer (I am experimenting with replacing the scaling with the layer specification).
2. The exact setup of weight sharing between the output projection matrix and the embedding matrix in the adapter (I have noticed that inconsistent variance between the two leads to poor results).
3. Most puzzling of all, the variance of the hidden states grows layer by layer through the computation (I am not sure whether this is expected; I will compare against the behavior of the latest code).

In short, implementation details matter greatly for final performance, even when the differences are subtle.
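On point 1, a minimal NumPy sketch of why the input-layer scaling matters (this is an illustration of the standard Transformer convention, not this repo's code): embeddings are commonly initialized with std `d_model**-0.5` and then multiplied by `sqrt(d_model)` at the input, so that the token signal is not drowned out by the sinusoidal positional encodings, whose entries lie in [-1, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 512

# One token's embedding row, initialized with std d_model**-0.5
# (the common Transformer initialization).
emb = rng.normal(0, d_model ** -0.5, d_model)

# Sinusoidal positional signal for one position (entries in [-1, 1]).
pos = np.sin(np.arange(d_model) / 10000 ** (np.arange(d_model) / d_model))

unscaled = emb + pos                    # embedding std ~0.04: swamped by pos
scaled = emb * np.sqrt(d_model) + pos   # embedding std ~1: comparable to pos

print(f"emb std: {emb.std():.3f}, scaled emb std: "
      f"{(emb * np.sqrt(d_model)).std():.3f}, pos std: {pos.std():.3f}")
```

Removing the multiplication therefore changes the input variance by a factor of `d_model`, which is one way a "subtle" difference in the input layer can shift downstream behavior.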
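Point 3 can be reproduced in isolation. In a pre-norm residual stack, each sublayer adds a roughly unit-variance branch to the running residual, so hidden-state variance grows approximately linearly with depth; the observed growth may therefore be expected rather than a bug. A toy NumPy sketch, where a random linear map stands in for the attention/FFN sublayer (an assumption for illustration, not the repo's model):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512

def layer_norm(x):
    """Normalize a vector to zero mean and unit std (toy LayerNorm)."""
    return (x - x.mean()) / (x.std() + 1e-6)

# Pre-norm residual update: x_{l+1} = x_l + F(LN(x_l)).
# W has entries with std 1/sqrt(d), so F's output variance stays ~1.
x = rng.normal(0, 1, d)
variances = []
for _ in range(24):
    W = rng.normal(0, 1 / np.sqrt(d), (d, d))
    x = x + W @ layer_norm(x)
    variances.append(x.var())

# Each residual branch contributes an (approximately) independent
# unit-variance term, so variance grows roughly linearly with depth.
print(f"var after layer 1: {variances[0]:.2f}, "
      f"after layer 24: {variances[-1]:.2f}")
```

The normalization keeps each branch's input at unit scale, but nothing shrinks the accumulated residual itself, which is why post-norm and pre-norm stacks behave so differently here.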