杨迪 / NiuTrans.Tensor / Commits / 0ec51854

Commit 0ec51854 authored 6 years ago by xiaotong

improve the way of using buf tensors

parent 0e1074ff
Showing 1 changed file with 21 additions and 11 deletions.

source/network/XBackwardShape.cpp  +21  -11
@@ -268,10 +268,12 @@ void XShapeGrad::GradSplit(XTensor * node, bool isEfficient)
     /* if the tensor is used somewhere else, we need another SUM
        for gradient accumulation */
     else{
-        XTensor inputGradTMP(input);
+        XTensor * inputGradTMP = NewTensorBuf(input, input->devID, input->mem);

-        _Merge(node->grad, &inputGradTMP, whereToSplit + 1, 0);
-        _Sum(input->grad, &inputGradTMP, input->grad);
+        _Merge(node->grad, inputGradTMP, whereToSplit + 1, 0);
+        _Sum(input->grad, inputGradTMP, input->grad);
+
+        DelTensorBuf(inputGradTMP);
     }

     node->visitMark = NODE_FINISHED;
@@ -347,10 +349,12 @@ void XShapeGrad::GradSplitListPost(XTensor * node, bool isEfficient)
        somewhere else, we need another SUM for gradient
        accumulation */
     else{
-        XTensor nodeGradTMP(node);
+        XTensor * nodeGradTMP = NewTensorBuf(node, node->devID, node->mem);

-        _Merge(&splits, &nodeGradTMP, whereToSplit + 1);
-        _Sum(node->grad, &nodeGradTMP, node->grad);
+        _Merge(&splits, nodeGradTMP, whereToSplit + 1);
+        _Sum(node->grad, nodeGradTMP, node->grad);
+
+        DelTensorBuf(nodeGradTMP);
     }
 }
@@ -378,8 +382,13 @@ void XShapeGrad::GradUnsqueeze(XTensor * node, bool isEfficient)
     CheckNTErrors(dSize == output->GetDim(dim), "Wrong dim size for UNSQUEEZE!");
     CheckNTErrors(output->unitNum == input->unitNum * dSize, "Wrong tensor size!");

-    _ReduceSum(output->grad, input->grad, dim);
+    XTensor * g = NewTensorBuf(input->grad, input->devID, input->mem);
+
+    _ReduceSum(output->grad, g, dim);
+    _Sum(input->grad, g, input->grad);
+
+    DelTensorBuf(g);

     node->visitMark = NODE_FINISHED;
 }
@@ -401,7 +410,7 @@ void XShapeGrad::GradTranspose(XTensor * node, bool isEfficient)
     XTensor * output = node;
     XTensor * input = income.tails[0];
-    XTensor * b = NewTensor(input);
+    XTensor * b = NewTensorBuf(input, input->devID, input->mem);
     XNoder::MakeGrad(input);

     int i = income.GetParamInt(0);
@@ -412,10 +421,12 @@ void XShapeGrad::GradTranspose(XTensor * node, bool isEfficient)
     _Transpose(output->grad, b, i, j);
     _Sum(input->grad, b, input->grad);

+    DelTensorBuf(b);
+
     node->visitMark = NODE_FINISHED;
-
-    delete b;
-}
\ No newline at end of file
+}