Difference in accuracy in FC-DNN on different runs

I have been facing an instability issue with deep neural networks. For example, with a DNN of fully connected layers (3 layers, 128 units each), I get an unweighted accuracy ranging from 62% to 66% across different runs (yes, on the same train and test sets). I use the Xavier initializer for the weights and a zero initializer for the biases. All other parameters are also fixed.
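
For reference, a minimal sketch of the setup described above (assuming tf.keras; the original post includes no code, so the input size and number of classes below are hypothetical placeholders):

    import tensorflow as tf

    input_dim, num_classes = 100, 2  # hypothetical; not given in the question

    # 3 fully connected hidden layers, 128 units each,
    # Xavier (Glorot) weight init and zero bias init.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu",
                              kernel_initializer="glorot_uniform",  # Xavier
                              bias_initializer="zeros",
                              input_shape=(input_dim,)),
        tf.keras.layers.Dense(128, activation="relu",
                              kernel_initializer="glorot_uniform",
                              bias_initializer="zeros"),
        tf.keras.layers.Dense(128, activation="relu",
                              kernel_initializer="glorot_uniform",
                              bias_initializer="zeros"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])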



Has anyone faced such a problem? I have tried using a fixed seed too, but it doesn't seem to help. I'm a bit puzzled about what is causing this instability.
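
This is the kind of seeding I mean (a sketch, assuming TensorFlow/Keras; the framework is not confirmed in the question):

    import os
    import random
    import numpy as np
    import tensorflow as tf

    os.environ["PYTHONHASHSEED"] = "0"  # Python hash randomization
    random.seed(42)                     # Python's built-in RNG
    np.random.seed(42)                  # NumPy RNG (used by many libraries)
    tf.set_random_seed(42)              # TF graph-level seed
                                        # (tf.random.set_seed in TF 2.x)

(I am aware that some GPU operations can be non-deterministic even with all seeds fixed, but I am not sure whether that alone explains a 4-point spread.)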



Thanks in advance :)

neural-network deep-learning

asked yesterday by Albert (reputation 1)

  • This is completely normal, and it's due to the random weight initialization.
    – Matias Valdenegro
    yesterday