TFRecord IO slower than python hdf5 reader, how do I improve its speed?























I was following the official TF guide on using the tf.data.Dataset API to build a data pipeline, but I found it ~2 times slower than my Python data pipeline that reads hdf5 files.



Here are my experiment results:




TFRecordDataset: 660 thousand samples/second
hdf5 reader + feed_dict to placeholder: 1.1 million samples/second


My experiment setup is:

- batch size = 1000,

- print a log line every 10 million samples to record the elapsed time,

- run a fake model that only takes the data as input and does not compute anything (see the timing sketch below).
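For context, the numbers are measured with a simple timing loop roughly like the sketch below (not my exact script; the TFRecord file name, LOG_EVERY, and BATCH_SIZE are placeholders, and fakeModel is the class shown further down):

import time
import tensorflow as tf

BATCH_SIZE = 1000
LOG_EVERY = 10000000  # print a log line every 10 million samples

model = fakeModel(['train.tfrecord'])  # hypothetical file name; fakeModel is defined below
with tf.Session(graph=model.graph) as sess:
    sess.run(model.iterator.initializer)
    seen = 0
    start = time.time()
    while True:
        sess.run(model.train_preds)   # pull one batch through the fake model
        seen += BATCH_SIZE
        if seen % LOG_EVERY == 0:
            elapsed = time.time() - start
            print('%d samples, %.0f samples/sec' % (seen, seen / elapsed))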



What's worse, when I set batch size = 10000 the numbers become:




TFRecordDataset: 760 thousand samples/second
hdf5 reader + feed_dict to placeholder: 2.1 million samples/second
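For completeness, the hdf5 + feed_dict baseline is essentially an h5py read sliced into batches and fed through placeholders, roughly like this sketch (the file name and dataset names are placeholders, not the real ones):

import h5py
import tensorflow as tf

f = h5py.File('train.h5', 'r')  # hypothetical file name
ids, vals, labels = f['feature_id'], f['feature_val'], f['label']  # hypothetical dataset names

id_hldr = tf.placeholder(tf.int64, shape=(None, 29))
wt_hldr = tf.placeholder(tf.float32, shape=(None, 29))
lbl_hldr = tf.placeholder(tf.int64, shape=(None,))
train_preds = tf.identity(lbl_hldr)  # same pass-through "fake model" as the TFRecord version below

with tf.Session() as sess:
    for start in range(0, len(labels), 1000):
        sess.run(train_preds, feed_dict={
            id_hldr: ids[start:start + 1000],
            wt_hldr: vals[start:start + 1000],
            lbl_hldr: labels[start:start + 1000],
        })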


Here's my code for reading the TFRecord files:



import tensorflow as tf

def parse_tfrecord_batch(record_batch):
    FEATURE_LEN = 29
    dics = {
        'label': tf.FixedLenFeature(shape=(), dtype=tf.int64),
        'feature_id': tf.FixedLenFeature(shape=(FEATURE_LEN,), dtype=tf.int64),
        'feature_val': tf.FixedLenFeature(shape=(FEATURE_LEN,), dtype=tf.float32)
    }
    # Parse a whole batch of serialized Examples at once.
    parsed_example = tf.parse_example(record_batch, dics)
    return parsed_example['feature_id'], parsed_example['feature_val'], parsed_example['label']

class fakeModel:
    def __init__(self, train_filenames):
        self.graph = tf.Graph()
        with self.graph.as_default():
            # Batch first, then parse the whole batch in one map call.
            dataset = tf.data.TFRecordDataset(train_filenames)
            dataset = dataset.repeat()
            dataset = dataset.batch(1000)
            dataset = dataset.map(parse_tfrecord_batch, num_parallel_calls=1)
            dataset = dataset.prefetch(1000)
            self.iterator = dataset.make_initializable_iterator()
            self.id, self.wt, self.label = self.iterator.get_next()

            # The fake "model" just passes the labels through; no real computation.
            self.train_preds = tf.identity(self.label)


I've tuned num_parallel_calls to 2 and 10; it didn't help.



I've also varied prefetch(n) from 1 to 1000, which gives little improvement. One more variant I'm considering is sketched below.
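The sketch reads several TFRecord files in parallel and prefetches a couple of batches rather than 1000, since prefetch() applied after batch() counts whole batches, not samples. I haven't verified it is faster, and it assumes a TF version whose TFRecordDataset accepts num_parallel_reads:

# Variant of the input pipeline (assumptions: num_parallel_reads is available,
# and prefetching 2 batches of 1000 samples is enough to hide the read latency).
dataset = tf.data.TFRecordDataset(train_filenames, num_parallel_reads=4)
dataset = dataset.repeat()
dataset = dataset.batch(1000)
dataset = dataset.map(parse_tfrecord_batch, num_parallel_calls=4)
dataset = dataset.prefetch(2)   # 2 batches, i.e. 2000 samples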



My question is:

Is there any way to improve my TFRecord data pipeline? Am I missing something in my code?



I'd appreciate any help.
































  • Why bother benchmarking a no-op model that does no computation? Doesn't a real-world application's actual computation dwarf the dummy no-op time? How long does your real code take to run in either of these systems?
    – John Zwinck
    Nov 11 at 10:12










  • @John Zwinck 1. Because when I train my DL model on the GPU, utilization is below 50%. I guessed the GPU was not being fully utilized and that the speed of the data pipeline could be an important factor, so I tested both methods. 2. Actually, both methods take nearly the same time when training the DL model and show similar GPU utilization, so maybe the low utilization is not caused by the data pipeline.
    – JenkinsY
    Nov 12 at 1:15















tensorflow hdf5 tensorflow-datasets tfrecord
















asked Nov 11 at 9:51









JenkinsY































































