Speed Up Your Keras Sequence Pipeline

When you use tf.keras.utils.Sequence to generate batches with multiple worker processes, the overhead of copying data between processes can be very high. Workers end up blocked most of the time, and batch-generation throughput drops. A common solution is shared memory, so data is shared between processes instead of copied; PyTorch uses this approach internally. Starting with Python 3.8, the multiprocessing library's shared_memory module lets you achieve the same.
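Here is a minimal sketch of the idea, assuming the whole dataset fits in memory as a single NumPy array. The class and variable names (SharedSequence, shm_name, and so on) are illustrative, not from the original article:

```python
# A minimal sketch: put the dataset in a shared-memory block once,
# and have each Sequence worker attach to it by name instead of
# receiving a pickled copy of the data.
from multiprocessing import shared_memory  # Python 3.8+

import numpy as np
from tensorflow.keras.utils import Sequence


class SharedSequence(Sequence):
    def __init__(self, shm_name, shape, dtype, labels, batch_size):
        # Only this lightweight metadata is pickled to worker
        # processes; the array itself stays in shared memory.
        self.shm_name = shm_name
        self.shape = shape
        self.dtype = dtype
        self.labels = labels
        self.batch_size = batch_size
        self._shm = None

    @property
    def data(self):
        # Attach lazily, once per worker process.
        if self._shm is None:
            self._shm = shared_memory.SharedMemory(name=self.shm_name)
        return np.ndarray(self.shape, dtype=self.dtype, buffer=self._shm.buf)

    def __len__(self):
        return int(np.ceil(self.shape[0] / self.batch_size))

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        # Copy only the current batch out of shared memory.
        return self.data[sl].copy(), self.labels[sl]


if __name__ == "__main__":
    x = np.random.rand(10_000, 32).astype(np.float32)
    y = np.random.randint(0, 2, size=10_000)

    # Create the shared block in the parent and copy the data in once.
    shm = shared_memory.SharedMemory(create=True, size=x.nbytes)
    np.ndarray(x.shape, dtype=x.dtype, buffer=shm.buf)[:] = x

    seq = SharedSequence(shm.name, x.shape, x.dtype, y, batch_size=64)
    # model.fit(seq, workers=4, use_multiprocessing=True)

    shm.close()
    shm.unlink()  # release the shared block when training is done
```

Because only the block's name, shape, and dtype cross the process boundary, the per-batch copy cost no longer scales with the size of the full dataset.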

You can find the complete article here.

