Open
Description
Hi guys,
been running a slightly modified version of the code on my (admittedly slow) Macbook Air 2013.
Now I am wondering: is it normal for the declaration of the training ops (tf.train.AdamOptimizer, tf.gradients, tf.clip_by_global_norm, tf.train.AdamOptimizer.apply_gradients) to take a combined 11 minutes (or anything in that order of magnitude)? I have tried downsizing layer_size to 16 as well, with the same effect. This affects the development workflow.
I would be thankful for any hints, because with this taking so long, testing other parts of the code is very time-consuming.
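A minimal way to narrow this down is to time each graph-construction step separately and see which one dominates the 11 minutes. The sketch below uses stand-in callables so it runs anywhere; in the real code each one would be the corresponding TF 1.x call (tf.train.AdamOptimizer, tf.gradients, tf.clip_by_global_norm, optimizer.apply_gradients). The function names and the list of steps are assumptions, since the issue does not show the surrounding code.

```python
import time

# Stand-ins for the graph-construction steps named in the issue.
# Replace each body with the real TF 1.x call in your own code.
def build_optimizer():       pass  # stand-in for tf.train.AdamOptimizer(...)
def build_gradients():       pass  # stand-in for tf.gradients(loss, params)
def build_clipping():        pass  # stand-in for tf.clip_by_global_norm(...)
def build_apply_gradients(): pass  # stand-in for optimizer.apply_gradients(...)

def time_steps(steps):
    """Time each construction step to find out which one is slow."""
    timings = {}
    for name, fn in steps:
        start = time.perf_counter()
        fn()
        timings[name] = time.perf_counter() - start
    return timings

timings = time_steps([
    ("AdamOptimizer", build_optimizer),
    ("gradients", build_gradients),
    ("clip_by_global_norm", build_clipping),
    ("apply_gradients", build_apply_gradients),
])
for name, seconds in timings.items():
    print(f"{name}: {seconds:.3f}s")
```

Knowing whether the time is spent in tf.gradients (graph backprop construction) or in apply_gradients would make the question much easier to answer.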
Best,
Anthony
EDIT:
Could this be caused by the Mac needing to allocate virtual memory? I only have 4 GB of RAM, and the model consumes more than that in my current setup.
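One way to sanity-check the memory theory is a back-of-the-envelope estimate of the model's weight memory. The sketch below estimates the float32 weight footprint of a single LSTM layer; layer_size=16 comes from the issue, while input_size=128 and the LSTM cell architecture are assumptions for illustration.

```python
def lstm_param_count(input_size, layer_size):
    # A standard LSTM cell has 4 gates, each with a kernel over the
    # concatenated [input, hidden] vector plus a bias vector.
    return 4 * ((input_size + layer_size) * layer_size + layer_size)

def weight_bytes(params, bytes_per_float=4):
    # float32 weights: 4 bytes per parameter.
    return params * bytes_per_float

params = lstm_param_count(input_size=128, layer_size=16)
print(f"params: {params}, weight memory: {weight_bytes(params)} bytes")
```

If the weights alone come out tiny (kilobytes at layer_size=16), then swapping would more likely be caused by activations, the graph itself, or other processes rather than by the model's parameters.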