Slow spinup time #17

@anthnyprschka

Description

Hi guys,

I've been running a slightly modified version of the code on my (admittedly slow) MacBook Air (2013).

Now I am wondering: is it normal for the declaration of the training ops (tf.train.AdamOptimizer, tf.gradients, tf.clip_by_global_norm, tf.train.AdamOptimizer.apply_gradients) to take a combined 11 minutes (or anything in that order of magnitude)? I've downsized layer_size to 16 as well, with the same effect. This affects the development workflow.
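For what it's worth, here is a minimal stdlib-only timing helper I could use to narrow down which declaration dominates those 11 minutes. The commented-out TF calls are placeholders sketching how it would wrap the graph-construction code in question, not the actual code from this repo:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    """Print wall-clock time spent inside the block."""
    start = time.perf_counter()
    yield
    print(f"{label}: {time.perf_counter() - start:.2f}s")

# Hypothetical usage around the graph-construction calls in question:
# with timed("tf.gradients"):
#     grads = tf.gradients(loss, tf.trainable_variables())
# with timed("clip + apply_gradients"):
#     clipped, _ = tf.clip_by_global_norm(grads, max_norm)
#     train_op = optimizer.apply_gradients(zip(clipped, params))
```

That would at least tell me whether tf.gradients (building the backward graph) or apply_gradients is the bottleneck.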

I'd be thankful for any hints, because testing other parts of the code is very time-consuming while this takes so long.

Best,
Anthony

EDIT:
Could this be caused by the Mac needing to allocate virtual memory? I only have 4 GB of RAM and the model consumes more than that in my current setup.
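A rough sanity check on that theory: Adam keeps two slot variables (the first and second moment estimates) per trainable parameter, so optimizer state roughly triples the parameter memory before activations are even counted. A back-of-the-envelope sketch (the parameter count below is a made-up example, not taken from my model):

```python
def adam_memory_gib(num_params, dtype_bytes=4):
    """Rough footprint of float32 weights plus Adam's m and v slots, in GiB.

    Ignores activations, gradients held during backprop, and framework
    overhead, so the real peak is higher still.
    """
    return 3 * num_params * dtype_bytes / 2**30

# e.g. a hypothetical 300M-parameter float32 model:
print(f"{adam_memory_gib(300_000_000):.2f} GiB")
```

If that lands anywhere near 4 GB, the machine would be swapping, which could plausibly explain the slowdown.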
