Persisting training insights across MuJoCo runs #3161
robotmem started this conversation in Show and tell
Replies: 0 comments
Hi,
I've been training manipulation policies with MuJoCo (FetchPush, custom
envs with Franka) and noticed a recurring pattern: after tuning reward
weights or contact parameters and restarting training, there's no easy
way to keep track of what configurations worked in previous runs.
I'm not talking about checkpoints or replay buffers — more like
high-level notes: "kp=200 caused oscillation", "approach angle of 15°
was stable", that kind of thing. Currently I just keep a text file,
which doesn't scale.
I put together a small Python wrapper that logs episode summaries
(reward, steps, success, key params) to a local SQLite database and
lets you query past experience at the start of a new run.
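A minimal sketch of that idea, using only the stdlib `sqlite3` and `json` modules. The class and method names here are illustrative, not the actual robotmem API:

```python
import json
import sqlite3

class EpisodeLog:
    """Log per-episode summaries to SQLite and query them across runs.

    Hypothetical sketch: stores params as a JSON blob so arbitrary
    reward weights / contact parameters can be recorded per episode.
    """

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS episodes ("
            "  id INTEGER PRIMARY KEY,"
            "  run TEXT, reward REAL, steps INTEGER,"
            "  success INTEGER, params TEXT)"
        )

    def log_episode(self, run, reward, steps, success, params):
        # params is a plain dict, e.g. {"kp": 200, "approach_deg": 15}
        self.db.execute(
            "INSERT INTO episodes (run, reward, steps, success, params) "
            "VALUES (?, ?, ?, ?, ?)",
            (run, reward, steps, int(success), json.dumps(params)),
        )
        self.db.commit()

    def best_params(self, n=5):
        """Params of the top-n successful episodes, highest reward first."""
        rows = self.db.execute(
            "SELECT params, reward FROM episodes "
            "WHERE success = 1 ORDER BY reward DESC LIMIT ?",
            (n,),
        ).fetchall()
        return [(json.loads(p), r) for p, r in rows]

# Usage: log a few episodes, then query at the start of the next run
log = EpisodeLog()
log.log_episode("run1", reward=-3.2, steps=120, success=True, params={"kp": 150})
log.log_episode("run1", reward=-9.8, steps=200, success=False, params={"kp": 200})
print(log.best_params())  # only the successful kp=150 episode qualifies
```

Swapping `":memory:"` for a file path makes the log persist across training restarts, which is the whole point here.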
On FetchPush-v4 with heuristic policies (10 seeds), keeping a running
log of past episode outcomes improved the success rate from ~42% to
~67% over 300 episodes. The effect varies a lot with task complexity,
though.
Has anyone tried something similar for tracking experiment history in
MuJoCo? Curious if there's a standard approach I'm missing.
https://github.com/robotmem/robotmem