The input dataset (``--dataset``) should be the manipulation dataset generated in the previous step. You can specify any output filename using the ``--output_file_name`` parameter.
The key parameters for locomanipulation dataset generation are:
* ``--lift_step 60``: Number of steps for the lifting phase of the manipulation task. This should mark the point immediately after the robot has grasped the object.
* ``--navigate_step 130``: Number of steps for the navigation phase between locations. This should mark the point where the robot has lifted the object and is ready to walk.
* ``--output_file``: Name of the output dataset file.
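As a rough illustration of how the step parameters delimit phases within each demonstration, the sketch below partitions a trajectory's timesteps. This is a simplified illustration under stated assumptions; ``split_phases`` is a hypothetical helper, not part of the SDG tooling:

```python
def split_phases(num_steps, lift_step=60, navigate_step=130):
    """Partition trajectory timesteps into manipulation phases.

    Hypothetical helper for illustration only: ``lift_step`` marks the
    point just after the object is grasped, and ``navigate_step`` marks
    the point where the robot has lifted the object and is ready to
    walk. The actual SDG pipeline's internal logic may differ.
    """
    grasp = range(0, lift_step)                  # approach and grasp
    lift = range(lift_step, navigate_step)       # lifting; ends when ready to walk
    navigate = range(navigate_step, num_steps)   # walking to the target location
    return grasp, lift, navigate

grasp, lift, navigate = split_phases(200)
print(len(grasp), len(lift), len(navigate))  # 60 70 70
```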
This process creates a dataset where the robot performs the manipulation task at different locations, requiring it to navigate between points while maintaining the learned manipulation behaviors. The resulting dataset can be used to train policies that combine both locomotion and manipulation capabilities.
The policy shown above uses the camera image, hand poses, hand joint positions, object pose, and base goal pose as inputs.
The output of the model is the target base velocity, hand poses, and hand joint positions for the next several timesteps.
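To make this input/output contract concrete, here is a hedged sketch of the policy's observation and action keys as Python structures. The names are illustrative assumptions, not the actual GR00T N1.5 modality schema:

```python
# Illustrative-only key lists; the model's real modality configuration
# defines the authoritative names and shapes.
OBSERVATION_KEYS = [
    "camera_image",          # RGB frame from the robot's camera
    "hand_poses",            # end-effector poses
    "hand_joint_positions",  # finger joint angles
    "object_pose",           # pose of the manipulated object
    "base_goal_pose",        # navigation target for the base
]
ACTION_KEYS = [
    "base_velocity",         # target base velocity
    "hand_poses",            # target end-effector poses
    "hand_joint_positions",  # target finger joint angles
]

def check_observation(obs):
    """Raise if an observation dict is missing any expected key."""
    missing = [k for k in OBSERVATION_KEYS if k not in obs]
    if missing:
        raise KeyError(f"missing observation keys: {missing}")
```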
Use NuRec Background in Locomanipulation SDG
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The `NuRec assets <https://docs.isaacsim.omniverse.nvidia.com/5.1.0/assets/usd_assets_nurec.html#neural-volume-rendering>`__
are neural volumes reconstructed from real-world captures. When integrated into the locomanipulation SDG workflow, these
assets allow you to generate synthetic data in photorealistic environments that mirror real-world scenes.
You can load your own USD or USDZ file, which must include a neural reconstruction for rendering and a collision
mesh to enable physical interaction; the collision mesh must be set to invisible.
Pre-constructed assets are available via the `PhysicalAI Robotics NuRec <https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-NuRec>`__
dataset. Some of these assets are captured from a humanoid viewpoint to match the camera view of the humanoid robot.
For example, when using the asset ``hand_hold-voyager-babyboom``, the relevant files are:
- ``stage.usdz``: a USDZ archive that bundles the 3D Gaussian splatting reconstruction (``volume.nurec``), a collision mesh (``mesh.usd``), and related files.
- ``occupancy_map.yaml`` and ``occupancy_map.png``: occupancy map for path planning and navigation.
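Since the occupancy map ships as a YAML metadata file plus a PNG image, you may want to inspect the metadata before wiring it into a planner. The sketch below assumes a flat ``key: value`` layout (an assumption; check the actual file, and prefer a full YAML parser such as PyYAML for anything more complex):

```python
from pathlib import Path

def load_map_metadata(yaml_path):
    """Parse simple flat ``key: value`` occupancy-map metadata.

    Minimal stdlib-only sketch: strips ``#`` comments and splits each
    line on the first colon. Hypothetical helper for illustration, not
    part of the asset tooling.
    """
    meta = {}
    for line in Path(yaml_path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments/whitespace
        if ":" not in line:
            continue
        key, value = line.split(":", 1)
        meta[key.strip()] = value.strip()
    return meta
```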
Download the files and place them under ``<PATH_TO_USD_ASSET>``.
Ensure you have the manipulation dataset from the previous step. You can also download a pre-recorded
annotated dataset as described in :ref:`Generate the manipulation dataset <generate-the-manipulation-dataset>`
and place it under ``<DATASET_FOLDER>/dataset_annotated_g1_locomanip.hdf5``.
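Before launching generation, it can help to sanity-check that the dataset file is in place and actually is HDF5: every HDF5 file begins with the 8-byte superblock signature ``\x89HDF\r\n\x1a\n``. A small stdlib-only check (``check_dataset`` is a hypothetical helper, not part of the workflow's tooling):

```python
from pathlib import Path

HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"  # standard HDF5 superblock signature

def check_dataset(path):
    """Verify the dataset file exists and starts with the HDF5 signature."""
    p = Path(path)
    if not p.is_file():
        raise FileNotFoundError(f"dataset not found: {p}")
    with p.open("rb") as f:
        if f.read(8) != HDF5_MAGIC:
            raise ValueError(f"{p} does not look like an HDF5 file")
    return True
```

For deeper inspection (listing groups and datasets inside the file), a library such as ``h5py`` would be the usual choice.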