
Commit 33dc693

Merge branch 'develop' into urdf-importer
2 parents 222f250 + b19a034 commit 33dc693

165 files changed: +3904 −1150 lines changed

docker/Dockerfile.base

Lines changed: 8 additions & 15 deletions

@@ -37,10 +37,7 @@ ENV DEBIAN_FRONTEND=noninteractive
 USER root

 # Install dependencies
-RUN --mount=type=cache,target=/var/cache/apt \
-    apt-get update
-
-RUN --mount=type=cache,target=/var/cache/apt \
+RUN apt-get update && \
     apt-get install -y --no-install-recommends \
     build-essential \
     cmake \
@@ -49,19 +46,14 @@ RUN --mount=type=cache,target=/var/cache/apt \
     ncurses-term \
     wget

-# Required to build imgui-bundle on arm64 as no prebuilt pip wheel is provided
-RUN --mount=type=cache,target=/var/cache/apt \
-    if [ "$(dpkg --print-architecture)" = "arm64" ]; then \
+# Install packages needed to build imgui-bundle on arm64 as no prebuilt pip wheel is provided
+RUN if [ "$(dpkg --print-architecture)" = "arm64" ]; then \
+    apt-get update && \
     apt-get install -y --no-install-recommends \
     libgl1-mesa-dev libopengl-dev libglx-dev \
     libx11-dev libxcursor-dev libxi-dev libxinerama-dev libxrandr-dev; \
     fi

-# Cleanup
-RUN --mount=type=cache,target=/var/cache/apt \
-    apt -y autoremove && apt clean autoclean && \
-    rm -rf /var/lib/apt/lists/*
-
 # Copy the Isaac Lab directory (files to exclude are defined in .dockerignore)
 COPY ../ ${ISAACLAB_PATH}

@@ -78,9 +70,10 @@ RUN ln -sf ${ISAACSIM_ROOT_PATH} ${ISAACLAB_PATH}/_isaac_sim
 RUN ${ISAACLAB_PATH}/isaaclab.sh -p -m pip install toml

 # Install apt dependencies for extensions that declare them in their extension.toml
-RUN --mount=type=cache,target=/var/cache/apt \
-    ${ISAACLAB_PATH}/isaaclab.sh -p ${ISAACLAB_PATH}/tools/install_deps.py apt ${ISAACLAB_PATH}/source && \
-    apt -y autoremove && apt clean autoclean && \
+RUN ${ISAACLAB_PATH}/isaaclab.sh -p ${ISAACLAB_PATH}/tools/install_deps.py apt ${ISAACLAB_PATH}/source
+
+# Apt Cleanup
+RUN apt-get -y autoremove && apt-get clean && \
     rm -rf /var/lib/apt/lists/*

 # for singularity usage, have to create the directories that will binded
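
With the BuildKit cache mounts removed, the base image rebuilds with a plain docker build. A minimal smoke-test sketch, assuming a hypothetical local tag and omitting any --build-arg values the Dockerfile may require:

    # Rebuild the base image from the repository root (hypothetical tag;
    # required --build-arg values, if any, are omitted for brevity).
    docker build -f docker/Dockerfile.base -t isaac-lab-base:local .

    # The cleanup layer deletes the apt lists, so this should print nothing
    # (assuming no entrypoint overrides the command).
    docker run --rm isaac-lab-base:local ls /var/lib/apt/lists/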

docs/source/overview/imitation-learning/humanoids_imitation.rst

Lines changed: 70 additions & 3 deletions

@@ -408,6 +408,8 @@ The robot picks up an object at the initial location (point A) and places it at
 updated pre-trained policies with separate upper and lower body policies for flexibility. They have been verified in the real world and can be
 directly deployed. Users can also train their own locomotion or whole-body control policies using the AGILE framework.

+.. _generate-the-manipulation-dataset:
+
 Generate the manipulation dataset
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -554,16 +556,18 @@ To generate the locomanipulation dataset, use the following command:
554556
--lift_step 60 \
555557
--navigate_step 130 \
556558
--output_file ./datasets/generated_dataset_g1_locomanipulation_sdg.hdf5 \
557-
--enable_cameras
559+
--enable_cameras \
560+
--randomize_placement \
561+
--visualizer kit
558562
559563
.. note::
560564

561565
The input dataset (``--dataset``) should be the manipulation dataset generated in the previous step. You can specify any output filename using the ``--output_file_name`` parameter.
562566

563567
The key parameters for locomanipulation dataset generation are:
564568

565-
* ``--lift_step 70``: Number of steps for the lifting phase of the manipulation task. This should mark the point immediately after the robot has grasped the object.
566-
* ``--navigate_step 120``: Number of steps for the navigation phase between locations. This should make the point where the robot has lifted the object and is ready to walk.
569+
* ``--lift_step 60``: Number of steps for the lifting phase of the manipulation task. This should mark the point immediately after the robot has grasped the object.
570+
* ``--navigate_step 130``: Number of steps for the navigation phase between locations. This should make the point where the robot has lifted the object and is ready to walk.
567571
* ``--output_file``: Name of the output dataset file
568572

569573
This process creates a dataset where the robot performs the manipulation task at different locations, requiring it to navigate between points while maintaining the learned manipulation behaviors. The resulting dataset can be used to train policies that combine both locomotion and manipulation capabilities.
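
The lift and navigate step counts above index into the source demonstration, so it is worth a quick structural look at the generated file before training. A sketch only, assuming the hdf5-tools package is installed; it lists the group/dataset tree without assuming any particular internal layout.

    # List the structure of the generated dataset (requires hdf5-tools,
    # e.g. sudo apt-get install hdf5-tools).
    h5ls -r ./datasets/generated_dataset_g1_locomanipulation_sdg.hdf5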

@@ -590,3 +594,66 @@ in the GR00T N1.5 repository. An example closed-loop policy rollout is shown in

 The policy shown above uses the camera image, hand poses, hand joint positions, object pose, and base goal pose as inputs.
 The output of the model is the target base velocity, hand poses, and hand joint positions for the next several timesteps.
+
+Use NuRec Background in Locomanipulation SDG
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The `NuRec assets <https://docs.isaacsim.omniverse.nvidia.com/5.1.0/assets/usd_assets_nurec.html#neural-volume-rendering>`__
+are neural volumes reconstructed from real-world captures. When integrated into the locomanipulation SDG workflow, these
+assets allow you to generate synthetic data in photorealistic environments that mirror the real world.
+
+You can load your own USD or USDZ file. It must include a neural reconstruction for rendering and a collision mesh,
+set to invisible, that enables physical interaction.
+
+Pre-constructed assets are available via the `PhysicalAI Robotics NuRec <https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-NuRec>`__
+dataset. Some of them are captured from a humanoid viewpoint to match the camera view of the humanoid robot.
+
+For example, when using the asset ``hand_hold-voyager-babyboom``, the relevant files are:
+
+- ``stage.usdz``: a USDZ archive that bundles the 3D Gaussian splatting volume (``volume.nurec``), a collision mesh (``mesh.usd``), etc.
+- ``occupancy_map.yaml`` and ``occupancy_map.png``: the occupancy map used for path planning and navigation.
+
+Download the files and place them under ``<PATH_TO_USD_ASSET>``.
+
+Ensure you have the manipulation dataset from the previous step. You can also download a pre-recorded
+annotated dataset as in :ref:`Generate the manipulation dataset <generate-the-manipulation-dataset>`
+and place it under ``<DATASET_FOLDER>/dataset_annotated_g1_locomanip.hdf5``.
+
+Then run the following command:
+
+.. code:: bash
+
+   ./isaaclab.sh -p scripts/imitation_learning/locomanipulation_sdg/generate_data.py \
+       --device cpu \
+       --kit_args="--enable isaacsim.replicator.mobility_gen" \
+       --task="Isaac-G1-SteeringWheel-Locomanipulation" \
+       --dataset <DATASET_FOLDER>/dataset_annotated_g1_locomanip.hdf5 \
+       --num_runs 1 \
+       --lift_step 60 \
+       --navigate_step 130 \
+       --enable_pinocchio \
+       --output_file <DATASET_FOLDER>/generated_dataset_g1_locomanipulation_sdg_with_background.hdf5 \
+       --enable_cameras \
+       --visualizer kit \
+       --background_usd_path <PATH_TO_USD_ASSET>/stage.usdz \
+       --background_occupancy_yaml_file <PATH_TO_USD_ASSET>/occupancy_map.yaml \
+       --init_camera_view \
+       --randomize_placement
+
+The key parameters are:
+
+- ``--background_usd_path``: Path to the NuRec USD asset.
+- ``--background_occupancy_yaml_file``: Path to the occupancy map file.
+- ``--high_res_video``: Enable high-resolution video recording for the ego-centric camera view.
+- ``--init_camera_view``: Set the viewport camera behind the robot at the start of the episode.
+
+On successful task completion, an HDF5 dataset is generated containing camera observations. You can convert
+the ego-centric view to MP4:
+
+.. code:: bash
+
+   ./isaaclab.sh -p scripts/tools/hdf5_to_mp4.py \
+       --input_file <DATASET_FOLDER>/generated_dataset_g1_locomanipulation_sdg_with_background.hdf5 \
+       --output_dir <DATASET_FOLDER>/ \
+       --input_keys robot_pov_cam
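
For the NuRec background step above, the asset files can be fetched from the Hugging Face dataset with huggingface-cli. A sketch only: the folder layout inside the repository (here assumed to be hand_hold-voyager-babyboom/*) is an assumption, so verify the actual paths on the dataset page.

    # Download the NuRec asset files (the in-repo path is assumed; files may
    # need to be moved so stage.usdz sits directly under <PATH_TO_USD_ASSET>).
    huggingface-cli download nvidia/PhysicalAI-Robotics-NuRec \
        --repo-type dataset \
        --include "hand_hold-voyager-babyboom/*" \
        --local-dir <PATH_TO_USD_ASSET>

    # stage.usdz is a USDZ package (a zip archive); list its contents to confirm
    # the neural volume (volume.nurec) and collision mesh (mesh.usd) are bundled.
    unzip -l <PATH_TO_USD_ASSET>/stage.usdz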
