
Commit 7143c07

Update README with recent highlights, benchmarks, and support matrix fixes

Add new Recent News entries for Qwen3, ESM2 low-precision (NVFP4/MXFP8), Mixtral MoE, ESM2 PEFT, and Llama3 context parallelism. Add benchmark figures for ESM2 low-precision on B300 and Llama3 70B CP on GB300. Update the support matrix with new model/recipe rows and fix stale WIP statuses. Fix typos ("bionemo2" → "BioNeMo Framework") and remove a stray character in the amplify README.

Signed-off-by: Timur Rvachov <trvachov@nvidia.com>
1 parent 46112e7

File tree

2 files changed (+16, -8 lines)


README.md

Lines changed: 16 additions & 6 deletions
```diff
@@ -43,6 +43,11 @@ cd bionemo-framework/bionemo-recipes/recipes/esm2_native_te/
 
 ## Recent News
 
+- 03/09/2026 [Qwen2.5 / Qwen3 model](bionemo-recipes/models/qwen/) with TE acceleration, FP8/MXFP8, KV-cache inference, and bidirectional HF checkpoint conversion.
+- 03/05/2026 [ESM2 NVFP4 and MXFP8](bionemo-recipes/recipes/esm2_native_te/README.md#low-precision-performance-benchmarks) low-precision training — up to **2,367 TFLOPS/GPU** on NVIDIA B300 at 15B scale with per-layer precision control.
+- 02/23/2026 [Mixtral MoE model](bionemo-recipes/models/mixtral/) with TE `GroupedLinear` for efficient parallel expert computation, FP8/FP4 support, and HF conversion.
+- 02/13/2026 [ESM2 PEFT recipe](bionemo-recipes/recipes/esm2_peft_te/) for LoRA fine-tuning with sequence packing support.
+- 01/14/2026 [Llama3 Context Parallelism](bionemo-recipes/recipes/llama3_native_te/README.md#performance-benchmarks) — scaling Llama 3 70B to 144K context on 36x GB300 NVL36 with ~65% MFU.
 - 10/27/2025 [CodonFM recipe](https://github.com/NVIDIA/bionemo-framework/tree/main/bionemo-recipes/recipes/codonfm_ptl_te) released! This is an accelerated version of the original [research codebase](https://github.com/NVIDIA-Digital-Bio/CodonFM) with [scientific preprint](https://research.nvidia.com/labs/dbr/assets/data/manuscripts/nv-codonfm-preprint.pdf).
 - 09/30/2025 Megatron/NeMo 5D parallel BioNeMo Framework image v2.7 [released on NGC](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/containers/bionemo-framework) for both x86 and ARM CPUs.
 - 09/01/2025 [bionemo-recipes](https://github.com/NVIDIA/bionemo-framework/tree/main/bionemo-recipes) goes live! Lightweight and portable examples with state-of-the-art training performance you can riff on to meet your needs.
```
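The MXFP8/NVFP4 entries above refer to microscaling (MX) formats, in which a small block of values shares one power-of-two scale so that outliers in one block don't destroy precision elsewhere. A rough NumPy illustration of the block-scaling idea follows; the block size, element range, and the coarse rounding grid standing in for the FP8 cast are assumptions for illustration, not Transformer Engine's actual implementation:

```python
import numpy as np

def mx_quantize(x, block=32, elem_max=448.0):
    """Toy MX-style quantization: each block of `block` values shares one
    power-of-two scale, chosen so the block's max magnitude fits the element
    range (448 is the max normal value of FP8 E4M3). The low-precision
    element cast is faked here with a coarse rounding grid."""
    blocks = x.reshape(-1, block)
    amax = np.abs(blocks).max(axis=1, keepdims=True)
    # Shared per-block power-of-two scale (E8M0-style exponent).
    scale = 2.0 ** np.ceil(np.log2(np.maximum(amax, 1e-30) / elem_max))
    q = np.round(blocks / scale * 16.0) / 16.0  # stand-in for the FP8 cast
    return q, scale

def mx_dequantize(q, scale):
    """Rescale each block and flatten back to the original shape."""
    return (q * scale).reshape(-1)

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
q, scale = mx_quantize(x)
x_hat = mx_dequantize(q, scale)
# Relative reconstruction error stays small because each block's scale
# tracks that block's own magnitude.
rel_err = np.abs(x_hat - x).max() / np.abs(x).max()
```

Because every scale is a power of two, rescaling is exact in floating point, which is part of why these formats are attractive for training.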
```diff
@@ -61,13 +66,18 @@ A core use-case of the BioNeMo Framework is to help digital biology scientists a
 | ---------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- | -------------- | ----------- | ------------- | ------ | ---------------- | ------ | ------------------- |
 | `models/`<br>`amplify` | TE accelerated protein BERT, pushed to HuggingFace | ✅ Active |||| 🚧 WIP || 🚧 WIP |
 | `models/`<br>`esm2` | TE accelerated protein BERT, pushed to HuggingFace | ✅ Active |||||||
-| `models/`<br>`llama3` | TE accelerated Llama 3 | ✅ Active || 🚧 WIP ||| 🚧 WIP | 🚧 WIP |
+| `models/`<br>`llama3` | TE accelerated Llama 3 | ✅ Active || 🚧 WIP ||| | |
 | `models/`<br>`geneformer` | TE accelerated single-cell BERT | 🚧 WIP ||| 🚧 WIP | 🚧 WIP | 🚧 WIP | 🚧 WIP |
 | `recipes/`<br>`codonfm_ptl_te` | Recipe for [CodonFM](https://research.nvidia.com/labs/dbr/assets/data/manuscripts/nv-codonfm-preprint.pdf)'s Encodon using TE | ✅ Active || 🚧 WIP ||| 🚧 WIP | 🚧 WIP |
 | `recipes/`<br>`esm2_accelerate_te` | Recipe for ESM2 TE + HF Accelerate | ✅ Active || 🚧 WIP |||| 🚧 WIP |
-| `recipes/`<br>`esm2_native_te` | Recipe for ESM2 TE + native PyTorch | ✅ Active |||||| 🚧 WIP |
+| `recipes/`<br>`esm2_native_te` | Recipe for ESM2 TE + native PyTorch | ✅ Active |||||| |
 | `recipes/`<br>`geneformer_native_te_mfsdp_fp8` | Recipe for Geneformer HF model | 🚧 WIP |||||| 🚧 WIP |
-| `recipes/`<br>`llama3_native_te` | Recipe for Llama 3 TE + native PyTorch | ✅ Active || 🚧 WIP ||| 🚧 WIP | 🚧 WIP |
+| `recipes/`<br>`llama3_native_te` | Recipe for Llama 3 TE + native PyTorch | ✅ Active || 🚧 WIP |||||
+| `models/`<br>`mixtral` | TE accelerated MoE model | ✅ Active || 🚧 WIP |||| 🚧 WIP |
+| `models/`<br>`qwen` | TE accelerated Qwen2.5/Qwen3 | ✅ Active || 🚧 WIP |||| 🚧 WIP |
+| `recipes/`<br>`esm2_peft_te` | Recipe for ESM2 LoRA fine-tuning | ✅ Active ||||| 🚧 WIP ||
+| `recipes/`<br>`evo2_megatron` | Recipe for Evo2 via Megatron Bridge | 🚧 WIP |||||||
+| `recipes/`<br>`fp8_analysis` | FP8 training analyzer & heatmap tool | ✅ Active | N/A | N/A | N/A | N/A | N/A | N/A |
 | `recipes/`<br>`vit` | Recipe for Vision Transformer | 🚧 WIP |||||| 🚧 WIP |
 
 </small>
```
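The new `esm2_peft_te` row covers LoRA fine-tuning. The core idea can be sketched in plain NumPy; the dimensions, names, and init scheme below are illustrative only, not the recipe's actual API:

```python
import numpy as np

# Toy LoRA: freeze the pretrained weight W (d_out x d_in) and train only a
# low-rank pair B (d_out x r) and A (r x d_in); the effective weight is
# W + (alpha / r) * B @ A, so far fewer parameters receive gradients.
rng = np.random.default_rng(42)
d_in, d_out, r, alpha = 64, 64, 8, 16.0

W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01      # trainable, small random init
B = np.zeros((d_out, r))                       # trainable, zero init

def lora_forward(x):
    """Base projection plus the low-rank adapter path."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((4, d_in))
base_out = x @ W.T

# With B zero-initialized, the adapter contributes nothing before training,
# so the adapted model starts out exactly matching the frozen model.
adapter_is_noop = np.allclose(lora_forward(x), base_out)

# Trainable-parameter savings versus full fine-tuning of W.
n_lora, n_full = A.size + B.size, W.size
```

Here the adapter trains 1,024 parameters instead of the 4,096 in `W`; at real model scale the ratio is far more dramatic, which is what makes PEFT recipes attractive.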
```diff
@@ -113,7 +123,7 @@ BioNeMo Framework is part of a larger ecosystem of NVIDIA Biopharma products. Ge
 
 ## Documentation Resources
 
-- **Official Documentation:** Contents of `sub-packages` including user guides, API references, and troubleshooting, are documented on our [official documentation](https://docs.nvidia.com/bionemo-framework/latest/). Nightly builds of this documentation is available on [BioNeMo Framework GitHub Pages](https://nvidia.github.io/bionemo-framework/)
+- **Official Documentation:** Documentation for sub-packages, including user guides, API references, and troubleshooting, is available on our [official documentation](https://docs.nvidia.com/bionemo-framework/latest/). Nightly builds are available on [BioNeMo Framework GitHub Pages](https://nvidia.github.io/bionemo-framework/).
 
 - **🚧 In-Progress Documentation 🚧:** `bionemo-recipes` documentation is currently work in progress, however the recipes are meant to be self-documented and easy to understand—we suggest you throw them into your favorite genai code assistant!
 
```
```diff
@@ -136,8 +146,8 @@ docker run --rm -it \
 
 #### Initializing 3rd-party dependencies as git submodules
 
-The NeMo and Megatron-LM dependencies are included as git submodules in bionemo2. The pinned commits for these submodules represent the "last-known-good" versions of these packages
-that are confirmed to be working with bionemo2 (and those that are tested in CI).
+The NeMo and Megatron-LM dependencies are included as git submodules in BioNeMo Framework. The pinned commits for these submodules represent the "last-known-good" versions of these packages
+that are confirmed to be working with BioNeMo Framework (and those that are tested in CI).
 
 To initialize these sub-modules when cloning the repo, add the `--recursive` flag to the git clone command:
 
```
bionemo-recipes/models/amplify/README.md

Lines changed: 0 additions & 2 deletions
```diff
@@ -117,5 +117,3 @@ Or, upload all models at once with:
 ```bash
 for dir in *; do huggingface-cli upload nvidia/$(basename "$dir") "$dir/"; done
 ```
-
-z
```
