Commit 5316fe2

Merge branch 'main' into requirements-custom-blocks

2 parents: f274df4 + 3105848

108 files changed: +1289 additions, −401 deletions
.github/workflows/pr_modular_tests.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -117,7 +117,7 @@ jobs:
 
       - name: Install dependencies
         run: |
-          uv pip install -e ".[quality,test]"
+          uv pip install -e ".[quality]"
          #uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
          uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
          uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps
```

.github/workflows/pr_tests.yml

Lines changed: 7 additions & 12 deletions
```diff
@@ -92,7 +92,6 @@ jobs:
            runner: aws-general-8-plus
            image: diffusers/diffusers-pytorch-cpu
            report: torch_example_cpu
-
     name: ${{ matrix.config.name }}
 
     runs-on:
@@ -114,9 +113,8 @@
 
       - name: Install dependencies
         run: |
-          uv pip install -e ".[quality,test]"
-          #uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
-          uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
+          uv pip install -e ".[quality]"
+          uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
           uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps
 
       - name: Environment
@@ -191,7 +189,7 @@
 
       - name: Install dependencies
         run: |
-          uv pip install -e ".[quality,test]"
+          uv pip install -e ".[quality]"
 
       - name: Environment
         run: |
@@ -218,8 +216,6 @@
 
   run_lora_tests:
     needs: [check_code_quality, check_repository_consistency]
-    strategy:
-      fail-fast: false
 
     name: LoRA tests with PEFT main
 
@@ -242,14 +238,13 @@
 
       - name: Install dependencies
         run: |
-          uv pip install -e ".[quality,test]"
+          uv pip install -e ".[quality]"
           # TODO (sayakpaul, DN6): revisit `--no-deps`
           uv pip install -U peft@git+https://github.com/huggingface/peft.git --no-deps
           uv pip install -U tokenizers
           uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps
-          #uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
-          uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
-
+          uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
+
       - name: Environment
         run: |
           python utils/print_env.py
@@ -275,6 +270,6 @@
         if: ${{ always() }}
         uses: actions/upload-artifact@v6
         with:
-          name: pr_main_test_reports
+          name: pr_lora_test_reports
           path: reports
 
```

.github/workflows/pr_tests_gpu.yml

Lines changed: 3 additions & 11 deletions
```diff
@@ -131,8 +131,7 @@ jobs:
         run: |
           uv pip install -e ".[quality]"
           uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
-          #uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
-          uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
+          uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
 
       - name: Environment
         run: |
@@ -199,16 +198,10 @@
 
       - name: Install dependencies
         run: |
-          # Install pkgs which depend on setuptools<81 for pkg_resources first with no build isolation
-          uv pip install pip==25.2 setuptools==80.10.2
-          uv pip install --no-build-isolation k-diffusion==0.0.12
-          uv pip install --upgrade pip setuptools
-          # Install the rest as normal
           uv pip install -e ".[quality]"
           uv pip install peft@git+https://github.com/huggingface/peft.git
           uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
-          #uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
-          uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
+          uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
 
       - name: Environment
         run: |
@@ -269,8 +262,7 @@
           nvidia-smi
       - name: Install dependencies
         run: |
-          #uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
-          uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
+          uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
           uv pip install -e ".[quality,training]"
 
       - name: Environment
```

.github/workflows/push_tests.yml

Lines changed: 3 additions & 11 deletions
```diff
@@ -76,8 +76,7 @@ jobs:
         run: |
           uv pip install -e ".[quality]"
           uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
-          #uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
-          uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
+          uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
       - name: Environment
         run: |
           python utils/print_env.py
@@ -126,16 +125,10 @@
 
       - name: Install dependencies
         run: |
-          # Install pkgs which depend on setuptools<81 for pkg_resources first with no build isolation
-          uv pip install pip==25.2 setuptools==80.10.2
-          uv pip install --no-build-isolation k-diffusion==0.0.12
-          uv pip install --upgrade pip setuptools
-          # Install the rest as normal
           uv pip install -e ".[quality]"
           uv pip install peft@git+https://github.com/huggingface/peft.git
           uv pip uninstall accelerate && uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git
-          #uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
-          uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
+          uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
 
       - name: Environment
         run: |
@@ -187,8 +180,7 @@
       - name: Install dependencies
         run: |
           uv pip install -e ".[quality,training]"
-          #uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
-          uv pip uninstall transformers huggingface_hub && uv pip install transformers==4.57.1
+          uv pip uninstall transformers huggingface_hub && uv pip install --prerelease allow -U transformers@git+https://github.com/huggingface/transformers.git
       - name: Environment
         run: |
           python utils/print_env.py
```

.github/workflows/push_tests_mps.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -41,7 +41,7 @@ jobs:
         shell: arch -arch arm64 bash {0}
         run: |
           ${CONDA_RUN} python -m pip install --upgrade pip uv
-          ${CONDA_RUN} python -m uv pip install -e ".[quality,test]"
+          ${CONDA_RUN} python -m uv pip install -e ".[quality]"
           ${CONDA_RUN} python -m uv pip install torch torchvision torchaudio
           ${CONDA_RUN} python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
           ${CONDA_RUN} python -m uv pip install transformers --upgrade
```

docs/source/en/api/pipelines/qwenimage.md

Lines changed: 7 additions & 1 deletion
```diff
@@ -29,7 +29,7 @@ Qwen-Image comes in the following variants:
 | Qwen-Image-Edit Plus | [Qwen/Qwen-Image-Edit-2509](https://huggingface.co/Qwen/Qwen-Image-Edit-2509) |
 
 > [!TIP]
-> [Caching](../../optimization/cache) may also speed up inference by storing and reusing intermediate outputs.
+> See the [Caching](../../optimization/cache) guide to speed up inference by storing and reusing intermediate outputs.
 
 ## LoRA for faster inference
 
@@ -190,6 +190,12 @@ For detailed benchmark scripts and results, see [this gist](https://gist.github.
 - all
 - __call__
 
+## QwenImageLayeredPipeline
+
+[[autodoc]] QwenImageLayeredPipeline
+- all
+- __call__
+
 ## QwenImagePipelineOutput
 
 [[autodoc]] pipelines.qwenimage.pipeline_output.QwenImagePipelineOutput
```

docs/source/en/training/distributed_inference.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -111,7 +111,7 @@ if __name__ == "__main__":
 Call `torchrun` to run the inference script and use the `--nproc_per_node` argument to set the number of GPUs to use.
 
 ```bash
-torchrun run_distributed.py --nproc_per_node=2
+torchrun --nproc_per_node=2 run_distributed.py
 ```
 
 ## device_map
````

examples/custom_diffusion/test_custom_diffusion.py

Lines changed: 4 additions & 0 deletions
```diff
@@ -17,6 +17,9 @@
 import os
 import sys
 import tempfile
+import unittest
+
+from diffusers.utils import is_transformers_version
 
 
 sys.path.append("..")
@@ -30,6 +33,7 @@
 logger.addHandler(stream_handler)
 
 
+@unittest.skipIf(is_transformers_version(">=", "4.57.5"), "Size mismatch")
 class CustomDiffusion(ExamplesTestsAccelerate):
     def test_custom_diffusion(self):
         with tempfile.TemporaryDirectory() as tmpdir:
```
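The decorator added above skips the whole test class when the installed transformers version is too new. A minimal, standalone sketch of the same version-gating pattern (the `version_gate` helper below is illustrative only; the real check lives in `diffusers.utils.is_transformers_version`):

```python
import operator
import unittest


def version_gate(installed: str, op_str: str, required: str) -> bool:
    """Compare two dotted version strings, mirroring the shape of the
    is_transformers_version(">=", "4.57.5") call in the diff above.
    Illustrative helper, not the diffusers implementation."""
    ops = {">=": operator.ge, ">": operator.gt, "<=": operator.le,
           "<": operator.lt, "==": operator.eq, "!=": operator.ne}
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return ops[op_str](as_tuple(installed), as_tuple(required))


class ExampleTests(unittest.TestCase):
    # Skip whenever the (hypothetical) installed version 4.58.0 meets the gate,
    # the same shape as the decorator added to CustomDiffusion.
    @unittest.skipIf(version_gate("4.58.0", ">=", "4.57.5"), "Size mismatch")
    def test_gated(self):
        self.fail("never runs while the gate is active")
```

Run with `python -m unittest` to see the test reported as skipped rather than failed.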

src/diffusers/hooks/_common.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -48,6 +48,7 @@
     torch.nn.ConvTranspose2d,
     torch.nn.ConvTranspose3d,
     torch.nn.Linear,
+    torch.nn.Embedding,
     # TODO(aryan): look into torch.nn.LayerNorm, torch.nn.GroupNorm later, seems to be causing some issues with CogVideoX
     # because of double invocation of the same norm layer in CogVideoXLayerNorm
 )
```

src/diffusers/loaders/lora_pipeline.py

Lines changed: 4 additions & 0 deletions
```diff
@@ -5472,6 +5472,10 @@ def lora_state_dict(
             logger.warning(warn_msg)
             state_dict = {k: v for k, v in state_dict.items() if "dora_scale" not in k}
 
+        is_peft_format = any(k.startswith("base_model.model.") for k in state_dict)
+        if is_peft_format:
+            state_dict = {k.replace("base_model.model.", "diffusion_model."): v for k, v in state_dict.items()}
+
         is_ai_toolkit = any(k.startswith("diffusion_model.") for k in state_dict)
         if is_ai_toolkit:
             state_dict = _convert_non_diffusers_flux2_lora_to_diffusers(state_dict)
```
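The new branch normalizes PEFT-exported LoRA keys (prefixed `base_model.model.`) into the `diffusion_model.` layout that the existing ai-toolkit conversion path already handles. A standalone sketch of that remapping, using toy keys in place of real checkpoint tensors (the function name is ours, not the diffusers API):

```python
def normalize_peft_keys(state_dict: dict) -> dict:
    """Rewrite PEFT-style LoRA keys so they match the ai-toolkit
    ``diffusion_model.`` prefix handled by the downstream converter."""
    if any(k.startswith("base_model.model.") for k in state_dict):
        state_dict = {
            k.replace("base_model.model.", "diffusion_model."): v
            for k, v in state_dict.items()
        }
    return state_dict


# Toy example: the value stands in for a LoRA weight tensor.
sd = {"base_model.model.blocks.0.attn.lora_A.weight": 1}
print(normalize_peft_keys(sd))
# {'diffusion_model.blocks.0.attn.lora_A.weight': 1}
```

After this pass, `any(k.startswith("diffusion_model.") ...)` is true, so PEFT checkpoints fall through to the same `_convert_non_diffusers_flux2_lora_to_diffusers` call as ai-toolkit ones.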
