
Commit 7299121

Authored by sayakpaul, zRzRzRzRzRzRzR, and yiyixuxu

cogview (#12973)
* init
* add
* add 1
* Update __init__.py
* rename
* 2
* update
* init with encoder
* merge2pipeline
* Update pipeline_glm_image.py
* remove sop
* remove useless func
* Update pipeline_glm_image.py
* up (cherry picked from commit cfe19a3)
* review for work only
* change place
* Update pipeline_glm_image.py
* update
* Update transformer_glm_image.py
* 1
* no negative_prompt for GLM-Image
* remove CogView4LoraLoaderMixin
* refactor attention processor.
* update
* fix
* use staticmethod
* update
* up
* up
* update
* Update glm_image.md
* 1
* Update pipeline_glm_image.py
* Update transformer_glm_image.py
* using new transformers impl
* support
* resolution change
* fix-copies
* Update src/diffusers/pipelines/glm_image/pipeline_glm_image.py (Co-authored-by: YiYi Xu <yixu310@gmail.com>)
* Update pipeline_glm_image.py
* use cogview4
* Update pipeline_glm_image.py
* Update pipeline_glm_image.py
* revert
* update
* batch support
* update
* version guard glm image pipeline
* validate prompt_embeds and prior_token_ids
* try docs.
* 4
* up
* up
* skip properly
* fix tests
* up
* up

Co-authored-by: zRzRzRzRzRzRzR <2448370773@qq.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
1 parent 3114f6a commit 7299121

File tree

16 files changed: +1915 −0 lines changed


docs/source/en/_toctree.yml

Lines changed: 4 additions & 0 deletions

```diff
@@ -346,6 +346,8 @@
       title: Flux2Transformer2DModel
     - local: api/models/flux_transformer
       title: FluxTransformer2DModel
+    - local: api/models/glm_image_transformer2d
+      title: GlmImageTransformer2DModel
     - local: api/models/hidream_image_transformer
       title: HiDreamImageTransformer2DModel
     - local: api/models/hunyuan_transformer2d
@@ -540,6 +542,8 @@
       title: Flux2
     - local: api/pipelines/control_flux_inpaint
       title: FluxControlInpaint
+    - local: api/pipelines/glm_image
+      title: GLM-Image
     - local: api/pipelines/hidream
       title: HiDream-I1
     - local: api/pipelines/hunyuandit
```
docs/source/en/api/models/glm_image_transformer2d.md

Lines changed: 18 additions & 0 deletions (new file)

```diff
@@ -0,0 +1,18 @@
+<!--Copyright 2025 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License. -->
+
+# GlmImageTransformer2DModel
+
+A Diffusion Transformer model for 2D data from [GLM-Image](https://huggingface.co/zai-org/GLM-Image).
+
+## GlmImageTransformer2DModel
+
+[[autodoc]] GlmImageTransformer2DModel
```
docs/source/en/api/pipelines/glm_image.md

Lines changed: 95 additions & 0 deletions (new file)

```diff
@@ -0,0 +1,95 @@
+<!--Copyright 2025 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License. -->
+
+# GLM-Image
+
+## Overview
+
+GLM-Image is an image generation model that adopts a hybrid autoregressive + diffusion decoder architecture, pushing the upper bound of visual fidelity and fine-grained detail. In general image generation quality it is on par with industry-standard LDM-based approaches, while showing significant advantages in knowledge-intensive image generation scenarios.
+
+Model architecture: a hybrid autoregressive + diffusion decoder design.
+
++ Autoregressive generator: a 9B-parameter model initialized from [GLM-4-9B-0414](https://huggingface.co/zai-org/GLM-4-9B-0414), with a vocabulary expanded to incorporate visual tokens. The model first generates a compact encoding of approximately 256 tokens, then expands it to 1K–4K tokens, corresponding to 1K–2K high-resolution image outputs. The AR model is available as the `GlmImageForConditionalGeneration` class in the `transformers` library.
++ Diffusion decoder: a 7B-parameter decoder based on a single-stream DiT architecture for latent-space image decoding. It is equipped with a Glyph Encoder text module, significantly improving accurate text rendering within images.
+
+Post-training with decoupled reinforcement learning: the model introduces a fine-grained, modular feedback strategy using the GRPO algorithm, substantially enhancing both semantic understanding and visual detail quality.
+
++ Autoregressive module: provides low-frequency feedback signals focused on aesthetics and semantic alignment, improving instruction following and artistic expressiveness.
++ Decoder module: delivers high-frequency feedback targeting detail fidelity and text accuracy, resulting in highly realistic textures, lighting, and color reproduction, as well as more precise text rendering.
+
+GLM-Image supports both text-to-image and image-to-image generation within a single model.
+
++ Text-to-image: generates high-detail images from textual descriptions, with particularly strong performance in information-dense scenarios.
++ Image-to-image: supports a wide range of tasks, including image editing, style transfer, multi-subject consistency, and identity-preserving generation for people and objects.
+
+This pipeline was contributed by [zRzRzRzRzRzRzR](https://github.com/zRzRzRzRzRzRzR). The codebase can be found [here](https://huggingface.co/zai-org/GLM-Image).
+
+## Usage examples
+
+### Text-to-image generation
+
+```python
+import torch
+from diffusers.pipelines.glm_image import GlmImagePipeline
+
+pipe = GlmImagePipeline.from_pretrained("zai-org/GLM-Image", torch_dtype=torch.bfloat16, device_map="cuda")
+prompt = "A beautifully designed modern food magazine style dessert recipe illustration, themed around a raspberry mousse cake. The overall layout is clean and bright, divided into four main areas: the top left features a bold black title 'Raspberry Mousse Cake Recipe Guide', with a soft-lit close-up photo of the finished cake on the right, showcasing a light pink cake adorned with fresh raspberries and mint leaves; the bottom left contains an ingredient list section, titled 'Ingredients' in a simple font, listing 'Flour 150g', 'Eggs 3', 'Sugar 120g', 'Raspberry puree 200g', 'Gelatin sheets 10g', 'Whipping cream 300ml', and 'Fresh raspberries', each accompanied by minimalist line icons (like a flour bag, eggs, sugar jar, etc.); the bottom right displays four equally sized step boxes, each containing high-definition macro photos and corresponding instructions, arranged from top to bottom as follows: Step 1 shows a whisk whipping white foam (with the instruction 'Whip egg whites to stiff peaks'), Step 2 shows a red-and-white mixture being folded with a spatula (with the instruction 'Gently fold in the puree and batter'), Step 3 shows pink liquid being poured into a round mold (with the instruction 'Pour into mold and chill for 4 hours'), Step 4 shows the finished cake decorated with raspberries and mint leaves (with the instruction 'Decorate with raspberries and mint'); a light brown information bar runs along the bottom edge, with icons on the left representing 'Preparation time: 30 minutes', 'Cooking time: 20 minutes', and 'Servings: 8'. The overall color scheme is dominated by creamy white and light pink, with a subtle paper texture in the background, featuring compact and orderly text and image layout with clear information hierarchy."
+image = pipe(
+    prompt=prompt,
+    height=32 * 32,
+    width=36 * 32,
+    num_inference_steps=30,
+    guidance_scale=1.5,
+    generator=torch.Generator(device="cuda").manual_seed(42),
+).images[0]
+
+image.save("output_t2i.png")
+```
+
+### Image-to-image generation
+
+```python
+import torch
+from diffusers.pipelines.glm_image import GlmImagePipeline
+from PIL import Image
+
+pipe = GlmImagePipeline.from_pretrained("zai-org/GLM-Image", torch_dtype=torch.bfloat16, device_map="cuda")
+image_path = "cond.jpg"
+prompt = "Replace the background of the snow forest with an underground station featuring an automatic escalator."
+image = Image.open(image_path).convert("RGB")
+image = pipe(
+    prompt=prompt,
+    image=[image],  # pass multiple images, e.g. [image, image1], for multi-image-to-image generation
+    height=33 * 32,
+    width=32 * 32,
+    num_inference_steps=30,
+    guidance_scale=1.5,
+    generator=torch.Generator(device="cuda").manual_seed(42),
+).images[0]
+
+image.save("output_i2i.png")
+```
+
++ Since the AR model used in GLM-Image is configured with `do_sample=True` and a temperature of `0.95` by default, generated images can vary significantly across runs. We do not recommend setting `do_sample=False`, as this may lead to incorrect or degenerate outputs from the AR model.
+
+## GlmImagePipeline
+
+[[autodoc]] pipelines.glm_image.pipeline_glm_image.GlmImagePipeline
+  - all
+  - __call__
+
+## GlmImagePipelineOutput
+
+[[autodoc]] pipelines.glm_image.pipeline_output.GlmImagePipelineOutput
```
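Both usage examples above express `height` and `width` as explicit multiples of 32 (e.g. `36 * 32`, `33 * 32`). As a hypothetical convenience (not part of the pipeline API; the helper name is invented for illustration), a small function can snap an arbitrary target resolution onto that 32-pixel grid:

```python
def snap_to_grid(pixels: int, base: int = 32) -> int:
    """Round a target dimension in pixels to the nearest multiple of `base` (at least `base`)."""
    return max(base, round(pixels / base) * base)

# Snap a rough 1080x1150 target onto the 32-pixel grid used in the examples above.
height = snap_to_grid(1150)  # 1152 == 36 * 32
width = snap_to_grid(1080)   # 1088 == 34 * 32
```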

src/diffusers/__init__.py

Lines changed: 5 additions & 0 deletions

```diff
@@ -23,6 +23,7 @@
     is_torchao_available,
     is_torchsde_available,
     is_transformers_available,
+    is_transformers_version,
 )
@@ -225,6 +226,7 @@
     "FluxControlNetModel",
     "FluxMultiControlNetModel",
     "FluxTransformer2DModel",
+    "GlmImageTransformer2DModel",
     "HiDreamImageTransformer2DModel",
     "HunyuanDiT2DControlNetModel",
     "HunyuanDiT2DModel",
@@ -492,6 +494,7 @@
     "FluxKontextPipeline",
     "FluxPipeline",
     "FluxPriorReduxPipeline",
+    "GlmImagePipeline",
     "HiDreamImagePipeline",
     "HunyuanDiTControlNetPipeline",
     "HunyuanDiTPAGPipeline",
@@ -979,6 +982,7 @@
     FluxControlNetModel,
     FluxMultiControlNetModel,
     FluxTransformer2DModel,
+    GlmImageTransformer2DModel,
     HiDreamImageTransformer2DModel,
     HunyuanDiT2DControlNetModel,
     HunyuanDiT2DModel,
@@ -1216,6 +1220,7 @@
     FluxKontextPipeline,
     FluxPipeline,
     FluxPriorReduxPipeline,
+    GlmImagePipeline,
     HiDreamImagePipeline,
     HunyuanDiTControlNetPipeline,
     HunyuanDiTPAGPipeline,
```

src/diffusers/models/__init__.py

Lines changed: 2 additions & 0 deletions

```diff
@@ -98,6 +98,7 @@
     _import_structure["transformers.transformer_easyanimate"] = ["EasyAnimateTransformer3DModel"]
     _import_structure["transformers.transformer_flux"] = ["FluxTransformer2DModel"]
     _import_structure["transformers.transformer_flux2"] = ["Flux2Transformer2DModel"]
+    _import_structure["transformers.transformer_glm_image"] = ["GlmImageTransformer2DModel"]
     _import_structure["transformers.transformer_hidream_image"] = ["HiDreamImageTransformer2DModel"]
     _import_structure["transformers.transformer_hunyuan_video"] = ["HunyuanVideoTransformer3DModel"]
     _import_structure["transformers.transformer_hunyuan_video15"] = ["HunyuanVideo15Transformer3DModel"]
@@ -208,6 +209,7 @@
         EasyAnimateTransformer3DModel,
         Flux2Transformer2DModel,
         FluxTransformer2DModel,
+        GlmImageTransformer2DModel,
         HiDreamImageTransformer2DModel,
         HunyuanDiT2DModel,
         HunyuanImageTransformer2DModel,
```

src/diffusers/models/transformers/__init__.py

Lines changed: 1 addition & 0 deletions

```diff
@@ -27,6 +27,7 @@
 from .transformer_easyanimate import EasyAnimateTransformer3DModel
 from .transformer_flux import FluxTransformer2DModel
 from .transformer_flux2 import Flux2Transformer2DModel
+from .transformer_glm_image import GlmImageTransformer2DModel
 from .transformer_hidream_image import HiDreamImageTransformer2DModel
 from .transformer_hunyuan_video import HunyuanVideoTransformer3DModel
 from .transformer_hunyuan_video15 import HunyuanVideo15Transformer3DModel
```
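Each registration in the `__init__.py` diffs above has two halves: a string entry in an `_import_structure` dict (so the heavy module is only imported when the class is first accessed) and a plain import for type checkers. A simplified, hypothetical sketch of that lazy-module pattern (diffusers' real `_LazyModule` lives in `diffusers.utils` and is more involved; stdlib modules stand in for transformer modules here):

```python
import importlib
import types

class LazyModule(types.ModuleType):
    """Resolve attributes to submodule imports on first access, then cache them."""

    def __init__(self, name: str, import_structure: dict):
        super().__init__(name)
        # Map each exported attribute name back to the module that defines it.
        self._attr_to_module = {
            attr: module for module, attrs in import_structure.items() for attr in attrs
        }

    def __getattr__(self, attr: str):
        module_name = self._attr_to_module.get(attr)
        if module_name is None:
            raise AttributeError(f"module {self.__name__!r} has no attribute {attr!r}")
        value = getattr(importlib.import_module(module_name), attr)
        setattr(self, attr, value)  # cache so __getattr__ is not hit again
        return value

# Demo: "math" is only imported the first time `sqrt` is accessed.
lazy = LazyModule("demo", {"json": ["dumps"], "math": ["sqrt"]})
print(lazy.sqrt(9.0))
```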

0 commit comments
