Deep Learning Basics: Notes on a DDPM-based Generative Model Demo (Part 1)

Jachin Zhang

Introduction

Diffusion models are a hot topic in computer vision today, and many tasks (image generation, image inpainting, and so on) are built on them. The crucial part of the algorithm is obtaining the noise \(\epsilon_\theta(\mathbf{x}_t,t)\) removed at each step of the reverse denoising process. Since this noise can hardly be computed directly from a formula, we fit it with a neural network. The current mainstream choice is a UNet with residual connections, which clearly outperforms a plain CNN in experiments.

This demo builds a generative model on top of DDPM and a UNet. The goal is to train it on a dataset so that it can generate images in that dataset's style (for example, we can train it on MNIST to generate grayscale images of handwritten digits). Going a step further, we will try to embed label features during training so the model can do conditional generation (e.g., passing in the label 0 makes it generate an image of the handwritten digit 0).

In this project we train on two datasets: MNIST and CIFAR-10.

Reference

This project is primarily based on the blog post 《扩散模型(Diffusion Model)详解:直观理解、数学原理、PyTorch 实现》 (Diffusion Model Explained: Intuition, Math, and a PyTorch Implementation), with improvements built on top of it.

I won't belabor the algorithmic theory of the model in this post; see my post 《学习笔记:扩散模型算法介绍》 (Study Notes: An Introduction to Diffusion Model Algorithms) or the reference blog mentioned above. Here I mainly document how I built the project.

Building the Project

Some small details of this project have been refactored since (such as the usage and parameters of a few functions), so I can't guarantee that copying all the code verbatim will run without errors. The overall framework is unchanged, though; if you hit a related error, please debug it yourself. There shouldn't be many, and they won't get in the way of understanding the project.

Dataset

To make parameter tuning easier, we first set up a configuration file to hold the various settings. Create options.yml and add the device and dataset information:

device: 'cuda:0'

dataset:
  name: 'mnist'
  root: './cache'
  img_shape:
    mnist: [1, 28, 28]
    cifar10: [3, 32, 32]

Create dataset.py with the following code:

import torchvision
from torchvision.transforms import transforms
from torch.utils.data import DataLoader

import yaml

with open('options.yml', 'r') as f:
    opt = yaml.safe_load(f)
dataset_opts = opt['dataset']


def get_dataset(batch_size):
    # Map pixel values to [-1, 1]; the single-element mean/std broadcasts
    # over all channels, so it works for both MNIST and CIFAR-10.
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5], std=[0.5])
    ])
    trainset = None
    dataset_name = dataset_opts['name']
    root = dataset_opts['root']

    if dataset_name == 'mnist':
        trainset = torchvision.datasets.MNIST(
            root=root,
            train=True,
            download=True,
            transform=transform
        )
    elif dataset_name == 'cifar10':
        trainset = torchvision.datasets.CIFAR10(
            root=root,
            train=True,
            download=True,
            transform=transform
        )

    trainloader = DataLoader(trainset, batch_size, shuffle=True)
    return trainloader

This code should be straightforward and needs little explanation. For now the configuration only supports the MNIST and CIFAR-10 datasets; I may extend it later.
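As a quick sanity check, we can pull one batch and inspect its shape and value range (a minimal sketch, assuming the MNIST configuration above is active):

from dataset import get_dataset

trainloader = get_dataset(batch_size=128)
x, label = next(iter(trainloader))
# Expect torch.Size([128, 1, 28, 28]) and values in [-1, 1] after normalization
print(x.shape, x.min().item(), x.max().item())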

The DDPM Model

Next we implement the relevant computations in a DDPM class. Add the following to options.yml:

ddpm:
  n_steps: 1000
  min_beta: 1e-4
  max_beta: 2e-2

The image diffusion process consists of n_steps steps. The beta value for each step can be generated as a linear sequence with torch.linspace(min_beta, max_beta, n_steps), so each time step gets its own beta. From these we can compute alpha and alpha_bar at every step via \[ \alpha_t=1-\beta_t,\quad \bar{\alpha}_t=\prod_{i=1}^t{\alpha_i} \]

Create ddpm.py and define the DDPM class:

import torch

class DDPM():
    def __init__(self,
                 device,
                 n_steps: int,
                 min_beta: float = 0.0001,
                 max_beta: float = 0.02):
        self.n_steps = n_steps
        self.betas = torch.linspace(min_beta, max_beta, n_steps).to(device)
        self.alphas = 1 - self.betas
        self.alphas_bars = torch.empty_like(self.alphas)
        # Running product of alphas gives alpha_bar at each time step
        product = 1
        for i, alpha in enumerate(self.alphas):
            product *= alpha
            self.alphas_bars[i] = product
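Incidentally, the loop above is just a cumulative product; a minimal standalone sketch (not part of the class) that computes the same values:

import torch

betas = torch.linspace(1e-4, 2e-2, 1000)
alphas = 1 - betas
alphas_bars = torch.cumprod(alphas, dim=0)  # same values as the loop above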

The forward-process method computes x_t (the image as it is progressively covered by noise) using the standard closed form:
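\[ \mathbf{x}_t=\sqrt{\bar{\alpha}_t}\,\mathbf{x}_0+\sqrt{1-\bar{\alpha}_t}\,\epsilon,\quad \epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I}) \]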

def sample_forward(self, x, t, eps=None):
    # alpha_bar for each sample in the batch, broadcastable over (C, H, W)
    alpha_bar = self.alphas_bars[t].reshape(-1, 1, 1, 1)
    if eps is None:
        eps = torch.randn_like(x)
    res = eps * torch.sqrt(1 - alpha_bar) + torch.sqrt(alpha_bar) * x
    return res
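A tiny usage sketch (with stand-in data, not real images): at the final time step the forward sample should be approximately standard Gaussian noise.

ddpm = DDPM('cpu', n_steps=1000)
x0 = torch.randn(4, 1, 28, 28)               # stand-in for a batch of images
t = torch.full((4,), 999, dtype=torch.long)  # the last time step
x_t = ddpm.sample_forward(x0, t)
print(x_t.mean().item(), x_t.std().item())   # both close to 0 and 1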

Next comes the reverse process, in which the DDPM uses the neural network's noise prediction at every denoising step to gradually restore x_t back to x_0 and complete image generation.
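Concretely, each reverse step draws \[ \mathbf{x}_{t-1}=\frac{1}{\sqrt{\alpha_t}}\left( \mathbf{x}_t-\frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(\mathbf{x}_t,t) \right)+\sigma_t\mathbf{z},\quad \mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I}) \] where \(\sigma_t^2\) is either \(\beta_t\) (the simple_var choice below) or \(\tilde{\beta}_t=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\beta_t\). This is the standard DDPM sampling update that the code below implements.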

def sample_backward(self, img_shape, net, device, simple_var=True):
    # Start from pure Gaussian noise and denoise step by step
    x = torch.randn(img_shape).to(device)
    net = net.to(device)
    for t in range(self.n_steps - 1, -1, -1):
        x = self.sample_backward_step(x, t, net, simple_var)
    return x

def sample_backward_step(self, x_t, t, net, simple_var=True):
    n = x_t.shape[0]  # batch size
    t_tensor = torch.tensor([t] * n, dtype=torch.long).to(x_t.device).unsqueeze(1)
    eps = net(x_t, t_tensor)

    if t == 0:
        noise = 0
    else:
        # simple_var selects between the two variance choices
        if simple_var:
            var = self.betas[t]
        else:
            var = (1 - self.alphas_bars[t-1]) / (1 - self.alphas_bars[t]) * self.betas[t]
        noise = torch.randn_like(x_t)
        noise *= torch.sqrt(var)

    mean = (x_t - (1 - self.alphas[t]) / torch.sqrt(1 - self.alphas_bars[t]) * eps) / \
        torch.sqrt(self.alphas[t])

    x_t = mean + noise
    return x_t  # updated image

Building the UNet

This part breaks into three pieces: a PositionalEncoding class that encodes the time step, a UNetBlock class that serves as the UNet's building block, and the UNet class that forms the network backbone.

Add the following to the configuration file:

network:
  channels: [32, 64, 128, 256, 512]
  pe_dim: 128
  residual: true

Create networks.py and import the necessary libraries:

import torch
import torch.nn as nn
import torch.nn.functional as F
import yaml

with open('options.yml', 'r') as f:
    opt = yaml.safe_load(f)
img_shape = opt['dataset']['img_shape'][opt['dataset']['name']]

We first implement the PositionalEncoding class, which is responsible for encoding the time step.

In diffusion models the time step is an important piece of information: it is strongly correlated with the noise added to or removed from the image at each step. The reason we don't just feed the model a scalar step index (0, 1, 2, ...) is that such raw values lack structure, and the model has a hard time learning useful temporal relations from them. It might wrongly latch onto the absolute magnitudes of these values (e.g., concluding that time step 10 matters more than time step 1, when in fact they matter equally), which is not what we want. In other words, encoding the time step turns time itself into a feature the model can learn from.

Encoding the time step also gives the model richer information and improves extensibility: later on we can try fusing class-label embeddings into the time-step encoding so the model can do conditional generation.

A very common choice is positional encoding, in particular the sine-cosine encoding. The Euclidean distance between encoded vectors still reflects the relative gap between the corresponding positions, which helps the model understand the relationship between different time steps t rather than merely which one is larger.

class PositionalEncoding(nn.Module):
    def __init__(self,
                 max_seq_len: int,
                 d_model: int):
        super().__init__()

        # Assume d_model is an even number for convenience
        assert d_model % 2 == 0

        pe = torch.zeros(max_seq_len, d_model)
        i_seq = torch.linspace(0, max_seq_len - 1, max_seq_len)
        j_seq = torch.linspace(0, d_model - 2, d_model // 2)
        pos, two_i = torch.meshgrid(i_seq, j_seq)
        pe_2i = torch.sin(pos / 1e4 ** (two_i / d_model))
        pe_2i_1 = torch.cos(pos / 1e4 ** (two_i / d_model))
        # Interleave the sin/cos columns into a (max_seq_len, d_model) table
        pe = torch.stack((pe_2i, pe_2i_1), 2).reshape(max_seq_len, d_model)

        # Store the table in a frozen Embedding layer
        self.embedding = nn.Embedding(max_seq_len, d_model)
        self.embedding.weight.data = pe
        self.embedding.requires_grad_(False)

    def forward(self, t):
        return self.embedding(t)

The class takes two init parameters:
- max_seq_len: the maximum length of the positional encoding, i.e. the maximum sequence length. Here we set it to the DDPM's number of time steps, n_steps.
- d_model: the dimension of the encoding vector (embedding). It must be even, because sine and cosine components alternate within the vector.

The class first creates an all-zero matrix pe of shape (max_seq_len, d_model) to store the positional encodings. i_seq generates the position indices pos (the index of each token in the sequence), while j_seq generates the even indices two_i used to compute the components along the d_model dimension. The time-step encoding is then computed as: \[ \begin{aligned} PE_{\left( pos,2i \right)}&=\sin \left( \frac{pos}{10000^{\frac{2i}{d_m}}} \right)\\ PE_{\left( pos,2i+1 \right)}&=\cos \left( \frac{pos}{10000^{\frac{2i}{d_m}}} \right)\\ \end{aligned} \]

Even indices receive the sine encoding and odd indices the cosine encoding. Stacking them and reshaping to pe's dimensions produces the complete time-step encoding.

The values of pe are then used as the weights of an Embedding layer, with gradient updates disabled so it acts as a fixed positional encoding.
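A small shape check (a sketch, assuming the imports at the top of networks.py):

pe = PositionalEncoding(max_seq_len=1000, d_model=128)
t = torch.randint(0, 1000, (4, 1))  # a batch of 4 time-step indices
print(pe(t).shape)                  # torch.Size([4, 1, 128])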

Next we implement the module the UNet is composed of: the UNetBlock class.

class UNetBlock(nn.Module):
    def __init__(self,
                 shape,
                 in_c,
                 out_c,
                 residual=False):
        super().__init__()
        self.ln = nn.LayerNorm(shape)
        self.conv1 = nn.Conv2d(in_c, out_c, kernel_size=3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(out_c, out_c, kernel_size=3, stride=1, padding=1)
        self.activation = nn.ReLU()
        self.residual = residual
        if residual:
            if in_c == out_c:
                self.residual_conv = nn.Identity()
            else:
                # 1x1 conv matches the channel count for the skip connection
                self.residual_conv = nn.Conv2d(in_c, out_c, kernel_size=1)

    def forward(self, x):
        out = self.activation(self.conv1(self.ln(x)))
        out = self.conv2(out)
        if self.residual:
            out += self.residual_conv(x)
        out = self.activation(out)
        return out

Not much to say here: normalization -> convolution -> activation -> convolution -> residual connection with the input (optional) -> activation -> output.
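A quick shape check (a sketch using the MNIST image shape):

block = UNetBlock(shape=(1, 28, 28), in_c=1, out_c=32, residual=True)
out = block(torch.randn(4, 1, 28, 28))
print(out.shape)  # torch.Size([4, 32, 28, 28])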

Now for the centerpiece: the UNet class, the core model that predicts the noise.

For an explanation of UNet and a brief reference implementation, see the post 《UNet结构介绍》 (An Introduction to the UNet Architecture). Below is a diagram of the structure we are about to implement:

[Figure: UNet architecture diagram]

Here is the implementation:

class UNet(nn.Module):
    def __init__(self,
                 n_steps,
                 channels=[10, 20, 40, 80],
                 pe_dim=10,
                 residual=False):
        super().__init__()
        C, H, W = img_shape[0], img_shape[1], img_shape[2]
        n_layers = len(channels)
        # Pre-compute the feature-map size at every resolution level
        Hs = [H]
        Ws = [W]
        cH = H
        cW = W
        for _ in range(n_layers - 1):
            cH //= 2
            cW //= 2
            Hs.append(cH)
            Ws.append(cW)

        self.pe = PositionalEncoding(n_steps, pe_dim)
        self.encoders = nn.ModuleList()
        self.decoders = nn.ModuleList()
        self.pe_linears_en = nn.ModuleList()
        self.pe_linears_de = nn.ModuleList()
        self.downs = nn.ModuleList()
        self.ups = nn.ModuleList()
        prev_channel = C

        # down blocks
        for channel, cH, cW in zip(channels[0:-1], Hs[0:-1], Ws[0:-1]):
            self.pe_linears_en.append(
                nn.Sequential(nn.Linear(pe_dim, prev_channel),
                              nn.ReLU(),
                              nn.Linear(prev_channel, prev_channel))
            )
            self.encoders.append(
                nn.Sequential(
                    UNetBlock((prev_channel, cH, cW),
                              prev_channel,
                              channel,
                              residual=residual),
                    UNetBlock((channel, cH, cW),
                              channel,
                              channel,
                              residual=residual)
                )
            )
            self.downs.append(nn.Conv2d(channel, channel, kernel_size=2, stride=2))
            prev_channel = channel

        # mid block
        self.pe_mid = nn.Linear(pe_dim, prev_channel)
        channel = channels[-1]
        self.mid = nn.Sequential(
            UNetBlock((prev_channel, Hs[-1], Ws[-1]),
                      prev_channel,
                      channel,
                      residual=residual),
            UNetBlock((channel, Hs[-1], Ws[-1]),
                      channel,
                      channel,
                      residual=residual)
        )
        prev_channel = channel

        # up blocks
        for channel, cH, cW in zip(channels[-2::-1], Hs[-2::-1], Ws[-2::-1]):
            self.pe_linears_de.append(nn.Linear(pe_dim, prev_channel))
            self.decoders.append(
                nn.Sequential(
                    UNetBlock((channel * 2, cH, cW),
                              channel * 2,
                              channel,
                              residual=residual),
                    UNetBlock((channel, cH, cW),
                              channel,
                              channel,
                              residual=residual)
                )
            )
            self.ups.append(nn.ConvTranspose2d(prev_channel, channel, kernel_size=2, stride=2))
            prev_channel = channel

        self.conv_out = nn.Conv2d(prev_channel, C, kernel_size=3, stride=1, padding=1)

    def forward(self, x, t):
        n = t.shape[0]
        t = self.pe(t)
        encoder_outs = []
        for pe_linear, encoder, down in zip(self.pe_linears_en, self.encoders, self.downs):
            # Project the time embedding to the current channel count and add it
            pe = pe_linear(t).reshape(n, -1, 1, 1)
            x = encoder(x + pe)
            encoder_outs.append(x)
            x = down(x)
        pe = self.pe_mid(t).reshape(n, -1, 1, 1)
        x = self.mid(x + pe)
        for pe_linear, decoder, up, encoder_out in zip(self.pe_linears_de, self.decoders,
                                                       self.ups, encoder_outs[::-1]):
            pe = pe_linear(t).reshape(n, -1, 1, 1)
            x = up(x)
            # Pad in case the upsampled map is smaller than the skip connection
            pad_x = encoder_out.shape[2] - x.shape[2]
            pad_y = encoder_out.shape[3] - x.shape[3]
            x = F.pad(x,
                      (pad_x // 2, pad_x - pad_x // 2, pad_y // 2, pad_y - pad_y // 2))
            x = torch.cat((encoder_out, x), dim=1)
            x = decoder(x + pe)
        x = self.conv_out(x)
        return x

We can also expose a helper function for building the network:

def build_network(n_steps: int,
                  channels: list = None,
                  pe_dim: int = None,
                  residual: bool = True):
    # Fall back to the UNet defaults when a value is not provided
    channels = channels if channels is not None else [10, 20, 40, 80]
    pe_dim = pe_dim if pe_dim is not None else 10
    return UNet(n_steps, channels, pe_dim, residual)
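A minimal forward-pass check (a sketch; it assumes the MNIST configuration is active in options.yml, so img_shape is [1, 28, 28]):

net = build_network(n_steps=1000, channels=[32, 64, 128, 256, 512], pe_dim=128)
x = torch.randn(4, 1, 28, 28)
t = torch.randint(0, 1000, (4, 1))
print(net(x, t).shape)  # torch.Size([4, 1, 28, 28]), the predicted noise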

Logging

To keep a good record of each training run for later inspection and comparison, we need to output and save logs properly. Python's built-in logging library does the job. Create logger.py with the following:

import logging
import os

def get_logger(log_root, log_name='log.log'):
    logger = logging.getLogger(__name__)
    logger.setLevel(level=logging.INFO)
    formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')

    assert os.path.isdir(log_root), f"{log_root} is not a directory"
    handler = logging.FileHandler(os.path.join(log_root, log_name))
    handler.setFormatter(formatter)

    logger.addHandler(handler)
    return logger

For more on the logging module, see this post: Python logger模块 - 博客园.
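Hypothetical usage, writing one INFO line into ./train.log:

from logger import get_logger

logger = get_logger('.', 'train.log')
logger.info('hello from the DDPM demo')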

Training

Add the training parameters to the configuration file:

train:
  n_epochs: 10
  batch_size: 128
  loss: 'MSE'
  lr: 1e-3
  resume: ~ # path to a checkpoint to resume training from, if any

First import the necessary packages and do the initialization:

import torch
import torch.nn as nn
from dataset import get_dataset
from networks import build_network
from ddpm import DDPM
from tqdm import tqdm

from tensorboardX import SummaryWriter
import os
import yaml
from datetime import datetime
from logger import get_logger

## initialization
with open('options.yml', 'r') as f:
    opt = yaml.safe_load(f)
curr_time = datetime.now().strftime("%Y%m%d-%H%M%S")
# Note the outer double quotes: nesting the same quote type inside an
# f-string is a syntax error before Python 3.12
save_root = f"models/{curr_time}-{opt['dataset']['name']}"
os.makedirs(os.path.join(save_root, 'ckpts'))
logger = get_logger(save_root, 'train.log')
train_opts = opt['train']
writer = SummaryWriter()

The main training function is nothing special:

def train(ddpm: DDPM, dataloader, net: nn.Module, device):
    n_epochs = train_opts['n_epochs']
    n_steps = ddpm.n_steps
    net = net.to(device)
    if train_opts['loss'] == 'MSE':
        loss_fn = nn.MSELoss()
    # TODO: more loss functions
    else:
        raise ValueError(f"Unknown loss function: {train_opts['loss']}")
    optimizer = torch.optim.Adam(net.parameters(), float(train_opts['lr']))

    logger.info("Start training.")
    step = 0
    resume_epochs_passed = 0

    ckpt_path: str = train_opts['resume']
    if ckpt_path is not None:
        if os.path.exists(ckpt_path):
            net.load_state_dict(torch.load(ckpt_path))
            print(f'Load model from {ckpt_path}.')
            logger.info(f'Load model from {ckpt_path}.')
            # Checkpoints are saved as 'epoch_{N}.pth', so recover N from the filename
            resume_epochs_passed = int(ckpt_path.split('/')[-1].split('.')[0].split('_')[-1]) + 1
            step += resume_epochs_passed * len(dataloader)

    for epoch in range(n_epochs):
        total_loss = 0
        truth_epoch = epoch + resume_epochs_passed
        for x, label in tqdm(dataloader, ncols=60):
            batch_size = x.shape[0]
            x = x.to(device)

            # Sample a random time step, add noise, and train the net to predict it
            t = torch.randint(0, n_steps, (batch_size, )).to(device)
            eps = torch.randn_like(x).to(device)
            x_t = ddpm.sample_forward(x, t, eps)
            eps_theta = net(x_t, t.reshape(batch_size, 1))
            loss = loss_fn(eps_theta, eps)
            writer.add_scalar('loss/step', loss, step)
            total_loss += loss.item()  # accumulate a float, not the autograd graph
            step += 1

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        epoch_loss = total_loss / len(dataloader)
        logger.info(f"epoch {truth_epoch} | loss {epoch_loss}")
        writer.add_scalar('loss/epoch', epoch_loss, truth_epoch)
        print(f'epoch {truth_epoch} | loss {epoch_loss}')
        save_path = os.path.join(save_root, 'ckpts', f'epoch_{truth_epoch}.pth')
        torch.save(net.state_dict(), save_path)
        print(f"Model checkpoint has been saved into {save_path}.")

    logger.info("Training stage finished.")

The program entry point:

if __name__ == '__main__':
    logger.info(f"dataset: {opt['dataset']['name']}")

    # set device
    device = opt['device']
    logger.info(f'device: {device}')

    # DDPM settings
    ddpm_opts = opt['ddpm']
    logger.info(f"ddpm:\n\t[n_steps: {ddpm_opts['n_steps']}]"
                f"\n\t[min_beta: {float(ddpm_opts['min_beta'])}]"
                f"\n\t[max_beta: {float(ddpm_opts['max_beta'])}]")
    ddpm = DDPM(device,
                ddpm_opts['n_steps'],
                float(ddpm_opts['min_beta']),
                float(ddpm_opts['max_beta'])
                )

    # network settings
    network_opts = opt['network']
    logger.info(f"network:\n\t[channels: {network_opts['channels']}]"
                f"\n\t[pe_dim: {network_opts['pe_dim']}]"
                f"\n\t[residual: {network_opts['residual']}]")
    net = build_network(n_steps=ddpm_opts['n_steps'],
                        channels=network_opts['channels'],
                        pe_dim=network_opts['pe_dim'],
                        residual=network_opts['residual'])

    # get dataloader
    dataloader = get_dataset(batch_size=train_opts['batch_size'])

    # start training
    logger.info(f"training options:\n\t[n_epochs: {train_opts['n_epochs']}]"
                f"\n\t[batch_size: {train_opts['batch_size']}]"
                f"\n\t[loss: {train_opts['loss']}]\n\t[lr: {train_opts['lr']}]")
    train(ddpm, dataloader, net, device)

Bad news: I accidentally deleted the TensorBoard data for this part of the experiments (sob). From what I observed, the training loss on MNIST was clearly lower than on CIFAR-10, and its fluctuations were also markedly smaller.

Testing

Add the test parameters to the configuration file:

test:
  output_dir: './results'
  ckpt_path: ~
  n_samples: 81

The n_samples parameter is the number of small images tiled into one large output image. Because MNIST and CIFAR-10 images are tiny, we can stitch a number of them into one big picture with a little post-processing, which also gives a sense of how stable the generation quality is. For the tiling to form a square grid, n_samples should be a perfect square (here 81 = 9×9).
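To see how the grid assembly works ahead of time, here is a standalone sketch of the einops rearrange used below, run on dummy data:

import einops
import torch

imgs = torch.rand(81, 1, 28, 28)  # pretend these are 81 generated samples
grid = einops.rearrange(imgs, '(b1 b2) c h w -> (b1 h) (b2 w) c', b1=9)
print(grid.shape)  # torch.Size([252, 252, 1]), a 9x9 tile of 28x28 images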

Since we have no test labels with which to score the model on a test set, we simply have it generate a batch of samples to check the training result. Create test.py with the following:

import os
from datetime import datetime

import cv2      # missing in the original listing; needed for cv2.imwrite
import einops   # missing in the original listing; needed for the grid rearrange
import numpy as np
import torch
import torch.nn as nn
import yaml

from ddpm import DDPM
from networks import build_network

curr_time = datetime.now().strftime("%Y%m%d-%H%M%S")

with open('options.yml', 'r') as f:
    opt = yaml.safe_load(f)
test_opts = opt['test']


def generate(ddpm: DDPM,
             net: nn.Module,
             output_path: str,
             n_sample: int,
             device,
             img_shape,
             simple_var=True):
    net = net.to(device)
    net = net.eval()
    C, H, W = img_shape[0], img_shape[1], img_shape[2]

    with torch.no_grad():
        shape = (n_sample, C, H, W)
        imgs = ddpm.sample_backward(shape,
                                    net,
                                    device,
                                    simple_var).detach().to(device)
        # Map from [-1, 1] back to [0, 255]
        imgs = (imgs + 1) / 2 * 255
        imgs = imgs.clamp(0, 255)
        # Tile the batch into one square grid image
        imgs = einops.rearrange(imgs,
                                '(b1 b2) c h w -> (b1 h) (b2 w) c',
                                b1=int(n_sample**0.5))
        imgs = imgs.cpu()
        imgs = imgs.numpy().astype(np.uint8)
        cv2.imwrite(output_path, imgs)


if __name__ == '__main__':
    # DDPM settings
    ddpm_opts = opt['ddpm']
    ddpm = DDPM(opt['device'],
                int(ddpm_opts['n_steps']),
                float(ddpm_opts['min_beta']),
                float(ddpm_opts['max_beta']))

    # network settings
    network_opts = opt['network']
    net = build_network(n_steps=ddpm_opts['n_steps'],
                        channels=network_opts['channels'],
                        pe_dim=network_opts['pe_dim'],
                        residual=network_opts['residual'])
    # load network checkpoint
    ckpt_path = test_opts['ckpt_path']
    assert os.path.exists(ckpt_path), f'{ckpt_path} is an invalid path.'
    net.load_state_dict(torch.load(ckpt_path))

    # output path settings
    output_dir = test_opts['output_dir']
    os.makedirs(output_dir, exist_ok=True)
    output_path = os.path.join(output_dir, f'{curr_time}.png')

    # other settings
    n_sample = test_opts['n_samples']
    device = opt['device']
    img_shape = opt['dataset']['img_shape'][opt['dataset']['name']]

    # sample images from noise
    generate(ddpm=ddpm,
             net=net,
             output_path=output_path,
             n_sample=n_sample,
             device=device,
             img_shape=img_shape,
             simple_var=True)

Results and Analysis

The model was trained for 10 epochs each on MNIST and CIFAR-10, generating one batch of images per dataset. First, a quick look at the results of the MNIST-trained model:

[Figure: random generation results after 10 epochs of training on MNIST]

The results are just about passable. The model does occasionally produce mysterious made-up digits (after all, it doesn't know what a digit means; it only generates images in a similar style, which here means a white symbol on a black background), and some of the generated digits are rather too scrawled. But with only 10 epochs of training, there is still room for improvement.

Next, over to team CIFAR-10:

[Figure: random generation results after 10 epochs of training on CIFAR-10]

It bombed.

It seems this model's capacity can't yet cope with even slightly more complex multi-channel image tasks: it isn't merely generating odd-looking objects, it fails to form any meaningful pattern at all, and each image is just a shifted patch of color.

Next steps for improvement:

  1. Add label information so the model can do conditional (label-driven) generation;

  2. Strengthen the model's learning capacity, aiming to improve its generation on CIFAR-10;

  3. Apply other optimizations to the model, such as adding self-attention.

To be continued in this series!

  • Title: Deep Learning Basics: Notes on a DDPM-based Generative Model Demo (Part 1)
  • Author: Jachin Zhang
  • Created at: 2025-03-05 22:00:27
  • Updated at: 2025-03-06 21:39:50
  • Link: https://jachinzhang1.github.io/2025/03/05/ddpm-project-1/
  • License: This work is licensed under CC BY-NC-SA 4.0.