IDDM: Integrated Design Diffusion Model

English Document

About the Model

The diffusion models implemented here are the classic DDPM, DDIM, and PLMS, from the papers "Denoising Diffusion Probabilistic Models", "Denoising Diffusion Implicit Models", and "Pseudo Numerical Methods for Diffusion Models on Manifolds".

We named this project IDDM: Integrated Design Diffusion Model. In this project we reproduce the models, implement the trainers and generators, and improve and optimize some of the algorithms and network structures. This repository is actively maintained.
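
As quick background on what these samplers invert, the DDPM forward process has the closed form x_t = √(ᾱ_t)·x_0 + √(1−ᾱ_t)·ε with ε ~ N(0, I), following the DDPM paper. A minimal sketch of this noising step, as illustration rather than this repository's code:

    import torch

    def q_sample(x0: torch.Tensor, alpha_bar_t: torch.Tensor) -> torch.Tensor:
        # Closed-form DDPM forward noising: sample x_t directly from x_0
        # without iterating through the intermediate timesteps.
        eps = torch.randn_like(x0)
        return torch.sqrt(alpha_bar_t) * x0 + torch.sqrt(1 - alpha_bar_t) * eps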

If you have any questions, please search the existing issues first. If that does not resolve them, you can join our QQ group 949120343, open a new issue, or email me at chenyu1998424@gmail.com / chairc1998@163.com

Repository Structure

Integrated Design Diffusion Model
├── config
│   ├── choices.py
│   └── version.py
├── datasets
│   └── dataset_demo
│       ├── class_1
│       ├── class_2
│       └── class_3
├── model
│   ├── modules
│   │   ├── activation.py
│   │   ├── attention.py
│   │   ├── block.py
│   │   ├── conv.py
│   │   ├── ema.py
│   │   └── module.py
│   ├── networks
│   │   ├── sr
│   │   │   └── srv1.py
│   │   ├── base.py
│   │   ├── cspdarkunet.py
│   │   └── unet.py
│   └── samples
│       ├── base.py
│       ├── ddim.py
│       ├── ddpm.py
│       └── plms.py
├── results
├── sr
│   ├── dataset.py
│   ├── demo.py
│   ├── interface.py
│   └── train.py
├── test
│   ├── noising_test
│   │   ├── landscape
│   │   └── noise
│   └── test_module.py
├── tools
│   ├── deploy.py
│   ├── generate.py
│   └── train.py
├── utils
│   ├── checkpoint.py
│   ├── initializer.py
│   ├── logger.py
│   ├── lr_scheduler.py
│   └── utils.py
├── webui
│   └── web.py
└── weight

Next Steps

  • 1. Add cosine learning rate schedule (2023-07-31)
  • 2. Use a better-performing U-Net network model (2023-11-09)
  • 3. Generate larger-size images (2023-11-09)
  • 4. Multi-GPU distributed training (2023-07-15)
  • 5. Fast cloud-server deployment and API (2023-08-28)
  • 6. Add DDIM sampling method (2023-08-03)
  • 7. Support other image generation tasks (2023-09-16)
  • 8. Super-resolution enhancement of low-resolution generated images [SR model performance TBD] (2024-02-18)
  • 9. Use a latent approach to reduce VRAM consumption
  • 10. Refactor the overall model structure (2023-12-06)
  • 11. Build a visual web UI (2024-01-23)
  • 12. Add PLMS sampling method (2024-03-12)

Training

Note

The GPU environment used for training in this README is as follows: the models were trained and tested on an NVIDIA RTX 3060 (6GB VRAM), an NVIDIA RTX 2080Ti (11GB VRAM), and two NVIDIA RTX 6000 GPUs (24GB each, 48GB total, for distributed training). All of the GPUs above train normally.

Start Your First Training (cifar10 example, single-GPU mode)

  1. Import the dataset

    First, upload the dataset to the target folder datasets [issue]. After uploading, the folder layout (e.g. the cifar10 folder holds all the classes, and the class0 folder holds all images belonging to class0) looks like the list below:

     datasets
     └── cifar10
         ├── class0
         ├── class1
         ├── class2
         ├── class3
         ├── class4
         ├── class5
         ├── class6
         ├── class7
         ├── class8
         └── class9

    Your pre-training preparation is now complete. (A quick dataset-layout sanity check is sketched after this list.)

  2. Set training parameters

    Open the train.py file and modify the parser parameters inside if __name__ == "__main__":;

    Set --conditional to True; this is required for multi-class training, and optional for single-class training;

    Set --run_name to the folder name you want to create, e.g. cifar_exp1;

    Set --dataset_path to /your/local/or/remote/server/path/datasets/cifar10;

    Set --result_path to /your/local/or/remote/server/path/results;

    Set --num_classes to 10, your total number of classes;

    Set further parameters as needed; if a CUDA out of memory error occurs, reduce --batch_size and --num_workers;

    Among the custom parameters, you can choose different samplers with --sample, e.g. ddpm or ddim, and different networks with --network, e.g. unet or cspdarkunet. The activation function --act, optimizer --optim, mixed precision training --amp, learning rate schedule --lr_func, and other parameters are also customizable.

    See Training Parameters for the detailed commands.

  3. Wait for training

    After clicking run, the project creates a cifar_exp1 folder under results. It stores the training log, model training files, model EMA files, model optimizer files, the most recently saved files of the run, and the images generated during evaluation.

  4. View results

    Open the results/cifar_exp1 folder to view the training results.
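
A quick way to sanity-check the class-per-folder layout above before starting a run is to load it with torchvision's standard ImageFolder. This is a generic, optional check rather than part of this repository's code; the path is a placeholder:

    # Optional layout check: a class-per-folder dataset should load directly.
    import torchvision

    dataset = torchvision.datasets.ImageFolder(root="/your/path/datasets/cifar10")
    print(dataset.classes)  # expected: ['class0', 'class1', ..., 'class9']
    print(len(dataset))     # total number of images discovered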

↓↓↓↓↓↓↓↓↓↓ Below: the various training modes and detailed training-parameter explanations ↓↓↓↓↓↓↓↓↓↓

Standard Training

  1. Take the landscape dataset as an example and place it in the datasets folder. The dataset root is /your/path/datasets/landscape, the images live under /your/path/datasets/landscape/images, and the image paths look like /your/path/datasets/landscape/images/*.jpg

  2. Open the train.py file, find the --dataset_path parameter, and set its path to the dataset root, e.g. /your/path/datasets/landscape

  3. Set the necessary parameters such as --sample, --conditional, --run_name, --epochs, --batch_size, --image_size, --result_path, etc.; unset parameters use the defaults. There are two ways to set parameters: either edit the parser inside if __name__ == "__main__": in the train.py file (we recommend this way), or run the following commands from the console in /your/path/Integrated-Design-Diffusion-Model/tools:
    Conditional training command

    python train.py --sample ddpm --conditional --run_name df --epochs 300 --batch_size 16 --image_size 64 --num_classes 10 --dataset_path /your/dataset/path --result_path /your/save/path

    Unconditional training command

    python train.py --sample ddpm --run_name df --epochs 300 --batch_size 16 --image_size 64 --dataset_path /your/dataset/path --result_path /your/save/path
  4. Wait for training to finish

  5. If training is interrupted for any reason [issue], you can resume it: in the train.py file, first set --resume to True, then set the epoch number at which training was interrupted, then specify the folder of that run (run_name), and finally run the file. Alternatively, use the following commands:
    Conditional resume-training command

    # With --start_epoch specified: resume from the weights saved at that epoch
    python train.py --resume --start_epoch 10 --sample ddpm --conditional --run_name df --epochs 300 --batch_size 16 --image_size 64 --num_classes 10 --dataset_path /your/dataset/path --result_path /your/save/path
    # Without --start_epoch: the last-saved weights are used by default
    python train.py --resume --sample ddpm --conditional --run_name df --epochs 300 --batch_size 16 --image_size 64 --num_classes 10 --dataset_path /your/dataset/path --result_path /your/save/path

    Unconditional resume-training command

    python train.py --resume --start_epoch 10 --sample ddpm --run_name df --epochs 300 --batch_size 16 --image_size 64 --dataset_path /your/dataset/path --result_path /your/save/path
    # Without --start_epoch: the last-saved weights are used by default
    python train.py --resume --sample ddpm --run_name df --epochs 300 --batch_size 16 --image_size 64 --dataset_path /your/dataset/path --result_path /your/save/path
  6. Pretrained models are published with each major Release, so keep an eye out. To use a pretrained model [issue], first download a model whose network, image_size, act, and other parameters match yours to any local folder, then simply set --pretrain and --pretrain_path in train.py (a generic loading sketch follows this list). You can also use the following commands for pretrained training:
    Conditional training command with a pretrained model

    python train.py --pretrain --pretrain_path /your/pretrain/path/model.pt --sample ddpm --conditional --run_name df --epochs 300 --batch_size 16 --image_size 64 --dataset_path /your/dataset/path --result_path /your/save/path

    Unconditional training command with a pretrained model

    python train.py --pretrain --pretrain_path /your/pretrain/path/model.pt --sample ddpm --run_name df --epochs 300 --batch_size 16 --image_size 64 --dataset_path /your/dataset/path --result_path /your/save/path
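
For reference, restoring a pretrained checkpoint in PyTorch usually follows the pattern below. This is a generic sketch, not the repository's actual loading code: the placeholder network, the "model" key, and strict=False are all assumptions.

    import torch
    import torch.nn as nn

    # Placeholder network standing in for the repo's UNet (illustrative only).
    net = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.SiLU(),
                        nn.Conv2d(64, 3, 3, padding=1))

    checkpoint = torch.load("/your/pretrain/path/model.pt", map_location="cpu")
    state_dict = checkpoint.get("model", checkpoint)  # accept a wrapped dict or a raw state dict
    net.load_state_dict(state_dict, strict=False)     # strict=False tolerates renamed or extra keys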

Distributed Training

  1. The basic configuration is similar to standard training; note that distributed training requires setting --distributed. To keep distributed training from being enabled carelessly, we gate it on a few basic conditions, such as args.distributed, torch.cuda.device_count() > 1, and torch.cuda.is_available().

  2. Set the necessary parameters, such as --main_gpu and --world_size. --main_gpu is usually set to the main GPU, which handles validation, testing, and weight saving, and runs on a single card only. The value of world_size corresponds to the number of GPUs actually used or the number of distributed nodes.

  3. There are two ways to set parameters: either edit the parser inside if __name__ == "__main__": in the train.py file, or run the following commands from the console in /your/path/Integrated-Design-Diffusion-Model/tools:

    Conditional training command

    python train.py --sample ddpm --conditional --run_name df --epochs 300 --batch_size 16 --image_size 64 --num_classes 10 --dataset_path /your/dataset/path --result_path /your/save/path --distributed --main_gpu 0 --world_size 2

    Unconditional training command

    python train.py --sample ddpm --run_name df --epochs 300 --batch_size 16 --image_size 64 --dataset_path /your/dataset/path --result_path /your/save/path --distributed --main_gpu 0 --world_size 2
  4. Wait for training; interruption recovery works the same as in standard training. (A generic multi-GPU outline is sketched below.)
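
For intuition, the usual PyTorch pattern behind a --distributed switch looks like the sketch below: one process per GPU, joined into a process group and wrapped in DistributedDataParallel. This is a generic outline under assumed defaults (nccl backend, localhost rendezvous), not the repository's exact implementation:

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def setup(rank: int, world_size: int) -> None:
        # One process per GPU; all processes join a single process group.
        os.environ.setdefault("MASTER_ADDR", "localhost")
        os.environ.setdefault("MASTER_PORT", "12355")
        dist.init_process_group("nccl", rank=rank, world_size=world_size)
        torch.cuda.set_device(rank)

    def wrap(model: torch.nn.Module, rank: int) -> DDP:
        # Gradients are averaged across processes on each backward pass.
        return DDP(model.to(rank), device_ids=[rank])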

[Figure: IDDM distributed training process]

Training Parameters

Parameter details

| Parameter | Conditional | Usage | Type | Description |
| --- | --- | --- | --- | --- |
| --seed | | Initialization seed | int | Sets the initialization seed so that generated images are reproducible |
| --conditional | | Enable conditional training | bool | If enabled, custom configuration becomes available, such as the number of classes and the classifier-free guidance interpolation weight |
| --sample | | Sampling method | str | Sets the sampler type; currently supports ddpm and ddim |
| --network | | Training network | str | Sets the training network; currently supports UNet and CSPDarkUNet |
| --run_name | | File name | str | File name used to initialize the model and to save information |
| --epochs | | Total epochs | int | Total number of training epochs |
| --batch_size | | Batch size | int | Training batch size |
| --num_workers | | Number of loader workers | int | Number of subprocesses used for data loading; uses a lot of CPU and memory but speeds up training |
| --image_size | | Input image size | int | Input image size; input and output sizes adapt to it |
| --dataset_path | | Dataset path | str | For conditional datasets such as cifar10, one folder per class with the path pointing to the parent folder; for unconditional datasets, all images in one folder with the path pointing to that folder |
| --amp | | Mixed precision training | bool | Enables mixed precision training; effectively reduces VRAM usage, but training precision and results are not guaranteed |
| --optim | | Optimizer | str | Optimizer selection; currently supports adam and adamw |
| --act | | Activation function | str | Activation function selection; currently supports gelu, silu, relu, relu6 and lrelu |
| --lr | | Learning rate | float | Initial learning rate |
| --lr_func | | Learning rate schedule | str | Sets the learning rate schedule; currently supports linear, cosine and warmup_cosine |
| --result_path | | Save path | str | Save path |
| --save_model_interval | | Save during training | bool | Whether to save models during training, so models can be selected based on visualized samples; if False, only the final model is saved |
| --save_model_interval_epochs | | Model save period | int | Saves a model every X epochs |
| --start_model_interval | | Epoch at which saving starts | int | Epoch number from which per-epoch saving starts; saves disk space. Defaults to -1 (disabled); if set, a .pt file is saved every epoch from that point on. Must be enabled together with --save_model_interval |
| --vis | | Visualize dataset information | bool | Enables dataset-information visualization, so models can be selected based on visualized generated samples |
| --num_vis | | Number of visualization images | int | Number of visualization images to generate; if unset, defaults to the number of dataset classes |
| --image_format | | Generated image format | str | Format of the images generated during training; defaults to png |
| --noise_schedule | | Noise schedule | str | The method used to add noise to the model |
| --resume | | Resume interrupted training | bool | Set to True to resume training. Note: if the interrupted epoch falls outside the --start_model_interval range, resuming from it does not work. For example, if saving starts at epoch 100 and the interruption happens at epoch 50, no checkpoint exists for epoch 50; since an xxx_last.pt file is saved on every run, the last-saved model must be used to resume in that case |
| --start_epoch | | Interrupted epoch number | int | Epoch number at which training was interrupted; the model automatically loads the checkpoint for this epoch |
| --pretrain | | Pretrained-model training | bool | Whether to load a pretrained model for training |
| --pretrain_path | | Pretrained model path | str | Path from which the pretrained model is loaded |
| --use_gpu | | Specify the GPU to run on | int | Sets the GPU to use in normal training; the input is the GPU index |
| --distributed | | Distributed training | bool | Enables distributed training |
| --main_gpu | | Main GPU for distributed training | int | Sets the main GPU in distributed training |
| --world_size | | Distributed world size | int | World size for distributed training; corresponds to the number of GPUs actually used or the number of distributed nodes |
| --num_classes | ✓ | Number of classes | int | Number of classes, used to distinguish classes |
| --cfg_scale | ✓ | classifier-free guidance interpolation weight | int | classifier-free guidance interpolation weight, for better generation quality |
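
To make the --cfg_scale weight above concrete: classifier-free guidance interpolates between an unconditional and a conditional noise prediction at each sampling step. The sketch below shows the standard formulation from the classifier-free guidance literature, as illustration rather than this repository's code:

    import torch

    def cfg_interpolate(pred_uncond: torch.Tensor, pred_cond: torch.Tensor,
                        cfg_scale: float) -> torch.Tensor:
        # Start from the unconditional prediction and move toward (or past,
        # for cfg_scale > 1) the conditional one.
        # torch.lerp(a, b, w) computes a + w * (b - a).
        return torch.lerp(pred_uncond, pred_cond, cfg_scale)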

Generation

  1. Open the generate.py file, find the --weight_path parameter, and change its path to your model weight path, e.g. /your/path/weight/model.pt

  2. Set the necessary parameters such as --conditional, --generate_name, --num_images, --num_classes, --class_name, --image_size, --result_path, etc.; unset parameters use the defaults. There are two ways to set parameters: either edit the parser inside if __name__ == "__main__": in the generate.py file, or run the following commands from the console in /your/path/Integrated-Design-Diffusion-Model/tools:
    Conditional generation command (versions above 1.1.1)

    python generate.py --generate_name df --num_images 8 --class_name 0 --image_size 64 --weight_path /your/path/weight/model.pt --sample ddpm

    Unconditional generation command (versions above 1.1.1)

    python generate.py --generate_name df --num_images 8 --image_size 64 --weight_path /your/path/weight/model.pt --sample ddpm

    Conditional generation command (version 1.1.1 and below)

    python generate.py --conditional --generate_name df --num_images 8 --num_classes 10 --class_name 0 --image_size 64 --weight_path /your/path/weight/model.pt --sample ddpm --network unet --act gelu 

    Unconditional generation command (version 1.1.1 and below)

    python generate.py --generate_name df --num_images 8 --image_size 64 --weight_path /your/path/weight/model.pt --sample ddpm --network unet --act gelu 
  3. Wait for generation to finish
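
For background, the practical difference between the ddpm and ddim samplers above lies in the reverse update: DDIM can skip timesteps deterministically. The sketch below shows the deterministic DDIM update (eta = 0) following the DDIM paper, in standard notation; it is illustrative, not the repository's sampler:

    import torch

    def ddim_step(x_t: torch.Tensor, eps_pred: torch.Tensor,
                  alpha_bar_t: torch.Tensor, alpha_bar_prev: torch.Tensor) -> torch.Tensor:
        # Recover the predicted clean image x_0, then jump to the previous timestep.
        x0_pred = (x_t - torch.sqrt(1 - alpha_bar_t) * eps_pred) / torch.sqrt(alpha_bar_t)
        return torch.sqrt(alpha_bar_prev) * x0_pred + torch.sqrt(1 - alpha_bar_prev) * eps_pred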

Generation Parameters

Parameter details

| Parameter | Conditional | Usage | Type | Description |
| --- | --- | --- | --- | --- |
| --conditional | | Enable conditional generation | bool | If enabled, custom configuration becomes available, such as the number of classes and the classifier-free guidance interpolation weight |
| --generate_name | | File name | str | File name used to initialize the model and to save information |
| --image_size | | Input image size | int | Input image size; input and output sizes adapt to it. If set to -1 with conditional generation enabled, the model outputs one image per class |
| --image_format | | Generated image format | str | Generated image format, such as jpg, png or jpeg; png is recommended for better generation quality |
| --num_images | | Number of generated images | int | Number of images generated per run |
| --weight_path | | Weight path | str | Path to the model weights; required for generation |
| --result_path | | Save path | str | Save path |
| --sample | | Sampling method | str | Sets the sampler type; currently supports ddpm and ddim (not required for models after version 1.1.1) |
| --network | | Training network | str | Sets the training network; currently supports UNet and CSPDarkUNet (not required for models after version 1.1.1) |
| --act | | Activation function | str | Activation function selection; currently supports gelu, silu, relu, relu6 and lrelu. If it does not match the trained model, mosaic artifacts appear (not required for models after version 1.1.1) |
| --num_classes | ✓ | Number of classes | int | Number of classes, used to distinguish classes (not required for models after version 1.1.1) |
| --class_name | ✓ | Class name | int | Class index, used to generate a specific class; if set to -1, the model outputs one image per class |
| --cfg_scale | ✓ | classifier-free guidance interpolation weight | int | classifier-free guidance interpolation weight, for better generation quality |

Results

We trained on the following 4 datasets with --conditional enabled, the DDPM sampler, 64×64 images, gelu activation, a learning rate of 3e-4 with the linear schedule, and 300 epochs: cifar10, NEUDET, NRSD-MN, and WOOD. The results are shown below:
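
Assembled from the settings just listed, a run reproducing these results would look roughly like the following command (paths are placeholders; --num_classes and the batch size depend on the dataset and GPU):

    python train.py --sample ddpm --conditional --run_name cifar_exp1 --epochs 300 --batch_size 16 --image_size 64 --act gelu --lr 3e-4 --lr_func linear --num_classes 10 --dataset_path /your/dataset/path --result_path /your/save/path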

cifar10

[Generated samples: cifar_244_ema, cifar_294_ema]

NEUDET

[Generated samples: neudet_290_ema, neudet_270_ema, neudet_276_ema, neudet_265_ema, neudet_240_ema, neudet_244_ema, neudet_298_ema]

NRSD

[Generated samples: nrsd_180_ema, nrsd_188_ema, nrsd_194_ema, nrsd_203_ema, nrsd_210_ema, nrsd_217_ema, nrsd_218_ema, nrsd_248_ema, nrsd_276_ema, nrsd_285_ema, nrsd_298_ema]

WOOD

[Generated sample: wood_495]

Anime face (just for fun)

[Generated samples: model_428_ema, model_488_ema, model_497_ema, model_499_ema, model_459_ema]

Generating 160×160 (arbitrary large-size) images from the 64×64 model

Of course, starting from the 64×64 base model, we can generate 160×160 NEU-DET images in the generate.py file (single-image output; each image consumes 21GB of VRAM). Please note this [issue]: for defect-texture-style images without prominent feature objects, directly generating large sizes causes no such problems, e.g. the NRSD and NEU datasets. Images that contain backgrounds with specific, prominent features, such as cifar10 and CelebA-HQ, need super-resolution or resizing to increase the size. If you really need large images, train directly on high-resolution images, provided you have enough VRAM. Detailed images are shown below:

[160×160 NEU-DET samples: model_499_ema ×6]
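
Concretely, since --image_size is adaptive (see the generation parameters table), large-size generation amounts to raising it at generation time. An illustrative command, with placeholder paths:

    python generate.py --generate_name df --num_images 1 --image_size 160 --weight_path /your/path/weight/model.pt --sample ddpm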

Deployment

To be continued

Citation

If you use this project for experiments in an academic paper, please cite our project where possible; we appreciate it. The citation format can be found at this website: https://zenodo.org/records/10866129.

@software{chen_2024_10866129,
  author       = {Chen Yu},
  title        = {IDDM: Integrated Design Diffusion Model},
  month        = mar,
  year         = 2024,
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.10866129},
  url          = {https://doi.org/10.5281/zenodo.10866129}
}

Acknowledgements

People

@dome272

Organizations

@JetBrains

@Python

@Pytorch


License

This project is licensed under the Apache License, Version 2.0 (Copyright 2023 chairc). See http://www.apache.org/licenses/LICENSE-2.0 for the full text.
