
1.11.1 bugfix (#293)
* add request_id to params and filename (#285)

* animatediff_output.py: add request_id to filename

* Update animatediff_ui.py: add request_id param to AnimateDiffProcess

* allow request_id to be empty, and remove it from the WebUI

* move request_id to the last position in the init method

* remove request_id from the WebUI

---------

Co-authored-by: zhangruicheng <[email protected]>

* fix device

* request id

* add test

* fix test

* fix test

* fix test

* cheaper test

* cheaper test

* readme

* run one test at a time

* readme

* use ubuntu 20.04

---------

Co-authored-by: zhangrc <[email protected]>
Co-authored-by: zhangruicheng <[email protected]>
3 people committed Nov 8, 2023
1 parent df584be commit 59e1726
Showing 6 changed files with 195 additions and 6 deletions.
119 changes: 119 additions & 0 deletions .github/workflows/tests.yaml
@@ -0,0 +1,119 @@
name: Run AnimateDiff generation with Motion LoRA & Prompt Travel on CPU

on:
  push: {} # Remove the branch restriction to trigger the workflow for any branch

jobs:
  build:
    runs-on: ubuntu-20.04
    steps:
      - name: Checkout A1111
        uses: actions/checkout@v3
        with:
          repository: 'AUTOMATIC1111/stable-diffusion-webui'
          path: 'stable-diffusion-webui'
      - name: Checkout ControlNet
        uses: actions/checkout@v3
        with:
          repository: 'Mikubill/sd-webui-controlnet'
          path: 'stable-diffusion-webui/extensions/sd-webui-controlnet'
      - name: Checkout AnimateDiff
        uses: actions/checkout@v3
        with:
          repository: 'continue-revolution/sd-webui-animatediff'
          path: 'stable-diffusion-webui/extensions/sd-webui-animatediff'
      - name: Set up Python 3.11.4
        uses: actions/setup-python@v4
        with:
          python-version: 3.11.4
          cache: pip
          cache-dependency-path: |
            **/requirements*txt
            launch.py
      - name: Install test dependencies
        run: |
          pip install wait-for-it
          pip install -r requirements-test.txt
        working-directory: stable-diffusion-webui
        env:
          PIP_DISABLE_PIP_VERSION_CHECK: "1"
          PIP_PROGRESS_BAR: "off"
      - name: Setup environment
        run: python launch.py --skip-torch-cuda-test --exit
        working-directory: stable-diffusion-webui
        env:
          PIP_DISABLE_PIP_VERSION_CHECK: "1"
          PIP_PROGRESS_BAR: "off"
          TORCH_INDEX_URL: https://download.pytorch.org/whl/cpu
          WEBUI_LAUNCH_LIVE_OUTPUT: "1"
          PYTHONUNBUFFERED: "1"
      - name: Cache AnimateDiff models
        uses: actions/cache@v3
        with:
          path: stable-diffusion-webui/extensions/sd-webui-animatediff/model/
          key: animatediff-models-v1
      - name: Cache LoRA models
        uses: actions/cache@v3
        with:
          path: stable-diffusion-webui/models/Lora
          key: lora-models-v1
      - name: Download AnimateDiff model for testing
        run: |
          if [ ! -f "extensions/sd-webui-animatediff/model/mm_sd_v15_v2.ckpt" ]; then
            curl -Lo extensions/sd-webui-animatediff/model/mm_sd_v15_v2.ckpt "https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15_v2.ckpt?download=true"
          fi
        working-directory: stable-diffusion-webui
      - name: Download LoRA model for testing
        run: |
          if [ ! -d "models/Lora" ]; then
            mkdir models/Lora
          fi
          if [ ! -f "models/Lora/yoimiya.safetensors" ]; then
            curl -Lo models/Lora/yoimiya.safetensors "https://civitai.com/api/download/models/48374?type=Model&format=SafeTensor"
          fi
          if [ ! -f "models/Lora/v2_lora_TiltDown.ckpt" ]; then
            curl -Lo models/Lora/v2_lora_TiltDown.ckpt "https://huggingface.co/guoyww/animatediff/resolve/main/v2_lora_TiltDown.ckpt?download=true"
          fi
        working-directory: stable-diffusion-webui
      - name: Start test server
        run: >
          python -m coverage run
          --data-file=.coverage.server
          launch.py
          --skip-prepare-environment
          --skip-torch-cuda-test
          --test-server
          --do-not-download-clip
          --no-half
          --disable-opt-split-attention
          --use-cpu all
          --api-server-stop
          2>&1 | tee output.txt &
        working-directory: stable-diffusion-webui
      - name: Run tests
        run: |
          wait-for-it --service 127.0.0.1:7860 -t 600
          python -m pytest -vv --junitxml=test/results.xml --cov ./extensions/sd-webui-animatediff --cov-report=xml --verify-base-url ./extensions/sd-webui-animatediff/tests
        working-directory: stable-diffusion-webui
      - name: Kill test server
        if: always()
        run: curl -vv -XPOST http://127.0.0.1:7860/sdapi/v1/server-stop && sleep 10
      - name: Show coverage
        run: |
          python -m coverage combine .coverage*
          python -m coverage report -i
          python -m coverage html -i
        working-directory: stable-diffusion-webui
      - name: Upload main app output
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: output
          path: output.txt
      - name: Upload coverage HTML
        uses: actions/upload-artifact@v3
        if: always()
        with:
          name: htmlcov
          path: htmlcov
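For debugging the same flow outside CI, the readiness gate can be reproduced in a few lines of Python. The sketch below is a stand-in for `wait-for-it --service 127.0.0.1:7860 -t 600`, not part of this commit: it assumes a WebUI test server was launched locally with the flags above, and polls an API route until the server answers.

    import time
    import requests

    BASE_URL = "http://127.0.0.1:7860"  # address the workflow above waits on

    def wait_for_server(timeout: float = 600.0, poll_interval: float = 2.0) -> None:
        # Poll until the HTTP API answers, mirroring `wait-for-it --service 127.0.0.1:7860 -t 600`.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                requests.get(f"{BASE_URL}/sdapi/v1/options", timeout=5)  # any cheap API route works
                return
            except requests.ConnectionError:
                time.sleep(poll_interval)
        raise TimeoutError(f"server at {BASE_URL} did not come up within {timeout}s")

    if __name__ == "__main__":
        wait_for_server()
        print("server is up")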

5 changes: 3 additions & 2 deletions README.md
@@ -53,7 +53,7 @@ You might also be interested in another extension I created: [Segment Anything f
- `2023/10/21`: [v1.9.4](https://github.com/continue-revolution/sd-webui-animatediff/releases/tag/v1.9.4): Save prompt travel to output images, `Reverse` merged to `Closed loop` (See [WebUI Parameters](#webui-parameters)), remove `TimestepEmbedSequential` hijack, remove `hints.js`, better explanation of several context-related parameters.
- `2023/10/25`: [v1.10.0](https://github.com/continue-revolution/sd-webui-animatediff/releases/tag/v1.10.0): Support img2img batch. You need ControlNet installed to make it work properly (you do not need to enable ControlNet). See [ControlNet V2V](#controlnet-v2v) for more information.
- `2023/10/29`: [v1.11.0](https://github.com/continue-revolution/sd-webui-animatediff/releases/tag/v1.11.0): Support [HotShot-XL](https://github.com/hotshotco/Hotshot-XL) for SDXL. See [HotShot-XL](#hotshot-xl) for more information.
-- `2023/11/06`: [v1.11.1](https://github.com/continue-revolution/sd-webui-animatediff/releases/tag/v1.11.1): optimize VRAM to support any number of control images for ControlNet V2V, patch [encode_pil_to_base64](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/api/api.py#L104-L133) to support api return a video, save frames to `AnimateDIff/yy-mm-dd/`, recover from assertion error without restart.
+- `2023/11/06`: [v1.11.1](https://github.com/continue-revolution/sd-webui-animatediff/releases/tag/v1.11.1): optimize VRAM for ControlNet V2V, patch [encode_pil_to_base64](https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/master/modules/api/api.py#L104-L133) so the API can return a video, save frames to `AnimateDIff/yy-mm-dd/`, recover from assertion errors without a restart, add a test case, and add an optional [request id](#api) to the API.

For future update plan, please query [here](https://github.com/continue-revolution/sd-webui-animatediff/pull/224).

@@ -94,7 +94,8 @@ It is quite similar to the way you use ControlNet. API will return a video in ba
            'latent_scale': 32,       # Latent scale
            'last_frame': None,       # Optional last frame
            'latent_power_last': 1,   # Optional latent power for last frame
-           'latent_scale_last': 32   # Optional latent scale for last frame
+           'latent_scale_last': 32,  # Optional latent scale for last frame
+           'request_id': ''          # Optional request id. If provided, output filenames get it as a suffix
        }
    ]
}
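For illustration, a txt2img call that sets request_id might look like the sketch below. It is a minimal example, not part of this commit: the endpoint, args structure, and base64 return value follow the README above, while the prompt, request id value, and server address are placeholders.

    import base64
    import requests

    payload = {
        "prompt": "1girl, walking on the beach",  # placeholder prompt
        "steps": 20,
        "alwayson_scripts": {
            "AnimateDiff": {
                "args": [{
                    "enable": True,
                    "video_length": 16,
                    "request_id": "job-42",  # output filenames will end in "-job-42"
                }]
            }
        },
    }
    response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
    response.raise_for_status()
    video = base64.b64decode(response.json()["images"][0])  # API returns the video base64-encoded
    with open("result-job-42.gif", "wb") as f:  # container format depends on WebUI settings
        f.write(video)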
4 changes: 2 additions & 2 deletions scripts/animatediff_infv2v.py
@@ -111,12 +111,12 @@ def mm_cn_select(context: List[int]):
            if control.hint_cond.shape[0] > len(context):
                control.hint_cond_backup = control.hint_cond
                control.hint_cond = control.hint_cond[context]
-           control.hint_cond = control.hint_cond.to(device=shared.device)
+           control.hint_cond = control.hint_cond.to(device=devices.get_device_for("controlnet"))
            if control.hr_hint_cond is not None:
                if control.hr_hint_cond.shape[0] > len(context):
                    control.hr_hint_cond_backup = control.hr_hint_cond
                    control.hr_hint_cond = control.hr_hint_cond[context]
-               control.hr_hint_cond = control.hr_hint_cond.to(device=shared.device)
+               control.hr_hint_cond = control.hr_hint_cond.to(device=devices.get_device_for("controlnet"))
        # IPAdapter and Controlllite are always on CPU.
        elif control.control_model_type == ControlModelType.IPAdapter and control.control_model.image_emb.shape[0] > len(context):
            control.control_model.image_emb_backup = control.control_model.image_emb
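The device fix above replaces the global `shared.device` with ControlNet's own device lookup, so hint tensors follow the ControlNet unit even when it runs somewhere other than the main model (for example, on CPU via `--use-cpu controlnet`). A minimal sketch of the pattern, assuming the stock A1111 `modules.devices` helper:

    from modules import devices  # A1111 helper module

    def move_to_controlnet_device(tensor):
        # get_device_for honours per-component overrides such as `--use-cpu controlnet`,
        # whereas shared.device is the single global default and can disagree with it.
        return tensor.to(device=devices.get_device_for("controlnet"))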
4 changes: 3 additions & 1 deletion scripts/animatediff_output.py
@@ -31,7 +31,9 @@ def output(self, p: StableDiffusionProcessing, res: Processed, params: AnimateDi
            frame_list = [image.copy() for image in res.images[i : i + params.video_length]]

            seq = images.get_next_sequence_number(output_dir, "")
-           filename = f"{seq:05}-{res.all_seeds[(i-res.index_of_first_image)]}"
+           filename_suffix = f"-{params.request_id}" if params.request_id else ""
+           filename = f"{seq:05}-{res.all_seeds[(i-res.index_of_first_image)]}{filename_suffix}"

            video_path_prefix = output_dir / filename

            frame_list = self._add_reverse(params, frame_list)
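The suffix logic above is easy to check in isolation. A standalone sketch with stand-in values for the sequence number, seed, and request id:

    def make_filename(seq: int, seed: int, request_id: str = "") -> str:
        # Mirrors the composition above: zero-padded sequence, seed, optional suffix.
        suffix = f"-{request_id}" if request_id else ""
        return f"{seq:05}-{seed}{suffix}"

    assert make_filename(7, 123456789, "job-42") == "00007-123456789-job-42"
    assert make_filename(7, 123456789) == "00007-123456789"  # empty request_id leaves the name unchanged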
6 changes: 5 additions & 1 deletion scripts/animatediff_ui.py
@@ -44,6 +44,7 @@ def __init__(
        last_frame=None,
        latent_power_last=1,
        latent_scale_last=32,
+       request_id='',
    ):
        self.model = model
        self.enable = enable
@@ -64,10 +65,11 @@ def __init__(
        self.last_frame = last_frame
        self.latent_power_last = latent_power_last
        self.latent_scale_last = latent_scale_last
+       self.request_id = request_id


    def get_list(self, is_img2img: bool):
-       list_var = list(vars(self).values())
+       list_var = list(vars(self).values())[:-1]
        if is_img2img:
            animatediff_i2ibatch.hack()
        else:
Expand All @@ -89,6 +91,8 @@ def get_dict(self, is_img2img: bool):
"interp": self.interp,
"interp_x": self.interp_x,
}
if self.request_id:
infotext['request_id'] = self.request_id
if motion_module.mm is not None and motion_module.mm.mm_hash is not None:
infotext['mm_hash'] = motion_module.mm.mm_hash[:8]
if is_img2img:
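The `[:-1]` in `get_list` works because `request_id` was deliberately moved to the last position of `__init__` (see the commit message): `vars()` preserves assignment order, so slicing off the final value hides exactly that field from the WebUI while `get_dict` can still record it. A toy illustration with a hypothetical class, not the real AnimateDiffProcess:

    class Toy:
        def __init__(self, model="mm_sd_v15_v2.ckpt", enable=True, request_id=""):
            self.model = model
            self.enable = enable
            self.request_id = request_id  # assigned last on purpose

    t = Toy(request_id="job-42")
    ui_args = list(vars(t).values())[:-1]  # vars() keeps assignment order (CPython 3.7+)
    assert ui_args == ["mm_sd_v15_v2.ckpt", True]  # request_id never reaches the Gradio UI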
63 changes: 63 additions & 0 deletions tests/test_simple.py
@@ -0,0 +1,63 @@

import pytest
import requests


@pytest.fixture()
def url_txt2img(base_url):
    return f"{base_url}/sdapi/v1/txt2img"


@pytest.fixture()
def error_txt2img_request():
    # Prompt travel targets frame 8, but video_length below is 2, so the
    # server is expected to reject this request (assertion error recovery path).
    return {
        "prompt": '1girl, yoimiya (genshin impact), origen, line, comet, wink, Masterpiece, BestQuality. UltraDetailed, <lora:yoimiya:0.8>, <lora:v2_lora_TiltDown:0.8>\n0: closed mouth\n8: open mouth,',
        "negative_prompt": "(sketch, duplicate, ugly, huge eyes, text, logo, monochrome, worst face, (bad and mutated hands:1.3), (worst quality:2.0), (low quality:2.0), (blurry:2.0), horror, geometry, bad_prompt_v2, (bad hands), (missing fingers), multiple limbs, bad anatomy, (interlocked fingers:1.2), Ugly Fingers, (extra digit and hands and fingers and legs and arms:1.4), crown braid, ((2girl)), (deformed fingers:1.2), (long fingers:1.2),succubus wings,horn,succubus horn,succubus hairstyle, (bad-artist-anime), bad-artist, bad hand, grayscale, skin spots, acnes, skin blemishes",
        "batch_size": 1,
        "steps": 1,
        "cfg_scale": 7,
        "alwayson_scripts": {
            'AnimateDiff': {
                'args': [{
                    'enable': True,
                    'batch_size': 1,
                    'video_length': 2,
                }]
            }
        }
    }


@pytest.fixture()
def correct_txt2img_request():
    # Identical to the fixture above, except prompt travel ends at frame 1,
    # which fits inside the 2-frame video.
    return {
        "prompt": '1girl, yoimiya (genshin impact), origen, line, comet, wink, Masterpiece, BestQuality. UltraDetailed, <lora:yoimiya:0.8>, <lora:v2_lora_TiltDown:0.8>\n0: closed mouth\n1: open mouth,',
        "negative_prompt": "(sketch, duplicate, ugly, huge eyes, text, logo, monochrome, worst face, (bad and mutated hands:1.3), (worst quality:2.0), (low quality:2.0), (blurry:2.0), horror, geometry, bad_prompt_v2, (bad hands), (missing fingers), multiple limbs, bad anatomy, (interlocked fingers:1.2), Ugly Fingers, (extra digit and hands and fingers and legs and arms:1.4), crown braid, ((2girl)), (deformed fingers:1.2), (long fingers:1.2),succubus wings,horn,succubus horn,succubus hairstyle, (bad-artist-anime), bad-artist, bad hand, grayscale, skin spots, acnes, skin blemishes",
        "batch_size": 1,
        "steps": 1,
        "cfg_scale": 7,
        "alwayson_scripts": {
            'AnimateDiff': {
                'args': [{
                    'enable': True,
                    'batch_size': 1,
                    'video_length': 2,
                }]
            }
        }
    }


def test_txt2img_simple_performed(url_txt2img, error_txt2img_request, correct_txt2img_request):
    '''
    This test checks the following:
    - simple t2v generation
    - prompt travel
    - infinite context generator
    - motion lora
    - error recovery
    '''
    assert requests.post(url_txt2img, json=error_txt2img_request).status_code == 500
    response = requests.post(url_txt2img, json=correct_txt2img_request)
    assert response.status_code == 200
    assert isinstance(response.json()['images'][0], str)
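
To run this file locally against an already-running test server, pytest can be invoked programmatically. A minimal sketch, not part of this commit: it assumes the server was launched with the workflow's flags, that you run it from the WebUI root, and that the suite's base_url fixture points at the server.

    import sys
    import pytest

    if __name__ == "__main__":
        # Test path mirrors the workflow's pytest invocation, run from the WebUI root.
        sys.exit(pytest.main(["-vv", "extensions/sd-webui-animatediff/tests"]))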
