Recent commits broke merging #1359
When the checkpoints list is populated initially, it does not contain a hash value, and merging works. To fix this, we can get the checkpoint info from checkpoint_aliases, which contains all possible keys (hash only, name+hash, name only, etc.) for a given model, instead of from checkpoint_list:
- checkpoint_list[] contains the CheckpointInfo.title, which is "checkpointname.safetensor [hash]". When a checkpoint is selected to be loaded during a merge, we try to match it with just "checkpointname.safetensor". → Use checkpoint_aliases[], which already contains the checkpoint key in all possible variants.
- Replaced the removed sd_models.read_state_dict() with sd_models.load_torch_file().
- Replaced the removed sd_vae.load_vae_dict() with sd_vae.load_torch_file().
- Commented out create_config() for now, since it calls a removed method: sd_models_config.find_checkpoint_config_near_filename().
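The alias-based lookup can be sketched as follows. This is a minimal, self-contained illustration, not the repo's actual code: the `CheckpointInfo`, `register`, and `all_keys` names here are stand-ins, and the real `checkpoint_aliases` dict lives in `modules/sd_models.py`. The point is that every key variant maps to the same info object, so a hash-less name still resolves:

```python
from dataclasses import dataclass, field

@dataclass
class CheckpointInfo:
    # Stand-in for the real CheckpointInfo; only the fields needed
    # to demonstrate the alias lookup are modeled here.
    filename: str
    shorthash: str
    title: str = field(init=False)

    def __post_init__(self):
        # Mirrors the "checkpointname.safetensor [hash]" title format.
        self.title = f"{self.filename} [{self.shorthash}]"

    def all_keys(self):
        # Every variant a dropdown or caller might pass in.
        return [self.title, self.filename, self.shorthash]

def register(info, aliases):
    # Index the same object under all of its key variants.
    for key in info.all_keys():
        aliases[key] = info

checkpoint_aliases = {}
info = CheckpointInfo("model.safetensors", "abc123")
register(info, checkpoint_aliases)

# All spellings resolve to the same checkpoint, so matching a
# hash-less name during a merge no longer fails.
assert checkpoint_aliases["model.safetensors"] is info
assert checkpoint_aliases["model.safetensors [abc123]"] is info
assert checkpoint_aliases["abc123"] is info
```

By contrast, a lookup against `checkpoint_list` only matches the full title string, which is why the hash-less selection used during merging missed.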
- read_state_dict() does nothing; replaced 2 occurrences with load_torch_file().
- Merging now actually merges again.
* Fix Checkpoint Merging #1359, #1095
  - checkpoint_list[] contains the CheckpointInfo.title, which is "checkpointname.safetensor [hash]". When a checkpoint is selected to be loaded during a merge, we try to match it with just "checkpointname.safetensor". → Use checkpoint_aliases[], which already contains the checkpoint key in all possible variants.
  - Replaced the removed sd_models.read_state_dict() with sd_models.load_torch_file().
  - Replaced the removed sd_vae.load_vae_dict() with sd_vae.load_torch_file().
  - Commented out create_config() for now, since it calls a removed method: sd_models_config.find_checkpoint_config_near_filename().
* Follow-up merge fix for #1359, #1095
  - read_state_dict() does nothing; replaced 2 occurrences with load_torch_file().
  - Merging now actually merges again.
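Assuming `load_torch_file()` returns the same state dict the old helper did, the swap can also be done once as a compatibility shim, so the call sites in `run_modelmerger()` need not change. This is a hedged sketch: the real `sd_models.load_torch_file()` reads `.safetensors`/`.ckpt` checkpoints; the stand-in below just round-trips a pickled dict so the example is self-contained:

```python
import os
import pickle
import tempfile

def load_torch_file(path):
    # Stand-in for sd_models.load_torch_file(); the real function
    # dispatches on extension to safetensors or torch deserialization.
    with open(path, "rb") as f:
        return pickle.load(f)

def read_state_dict(path):
    # The removed helper, reinstated as a thin wrapper that delegates
    # to the new loader, keeping old call sites working unchanged.
    return load_torch_file(path)

# Demo round-trip with a fake "state dict".
fake_sd = {"model.diffusion_model.w": [1.0, 2.0]}
with tempfile.NamedTemporaryFile(delete=False, suffix=".ckpt") as f:
    pickle.dump(fake_sd, f)
    path = f.name
try:
    assert read_state_dict(path) == fake_sd
finally:
    os.unlink(path)
```

Replacing the two call sites directly (as the commits do) and adding a shim are equivalent here; the shim just minimizes the diff if more callers exist.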
Trying to merge models now fails because multiple methods are still being called that have been removed (reduced to pass) or commented out.
c1b23bd
bccf9fb
sd_models.read_state_dict() has been removed (replaced with pass), but it's still called during merging;
see extras.py: run_modelmerger()
AttributeError: module 'modules.sd_models_config' has no attribute 'find_checkpoint_config_near_filename'
find_checkpoint_config_near_filename has been commented out in: bccf9fb
The following changes seem to work for me; I'm not sure whether create_config() is needed at all after merging.
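Rather than commenting create_config() out entirely, one illustrative option (an assumption, not the repo's actual fix) is to look the removed helper up defensively so the merge finishes even when it is absent. The module stub and names below mirror the issue but are stand-ins:

```python
import types

# Stand-in for the real modules.sd_models_config module, which no
# longer defines find_checkpoint_config_near_filename.
sd_models_config = types.SimpleNamespace()

def create_config(ckpt_result):
    # Guard against the helper having been removed, instead of
    # crashing with AttributeError mid-merge.
    finder = getattr(sd_models_config,
                     "find_checkpoint_config_near_filename", None)
    if finder is None:
        # Helper gone: skip writing a config next to the merged file.
        return None
    return finder(ckpt_result)

# Helper missing → no-op instead of AttributeError.
assert create_config("merged.safetensors") is None

# If the helper is later restored, the same call path works again.
sd_models_config.find_checkpoint_config_near_filename = lambda p: p + ".yaml"
assert create_config("merged.safetensors") == "merged.safetensors.yaml"
```

This keeps the merge path runnable today while leaving the door open for the config-writing behavior to return.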