I have increased the image resolution to 512 and want to retrain the position embeddings, which are initialized as an nn.Parameter (since I changed the image resolution), while fine-tuning the model with LoRA. However, I found that the final saved LoRA model does not include the position embeddings. How can I solve this problem?
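For anyone trying to reproduce this, the setup presumably looks something like the sketch below; the checkpoint name, the LoRA target modules, and the image_size handling are my assumptions, not code from the issue:

```python
from transformers import ViTForImageClassification
from peft import LoraConfig, get_peft_model

# Raising image_size changes the patch count, so the position embeddings get
# a new shape and are re-initialized (ignore_mismatched_sizes avoids the
# shape-mismatch error when loading the 224px checkpoint).
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    image_size=512,
    ignore_mismatched_sizes=True,
)

lora_config = LoraConfig(
    target_modules=["query", "value"],
    # Intent: also train and save the re-initialized position embeddings.
    modules_to_save=["position_embeddings"],
)
peft_model = get_peft_model(model, lora_config)
peft_model.save_pretrained("lora-out")
# The saved adapter contains the LoRA weights, but no position embeddings.
```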
Hi @minmie, I ran into the same problem. If you run the following:

```python
for name, module in model.named_modules():
    print('name', name)
```

you will see that a bare torch.nn.Parameter never appears, because an nn.Parameter is not an nn.Module (it does show up in named_parameters(), but that is not what PEFT looks at). modules_to_save matches against module names, looking for something like "model.vit.embeddings.position_embeddings", and since the position embeddings are not a module, it obviously finds nothing. My solution to this is to use nn.Embedding instead.
I don't understand where the nn.Parameter is registered. Is it implicit because you pass image_size=SIZE?
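For reference, the parameter is created inside the model itself: in transformers' modeling_vit.py, ViTEmbeddings registers position_embeddings as a bare nn.Parameter whose length depends on config.image_size, so passing image_size=512 changes its shape and forces re-initialization. A paraphrased sketch (not the verbatim library code):

```python
import torch
import torch.nn as nn

class ViTEmbeddingsSketch(nn.Module):
    """Illustrative paraphrase of ViTEmbeddings.__init__ in transformers."""
    def __init__(self, image_size: int, patch_size: int, hidden_size: int):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # A bare nn.Parameter, not a submodule: named_modules() never lists it,
        # so PEFT's modules_to_save (which matches module names) cannot wrap it.
        self.position_embeddings = nn.Parameter(
            torch.randn(1, num_patches + 1, hidden_size)  # +1 for the [CLS] token
        )
```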
> My solution to this is to use nn.Embedding instead.
This means you found a way to make it work? Great. It would be fantastic if you could share your code here in case other users encounter the same problem.
System Info

transformers version: 4.44.0

Who can help?

@BenjaminBossan @sayakpaul

Information

Tasks

An officially supported task in the examples folder

Reproduction
Expected behavior
The saved LoRA adapter should include the retrained position embeddings alongside the LoRA weights (see the description above).