
The training and validation loss dropped to -0.6, and it stopped dropping #1271

Open
MOMOANNIE opened this issue Jan 9, 2023 · 7 comments

@MOMOANNIE

Hello FabianIsensee,
I am using nnU-Net to train on my own data. After 1000 epochs of training, my training loss and validation loss are both around -0.6 and no longer decrease. What is going on?
[image]
Below is my training progress:
[image]

@dojoh

dojoh commented Aug 2, 2023

Hello,

sorry for the late response!
Is this question still relevant? To me it looks like the loss might still decrease if training were continued. In what way did the loss stop decreasing? Did you at any point run e.g. 2000 epochs?

Cheers
Ole

@MOMOANNIE

> Is this question still relevant?

Yes, this also happens when I train with nnU-Net v2.
[image]

> Did you at any point run e.g. 2000 epochs?

I have tried to modify the number of epochs, but it stayed at 1000. What do I need to change to set the number of epochs correctly?

@dojoh

dojoh commented Sep 4, 2023

There are a couple of predefined trainers with more epochs; see https://github.com/MIC-DKFZ/nnUNet/blob/b4e97fe38a9eb6728077678d4850c41570a1cb02/nnunetv2/training/nnUNetTrainer/variants/training_length/nnUNetTrainer_Xepochs.py

You can invoke these trainers using the -tr flag, e.g. nnUNetv2_train DATASET_NAME_OR_ID UNET_CONFIGURATION FOLD -tr nnUNetTrainer_8000epochs
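For a custom epoch count, those variants essentially just override the trainer's epoch attribute in their constructor. A self-contained sketch of the pattern (the stub base class stands in for nnunetv2's real nnUNetTrainer, whose actual constructor takes more arguments, so treat this as illustrative only):

```python
# Sketch of how the nnUNetTrainer_Xepochs variants work: each subclass
# only changes the number of epochs. The stub below stands in for
# nnunetv2's real nnUNetTrainer so this example is self-contained.
class nnUNetTrainerStub:
    def __init__(self):
        self.num_epochs = 1000  # nnU-Net's default training length


class nnUNetTrainer_2000epochs(nnUNetTrainerStub):
    def __init__(self):
        super().__init__()
        self.num_epochs = 2000  # train for 2000 epochs instead


trainer = nnUNetTrainer_2000epochs()
print(trainer.num_epochs)  # 2000
```

A subclass like this, placed in nnunetv2's trainer variants folder, can then be selected via the -tr flag as shown above.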

@975827738
Hello,
Is it normal for the loss to be a negative number?

@iWangTing
I have also encountered a negative loss. Have you solved it? What is the reason for a negative loss value?

@ravikumawat7716

ravikumawat7716 commented Sep 18, 2024

@FabianIsensee, @dojoh I have the same question: why is the loss negative?
[image]

@iWangTing

> @FabianIsensee, @dojoh I have the same question: why is the loss negative?

Hi,
I have figured out the problem. nnU-Net's loss function is designed so that the closer the value is to -1, the lower the model's loss.
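The negative range comes from the Dice term: nnU-Net's compound loss adds cross-entropy to a negated soft Dice coefficient, so a perfect segmentation drives the Dice term toward -1 and can pull the total below zero. A minimal numpy sketch of a negated soft Dice term (illustrative, not nnU-Net's exact implementation):

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-5):
    # Negated soft Dice: a perfect prediction gives a value near -1,
    # a completely wrong one a value near 0. This is why a decreasing
    # loss curve heads toward -1 rather than 0.
    intersect = (pred * target).sum()
    denom = pred.sum() + target.sum()
    return -(2.0 * intersect + eps) / (denom + eps)

target = np.array([0.0, 1.0, 1.0, 0.0])          # ground-truth mask
good = np.array([0.05, 0.95, 0.9, 0.1])          # confident, correct
bad = np.array([0.9, 0.1, 0.1, 0.8])             # confident, wrong

print(soft_dice_loss(good, target))  # close to -1 (low loss)
print(soft_dice_loss(bad, target))   # close to 0 (high loss)
```

So a training/validation loss of -0.6 simply means the Dice term has not yet approached -1; it is not an error.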
