Recommendation: produce more informative error messages based on the LLM exceptions.
When the context length is too long, tell the user explicitly; otherwise the error can be mistaken for being out of funds or hitting a daily rate limit.
Also, cache warming seems to continue for a model even after the user switches to a new model, which is also confusing.
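This is not aider's actual implementation, just a minimal sketch of the kind of classification being suggested: inspect the exception type name and message text and translate them into a clearer user-facing explanation. The function name and the matching heuristics are hypothetical.

```python
def explain_llm_error(exc_name: str, message: str) -> str:
    """Map a raw LLM provider exception to a friendlier message.

    Hypothetical sketch: exc_name is the exception class name
    (e.g. "APIError", "RateLimitError") and message is the raw
    provider error text.
    """
    text = message.lower()
    # Context-window problems often mention the context length in the body,
    # even when the exception class is a generic APIError.
    if ("context length" in text or "maximum context" in text
            or "too many tokens" in text):
        return ("The request exceeded the model's context window. "
                "Try dropping files from the chat or clearing history.")
    # Rate limiting: either the typed exception or the JSON error type.
    if exc_name == "RateLimitError" or "rate_limit_error" in text:
        return ("The provider rate-limited the request. "
                "Reduce the prompt size or retry later.")
    # Fall back to the raw error so no information is lost.
    return f"{exc_name}: {message}"
```

With heuristics like these, the opaque `APIError: OpenrouterException` case could at least hint at a context-length cause when the provider's message mentions it.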
After switching to /model openrouter/anthropic/claude-3.5-sonnet, I get the error:
litellm.APIError: APIError: OpenrouterException
I checked my OpenRouter funds, network access, etc., and there are no problems. I believe the problem was caused by my context length being too long, but the error message is not helpful in figuring this out.
I also see the following error:
litellm.RateLimitError: AnthropicException - {"type":"error","error":{"type":"rate_limit_error","message":"Number of request
tokens has exceeded your daily rate limit (https://docs.anthropic.com/en/api/rate-limits); see the response headers for
current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact
sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}}
This cache warming error is seen after switching from sonnet to gpt-4o:
Cache warming error: litellm.RateLimitError: AnthropicException -
{"type":"error","error":{"type":"rate_limit_error","message":"Number of request tokens has exceeded your daily rate limit
(https://docs.anthropic.com/en/api/rate-limits); see the response headers for current usage. Please reduce the prompt length
or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to
discuss your options for a rate limit increase."}}
Cache warming error: litellm.APIError: APIError: OpenrouterException -
Aider v0.58.1
Main model: openrouter/anthropic/claude-3.5-sonnet with diff edit format, prompt cache, infinite output
Weak model: openrouter/anthropic/claude-3-haiku-20240307
Git repo: .git with 12,393 files
Warning: For large repos, consider using --subtree-only and .aiderignore
See: https://aider.chat/docs/faq.html#can-i-use-aider-in-a-large-mono-repo
Repo-map: using 1024 tokens, files refresh
Added aider/coders/base_coder.py to the chat.
Added aider/commands.py to the chat.
Added aider/gui.py to the chat.
Added aider/io.py to the chat.
Added aider/main.py to the chat.
Restored previous conversation history.
Aider version: 0.58.1
Python version: 3.10.12
Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python implementation: CPython
Virtual environment: Yes
OS: Linux 5.15.153.1-microsoft-standard-WSL2 (64bit)
Git version: git version 2.34.1