Uninformative LLM errors #1858

Open
lockmeister opened this issue Oct 1, 2024 · 0 comments
Labels
question Further information is requested

Comments

@lockmeister

Recommendation: make more informative errors based on LLM Exceptions

  • When the context length is too long, tell the user so explicitly; otherwise the error can be confused with being out of funds or hitting a daily rate limit.
  • Cache warming seems to continue for a model even after the user switches to a new model, which is also confusing.
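A minimal sketch of the first recommendation, translating raw LLM exceptions into actionable messages. The class names follow litellm's documented exception hierarchy (`ContextWindowExceededError`, `RateLimitError`, `AuthenticationError`); the helper `explain_llm_error` is hypothetical, not part of aider, and matching on the type name keeps the sketch runnable without litellm installed:

```python
# Hypothetical helper: map litellm exception class names to actionable
# user-facing hints instead of surfacing the raw provider error string.
FRIENDLY_HINTS = {
    "ContextWindowExceededError": (
        "The chat history exceeds the model's context window. "
        "Try /clear or /drop before retrying."
    ),
    "RateLimitError": (
        "The provider rate-limited this request (often a daily token "
        "limit). Wait and retry, or switch models with /model."
    ),
    "AuthenticationError": (
        "The API key was rejected; check your provider credentials."
    ),
}


def explain_llm_error(err: Exception) -> str:
    """Return an actionable message for an LLM exception.

    Walks the exception's MRO so subclasses inherit the hint for
    their nearest recognized ancestor.
    """
    for cls in type(err).__mro__:
        hint = FRIENDLY_HINTS.get(cls.__name__)
        if hint:
            return hint
    # Fall back to the raw error so nothing is silently swallowed.
    return f"LLM request failed: {err}"
```

Keeping the raw exception text in the fallback preserves debuggability while the recognized cases get a clear next step.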

After switching to /model openrouter/anthropic/claude-3.5-sonnet, I get the error:
litellm.APIError: APIError: OpenrouterException

I checked OpenRouter funds, network access, etc.; there are no problems. I believe the problem was caused by my context length being too long, but the error message is not helpful for figuring this out.

I also see the following error:
litellm.RateLimitError: AnthropicException - {"type":"error","error":{"type":"rate_limit_error","message":"Number of request
tokens has exceeded your daily rate limit (https://docs.anthropic.com/en/api/rate-limits); see the response headers for
current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact
sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}}

This cache warming error is seen after switching to gpt-4o from sonnet:

Cache warming error: litellm.RateLimitError: AnthropicException -
{"type":"error","error":{"type":"rate_limit_error","message":"Number of request tokens has exceeded your daily rate limit
(https://docs.anthropic.com/en/api/rate-limits); see the response headers for current usage. Please reduce the prompt length
or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to
discuss your options for a rate limit increase."}}
Cache warming error: litellm.APIError: APIError: OpenrouterException -
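One way to address the second recommendation, sketched as a generic pattern rather than aider's actual implementation: tie each cache-warming loop to a cancellation event and trip it on a model switch, so a stale warmer for the old model cannot keep emitting errors. The `CacheWarmer` class and `ping` callback are illustrative assumptions:

```python
import threading


class CacheWarmer:
    """Keep a prompt cache warm with periodic pings, and stop cleanly
    when the user switches models so stale warmers cannot keep erroring.
    """

    def __init__(self, ping):
        self._ping = ping  # callable(model) issuing the keep-alive request
        self._stop = threading.Event()
        self._thread = None

    def start(self, model, interval=290.0):
        """Begin warming `model`, cancelling any warmer already running."""
        self.stop()  # a /model switch kills the previous model's warmer
        stop = threading.Event()
        self._stop = stop

        def loop():
            # wait() doubles as a sleep that wakes immediately on stop()
            while not stop.wait(interval):
                self._ping(model)

        self._thread = threading.Thread(target=loop, daemon=True)
        self._thread.start()

    def stop(self):
        """Signal the warmer to exit and wait briefly for it to finish."""
        self._stop.set()
        if self._thread is not None:
            self._thread.join(timeout=1.0)
            self._thread = None
```

Using `Event.wait()` as the inter-ping sleep means a switch cancels the warmer immediately instead of after the next scheduled ping.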

Aider version: 0.58.1
Python version: 3.10.12
Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python implementation: CPython
Virtual environment: Yes
OS: Linux 5.15.153.1-microsoft-standard-WSL2 (64bit)
Git version: git version 2.34.1

Aider v0.58.1
Main model: openrouter/anthropic/claude-3.5-sonnet with diff edit format, prompt cache, infinite output
Weak model: openrouter/anthropic/claude-3-haiku-20240307
Git repo: .git with 12,393 files
Warning: For large repos, consider using --subtree-only and .aiderignore
See: https://aider.chat/docs/faq.html#can-i-use-aider-in-a-large-mono-repo
Repo-map: using 1024 tokens, files refresh
Added aider/coders/base_coder.py to the chat.
Added aider/commands.py to the chat.
Added aider/gui.py to the chat.
Added aider/io.py to the chat.
Added aider/main.py to the chat.
Restored previous conversation history.

@fry69 fry69 added the question Further information is requested label Oct 1, 2024