Add explanation of embeddings #500

Merged · 4 commits · Aug 16, 2024
Rework text
koppor authored Aug 15, 2024
commit e8b0c71ba6d90d048f445e8d0575cd6ebdad95de
20 changes: 9 additions & 11 deletions en/ai/README.md
@@ -27,17 +27,15 @@ In this window you can see the following elements:

## How does the AI functionality work?

In the background, JabRef analyzes the linked PDF files of library entries. The information gathered during indexing is then supplied to an AI, which in our case is a Large Language Model (LLM). The LLM is currently not stored on your computer.
Instead, we integrate with several AI providers (OpenAI, Mistral AI, Hugging Face), so you can choose the one you like the most.
These AI providers are available only remotely via the internet.
In short: we send chunks of text to the AI service and receive processed responses.
In order to use it, you need to configure JabRef with your API key.

JabRef processes linked files this way: each file is split into fixed-length parts (also called *chunks*), and an *embedding* is generated for each of them.
An embedding is a vector representation of a piece of text that captures its meaning.
These vectors have a crucial property: texts with similar meaning have vectors that are close to each other (this is called *vector similarity*).
So, whenever you ask the AI a question, JabRef uses vector similarity to find relevant pieces of text in the indexed files.
JabRef uses external AI providers to do the actual work.
You can choose between OpenAI, Mistral AI, and Hugging Face.
They all run "Large Language Models" (LLMs) to process the requests.
Contributor Author commented:

quests :)
The AI providers need chunks of text to work.
For this, JabRef parses and indexes the linked PDF files of entries:
Each file is split into fixed-length parts (so-called *chunks*), and for each of them, an *embedding* is generated.
An embedding is a vector representation of a piece of text that captures its meaning.
These vectors have a crucial property: texts with similar meaning have vectors that are close to each other (so-called *vector similarity*).
As a result, whenever you ask the AI a question, JabRef uses vector similarity to find relevant pieces of text in the indexed files.
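The chunk → embed → compare-by-similarity pipeline described above can be sketched in a few lines. This is a minimal illustration, not JabRef's actual implementation: the toy trigram-count "embedding" merely stands in for a real embedding model from an AI provider, and names like `chunk` and `embed` are hypothetical helpers chosen for this example.

```python
import math
from collections import Counter

def chunk(text, size=40):
    # Split text into fixed-length parts ("chunks").
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text):
    # Toy embedding: character-trigram counts. A real system would
    # request a vector from an embedding model instead.
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_similarity(a, b):
    # Vector similarity: closer to 1.0 means more similar meaning
    # (here only approximated by shared character patterns).
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Indexing: embed every chunk of the "document" once.
document = ("Embeddings map text to vectors so that similar meanings "
            "end up close together. Chunking splits long documents "
            "into fixed-length parts before embedding.")
index = [(c, embed(c)) for c in chunk(document)]

# Question time: embed the question, return the most similar chunk.
question = "How does chunking split documents?"
q_vec = embed(question)
best_chunk = max(index, key=lambda item: cosine_similarity(q_vec, item[1]))[0]
print(best_chunk)
```

In the real pipeline the retrieved chunks are then sent to the LLM as context for answering the question; only the similarity search happens locally on the stored vectors.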

## More information
