Welcome to vectara-answer


About

Customize and deploy a pre-built conversational search UI connected to the data you've ingested into Vectara. With Vectara’s APIs you can create conversational experiences with your data, such as chatbots, semantic search, and workplace search.

vectara-answer is an open source React project that provides a configurable conversational search user interface. You can deploy it to end users so they can ask questions of your data and get back accurate, dependable answers, or refer to the source code when building your own conversational search applications.

Quickstart

Let’s create a simple conversational application. We'll base it on Paul Graham's essays, so you'll be able to ask questions and get back answers based on what he's written. This guide assumes you've followed the vectara-ingest Quickstart to ingest this content into a corpus.

1. Install dependencies

Install Docker.

Install npm and node.

Clone this repository:

git clone https://github.com/vectara/vectara-answer.git

From the root directory, run these commands to install JavaScript dependencies and build the front-end application:

npm install && npm run build

2. Set configuration

Duplicate the secrets.example.toml file and rename the copy to secrets.toml.

Edit the secrets.toml file and change the api_key value to be your Vectara API Key.
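For example, a minimal secrets.toml with a single default profile might look like this (the key below is a placeholder):

[default]
api_key = "zwt_abcdef..."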

Make a duplicate of the config/vectara-website-search/ directory and rename it pg-search/.

Update the config/pg-search/config.yaml file with these changes (a sketch of the edited file follows the list):

  • Change the corpus_id value to the ID of the corpus into which you ingested Paul Graham's essays as part of the vectara-ingest Quickstart.
  • Change the account_id value to the ID of your account. You can click on your username in the top-right corner to copy it to your clipboard.
  • Change the app_title to "Ask Paul Graham".
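After these edits, the relevant lines of config/pg-search/config.yaml might look like this (the IDs below are placeholders for your own values):

corpus_id: 42
account_id: 1234567890
app_title: "Ask Paul Graham"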

Edit the config/pg-search/queries.json file and replace the four sample questions with curated questions you'd like to show in the user interface, for example: "What is a maker schedule?"
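After editing, the file might look like this (all but the first question are illustrative placeholders):

[
  { "question": "What is a maker schedule?" },
  { "question": "How do you get startup ideas?" },
  { "question": "What should founders focus on in the early days?" },
  { "question": "Why should startups do things that don't scale?" }
]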

3. Run the application

Execute the run script from the root directory, passing your config/ directory and the default profile from your secrets file:

bash docker/run.sh config/pg-search default

The application runs inside a Docker container to avoid any issues with existing environments and installed packages.

Once the container is set up, the run.sh launch script opens a browser at localhost:80.

4. Done!

Your application is now up and running. Try a few queries to see how it works.

You can view your application's logs by running docker logs -f vanswer. You can stop the application and shut down the container with docker stop vanswer.

Project architecture

Goals

vectara-answer provides example code for a modern GenAI conversational search user interface. We created it with two goals in mind:

  1. To help you create custom conversational search applications with Vectara. You can customize the user experience, launch the application locally, and deploy it to production.
  2. To demonstrate how a conversational search user interface can be implemented in JavaScript, so you can refer to it when writing your own code.

The config/ directory includes specific example applications such as AskNews (news search), Wikipedia search, and Hacker News search. Each example application has its own sub-directory. See Example applications for more info.

Docker

vectara-answer uses a Docker container to reduce the complexities of specific development environments. Developers can run it locally or take pieces from this reference implementation and use them within their own applications. See the Dockerfile for more information on the image structure and build.

Connecting to your Vectara data

vectara-answer requires a Vectara API key for querying. To provide it, create a file called secrets.toml in the root directory; see secrets.example.toml for an example. This file uses the TOML format, which supports the definition of one or more profiles. Under each profile, add the line api_key="XXX", where XXX is the Vectara API key you want to use in that profile.
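For example, a secrets.toml defining two profiles might look like this (the second profile name and both keys are placeholders):

[default]
api_key = "zwt_abcdef..."

[staging]
api_key = "zwt_ghijkl..."

You then select a profile by name when launching the app, e.g. bash docker/run.sh config/pg-search staging.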

UI

The UI source code is all in the src/ directory. See the UI README.md to learn how to make changes to the UI source.

Example applications

The config/ directory contains example configurations of a vectara-answer application. Each example has its own sub-directory that contains two files (a sketch of the layout follows this list):

  • config.yaml defines the general behavior and look of the user interface.
  • queries.json defines a set of pre-defined questions to display in the UI.
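Concretely, the layout looks roughly like this (sub-directory names vary by example):

config/
  vectara-website-search/
    config.yaml
    queries.json
  ...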

You can use the command line to try out an example locally:

bash docker/run.sh config/{name of sub-directory} default

If you like the UX of an example application, you can duplicate the sub-directory and configure it to connect to your own data.
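For instance, to adapt an example to your own data (the new directory name below is a placeholder):

cp -r config/vectara-website-search config/my-app
# edit config/my-app/config.yaml and config/my-app/queries.json, then:
bash docker/run.sh config/my-app default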

Configuring an application

config.yaml file

You can configure the appearance and behavior of your app by editing these values in your application's config.yaml file.

Search (required)

# These config vars are required for connecting to your Vectara data and issuing requests.
corpus_id: 5
customer_id: 0000000001
api_key: "zwt_abcdef..."

Application (optional)

These parameters control the look and feel of the application, including its title, logo, and the header and footer.

# Define the title of your app to render in the browser tab.
app_title: "Your title here"

# Hide or show the app header.
enable_app_header: False

# Hide or show the app footer.
enable_app_footer: False

# Define the URL the browser will redirect to when the user clicks the logo in the app header.
app_header_logo_link: "https://www.vectara.com"

# Define the logo that appears in the app header. Any images you place in your `config_images` directory will be available.
app_header_logo_src: "config_images/logo.png"

# Describe the logo for improved accessibility.
app_header_logo_alt: "Vectara logo"

# Customize the height at which to render the logo. The width will scale proportionately.
app_header_logo_height: 20

Source filters (optional)

If your application uses more than one corpus, you can define source filters to enable the user to narrow their search to a specific corpus.

# Hide or show source filters.
enable_source_filters: True

# A comma-separated list of the sources on which users can filter.
sources: "BBC,NPR,FOX,CNBC,CNN"

Search header (optional)

These parameters control the look and feel of the search header, including the logo.

# Define the URL the browser will redirect to when the user clicks the logo above the search controls.
search_logo_link: "https://asknews.demo.vectara.com"

# Define the logo that appears in the search header. Any images you place in your `config_images` directory will be available.
search_logo_src: "config_images/logo.png"

# Describe the logo for improved accessibility.
search_logo_alt: "Vectara logo"

# Customize the height at which to render the logo. The width will scale proportionately.
search_logo_height: 20

# Define the title to render next to the logo.
search_title: "Search your data"

# Define the description to render opposite the logo and title.
search_description: "Data that speaks for itself"

# Define the placeholder text inside the search box.
search_placeholder: "Ask me anything"

Authentication (optional)

vectara-answer supports Google SSO authentication.

# Configure your app to require the user to log in with Google SSO.
authenticate: True
google_client_id: "cb67dbce87wcc"

Analytics (optional)

# Track user interaction with your app.
google_analytics_tracking_code: "884327434"

queries.json file

The queries.json file defines four questions that are displayed underneath the search bar; the user can click a question as a shortcut to typing it in.

The file is structured as follows:

[
    {
      "question": "What is the meaning of life 1?"
    },
    {
      "question": "What is the meaning of life 2?"
    },
    {
      "question": "What is the meaning of life 3?"
    },
    {
      "question": "What is the meaning of life 4?"
    }
]

Deployment

Local deployment

To run vectara-answer locally using Docker, perform the following steps:

  1. Make sure you have Docker installed on your machine.
  2. Clone this repo into a local directory: git clone https://github.com/vectara/vectara-answer.git.
  3. From the root directory, run bash docker/run.sh config/<config-directory> <profile_name>. This configures the Docker container with the parameters in your configuration directory, builds the Docker image, starts the container, and opens localhost:80, the main search interface, in your browser. <profile_name> is the name of the profile in your secrets.toml file whose api_key should be used with this search application.

The container generated is called vanswer, and after it is started, you can:

  • View logs by using docker logs -f vanswer
  • Stop the container with docker stop vanswer

Cloud deployment

You can deploy vectara-answer on cloud platforms such as AWS, Azure, or GCP.

  1. Create your configuration directory for the project under config/.
  2. Run python3 prepare_config.py <config_file_name> to generate the .env file.
  3. Push the Docker image to your cloud provider's container registry.
  4. Launch a container on a VM instance from the image now hosted in your cloud environment, making sure to mount the .env file and the config/ directory as volumes, as shown in run.sh (see the sketch below).
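As a rough sketch of steps 3 and 4 (the registry URL, image name, and container mount paths below are placeholders; mirror the actual mounts used in docker/run.sh):

# push the image to your registry
docker tag vectara-answer:latest myregistry.example.com/vectara-answer:latest
docker push myregistry.example.com/vectara-answer:latest

# on the VM: start the container, mounting the generated .env file and config/
docker run -d --name vanswer -p 80:80 \
  -v "$(pwd)/.env:/app/.env" \
  -v "$(pwd)/config:/app/config" \
  myregistry.example.com/vectara-answer:latest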

Author

👤 Vectara

🤝 Contributing

Contributions, issues, and feature requests are welcome!
Feel free to check the issues page. You can also take a look at the contributing guide.

Show your support

Give a ⭐️ if this project helped you!

📝 License

Copyright © 2023 Vectara.
This project is Apache 2.0 licensed.
