Quivr is your second brain in the cloud, designed to store and retrieve unstructured information with ease. Think of it as Obsidian, but powered by generative AI.
- Store Anything: Quivr can handle almost any type of data you throw at it. Text, images, code snippets, you name it.
- Generative AI: Quivr uses advanced AI to help you generate and retrieve information.
- Fast and Efficient: Designed with speed and efficiency in mind, Quivr makes sure you can access your data as quickly as possible.
- Secure: Your data is stored securely in the cloud and is always under your control.
- Compatible Files:
  - Text
  - Markdown
  - Audio
  - Video
- Open Source: Quivr is open source and free to use.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
Make sure you have the following installed before continuing:
- Python 3.10 or higher
- Pip
- Virtualenv
You'll also need a Supabase account for:
- A new Supabase project
- Supabase Project API key
- Supabase Project URL
- Clone the repository
git clone git@github.com:StanGirard/Quivr.git && cd Quivr
- Create a virtual environment
virtualenv venv
- Activate the virtual environment
source venv/bin/activate
- Install the dependencies
pip install -r requirements.txt
- Copy the streamlit secrets.toml example file
cp .streamlit/secrets.toml.example .streamlit/secrets.toml
- Add your credentials to the .streamlit/secrets.toml file
supabase_url = "SUPABASE_URL"
supabase_service_key = "SUPABASE_SERVICE_KEY"
openai_api_key = "OPENAI_API_KEY"
anthropic_api_key = "ANTHROPIC_API_KEY" # Optional
Note that the supabase_service_key is found in your Supabase dashboard under Project Settings -> API. Use the anon public key found in the Project API keys section.
- Run the following migration scripts on the Supabase database via the web interface (SQL Editor -> New query)
-- Enable the pgvector extension to work with embedding vectors
create extension vector;
-- Create a table to store your documents
create table documents (
id bigserial primary key,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);
CREATE FUNCTION match_documents(query_embedding vector(1536), match_count int)
RETURNS TABLE(
id bigint,
content text,
metadata jsonb,
-- we return matched vectors to enable maximal marginal relevance searches
embedding vector(1536),
similarity float)
LANGUAGE plpgsql
AS $$
# variable_conflict use_column
BEGIN
RETURN query
SELECT
id,
content,
metadata,
embedding,
1 - (documents.embedding <=> query_embedding) AS similarity
FROM
documents
ORDER BY
documents.embedding <=> query_embedding
LIMIT match_count;
END;
$$;
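In pgvector, `<=>` is the cosine-distance operator, so `1 - (a <=> b)` is cosine similarity. To build intuition, the ranking logic of `match_documents` can be mirrored in pure Python (an illustrative sketch, not Quivr code):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity, matching the SQL's 1 - (a <=> b)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_documents(documents, query_embedding, match_count):
    """Return the match_count documents most similar to the query.

    `documents` is a list of dicts with an "embedding" key, mirroring
    rows of the documents table above.
    """
    ranked = sorted(
        documents,
        key=lambda d: cosine_similarity(d["embedding"], query_embedding),
        reverse=True,
    )
    return ranked[:match_count]
```

In the app itself, the real function runs server-side; with the supabase-py client it would be invoked along the lines of `client.rpc("match_documents", {"query_embedding": ..., "match_count": ...}).execute()`.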
and
create table
stats (
-- A column called "time" with data type "timestamp"
time timestamp,
-- Boolean flags recording whether the row logs a chat or an embedding request
chat boolean,
embedding boolean,
-- A column called "details" with data type "text"
details text,
metadata jsonb,
-- An "integer" primary key column called "id" that is generated always as identity
id integer primary key generated always as identity
);
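The stats table simply logs app usage. A row built in application code might look like the following (a hypothetical sketch; the field names follow the columns above, and id is omitted because the database generates it):

```python
from datetime import datetime, timezone

# One usage-log row matching the stats table columns.
stats_row = {
    "time": datetime.now(timezone.utc).isoformat(),
    "chat": True,         # was this a chat request?
    "embedding": False,   # was this an embedding request?
    "details": "Answered a question about a stored document",  # example text
    "metadata": {"model": "gpt-3.5-turbo"},  # hypothetical example payload
}
```

With supabase-py, such a row could be written via `client.table("stats").insert(stats_row).execute()`.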
- Run the app
streamlit run main.py
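Before files can be matched against queries, their text is split into chunks and each chunk is embedded into the documents table. A minimal chunker is sketched below for illustration; Quivr's actual splitting may differ (e.g. via a LangChain text splitter):

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping chunks of at most chunk_size characters.

    The overlap keeps neighbouring chunks sharing some context, so a
    fact sitting on a boundary is not lost between embeddings.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```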
- NextJS - The React framework used.
- FastAPI - The API framework used.
- Supabase - The open source Firebase alternative.
Open a pull request and we'll review it as soon as possible.