This is part 1 of the GPT-3.5 integration, where we will be creating VectorDB instances that store course materials to be used as context to enrich our LLM responses:
Create a VectorDB instance for each course (if one is requested) where we will store course materials to be used as context for the GPT-3.5 LLM. The VectorDB configuration file will be saved to an S3 bucket. More info about the different types of VectorDBs here. An alternative to using S3 is a cloud-hosted VectorDB (like Pinecone) with metadata attached to each vector stating which course it belongs to.
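A minimal sketch of what per-course VectorDB creation could look like, assuming LangChain with OpenAI embeddings and a FAISS index persisted to S3 via boto3. The bucket name `COURSE_VECTORDB_BUCKET` and the function `create_course_vectordb` are placeholders, not settled design:

```python
# Sketch: build a FAISS-backed VectorDB for one course and persist it to S3.
# Stack (LangChain + OpenAI embeddings + FAISS + boto3) is an assumption.
import os
import tempfile

import boto3
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

COURSE_VECTORDB_BUCKET = "course-vectordbs"  # hypothetical S3 bucket name


def create_course_vectordb(course_id: str, seed_texts: list[str]) -> None:
    """Build a per-course FAISS index and upload its files to S3."""
    embeddings = OpenAIEmbeddings()  # reads OPENAI_API_KEY from the environment
    vectordb = FAISS.from_texts(
        seed_texts,
        embeddings,
        metadatas=[{"course_id": course_id} for _ in seed_texts],
    )

    # FAISS.save_local writes index.faiss + index.pkl into a folder;
    # upload both under a per-course prefix so each course keeps its own index.
    s3 = boto3.client("s3")
    with tempfile.TemporaryDirectory() as tmpdir:
        vectordb.save_local(tmpdir)
        for filename in os.listdir(tmpdir):
            s3.upload_file(
                os.path.join(tmpdir, filename),
                COURSE_VECTORDB_BUCKET,
                f"{course_id}/{filename}",
            )
```

With the Pinecone alternative mentioned above, the same `course_id` metadata field would be attached to each vector in one shared index instead of keeping a separate index per course.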
Create an API endpoint that accepts course materials (PDF, TXT, raw strings, etc.) and converts them into vectors to be stored in the VectorDB instance.
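A rough sketch of the ingestion endpoint, assuming FastAPI, pypdf for PDF text extraction, and a LangChain text splitter. The `/courses/{course_id}/materials` route and the `load_course_vectordb` helper are hypothetical names, and loading the index is left as a stub:

```python
# Sketch: FastAPI endpoint that accepts an uploaded course material (PDF or TXT),
# splits it into chunks, embeds them, and adds them to the course's VectorDB.
import io

from fastapi import FastAPI, File, UploadFile
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from pypdf import PdfReader

app = FastAPI()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)


def load_course_vectordb(course_id: str) -> FAISS:
    """Placeholder: load the per-course FAISS index (e.g. from S3, as in the previous sketch)."""
    raise NotImplementedError


@app.post("/courses/{course_id}/materials")
async def upload_material(course_id: str, file: UploadFile = File(...)):
    raw = await file.read()

    # Extract plain text: PDFs go through pypdf, everything else is treated as UTF-8 text.
    if file.filename.lower().endswith(".pdf"):
        reader = PdfReader(io.BytesIO(raw))
        text = "\n".join(page.extract_text() or "" for page in reader.pages)
    else:
        text = raw.decode("utf-8", errors="ignore")

    chunks = splitter.split_text(text)
    vectordb = load_course_vectordb(course_id)
    vectordb.add_texts(
        chunks,
        metadatas=[{"course_id": course_id, "source": file.filename} for _ in chunks],
    )
    return {"course_id": course_id, "chunks_added": len(chunks)}
```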
Create an API endpoint that, given a term/query, returns all relevant vectors/documents from the VectorDB (there should be specific params exposed to tune this endpoint). These documents will be used as context fed into the LLM to develop an actual response to the query.
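A sketch of the retrieval endpoint with the tuning parameters (`top_k` and an optional `score_threshold`) exposed as query params. Again, the route name and the `load_course_vectordb` helper are assumptions carried over from the previous sketch:

```python
# Sketch: FastAPI endpoint that returns the documents most relevant to a query,
# with k and an optional score threshold exposed as tunable query parameters.
from typing import Optional

from fastapi import FastAPI
from langchain.vectorstores import FAISS

app = FastAPI()


def load_course_vectordb(course_id: str) -> FAISS:
    """Placeholder: same per-course loader as in the ingestion sketch."""
    raise NotImplementedError


@app.get("/courses/{course_id}/search")
def search_materials(course_id: str, q: str, top_k: int = 4, score_threshold: Optional[float] = None):
    vectordb = load_course_vectordb(course_id)
    # similarity_search_with_score returns (Document, distance) pairs; lower is closer for FAISS.
    results = vectordb.similarity_search_with_score(q, k=top_k)
    if score_threshold is not None:
        results = [(doc, score) for doc, score in results if score <= score_threshold]
    # These snippets are what would be stitched into the GPT-3.5 prompt as context.
    return {
        "query": q,
        "documents": [
            {"content": doc.page_content, "metadata": doc.metadata, "score": float(score)}
            for doc, score in results
        ],
    }
```

Example usage: `GET /courses/cs101/search?q=binary+search&top_k=6` would return up to six chunks tagged with that course's metadata.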
This is related to #279. An article for reference.