Images automatically extracted from the document are described using a VLM (vision-language model) agent. From these descriptions, a question-generation agent produces questions; the questions are then posed to the document through the RAG system, and the answers are verified.
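The sketch below outlines this pipeline, assuming Ollama serves a vision model ("llava") and a text model ("llama3") locally; the model names, prompts, and retrieval settings are illustrative assumptions, not the repository's actual code.

```python
# A minimal sketch of the first pipeline: describe image -> generate
# question -> answer via RAG. Model names and prompts are assumptions.
import ollama
from langchain_chroma import Chroma
from langchain_community.embeddings import OllamaEmbeddings

def describe_image(image_path: str) -> str:
    """VLM agent: describe one image extracted from the PDF."""
    resp = ollama.chat(
        model="llava",  # assumed vision model
        messages=[{"role": "user",
                   "content": "Describe this image in detail.",
                   "images": [image_path]}])
    return resp["message"]["content"]

def generate_question(description: str) -> str:
    """Question-generation agent: derive one question from the description."""
    resp = ollama.chat(
        model="llama3",  # assumed text model
        messages=[{"role": "user",
                   "content": "Write one question that can be answered "
                              "from this description:\n" + description}])
    return resp["message"]["content"]

def answer_with_rag(question: str, doc_chunks: list[str]) -> str:
    """RAG step: retrieve relevant document chunks and answer the question."""
    store = Chroma.from_texts(doc_chunks, OllamaEmbeddings(model="llama3"))
    hits = store.similarity_search(question, k=3)
    context = "\n".join(d.page_content for d in hits)
    resp = ollama.chat(
        model="llama3",
        messages=[{"role": "user",
                   "content": f"Context:\n{context}\n\nQuestion: {question}"}])
    return resp["message"]["content"]
```

The final verification step, presumably comparing the RAG answer back against the image description, is omitted from this sketch.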
Images and text are automatically extracted from the document. The text is condensed into a concise summary by a summarization agent; CLIP embeddings are then computed for both the images and the summaries, and their similarities are compared.
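The sketch below illustrates the CLIP comparison step, assuming the Hugging Face transformers implementation and the "openai/clip-vit-base-patch32" checkpoint (transformers and torch are not in the dependency list below, so this is an assumption; the repository's CLIP wrapper may differ).

```python
# A minimal sketch of the second pipeline's CLIP comparison, under the
# assumption that CLIP is loaded via Hugging Face transformers.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def image_text_similarity(image_path: str, summary: str) -> float:
    """Cosine similarity between CLIP embeddings of an image and a summary."""
    inputs = processor(text=[summary], images=Image.open(image_path),
                       return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    # Normalize the projected embeddings, then take their dot product.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return float((img @ txt.T).item())
```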
The first method achieved a similarity rate of 60%, whereas the second method reached only around 33%.
git clone https://github.com/oztrkoguz/VisQueryPDF.git
cd VisQueryPDF
pip install -r requirements.txt  # assumes the pins below live in requirements.txt
python main.py
Python > 3.10
langchain==0.2.6
langchain-chroma==0.1.1
langchain-community==0.0.38
langchain-core==0.1.52
langchain-openai==0.0.5
langchain-text-splitters==0.2.1
langsmith==0.1.82
ollama==0.2.1