- .env.example: environment and parameter settings; replace the API key value with your own OpenAI API key.
- chatGenerate.py: generates CQs using plain LLM prompting; outputs a list of generated CQs (see the generation sketch after this list)
- RAGgenerate.py: generates CQs using RAG; outputs a list of generated CQs
- generate-iteration.py: iterates over each parameter setting in chatGenerate.py or RAGgenerate.py
- similarity.py: computes the similarity between generated CQs and ground-truth CQs (see the similarity sketch after this list)
- similarity-iteration.py: iterates over each parameter setting in similarity.py
- plotting.ipynb: plots the evaluation metrics
- references: papers in PDF format used as input to the RAG
- ground-truth-cqs.txt: 15 ground-truth CQs for the HCI reference ontology
- gpt-output: output of generate-iteration.py; contains generated CQs for different numbers of papers given to the RAG, different temperatures, and different iterations.
- all-cos-results: cosine similarity of all (generated CQ, ground-truth CQ) pairs; for HCI this is 15×15 pairs
- highest-cos-results: cosine similarity of the best (generated CQ, ground-truth CQ) pairs; for HCI this is 15 pairs
- metric-results: precision and recall over 10 iterations (see the precision/recall sketch after this list)
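
The generation scripts themselves are not reproduced here; the following is a minimal sketch of a prompting-only run in the spirit of chatGenerate.py. It assumes the .env file exposes the key as `OPENAI_API_KEY` and that the `openai` and `python-dotenv` packages are used; the model name, prompt wording, and temperature are illustrative placeholders, not the repository's actual settings.

```python
# Minimal sketch of prompting-only CQ generation (cf. chatGenerate.py).
# Assumes .env defines OPENAI_API_KEY; model, prompt, and temperature are illustrative.
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # reads the .env file from the working directory
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

PROMPT = (
    "You are an ontology engineer. Generate 15 competency questions "
    "for a reference ontology about Human-Computer Interaction (HCI). "
    "Return one question per line."
)

def generate_cqs(temperature: float = 0.7, model: str = "gpt-4o-mini") -> list[str]:
    """Ask the LLM for CQs and return them as a list of strings."""
    response = client.chat.completions.create(
        model=model,
        temperature=temperature,
        messages=[{"role": "user", "content": PROMPT}],
    )
    text = response.choices[0].message.content
    return [line.strip() for line in text.splitlines() if line.strip()]

if __name__ == "__main__":
    for cq in generate_cqs():
        print(cq)
```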
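For the similarity step, a minimal sketch is given below. It assumes sentence embeddings (here `sentence-transformers` with the `all-MiniLM-L6-v2` model, an assumption, not necessarily the encoder used in similarity.py) and scikit-learn's `cosine_similarity` to build the 15×15 matrix behind all-cos-results and the per-generated-CQ best matches behind highest-cos-results.

```python
# Minimal sketch of the cosine-similarity step (cf. similarity.py).
# The embedding model is an assumption; the repository may use a different encoder.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

def cq_similarities(gen_cqs: list[str], gt_cqs: list[str]):
    """Return the full pairwise cosine matrix and the best ground-truth match per generated CQ."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    gen_emb = model.encode(gen_cqs)
    gt_emb = model.encode(gt_cqs)

    # all-cos-results: one row per generated CQ, one column per ground-truth CQ (15x15 for HCI)
    all_cos = cosine_similarity(gen_emb, gt_emb)

    # highest-cos-results: best-matching ground-truth CQ for each generated CQ (15 pairs for HCI)
    best_idx = all_cos.argmax(axis=1)
    best_cos = all_cos.max(axis=1)
    return all_cos, list(zip(best_idx.tolist(), best_cos.tolist()))
```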
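How metric-results is derived is not spelled out above; one common way, sketched below under the assumption that a (generated, ground-truth) pair counts as a match when its cosine similarity exceeds a chosen threshold, is to report precision as the fraction of generated CQs that match some ground-truth CQ and recall as the fraction of ground-truth CQs matched by some generated CQ. The matching rule and the 0.7 threshold are illustrative, not the repository's definition.

```python
# Minimal sketch of threshold-based precision/recall over the cosine matrix.
# The matching rule and the 0.7 threshold are assumptions.
import numpy as np

def precision_recall(all_cos: np.ndarray, threshold: float = 0.7) -> tuple[float, float]:
    """all_cos[i, j] = cosine similarity between generated CQ i and ground-truth CQ j."""
    matched_gen = all_cos.max(axis=1) >= threshold  # generated CQs close to some ground-truth CQ
    matched_gt = all_cos.max(axis=0) >= threshold   # ground-truth CQs covered by some generated CQ
    precision = matched_gen.mean()  # fraction of generated CQs that are "correct"
    recall = matched_gt.mean()      # fraction of ground-truth CQs that are covered
    return float(precision), float(recall)
```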