This is a simple API proxy server that lets you interact with an LLM API and the Google Search API.
- CORS enabled
- JSON request body parsing
- Rate limiting for both APIs
- Basic error handling
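The CORS and JSON-parsing features above can be illustrated with a small stdlib-only sketch. This is a hypothetical illustration, not the server's actual code (which may rely on Express middleware instead); the function names `applyCors` and `parseJsonBody` are assumptions:

```javascript
// Hypothetical sketch of the CORS and JSON body parsing listed above,
// using only Node's standard library.
function applyCors(resHeaders) {
  // Allow any origin, plus the methods and headers the proxy needs.
  resHeaders['Access-Control-Allow-Origin'] = '*';
  resHeaders['Access-Control-Allow-Methods'] = 'GET, POST, OPTIONS';
  resHeaders['Access-Control-Allow-Headers'] = 'Content-Type';
  return resHeaders;
}

function parseJsonBody(raw) {
  // Returns the parsed body, or null on malformed JSON
  // (the "basic error handling" from the feature list).
  try {
    return JSON.parse(raw);
  } catch (err) {
    return null;
  }
}
```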
The server uses the following environment variables:
- `LLM_API_URL`: The URL of the LLM API endpoint
- `LLM_API_KEY`: The API key for the LLM API
- `LLM_CHAT_API_URL`: The URL of the LLM Chat API endpoint
- `GOOGLE_SEARCH_API_URL`: The URL of the Google Search API
- `GOOGLE_SEARCH_API_KEY`: The API key for the Google Search API
- `GOOGLE_SEARCH_CX`: The CX (custom search engine ID) for the Google Search API
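A `.env` file using these variables might look like the following; all values are placeholders, not real endpoints or keys:

```shell
LLM_API_URL=https://example.com/v1/completions
LLM_API_KEY=your-llm-api-key
LLM_CHAT_API_URL=https://example.com/v1/chat/completions
GOOGLE_SEARCH_API_URL=https://www.googleapis.com/customsearch/v1
GOOGLE_SEARCH_API_KEY=your-google-api-key
GOOGLE_SEARCH_CX=your-search-engine-id
```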
The server exposes the following routes:
- `POST /api/llm/prompt`: Accepts a JSON body with `model` and `prompt` parameters and forwards the request to the LLM API. It is rate limited to 10 requests per minute and 1000 requests per day.
- `GET /api/search`: Accepts a `q` query parameter and forwards the request to the Google Search API. It is rate limited to 100 requests per day.
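The rate limits described above could be enforced with a simple fixed-window counter. The sketch below is a hypothetical illustration only: the `FixedWindowLimiter` class and the way the two LLM limits are combined are assumptions, not the server's actual implementation.

```javascript
// Hypothetical fixed-window rate limiter mirroring the limits above.
class FixedWindowLimiter {
  constructor(limit, windowMs, now = Date.now()) {
    this.limit = limit;       // max requests per window
    this.windowMs = windowMs; // window length in milliseconds
    this.count = 0;
    this.windowStart = now;
  }

  // Returns true if the request is allowed (and counts it), false otherwise.
  allow(now = Date.now()) {
    if (now - this.windowStart >= this.windowMs) {
      // Window expired: start a fresh one.
      this.windowStart = now;
      this.count = 0;
    }
    if (this.count >= this.limit) return false;
    this.count += 1;
    return true;
  }
}

// The LLM route applies two limits at once: 10/minute and 1000/day.
const perMinute = new FixedWindowLimiter(10, 60 * 1000);
const perDay = new FixedWindowLimiter(1000, 24 * 60 * 60 * 1000);

function llmRequestAllowed(now = Date.now()) {
  // Simplification: if the minute window denies, the day counter is
  // never touched; a production limiter would check both windows
  // before committing either.
  return perMinute.allow(now) && perDay.allow(now);
}
```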
- Clone the repository
- Install the dependencies with `npm install`
- Set the environment variables in a `.env` file like in `.env.exemple`
- Start the server with `npm start`
You can use any HTTP client to send requests to the server. Here's an example using `curl`:
```shell
# LLM API
curl http://localhost:8000/api/llm/prompt \
  -H "Content-Type: application/json" \
  -d '{"model": "Meta-Llama-3-8B-Instruct", "prompt": "How to save the world?"}'

# Google Search API
curl "http://localhost:8000/api/search?q=How%20to%20make%20a%20webserver%20in%20Rust?"
```