Slift API is a text analysis service that swiftly identifies inappropriate language in text messages. Designed for easy integration, it gives developers a quick and reliable way to moderate user-generated content.
- Content Analysis: Detects and flags potentially inappropriate language, identifying specific words or phrases that triggered the detection.
- Scoring System: Provides a confidence score indicating the likelihood of sensitive content.
- Backend: Hono, a lightweight web framework that serves the analysis endpoint
- Database: Upstash Vector (vector database) for efficient data storage and retrieval
- Deployment: Cloudflare Workers for scalable and efficient API hosting
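As a minimal sketch of how these pieces might fit together, the handler below assumes a Hono route on Cloudflare Workers querying an Upstash Vector index that has an embedding model attached. The binding names, the `0.9` threshold, and the metadata shape are illustrative assumptions, not the repository's actual code:

```ts
// Hypothetical request flow: a Hono route that asks Upstash Vector
// for the flagged phrase closest to the incoming message.
import { Hono } from "hono";
import { Index } from "@upstash/vector";

const app = new Hono<{ Bindings: { VECTOR_URL: string; VECTOR_TOKEN: string } }>();

app.post("/", async (c) => {
  const { message } = await c.req.json<{ message: string }>();

  const index = new Index({
    url: c.env.VECTOR_URL,   // assumed binding names
    token: c.env.VECTOR_TOKEN,
  });

  // With an embedding model attached to the index, Upstash Vector can
  // embed raw text passed via `data`; topK: 1 returns the closest match.
  const [match] = await index.query({ data: message, topK: 1, includeMetadata: true });

  const score = match?.score ?? 0;
  return c.json({
    isProfane: score > 0.9,            // threshold is an assumption
    score,
    flaggedFor: match?.metadata?.text, // metadata shape is an assumption
  });
});

export default app;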
- Clone the repository:

  ```bash
  git clone https://github.com/MonalBarse/slift-api
  ```

- Install dependencies:

  ```bash
  cd slift-api
  npm install
  ```

- Run the application:

  ```bash
  npm start
  ```
Ensure you have a `.env` file in the root directory with the following configuration:

```
PORT=3000
API_KEY=your_api_key
NODE_ENV=production
```
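If `API_KEY` is meant to guard the endpoint, a Hono middleware along these lines could enforce it. This is a sketch only; the header name and the unauthorized response shape are assumptions:

```ts
import { Hono } from "hono";

const app = new Hono<{ Bindings: { API_KEY: string } }>();

// Reject any request whose x-api-key header (assumed name) does not
// match the configured key before it reaches the analysis route.
app.use("*", async (c, next) => {
  if (c.req.header("x-api-key") !== c.env.API_KEY) {
    return c.json({ error: "Unauthorized" }, 401);
  }
  await next();
});
```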
To use the Slift API, send a POST request to the hosted endpoint:

```
https://slift-api.monalbarse.workers.dev/
```

Request body:

```json
{
  "message": "What da ****"
}
```

Response:

```json
{
  "isProfane": true,
  "score": 0.9,
  "flaggedFor": "****"
}
```

Example using curl:

```bash
curl -X POST https://slift-api.monalbarse.workers.dev/ \
  -H "Content-Type: application/json" \
  -d '{"message": "What da ****"}'
```
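For programmatic use, the same call works from any fetch-capable runtime (Node 18+, browsers, Workers). The response type below simply mirrors the fields documented in this README:

```ts
// Call the hosted endpoint and return the typed analysis result.
async function checkMessage(message: string) {
  const res = await fetch("https://slift-api.monalbarse.workers.dev/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json() as Promise<{
    isProfane: boolean;
    score: number;
    flaggedFor?: string;
  }>;
}

checkMessage("What da ****").then((result) => {
  if (result.isProfane) {
    console.log(`Flagged (score ${result.score}): ${result.flaggedFor}`);
  }
});
```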
- The `isProfane` field indicates whether the message contains inappropriate language.
- The `score` field provides a confidence level of the inappropriate content, ranging from 0 to 1.
- The `flaggedFor` field shows the specific word or phrase that triggered the detection.
Feel free to contribute by:
- Reporting issues
- Adding new features
- Improving existing code
To contribute:
- Fork the repository.
- Create a new branch (`git checkout -b feature-branch`).
- Make your changes and commit them (`git commit -am "build:"`). Follow these guidelines for any commit messages.
- Push to the branch (`git push origin feature-branch`).
- Create a new Pull Request.