X-Model server for boosting ML inference
A server that handles everything from inference to deployment and boosts performance, without requiring any heavy lifting from you in the backend.
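The performance boost typically comes from micro-batching: grouping concurrent inference requests into one model call to amortize per-call overhead. A minimal sketch of the idea (function names here are illustrative, not this repo's API):

```python
def microbatch(requests, max_batch_size=4):
    """Group incoming requests into batches of at most max_batch_size."""
    return [requests[i:i + max_batch_size]
            for i in range(0, len(requests), max_batch_size)]

def run_batched(requests, model_fn, max_batch_size=4):
    """Run model_fn once per batch, then flatten results back to per-request order."""
    outputs = []
    for batch in microbatch(requests, max_batch_size):
        outputs.extend(model_fn(batch))  # one forward pass per batch
    return outputs

# Usage: a toy "model" that doubles its inputs, over 10 requests in 3 batches.
doubled = run_batched(list(range(10)), lambda xs: [2 * x for x in xs])
print(doubled)
```

A real server would also flush a partial batch after a short timeout so low-traffic requests are not stuck waiting for a full batch.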
-
Clone this repo & install
git clone https://github.com/biswaroop1547/microbatcher.git && cd microbatcher && make install
-
Define model path and start server