- Language-agnostic PyTorch model serving
- Serve JIT compiled PyTorch model in production environment
This project is an extension of Brusta, the original project with Scala/Java support
- docker == 18.09.1
- go >= 1.13
- your JIT-traced PyTorch model (if you are not familiar with JIT tracing, please refer to the JIT Tutorial)
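If you do not yet have a traced model, the step can be sketched as follows. This is a minimal example assuming a toy linear model with input dimension 3; substitute your own network and a representative example input.

```python
import torch

# Placeholder model with input dimension 3; replace with your own network.
model = torch.nn.Linear(3, 1)
model.eval()

# Trace the model with an example input of the expected shape.
example_input = torch.rand(1, 3)
traced = torch.jit.trace(model, example_input)

# Save the traced model; this is the file you load on the model server.
traced.save("model.pt")
```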
- Run "make" to build your PyTorch model server binary (libtorch must be pre-installed)
- Load your traced PyTorch model file on the model server
- Run the model server
- TBD
- TBD
Send a request to the model server as follows (suppose your input dimension is 3):
curl -X POST -d '{"input":[1.0, 1.0, 1.0]}' localhost:8080/predict
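The same request can be issued programmatically. A minimal Python sketch using only the standard library, assuming the server is running on `localhost:8080`; the `predict` helper name is illustrative, not part of the project:

```python
import json
import urllib.request

def predict(inputs, url="http://localhost:8080/predict"):
    """POST an input vector to the model server and return the parsed JSON response."""
    payload = json.dumps({"input": inputs}).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example call (requires the model server to be running):
# predict([1.0, 1.0, 1.0])
```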
- YongRae Jo ([email protected])
- YoonHo Jo ([email protected])
- GiChang Lee ([email protected])
- SukHyun Ko (s3cr3t)
- Seunghwan Hong ([email protected])
- Alex Kim ([email protected], Original project)