Work on FFI bindings into the C++ libtorch library in preparation for 0.2, which targets PyTorch's post-1.0 libtorch backend.
The general approach is to use the generated Declarations.yaml spec, rather than header parsing, for code generation.
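As a rough illustration of this approach, the sketch below loads Declarations.yaml with the yaml package and lists operator names. This is not the actual codegen code: the name field is an assumption about the schema, and the real spec carries far more information per entry.

```haskell
{-# LANGUAGE DeriveGeneric #-}

-- Minimal sketch: deserialize Declarations.yaml and print the operator names.
-- The "name" key is an assumed part of the schema; the real codegen consumes
-- a much richer record per declaration.
import Data.Yaml (FromJSON, decodeFileEither)
import GHC.Generics (Generic)

newtype Declaration = Declaration
  { name :: String
  } deriving (Show, Generic)

instance FromJSON Declaration

main :: IO ()
main = do
  parsed <- decodeFileEither "Declarations.yaml"
  case parsed of
    Left err    -> print err                          -- malformed spec
    Right decls -> mapM_ (putStrLn . name) (decls :: [Declaration])
```

The actual codegen walks the full declaration records and emits the ffi/ modules from them.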
Project structure:

- codegen/ - code generation; parses the Declarations.yaml spec from pytorch and produces ffi/ contents
- deps/ - submodules for dependencies - libtorch, mklml, pytorch
- examples/ - high-level example models (xor mlp, typed cnn)
- ffi/ - low-level FFI bindings to libtorch
- hasktorch/ - higher-level user-facing library; calls into ffi/, used by examples/
- inline-c/ - submodule to the inline-c fork used for C++ FFI (see the sketch after this list)
- spec/ - specification files used for codegen/
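To give a sense of how the C++ FFI layer works, here is a small hand-written sketch in the inline-c C++ style, calling straight into libtorch. It is not the generated ffi/ API; it assumes the libtorch headers and shared libraries are already on the include and link paths (e.g. after the setup steps below).

```haskell
{-# LANGUAGE QuasiQuotes     #-}
{-# LANGUAGE TemplateHaskell #-}

-- Illustrative only: inline C++ calling into libtorch, not the generated
-- ffi/ bindings. Assumes torch/torch.h is on the include path and libtorch
-- is available at link/run time.
import qualified Language.C.Inline.Cpp as C

C.context C.cppCtx
C.include "<torch/torch.h>"
C.include "<iostream>"

main :: IO ()
main =
  -- build a 2x3 tensor of ones inside C++ and print it
  [C.block| void {
      torch::Tensor t = torch::ones({2, 3});
      std::cout << t << std::endl;
  } |]
```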
deps/ holds several external dependencies that are retrieved using the deps/get-deps.sh script. This should be run prior to building.
The following steps should run the xor mlp example:
# Download libtorch-binary and other shared library dependencies
pushd deps
./get-deps.sh
popd
# Set shared library environment variables
source setenv
stack build examples
stack exec xor_mlp
Code generation is used to build low-level FFI functions. Note that the code is already generated in this repo under ffi; running it again is only needed if changes are being made to the code generation process.
To run:
stack build codegen
stack exec codegen-exe
To get CLI options:
stack exec codegen-exe -- --help
Contributions/PRs are welcome.