DEEPaaS API for TensorFlow Benchmarks

tf_cnn_benchmarks from the TensorFlow team, accessed via DEEPaaS API V2.

This is a wrapper to access TF Benchmarks, not the benchmarks code itself! You have to install tf_cnn_benchmarks and TensorFlow Models yourself and make them accessible in Python. For example, for TF 1.10.0:

$ git clone --depth 1 -b cnn_tf_v1.10_compatible https://github.com/tensorflow/benchmarks.git
$ export PYTHONPATH=$PYTHONPATH:$PWD/benchmarks/scripts/tf_cnn_benchmarks
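
TensorFlow Models needs to be importable as well. A minimal sketch, assuming the repository's default branch works with your TensorFlow version (otherwise check out a matching release; the exact subdirectories you must add to PYTHONPATH depend on which model code gets imported):

$ git clone --depth 1 https://github.com/tensorflow/models.git
$ # NOTE: assumption -- adding the repository root may not be enough for all models
$ export PYTHONPATH=$PYTHONPATH:$PWD/models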

The recommended way to run TF Benchmarks through the DEEPaaS API is to use our Docker images, also available through the AI4OS Hub. These images already contain both TF Benchmarks and the DEEPaaS API.
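
A run might then look like the following sketch; the image name and tag are assumptions here, so check the module's AI4OS Hub entry for the actual ones:

$ # hypothetical image name -- look up the real one on the AI4OS Hub
$ docker run -ti -p 5000:5000 ai4oshub/tf-cnn-benchmarks-api

DEEPaaS listens on port 5000 by default, so the API should then be reachable at http://localhost:5000, and the deployed models can be listed with something like:

$ curl http://localhost:5000/v2/models/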

tf_cnn_benchmarks contains implementations of several popular convolutional models and is designed to be as fast as possible. tf_cnn_benchmarks supports both running on a single machine and running in distributed mode across multiple hosts. See the High-Performance Models guide for more information.
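
For reference, outside of the API the benchmarks are typically launched directly; a minimal single-GPU sketch, with flags as documented in the tf_cnn_benchmarks README:

$ cd benchmarks/scripts/tf_cnn_benchmarks
$ python tf_cnn_benchmarks.py --num_gpus=1 --batch_size=32 --model=resnet50 --variable_update=parameter_server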

These models utilize many of the strategies in the TensorFlow Performance Guide. Benchmark results can be found here.

These models are designed for performance. For models that have clean and easy-to-read implementations, see the TensorFlow Official Models.

Project Organization

├── LICENSE
├── README.md              <- The top-level README for developers using this project.
├── data
│   └── raw                <- The original, immutable data dump.
│
├── docs                   <- A default Sphinx project; see sphinx-doc.org for details
│
├── models                 <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks              <- Jupyter notebooks. Naming convention is a number (for ordering),
│                             the creator's initials (if several people develop notebooks),
│                             and a short `_` delimited description, e.g.
│                             `1.0-jqp-initial_data_exploration.ipynb`.
│
├── references             <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports                <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures            <- Generated graphics and figures to be used in reporting
│
├── requirements.txt       <- The requirements file for reproducing the analysis environment, e.g.
│                             generated with `pip freeze > requirements.txt`
├── test-requirements.txt  <- The requirements file for the test environment
│
├── setup.py               <- makes project pip installable (pip install -e .) so benchmarks_cnn_api can be imported (see below)
├── benchmarks_cnn_api     <- Source code for use in this project.
│   ├── __init__.py        <- Makes benchmarks_cnn_api a Python module
│   │
│   ├── dataset            <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features           <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models             <- Scripts to train models and then use trained models to make
│   │   │                     predictions
│   │   └── deep_api.py
│   │
│   ├── tests              <- Scripts to perform code testing + pylint script
│   │
│   └── visualization      <- Scripts to create exploratory and results-oriented visualizations
│       └── visualize.py
│
└── tox.ini                <- tox file with settings for running tox; see tox.testrun.org
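
As noted for setup.py above, the package can be installed in development mode; a minimal sketch, run from the repository root:

$ pip install -e .
$ python -c "import benchmarks_cnn_api"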

Project based on the cookiecutter data science project template. #cookiecutterdatascience