(listed in alphabetical order)
- Po-Chun Chien, [email protected]
- Yu-Shan Huang, [email protected]
- Nai-Ning Ji, [email protected]
- Hao-Ren Wang, [email protected]
- Prof. Jie-Hong Roland Jiang (supervisor), [email protected]
From the 10 classes of the CIFAR-10 dataset, we select 2 classes and train a binary classifier on the corresponding subset of the data. In total, there are C(10, 2) = 45 binary classifiers, one for each class pair, and their outputs vote for the final prediction. This approach is often referred to as the one-against-one (OAO) method for constructing a multi-class classifier from binary classifiers. We adopt the decision tree classifier from scikit-learn for binary classification. To restrict the size of the tree and to avoid overfitting, we limit the maximum depth of the tree (`max_depth`) and perform cost-complexity pruning (`ccp_alpha`).
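A minimal sketch of this pairwise training and voting scheme with scikit-learn is shown below; the parameter values and the helper names (`train_pairwise_trees`, `predict_oao`) are illustrative assumptions, not our fine-tuned settings.

```python
import itertools
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_pairwise_trees(X, y, max_depth=8, ccp_alpha=1e-4):
    """Train one depth-limited, pruned decision tree per class pair (OAO)."""
    trees = {}
    for a, b in itertools.combinations(range(10), 2):  # C(10, 2) = 45 pairs
        mask = (y == a) | (y == b)
        clf = DecisionTreeClassifier(max_depth=max_depth, ccp_alpha=ccp_alpha)
        clf.fit(X[mask], y[mask])
        trees[(a, b)] = clf
    return trees

def predict_oao(trees, X):
    """Each pairwise tree votes for one of its two classes; the most-voted class wins."""
    votes = np.zeros((X.shape[0], 10), dtype=int)
    for (a, b), clf in trees.items():
        pred = clf.predict(X)
        votes[:, a] += (pred == a)
        votes[:, b] += (pred == b)
    return votes.argmax(axis=1)
```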
We train 10 of the 'small' classifiers described in the previous section, each on a different subset of the dataset (to increase the diversity of the classifiers). The final prediction is decided by majority voting among the 10 classifiers.
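A hedged sketch of this ensemble, reusing the hypothetical `train_pairwise_trees` and `predict_oao` helpers from the previous sketch (the random-subset scheme and its size are assumptions), might look like:

```python
import numpy as np

def train_ensemble(X, y, n_models=10, subset_ratio=0.8, seed=0):
    """Train 10 'small' OAO models, each on a different random subset of the training data."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.choice(len(X), size=int(subset_ratio * len(X)), replace=False)
        models.append(train_pairwise_trees(X[idx], y[idx]))
    return models

def predict_ensemble(models, X):
    """Majority vote over the class predictions of the 10 models."""
    preds = np.stack([predict_oao(m, X) for m in models])             # (10, n_samples)
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=10)  # (10 classes, n_samples)
    return votes.argmax(axis=0)
```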
We train a shallow convolutional neural network (CNN) with grouped convolutions and quantized weights. The CNN contains 2 convolutional layers and 2 dense (fully connected) layers. The weights of the convolutional kernels and the first dense layer are restricted to powers of 2 (i.e., 2⁻¹, 2⁰, 2¹, ...) and 0, and the weights of the second dense layer (which also serves as the output layer) are represented as 4-bit fixed-point numbers. The quantized CNN model is then synthesized with sub-adder sharing to reduce the circuit size (≈30% fewer gates with sharing enabled).
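As a rough illustration (not our actual training code), the two quantization schemes could be approximated as follows; the exponent range and the integer/fraction bit split are assumptions.

```python
import numpy as np

def quantize_pow2(w, min_exp=-4, max_exp=3):
    """Snap each weight to 0 or to the nearest signed power of two (exponent range assumed)."""
    w = np.asarray(w, dtype=float)
    mag = np.abs(w)
    exp = np.clip(np.round(np.log2(np.maximum(mag, 2.0 ** min_exp))), min_exp, max_exp)
    q = np.sign(w) * 2.0 ** exp
    q[mag < 2.0 ** (min_exp - 1)] = 0.0  # weights too small to represent collapse to 0
    return q

def quantize_fixed_point(w, total_bits=4, frac_bits=2):
    """Round each weight to a signed 4-bit fixed-point value (bit split assumed)."""
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1  # signed integer range
    return np.clip(np.round(np.asarray(w) * scale), lo, hi) / scale
```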
We apply the following preprocessing methods to the CIFAR-10 dataset (a rough sketch of the pixel-level steps follows the list).
- Image downsampling.
- Image augmentation.
- Truncating several least significant bits of each image pixel.
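As a rough sketch of the downsampling and bit-truncation steps, assuming images are stored as 8-bit RGB arrays (the factor and the number of truncated bits are illustrative):

```python
import numpy as np

def downsample(img, factor=2):
    """Downsample an HxWxC uint8 image by average-pooling non-overlapping blocks."""
    h, w, c = img.shape
    img = img[:h - h % factor, :w - w % factor]  # crop so dimensions divide evenly
    blocks = img.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

def truncate_bits(img, n_bits=2):
    """Zero out the n least significant bits of every pixel."""
    return (img >> n_bits) << n_bits
```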
The circuits generated by our learning programs are hierarchical Verilog designs. We use YOSYS to synthesize the designs into and-inverter graph (AIG) netlists, and further optimize the netlists with a combination of the ABC commands `dc2`, `resyn`, `resyn2rs` and `ifraig`.
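A minimal sketch of how such a flow could be driven from Python is given below; the file names, the exact YOSYS script, and the availability of ABC's standard `abc.rc` aliases (`resyn`, `resyn2rs`) are assumptions, and our actual scripts may invoke the tools differently.

```python
import subprocess

def synthesize(verilog_file, aig_file, top="top"):
    """Synthesize a hierarchical Verilog design into an AIG and optimize it with ABC."""
    # YOSYS: read the Verilog, flatten the hierarchy, map to an AIG, and write AIGER
    yosys_script = (
        f"read_verilog {verilog_file}; hierarchy -top {top}; flatten; "
        f"synth; aigmap; write_aiger {aig_file}"
    )
    subprocess.run(["yosys", "-p", yosys_script], check=True)

    # ABC: apply the optimization commands and write the optimized AIG back
    abc_script = f"read_aiger {aig_file}; dc2; resyn; resyn2rs; ifraig; write_aiger {aig_file}"
    subprocess.run(["abc", "-c", abc_script], check=True)
```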
The 3 AIGs `small.aig`, `medium.aig` and `large.aig` can be found in `submit_AIGs/`. Their sizes and accuracies[1] evaluated by the ABC command `&iwls21test` are listed below.
|                   | `small.aig` | `medium.aig` | `large.aig` |
|-------------------|-------------|--------------|-------------|
| size (#AIG-nodes) | 9,697       | 97,350       | 995,247     |
| training acc. (%) | 44.96       | 56.77        | 59.33       |
| testing acc. (%)  | 39.31       | 44.69        | 54.68       |
We only used the CIFAR-10 testing data ONCE for each submitted circuit in the 3 size categories, for the purpose of final evaluation right before the submission. That is, we never used the testing dataset during the course of our research.
After the submission deadline (June 25, 2021) of the contest, we took some time to fine-tune the hyper-parameters and to fix a bug that caused a ≈1% accuracy degradation during large circuit generation. The 3 newer versions of the AIGs, `small_new.aig`, `medium_new.aig` and `large_new.aig`, can also be found in `submit_AIGs/`. Their sizes and accuracies are listed below. Each of the 3 newer circuits not only achieves a higher testing accuracy, but also has a smaller generalization gap (the margin between training and testing accuracy) than the originally submitted one.
|                   | `small_new.aig` | `medium_new.aig` | `large_new.aig` |
|-------------------|-----------------|------------------|-----------------|
| size (#AIG-nodes) | 9,273           | 99,873           | 967,173         |
| training acc. (%) | 43.89           | 54.99            | 59.18           |
| testing acc. (%)  | 39.51           | 45.44            | 56.34           |
[1]: The training accuracy is computed by averaging the accuracy achieved on each of the training data batches (`data_batch` 1~5).
Please install the required pip packages specified in `requirements.txt`.

```
pip3 install -r requirements.txt
```
We also provide the `Dockerfile` to build a docker image capable of executing our code.

```
docker build -t iwls2021 ./
docker run -it iwls2021
```
How To Run [2]
- It is recommended to clone this repository with the `--recurse-submodules` flag.

  ```
  git clone --recurse-submodules git@github.com:NTU-ALComLab/IWLS2021.git
  ```
- Clone and build ABC in `tools/abc/`.

  ```
  cd tools
  git clone git@github.com:berkeley-abc/abc.git  # if it hasn't already been cloned
  cd abc
  make
  cd ../..
  ```
- Clone and build YOSYS in `tools/yosys/`.

  ```
  cd tools
  git clone git@github.com:YosysHQ/yosys.git  # if it hasn't already been cloned
  cd yosys
  make
  cd ../..
  ```
- Before running the circuit learning programs, re-format the original CIFAR-10 dataset with the provided script `data/reformat.py`. (You may refer to `data/format.md` to see how the dataset is loaded and re-organized.)

  ```
  python3 data/reformat.py
  ```
- To generate the small circuit (with no more than 10,000 AIG-nodes), run the script `small.py`. The output circuit can be found at `small/small.aig`.

  ```
  python3 small.py
  ```
- To generate the medium circuit (with no more than 100,000 AIG-nodes), run the script `medium.py`. The output circuit can be found at `medium/medium.aig`.

  ```
  python3 medium.py
  ```
- To generate the large circuit (with no more than 1,000,000 AIG-nodes), run the script `large.sh`. The output circuit can be found at `large/large.aig`.

  ```
  bash large.sh
  ```
- If you want to train a decision-tree-based model with customized parameters instead of our fine-tuned ones, run the script `main.py` and use the flag `--help` to see the help messages.

  ```
  python3 main.py  # execute with default arguments
  ```
[2]: Note that there is some randomness in our procedures, so the results may differ from run to run. Please let us know if there is any problem executing the programs.