This project lets you send a digitally signed image stream from an Outsourcer (a Raspberry Pi) to two machines on the local network. One remote machine acts as a Contractor, the other as a Verifier. The Contractor receives all images, while the Verifier only receives random samples. Whenever the Contractor and the Verifier send back a signed object detection result for the same image, the Outsourcer checks whether both results are equal. At the end of a contract, the signed messages can be used as proof to redeem payment or to convict a party of cheating.
Supported models for object detection on a regular GPU and CPU are YOLOv4 and YOLOv3 using TensorFlow, TFLite, or TensorRT (deterministic mode only) as the framework. Tiny weights and custom weights can be used as well.
The supported model for object detection on a Coral USB Accelerator is Mobilenet SSD V2.
Contract violations fall into two categories: (1) Quality of Service (QoS) violations due to timeouts or to not receiving/acknowledging enough outputs, and (2) dishonest behavior. Consequences of QoS violations can be blacklisting and bad reviews. Consequences of dishonest behavior can be fines and refusal of payment. Every party accused of dishonest behavior has the right to contest within a deadline, provided additional Verifiers are available. The verification scheme predicts or detects the types of protocol violations listed below with high confidence.
| Type of Violation | Referred Number | Description | Techniques | Confidence |
|---|---|---|---|---|
| Dishonest Behavior by Individual | 1 | Contractor sends back false response to save resources | Sampling-based re-execution | Up to 100% |
| Dishonest Behavior by Individual | 2 | Verifier sends back false response to save resources | Contestation | 100% |
| Dishonest Behavior by Individual | 3 | Outsourcer sends different input to Contractor and Verifier to refuse payment | Digital Signatures (signature chain), Contestation | 100% |
| Dishonest Behavior by Individual | 4 | Contractor or Verifier tries to avoid global penalties | Digital Signatures | 100% |
| Dishonest Behavior by Individual | 5 | Participant refuses to pay even if obliged to by the protocol | TTP or Blockchain that is authorized to conduct payment on behalf of another entity | 100% |
| Dishonest Behavior via Collusion | 6 | Outsourcer and Verifier collude to refuse payment and save resources | Randomization, Game-theoretic incentives, Contestation | 100% |
| Dishonest Behavior via Collusion | 7 | Contractor and Verifier collude to save resources | Randomization, Game-theoretic incentives | High confidence |
| QoS Violation | 8 | Timeouts, Low Response Rate, High Response Time | Blacklisting, Review system, Contract abortion | 100% |
| External Threat | 9 | Message Tampering | Digital Signatures | 100% |
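For intuition, the first technique in the table (sampling-based re-execution) boils down to an equality check on the Outsourcer's side. The sketch below uses hypothetical names rather than the actual functions of Outsourcer.py.

```python
# Hypothetical sketch of the sampling-based re-execution check on the Outsourcer's
# side -- function and variable names are illustrative, not taken from Outsourcer.py.
def check_sampled_frame(contractor_results: dict, verifier_result, frame_id) -> bool:
    """Compare the Verifier's result for a sampled frame against the Contractor's
    signed result for the same frame; a mismatch indicates dishonest behavior."""
    contractor_result = contractor_results.get(frame_id)
    if contractor_result is None:
        # The Contractor never answered this sampled frame in time.
        return False
    return contractor_result == verifier_result
```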
To detect dishonest behavior or QoS violations, our scripts check the following exit conditions during execution (one of the Merkle Tree checks is sketched after the list):
- Contractor did not connect in time
- Verifier did not connect in time
- Contractor response is ill-formatted
- Verifier response is ill-formatted
- Contractor signature does not match its response
- Verifier signature does not match its response
- Contractor response delay rate is too high
- Verifier has failed to process enough samples in time
- No root hash received for the current interval in time
- Merkle tree leaf node does not match earlier sent response
- Contractor signature of challenge-response is incorrect
- Leaf is not contained in Merkle Tree
- Contractor signature of root hash received at challenge response does not match previously signed root hash
- Merkle Tree proof of membership challenge-response was not received in time
- Outsourcer signature does not match the input
- Outsourcer did not acknowledge enough outputs
- Outsourcer timed out
- Contractor's Merkle Tree is built on responses that do not match the Verifier's responses
- Contractor response and Verifier sample are not equal
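To make the Merkle Tree conditions above more concrete, here is a minimal, hypothetical sketch of a proof-of-membership check. The hash function (SHA3-256) matches the one used for contract hashes later in this README, but the exact hashing and ordering rules of the actual scripts may differ.

```python
import hashlib

def sha3(data: bytes) -> bytes:
    """SHA3-256 digest helper."""
    return hashlib.sha3_256(data).digest()

def verify_membership(leaf: bytes, proof: list, root_hash: bytes) -> bool:
    """Check that a leaf (e.g. a hashed response) belongs to the Merkle Tree whose
    root hash the Contractor signed. `proof` is a list of (sibling_hash, sibling_is_left)
    pairs from the leaf level up to the root."""
    node = sha3(leaf)
    for sibling_hash, sibling_is_left in proof:
        node = sha3(sibling_hash + node) if sibling_is_left else sha3(node + sibling_hash)
    # The exit condition "Leaf is not contained in Merkle Tree" corresponds to this
    # comparison failing for the challenged response.
    return node == root_hash
```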
You can modify the thresholds for QoS violations by editing parameters.py.
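The parameter names below are purely illustrative (the real parameters.py defines its own names and defaults); they only show the kind of thresholds you can tune.

```python
# Hypothetical excerpt of parameters.py -- names and values are illustrative,
# not the actual configuration shipped with this repository.
SAMPLING_INTERVAL = 30            # send every 30th frame to the Verifier
MAX_ALLOWED_LOSS_RATE = 0.05      # abort if more than 5% of responses are missing or late
CONNECT_TIMEOUT_SECONDS = 10      # "Contractor/Verifier did not connect in time" threshold
USE_MERKLE_TREES = True           # batch responses into Merkle Trees
```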
Set up the Python environment with either Conda or pip:

```bash
# TensorFlow CPU (Conda environment)
conda env create -f verified-outsourcing-cpu.yml
conda activate verified-outsourcing-cpu

# TensorFlow GPU (Conda environment)
conda env create -f verified-outsourcing-gpu.yml
conda activate verified-outsourcing-gpu
```

```bash
# TensorFlow CPU (pip)
pip install -r requirements.txt

# TensorFlow GPU (pip)
pip install -r requirements-gpu.txt
```
Make sure to use CUDA Toolkit version 10.1 as it is the correct version for the TensorFlow version used in this repository. https://developer.nvidia.com/cuda-10.1-download-archive-update2
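To confirm that your TensorFlow GPU installation can actually see the GPU after installing the CUDA Toolkit, you can run a quick check (standard TensorFlow calls; this snippet is not part of the repository):

```python
# Quick CUDA/GPU visibility check -- not part of the repository's scripts.
import tensorflow as tf

print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))
```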
To use a Coral USB Accelerator, follow the official setup guide: https://coral.ai/docs/accelerator/get-started/
The Outsourcer only relies on the outsourcer folder. Copy it to your Raspberry Pi and install all required Python dependencies. OpenCV can be installed by following this guide: https://qengineering.eu/install-opencv-4.2-on-raspberry-pi-4.html
The Edge TPU model is already contained in this repository because it is only 6MB in size.
YOLOv4 comes pre-trained and able to detect 80 classes. For easy demo purposes, you can use the pre-trained weights. Download the pre-trained yolov4.weights file here: https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT
Copy and paste yolov4.weights from your downloads folder into the 'data' folder of this repository.
If you want to use yolov4-tiny.weights, a smaller model that is faster at running detections but less accurate, download the file here: https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights
Copy your custom .weights file into the 'data' folder and your custom .names file into the 'data/classes/' folder.
The only change you need to make in the code for your custom model to work is on line 14 of the 'core/config.py' file. Update it to point at your custom .names file, as sketched below. (The example custom .names file is called custom.names, but yours might be named differently.)
Note: If you are using the pre-trained YOLOv4, make sure that line 14 keeps pointing at coco.names.
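As a rough sketch of that change (assuming line 14 assigns the classes path to a `__C.YOLO.CLASSES` entry, as in the upstream yolov4-custom-functions code; check the actual variable name in your copy):

```python
# core/config.py, line 14 -- excerpt only; the variable name may differ in your copy.
# Pre-trained YOLOv4 (default):
# __C.YOLO.CLASSES = "./data/classes/coco.names"
# Custom model (replace custom.names with the name of your own .names file):
__C.YOLO.CLASSES = "./data/classes/custom.names"
```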
To run YOLOv4 with TensorFlow, first convert the .weights file into the corresponding TensorFlow model files, then run the model.
```bash
# Convert darknet yolov4.weights into a TensorFlow model
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4

# Convert darknet yolov4-tiny.weights into a TensorFlow model
python save_model.py --weights ./data/yolov4-tiny.weights --output ./checkpoints/yolov4-tiny-416 --input_size 416 --model yolov4 --tiny
```
```
save_model.py:
  --weights: path to weights file
    (default: './data/yolov4.weights')
  --output: path to output
    (default: './checkpoints/yolov4-416')
  --[no]tiny: yolov4 or yolov4-tiny
    (default: 'False')
  --input_size: define input size of export model
    (default: 416)
  --framework: what framework to use (tf, trt, tflite)
    (default: tf)
  --model: yolov3 or yolov4
    (default: yolov4)
```
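If you want to sanity-check a converted model outside of the contractor scripts, a minimal sketch (assuming the default ./checkpoints/yolov4-416 output path and a TensorFlow 2 installation; this snippet is not part of the repository) could look like this:

```python
# Quick sanity check of the exported SavedModel -- not part of the repository's scripts.
import numpy as np
import tensorflow as tf

saved_model = tf.saved_model.load('./checkpoints/yolov4-416')
infer = saved_model.signatures['serving_default']

# Feed one dummy 416x416 RGB image scaled to [0, 1] through the serving signature.
dummy_image = tf.constant(np.random.rand(1, 416, 416, 3).astype(np.float32))
predictions = infer(dummy_image)

# Print the names and shapes of the output tensors produced by the model.
for name, tensor in predictions.items():
    print(name, tensor.shape)
```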
After each machine is set up and at least one model is saved, you can start executing the program. First, open parameters.py and change IPs according to the local IPs of your machines. You can also make changes to the model used, whether you want to use Merkle Trees, sampling intervals, maximum allowed loss rates, and much more. Note that OutsourceContract and VerifierContract have to be identical on your machine running the Outsourcer and your machines running the Contractor and the Verifier, respectively.
Afterward, you can start Outsourcer.py on the Raspberry Pi and either Contractor.py, Contractor_EdgeTpu.py, Contractor_with_multithreading.py, or Contractor_EdgeTpu_with_multithreading.py on the other two machines, depending on which version you want to use. Note that the machine running the Verifier also uses one of the above-listed contractor scripts, but you have to specify in parameters.py that it should behave as a Verifier.
If everything was set up correctly, the Outsourcer will start sending a live webcam image stream to the Contractor and sample images to the Verifier. The Verifier and the Contractor will send back object detection results. All messages sent between machines are signed by the sending entity and verified by the receiving entity, using ED25519 signatures over the message content, the SHA3-256 hash of the Verifier Contract or Outsource Contract, and additional information depending on the setup. You can terminate the contract in the regular way by pressing q in the CV2 output window of the Verifier or Contractor.
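As a rough illustration of this signing scheme (using PyNaCl for ED25519 and hashlib for SHA3-256; the message layout and library choice here are assumptions, not necessarily what the scripts implement):

```python
# Illustrative signing/verification sketch -- message layout and library choice
# are assumptions, not necessarily what Outsourcer.py or Contractor.py implement.
import hashlib
from nacl.signing import SigningKey

# Each party holds a long-term ED25519 key pair.
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

# Bind the message to the contract by including the contract's SHA3-256 hash.
contract_hash = hashlib.sha3_256(b"<serialized OutsourceContract>").hexdigest()
detection_result = '[["person", 0.97, [34, 50, 120, 220]]]'
message = f"{detection_result};{contract_hash};frame=42".encode()

# Sender signs; receiver verifies with the sender's public key.
signed = signing_key.sign(message)
verify_key.verify(signed)  # raises nacl.exceptions.BadSignatureError if tampered
print("signature verified")
```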
This repository re-uses components of the following existing repositories:
- https://github.com/theAIGuysCode/yolov4-custom-functions - to run YOLOv4 with TensorFlow and get formatted outputs
- https://github.com/redlogo/RPi-Stream - to set up a Raspberry Pi image stream and use a Coral USB Accelerator for inference