This project has been moved to DFL2
DFL is a blockchain framework specially optimized for, and integrated with, federated machine learning. In DFL, all contributions are reflected in improvements to model accuracy, and the blockchain database serves as a proof of contribution rather than a distributed ledger.
Here are two tested toolchain configurations:

- Ubuntu 20
  - GCC 9.3.0
  - CMake 3.16
  - Boost 1.76
  - CUDA 10.2 (optional)
- Ubuntu 18 (official Jetson image)
  - GCC 9.4.0
  - CMake 3.16
  - Boost 1.76
  - CUDA 10.2 (under testing)
  - CuDNN 8 (under testing)
Dependencies:

- Caffe: DFL uses Caffe as the machine learning backend; CUDA support is still under testing.
- Boost 1.76
- nlohmann/json
- RocksDB
- LZ4
- OpenSSL
- Install CMake and GCC with C++17 support.
- You can install the above dependencies by executing the shell scripts in the shell folder. In most cases, you should execute these scripts in this order:
If you are going to deploy DFL to a Jetson Nano, you must execute two additional scripts:
1. Compile the DFL executable (the source code is in DFL.cpp; you can find everything you need in the CMake files). Running it starts a node in the DFL network. We also recommend building several tools, listed below:
   - Key generator: generates the private and public keys that are used in the configuration file.
2. Compile your own "reputation algorithm", which defines how the ML models are updated and how other nodes' reputations are updated. This implementation is critical under different dataset distributions and malicious-node ratios. We provide four sample "reputation algorithms" here.
3. Run the DFL executable; it should provide a sample configuration file for you.
4. Modify the configuration file as you wish: peers, node address, private key, public key, etc. Note that batch_size and test_batch_size must be identical to the Caffe solver's configuration. Here is an explanation file for the configuration.
5. DFL receives its ML dataset over the network. An executable called data_injector is provided for the MNIST dataset; use it to inject the dataset into DFL. The current version of data_injector only supports I.I.D. dataset injection.
6. DFL will train the model once it has received enough data for training, and send the trained model as a transaction to other nodes. A node generates a block once it has collected enough transactions, and performs FedAvg once it has received enough models from other nodes.
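The FedAvg aggregation in the last step can be sketched as a plain element-wise average over model parameters. The flat parameter vectors and equal per-node weighting below are simplifying assumptions for illustration, not DFL's actual Caffe-based model interface:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// FedAvg sketch: each received model is assumed to be a flat vector of
// parameters of equal length; the output is their element-wise mean.
std::vector<double> fed_avg(const std::vector<std::vector<double>>& models) {
    assert(!models.empty());
    std::vector<double> out(models[0].size(), 0.0);
    for (const auto& m : models)
        for (std::size_t i = 0; i < out.size(); ++i)
            out[i] += m[i] / models.size();
    return out;
}
```

A real implementation would average the Caffe network's learnable blobs layer by layer, and a reputation algorithm may weight nodes unequally.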
- Perform steps 1, 2, and 4 from the deployment steps above.
- Compile DFL_Simulator_mt (source file: simulator_mt.cpp). This version has multi-threading optimizations.
Some tools:
- Dirichlet_distribution_generator_for_Non_IID: generates Dirichlet distributions for non-IID datasets. Execute it without any arguments to see its usage.
- large_scale_simulation_generator: automatically generates a configuration file for a large number of nodes (the configuration file can exceed 3000 lines, so you'd better use this tool if you want to simulate more than 20 nodes).
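The core idea behind a Dirichlet-based non-IID split can be sketched as follows: draw one Gamma(alpha, 1) sample per class and normalize, giving a node's label proportions. Smaller alpha values produce more skewed (more non-IID) distributions. The function name and class count here are illustrative, not the tool's actual interface:

```cpp
#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Sample a probability vector from Dirichlet(alpha, ..., alpha) by drawing
// independent Gamma(alpha, 1) variates and normalizing them to sum to 1.
std::vector<double> sample_dirichlet(double alpha, std::size_t classes,
                                     std::mt19937& rng) {
    std::gamma_distribution<double> gamma(alpha, 1.0);
    std::vector<double> p(classes);
    double sum = 0.0;
    for (auto& x : p) { x = gamma(rng); sum += x; }
    for (auto& x : p) x /= sum;   // normalize: proportions now sum to 1
    return p;
}
```

Each node would then receive a share of every label according to its sampled proportions.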
- Run the simulator; it should generate a sample configuration file and start the simulation immediately. You can use Ctrl+C to exit.
- Modify the configuration file with the help of this explanation file.
- The simulator will automatically create an output folder, named after the current time, in the executable's path. The configuration file and reputation DLL are also copied to the output folder so the output can easily be reproduced.
We provide a sample simulation output folder here; you can reuse its reputation DLL and configuration. This configuration contains 5 nodes (1 observer), all of which use IID datasets. Please note that this configuration uses HalfFedAvg (output model = 50% previous model + 50% FedAvg output) because there are no malicious nodes.
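The HalfFedAvg update described above is a simple blend of the node's previous model with the FedAvg result. A minimal sketch over flat parameter vectors (an illustrative simplification of the real model layout):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// HalfFedAvg as described in the text:
// output = 0.5 * previous model + 0.5 * FedAvg output.
std::vector<double> half_fed_avg(const std::vector<double>& prev,
                                 const std::vector<double>& fedavg_out) {
    assert(prev.size() == fedavg_out.size());
    std::vector<double> out(prev.size());
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = 0.5 * prev[i] + 0.5 * fedavg_out[i];
    return out;
}
```

Keeping half of the previous model damps the influence of any single aggregation round, which is a reasonable choice when no node is malicious.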
Please refer to this link for a sample reputation algorithm. The SDK API is not written yet.
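Since the SDK API is not written yet, the following is only a hypothetical sketch of what a pluggable reputation algorithm could look like: it receives the models from other nodes together with the current reputation map, and returns the aggregated model. Every name and field here is an assumption for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Hypothetical input: one model received from one node.
struct ReceivedModel {
    std::string node_address;
    std::vector<double> parameters;  // flattened model parameters
    double accuracy;                 // accuracy on the local test set
};

// Hypothetical plug-in interface: aggregate models and (optionally)
// update the reputation map in place.
class ReputationAlgorithm {
public:
    virtual ~ReputationAlgorithm() = default;
    virtual std::vector<double> aggregate(
        const std::vector<ReceivedModel>& models,
        std::map<std::string, double>& reputations) = 0;
};

// Toy implementation: reputation-weighted averaging of parameters.
// Reputations are left unchanged; a real algorithm would also adjust
// them, e.g. penalizing nodes whose models test poorly.
class WeightedAverage : public ReputationAlgorithm {
public:
    std::vector<double> aggregate(
        const std::vector<ReceivedModel>& models,
        std::map<std::string, double>& reputations) override {
        assert(!models.empty());
        double total = 0.0;
        for (const auto& m : models) total += reputations[m.node_address];
        std::vector<double> out(models[0].parameters.size(), 0.0);
        for (const auto& m : models) {
            double w = reputations[m.node_address] / total;
            for (std::size_t i = 0; i < out.size(); ++i)
                out[i] += w * m.parameters[i];
        }
        return out;
    }
};
```

In DFL this logic lives in a compiled DLL that the node loads, so it can be swapped without rebuilding the node itself.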
- A large-scale DFL deployment tool (50+ nodes) is on its way (50% complete). The introducer is under testing.