Neural networks have been among the most popular machine learning algorithms in recent years. FATE provides a federated heterogeneous neural network (Hetero NN) implementation.
This federated heterogeneous neural network framework allows multiple parties to jointly conduct a learning process over partially overlapping user samples with different feature sets, which corresponds to a vertically partitioned virtual data set. An advantage of Hetero NN is that it provides the same level of accuracy as the non-privacy-preserving approach while revealing no information about any party's private data.
The following figure shows the proposed Federated Heterogeneous Neural Network framework.
Party B: We define party B as the data provider that holds both a data matrix and the class labels. Since label information is indispensable for supervised learning, there must be a party with access to the labels y. Party B naturally takes the role of the dominating server in federated learning.
Party A: We define a data provider that holds only a data matrix as party A. Party A plays the role of the client in the federated learning setting.
The data samples are aligned under an encryption scheme. By using a privacy-preserving protocol for inter-database intersection, the parties can find their common users or data samples without compromising the non-overlapping parts of their data sets.
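The intersection protocol itself is beyond the scope of this page. As a toy illustration only (FATE's actual intersection component uses a cryptographic protocol such as RSA-blinded PSI, not plain hashing), the sketch below aligns two parties' samples by exchanging salted hashes of user IDs rather than the IDs themselves:

.. code-block:: python

   # Toy sample alignment on salted ID hashes. This is NOT FATE's real
   # privacy-preserving intersection; it only illustrates the alignment step.
   import hashlib

   def hashed_ids(ids, salt=b"shared-salt"):
       # Map each salted hash back to the local raw ID.
       return {hashlib.sha256(salt + i.encode()).hexdigest(): i for i in ids}

   party_a = hashed_ids(["u1", "u2", "u3", "u5"])
   party_b = hashed_ids(["u2", "u3", "u4", "u5"])

   common = set(party_a) & set(party_b)      # only hashes are exchanged
   aligned = sorted(party_a[h] for h in common)
   print(aligned)                            # ['u2', 'u3', 'u5']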
Party B and party A each build their own bottom neural network model, and the two models may differ. The parties jointly build the interactive layer, a fully connected layer whose input is the concatenation of the two parties' bottom-model outputs; only party B holds the weights of this layer. Finally, party B builds the top neural network model and feeds the output of the interactive layer into it.
Forward Propagation Process consists of three parts.
Part I: Forward Propagation of Bottom Model. Party A and party B feed their own features into their respective bottom models, producing the bottom-model outputs alpha_A and alpha_B.

Part II: Forward Propagation of Interactive Layer. The interactive layer takes the concatenation of alpha_A and alpha_B as input and applies its activation function; party A's output crosses the party boundary under additive homomorphic encryption, denoted [alpha_A].

Part III: Forward Propagation of Top Model. Party B feeds the interactive layer's output into the top model to obtain the prediction.
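The following is a minimal single-process NumPy sketch of these three parts. Layer sizes and activation functions are illustrative assumptions; in FATE the two bottom models run on separate parties, and alpha_A crosses the party boundary only in encrypted form:

.. code-block:: python

   # Minimal single-process sketch of the three forward parts. Shapes and
   # activations are arbitrary choices; encryption is omitted entirely.
   import numpy as np

   rng = np.random.default_rng(0)
   X_A = rng.normal(size=(8, 4))          # party A: 8 samples, 4 features
   X_B = rng.normal(size=(8, 6))          # party B: 8 samples, 6 features

   W_bot_A = rng.normal(size=(4, 3))      # party A's bottom model
   W_bot_B = rng.normal(size=(6, 3))      # party B's bottom model
   W_inter = rng.normal(size=(6, 5))      # interactive layer, held by party B
   W_top = rng.normal(size=(5, 1))        # top model, held by party B

   # Part I: bottom-model forward on each party.
   alpha_A = np.tanh(X_A @ W_bot_A)
   alpha_B = np.tanh(X_B @ W_bot_B)

   # Part II: interactive layer on the concatenated bottom outputs.
   act = np.tanh(np.concatenate([alpha_A, alpha_B], axis=1) @ W_inter)

   # Part III: top model (party B) produces the prediction.
   y_hat = 1.0 / (1.0 + np.exp(-(act @ W_top)))
   print(y_hat.shape)                     # (8, 1)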
Backward Propagation Process also consists of three parts.
Part I: Backward Propagation of Top Model. Party B back-propagates the loss through the top model, updating it along the way, and obtains delta, the error of the interactive layer's output.
Part II: Backward Propagation of Interactive Layer (a numerical sketch of these steps follows Part III below).

1. Party B calculates delta_act, the error of the activation function's output, from delta.
2. Party B propagates delta_bottomB = delta_act * W_B to its bottom model, then updates W_B (W_B -= eta * delta_act * alpha_B).
3. Party B generates noise epsilon_B, computes the masked encrypted gradient of W_A, [delta_act * alpha_A + epsilon_B], from the encrypted bottom-model output [alpha_A], and sends it to party A.
4. Party A encrypts its accumulated noise epsilon_acc and sends [epsilon_acc] to party B. Party A then decrypts the value received in step 3, generates noise epsilon_A, adds epsilon_A / eta to the decrypted result (so that after party B scales by the learning rate eta, the injected noise is exactly epsilon_A), and adds epsilon_A to the accumulated noise (epsilon_acc += epsilon_A). Party A sends the result, delta_act * alpha_A + epsilon_B + epsilon_A / eta, back to party B.
5. Party B receives [epsilon_acc] and delta_act * alpha_A + epsilon_B + epsilon_A / eta. First, it computes the error of party A's bottom-model output, [delta_act * (W_A + epsilon_acc)], and sends it to party A. Second, it subtracts epsilon_B and updates W_A -= eta * (delta_act * alpha_A + epsilon_A / eta) = W_A - eta * delta_act * alpha_A - epsilon_A, so the stored weights remain W_A = W_TRUE - epsilon_acc, where W_TRUE denotes the true (unmasked) weights.
6. Party A decrypts [delta_act * (W_A + epsilon_acc)] = [delta_act * W_TRUE] and passes delta_act * W_TRUE to its bottom model.
Part III: Backward Propagation of Bottom Model. Party A and party B update their bottom models separately, using the errors obtained above. The following figure shows the backward propagation of the Federated Heterogeneous Neural Network framework.
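To make the noise bookkeeping of Part II concrete, here is a single-process NumPy sketch of one masked update of W_A under our reading of the steps above (encryption is omitted, so every bracketed value [x] is handled in the clear; shapes and the matrix form of the products are illustrative assumptions). It checks that party B's stored weights stay equal to W_TRUE - epsilon_acc and that party A receives exactly delta_act * W_TRUE:

.. code-block:: python

   # One masked update of the interactive-layer weights W_A (party A's side),
   # following Part II. Encryption is simulated by plaintext, so this checks
   # only the noise bookkeeping, not the privacy guarantees.
   import numpy as np

   rng = np.random.default_rng(1)
   eta = 0.1                                     # learning rate
   batch, d_a, d_out = 8, 3, 5

   W_true = rng.normal(size=(d_a, d_out))        # the true weights W_TRUE
   eps_acc = rng.normal(size=(d_a, d_out))       # party A's accumulated noise
   W_stored = W_true - eps_acc                   # what party B actually stores

   alpha_A = rng.normal(size=(batch, d_a))       # known to B only as [alpha_A]
   delta_act = rng.normal(size=(batch, d_out))   # activation error at party B

   # Step 3 (party B): masked gradient [alpha_A^T @ delta_act + eps_B].
   eps_B = rng.normal(size=(d_a, d_out))
   masked_grad = alpha_A.T @ delta_act + eps_B

   # Step 4 (party A): send [eps_acc], then decrypt, add eps_A / eta, accumulate.
   eps_acc_sent = eps_acc.copy()
   eps_A = rng.normal(size=(d_a, d_out))
   returned = masked_grad + eps_A / eta
   eps_acc += eps_A                              # epsilon_acc += epsilon_A

   # Step 5 (party B): error for A's bottom model, then the masked update.
   error_to_A = delta_act @ (W_stored + eps_acc_sent).T
   W_stored -= eta * (returned - eps_B)

   # Step 6 (party A) receives exactly delta_act @ W_TRUE^T ...
   assert np.allclose(error_to_A, delta_act @ W_true.T)
   # ... and B's stored weights still equal W_TRUE - epsilon_acc after the
   # true (unmasked) gradient step is applied to W_TRUE for comparison.
   W_true -= eta * (alpha_A.T @ delta_act)
   assert np.allclose(W_stored, W_true - eps_acc)
   print("noise bookkeeping checks out")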
.. automodule:: federatedml.param.hetero_nn_param
   :members:
Other features:

- Allow party B's training without features.
- Support evaluating both training and validation data during the training process.
- Support early stopping strategy since FATE-v1.4.0.
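As a sketch of how evaluation during training and early stopping might be configured, the excerpt below uses parameter names we believe the hetero_nn_param module above exposes (validation_freqs, early_stopping_rounds, metrics); verify them against the module documentation for your FATE version:

.. code-block:: python

   # Hypothetical excerpt of a hetero_nn component configuration. Parameter
   # names are assumptions drawn from federatedml.param.hetero_nn_param;
   # check that module for the exact names and defaults.
   hetero_nn_param = {
       "epochs": 50,
       "validation_freqs": 1,        # evaluate train/validate data each epoch
       "early_stopping_rounds": 5,   # stop after 5 validations w/o improvement
       "metrics": ["auc"],           # metric(s) monitored by early stopping
   }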