This repository contains a complete implementation of a secure federated learning framework that uses dual zero-knowledge proof verification to ensure both client-side training correctness and server-side aggregation integrity.
- **Dual ZKP Verification**: Client-side zk-STARKs + server-side zk-SNARKs
- **FedJSCM Aggregation**: Momentum-based federated aggregation for improved convergence
- **Dynamic Proof Rigor**: Adaptive proof complexity based on training stability
- **Blockchain Integration**: On-chain verification for public auditability
- **Comprehensive Experiments**: Built-in benchmarking and visualization tools
This project requires specific ZKP tools for circuit compilation and proof generation:
- Circom (Rust-based): Circuit compiler for zero-knowledge proofs
- SnarkJS: JavaScript library for zk-SNARK operations
# Install Rust (required for circom)
curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh
# Install circom from source
git clone https://github.com/iden3/circom.git
cd circom && cargo build --release && cargo install --path circom
# Install snarkjs via npm
npm install -g snarkjs
# Verify installation
uv run python -m secure_fl.setup check

**Detailed Setup Guide**: See docs/ZKP_SETUP.md for comprehensive installation instructions, troubleshooting, and platform-specific guidance.
CI/CD Testing: Automated tests run on Ubuntu and macOS. Windows support is available but requires manual setup and verification.
┌───────────────────┐  ┌───────────────────┐  ┌───────────────────┐
│     Client 1      │  │     Client 2      │  │     Client N      │
│ ┌───────────────┐ │  │ ┌───────────────┐ │  │ ┌───────────────┐ │
│ │Local Training │ │  │ │Local Training │ │  │ │Local Training │ │
│ │ + zk-STARK    │ │  │ │ + zk-STARK    │ │  │ │ + zk-STARK    │ │
│ │   Proof       │ │  │ │   Proof       │ │  │ │   Proof       │ │
│ └───────────────┘ │  │ └───────────────┘ │  │ └───────────────┘ │
└─────────┬─────────┘  └─────────┬─────────┘  └─────────┬─────────┘
          │                      │                      │
          └──────────────────────┼──────────────────────┘
                                 │
                    ┌────────────▼────────────┐
                    │        FL Server        │
                    │ ┌─────────────────────┐ │
                    │ │ FedJSCM Aggregation │ │
                    │ │  + zk-SNARK Proof   │ │
                    │ │ + Stability Monitor │ │
                    │ └─────────────────────┘ │
                    └────────────┬────────────┘
                                 │
                    ┌────────────▼────────────┐
                    │   Blockchain Verifier   │
                    │ ┌─────────────────────┐ │
                    │ │   Smart Contract    │ │
                    │ │ Proof Verification  │ │
                    │ └─────────────────────┘ │
                    └─────────────────────────┘
- @krishantt - Krishant Timilsina
- @bigya01 - Bindu Paudel
secure-fl/
├── docs/                      # Research papers and documentation
│   ├── concept-note/          # Initial concept and motivation
│   ├── project-proposal/      # Detailed project proposal
│   └── proposal-defense/      # Defense materials
├── fl/                        # Core federated learning implementation
│   ├── server.py              # FL server with FedJSCM and ZKP integration
│   ├── client.py              # FL client with zk-STARK proof generation
│   ├── aggregation.py         # FedJSCM momentum-based aggregation
│   ├── proof_manager.py       # ZKP proof generation and verification
│   ├── stability_monitor.py   # Dynamic proof rigor adjustment
│   ├── quantization.py        # Parameter quantization for circuits
│   └── utils.py               # Utility functions
├── proofs/                    # Zero-knowledge proof circuits
│   ├── client/                # zk-STARK circuits (Cairo)
│   │   └── sgd_full_trace.cairo
│   └── server/                # zk-SNARK circuits (Circom)
│       └── fedjscm_aggregation.circom
├── blockchain/                # Smart contracts for verification
│   └── FLVerifier.sol         # Solidity contract for proof verification
├── experiments/               # Experiment scripts and configs
│   ├── train_secure_fl.py     # Main training experiment
│   └── config.yaml            # Experiment configuration
├── k8s/                       # Kubernetes deployment manifests
├── infra/                     # Infrastructure as Code configs
├── requirements.txt           # Python dependencies
├── .gitignore
└── README.md
- Python 3.8+
- Node.js (for Circom/SnarkJS)
- Cairo compiler (for zk-STARKs)
- CUDA-capable GPU (optional, for acceleration)
# Install the package
pip install secure-fl
# Setup ZKP tools (optional but recommended)
secure-fl setup zkp
# Run a quick demo
secure-fl demo

# Clone the repository
git clone https://github.com/krishantt/secure-fl.git
cd secure-fl
# Install PDM if you don't have it
pip install pdm
# Install dependencies
pdm install
# Setup ZKP tools
pdm run setup-zkp
# Run tests
pdm run test

# Clone the repository
git clone https://github.com/krishantt/secure-fl.git
cd secure-fl
# Install in development mode
pip install -e .
# Setup environment
secure-fl setup full

For research purposes, the project includes a comprehensive experiments directory with multi-dataset benchmarking:
# Run multi-dataset benchmark (development only)
cd secure-fl
python experiments/benchmark.py --datasets mnist cifar10 synthetic
# Quick benchmark
python experiments/benchmark.py --quick --configs baseline_iid
# See experiments/README.md for full documentation

**Note**: The experiments/ directory is excluded from package distribution and contains standalone research scripts.
# Run a quick demo
secure-fl demo
# Run a federated learning experiment
secure-fl experiment --num-clients 3 --rounds 5 --dataset synthetic
# Start a server
secure-fl server --rounds 10 --enable-zkp
# Connect a client
secure-fl client --client-id client_1 --dataset mnist
# Check system requirements
secure-fl setup check

from secure_fl import SecureFlowerServer, create_client, create_server_strategy
import torch.nn as nn

# Define your model
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)

    def forward(self, x):
        return self.fc(x.flatten(1))

# Create server
strategy = create_server_strategy(
    model_fn=lambda: MyModel(),
    enable_zkp=True,
    proof_rigor="medium"
)
server = SecureFlowerServer(strategy=strategy)

# Create clients
client = create_client(
    client_id="client_1",
    model_fn=lambda: MyModel(),
    train_data=your_train_data,
    enable_zkp=True
)

Our momentum-based aggregation algorithm:
m^{(t+1)} = γ · m^{(t)} + Σ_i (p_i · Δ_i)
w^{(t+1)} = w^{(t)} + m^{(t+1)}

Where:

- `m^{(t)}` is the server momentum at round `t`
- `γ` is the momentum coefficient (0.9 by default)
- `p_i` are the client weights (proportional to data size)
- `Δ_i` are the client parameter updates
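As a minimal sketch of one such round (using NumPy and a hypothetical `fedjscm_step` helper, not the library's actual API):

```python
import numpy as np

def fedjscm_step(w, m, client_deltas, client_weights, gamma=0.9):
    """One FedJSCM round: momentum accumulates weighted client updates.

    w: global parameters; m: server momentum (same shape as w);
    client_deltas: per-client updates Delta_i; client_weights: p_i,
    proportional to each client's data size.
    """
    # Weighted sum of client updates: sum_i p_i * Delta_i
    weighted_update = sum(p * d for p, d in zip(client_weights, client_deltas))
    # m^{(t+1)} = gamma * m^{(t)} + sum_i p_i * Delta_i
    m_next = gamma * m + weighted_update
    # w^{(t+1)} = w^{(t)} + m^{(t+1)}
    w_next = w + m_next
    return w_next, m_next

# Toy example: two equally weighted clients, zero initial momentum
w = np.zeros(3)
m = np.zeros(3)
deltas = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 2.0, 0.0])]
w, m = fedjscm_step(w, m, deltas, client_weights=[0.5, 0.5])
print(w)  # [0.5 1.  0.5] — with zero momentum, just the weighted average of deltas
```

With a nonzero momentum `m`, the `gamma * m` term carries part of the previous round's direction forward, which is what improves convergence over plain FedAvg-style averaging.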
The system automatically adjusts proof complexity based on training stability:
- High Rigor: Full SGD trace verification (early rounds, unstable training)
- Medium Rigor: Single-step verification (moderate stability)
- Low Rigor: Delta norm verification (stable/converged training)
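One plausible selection rule (a sketch with made-up thresholds, not the logic in `fl/stability_monitor.py`) keys the rigor level to the relative change of the aggregated update's norm between rounds:

```python
def select_proof_rigor(delta_norms, stable_threshold=0.05, unstable_threshold=0.25):
    """Pick a proof rigor level from recent global update norms.

    delta_norms: per-round L2 norms of the aggregated update, newest last.
    Thresholds are illustrative placeholders, not the framework's defaults.
    """
    if len(delta_norms) < 2:
        return "high"  # early rounds: full SGD trace verification
    prev, curr = delta_norms[-2], delta_norms[-1]
    rel_change = abs(curr - prev) / max(prev, 1e-12)
    if rel_change > unstable_threshold:
        return "high"    # unstable training: full trace verification
    if rel_change > stable_threshold:
        return "medium"  # moderate stability: single-step verification
    return "low"         # stable/converged: delta norm verification

print(select_proof_rigor([1.0]))        # high (not enough history)
print(select_proof_rigor([1.0, 0.9]))   # medium (10% change)
print(select_proof_rigor([1.0, 0.99]))  # low (1% change)
```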
**Client-side zk-STARKs**

- Language: Cairo
- Purpose: Prove correct local SGD training
- Features:
  - Post-quantum secure
  - Transparent (no trusted setup)
  - Scalable verification
**Server-side zk-SNARKs**

- Scheme: Groth16
- Purpose: Prove correct FedJSCM aggregation
- Features:
  - Succinct proofs (~200 bytes)
  - Fast verification
  - Blockchain-compatible
- Training Integrity: Clients cannot submit invalid parameter updates
- Aggregation Correctness: Server cannot manipulate aggregation process
- Data Privacy: No raw data is revealed, only computational correctness
- Public Auditability: All proofs can be verified on-chain
| Configuration | Proof Time | Verification Time | Communication Overhead |
|---|---|---|---|
| High Rigor | ~2.3s | ~0.05s | +15% |
| Medium Rigor | ~0.8s | ~0.02s | +8% |
| Low Rigor | ~0.3s | ~0.01s | +3% |
| Method | MNIST | CIFAR-10 | MedMNIST |
|---|---|---|---|
| Standard FL | 0.95 | 0.78 | 0.82 |
| Secure FL (Ours) | 0.94 | 0.76 | 0.81 |
| Overhead | -1% | -2.6% | -1.2% |
from secure_fl import create_server_strategy, SecureFlowerServer
import torch
import torch.nn as nn

class MyCustomModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Your model definition
        self.conv1 = nn.Conv2d(3, 32, 3)
        self.fc1 = nn.Linear(32 * 30 * 30, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        # Your forward pass
        x = torch.relu(self.conv1(x))
        x = x.flatten(1)
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

# Create server strategy
strategy = create_server_strategy(
    model_fn=lambda: MyCustomModel(),
    enable_zkp=True,
    proof_rigor="medium"
)

# Start server
server = SecureFlowerServer(strategy=strategy, num_rounds=20)
server.start()

// Deploy the FLVerifier contract
contract MyFLVerifier is FLVerifier {
    constructor() FLVerifier(
        3,         // min clients per round
        300,       // proof timeout (seconds)
        0x1234...  // STARK verifying key hash
    ) {}
}

# config.yaml
parameter_sweep:
  enabled: true
  parameters:
    momentum: [0.5, 0.7, 0.9, 0.95]
    proof_rigor: ["low", "medium", "high"]
    num_clients: [3, 5, 10]

| Parameter | Type | Default | Description |
|---|---|---|---|
| `num_clients` | int | 5 | Number of federated clients |
| `num_rounds` | int | 10 | Training rounds |
| `enable_zkp` | bool | true | Enable zero-knowledge proofs |
| `proof_rigor` | str | "high" | Proof complexity level |
| `momentum` | float | 0.9 | FedJSCM momentum coefficient |
| `blockchain_verification` | bool | false | Enable on-chain verification |
| Parameter | Type | Default | Description |
|---|---|---|---|
| `quantization_bits` | int | 8 | Bits for parameter quantization |
| `max_trace_length` | int | 1024 | Maximum STARK trace length |
| `circuit_size` | int | 1000 | SNARK circuit constraint count |
| `proof_timeout` | int | 120 | Proof generation timeout (seconds) |
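ZK circuits operate over field elements, so floating-point parameters must be mapped to integers before proving. A minimal symmetric fixed-point scheme for the default `quantization_bits: 8` (a sketch, not the actual code in `fl/quantization.py`):

```python
def quantize(params, bits=8):
    """Symmetric fixed-point quantization of floats to signed integers.

    Maps [-max_abs, max_abs] onto [-(2^(bits-1)-1), 2^(bits-1)-1] so the
    values can be embedded as circuit field elements.
    """
    qmax = 2 ** (bits - 1) - 1  # 127 for 8 bits
    max_abs = max(abs(p) for p in params) or 1.0
    scale = qmax / max_abs
    return [round(p * scale) for p in params], scale

def dequantize(qparams, scale):
    """Recover approximate floats from quantized integers."""
    return [q / scale for q in qparams]

q, scale = quantize([0.5, -1.0, 0.25])
print(q)  # [64, -127, 32]
print(dequantize(q, scale))  # approximately [0.504, -1.0, 0.252]
```

Fewer bits shrink the circuit (and proof time) at the cost of quantization error, which is the trade-off the `quantization_bits` parameter controls.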
- **Cairo Compilation Errors**

  # Ensure Cairo is properly installed
  cairo-compile --version
  # Reinstall if needed
  pip uninstall cairo-lang && pip install cairo-lang

- **Circom Circuit Compilation**

  # Check Circom installation
  circom --version
  # Compile circuits manually
  cd proofs/server
  circom fedjscm_aggregation.circom --r1cs --wasm --sym

- **Memory Issues with Large Models**

  # Reduce model/circuit size in config.yaml
  model:
    hidden_dim: 64  # Reduce from default 128
  zkp:
    client_proof:
      max_trace_length: 512  # Reduce from 1024

- **Client Connection Timeouts**

  # Increase timeouts
  networking:
    client_timeout: 600  # Increase from 300
    max_retries: 5       # Increase from 3
The framework automatically tracks:
- Training convergence (loss, accuracy)
- Proof generation/verification times
- Communication overhead
- Client participation rates
- Model parameter stability
- Resource utilization
from secure_fl import StabilityMonitor

monitor = StabilityMonitor()

# Add custom metrics
monitor.update(parameters, round_num, custom_metrics={
    "gradient_norm": grad_norm,
    "privacy_budget": epsilon,
    "custom_score": score
})

We welcome contributions! Please follow these steps:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
# Using PDM (recommended)
pdm install -d
pdm run test
pdm run format
pdm run lint
# Using pip
pip install -e ".[dev]"
pytest
black secure_fl/
isort secure_fl/
mypy secure_fl/

If you use this work in your research, please cite:
@misc{timilsina2024secure,
  title={Dual-Verifiable Framework for Federated Learning using Zero-Knowledge Proofs},
  author={Timilsina, Krishant and Paudel, Bindu},
  year={2024},
  institution={Tribhuvan University, Institute of Engineering}
}

pip install secure-fl

git clone https://github.com/krishantt/secure-fl.git
cd secure-fl
pdm install -d

Optional dependency groups:

- `dev`: Development dependencies (pytest, black, mypy, etc.)
- `medical`: Medical dataset support (medmnist, nibabel, etc.)
- `notebook`: Jupyter notebook support
- `quantization`: Advanced quantization tools
- `blockchain`: Blockchain integration tools
- `all`: All optional dependencies

Example: `pip install "secure-fl[dev,medical,notebook]"`
This project is licensed under the MIT License - see the LICENSE file for details.
- Flower for the federated learning framework
- StarkWare for Cairo and STARK technology
- iden3 for Circom and zk-SNARK tools
- Our supervisor, Dr. Arun Kumar Timalsina, for guidance and support
- Tribhuvan University, Institute of Engineering, Pulchowk Campus
**Contact**: For questions or collaborations, reach out to [email protected] or [email protected]