In general, machine learning algorithms require large amounts of training data, and companies often do not have enough of it to reach the desired accuracy. To address this, a company may want to train its model jointly with other parties without violating the confidentiality of its own data.
There are various approaches that allow machine learning models to be trained across several data sources without disclosing the underlying data. We have identified multi-party computation and federated machine learning as the most promising candidates for privacy-preserving training. In addition, we consider protecting the trained model itself through differential privacy.
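To illustrate the federated approach, here is a minimal sketch of federated averaging (FedAvg) in Python with NumPy. The model (a linear regression trained by gradient descent), the client sizes, and the hyperparameters are illustrative assumptions, not our production setup; the point is only that each party trains locally and shares model updates, never raw data.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a
    linear-regression loss (a stand-in for any local model update)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side FedAvg: weight each client's model by its data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical demo: three clients with unbalanced, locally held data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (200, 50, 10):  # deliberately unbalanced client sizes
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    local_models = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(local_models, [len(y) for _, y in clients])

print(global_w)  # converges toward true_w without centralizing any raw data
```

Differential privacy can be layered on top of such a scheme, for instance by clipping and adding calibrated noise to the client updates before averaging, at some cost in accuracy.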
To build expertise, we run hands-on analyses and experiments focused on a real-life scenario (unbalanced, non-IID data):
The jointly trained model achieves much better accuracy because it sees more data and more features. This applies to the following scenarios, in which the data cannot be centralized:
In parallel with our hands-on analysis, we run workshops with clients from different industries to identify and sharpen use cases and to carry out proofs of value.
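The unbalanced, non-IID setting used in our experiments can be simulated with a label-skewed split, where each client holds data from only a few classes and in very different amounts. The sketch below is a hypothetical illustration; the class count, number of clients, and sampling fractions are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
labels = rng.integers(0, 10, size=1000)  # toy dataset with 10 classes

def non_iid_split(labels, n_clients=3, classes_per_client=3, rng=rng):
    """Give each client samples from only a few classes (label skew),
    keeping a random fraction of them (size imbalance)."""
    client_idx = []
    for _ in range(n_clients):
        own = rng.choice(10, size=classes_per_client, replace=False)
        eligible = np.where(np.isin(labels, own))[0]
        # unbalanced: each client keeps between 20% and 100% of its samples
        keep = 0.2 + 0.8 * rng.random()
        client_idx.append(
            rng.choice(eligible, size=int(len(eligible) * keep), replace=False)
        )
    return client_idx

parts = non_iid_split(labels)
for i, idx in enumerate(parts):
    print(f"client {i}: {len(idx)} samples, classes {np.unique(labels[idx])}")
```

Such splits are a standard way to stress-test federated training, since naive averaging degrades noticeably when client distributions diverge.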