Every year, cyberattacks become more common and data breaches become more expensive. Whether companies want to protect an AI system during development or use an algorithm to improve their security posture, they must reduce cybersecurity risks. Federated learning can help with both.
What is federated learning?
Federated learning is an approach to AI development in which multiple parties train a single model separately. Each party downloads the current primary algorithm from a central cloud server, trains its configuration independently on local servers, and uploads the result upon completion. This way, participants can collaborate remotely without ever exposing their raw data.
The central algorithm weights each update by the number of samples that participant trained on and merges them to create a single global model. All raw information remains on each participant's local servers or devices; the central repository aggregates the weighted updates instead of processing raw data.
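As a rough illustration, this weighted merge step (widely known as federated averaging, or FedAvg) can be sketched as follows. This is a minimal NumPy sketch under assumed conventions, not any particular vendor's implementation; the update format is invented for the example.

```python
import numpy as np

def federated_average(updates):
    """Merge locally trained weights into one global model.

    `updates` is a list of (weights, num_samples) pairs, where `weights`
    is a list of NumPy arrays (one per layer) and `num_samples` is the
    size of that participant's local training set.
    """
    total = sum(n for _, n in updates)
    n_layers = len(updates[0][0])
    # Weight each participant's parameters by its share of the total data.
    return [sum(w[layer] * (n / total) for w, n in updates)
            for layer in range(n_layers)]

# Example: three participants with unevenly sized local datasets.
rng = np.random.default_rng(0)
updates = [([rng.normal(size=(4, 2)), rng.normal(size=2)], n)
           for n in (100, 500, 250)]
global_model = federated_average(updates)
```

Weighting by sample count keeps a participant with a tiny dataset from pulling the global model as hard as one that trained on far more data.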
Federated learning's popularity is rapidly increasing because it addresses common development-related security concerns. It is also highly sought after for its performance benefits. Research shows this technique can improve an image classification model's accuracy by up to 20%, a significant increase.
Horizontal federated learning
There are two types of federated learning. The conventional option is horizontal federated learning, in which data is partitioned across different devices. The datasets share the same feature space but contain different samples. This lets edge nodes jointly train a machine learning (ML) model without sharing raw information, as the sketch below illustrates.
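To make the horizontal split concrete, here is a minimal sketch in which two nodes hold different rows over the same columns; the column names and data are invented, and each node fits a simple logistic-regression model locally, exposing only its weights.

```python
import numpy as np

rng = np.random.default_rng(1)
FEATURES = ["packet_rate", "bytes_sent", "failed_logins"]  # hypothetical shared columns

def local_train(X, y, epochs=50, lr=0.1):
    """Fit a logistic-regression model on one node's private data."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))          # sigmoid
        w -= lr * X.T @ (preds - y) / len(y)      # gradient step
    return w  # only these weights leave the node, never X or y

# Same columns, different rows: each node sees only its own samples.
X_a, y_a = rng.normal(size=(1000, len(FEATURES))), rng.integers(0, 2, 1000)
X_b, y_b = rng.normal(size=(400, len(FEATURES))), rng.integers(0, 2, 400)
local_updates = [(local_train(X_a, y_a), len(y_a)),
                 (local_train(X_b, y_b), len(y_b))]
```

These (weights, sample-count) pairs are exactly the kind of input the aggregation sketch above would merge.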
Vertical federated learning
In vertical federated learning, the opposite is true: the features differ, but the samples are the same. Attributes are distributed vertically across participants, each of which holds different attributes for the same set of entities. Since only one party has access to the full set of sample labels, this arrangement helps preserve privacy.
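A toy sketch of the vertical case might look like the following, where two parties hold different columns for the same 500 entities and only Party B holds the labels. The feature semantics are invented, and a real deployment would encrypt the exchanged partial scores rather than pass them in the clear.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500                                   # the same 500 entities at both parties
X_a = rng.normal(size=(n, 3))             # Party A: e.g., network statistics
X_b = rng.normal(size=(n, 2))             # Party B: e.g., account metadata
labels = rng.integers(0, 2, n)            # only Party B ever holds the labels

w_a, w_b = np.zeros(3), np.zeros(2)
for _ in range(100):
    partial_a = X_a @ w_a                 # A shares only a partial score with B
    logits = partial_a + X_b @ w_b        # B combines the partial scores
    err = 1 / (1 + np.exp(-logits)) - labels
    w_b -= 0.1 * X_b.T @ err / n
    w_a -= 0.1 * X_a.T @ err / n          # in practice B returns only this error signal
```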
How federated learning strengthens cybersecurity
Traditional development is prone to security gaps. Algorithms need extensive, relevant datasets to maintain accuracy, but involving multiple departments or vendors creates openings for threat actors. They can exploit the lack of visibility and the broad attack surface to inject bias, conduct prompt engineering, or exfiltrate sensitive training data.
When algorithms are deployed in cybersecurity roles, their performance can affect an organization's security posture. Research shows that model accuracy can abruptly drop when processing new data. AI systems may appear accurate yet fail when tested elsewhere because they learned spurious shortcuts that produce convincing results.
Because AI cannot think critically or truly take context into account, its accuracy decreases over time. Even though ML models evolve as they absorb new information, their performance will stagnate if their decision-making skills rely on shortcuts. This is where federated learning comes into play.
Other notable benefits of training a centralized model via disparate updates are privacy and security. Because each participant works independently, no one needs to share proprietary or sensitive information to further the training. Additionally, the fewer data transfers there are, the lower the risk of a man-in-the-middle (MITM) attack.
All updates are encrypted for secure aggregation. Multi-party computation hides them behind layered encryption schemes, reducing the chance of a breach or MITM attack. This fosters collaboration while minimizing risk, ultimately strengthening security.
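One common building block behind secure aggregation is pairwise masking: each pair of clients agrees on a random mask that one adds and the other subtracts, so every mask cancels in the sum and the server only ever learns the aggregate. A simplified sketch of the cancellation idea (a real protocol would derive these masks from key exchange rather than share them openly):

```python
import numpy as np

rng = np.random.default_rng(3)
n_clients, dim = 3, 4
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# One shared random mask per client pair (i, j) with i < j.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

masked = []
for i in range(n_clients):
    m = updates[i].copy()
    for (a, b), mask in masks.items():
        if a == i:
            m += mask        # the lower-indexed client adds the mask ...
        elif b == i:
            m -= mask        # ... and the higher-indexed client subtracts it.
    masked.append(m)

# The server sees only masked vectors, yet their sum equals the true sum.
assert np.allclose(sum(masked), sum(updates))
```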
An overlooked benefit of federated learning is speed. It has much lower latency than its centralized counterpart. Because the training takes place locally instead of on a central server, the algorithm can detect, classify and respond to threats much faster. Minimal delays and fast data transfer allow cybersecurity professionals to tackle malicious actors with ease.
Considerations for cybersecurity professionals
Before deploying this training technique, AI engineers and cybersecurity teams must consider several technical, security, and operational factors.
Use of resources
AI development is expensive. Teams building their own model can expect to spend anywhere from $5 million to $200 million upfront, plus upwards of $5 million annually for maintenance. The financial commitment is significant even when the costs are split among several parties. Business leaders must also account for the costs of cloud and edge computing.
Federated learning is also computationally intensive, meaning teams may run into bandwidth, storage, or compute constraints. While the cloud enables on-demand scalability, cybersecurity teams risk vendor lock-in if they are not careful. Strategic hardware and vendor selection is of the utmost importance.
Participant confidence
While disparate training is safe, it lacks transparency, making intentional bias and malicious injection real concerns. A consensus mechanism is essential for approving model updates before the central algorithm merges them. This way, teams can minimize threat risk without sacrificing confidentiality or exposing sensitive information.
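A full consensus protocol is beyond a short example, but the screening check such a mechanism would enforce can be sketched as a simple norm-based outlier filter. The update format is assumed to match the aggregation sketch above, and the threshold is arbitrary.

```python
import numpy as np

def screen_updates(updates, z_threshold=2.0):
    """Drop updates whose overall magnitude is a statistical outlier.

    A crude stand-in for a real consensus step: participants (or a
    review committee) would only approve updates passing checks like this.
    """
    norms = np.array([np.linalg.norm(np.concatenate([w.ravel() for w in u]))
                      for u, _ in updates])
    z = (norms - norms.mean()) / (norms.std() + 1e-9)
    return [pair for pair, score in zip(updates, z) if abs(score) < z_threshold]
```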
Data security training
While this ML training technique can improve a company's security posture, nothing is 100% secure. Developing a model in the cloud carries risks of insider threats, human error, and data loss. Redundancy is key: teams should create backups to prevent disruption and be able to roll back updates if necessary.
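A minimal sketch of that redundancy might be a small checkpoint store that keeps recent global models so a bad aggregation round can be reverted; a production system would persist these to durable storage rather than memory.

```python
import copy

class CheckpointStore:
    """Keep the last few global models so a bad round can be rolled back."""

    def __init__(self, max_keep=5):
        self.history = []
        self.max_keep = max_keep

    def save(self, round_num, weights):
        self.history.append((round_num, copy.deepcopy(weights)))
        self.history = self.history[-self.max_keep:]  # drop the oldest

    def rollback(self):
        # Discard the newest checkpoint and restore the one before it.
        self.history.pop()
        return self.history[-1]
```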
Decision-makers should also reconsider the sources of their training datasets. Borrowing datasets is common in ML communities, raising valid concerns about model misalignment. On Papers With Code, more than 50% of task communities use borrowed datasets at least 57.8% of the time. Moreover, 50% of the datasets there come from just twelve universities.
Applications of federated learning in cybersecurity
Once the primary algorithm aggregates and weights participants' updates, it can be reshared for whatever application it was trained for. Cybersecurity teams can use it for threat detection. The advantage here is twofold: threat actors are left in the dark because they cannot easily exfiltrate data, while professionals pool insights for highly accurate results.
Federated learning is ideal for adjacent applications such as threat classification or compromise detection. The combined size of the datasets and the AI's extensive training build a deep knowledge base, curating broad expertise. Cybersecurity professionals can use the model as a unified defense mechanism to protect broad attack surfaces.
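Continuing the earlier sketches, once the aggregated weights are redistributed, each node can score its own events locally, so detection data never has to leave the premises; the 0.5 threshold here is arbitrary.

```python
import numpy as np

def score_events(global_w, X):
    """Classify local events with the shared global weights."""
    probs = 1 / (1 + np.exp(-X @ global_w))   # sigmoid over linear scores
    return probs > 0.5                        # True = flag for analyst review
```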
ML models, especially predictive ones, can degrade over time as concepts evolve or variables lose relevance, a phenomenon known as concept drift. With federated learning, teams can periodically refresh their model with varied features or data samples, yielding more accurate and timely insights.
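One way to operationalize those periodic refreshes is a drift check that triggers a new federated round when accuracy on recent data slips. In this sketch, `evaluate` and `run_round` are placeholders for a team's own evaluation harness and training loop, and the threshold is illustrative.

```python
def maybe_retrain(global_model, fresh_X, fresh_y, evaluate, run_round,
                  min_accuracy=0.9):
    """Kick off a new federated round when recent accuracy degrades."""
    accuracy = evaluate(global_model, fresh_X, fresh_y)
    if accuracy < min_accuracy:
        return run_round(global_model)  # retrain against updated local data
    return global_model
```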
Leveraging federated learning for cybersecurity
Whether companies want to secure their training dataset or leverage AI for threat detection, they should consider using federated learning. This technique could improve accuracy and performance and strengthen their security posture, as long as they strategically deal with potential insider threats or breach risks.
Zac Amos is the features editor at ReHack.