CrowdStrike aftermath, Secure Boot compromised, NIST's Dioptra, OpenAI's Rule-Based Rewards, open-source AI debate.
GenAI Safety & Security Newsletter (July 21-28, 2024)
This Week's Highlights:
CrowdStrike Outage Aftermath: The dust settles on the CrowdStrike software update debacle, revealing lessons for AI security and policy: the danger of single points of failure and the importance of robust testing.
Secure Boot Compromised: A massive Secure Boot vulnerability, dubbed PKfail, exposes hundreds of PC models to potential BIOS-level malware attacks.
New AI Safety Tools Emerge: NIST releases Dioptra, a testing platform for assessing AI model risks; OpenAI introduces Rule-Based Rewards to align model behavior with safety policies; and Google Chrome adds a feature that scans encrypted files for malware.
Open Source AI Debate Heats Up: The FTC, Y Combinator, and Meta champion open source AI, while concerns about potential misuse and security risks persist.
Deep Dive:
1. The CrowdStrike Outage: A Case Study in Systemic Risk
A faulty software update from CrowdStrike, a major cybersecurity firm, caused a global outage affecting millions of Windows machines, disrupting airlines, hospitals, and emergency services. CNN
Microsoft released a tool to help repair affected Windows machines. The Verge
Microsoft points to a 2009 agreement with the EU as a barrier to preventing similar outages: it cannot legally restrict third-party developers from accessing the Windows kernel, which limits its options for insulating Windows from faulty kernel-level software. Tom's Hardware
Why it matters: This incident highlights the potential for catastrophic disruptions caused by software bugs, emphasizing the need for robust risk mitigation strategies in our increasingly interconnected world.
2. Secure Boot No Longer Secure: The PKfail Vulnerability
Researchers discovered a critical vulnerability in Secure Boot, a fundamental security mechanism designed to prevent malicious code from loading during bootup. Ars Technica
The vulnerability, dubbed PKfail, affects hundreds of device models from major PC manufacturers, including Acer, Dell, Gigabyte, Intel, and Supermicro. BleepingComputer
The root cause is the use of compromised cryptographic keys, including test keys labeled "DO NOT TRUST" that were mistakenly included in production firmware. PC Gamer
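The presence of one of these test keys can be spotted from userspace. Here is a minimal, best-effort sketch, assuming a Linux machine with efivarfs mounted (reading the variable may require root): it reads the Secure Boot Platform Key (PK) EFI variable and searches the raw certificate bytes for the known test-key markers. It is a quick heuristic, not a substitute for vendor advisories or dedicated scanners.

```python
# pkfail_check.py - heuristic PKfail check (Linux, efivarfs mounted).
# Reads the Platform Key EFI variable and looks for the "DO NOT TRUST" /
# "DO NOT SHIP" strings that the leaked test certificates carry.
from pathlib import Path

# PK lives under the standard EFI global variable GUID.
PK_VAR = Path("/sys/firmware/efi/efivars/PK-8be4df61-93ca-11d2-aa0d-00e098032b8c")
MARKERS = (b"DO NOT TRUST", b"DO NOT SHIP")

def check_pk() -> None:
    if not PK_VAR.exists():
        print("No PK variable found (not UEFI, or Secure Boot not provisioned).")
        return
    data = PK_VAR.read_bytes()[4:]  # efivarfs prepends a 4-byte attribute field
    hits = [m.decode() for m in MARKERS if m in data]
    if hits:
        print(f"WARNING: Platform Key contains test-key markers: {hits}")
        print("This machine may be affected by PKfail; check for firmware updates.")
    else:
        print("No known test-key markers found in the Platform Key.")

if __name__ == "__main__":
    check_pk()
```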
Why it matters: PKfail exposes a fundamental flaw in PC security, potentially allowing attackers to install undetectable malware that can persist even after the operating system is reinstalled.
3. New Tools for AI Safety and Security
NIST released Dioptra, an open source platform for testing the trustworthiness of AI systems, including their vulnerability to adversarial attacks. TechCrunch
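Dioptra's own interfaces aside, the flavor of adversarial-robustness test such platforms automate is easy to illustrate. The sketch below is generic NumPy, not Dioptra code: it mounts a Fast Gradient Sign Method (FGSM) attack on a toy linear classifier whose weights and input are invented for the example.

```python
# Generic FGSM robustness probe (NOT Dioptra's API): perturb an input in the
# direction that increases the loss and see whether the prediction flips.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))  # toy 4-feature, 2-class linear model
x = rng.normal(size=4)       # a clean input
y = 0                        # its assumed true label

def logits(v):
    return W @ v

def input_gradient(v, label):
    # Gradient of softmax cross-entropy w.r.t. the input, for a linear model.
    z = logits(v)
    p = np.exp(z - z.max())
    p /= p.sum()
    p[label] -= 1.0          # dL/dz = softmax(z) - onehot(label)
    return W.T @ p           # chain rule back to the input

eps = 0.5                    # L-infinity attack budget
x_adv = x + eps * np.sign(input_gradient(x, y))  # FGSM step

print("clean prediction:", int(logits(x).argmax()))
print("adversarial prediction:", int(logits(x_adv).argmax()))  # may flip
```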
OpenAI introduced Rule-Based Rewards (RBRs), a method for enhancing AI system safety by aligning model behavior with explicit rules for safe conduct. OpenAI
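In OpenAI's description, a grader model scores responses against explicit behavior rules and the scores feed into the RL reward. The toy sketch below captures only that shape; the string-matching "rules", weights, and blending factor are invented stand-ins for the paper's LLM grader.

```python
# Toy rule-based reward: score a response against explicit safety rules and
# fold the result into a scalar reward. A sketch of the idea, not OpenAI's
# implementation; all rules and weights here are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[str], bool]  # True if the response satisfies the rule
    weight: float

RULES = [
    Rule("refuses_politely", lambda r: "i can't help with that" in r.lower(), 1.0),
    Rule("no_judgmental_tone", lambda r: "you should be ashamed" not in r.lower(), 0.5),
    Rule("offers_resources", lambda r: "help is available" in r.lower(), 0.5),
]

def rule_based_reward(response: str) -> float:
    # Weighted fraction of satisfied rules, in [0, 1].
    total = sum(rule.weight for rule in RULES)
    earned = sum(rule.weight for rule in RULES if rule.check(response))
    return earned / total

def combined_reward(rlhf_score: float, response: str, alpha: float = 0.3) -> float:
    # Blend a preference-model score with the rule-based safety signal.
    return (1 - alpha) * rlhf_score + alpha * rule_based_reward(response)

print(combined_reward(0.8, "I can't help with that, but help is available."))
```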
Google Chrome introduced a feature that prompts users for the password of encrypted archives so their contents can be scanned for malicious content, strengthening defenses against malware hidden in password-protected files. The Hacker News
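Chrome's implementation is internal, but the underlying idea, obtaining the archive password so encrypted contents can be inspected, can be sketched in a few lines. The blocklist hash, file name, and password below are placeholders, and Python's zipfile only handles legacy ZipCrypto encryption.

```python
# Sketch of scanning a password-protected ZIP for known-bad content (the idea
# behind Chrome's feature, not Chrome's code). Hashes each member and checks
# it against a placeholder blocklist.
import hashlib
import zipfile

KNOWN_BAD_SHA256 = {
    # Placeholder entry; a real scanner would query a threat-intel feed.
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def scan_encrypted_zip(path: str, password: str) -> None:
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            if info.is_dir():
                continue
            data = zf.read(info.filename, pwd=password.encode())
            digest = hashlib.sha256(data).hexdigest()
            verdict = "MALICIOUS" if digest in KNOWN_BAD_SHA256 else "clean"
            print(f"{info.filename}: sha256={digest[:16]}... -> {verdict}")

# Example usage (hypothetical archive and password):
# scan_encrypted_zip("suspicious.zip", "infected")
```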
Why it matters: These tools provide valuable resources for evaluating AI models, identifying potential security risks, and strengthening defenses against emerging threats.
4. Open Source AI: A Double-Edged Sword?
The FTC, Y Combinator, and Meta have publicly championed open source AI, arguing that it fosters innovation and competition. Wired
However, concerns remain about the potential misuse of open source AI models, particularly for malicious purposes such as generating deepfakes or spreading misinformation. Forbes
Why it matters: The debate over open source AI highlights the tension between fostering innovation and mitigating potential risks, a key challenge in the responsible development and deployment of GenAI.