Highlights
💥 Windows Zero-Day.
☁️ Oracle Cloud Breach?
📱 SpyX Spyware Leaked.
⚛️ UK: Quantum Hackers.
🔑 Google Threat Intelligence.
⚠️ California: Future AI safety.
Notice: FY2024 is available on Gumroad and Amazon in paperback.
Next: Q1 2025 on March 31st.
Deep Dive:
💥 Microsoft Zero-Day CyberScoop
Eight-year-old vulnerability.
Six nation-states involved.
300+ organizations hit.
.lnk file exploit used.
No patch available yet.
Heads-Up: Review .lnk file handling; consider restrictions, user education, and advanced threat detection.
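A starting point for the .lnk review above: a minimal sketch that inventories Windows shortcut files under a directory tree so they can be audited (the function name and scan scope are illustrative assumptions, not part of any vendor guidance):

```python
from pathlib import Path

def find_lnk_files(root: str) -> list[str]:
    """Recursively list Windows shortcut (.lnk) files under root for audit."""
    # .lnk shortcuts are the delivery vehicle in the exploit described above,
    # so an inventory is a reasonable first step before applying restrictions.
    return sorted(str(p) for p in Path(root).rglob("*.lnk"))
```

Feeding the resulting list into an allowlist check or quarantine step is left to local policy.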
☁️ Oracle Cloud Breach? The Register
Hacker claims 6M records.
Oracle denies any breach.
Data for sale online.
Encrypted passwords & keys included.
Exploited a US-region login page.
HTTP, no authentication required.
Proof: attacker uploaded a file to the server.
Possible Fusion Middleware 11g vulnerability.
Heads-Up: Verify Oracle Cloud configurations and patching; monitor for data exposure and credential compromise alerts.
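One simple configuration check related to the plain-HTTP exposure described above: flag endpoints whose URLs use `http://` rather than `https://`. This is a toy illustration, not an Oracle-specific tool; the endpoint list you would feed it is your own.

```python
from urllib.parse import urlparse

def insecure_endpoints(urls: list[str]) -> list[str]:
    """Return URLs served over plain HTTP (no TLS); these warrant review."""
    return [u for u in urls if urlparse(u).scheme == "http"]
```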
Related: Microsoft outage & FBI warning.
📱 SpyX Stalkerware Breach TechCrunch
Almost 2 million records.
Thousands of Apple users.
iCloud credentials exposed.
25th spyware breach since 2017.
Android and Apple devices.
Heads-Up: Reiterate policies against unauthorized device monitoring; strengthen BYOD security and data protection.
⚛️ UK: Quantum Hackers The Guardian
2035: deadline for protection.
Post-quantum cryptography needed.
Targets critical infrastructure.
Upgrade services by 2028.
Complete migration by 2035.
Heads-Up: Plan for post-quantum cryptography. Evaluate long-term data encryption needs.
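A first step in the migration planning above is a crypto inventory. A toy sketch that flags public-key schemes whose hardness assumptions fall to Shor's algorithm (the algorithm list is a deliberate simplification, not an official taxonomy):

```python
# Public-key schemes relying on factoring or discrete logs,
# which a large quantum computer running Shor's algorithm would break.
QUANTUM_VULNERABLE = {"RSA", "DSA", "ECDSA", "ECDH", "DH"}

def needs_pqc_migration(algorithm: str) -> bool:
    """True if the named algorithm should be slated for post-quantum replacement."""
    return algorithm.upper() in QUANTUM_VULNERABLE
```

Symmetric ciphers such as AES are generally considered safer against quantum attack (larger keys suffice), which is why they are absent from the set.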
Related: Quantum breaks encryption.
🔑 Google Threat Intelligence arXiv
12,000+ attack instances analyzed.
Seven attack chain archetypes.
50 new challenge evaluations.
Focus: end-to-end attack chain.
Evaluates frontier model capabilities.
Heads-Up: Stay informed on evolving AI-driven threats; integrate AI-specific threat modeling into security assessments.
⚠️ California: AI Safety Laws? TechCrunch
41-page draft on safety and security.
Co-led by Dr. Fei-Fei Li.
Address “not yet observed” risks.
Transparency into frontier AI labs.
Advocates safety tests reporting.
"Trust but verify" strategy.
Heads-Up: Prepare for potential AI regulations; prioritize transparency and ethical AI development practices.