If you enjoy this newsletter, please become our paid subscriber to help this keep going. Also we just have a yearly paid reader. Thank you for your support!
Highlights
McDonald AI breach lessons.
ChatGPT agent comes out.
Meta declines EU AI Safety Guidelines.
Nvidia Safety Recipe
MIT Governance of AI.
Special!
Deep Dive
McDonald AI breach lessons Forbes
Basic measures are missing.
No credential management
Lack of access control and 2FA.
Rush to deploy latest tools.
A false sense of security.
Related: McDonal AI get hacked.
ChatGPT agent comes out TechRadar
Threats comes from the internet.
Promp injection is the top concern.
AI can be tricked to expose info.
For example, your credit card.
Should not give AI this power.
Meta declines EU AI safety Guideline The Register
Two weeks before it takes effect.
Focused on > 10^25 FLOPs.
Asks for volutary effort.
Transparency and Copyright.
Over 30 models like Meta, OpenAI.
Related: EU AI code of practice
Nvidia Safety Recipe Nvidia
Apply defense at build, deploy and run.
Harden every steage of the AI lifecycle.
Introduce NeMO Guardrails.
Improve content safety by 6%
Improve Security Resilience by 7%
Governance of artificial intelligence MIT
Risk from AI to 6 categories
1. Tech, Data and Analytic
2. Info and Communication.
3. Economic AI Risks
4. Social AI Risks.
5. Ethical AI Risks.
6. Legal and Regulation AI Risks.
Notice: Y2 GenAI Safety and Security is on GumRoad and Amazon with paperback.
Share this post