Secure GenAI
Secure GenAI Podcast
Singapore Agentic AI framework, how SSO is exploited, LLM endpoints, Outlook crashes, Elicitation attack pipeline


GenAI Safety & Security | Jan 26 - Feb 1, 2026

If you enjoy our newsletter, please consider becoming a paid subscriber to help us keep more news and updates coming.

Notice: The Book Report Q4, 2025 is available! Download here.


Highlights

  • Singapore: Agentic AI framework.

  • How SSO is exploited.

  • LLM endpoints.

  • Outlook crashes.

  • Elicitation attack pipeline.


Deep Dive

Singapore: Agentic AI framework IMDA

  • Framework guides safe agent use.

  • Humans remain responsible for actions.

  • Limits placed on agent powers.

  • Checkpoints require human approval.

  • Technical controls mitigate autonomy risks.

  • Fosters global AI governance standards.

How SSO is exploited BleepingComputer

  • Attackers impersonate corporate helpdesks.

  • Fake portals steal SSO credentials.

  • MFA codes intercepted in real-time.

  • SSO dashboards grant wide access.

  • Salesforce and SharePoint are targets.

  • The ShinyHunters group claims responsibility.

LLM endpoints BleepingComputer

  • Campaign named Bizarre Bazaar.

  • Activity dubbed LLMjacking.

  • Stolen access fuels crypto mining.

  • API access resold on darknet.

  • Sensitive prompt data is exfiltrated.

  • Misconfigured AI ports are exploited.

  • SilverInc resells stolen access.

  • MCP servers allow lateral movement.

Outlook crashes BleepingComputer

  • Coding error causes freezes.

  • Workaround: launch the app in Airplane Mode.

  • Official fix pending Apple review.

  • Windows Outlook also has issues.

  • Web access is down in some regions.

Elicitation Attack Pipeline Anthropic

  • Benign prompts bypass safeguards.

  • Open-source models become dangerous.

  • Three-step elicitation attack.

  • Adjacent-domain prompt construction.

  • No harmful data required.

  • Recovers ~40% of the capability gap.

  • Stronger models amplify the attack.


Thanks for reading Secure GenAI! This post is public, so feel free to share it.
