How the incoming Trump senior AI policy advisor thinks
Sriram Krishnan - an Indian American VC and entrepreneur.
Note: We are about to publish the Q4 GenAI Safety and Security report on 1/1/2025, along with the 2024 year-end edition. If you have any comments on how to improve the content, we're happy to hear them. - Emma
With only three weeks until the Trump presidency begins, we continue to bring you the latest updates on key advisors in the incoming Trump administration.
Last week, we covered David Sacks, the head advisor for Science and Technology. Today, we introduce another key advisor, Sriram Krishnan, an early-stage tech investor from Silicon Valley. (Not to forget JD Vance, Trump's VP, also a familiar name there.)
Profile
Early Life and Education: Krishnan was born in Chennai, India. He moved to the United States in 2005 at the age of 21.
Career in Tech: His career began at Microsoft where he was a founding member of the Windows Azure team. He has held senior product roles at several tech giants including Twitter (now X), Yahoo!, Facebook, and Snap. His work has involved leading product and engineering teams, significantly contributing to platforms like Windows Azure, Twitter's core features, and Snapchat's advertising API.
Venture Capital: Krishnan became a general partner at Andreessen Horowitz (a16z), focusing on crypto and leading the firm's first international office in London in 2023.
Source: Grok, accessed on Dec 29, 2024, with edits and validation.
This post is a review based on the thoughts he shared in 2024 on "The Aarthi and Sriram Show," the podcast he hosts with his wife, in which they interview notable figures and share their opinions on developments in AI, tech, startups, and culture.
A16Z's Martin Casado Explains California’s AI Safety Bill SB1047
Sriram introduces the episode by expressing his view that the bill is "quite harmful" for innovation. The conversation explores the bill's potential impact on AI development, particularly the argument that it is based on a flawed understanding of the field and could lead to unintended negative consequences.
Sriram and Martin Casado discuss whether AI is a "paradigm shift" requiring entirely new regulations, or whether existing software regulations should be applied to AI. Martin argues that regulation should not be based on the amount of computing power (FLOPs) used in training, as such thresholds could hinder innovation and open-source development. They also explore the idea of focusing on regulating specific applications of AI, such as deepfakes and child safety.
There is a debate on the spirit of the bill, with Sriram suggesting that the intent is to ensure safety plans for very large models. The conversation also touches on the political disagreements surrounding the bill, including the observation that not all Democrats are aligned in their support of the bill. A key point is the concern that regulation, even if intended to be light, can have unintended and negative consequences, using the GDPR as an example.
The discussion raises the question of whether there is naivety around the "slippery slope" view of regulation. They discuss the risk that submitting safety plans could be weaponized against open source models, since models without submitted safety plans could be considered unsafe by default.
How To Fix Google's WOKE AI Disaster
This episode discusses AI safety and responsibility in the context of Google's AI models and products.
The conversation touches on incidents where AI models like Gemini have failed on basic history questions, suggesting that safety and bias issues are still a challenge. Sriram also discusses the release of Sora, a new AI model from OpenAI that generates hyper-realistic videos, and the potential implications of such technology.
Sriram notes that Google has fallen behind in the AI race, despite having produced some of the original research. This leads to a discussion of why Google is struggling to keep up with other AI companies.
Sriram and Aarthi discuss how the conversation around AI regulation has shifted from a year ago, noting how fast things are changing.
New models are being released daily, creating a highly active and invigorating atmosphere in the AI space, driven by the potential to achieve so much with relatively little.
The regulatory landscape has shifted significantly. A year ago, the conversation was intensely focused on AI regulations and discussions before the Senate, with a prevailing negative trend centered around AI safety concerns.
The core issue is not the technology itself but how it has been trained; the current reinforcement learning from human feedback (RLHF) process may be fundamentally broken and require a complete overhaul. This situation highlights the need for Google to reevaluate its entire approach to AI, especially with regard to safety and responsibility. The company's current frameworks are viewed as fundamentally flawed, requiring the establishment of new, more objective frameworks.
Sriram expresses skepticism towards broad, ambiguous terms like "responsible" or "safety" when applied to AI. He argues that AI should be governed by existing laws, rather than through the creation of entirely new, potentially overly broad, regulations. Finally, the discussion around AI remains a moving target because AI itself has become a kind of "deus ex machina"—an unspecified force that can be moved around to fit any given argument.
The VINOD KHOSLA Interview: The Predictions Of An Optimistic Investor
This episode features a discussion on AI, focusing on the balance between optimism and skepticism. Sriram and Vinod Khosla discuss the importance of avoiding regulatory capture, which would hinder innovation. They also talk about the advantages of open ecosystems in AI and the potential risks. Vinod Khosla mentions the need to increase R&D spending on AI safety.
Definition and Concern: The conversation highlights that regulatory capture can hinder innovation and create barriers for new entrants. This is a primary concern because it can stifle competition, leading to a less dynamic and less beneficial market for consumers and society as a whole.
Focus on Open Ecosystems: Khosla emphasizes the advantages of open ecosystems in the context of AI. He suggests that when the underlying technology is open and accessible, it reduces the likelihood of regulatory capture because a wider range of individuals and companies can participate and challenge the status quo.
Dangers of Closed Systems: The discussion implicitly contrasts open ecosystems with closed ones, where a small group of powerful companies could potentially influence regulations in their favor, further consolidating their market position and making it difficult for competitors to emerge. This could lead to a scenario where the benefits of technological advancement are not widely shared, and innovation is slowed down.
Innovation: The conversation suggests that a diverse range of players, including individuals and startups, are more likely to drive innovation and lead to breakthroughs, rather than a few large companies that might have an incentive to maintain the status quo. In an open system, more people are empowered to develop new solutions without having to contend with the regulations created by established players.
R&D Investment: Although the main focus of the conversation was on regulatory capture, the discussion also touched on the need to invest in Research and Development (R&D) for AI safety. This ties in with the idea of preventing regulatory capture because investing in R&D can lead to a deeper understanding of the technology and can help in creating more informed policies and regulations.