Announcing the Golden Gate Institute for AI
Subscribe to our "Second Thoughts" newsletter, where we bring AI's toughest questions into focus
Hello, friends!
It's been a while since I've posted here, and I’m back to share some exciting news with you: I'm co-founding a new think tank: the Golden Gate Institute for AI.
Together with my co-founders, Steve Newman and Rachel Weinberg, I'm on a mission to help leaders meet the challenges of the AI transition. We will do this by bridging disciplines, convening experts, and publishing accessible analysis of AI's toughest questions.
I’ll explain more in a minute, but I know you’re busy, so first things first:
Please subscribe to our newsletter Second Thoughts to get all of our analysis of developments in AI.
If you’re interested in hearing about Golden Gate events in your area (or other updates from us as an organization), sign up for announcements on the form at the bottom of our webpage.
Why a think tank? What about your consulting work?
The AI landscape has transformed dramatically since early 2023, when I launched AI Impact Lab. ChatGPT was still new and niche. AI consulting was a wide-open field – I didn’t know of any other consultants working to upskill mission-driven organizations on AI tools, or writing a newsletter aimed at supporting this work. I felt like I was blazing a trail through the wilderness. It was challenging, but fascinating and fun.
But what began as a relatively niche field has exploded into mainstream consciousness, with far-reaching implications. Over the course of 2024, the field of AI consulting mushroomed. My business was booming (hence the lack of posts on this Substack – my plate was full serving clients!). But at the same time, I felt my marginal impact was shrinking: other consultants were entering the field who, I knew, could serve my clients well if I stepped back.
Meanwhile, I was feeling a magnetic pull toward the big-picture societal implications of AI.
As I’ve said many times on this Substack before, I believe that AI will probably prove to be the most transformative technology of my lifetime. For better or worse, it’s going to have an enormous impact on your life, my life, my kids’ lives, and the lives of every other human being. More and more mainstream thought leaders are acknowledging that some version of “AGI” (artificial general intelligence, or human-level AI) might be just around the corner.
“Excuse me, Mr. Presidents, but you two need to get together, like, tomorrow. But it’s not to discuss the golden oldies — tariffs, trade and Taiwan…. There is an earthshaking event coming — the birth of artificial general intelligence. The United States and China are the two superpowers closing in on A.G.I. — systems that will be as smart or smarter than the smartest human and able to learn and act on their own.”
— Tom Friedman, March 2025, addressing Trump and Xi
Even setting aside other concerns – bioweapons, cybersecurity, and so on – many of them conclude that the labor market implications of AI advancements are potentially economy-shattering:
“[A]nyone with a job that involves words, data or ideas needs to become an AI prepper. Not in the overhyped doomer sense. (I’m not sure what preparations would be useful if superintelligent machines decided to destroy us.) But in the sense of preparing for how AI will disrupt society and especially work.”
— Megan McArdle, May 2025
If we’ve ever needed to have a technology governed in the public interest, it’s this one. But civil society infrastructure around AI is not keeping pace with the industry’s growth. And if you haven’t noticed, federal government capacity is… not high right now ☹️
We need more people thinking seriously about AI, educating decision-makers, helping prepare frameworks for possible policy interventions, and generally trying to shape our AI future. I want Golden Gate to help humanity navigate the coming AI transition in a way that’s beneficial for all.
OK, cut the jargon, what will Golden Gate actually do?
As Rachel, Steve, and I have spent the last few months making our plans for the year, we have noticed time and again how many gaps there are in the field, and how many people are yearning for more conversation and analysis. We will focus on two types of interventions where we see big gaps.
Convenings: Last year, on just three months’ notice, Rachel ran the wildly successful conference The Curve, a hotbed of productive conversations. The conference assembled several hundred people from AI labs, DC think tanks, Silicon Valley startups, AI safety organizations, academia, and elsewhere – groups that don’t often mix. New York Times writer Kevin Roose summarized his experience by saying “It felt like an event where history was happening.”
At Golden Gate, we’ll turn The Curve into an annual conference, and add many smaller events.

Accessible analysis: When Steve told me he wanted to try to reconcile the different stories about how many years we are from AGI, I originally thought that sounded a bit silly. Surely that territory was covered – isn’t that what everyone is talking about? But as we dug in, I realized how much sense-making remains to be done. We couldn’t find an existing catalogue of the skills humans apply to remote jobs that AI doesn’t yet have – the skills for which no one would bother making a benchmark or evaluation (yet), because all current AI systems would score a zero on them. (If you know of one, please let me know!)
How can one possibly project when AI will have those skills without listing them first? As MIT labor economist David Autor was quoted in the NYT today, predictions that AI will steal jobs often “underestimate the complexity of the work that people actually do.” So we set out to build a taxonomy of gaps in AI skills ourselves.
At Golden Gate Institute for AI, we will strive to fill gaps like these, and help translate technical knowledge of AI for broader audiences.
What we'll cover at the new Substack, Second Thoughts
Our work at Golden Gate focuses on four main areas, and we will be writing about all of them:
Timelines & Capabilities – how rapidly will AI advance?
Economic Impacts – how will AI affect jobs, markets, and economic opportunity?
Democracy & Governance – how must our governing institutions adapt to AI's challenges and opportunities?
Realizing Benefits – how can we maximize AI's potential for positive impact?
The name Second Thoughts reflects our approach: We won't be the first to rush out hot takes on new developments. Instead, we'll synthesize: When AI experts disagree, we’ll try to figure out why – what is the root of how they see the world differently? We’ll translate technical concepts and papers for broader consumption without losing the nuance. And when we identify a key question no one is writing about, we’ll interview specialists in fields from macroeconomics to robotics, from cybersecurity to public opinion, and share our findings with you.
We want to be the eye of the AI storm, helping you navigate the tumult.
What does this mean for AI for Good and AI Impact Lab?
I’m in the process of wrapping up my consulting work and plan to fully shut down AI Impact Lab soon. I’ll keep this newsletter, AI for Good, alive for now – cross-posting content from Second Thoughts, and possibly writing an occasional stand-alone post.
I'm deeply grateful for your support of this newsletter and of my journey into AI. I hope you'll continue this journey with me at Golden Gate and Second Thoughts as we work to bring AI's toughest questions into focus.