AI Workshop / April 22-23, 2026

Learn to make AI do real work for you.

Fabrik, Dumbo, Brooklyn

Two days of hands-on learning for women who are ready to go beyond chatting with AI. You'll leave with real tools that do real work for you.

Get Tickets: $2,300 (regularly $2,800) - early bird ends April 6
Get your company to cover this →
10 Modules - Prompting to autonomous agents
2 Real Builds - You leave with things that run
Coaches On-Site - Hands-on help while you build
No Experience Needed - All levels welcome
What is this?

Two days of learning, building, and being in a room with women you'll want to know.

Start with a spec, not a prompt - Learn to define what you're building before you touch a tool.
Leave with something that runs - A real workflow you'll actually use next week.
From chatting to orchestrating - Go from asking AI questions to building agents that work for you.

You're already the expert. Now get more leverage.

You know your domain better than any AI ever will. This workshop teaches you to turn that expertise into systems that run - so you stop doing the work manually and start building the thing that does it.

How you do it today

~8 hrs/week
  1. You export last month's closed-lost deals from Salesforce and sort through them manually.
  2. You read every deal note trying to figure out why you're losing.
  3. You notice three deals mention the same missing integration. You Slack the product team.
  4. You can tell mid-market win rate is down but you're not sure why. You spend an hour pulling comp data.
  5. You update the battlecard. You hope the reps read it.

What building with AI unlocks

~30 min
  1. You open a dashboard. Your system already tagged every closed-lost deal by reason - pricing, missing feature, competitor, timing - with revenue impact attached.
  2. Your system caught that three deals worth $420K cited the same missing integration and drafted a product ticket with customer quotes, projected revenue, and the assumptions behind it.
  3. You review the ticket, adjust the priority, and send it - the product team gets a real case, not a Slack message they'll forget.
  4. Your system traced a 15% drop in mid-market win rate to a competitor pricing change launched 3 weeks ago. It updated the battlecard with counter-positioning.
  5. Your two reps with mid-market deals closing this month already got a Slack with the updated talk track and objections to expect.

How you do it today

~half a day
  1. You pull up the competitor's site, scroll their LinkedIn, check their ads library. You try to piece together what they launched.
  2. You screenshot everything, paste it next to your messaging, and squint at the overlap.
  3. You get your team in a room to brainstorm differentiation. You leave with a whiteboard of ideas but nothing concrete.
  4. You brief a designer on new landing page variants. You wait two days for a first draft.
  5. You go back to the CEO with "we're working on it."

What building with AI unlocks

~15 min
  1. Your system flagged this overlap three days ago, when the competitor first changed their messaging. You already knew.
  2. It proposed three differentiation angles, each with a landing page mockup you can preview right now.
  3. It ran a conversion estimate on each option based on your last 90 days of campaign performance - option two will likely convert best, but option three better positions you against a second competitor who's gaining ground.
  4. You pick option three. Your system updates the landing page, the ad copy, and the email sequences.
  5. You reply to the CEO with the analysis, your decision, and the reasoning behind it - before lunch.

How you do it today

~a week
  1. You Google for TAM estimates, pull numbers from three analyst reports that don't agree, and triangulate in a spreadsheet.
  2. You build a model with one set of assumptions - best case, basically - because you don't have time for more.
  3. You email the draft to four stakeholders. You wait two days. You get conflicting feedback.
  4. You revise the model, realize one assumption was off, and redo the downstream math.
  5. You build a 15-slide deck with a recommendation you feel okay about but wouldn't bet your job on.

What building with AI unlocks

~an hour
  1. Your system has been tracking potential market opportunities for months - pulling customer requests, competitor expansions, and adjacent market growth signals. The segment your CFO is asking about is already in there with a preliminary sizing and thesis.
  2. You kick off a deeper analysis. Your system models four entry scenarios - aggressive, conservative, partnership-led, acquisition-led - each with different assumptions on pricing, ramp, and churn.
  3. Each scenario shows exactly where it breaks: "fails if churn exceeds 8%," "only works above $50K deal size."
  4. It flagged a risk you hadn't considered: two of your largest accounts compete in the new segment, creating potential channel conflict. You adjust the partnership scenario to account for it.
  5. You walk into the board meeting with an interactive deck where the CFO can adjust assumptions live - change the churn rate, shift the pricing, see how it flows through the whole model in real time.

How you do it today

~2 weeks
  1. You dig through Salesforce to find the churn note. It's vague - "missing reporting capabilities." You Slack the account manager for the real story.
  2. You search for how many other customers have asked for the same thing. It's scattered across support tickets, sales calls, and a Productboard tag no one's maintained.
  3. You try to estimate the revenue impact. You pull a list of accounts that mentioned it, guess at churn risk, and build a rough model in a spreadsheet.
  4. You write up a business case and circulate it. Half the stakeholders think it's urgent, the other half say you're overreacting to one churned account.
  5. You present to leadership two weeks later. They ask how big this really is. You're not sure enough to bet on a number.

What building with AI unlocks

~2 days
  1. Your system already tracks every feature request, support ticket, and sales call mentioning this capability. You can see that 34 accounts have asked for it, 3 have churned citing it, and requests spiked 40% last quarter.
  2. It pulled the actual quotes from churned customer calls - not a summary, the exact words they used to describe what was missing and what they switched to.
  3. You write a short spec for the feature. Your system turns it into a working prototype and runs a simulation to see where users would drop off - you fix the weak points before engineering sees it.
  4. You ask your system to weigh this against what it would displace on the roadmap. It models the revenue impact of building this vs. shipping what's already planned - and shows you which bet is bigger.
  5. You walk into the review with a working demo, the revenue data, the customer quotes, the trade-off analysis, and a clear recommendation.

How you do it today

~a week
  1. You read through all 12 transcripts, highlighting quotes and tagging themes as you go.
  2. You build an affinity map in FigJam, grouping insights on sticky notes, rearranging until patterns emerge.
  3. You write a findings deck - key themes, supporting quotes, recommendations. It takes a full day.
  4. The product team asks for more detail on two themes. You go back through the transcripts to pull more evidence.
  5. You present on Friday. Half the insights get deprioritized because there's no clear tie to business impact.

What building with AI unlocks

~a few hours
  1. Your system surfaces the key themes across all 12 transcripts with supporting quotes attached. You scan the themes and pick the three you want to dig deeper into.
  2. It pulls the full context from those interviews - what came before and after each quote, what the participant's tone was, where they contradicted themselves. You go deep without reading 12 full transcripts.
  3. You flag what feels most important. Your system cross-references your themes against support ticket volume, churn data, and company strategy - showing where your instincts are backed by data and where they're not.
  4. You push back on the gaps. The system pulls more evidence or shows you why the data disagrees. You make the final call on what stays.
  5. Your system drafts the deck with your chosen themes, ordered by business impact, each backed by quotes and data. You present on Friday and nothing gets deprioritized - every recommendation already has the "why should we care" built in.

How you do it today

~a week
  1. You email the client for access to their internal data. They say they'll "check with IT" and go quiet for two days.
  2. You cobble together a market view from public sources, a few analyst reports, and what you remember from the kickoff call.
  3. You build a slide deck with your recommendation, knowing the supporting evidence is thinner than you'd like.
  4. You send it to the partner for review. They push back on two assumptions and ask you to reframe the narrative.
  5. You present to the client. They poke at the numbers. You spend half the meeting defending methodology instead of discussing strategy.

What building with AI unlocks

~a day
  1. Your system already ingested everything from the engagement - kickoff notes, interview transcripts, every document the client shared - and organized it by theme so you're not hunting through a shared drive.
  2. It cross-referenced the client's data against public market benchmarks, competitor filings, and industry reports to show where the client sits relative to peers.
  3. You build your recommendation. Your system stress-tests it against three different scenarios and surfaces the assumptions most likely to get challenged in the room.
  4. You pressure-test the narrative against a simulated version of the client's CEO - the one who always asks "what are we missing?" Your system flags two blind spots, and you address them.
  5. You walk into the meeting with a recommendation backed by the client's own data, external benchmarks, and pre-built answers to the three hardest questions they'll ask.

You're good at what you do. You've been paying attention to AI, maybe using it a little, but you know there's more. You want to learn by doing - not by watching someone else's demo. And you'd rather spend two days with interesting, thoughtful women than sit through another virtual event. This is for you.


Workshop Agenda

Day 1: Doors at 9am, programming 9:30am - 5pm
Day 2: 9am - 5pm
Happy Hour after Day 1
Day One / April 22

From Chatting to Building

Learn the frameworks, then build something real.

  • 9:30 - Is This an AI Problem? A framework for identifying which problems AI solves well - and which it doesn't. Live demo, then you brainstorm and rank your own ideas.
  • 10:30 - Write Your Spec Product thinking for AI: define what success looks like, how to test your way there, and what can go wrong - before you touch a tool.
  • 11:30 - Prompting That Actually Works Context engineering, iterative prompting, and the difference between garbage and gold.
  • 12:00 - Lunch
  • 1:00 - The AI Toolkit What each tool does, when to use it, and how professionals stack them by role.
  • 1:30 - Build Your Idea Take your spec and build it. Three structured sprints with coaches circulating, then a 60-second share-out.
  • 4:00 - Fireside Chat Building tools for yourself and your team.
  • 5:00 - Happy Hour
Day Two / April 23

From Building to Orchestrating

Go from building apps to building agents that run on their own.

  • 9:00 - What Is an Agent, Actually? The difference between a chatbot and an agent. Sense, Plan, Act framework - and what can go wrong.
  • 10:00 - Give Your Agent a Job Description Write the role card your agent reads every time it starts. Intro to agents, skills, and workflows.
  • 11:00 - Choose Your Build Pick your project: a daily briefing, a meeting prep agent, a personal productivity tool, or something of your own. Our team helps you scope it.
  • 12:00 - Lunch
  • 1:00 - Build in Claude Code Three sprints with coaching support. You leave with a real workflow you'll use next week.
  • 3:30 - What's Next Build your personal AI backlog: what you'll tackle this week, your 1-3 month goal, and how to keep momentum.
  • 4:15 - Panel What AI actually changes about how you lead.

Frequently Asked Questions

What tools will we use, and do I need to be technical?
On Day 1, you'll build in Lovable - a browser-based tool that lets you create working apps by describing what you want in plain English. No downloads, no setup, no code. On Day 2, you'll use Claude Code or Cursor - more powerful tools that let you build AI agents and automations that run on your computer. We'll walk you through setup and have coaches on hand if you get stuck.
Will this still apply if my company uses different AI tools?
Yes. We teach frameworks, not just tools. How to identify a good AI problem, how to write a spec, how to think about risks and guardrails - that applies whether you're using Claude, Gemini, ChatGPT, or whatever your company rolls out next. The specific tools we use in the workshop are a way to practice those skills by building something real. Most people find that once they've actually built something, they're dramatically better at using whatever AI tools they already have access to.
What will I actually be able to build after this?
You'll leave knowing how to break down a work problem, decide if AI can solve it, and build a working version - without waiting for your engineering team or IT department. Things like a tool that preps you for every meeting by pulling in context on who you're talking to, a weekly competitor digest that scans news and tells you what's worth paying attention to, or an agent that drafts your team's status update from Slack and email. You'll also leave with a backlog of AI project ideas specific to your role, so you know exactly what to build next.
Can I build things for my personal life, too?
On Day 2, you'll build an AI agent that connects to your personal email, calendar, or news sources. Things like an agent that scans your kids' school emails and pulls out the dates and action items you actually need, or one that builds your grocery list from a meal plan and places the order. These are things you'll use the following week. And once you've built one, you start seeing opportunities everywhere - the skill compounds fast.
Can I expense this to my company?
Most attendees do. We put together a justification letter you can share with your manager that outlines what you'll learn and how it applies to your role.
97% of past Grrls in the Loop attendees would recommend our events to a friend.
Hilary Gridley
Founder, Writerbuilder
Former Head of Core Product, WHOOP
Anjali Ahuja
Staff AI Product Manager, WHOOP
Former Founder, Safrn Health
Alexa Murray
Stanford GSB
Women's Health Product Manager
Liv Benger
CX Programs & Enablement Lead, Rain
Former AI Manager, WHOOP

Wednesday-Thursday, April 22-23, 2026

Fabrik

20 Jay St, Suite 218, Brooklyn, NY 11201

In person. Hands-on. Bring your laptop.

Tickets
Early Bird Pricing
Two-Day Workshop Pass
All sessions, builds, fireside, and panel. April 22-23.
$2,300 (regularly $2,800) - Save $500
Early bird ends April 6. Prices increase to $2,800 after this date.
Secure Your Spot →
Limited spots. Questions? Email hilary dot gridley at gmail dot com
Most people get this reimbursed. Here's how →
Not ready yet? Stay in the loop →

What past attendees took away from the day.

  • "Vibe coding is accessible!"
  • "Be the driver, don't have AI drive."
  • "The possibilities are endless."
  • "Play. Test. We're all figuring it out!"
  • "It's nowhere near as technically challenging as I thought."
  • "I should push myself to explore, ideate, and tinker more."
  • "Holy Moly, I've been playing on hard mode by NOT learning about these technologies."
  • "Be really specific with prompts and iterate, iterate, iterate."
  • "The people in the room are so creative."
  • "Having direct access to other women using AI tools to ask questions."
  • "I was intimidated to get started, and the demo gave me confidence to build an MVP."

Our first workshop sold out. Early bird pricing ends April 6.

Get Tickets