Press

Available for keynotes, panels, podcasts, and media commentary on decision systems, AI governance, and healthcare policy.

Decision Architect · Harvard PhD · Former faculty (Operations/Tech) · Founder & CEO of Doogooda (Decision Systems)

Lina Song in studio

For Hosts & Producers

Name Pronunciation

Lina Song — LEE-nah SONG

Title

Founder & CEO, Doogooda

One-liner

"Decision architect for regulated institutions"

Based in

Milwaukee, WI (through mid-2026) · Seoul, KR

Time Zones

CT (UTC-6) primary · ET/PT available · KST for Asia-Pacific

Remote Setup

Professional mic (Shure MV7) · Ring light · Neutral background

Turnaround

Same-day for media · 3-5 days for speaking

Current angle

Decision scientist with comparative Korea–US healthcare perspective. Actively working with US hospitals on operational decision systems.

Short (~50 words)

Lina Song is a Decision Architect and Founder & CEO of Doogooda, building auditable decision systems that turn data into defensible actions in high-stakes, regulated environments. Her work focuses on decision-making under uncertainty, accountable AI in deployment, and operational governance in healthcare and public systems.

Medium (~100 words)

Lina Song is a Decision Architect and the Founder & CEO of Doogooda, where she designs auditable decision systems that translate data into defensible actions under real constraints—capacity, staffing, budgets, and policy. Her approach combines causal reasoning, scenario simulation, and optimization into decision infrastructure that can be explained, stress-tested, and audited. She previously held an academic appointment in operations and technology and has worked across healthcare and public-sector decision contexts. Lina speaks and writes about decision-making under uncertainty, accountable AI in deployment, and why operations is fundamentally a governance problem in high-stakes institutions.

Long (~200 words)

Lina Song is a Decision Architect and the Founder & CEO of Doogooda, a decision-intelligence company focused on auditable systems that help institutions make defensible choices under uncertainty. Rather than treating AI as prediction, she builds decision infrastructure: explicit assumptions, scenario simulation, constraint-aware optimization, and governance-ready outputs that teams can justify, execute, and audit. Her work is grounded in real operational constraints—capacity, staffing, budgets, and policy trade-offs—especially in regulated, high-stakes environments such as healthcare and public systems. Lina previously served in an academic role in operations and technology and has worked across research and applied settings. She speaks and writes about the practical meaning of "accountable AI," the politics of AI infrastructure, and why many operational problems are ultimately governance problems. Her core themes include trade-offs and incentives, decision quality under uncertainty, and designing systems that remain credible when stakes are high and accountability is non-negotiable.

Pre-packaged segments for TV, podcasts, and panels. Each includes talking points, suggested graphics, and a one-sheet.

Segment

Healthcare Affordability Crisis

Why healthcare costs keep rising—and what policy levers actually work.

Why Now

With healthcare costs hitting record highs and election-year debates intensifying, audiences want clarity on what's broken and what's fixable.

Key Points

  • The hidden decision systems that drive healthcare pricing
  • Why transparency laws haven't lowered costs (and what would)
  • Trade-offs policymakers face between access, quality, and cost

Why Me

My PhD research at Harvard used US Medicare claims data to study how hospital closures and physician-hospital integration actually affect care quality and costs—not in theory, but in the data. I've since built decision systems inside Korea's universal healthcare system and am now working directly with US hospitals. I bring both the research rigor and the operational experience to cut through the talking points.

Segment

AI Accountability Gap

When AI makes decisions, who's responsible when it goes wrong?

Why Now

Every AI incident makes headlines, but coverage focuses on the model. The real story is institutional accountability gaps.

Key Points

  • Why 'explainable AI' doesn't mean accountable AI
  • The governance structures institutions actually need
  • Real cases where AI accountability failed—and how to fix it

Why Me

I build AI decision systems for regulated healthcare operations—environments where "the model got it wrong" isn't an acceptable answer. My work produces auditable decision trails: documented assumptions, binding constraints, and explicit trade-offs. This isn't a framework I teach—it's infrastructure I ship, tested in hospital settings where accountability is non-negotiable.

Segment

Elections as Decision Systems

Cutting through election narratives with decision-systems thinking.

Why Now

Every election cycle is flooded with causal claims that lack rigor. Audiences deserve frameworks to evaluate what's real.

Key Points

  • How to separate causal claims from post-hoc narratives
  • The constraints and trade-offs candidates actually face
  • Why most 'what won the election' takes are unfalsifiable

Why Me

I've applied decision-systems frameworks to real political campaigns and policy advisory work in Korea, and my academic training is in causal inference and decision science under uncertainty. I bring a cross-institutional lens—having worked inside both Korean and American policy contexts—and the methodological rigor to distinguish signal from narrative.

Decision-making under uncertainty

Trade-offs, governance, and how to structure choices when outcomes are unknowable

Substack

Auditable AI in real institutions

Assumptions, accountability, and what organizations actually need from AI systems

K Metaverse News

Healthcare operations as policy-native decision intelligence

Why clinical decisions are governance problems, not just analytics problems

Substack

Emerging Topics

Elections as decision systems

Uncertainty, causal claims, and governance frameworks for electoral and policy interpretation

US–Korea institutional comparison

What transfers, what doesn't, and why context matters for policy

From dashboards to decisions

How to operationalize 'what to do' instead of 'what happened'

Stay Current

For ongoing analysis and frameworks:

Substack →

Available Talks

  • Decision Architecture for Uncertain Times
  • Auditable AI: Beyond Explainability
  • Elections as Decision Systems
  • Healthcare Operations: From Analytics to Actions
  • Building Accountable AI for Regulated Institutions

Decision Architecture for Uncertain Times

A framework for structuring organizational decisions when predictions fail. Covers trade-off mapping, assumption documentation, and governance design.

Key Takeaways

  • How to map trade-offs before they become crises
  • A template for documenting assumptions that change
  • Governance design that survives uncertainty
Audience: Leadership teams, policymakers, healthcare executives
Duration: 45-60 min

Auditable AI: Beyond Explainability

What institutions actually need from AI systems, and why current approaches fall short. A practical framework for accountability.

Key Takeaways

  • Why explainability theater fails in regulated contexts
  • The decision trail: what to document and why
  • Accountability frameworks that work across stakeholders
Audience: Technology leaders, governance professionals, regulators
Duration: 30-45 min

Speaking

Coming soon

Lina Song - Formal

Formal

Download
Lina Song - Editorial

Editorial

Download
Lina Song - Speaking

Speaking

Download

High-resolution images available. Usage permitted for press with attribution.

Speaking & Events

Media & Press

Response Time

Same-day for urgent media · 3-5 days for speaking

For urgent requests, include "URGENT" in the subject line.

Book Lina