Swapneel Mehta, Ph.D.
Co-founder and President, SimPPL
QUESTION 1
Customer support productivity up ~14%, biggest gains for less-experienced agents
Non-expert workers completed higher-quality work 37% faster. Performance converged across skill levels.
Where exactly in your workflow would AI remove the most friction: ideas, drafts, QA, or analysis?
Pilot with high-volume, repeatable tasks. Measure cycle time, quality, and escalation rates pre/post. Pair junior staff with AI to compress ramp-up time and accelerate skill convergence.
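The pre/post comparison above can be sketched in a few lines. This is an illustrative example only; the metric names and sample values are made up, not real pilot data.

```python
# Hypothetical sketch: comparing pilot metrics before and after an AI rollout.
# All numbers below are illustrative placeholders.

def pct_change(before: float, after: float) -> float:
    """Percent change from the pre-AI baseline to the post-AI pilot."""
    return (after - before) / before * 100

baseline = {"cycle_time_min": 42.0, "quality_score": 4.1, "escalation_rate": 0.18}
pilot    = {"cycle_time_min": 33.5, "quality_score": 4.3, "escalation_rate": 0.12}

for metric in baseline:
    delta = pct_change(baseline[metric], pilot[metric])
    print(f"{metric}: {delta:+.1f}%")  # negative is good for time and escalations
```

Tracking all three metrics together guards against the obvious failure mode: cycle time falls while quality or escalations quietly worsen.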
QUESTION 2
~40% of jobs globally are exposed to AI; ~60% in advanced economies. But exposure does not mean displacement. Outcomes depend on whether companies augment workers or replace them.
Jobs with heavy text-based, analytical, and interpersonal tasks are most affected (customer ops, marketing, coding, legal), but effects vary widely across roles and industries
What parts of your job can AI substitute vs. what parts does AI complement?
If 20 to 30% of tasks are automated, how will you redesign jobs, KPIs, and career ladders?
Monitor redeployment and reskilling, not just net headcount changes
Treat AI as a task-level shock, not a job-level tsunami. Companies will build role redesign and reskilling around high-exposure tasks first.
QUESTION 3
AI Impact Assessment tied to NIST AI RMF
Align to EU AI Act risk tiers
Log testing, oversight, provenance
QUESTION 4
Frontier models are concentrated in a few firms with massive compute budgets and proprietary data/evals
Meta's Llama 3 family (8B to 405B) increases access under a custom license, but it is still not "fully open source"
Capital-intensive training; costs and scale reinforce centralization in hyperscalers and chip vendors
Stanford AI Index 2024 | Meta Llama 3 | McKinsey Compute Analysis
Will you rely solely on APIs, or build a dual stack (closed APIs + open-weight models on your VPC)?
How do export controls and market concentration affect your resilience and bargaining power?
Avoid one-way doors. Design a portable architecture: model abstraction layer, standardized evals, and configurable guardrails so you can switch models as prices and quality shift.
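The "model abstraction layer" above can be as simple as a shared interface that application code depends on. A minimal sketch, assuming two hypothetical providers; the class and function names here are illustrative, not a real SDK:

```python
# Sketch of a model abstraction layer: app code targets one interface,
# so swapping providers is a config change, not a rewrite.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ClosedAPIModel:
    """Stand-in for a hosted closed-API provider (wrap the real client here)."""
    def complete(self, prompt: str) -> str:
        return f"[closed-api] answer to: {prompt}"

class OpenWeightModel:
    """Stand-in for an open-weight model served inside your VPC."""
    def complete(self, prompt: str) -> str:
        return f"[open-weight] answer to: {prompt}"

def answer(model: ChatModel, prompt: str) -> str:
    # Application logic sees only the ChatModel interface.
    return model.complete(prompt)

print(answer(ClosedAPIModel(), "summarize this ticket"))
print(answer(OpenWeightModel(), "summarize this ticket"))
```

Standardized evals then run against the same interface, so a price or quality shift becomes a measured swap rather than a migration project.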
QUESTION 5
Providers cut prices aggressively to win developer adoption. Competition intensified in China during 2024.
Serving is the main ongoing cost at scale. Hardware, batching, and distillation improve economics, but margins stay thin.
Multi-trillion-dollar data center expansion creates pressure for low per-token pricing
SemiAnalysis | McKinsey
As tokens get cheaper, will your total spend actually fall, or will usage balloon, keeping costs constant or higher?
Metric That Matters: Track cost-per-business-outcome (e.g., cost per qualified lead, per resolved ticket), not cost per token alone.
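The arithmetic behind that metric is simple but easy to skip. A quick sketch, with made-up numbers, showing why spend divided by outcomes tells a different story than price per token:

```python
# Illustrative only: all figures below are assumed, not real pricing data.

tokens_used      = 4_000_000   # tokens consumed this month
price_per_1k     = 0.002       # $ per 1K tokens (assumed rate)
tickets_resolved = 1_250       # business outcomes delivered

spend = tokens_used / 1_000 * price_per_1k       # $8.00 total
cost_per_outcome = spend / tickets_resolved      # $ per resolved ticket

print(f"total spend:     ${spend:,.2f}")
print(f"cost per ticket: ${cost_per_outcome:.4f}")
```

If token prices halve but usage triples, per-token cost falls while cost per resolved ticket rises; only the second number answers the question in the line above.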
QUESTION 6
Which complements do you own: distribution, proprietary data, brand equity, deep integration, or regulatory compliance? If you don't own any of these, you are donating all your value to platform providers.
Treat foundation models as a commodity input. Differentiate on data advantage, user experience, and last-mile integration where you control the customer relationship.
QUESTION 7
What specific evals or capability thresholds would force you to re-evaluate strategy?
Design systems with abstraction layers so you can switch models as capabilities and costs shift
Invest in workforce adaptability and evaluation infrastructure, not speculative roadmaps
AGI-Agnostic Operating Model: Build flexibility, continuous learning, and product roadmaps that don't depend on speculative breakthroughs.
QUESTION 8
US export controls restrict sales of advanced chips and manufacturing equipment to China; rules refreshed Oct 2023 and Apr 2024 to close loopholes
First comprehensive horizontal AI regulation, with extraterritorial effects for providers placing systems on the EU market
UK Bletchley Declaration (Nov 2023) and Seoul AI Summit (May 2024) produced safety commitments, but these are still voluntary norms
UK Government
Monitor evolving rules across jurisdictions where you operate
Reduce single points of failure in compute, chips, and data infrastructure
Log evals, human oversight, and decision trails for high-risk use cases
LLMs can produce confident falsehoods. Mitigation requires retrieval (RAG), task decomposition, verification steps, and clear UX to set user expectations.
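The retrieval-plus-verification pattern above can be sketched as a thin wrapper. In this hedged example, `retrieve()` and `generate()` are stubs standing in for your search index and model call, and the grounding check is deliberately naive:

```python
# Sketch of a retrieval + verification guardrail. Stubs replace real systems.

def retrieve(query: str) -> list[str]:
    # Stub: in practice, query a vector store or search index.
    return ["Refund window is 30 days from delivery."]

def generate(query: str, context: list[str]) -> str:
    # Stub: in practice, call the model with retrieved context prepended.
    return "Refund window is 30 days from delivery."

def verify(answer: str, context: list[str]) -> bool:
    # Naive grounding check: accept only answers supported by a retrieved
    # passage. Real systems use entailment models or citation checks.
    return any(answer in doc or doc in answer for doc in context)

def answer_with_guardrail(query: str) -> str:
    docs = retrieve(query)
    draft = generate(query, docs)
    if verify(draft, docs):
        return draft
    # Clear UX: admit uncertainty instead of returning an unverified claim.
    return "I can't confirm that from our docs; escalating to a human."

print(answer_with_guardrail("What is the refund window?"))
```

The point is structural: the model's draft never reaches the user without passing a verification step, and the fallback message sets expectations instead of guessing.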
Active legal fronts (NYT v. Microsoft and OpenAI; Authors Guild v. OpenAI). Licensing and provenance tooling will become critical differentiators.
Establish continuous evals and red-teaming protocols. Log and review AI decisions. Implement guardrails. Prefer traceable training and fine-tuning data where feasible.
Focus on measurable outcomes: customer support, sales ops, knowledge management. Track clear before/after metrics.
Baseline, pilot, A/B test, scale. Treat models as replaceable. Build abstraction layers from day one.
Adopt NIST AI RMF "govern-map-measure-manage." Align with EU AI Act risk tiers if you touch EU markets.
The organizations that win with AI won't be the ones with the best models. They'll be the ones with the best processes, evals, and change management.
CNBC, CNN, TechCrunch, April 15, 2026
BIRD (NASDAQ)
$16.99 +582%
Apr 15, 2026 · Market Close
A shoe company valued at $21M on Tuesday is now worth $148M as a GPU provider. The "AI" label alone added $127 million in market cap.
For context: Long Island Iced Tea rebranded to "Long Blockchain" in 2017 and saw a similar surge. That company was later charged with securities fraud.