A completely new assessment
We rebuilt the assessment from the ground up. Twenty single-tap questions, about five minutes, written in plain English. No jargon. No sliders you have to interpret. No skill-sorting grids.
The old assessment demanded the technical literacy we're supposed to be teaching. That was broken. The new one meets you where you actually are: non-technical professional, early-career, recent grad, between jobs, or founder.
Medical-assessment UX under the hood: stages move from identity to context to goal, with the scary questions at the end, not upfront. If you say you're not worried about AI replacing you, we don't follow up with three more questions about how worried you are.
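Here's a rough sketch of how that staging and skip logic can be wired up. The stage names, question ids, and answer values below are illustrative, not our actual schema.

```typescript
// Illustrative sketch of staged questions with skip logic.
// Stage names, question ids, and answer values are made up for this example.
type Answers = Record<string, string>;

interface Question {
  id: string;
  stage: "identity" | "context" | "goal" | "outlook"; // the scary stuff lives in the last stage
  prompt: string;
  showIf?: (answers: Answers) => boolean; // absent means "always show"
}

const stageOrder = ["identity", "context", "goal", "outlook"] as const;

const questions: Question[] = [
  { id: "role", stage: "identity", prompt: "What do you do day-to-day?" },
  { id: "goal", stage: "goal", prompt: "What do you want AI to do for your career?" },
  { id: "worried", stage: "outlook", prompt: "Are you worried about AI replacing parts of your job?" },
  {
    id: "worried-when",
    stage: "outlook",
    prompt: "When do you expect that to start?",
    // The follow-up disappears entirely if the person said they're not worried.
    showIf: (a) => a["worried"] !== "not-worried",
  },
];

// The questions a given respondent actually sees, in stage order.
function visibleQuestions(answers: Answers): Question[] {
  return [...questions]
    .sort((a, b) => stageOrder.indexOf(a.stage) - stageOrder.indexOf(b.stage))
    .filter((q) => (q.showIf ? q.showIf(answers) : true));
}
```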
Real data from the Anthropic Economic Index
Your role's AI exposure now comes from the Anthropic Economic Index — 756 occupations with observed AI task coverage sourced from millions of real Claude.ai conversations. Open dataset. Peer-reviewed. You can audit every number on your report.
We pair today's exposure with a 1-year forward projection based on Anthropic's own published growth trajectory: directive Claude.ai use went from 27% to 39% in eight months. Apply that trajectory across occupations and a role at 30% exposure today realistically clears 60% in twelve months.
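In code, the projection is just a multiplier applied to today's exposure and capped at 100%. The multiplier below is a hypothetical stand-in; the actual value comes from the published rule-of-thumb, not this sketch.

```typescript
// Illustrative sketch of the 1-year projection. The multiplier is a
// hypothetical stand-in for the published rule-of-thumb, not the real value.
const TWELVE_MONTH_MULTIPLIER = 2.0;

function projectExposure(currentExposure: number): number {
  // Exposure is a share of tasks, so the projection is capped at 100%.
  return Math.min(1, currentExposure * TWELVE_MONTH_MULTIPLIER);
}

console.log(projectExposure(0.3)); // 0.6: a role at 30% today clears 60% in a year under this multiplier
```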
No vibes. No Goldman-360 handwaving. Real numbers you can trace.
A new results experience
Your report is no longer a dashboard dump. It's eight serialized screens that unfold in order: analyzing → your archetype → your role's twelve-month forecast → four readiness dimensions → your peer ranking → an interactive slider that shows what changes if you actually put hours in → the next-step CTA → a shareable identity card.
Most people used to scroll past the old dashboard without reading. The new sequence respects your attention. You see one thing at a time.
New archetypes — shareable, specific, honest
The old archetype names ("Spectator", "Operator") were diagnostically accurate and socially humiliating. Nobody wants to share "I'm a Spectator" on LinkedIn.
We renamed every archetype:
- Worried Operator — you see AI everywhere but haven't put it to work for you yet
- Stalled Veteran — you use AI daily but every task still runs through you
- The Builder — you ship with AI, you're no longer just using it
- Orchestrator-in-Training — you run multiple AI workflows, one step from running a function
- Human in Residence — you run an operation where AI does the labor, you do the judgment
Each one has dignity. Each one tells you where you are and what the next move looks like.
A venture path, not just a placement path
Most career-AI products push everyone toward one outcome: get hired. That's wrong for our top-tier users. If you're already shipping with AI and running multi-agent systems, getting placed at a mid-market company feels like a step backwards.
April's update puts two exits on the table. Placement — direct intros to companies hiring for AI-forward roles with comp data attached. Or venture — support to build your own AI-powered business. Same training and coaching, different destination.
Your assessment detects which path fits. If you're a founder or an Orchestrator+, we won't insult you with a placement pitch.
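As a sketch, that routing looks something like this. The archetype names are the real ones from above, but the signal fields and the rule itself are simplified stand-ins for the production logic.

```typescript
// Simplified sketch of path detection from assessment signals.
// The field names and the rule are illustrative, not the production logic.
type Archetype =
  | "Worried Operator"
  | "Stalled Veteran"
  | "The Builder"
  | "Orchestrator-in-Training"
  | "Human in Residence";

type Path = "placement" | "venture";

interface PathSignals {
  archetype: Archetype;
  isFounder: boolean; // self-identified during the identity stage
}

const ventureTier: Archetype[] = ["Orchestrator-in-Training", "Human in Residence"];

function recommendPath({ archetype, isFounder }: PathSignals): Path {
  // Founders and Orchestrator+ users never see a placement pitch.
  return isFounder || ventureTier.includes(archetype) ? "venture" : "placement";
}
```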
Honest limitations on every score
Most career products hide their methodology. That usually means they're embarrassed by it. Our methodology page now explicitly lists three things you should know about your score:
- Our exposure data is observed Claude.ai usage — it skews toward roles already using AI
- We map your role to the closest SOC code (the government's Standard Occupational Classification); the match is good but not perfect
- The 1-year projection is a published rule-of-thumb based on Anthropic's observed growth, not a predictive model
If you can't tell someone what your score can't do, you shouldn't be charging for what it can.
Fixes and improvements
- Power-user depth. Questions nine through ten-and-a-half now capture Builder-tier tools (Cursor, LangChain, MCP, evals) and shipped production capabilities (multi-agent systems, fine-tuned models, public teaching). Power users no longer max out at "running a whole workflow end-to-end."
- Skip-logic. If you say you're not worried about AI, we don't ask you three more emotional questions. The assessment adjusts to your answers in real time.
- Multi-select on "what you do day-to-day." Most people wear multiple hats — the question now allows it.
- Expanded "what you've tried." Added social feeds, asking AI directly, teaching others, and building to learn — the ways people actually learn AI in 2026.
- Slider screen works for ceiling-level users. If your score is already 95+, we skip the growth-slider nonsense and show a leverage-level screen instead: the question isn't your score, it's what you build with the hours you already have. A rough sketch of that gate follows this list.
- CTAs read your signals. Every call-to-action on the results page now reads the venture/placement signals you gave us. Founders see "Talk to us" about your venture. Mid-career professionals see "Apply to join" for placement. No more generic "real career move" pitches.
- Result card shareability. The shareable identity card is Co-Star-minimalist: black background, your archetype name in serif, your percentile ranking. No numeric score on the share (nobody shares a number they're not proud of).
- Killed jargon site-wide. Banned from the product: "agentic," "SOC code," "task exposure," "delegate-heavy," "shipped artifacts." Our ICP is non-technical professionals. They shouldn't need a glossary.
- Landing page structural fix. Features section ("What you walk away with" — outcomes) and Showcase section ("How it works" — process) used to duplicate each other. Now they have different jobs.
- Consistent archetype vocabulary. The Personas section on the landing page now uses the same archetype names as the assessment. Funnel is finally internally consistent.
- One ask per page. The bottom CTA is now one button, not two. Less decision paralysis.
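For the slider-screen change above, the ceiling gate is roughly this. The 95 threshold matches the description; the screen names and the shape of the result object are illustrative.

```typescript
// Rough sketch of the ceiling gate on the results slider screen.
// The 95 threshold matches the description above; screen names and the
// shape of the result object are illustrative.
interface ReadinessResult {
  score: number; // 0-100 overall readiness
  weeklyHoursAvailable: number;
}

type GrowthScreen =
  | { kind: "growth-slider"; startingScore: number }
  | { kind: "leverage"; hours: number };

function pickGrowthScreen(result: ReadinessResult): GrowthScreen {
  // At 95+ there is no meaningful score left to grow into, so the screen
  // shifts to what you build with the hours you already have.
  if (result.score >= 95) {
    return { kind: "leverage", hours: result.weeklyHoursAvailable };
  }
  return { kind: "growth-slider", startingScore: result.score };
}
```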