The scientific bedrock.
In 2024, a landmark study by Eloundou et al. was published in Science, one of the world's most cited scientific journals, systematically evaluating the exposure of human work tasks to AI. This peer-reviewed study provides the most comprehensive public dataset on AI task exposure. Our proprietary methodology draws on its findings alongside multiple other research sources.
One methodological finding from the study shapes our engine directly: human reviewers rated AI exposure consistently lower than AI self-ratings. AI systems tend to overestimate their own capability. We apply human-calibrated weights throughout our scoring — the result is a more conservative, more accurate risk profile than tools that rely on AI-generated scores alone.
Why job titles are misleading.
The foundational insight of both our methodology and Eloundou et al. is that AI does not replace jobs — it replaces tasks. Two people with identical job titles can have wildly different AI exposure based on what they actually do day-to-day.
Task-Level Decomposition (TLD) is the process of breaking a role into discrete units of work and classifying each unit into one of three exposure categories.
Your risk score is determined by the proportion of your role that falls into Mechanical vs. Human-Centric categories — and by how rapidly the Mechanical boundary is expanding as AI capability advances.
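To make the idea concrete, here is a minimal sketch of proportion-based scoring. Everything in it is illustrative: the `Task` structure, the two-category simplification, and the hours-weighted formula are assumptions for exposition, not the proprietary weighting the engine actually uses.

```python
# Illustrative sketch only. Category names, weighting, and the scoring
# formula are simplified assumptions, not the proprietary methodology.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_week: float
    category: str  # "mechanical" or "human_centric" (simplified to two here)

def exposure_score(tasks: list[Task]) -> float:
    """Share of weekly hours spent on Mechanical tasks, on a 0-100 scale."""
    total = sum(t.hours_per_week for t in tasks)
    mechanical = sum(t.hours_per_week for t in tasks if t.category == "mechanical")
    return round(100 * mechanical / total, 1)

role = [
    Task("Draft status reports", 10, "mechanical"),
    Task("Reconcile spreadsheets", 8, "mechanical"),
    Task("Negotiate with stakeholders", 12, "human_centric"),
    Task("Mentor junior staff", 10, "human_centric"),
]
print(exposure_score(role))  # 45.0
```

The point of the sketch is the shape of the calculation: two people with the same title but different hour allocations get different scores, which is why the analysis works at the task level rather than the job level.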
The Agentic Exposure Index.
The Agentic Exposure Index (AEI) is our proprietary risk metric. It measures exposure specifically to agentic AI — systems that can plan, reason, and execute multi-step workflows autonomously — not just the simpler pattern-matching AI of 2022.
Agentic AI is qualitatively different from earlier LLMs. It does not just answer questions. It can set goals, use tools, write and run code, navigate interfaces, and execute complete workflows end-to-end. The displacement risk for most white-collar roles comes from this agentic layer — and most published research predates it.
The AEI score is a 0–100 integer mapped to three risk tiers: low, medium, and high. Specific band thresholds are proprietary.
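Since the real band thresholds are proprietary, the mapping can only be shown with placeholders. The sketch below uses made-up cut-offs of 40 and 70 purely to illustrate the structure of a three-tier banding; the actual boundaries differ.

```python
# Illustrative only: the real AEI band thresholds are proprietary.
# The cut-offs below (40 and 70) are made-up placeholders.
def risk_tier(aei: int) -> str:
    """Map a 0-100 AEI score to one of three risk tiers."""
    if not 0 <= aei <= 100:
        raise ValueError("AEI must be a 0-100 integer")
    if aei < 40:
        return "low"
    if aei < 70:
        return "medium"
    return "high"

print(risk_tier(25), risk_tier(55), risk_tier(85))  # low medium high
```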
What we calibrate against.
Our scoring engine is calibrated against multiple independent data sources. No single source is sufficient — together, they give us a cross-validated picture of where AI capability currently sits, how fast it is advancing, and which tasks are first to fall.
What this isn't.
AI career risk analysis is inherently probabilistic. No methodology — however grounded in research — can predict with certainty which roles will be automated and when. The pace of AI development introduces genuine uncertainty. Economic, regulatory, and organisational factors also modulate outcomes in ways that are difficult to model.
Our analysis represents a research-calibrated probability assessment based on current AI capabilities, published economic research, and observed deployment patterns — not a guarantee of any outcome.
The AEI score reflects task-level exposure, not your employability, performance, or career trajectory. High exposure does not mean you will lose your job. It means a significant portion of your current task stack is on an automation trajectory — and that strategic repositioning is warranted.
This report is intended as a strategic thinking tool, not professional career, financial, or legal advice.
See where your role lands.
Free preview available instantly. Full 10-section intelligence report delivered to your inbox.
Analyse My Role →