Methodology · Research Foundation

Built on peer-reviewed science. Scored with proprietary intelligence.

Our analysis is grounded in the largest published study of AI exposure across human work tasks — Eloundou et al., Science, 2024. The scoring engine that sits on top of it is proprietary to AI Career Architect.

Research Foundation

The scientific bedrock.

In 2024, Science, one of the world's most cited scientific journals, published a landmark study that systematically evaluated the exposure of human work tasks to AI. This peer-reviewed study provides the most comprehensive public dataset on AI task exposure. Our proprietary methodology draws on its findings alongside multiple other research sources.

Primary Research · Peer Reviewed
GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models
Eloundou, Manning, Mishkin, Rock · Science · 2024
"Approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while around 19% of workers may see at least 50% of their tasks impacted." Human annotators rated AI exposure more conservatively than AI self-ratings — our scoring reflects this human-calibrated view.
19,265 individual tasks rated
923 occupations analysed
80% of workers with ≥10% of tasks affected

One methodological finding from the study shapes our engine directly: human reviewers rated AI exposure consistently lower than AI self-ratings. AI systems tend to overestimate their own capability. We apply human-calibrated weights throughout our scoring — the result is a more conservative, more accurate risk profile than tools that rely on AI-generated scores alone.
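The effect of human calibration can be illustrated with a minimal blending sketch. The 0.7 weight below is purely an illustrative assumption for this example, not the engine's proprietary value:

```python
def calibrated_exposure(human_rating: float, ai_rating: float,
                        human_weight: float = 0.7) -> float:
    """Blend a human annotator's exposure rating with an AI self-rating.

    Weighting toward the human rating yields a more conservative score,
    since AI self-ratings tend to run higher. The 0.7 default weight is
    illustrative only.
    """
    return human_weight * human_rating + (1 - human_weight) * ai_rating

# An AI self-rates a task as 0.9 exposed; a human annotator rates it 0.6.
# The blended score lands closer to the conservative human view.
score = calibrated_exposure(0.6, 0.9)  # 0.69
```

Any blend weighted toward the human annotation pulls the final score below the AI's self-assessment, which is the direction the study's findings support.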

Task-Level Decomposition · TLD

Why job titles are misleading.

The foundational insight of both our methodology and Eloundou et al. is that AI does not replace jobs — it replaces tasks. Two people with identical job titles can have wildly different AI exposure based on what they actually do day-to-day.

Task-Level Decomposition (TLD) is the process of breaking a role into discrete units of work and classifying each unit against three exposure categories:

Mechanical
Fully automatable
Rule-based, repetitive, pattern-matching tasks. AI can already execute these reliably. Examples: data entry, report generation, CRUD code, invoice processing.
Augmentable
AI-assisted, human-verified
Tasks requiring judgment, context, or oversight that AI can assist but not fully own. Examples: financial modelling review, code architecture, complex analysis.
Human-Centric
Human Alpha
Tasks where human judgment, relationships, and accountability are irreplaceable. Examples: stakeholder trust, leadership, ethical sign-off, crisis navigation.

Your risk score is determined by the proportion of your role that falls into Mechanical vs. Human-Centric categories — and by how rapidly the Mechanical boundary is expanding as AI capability advances.
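In code, the proportion logic reduces to counting category labels across a decomposed role. The task list and labels below are hypothetical examples drawn from the categories above, not output from our engine:

```python
from collections import Counter

# Hypothetical task decomposition for one role, classified against the
# Mechanical / Augmentable / Human-Centric categories described above.
TASKS = {
    "data entry": "mechanical",
    "invoice processing": "mechanical",
    "financial modelling review": "augmentable",
    "stakeholder trust": "human-centric",
    "leadership": "human-centric",
}

def mechanical_share(tasks: dict[str, str]) -> float:
    """Fraction of the role's tasks that fall in the Mechanical category."""
    counts = Counter(tasks.values())
    return counts["mechanical"] / len(tasks)

# 2 of 5 tasks are mechanical, so this role's mechanical share is 0.4.
share = mechanical_share(TASKS)
```

A higher mechanical share pushes the risk score up; a higher human-centric share pulls it down. The real engine also weights tasks by time spent and by how fast each category boundary is moving.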

Proprietary Scoring Engine

The Agentic Exposure Index.

The Agentic Exposure Index (AEI) is our proprietary risk metric. It measures exposure specifically to agentic AI — systems that can plan, reason, and execute multi-step workflows autonomously — not just the simpler pattern-matching AI of 2022.

Agentic AI is qualitatively different from earlier LLMs. It does not just answer questions. It can set goals, use tools, write and run code, navigate interfaces, and execute complete workflows end-to-end. The displacement risk for most white-collar roles comes from this agentic layer — and most published research predates it.

Classified · Proprietary Framework
Task-Level Decomposition Engine (TLD)
Agentic Exposure Index (AEI)
Human Alpha Calibration (HAC)
Resilience Pivot Mapping (RPM)
Temporal Horizon Scoring (THS)
On Human Alpha Calibration (HAC): Two people in the same job title do not have the same risk. Seniority, domain specificity, and the complexity of actual responsibilities all modulate raw task exposure. HAC is our multi-signal adjustment layer — it ensures that a VP-level professional with 15 years of specialised domain context scores differently from a junior hire in the same title. The specific weights and normalization logic are proprietary.

The AEI score is a 0–100 integer mapped to three risk tiers: low, medium, and high. Specific band thresholds are proprietary.
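The tier mapping itself is a simple banding function. The 40/70 boundaries below are illustrative placeholders only, since the actual thresholds are proprietary:

```python
def aei_tier(score: int) -> str:
    """Map a 0-100 AEI score to a risk tier.

    The 40 and 70 band boundaries are illustrative placeholders;
    the production thresholds are proprietary.
    """
    if not 0 <= score <= 100:
        raise ValueError("AEI score must be an integer from 0 to 100")
    if score < 40:
        return "low"
    if score < 70:
        return "medium"
    return "high"
```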

Data Sources

What we calibrate against.

Our scoring engine is calibrated against multiple independent data sources. No single source is sufficient — together, they give us a cross-validated picture of where AI capability currently sits, how fast it is advancing, and which tasks are first to fall.

Research
Eloundou et al. — "GPTs are GPTs" (Science, 2024)
19,265 individual task ratings across 923 U.S. occupations. Human-annotator and AI-self-rating comparison. One of several peer-reviewed sources used in our proprietary analysis.
Government
O*NET — Occupational Information Network (U.S. DOL)
The authoritative database of occupational task descriptions used as input to TLD. Updated continuously by the U.S. Department of Labor across 900+ occupations.
Government
Bureau of Labor Statistics (BLS) — Occupational Outlook Handbook
Employment projections and occupation-level automation probability estimates. Used to validate Temporal Horizon Scoring (THS) timelines.
Research
Anthropic Economic Index (March 2026)
Real observed AI usage patterns across professional tasks. Provides a ground-truth signal on which tasks AI systems are actually performing in production, not merely tasks they are theoretically capable of performing.
Market Data
Challenger, Gray & Christmas — Monthly Job Cut Reports
The industry standard for tracking announced layoffs. January 2026: 108,435 job cuts with AI cited as the #1 stated reason. Used to validate real-world displacement trajectories.
Limitations

What this isn't.

AI career risk analysis is inherently probabilistic. No methodology — however grounded in research — can predict with certainty which roles will be automated and when. The pace of AI development introduces genuine uncertainty. Economic, regulatory, and organisational factors also modulate outcomes in ways that are difficult to model.

Important Caveats

Our analysis represents a research-calibrated probability assessment based on current AI capabilities, published economic research, and observed deployment patterns — not a guarantee of any outcome.

The AEI score reflects task-level exposure, not your employability, performance, or career trajectory. High exposure does not mean you will lose your job. It means a significant portion of your current task stack is on an automation trajectory — and that strategic repositioning is warranted.

This report is intended as a strategic thinking tool, not professional career, financial, or legal advice.

Get Your Report

See where your role lands.

Free preview available instantly. Full 10-section intelligence report delivered to your inbox.

Analyse My Role →