Why Job Title Is the Wrong Unit of Analysis

Type your job title into most AI risk tools and they return a score. "Software Engineer: Medium risk." "Financial Analyst: High risk." "Marketing Manager: Medium-high risk." These scores feel authoritative, but they are methodologically flawed — and the flaw is not a small one. It goes to the core of what AI displacement actually is.

AI does not automate job titles. It automates tasks. A task is a specific, discrete unit of work: drafting a report, reconciling a ledger, reviewing a contract clause, scheduling a meeting, building a financial model, writing a function, interviewing a witness. Job titles aggregate many different tasks — and the distribution of those tasks varies enormously between two people who hold the same title.

Eloundou et al. (2024) in Science rated 19,265 individual work tasks across 923 occupations for LLM exposure. The research explicitly operates at the task level — because that is where automation actually happens. Scoring by job title collapses the variance that matters most.

Consider two financial analysts at the same firm, at the same seniority level. The first spends 70% of their day building spreadsheet models from structured data sources, generating monthly performance reports, and compiling market data summaries. The second spends 70% of their day interviewing management teams, synthesizing qualitative signals about business strategy, and advising portfolio managers on nuanced sector dynamics. The first analyst's AEI score might be 74. The second's might be 31. Same title, same employer, same credentials — a 43-point difference in AI displacement risk.

The same divergence appears across virtually every knowledge work occupation. Two lawyers, two engineers, two marketing managers, two consultants — the title tells you almost nothing. The task composition tells you almost everything. A rigorous AI career risk assessment must start with task-level decomposition. Any tool that skips this step is not measuring your risk. It is measuring the average of a category you may or may not resemble.

Step 1: Decompose Your Role into Tasks (TLD)

Task-Level Decomposition (TLD) is the foundational input for any accurate AI risk assessment. The process is more structured than it sounds, and doing it honestly is more revealing than most people expect.

Start by listing every significant activity you perform in your role: not the tasks in your job description, but the tasks that actually fill your working hours. Include recurring deliverables, routine processes, periodic projects, and the informal activities that take real time but often go unlisted.

For each task, estimate the percentage of your total working time it represents. The percentages should sum to 100. Be honest: if you spend 40% of your week in meetings and only 20% writing reports, that distribution matters more than the role description that says "produces analytical reports and presentations."

Then rate each task against the three-tier AI replaceability framework: fully automatable (AI can own the task end-to-end), augmentable (AI accelerates the task but a human stays in the loop), or human-centric (the task depends on judgment, relationships, or physical presence that AI cannot replicate).
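The decomposition above can be sketched as a small data structure with a consistency check. This is a minimal illustration, not the tool's implementation: the task names, time shares, and tier labels are hypothetical, and only the rules stated in the text (every task gets a tier, shares sum to 100) are enforced.

```python
# Hypothetical sketch of a Task-Level Decomposition (TLD).
# Task names, shares, and tier labels are illustrative, not prescribed.

TIERS = {"fully_automatable", "augmentable", "human_centric"}

def validate_tld(tasks):
    """Check a TLD is well-formed: known tiers, shares summing to 100."""
    for name, share, tier in tasks:
        if tier not in TIERS:
            raise ValueError(f"unknown tier for {name!r}: {tier}")
        if not 0 < share <= 100:
            raise ValueError(f"bad time share for {name!r}: {share}")
    total = sum(share for _, share, _ in tasks)
    if abs(total - 100) > 1e-9:
        raise ValueError(f"time shares sum to {total}, expected 100")
    return True

# Example TLD for the report-heavy analyst described earlier.
analyst_tld = [
    ("build spreadsheet models",    40, "fully_automatable"),
    ("monthly performance reports", 20, "fully_automatable"),
    ("compile market summaries",    10, "fully_automatable"),
    ("internal meetings",           20, "augmentable"),
    ("advise portfolio managers",   10, "human_centric"),
]
validate_tld(analyst_tld)  # raises ValueError if the breakdown is inconsistent
```

The sum-to-100 check is the honesty mechanism: if your listed tasks only account for 70% of your week, the missing 30% is usually where the unlisted informal work lives.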

Step 2: Calculate Your Exposure Score (AEI)

The Agentic Exposure Index is the weighted composite score that results from your task-level breakdown. Each task receives an exposure score based on its replaceability category and specific characteristics, weighted by the proportion of your working time it represents.

A task rated as fully automatable contributes a high AEI component — typically 80 to 98 depending on specifics. An augmentable task contributes 40 to 70. A human-centric task contributes 5 to 20. The weighted sum produces your overall AEI, on a 0–100 scale.
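The weighted sum the text describes is straightforward to state precisely. In this minimal sketch, the per-task exposure scores are assumptions chosen within the ranges given above (80-98, 40-70, 5-20); the real tool scores each task on its specifics.

```python
# Sketch of the weighted AEI composite: time-share-weighted average of
# per-task exposure scores. Example scores are assumed, not prescribed.

def aei(tasks):
    """tasks: list of (time_share_percent, exposure_score_0_to_100)."""
    total_share = sum(share for share, _ in tasks)
    return sum(share * score for share, score in tasks) / total_share

# The report-heavy analyst from the earlier example, with assumed scores.
tasks = [
    (40, 90),  # spreadsheet models from structured data: fully automatable
    (30, 85),  # recurring reports and summaries: fully automatable
    (20, 55),  # meetings and coordination: augmentable
    (10, 15),  # advising portfolio managers: human-centric
]
print(round(aei(tasks), 1))  # → 74.0
```

Note how the 70% of time spent on fully automatable work dominates the composite: shifting even 20 points of time share from the first two lines to the last one would move the score substantially.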

The Human Alpha Coefficient (HAC) is applied as an adjustment to the base AEI. It accounts for three variables that affect real-world risk beyond pure task composition. Seniority: more senior professionals have more human-judgment buffers built into their day-to-day role, even when the task names are similar. Domain depth: specialists with 10+ years in a narrow domain carry tacit knowledge that is harder to replicate than generalist knowledge in the same category. Context complexity: roles in highly regulated, high-stakes, or politically complex environments carry inherent human-judgment requirements that the task description alone does not fully capture.

The HAC can move your AEI by 8 to 15 points relative to the raw task-weighted score. Two professionals with identical task compositions but different seniority, domain depth, and context complexity will produce meaningfully different final AEI scores. This is by design — it reflects a real difference in risk.
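The text gives the HAC's magnitude (8 to 15 points) but not its formula, so the blend below is a hypothetical sketch under stated assumptions: each factor is rated 0.0-1.0, the three are averaged, and since all three are described as protective, the adjustment is subtracted from the raw score.

```python
# Hypothetical HAC sketch. The linear blend and the subtraction are
# assumptions; only the 8-15 point range comes from the text.

def hac_adjusted_aei(raw_aei, seniority, domain_depth, context_complexity):
    """Each factor is a 0.0-1.0 rating; higher means more protection."""
    factor = (seniority + domain_depth + context_complexity) / 3
    adjustment = 8 + 7 * factor          # spans the stated 8-15 range
    return max(0.0, raw_aei - adjustment)

# Identical task composition (raw AEI 74.0), different role context:
junior = hac_adjusted_aei(74.0, seniority=0.2, domain_depth=0.1,
                          context_complexity=0.2)
senior = hac_adjusted_aei(74.0, seniority=0.9, domain_depth=0.8,
                          context_complexity=0.7)
```

The point of the sketch is the property, not the numbers: two identical task mixes produce different final scores once role context is rated, which is the "by design" behavior the text describes.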

The AEI is not a permanent number. It is a snapshot of your current task composition and role context. As you shift the tasks you perform — by choice or by organizational change — your AEI changes. This is why the assessment includes not just your current score but a projected trajectory based on your industry's observed automation adoption curve.

Step 3: Identify Your Human Alpha

Once you have your task-level breakdown, the most strategically important output is not your overall AEI score — it is the identification of your Human Alpha tasks. These are the tasks in your role where the AEI contribution is below 20: the activities that AI cannot replicate because they require genuine human judgment, presence, relationship, or physical execution.

Your Human Alpha is your structural protection. It is the set of capabilities that will remain valuable regardless of how powerful AI systems become in the near to medium term. It is also — and this is the actionable insight — the set of capabilities you should be deliberately deepening and expanding.

Most knowledge workers can identify two to four genuine Human Alpha tasks in their role when they do an honest TLD. The goal is to understand which of these tasks you are currently best at, which you have the most room to develop, and how you can increase the proportion of your working time spent on them. This is not abstract career advice. It is a concrete restructuring of your task portfolio — the same kind of portfolio optimization that financial advisors apply to investment exposure, applied instead to your professional activities.
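Pulling the Human Alpha tasks out of a scored TLD is a simple filter on the below-20 threshold defined above. The task names and scores in this sketch are illustrative; sorting by current time share surfaces the biggest levers first.

```python
# Sketch: extract Human Alpha tasks (per-task exposure below 20) from a
# scored TLD. Names and scores are illustrative assumptions.

def human_alpha_tasks(scored_tasks, threshold=20):
    """scored_tasks: list of (name, time_share_percent, exposure_score)."""
    alpha = [t for t in scored_tasks if t[2] < threshold]
    # Largest current time share first: the biggest levers to expand.
    return sorted(alpha, key=lambda t: t[1], reverse=True)

scored = [
    ("build financial models",     35, 88),
    ("draft recurring reports",    20, 92),
    ("advise portfolio managers",  15, 12),
    ("interview management teams", 15, 10),
    ("mentor junior analysts",     10, 15),
    ("schedule and coordinate",     5, 85),
]
for name, share, score in human_alpha_tasks(scored):
    print(f"{name}: {share}% of time, exposure {score}")
```

In this example the three protected tasks account for only 40% of the week, which is exactly the portfolio-rebalancing opportunity the paragraph above describes.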

Practically, increasing your Human Alpha task weight often means: volunteering for client-facing work that more junior colleagues avoid, leading complex projects that require judgment and organizational navigation, developing deeper domain expertise in the specific area where your field intersects with hard-to-automate complexity, and building relationships with stakeholders who value your judgment rather than just your output volume.

Get your full Human Alpha breakdown

Your personalized report identifies every task in your role, scores it for AI exposure, and maps your Human Alpha tasks explicitly — along with a 6-month plan for deepening them.

Get Your Report — $39.99 →

Step 4: Map Your Adaptation Timeline

The 2027 inflection point is the target date that the AEI framework uses as its primary planning horizon. By 2027, agentic AI systems capable of executing multi-step workflows autonomously are projected to be deployed at significant scale in the sectors currently undergoing restructuring. The roles that face the highest risk are those where the full workflow — not just individual tasks — can be owned by an AI system end-to-end.

The adaptation timeline maps your specific tasks to a three-horizon framework. The 12-month horizon covers tasks that are automatable today and will be actively automated in your sector within the next year — these require immediate action. The 18-month horizon covers tasks where AI assistance is increasing and workflow redesign is underway — these require preparation and skill-building. The 36-month horizon covers tasks where automation is technically plausible but adoption is slower due to regulatory, organizational, or complexity barriers — these require monitoring rather than immediate action.
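One way to sketch the three-horizon mapping is to bucket tasks by their exposure scores. The bucketing rule here is an assumption for illustration: the text describes the horizons in terms of sector deployment trends and adoption barriers, which a real assessment would weigh alongside the score.

```python
# Hypothetical horizon mapping. Bucketing purely by exposure score is a
# simplification; the text also factors in sector adoption dynamics.

def horizon(exposure_score):
    """Map a task's exposure score to an action horizon (months)."""
    if exposure_score >= 80:
        return 12   # automatable today: immediate action
    if exposure_score >= 40:
        return 18   # AI assistance increasing: prepare and re-skill
    return 36       # adoption barriers: monitor

tasks = {
    "draft recurring reports": 92,
    "client workshops":        55,
    "expert testimony":        10,
}
plan = {name: horizon(score) for name, score in tasks.items()}
```

The output is a per-task calendar, not a single verdict: the same role can contain 12-month, 18-month, and 36-month items simultaneously.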

The most valuable output of this timeline exercise is not the list of tasks at risk. It is the clarity it creates about where to invest your time and development energy right now. Career adaptation in an AI-accelerating environment is not reactive — it is not about responding after your role has been automated. It is about making deliberate choices now about task composition and skill development, before the automation wave reaches your specific workflow.

Your report includes a month-by-month action calendar: specific, concrete steps for each of the 12 months following your assessment, calibrated to your AEI score, your task composition, and your industry's adoption curve.

What a Good AI Risk Assessment Covers

A rigorous AI career risk assessment should deliver substantive analysis across ten distinct areas. Understanding what these sections cover helps you evaluate any assessment tool — including this one — against what it actually provides.

  1. Executive Risk Summary: Your overall AEI score with context — what it means, what it doesn't mean, and what the immediate priority actions are.
  2. Task-Level Decomposition (TLD): The complete breakdown of your role by task, with individual exposure scores and time-weighting for each activity.
  3. Automation Timeline 2026–2029: Which of your tasks automate when, based on current deployment trends in your sector.
  4. Industry Context: How your AEI compares to others in your field and what sector-specific dynamics affect your risk profile.
  5. Skills Gap Analysis: The gap between your current skills and the skills most protective against AI displacement in your domain.
  6. Role Evolution Mapping: How your specific role is likely to change over the next three years — not disappear or survive intact, but transform — and what that transformation looks like.
  7. 6-Month Roadmap: Prioritized, concrete actions for the next six months to shift your task composition and build Human Alpha skills.
  8. Month-by-Month Action Calendar: Week-level specificity for the first 12 months, calibrated to your score and timeline.
  9. Career Pivot Options: If your AEI is above 70, an analysis of adjacent roles with lower exposure that are accessible from your current skill set.
  10. Final Verdict: A direct, unhedged assessment of your risk level, the most important factor driving it, and the single highest-leverage action available to you.

Red Flags in AI Risk Assessments to Avoid

Not all AI career risk tools are built on rigorous methodology. Several patterns should cause skepticism about whether an assessment tool is actually measuring what it claims to measure.

Generic job-title scoring with no task breakdown. If the tool asks only for your job title and returns a score, it is measuring the average of a category. It is not measuring you. Two people with the same title can have AEI scores 40+ points apart.

Scores that don't vary between people with the same title. A good test: ask two colleagues who share your job title to use the same tool. If they get the same score despite having different day-to-day responsibilities, the tool is not performing task-level analysis.

No timeline specificity. "Your job is at risk from AI" without a timeline is not actionable. The difference between immediate risk, 2-year risk, and 5-year risk is the difference between needing to act this quarter and having time to plan a deliberate transition.

No actionable roadmap. Risk identification without adaptation guidance is incomplete. The purpose of understanding your AEI is to take specific actions that change it — or that position you to thrive despite it. An assessment that ends with a score and no next steps has delivered incomplete value.

Binary verdicts without nuance. "Your job is safe" or "your job is doomed" are both almost always wrong. The actual picture is: these specific tasks in your role are highly exposed, these are protected, and here is what you can do to shift the balance.

How to Use Your AEI Score

Your AEI score is a signal for action, not a verdict. Here is how to interpret it correctly.

An AEI of 70 or above is a strong signal that a meaningful portion of your current task composition is highly exposed to near-term automation. It does not mean your job disappears tomorrow. It means that if you do nothing, a significant fraction of what you do today will either be automated or dramatically compressed by AI assistance within 2–3 years. The appropriate response is not panic — it is a deliberate, structured effort to shift your task composition toward Human Alpha activities and build skills in AI oversight and integration.

An AEI between 40 and 70 indicates moderate exposure. Your role contains a meaningful mix of automatable and protected tasks. The risk is real but not immediate. The appropriate response is to understand which specific tasks in your role are driving the score, invest in deepening your Human Alpha capabilities, and develop AI fluency so that you are the kind of worker who augments AI rather than competing with it.

An AEI below 40 indicates low near-term exposure. Your role is substantially weighted toward tasks that are structurally protected — physical work, complex judgment, direct human relationships. This does not mean you are immune to change; it means you have time and structural protection to adapt thoughtfully rather than urgently.
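The three bands above translate into a small interpretation helper. The band edges follow the text (70 and above high, 40-70 moderate, below 40 low); the one-line summaries are condensed paraphrases, not the tool's wording.

```python
# Band interpretation per the text: >=70 high, 40-70 moderate, <40 low.
# Summary strings are condensed paraphrases for illustration.

def interpret(aei_score):
    if aei_score >= 70:
        return "high exposure: shift task mix toward Human Alpha now"
    if aei_score >= 40:
        return "moderate exposure: find the driving tasks, build AI fluency"
    return "low near-term exposure: adapt deliberately, not urgently"

for s in (82, 55, 28):
    print(s, "->", interpret(s))
```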


Get your personalized 10-section assessment

Enter your role and task composition. Receive a complete AEI analysis, automation timeline, Human Alpha breakdown, and month-by-month adaptation roadmap — delivered to your inbox in under 10 minutes. One-time founding price of $39.99.

Start Your Assessment →