What Do You Love Doing? What Do You Hate Doing?
Most AI rollouts fail the same way. Leadership announces “we’re implementing AI to improve efficiency.” Staff hear “we’re replacing you, and we’d like you to help.” Resistance isn’t irrational. It’s the correct response to a threat that nobody has told them isn’t one.
There’s a question that cuts through it. A developer put it simply: ask people what they love doing and what they hate doing. Then show them AI is coming for the second list, not the first.
That reframe isn’t a communications trick. It’s accurate. And the psychology of why it works tells you something important about how to deploy AI without destroying the trust you need to make it stick.
Why “Efficiency” Is a Threat
When leadership says “efficiency,” employees hear “fewer people doing the same work.” They’re not wrong to hear that: it’s been the pattern for decades of technology-driven restructuring. The word carries history.
But the deeper problem is identity. Most people define themselves partly through their work. “I’m the person who handles X.” When you announce a system that handles X, you haven’t just changed a process. You’ve told someone that part of their professional identity is now redundant. People don’t resist change because they’re afraid of technology. They resist because the framing tells them they’re about to lose something that matters to them.
This is loss aversion at organisational scale. The potential loss (my role, my expertise, my relevance) is felt more intensely than any potential gain. And the loss is immediate and concrete, while the gain is abstract and future. Humans discount future gains against present losses every time.
Why the Question Changes Everything
The love/hate question works because it reverses the loss frame. Instead of leadership deciding what AI replaces, the employee identifies it. The psychology shifts in three ways.
Agency. The employee is making the choice. “I hate chasing invoice follow-ups” is their complaint, voiced in their words. When AI takes over that task, it’s not replacement. It’s relief. The emotional register is completely different. You haven’t taken something from them. You’ve removed something they wanted gone.
Identity preservation. The things people love about their jobs are almost always the things AI is worst at. Relationship-building. Creative problem-solving. Judgement calls in ambiguous situations. Complex negotiations. Mentoring. The parts of work that make someone feel skilled, valued, and irreplaceable are the parts that genuinely are irreplaceable by AI or anything else. When you let people name what they love, they’re naming their professional identity. And then you get to tell them: that’s exactly what we want more of.
Concrete specificity. “AI will improve efficiency” is abstract. “AI will handle the three things you just told me you hate” is specific. Specificity reduces anxiety because it bounds the change. People can picture exactly what’s different and exactly what stays the same. The unknown shrinks. The known expands.
The Structural Coincidence That Makes This Work
This reframe isn’t just psychologically effective. It’s technically accurate, and that’s what makes it more than a management trick.
AI is genuinely good at repetitive documentation and data entry, scheduling and follow-up sequences, answering the same questions for the hundredth time, first drafts of structured content, and pattern recognition across large datasets.
AI is genuinely bad at building trust with another human, making judgement calls with incomplete information, creative problem-solving that requires real-world context, reading emotional dynamics in a room, and knowing when the rules don’t apply.
The first list is what most people hate about their jobs. The second list is what most people love. This isn’t a coincidence; it’s a structural feature of how current AI works. The overlap between “what AI is good at” and “what employees hate doing” is not 100%, but it’s high enough that the honest version of the conversation almost always lands better than the efficiency pitch. A workflow analysis makes this visible: recording a real workday reveals exactly which tasks fall on which list, and where the automation opportunities actually sit.
How This Plays Out in Practice
Trades. A plumber doesn’t love writing quotes at 9pm or sending “I’m on my way” texts to customers. He loves diagnosing the problem, fixing it, and knowing the homeowner trusts him. AI handles the quoting, scheduling, and notifications. The plumber gets his evenings back and spends more time doing the work that built his reputation.
Professional services. An accountant doesn’t love reformatting data from client spreadsheets or chasing missing receipts. He loves the advisory conversation where he helps a business owner understand their numbers. AI handles the data wrangling and follow-up. The accountant spends more time on the work clients actually value.
Every case follows the same pattern: AI absorbs the task the person already resents and frees time for the task the person already values. The resistance dissolves because there’s nothing to resist. Nobody fights to keep the admin they complain about. An AI opportunity assessment identifies these patterns across your whole team, not just one role.
The Error That Destroys Trust
The love/hate frame fails when leadership uses it as a surface-level exercise but then deploys AI in ways that contradict the answers.
If an employee says “I love client relationships and hate CRM data entry,” and then you deploy AI that automates client communications without their input, you’ve done the opposite of what the exercise promised. You’ve automated the thing they loved and left them with something else they’ll hate.
The question only works if the deployment matches the answers. The common failure mode: the technology team picks what to automate based on what’s technically easiest. Employees were asked what they hate, but the implementation targets what was cheapest to build. That mismatch creates a deeper betrayal than never asking at all, because now you’ve demonstrated that you heard them and ignored them.
The rule: automate what they said, not what’s convenient. If the two don’t align, explain why and renegotiate. Treat the love/hate answers as a design constraint, not a consultation exercise.
The Uncomfortable Truth for Leadership
The love/hate question works on employees. It also works on leadership, though the answers might be uncomfortable.
If a CEO is honest, some of what they “love doing” is also on the AI-replaceable list. Writing the weekly update email. Summarising board reports. Drafting communications. These aren’t the strategic, high-judgement tasks leaders tell themselves fill their days. They’re the tasks that feel productive without being genuinely strategic.
AI adoption forces an honest audit of where human value actually sits for everyone in the organisation, including the people commissioning the AI. The businesses that handle this well are the ones where leadership goes through the exercise first and shares their answers. “Here’s what I’m handing to AI so I can spend more time on X” models the behaviour and normalises the conversation.
The businesses that handle it badly are the ones where leadership exempts themselves from the same scrutiny they’re applying to everyone else.
If you’re about to introduce AI to your team and you’re bracing for pushback, the problem might be the pitch, not the people. We help businesses get the framing right before the deployment starts. Start with a conversation.