AI usage in your workflow
Show that you use AI tools (Copilot, ChatGPT/Claude, Cursor) as a productivity multiplier while staying the engineer in charge: faster boilerplate, tests, and refactors, but you review every line, verify correctness and security, never paste secrets or proprietary code into public tools, and own the output. The red flags are dismissing AI entirely or trusting it blindly.
This question is increasingly common. The interviewer is checking two things: that you're fluent with modern tooling, and that you exercise judgment — you're not pasting AI output blindly into production.
How to frame it: AI is a multiplier, you're still the engineer
Talk about where it helps and how you stay in control.
Where it speeds me up
- Boilerplate — scaffolding components, config, repetitive CRUD.
- Tests — generating test cases for a function, then reviewing for the cases it missed (see the sketch after this list).
- Refactors and migrations — mechanical transformations across files.
- Unfamiliar APIs — faster than docs-diving for a first pass.
- Rubber-ducking — explaining a bug to it often surfaces the answer.
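To make the test-review point concrete, here's a minimal sketch of what that pass looks like. Everything in it is hypothetical — `parseCsvNumbers` and the cases are illustrative, not from a real codebase. The AI draft covers the happy paths; review adds the edges that reveal surprising behavior:

```ts
import assert from "node:assert";

// Hypothetical function under test: parse "1,2,3" into numbers.
function parseCsvNumbers(input: string): number[] {
  return input.split(",").map((s) => Number(s.trim()));
}

// What an AI draft typically covers: the happy paths.
assert.deepStrictEqual(parseCsvNumbers("1,2,3"), [1, 2, 3]);
assert.deepStrictEqual(parseCsvNumbers(" 4 , 5 "), [4, 5]);

// What review adds: the edges the draft missed.
// Number("") is 0, so empty input silently becomes [0] -- likely a bug.
assert.deepStrictEqual(parseCsvNumbers(""), [0]);
// A blank field also coerces to 0 instead of failing loudly.
assert.deepStrictEqual(parseCsvNumbers("1,,2"), [1, 0, 2]);
```

In an interview, naming one of those missed edges ("empty input silently became a zero") is exactly the kind of concrete detail that lands.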
How I stay in control
- I review every line. AI output is a draft, not a commit. I read it like a PR from a junior — it's often subtly wrong, outdated, or over-engineered.
- I verify correctness and security — especially anything touching auth, input handling, or dependencies (it hallucinates package names; see the sketch after this list).
- I never paste secrets, credentials, customer data, or proprietary code into public AI tools — that's a data-leak risk and often a policy violation.
- I own the output. "The AI wrote it" is never an excuse for a bug.
- If I don't fully understand a piece of generated code, I work through it before it ships — otherwise I can't maintain or debug it.
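On the hallucinated-packages point, the cheapest guardrail is confirming a suggested dependency actually exists before installing it. A minimal sketch (the registry URL is npm's real public endpoint; the function name is mine):

```ts
// Sanity-check an AI-suggested dependency against the public npm registry.
// A 404 means the name was likely hallucinated (or typosquat bait).
async function packageExists(name: string): Promise<boolean> {
  const res = await fetch(
    `https://registry.npmjs.org/${encodeURIComponent(name)}`
  );
  return res.ok;
}

// A real package resolves; a plausible-sounding fake one should not.
console.log(await packageExists("react"));          // true
console.log(await packageExists("react-hooks-v9")); // almost certainly false
```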
STAR-ish example
On a recent migration I used Cursor to convert ~40 class components to hooks. It handled the mechanical 80% in an afternoon; I reviewed each diff and hand-fixed the lifecycle edge cases it got wrong (cleanup logic, refs). What would've been a week was two days — but I still read and understood every change.
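To show what "lifecycle edge cases" means in practice, here's a hedged sketch of the failure mode (the component is illustrative, not the actual migration code): the tool moves a subscription from componentDidMount into useEffect but drops the componentWillUnmount teardown, leaking listeners.

```tsx
import { useEffect, useState } from "react";

// Illustrative component: tracks window width via a resize listener.
function WindowWidth() {
  const [width, setWidth] = useState(window.innerWidth);

  useEffect(() => {
    const onResize = () => setWidth(window.innerWidth);
    window.addEventListener("resize", onResize);
    // The line AI drafts tend to drop: the class version's
    // componentWillUnmount logic belongs in the effect's cleanup.
    // Without it, every mount stacks another listener.
    return () => window.removeEventListener("resize", onResize);
  }, []);

  return <span>{width}px</span>;
}
```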
The two red flags to avoid
- Dismissing it — "I don't use AI, I write everything myself" reads as out of touch.
- Trusting it blindly — "I just accept the suggestions" reads as someone who'll ship hallucinated, insecure code.
The hireable answer is in the middle: leverage with judgment.
Senior framing
The senior take: AI changes the speed of producing code but not the engineer's responsibility for correctness, security, and maintainability. You use it to eliminate drudgery and move faster on the mechanical 80%, while spending your saved time on the 20% that needs real judgment — architecture, edge cases, trade-offs. And you're crisp about the data-handling boundaries.
Follow-up questions
- What's something AI-generated code got subtly wrong that you caught?
- What are your rules for what you will and won't paste into a public AI tool?
- How do you make sure you still understand code AI wrote for you?
Common mistakes
- Saying you don't use AI at all — reads as out of touch.
- Saying you accept suggestions without review — reads as reckless.
- Not mentioning the data/secrets boundary.
- No concrete example of AI being wrong and you catching it.
Edge cases
- Company policy may forbid certain AI tools — show you'd check and follow policy.
- AI hallucinating npm packages is a real supply-chain risk worth naming.
Real-world examples
- Copilot for autocomplete, Claude/ChatGPT for debugging and explanations, Cursor for multi-file refactors, AI-generated test scaffolding.