TL;DR: Hiding AI use is a signal, and it points to one of two things: a rule you’re breaking or shame about dependency. If you’re hiding it, ask: am I breaking a rule, or am I afraid of being perceived as incapable? Both answers reveal something important about your relationship with AI.


The Short Version

You’re at work. You use Claude to help write code, design architecture, brainstorm features. Nobody knows. Or maybe one person knows, but you haven’t told the team. You certainly haven’t told your manager. If someone looked over your shoulder and saw your browser, you’d quickly minimize it.

This hiding pattern is widespread. Most builders using AI at work are doing it partly in the open, partly covertly. The covert part varies: you might tell some people but not others, share some use but not all, or maintain complete secrecy.

The hiding itself is worth examining. It’s not random. It’s a signal.


The Two Reasons for Hiding

When you hide AI use, one of two things is happening:

Type 1: Policy Violation. Your workplace has explicitly or implicitly forbidden or restricted AI use. You use it anyway, and you hide because you’re breaking a rule. This is straightforward: there’s a boundary, you’ve crossed it, and you’re concealing the violation.

The violation might be defensible (the rule is unreasonable and you’re right to subvert it) or indefensible (the rule is prudent and you’re simply ignoring it). Either way, the hiding is about rule-breaking.

Type 2: Shame About Dependency. Your workplace hasn’t forbidden AI use. But you’re hiding because you’re embarrassed. You’re afraid colleagues will perceive you as:

  • Not capable enough to work without augmentation
  • Lazy (letting AI do your thinking)
  • Less skilled or intelligent than you want to appear
  • Dependent on or addicted to a tool

This hiding is about managing image. You’re not breaking a rule; you’re hiding evidence that you believe would damage your professional image.

Type 1 and Type 2 look similar from the outside. You hide in both cases. But they reveal different things.

💡 Key Insight: If you’re hiding AI use where it’s permitted, shame is the driver. Shame reveals you don’t actually believe what you’re doing is legitimate.


When Shame About AI Use Is Rational

Here’s the crucial distinction: sometimes the shame is a legitimate signal. You should be uncomfortable.

Scenarios where hiding AI use makes sense:

  • You’re using AI for tasks you were hired to do yourself (research, writing, architecture design) without disclosing the substitution to your employer
  • You’re shipping work with minimal review, relying on “AI generated it” as your quality assurance
  • You’re misrepresenting turnaround time (telling a client you did work in 2 hours when you really prompted it in 5 minutes, concealing the AI involvement)
  • You’re claiming credit for AI-generated work in a context where disclosure matters

In these scenarios, the shame is telling you the truth: you’re being dishonest. The discomfort is warranted because the use is ethically questionable.

The solution isn’t to hide better. It’s to stop doing the thing that makes you want to hide.

💡 Key Insight: When the use is genuinely dishonest, shame is doing its job: it’s your conscience speaking, not social anxiety.


When Shame About AI Use Is Defensive

But often, the shame is not about actual dishonesty. It’s about something else: fear of being perceived as inadequate.

Scenarios where shame is defensive:

  • You use AI to supplement your work, but you’re embarrassed colleagues will think you can’t work alone
  • You use AI to save time, but you’re afraid people will perceive you as lazy or less skilled
  • You use AI because it’s genuinely useful, but you’re afraid of social judgment about the tool’s legitimacy
  • You’ve started using AI recently and you’re embarrassed that you were “behind the curve” until now

In these scenarios, the shame isn’t about actual wrongdoing. It’s about status anxiety. You’re hiding a legitimate behavior because you’re afraid of social consequences.

The distinction matters because the solution is different.

If the shame is about dishonesty, you need to stop the dishonest behavior.

If the shame is about status anxiety, you need to examine the anxiety. Is it rooted in actual professional consequences (disclosure could harm you) or in image management (you care what people think)?


The Shame-Dependency Connection

Here’s where this connects to addiction: shame drives hiding, and hiding enables escalation.

If you’re hiding AI use, you’re not having honest conversations about it. You’re not saying, “I’m using AI for half my work” or “I’ve become dependent on this tool for decision-making.” Instead, you’re managing your image.

This prevents accountability. Without outside perspective, your use escalates. You rationalize increasingly heavy AI dependence because no one is questioning it. The hiding creates a blind spot.

Additionally, shame maintains the pattern. The more ashamed you feel, the more you hide. The more you hide, the more alone with the pattern you become. The more alone, the harder it is to see the pattern clearly. Shame becomes the mechanism that traps you.

This is familiar from other addictions: alcoholics hide their drinking, which prevents intervention, which enables escalation. The hiding and shame are part of the maintenance cycle.

The escalation spiral with shame:

  1. Light AI use → mild shame → light hiding
  2. Increased use → increased shame → increased hiding
  3. Heavy use → strong shame → complete secrecy
  4. Completely hidden use → no feedback → rapid escalation

At this point, the user is deeply dependent and completely isolated with the behavior. Recovery is much harder.

💡 Key Insight: Shame about AI use is either a signal to stop (if the use is dishonest) or a signal that you’re addicted (if the use is legitimate but you’re hiding escalating dependence).


The Professional Consequence Question

Before you assume the shame is irrational, you should ask: would disclosure actually have negative professional consequences?

In some workplaces, disclosing AI use is genuinely risky. Leadership might:

  • Question your capabilities
  • Reduce your project autonomy
  • Question your judgment about when to use AI
  • Use it as evidence that you’re not “really” doing the work

These are real workplace risks. In these cases, hiding might be rational self-protection, not evidence of dependency.

But you should be honest about this. Is hiding actually protective, or are you assuming negative consequences that might not materialize?

Many builders hide AI use not because disclosure would harm them, but because they anticipate judgment. The anticipation often exceeds reality.

Test it: consider telling one trusted colleague. Not in a defensive way (“I use AI and there’s nothing wrong with it”). Just factually: “I use Claude for X and Y. It helps with Z. Thought you should know.” See what happens. Often the response is neutral or positive; many of your colleagues are using AI too.

If disclosure actually would harm you, you’ve learned something true about your workplace. You can recalibrate your strategy based on real risk, not anticipated risk.

If disclosure doesn’t harm you, you’ve disproven your anxiety. That’s valuable information.


What This Means For You

If you’re hiding AI use because of an actual policy violation: Make a decision. Either accept the rule and stop, or challenge the rule openly. Don’t hide indefinitely. That path leads to higher risk (getting discovered) and deeper shame.

If you’re hiding AI use because of shame about dependency: This is worth examining. The shame might be telling you the truth: you’ve become dependent, and you know it’s not a position you want to be in. The hiding prevents you from addressing it. Consider:

  • Am I actually overusing this?
  • Would acknowledging the use to one person help me see the pattern more clearly?
  • What am I afraid of? What’s the actual worst-case?

If you’re hiding AI use because you fear workplace judgment: Test your assumptions. Pick a low-stakes disclosure. See what happens. You might discover the fear is worse than the reality.

If you’re using AI legitimately but you’ve been hiding it, now’s the time to normalize it: Talk about your AI use. Mention it in standups. Reference it in pull requests (“Claude helped me think through this architecture”). This removes the power of the secrecy and the shame. You’ll also discover whether there are actual concerns worth addressing, or whether you were just managing anxiety.


Key Takeaways

  • Hiding AI use is a signal: it points to either a policy violation or dependency anxiety
  • If the shame is about dishonesty (misrepresenting work), the solution is to stop the dishonest behavior
  • If the shame is about status anxiety, examine whether disclosure would actually harm you or whether you’re anticipating judgment
  • Shame maintains hidden AI use patterns; hiding enables escalation
  • Normalizing AI use disclosure prevents the isolation that deepens dependency

Frequently Asked Questions

Q: Isn’t it reasonable to hide AI use if my workplace disapproves? A: Reasonable in the moment, unsustainable long-term. You’re working in an environment that’s adversarial to your productivity. That’s a workplace problem, not an AI problem.

Q: What if I disclosed and got judged anyway? A: Then you have real information about your workplace. Use it to decide whether that’s where you want to work and how openly you want to operate going forward.

Q: How do I know if my AI use is actually dishonest? A: Ask: if my manager knew exactly how much I’m using this and for what, would they think I’m being deceptive about my capabilities or work? If yes, you’re hiding dishonesty. If no, you’re hiding out of shame, not deception.

