You probably have dead code in your codebase right now. Not the obvious kind. Not unused imports or unreachable branches after an early return. The kind that looks completely normal, compiles fine, and passes every check your editor throws at it. But when your application runs, Python never touches it.
What it looks like
Here's a simplified version of something you'd see in any large Python project:
```python
# base.py
class BaseProcessor:
    def validate(self, data):
        if not isinstance(data, dict):
            raise TypeError("Expected dict")
        return data

    def process(self, data):
        validated = self.validate(data)
        return self.transform(validated)
```

```python
# enhanced.py
class EnhancedProcessor(BaseProcessor):
    def validate(self, data):
        data = super().validate(data)
        if "version" not in data:
            raise ValueError("Missing version field")
        return data
```

If EnhancedProcessor is what gets instantiated at runtime, editing BaseProcessor.validate still has an effect: it changes the behavior of super().validate(data) inside EnhancedProcessor. That's fine.
But say there's a third class, ProductionProcessor(EnhancedProcessor), that also defines validate. Now EnhancedProcessor.validate is dead code for ProductionProcessor instances. Your edits to EnhancedProcessor's validation logic do nothing in production.
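To make the shadowing concrete, here's a sketch of what such a third class might look like. Its contents are hypothetical (the article doesn't show ProductionProcessor), but the key detail is that it redefines validate without delegating up, so EnhancedProcessor's "version" check never runs for its instances:

```python
class BaseProcessor:
    def validate(self, data):
        if not isinstance(data, dict):
            raise TypeError("Expected dict")
        return data

class EnhancedProcessor(BaseProcessor):
    def validate(self, data):
        data = super().validate(data)
        if "version" not in data:
            raise ValueError("Missing version field")
        return data

class ProductionProcessor(EnhancedProcessor):
    def validate(self, data):
        # Hypothetical: re-implements the dict check itself and never
        # calls super().validate(), so EnhancedProcessor.validate is
        # dead code for ProductionProcessor instances.
        if not isinstance(data, dict):
            raise TypeError("Expected dict")
        return data

proc = ProductionProcessor()
# No ValueError, even though "version" is missing:
print(proc.validate({"payload": 1}))  # {'payload': 1}
```

If ProductionProcessor called super().validate(data) instead, the chain would stay live; the dead code only appears when a subclass re-implements the behavior wholesale.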
Why your linter won't catch this
Linters check syntax and style; type checkers like mypy add type correctness. Neither computes the method resolution order across your class hierarchy. Python 3.12 introduced the @override decorator (typing.override), which mypy can verify, but it only tells you whether a method is an override. It doesn't tell you that a method is shadowed by something three levels away in the chain.
"Go to Definition" in your editor shows you where a method is defined. It doesn't tell you which definition Python will actually call for a given instance type. That's a question about the MRO, and no standard editor feature answers it.
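You can answer the MRO question by hand at a REPL, though nothing surfaces it while you're reading code. A sketch, using minimal stand-ins for the classes above:

```python
class BaseProcessor:
    def validate(self, data):
        return data

class EnhancedProcessor(BaseProcessor):
    def validate(self, data):
        return super().validate(data)

class ProductionProcessor(EnhancedProcessor):
    def validate(self, data):
        return data

proc = ProductionProcessor()

# The MRO is the order Python searches classes during attribute lookup:
print([cls.__name__ for cls in type(proc).__mro__])
# ['ProductionProcessor', 'EnhancedProcessor', 'BaseProcessor', 'object']

# Which definition actually runs for this instance type:
print(type(proc).validate.__qualname__)  # ProductionProcessor.validate
```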
Why your tests might miss it too
This is the sneaky part. Say you write a test for BaseProcessor.validate:
```python
def test_base_validate_rejects_non_dict():
    proc = BaseProcessor()
    with pytest.raises(TypeError):
        proc.validate("not a dict")
```

This test passes. But it only exercises BaseProcessor.validate when BaseProcessor is instantiated directly. In your actual application, if EnhancedProcessor or ProductionProcessor is the class being created, BaseProcessor.validate might never be the version that runs. Your test covers code that's dead in production.
Tests that directly instantiate base classes can create false confidence. The method "works," but only for an instance type that production never creates.
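One mitigation, sketched here with pytest's parametrize (the class list is an assumption; in a real suite it would enumerate whatever production actually instantiates), is to run the same contract test against every concrete class, so the winning method is always the one exercised:

```python
import pytest

class BaseProcessor:
    def validate(self, data):
        if not isinstance(data, dict):
            raise TypeError("Expected dict")
        return data

class EnhancedProcessor(BaseProcessor):
    def validate(self, data):
        data = super().validate(data)
        if "version" not in data:
            raise ValueError("Missing version field")
        return data

# Same contract, every concrete class production may create:
@pytest.mark.parametrize("cls", [BaseProcessor, EnhancedProcessor])
def test_validate_rejects_non_dict(cls):
    with pytest.raises(TypeError):
        cls().validate("not a dict")
```

If a subclass redefines validate in a way that breaks the contract, the parametrized test catches it; a test pinned to the base class alone cannot.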
The cost adds up
Method shadowing isn't usually a single dramatic bug. It's a slow accumulation of wasted effort:
- Debugging a "fix" applied to a shadowed method. The code looks correct, the test passes, but the bug persists because the running version lives in a different class.
- Adding defensive checks to a method that another class already overrides. You're duplicating logic the framework handles upstream.
- Code review cycles spent discussing code that never executes. Nobody in the review knows it's dead, so everybody treats it as live.
In a codebase with deep inheritance (five, six, seven levels), these situations are common and hard to spot by reading code alone. You'd have to trace the MRO manually or instrument the runtime with logging.
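The manual trace the paragraph above describes can be scripted. Here's a small helper (a sketch, not PRISM's implementation) that walks a class's __mro__ and reports methods a base class defines but a class earlier in the lookup order shadows:

```python
def find_shadowed(cls):
    """Return (owner, name, winner) triples for methods defined on a
    base class that a class earlier in the MRO also defines."""
    shadowed = []
    mro = cls.__mro__
    for base in mro[1:]:
        for name, attr in vars(base).items():
            if not callable(attr) or name.startswith("__"):
                continue  # skip non-methods and dunders
            # The winner is the first class in the MRO defining this name.
            winner = next(c for c in mro if name in vars(c))
            if winner is not base:
                shadowed.append((base.__name__, name, winner.__name__))
    return shadowed

class BaseProcessor:
    def validate(self, data):
        return data

class EnhancedProcessor(BaseProcessor):
    def validate(self, data):
        return super().validate(data)

class ProductionProcessor(EnhancedProcessor):
    def validate(self, data):
        return data

for owner, name, winner in find_shadowed(ProductionProcessor):
    print(f"{owner}.{name} is shadowed by {winner}.{name}")
# BaseProcessor.validate is shadowed by ProductionProcessor.validate
# EnhancedProcessor.validate is shadowed by ProductionProcessor.validate
```

This handles one class at a time; the hard part in a real codebase is running it across every class and every hierarchy, which is what a workspace-wide scan automates.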
Finding it with workspace scanning
PRISM's workspace scan walks every class in your project and checks the method resolution status across the full hierarchy. It categorizes each method into one of four states: owns (unique to this class), overrides (wins over a base), overridden (wins here but a descendant redefines it), and shadowed (a base class version wins).
```
Workspace scan results (overridden mode):

BaseProcessor.validate
  overridden by EnhancedProcessor.validate
  File: src/base.py:12

EnhancedProcessor.validate
  overridden by ProductionProcessor.validate
  File: src/enhanced.py:8

BaseProcessor.process
  overridden by ProductionProcessor.process
  File: src/base.py:18

Found: 3 overridden methods across 4 classes
```

Each result links back to the source file. Click through to see the full MRO chain and understand which version wins. No print statements, no runtime instrumentation.
What to do about it
Not all dead code is a bug. Sometimes a base class method exists as a default, and subclasses are expected to override it. The point isn't to eliminate every override. It's to know about them. If you're spending time editing or debugging a method, you should know upfront whether that method actually runs for the instances your code creates.
Run a workspace scan. If nothing surprises you, great. If something does, you just saved yourself an afternoon.
Try PRISM
See method resolution in real time.