The summer before my junior year, I interned at Uber on the ML Platform team. I was working on infrastructure that had to integrate with a large, mature ML framework. If you've worked on a codebase like that, you know what it means: deep class hierarchies, multiple inheritance, and layers of reliability and abstraction that have accumulated over years of iteration.
The feedback that changed how I think about code
A few weeks in, I was in a debug session with my mentor and a couple of other engineers. I'd been writing new components for the framework and, wanting to write careful code, I had added input validation and defensive checks all over the place. Format verification, null guards, type assertions. What I thought was solid engineering practice.
My mentor pulled up the class hierarchy and pointed something out: several of the methods I'd written were overriding implementations already defined in base classes further up the chain. The validation checks I was adding were already handled upstream. The framework guaranteed correct input format at a higher level. My checks were redundant at best and misleading at worst.
The other part was the try-except problem. I had been wrapping everything in exception handlers because that felt like the responsible thing to do. When bugs came up, though, I couldn't tell which handler was catching what. I'd add more logging, more try-except blocks, more print statements. The codebase got noisier, not clearer. I was spending more time tracing execution paths than solving the actual problem.
The code compiled fine. The linter was quiet. My editor showed no warnings. But I was writing redundant code that duplicated work the framework already did, and I had no way to see that without reading thousands of lines across dozens of files.
I started seeing this pattern everywhere
Once I knew what to look for, it kept coming up. Methods that existed in a subclass but never ran because a base class version took priority. Defensive checks duplicated across multiple levels of a hierarchy. Override chains where nobody was sure which version actually executed for a given instance.
This wasn't a skill issue. In a codebase with five, six, seven levels of inheritance and liberal use of multiple inheritance, no one holds the full resolution order in their head. You trace through the MRO manually or add print statements and run the code. Neither option scales.
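The effect shows up even in a tiny hierarchy. The classes below are invented for illustration, but the point holds: with multiple inheritance, the method that actually runs depends on Python's C3 linearization, not on which file you happen to be editing.

```python
# Illustrative diamond hierarchy: which validate() runs for Worker?
class Base:
    def validate(self):
        return "Base.validate"

class MetricsMixin(Base):
    pass  # no validate() of its own

class LoggingMixin(Base):
    def validate(self):
        return "LoggingMixin.validate"

class Worker(MetricsMixin, LoggingMixin):
    pass

# Python's C3 linearization decides the lookup order:
print([cls.__name__ for cls in Worker.__mro__])
# ['Worker', 'MetricsMixin', 'LoggingMixin', 'Base', 'object']

# MetricsMixin comes first but defines nothing, so the next hit wins:
print(Worker().validate())  # LoggingMixin.validate
```

With five or six classes this is checkable by hand; with a real framework hierarchy, inspecting `__mro__` for every method stops being practical.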
The question I couldn't let go of
After the internship, the problem kept nagging at me. What I had needed wasn't more logging or more try-except blocks. I needed to see, at a glance, which version of a method was going to run when the code executed. Was my version the one Python would call, or was something else in the chain taking priority?
I started reading about method resolution across different languages. Python's C3 linearization. Java's interface default method rules. C++ virtual dispatch. Every language with inheritance has some version of this problem, and none of the developer tools I could find surfaced it while you were actually writing code.
The 200ms constraint
I started prototyping during my junior year at Minerva. The core idea: parse the code statically, compute the MRO from source without running anything, and show the result in real time as the cursor moves.
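A minimal sketch of that core idea, with invented function names (this is not PRISM's actual implementation): recover each class's base list from the AST, then run the C3 merge over names instead of live class objects, so nothing gets imported or executed.

```python
# Sketch: compute an MRO statically from source text via `ast`.
# Handles simple-name bases only; real code also needs attribute
# bases (pkg.Base), cross-file resolution, and error recovery.
import ast

def class_bases(source):
    """Map class name -> list of base-class names."""
    bases = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            bases[node.name] = [b.id for b in node.bases
                                if isinstance(b, ast.Name)]
    return bases

def c3(name, bases):
    """C3 linearization over the statically recovered hierarchy."""
    parents = bases.get(name, [])
    if not parents:
        return [name]
    seqs = [c3(p, bases) for p in parents] + [list(parents)]
    result = [name]
    while any(seqs):
        # Pick the first head that appears in no other sequence's tail.
        for seq in seqs:
            if not seq:
                continue
            head = seq[0]
            if not any(head in s[1:] for s in seqs):
                break
        else:
            raise TypeError("inconsistent hierarchy")
        result.append(head)
        seqs = [[c for c in s if c != head] for s in seqs]
    return result

src = """
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass
"""
print(c3("D", class_bases(src)))  # ['D', 'B', 'C', 'A']
```

Because the linearization runs over names from the parse tree, it works on code that would be slow, unsafe, or impossible to import, which is what makes the real-time constraint reachable.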
I set a hard constraint from day one. 200 milliseconds, from cursor move to panel update. A tool you have to run manually is a tool you forget to use. It had to feel instant. That constraint shaped every design decision: a persistent subprocess instead of spawning Python per request, AST caching keyed by file modification time, debounced cursor events to keep the backend from getting flooded.
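The AST-caching piece, for instance, can be sketched in a few lines. The names here are hypothetical, but the mechanism is the one described above: key the cache on the file's modification time and re-parse only when it changes.

```python
# Sketch: mtime-keyed AST cache, so repeated cursor moves over an
# unchanged file never pay the parse cost twice.
import ast
import os

_cache = {}  # path -> (mtime_ns, parsed module)

def cached_ast(path):
    mtime = os.stat(path).st_mtime_ns
    entry = _cache.get(path)
    if entry is not None and entry[0] == mtime:
        return entry[1]  # cache hit: file unchanged since last parse
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    _cache[path] = (mtime, tree)
    return tree
```

Debouncing plays the complementary role on the editor side: cursor events are collapsed so the backend sees one request per pause, not one per keystroke.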
Four states, not two
Early prototypes had a binary classification: "runs" or "doesn't run." That wasn't enough. When I tested against real codebases, I found four distinct situations:
- Owns: Only this class defines it. It runs.
- Overrides: A base class also defines it, but this version wins.
- Overridden: This version runs for direct instances, but a descendant class redefines it. Partially dead code.
- Shadowed: A base class's version wins. This code never runs.
The distinction between "overridden" and "shadowed" matters because the fix is different. If your method is overridden, maybe that's intentional and you just need to be aware of it. If it's shadowed, you're writing dead code and should probably be editing a different file.
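One small hierarchy can exhibit all four states at once. The class names below are invented for illustration:

```python
class Base:
    def validate(self):           # Overridden: runs for direct Base
        return "Base.validate"    # instances, but descendants redefine it
    def report(self):             # Owns: the only definition; always runs
        return "Base.report"

class FormatMixin(Base):
    def validate(self):           # Shadowed for Pipeline instances:
        return "FormatMixin.validate"  # CheckMixin comes first in the MRO

class CheckMixin(Base):
    def validate(self):           # Overrides: the version Pipeline runs
        return "CheckMixin.validate"

class Pipeline(CheckMixin, FormatMixin):
    pass

p = Pipeline()
print(p.validate())  # CheckMixin.validate
print(p.report())    # Base.report
```

If `FormatMixin` is only ever used through `Pipeline`, its `validate` is dead code, and nothing in the file itself tells you that.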
Where PRISM is today
PRISM now supports 10 languages and offers workspace-wide dead-code scanning, interactive graph visualization, and CodeLens annotations in the editor gutter. But at its core, it still answers the same question I had that summer: when your cursor is on a method, does that method actually run?
Every feature traces back to that debug session. The scan modes, the radial view, the method status colors. I keep asking the same thing: what would have helped me that afternoon? What would have made the class hierarchy visible instead of something I had to reconstruct in my head?
PRISM is the tool I wish I'd had on day one of that internship.
Try PRISM
See method resolution in real time.