I don't use LeetCode, and you probably shouldn't either
In a famous (and likely apocryphal) experiment, monkeys were sprayed with water whenever they reached for bananas. As new monkeys were introduced one by one, the existing group would stop them from taking the bananas to avoid getting sprayed. Each new monkey learned not to touch the bananas without experiencing the spray themselves. Eventually, all original monkeys were replaced, resulting in a group that enforced a rule they didn't understand the reason for.
If you're studying computer science, you've probably heard of LeetCode — the platform where developers solve programming problems to prepare for technical interviews. The common advice you'll hear is along the lines of "Complete 150 medium problems before your next interview" or "Make sure you master DFS, BFS, dynamic programming, and graph algorithms!"
I've observed many students spending countless hours "grinding" LeetCode problems, which often amounts to merely memorizing solutions. Yet, when faced with algorithmic questions in actual interviews, they still struggle and feel disappointed with their performance. Their typical response? To double down on memorizing more problems or working through various "curated" lists that supposedly teach fundamental concepts better.
This cycle of ineffective preparation mirrors the monkey experiment — we've created a culture where developers mindlessly follow interview preparation advice, passing it on to others without questioning whether it actually works. Like the monkeys who never experienced the water spray themselves, many continue to promote LeetCode grinding simply because "that's what everyone does".
The situation is made worse by an entire industry built around selling interview prep courses and "insider knowledge". Social media is flooded with tech influencers (who shall not be named) promising to teach you the "secret patterns" to "crack FAANG interviews" — all for just $499. These courses often repackage the same LeetCode-focused approach, adding to the illusion that memorizing problem patterns is the key to success. Many of these course creators have minimal real interviewing experience themselves — they're simply profiting off anxiety about the interview process.
Understanding through problem solving
Consider the following (textbook) problem:
There's a staircase with `n` stairs. To climb these stairs, you can either:
- Take a small step, moving up by 1 stair, or
- Take a big step, moving up by 2 stairs.
How many ways are there to climb to the top? Call this function `f(n)`.
To begin, let's start with an easier version of this problem:
- Suppose the "flight of stairs" is just a single stair. Then, the solution is obvious — there's only 1 way: `f(1) = 1`.
- Suppose there's now 2 stairs. Then you can either take 2 small steps, or take 1 big step, which is 2 ways in total: `f(2) = 2`.
The fun begins when there's 3 stairs. You now have 2 choices to begin with — you can take a small step or a big step. Notice that no matter which choice you pick, you end up in one of the easier scenarios above — taking a small step brings you to a set of 2 stairs, while taking a big step brings you to a set of 1 stair. The key idea here is that the decision you made before doesn't matter — no matter which steps you took in the past, you always have the same options to choose from in the future. Finding the answer is then just a matter of adding up the two easier scenarios: `f(3) = f(2) + f(1) = 2 + 1 = 3`.
We can apply the same reasoning to larger numbers. Repeating with 4 stairs — you can take a small step or a big step, ending up in the situation with 3 stairs or 2 stairs, respectively, so `f(4) = f(3) + f(2) = 3 + 2 = 5`.
Expressed in Python:
```py
def f(n):
    if n == 1:
        return 1
    if n == 2:
        return 2
    return f(n - 1) + f(n - 2)
```
However, if you run this code with large values of `n`, you'll find it takes a very long time to finish. To see why, let's add a print statement:
```py
def f(n):
    print('n = ', n)
    if n == 1:
        return 1
    if n == 2:
        return 2
    return f(n - 1) + f(n - 2)
```
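For example, here's the trace printed by calling `f(5)` (a small value I picked just for illustration):

```
n =  5
n =  4
n =  3
n =  2
n =  1
n =  2
n =  3
n =  2
n =  1
```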
You'll notice that the same values of `n` show up again and again; the function keeps recomputing answers it has already worked out. Since the answer for a given `n` never changes, we can cache the results:
```py
cache = {}

def f(n):
    if n in cache:
        return cache[n]
    if n == 1:
        return 1
    if n == 2:
        return 2
    ans = f(n - 1) + f(n - 2)
    cache[n] = ans
    return ans
```
This is the second key idea — since we know that the same subproblems appear multiple times, we can cache their results instead of recomputing them. This transforms our solution from taking exponential time to linear time. This pattern — identifying overlapping subproblems and finding ways to avoid redundant computation — is a fundamental concept in dynamic programming. It's far more valuable to understand why this optimization works than to memorize that "the climbing stairs problem uses dynamic programming".
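The same caching idea can also be written bottom-up, building from the smallest staircases instead of recursing down from `n`. Here's a minimal sketch of that version (the name `climb_ways` is mine, not part of the problem):

```py
def climb_ways(n):
    # Build the answer up from f(1) and f(2), reusing each smaller
    # result exactly once -- linear time, no recursion needed.
    if n <= 2:
        return n
    prev, curr = 1, 2  # f(1) and f(2)
    for _ in range(3, n + 1):
        prev, curr = curr, prev + curr
    return curr
```

Both versions rest on the same observation: the answer for `n` stairs is assembled from the answers for smaller staircases.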
TIP
What would you do if you could take a very big step — climbing up 3 stairs at once?
Learning from real scenarios
Here's another (textbook) problem that you might have run into:
You've decided to take the ultimate course: MATH131. However, that course has some requirements — you have to take MATH120, MATH121, and MATH122 first! But each of those has its own requirements:
- MATH131 requires MATH120, MATH121, MATH122
- MATH122 requires MATH111, MATH112
- MATH121 requires MATH111
- MATH120 requires MATH110, MATH111
- MATH112 requires MATH110
- MATH111 requires MATH110
- MATH110 has no requirements
What order should you take these courses in?
We can start by writing a simple program:
```python
def take_course(course):
    print(f'I took {course}!')

take_course('MATH131')
```
Unfortunately, this doesn't help much. We just took a course without meeting its requirements. Let's try and fix it by taking the requirements first:
```python
def take_course(course):
    for requirement in requirements(course):
        take_course(requirement)
    print(f'I took {course}!')

take_course('MATH131')
```
Details
I've omitted the hardcoded `requirements` function, but here's the full source:
```python
def requirements(course):
    if course == 'MATH131':
        return ['MATH120', 'MATH121', 'MATH122']
    if course == 'MATH122':
        return ['MATH111', 'MATH112']
    if course == 'MATH121':
        return ['MATH111']
    if course == 'MATH120':
        return ['MATH110', 'MATH111']
    if course == 'MATH112':
        return ['MATH110']
    if course == 'MATH111':
        return ['MATH110']
    if course == 'MATH110':
        return []
    return []
```
Let's run it and see what happens:
```
I took MATH110!
I took MATH110!
I took MATH111!
I took MATH120!
I took MATH110!
I took MATH111!
I took MATH121!
I took MATH110!
I took MATH111!
I took MATH110!
I took MATH112!
I took MATH122!
I took MATH131!
```
We've taken the same classes multiple times! Although that might be good for learning, it wouldn't be the best use of our time. Let's add some code to fix this:
```python
already_taken = set()

def take_course(course):
    if course in already_taken:
        return
    for requirement in requirements(course):
        take_course(requirement)
    print(f'I took {course}!')
    already_taken.add(course)

take_course('MATH131')
```
Now the output looks better:
```
I took MATH110!
I took MATH111!
I took MATH120!
I took MATH121!
I took MATH112!
I took MATH122!
I took MATH131!
```
We now have a valid order to take all these courses. By doing this, we've implemented topological sort, treating courses as nodes in a graph and requirements as edges. Note that the terms "graph", "nodes", and "edges" weren't necessary at all — they're only a way to generalize this idea to problems with the same underlying structure in a different situation. You could have called them "courses and prerequisites" or "tasks and dependencies" and solved the problem just the same; the terminology itself isn't what matters.
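To see how little the names matter, here's the same routine sketched with generic "tasks and dependencies" instead of courses; the `dependencies` dictionary below is a made-up example, not part of the course problem:

```python
def finish(task, dependencies, done, order):
    # Finish every dependency before the task itself -- the same
    # logic as take_course, just wearing different names.
    if task in done:
        return
    for dep in dependencies.get(task, []):
        finish(dep, dependencies, done, order)
    done.add(task)
    order.append(task)

dependencies = {'deploy': ['build', 'test'], 'test': ['build'], 'build': []}
done, order = set(), []
finish('deploy', dependencies, done, order)
print(order)  # ['build', 'test', 'deploy']
```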
Just do it
If you've played The Best Programming Game, you already know that the most effective way to learn programming is simply to build real things.
For those who still want structured programming challenges, Advent of Code offers a fantastic alternative to LeetCode. It presents engaging puzzles that feel like real problem-solving rather than pattern matching, and each problem builds naturally on fundamental programming concepts. The community aspect and seasonal nature make it feel more like a celebration of programming than a grinding exercise.
The real value in learning algorithms comes from understanding how they solve actual problems you encounter. When you're building a course registration system and realize you need to handle prerequisites, or developing a path-finding feature for a game, the motivation to learn these concepts comes naturally. You're not memorizing solutions to artificial puzzles — you're developing intuition by solving real problems.
This approach is exactly how we learned the dynamic programming and topological sort ideas above. Instead of starting with definitions and pattern recognition, we explored concrete problems and discovered the underlying principles organically. The best way to internalize these ideas isn't through grinding similar problems, but through applying them in different contexts and understanding why they work.