Database Indexing Mistakes That Slow Down Queries
You’ve added indexes to your database tables. Queries should be fast now, right? Except they’re not. Your carefully crafted indexes sit unused while the database does full table scans, and you’re left wondering what went wrong.
Database indexing seems straightforward in theory. In practice, subtle mistakes render indexes worthless. Understanding what breaks indexing helps you avoid the common traps.
Index Column Order Matters More Than You Think
Composite indexes care deeply about column order, but many developers treat them like unordered sets. An index on (last_name, first_name, city) works very differently than (city, first_name, last_name).
The leftmost prefix rule determines which queries can use the index. An index on (A, B, C) works for queries filtering on A, or A and B, or A and B and C. But it won’t help queries filtering only on B, or only on C, or B and C together.
If you query by city frequently but rarely use last_name alone, you need city as the first column. Most developers build indexes matching the order columns appear in their table definition or their SELECT clause, neither of which relates to how the database uses the index.
Query patterns should drive index column order. Put the most selective column first? Put the column you filter on most often first? It depends on your specific queries. There’s no universal rule, which is why blind index creation often fails.
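The leftmost prefix rule is easy to verify yourself. Here is a minimal sketch using SQLite (table and index names are invented for illustration; other databases behave the same way but print their plans differently):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (last_name TEXT, first_name TEXT,"
            " city TEXT, email TEXT)")
con.execute("CREATE INDEX idx_name_city ON people (last_name, first_name, city)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the strategy
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Filtering on the leftmost column can use the composite index...
by_last = plan("SELECT * FROM people WHERE last_name = 'Ngo'")
# ...but filtering on a non-leading column cannot, so the planner scans
by_city = plan("SELECT * FROM people WHERE city = 'Lagos'")
print(by_last)  # SEARCH ... USING INDEX idx_name_city (last_name=?)
print(by_city)  # SCAN ...
```

The same experiment, repeated with your own tables, tells you quickly whether a planned composite index matches your actual filters.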
Function Calls on Indexed Columns
This one kills performance constantly, and developers rarely notice they’re doing it. If you have an index on email, but you query for WHERE LOWER(email) = '[email protected]', the database can’t use the index.
Applying a function to an indexed column means the database needs to compute the function result for every row before it can compare values. That requires scanning all rows, making your index useless.
The same problem hits dates constantly. WHERE DATE(created_at) = '2026-03-22' prevents index usage on created_at. So does WHERE YEAR(created_at) = 2026.
Rewrite these queries without functions on the indexed column. Instead of LOWER(email), store emails in lowercase and query directly. Instead of DATE(created_at) = '2026-03-22', use created_at >= '2026-03-22' AND created_at < '2026-03-23'. Alternatively, many databases can index the expression itself: PostgreSQL supports expression indexes and MySQL 8.0 adds functional indexes, so an index on LOWER(email) becomes an option.
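You can watch a function call disable an index in a few lines of SQLite (names invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (email TEXT, created_at TEXT, name TEXT)")
con.execute("CREATE INDEX idx_email ON users (email)")
con.execute("CREATE INDEX idx_created ON users (created_at)")

def plan(sql):
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Wrapping the column in a function hides it from the index: full scan
wrapped = plan("SELECT * FROM users WHERE LOWER(email) = '[email protected]'")
# Querying the bare column uses the index
direct = plan("SELECT * FROM users WHERE email = '[email protected]'")
# The half-open range rewrite keeps created_at indexable
ranged = plan("SELECT * FROM users WHERE created_at >= '2026-03-22'"
              " AND created_at < '2026-03-23'")
print(wrapped)  # SCAN ...
print(direct)   # SEARCH ... USING INDEX idx_email (email=?)
print(ranged)   # SEARCH ... USING INDEX idx_created
```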
Type Mismatches Break Everything
When your query compares an integer column to a string, or a string column to an integer, the database can’t use indexes efficiently. Implicit type conversion happens, which means the database transforms every value before comparing.
If user_id is an integer and you query WHERE user_id = '12345', most databases cope: the string literal is converted to a number once and the index still works. The dangerous direction is the reverse. If invoice_no is a string column and you query WHERE invoice_no = 12345, the database (MySQL is a well-known offender here) must convert every stored value to a number before comparing, and the index goes unused.
ORMs and dynamic languages make this worse by silently converting types. You write clean code that looks fine, but the SQL that actually runs includes type conversions that break indexing.
String collation adds another layer. Comparing a case-sensitive column with a case-insensitive comparison might work, but it might also prevent index usage depending on your database and collation settings.
OR Conditions Are Index Killers
Queries with OR often can’t use indexes effectively. WHERE status = 'active' OR status = 'pending' might use an index on status, but WHERE status = 'active' OR created_at > '2026-01-01' likely won’t use indexes on either column efficiently.
The database needs to find rows matching the first condition, rows matching the second condition, then combine results. If the conditions use different columns, it can’t scan a single index.
Rewriting with UNION sometimes helps. Split the OR query into separate queries and combine results. It sounds slower but often runs faster by allowing each part to use appropriate indexes.
IN clauses work better than multiple ORs when filtering the same column. WHERE status IN ('active', 'pending') uses indexes more reliably than WHERE status = 'active' OR status = 'pending'.
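The UNION rewrite can be checked the same way. A sketch in SQLite (names invented; note that some databases, SQLite included, can sometimes combine multiple indexes for an OR on their own, but the rewrite makes the index use explicit and portable):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT,"
            " created_at TEXT, total REAL)")
con.execute("CREATE INDEX idx_status ON orders (status)")
con.execute("CREATE INDEX idx_created ON orders (created_at)")

def plan(sql):
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Each UNION branch filters on a single column, so each branch
# can use its own index instead of forcing a combined scan
rewritten = plan(
    "SELECT * FROM orders WHERE status = 'active'"
    " UNION"
    " SELECT * FROM orders WHERE created_at > '2026-01-01'"
)
print(rewritten)
```

Both index names should show up in the plan, one per branch; the UNION’s deduplication step is the price you pay for that.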
Not Indexing Foreign Keys
Every foreign key column should probably have an index, but many developers skip this. When you join tables, the database needs to match foreign key values between tables. Without an index on the foreign key, that means a full scan of the child table for every matching row in the parent table.
Tables with foreign keys that aren’t indexed cause slow joins even when everything else is optimized. The issue compounds as tables grow. A missing index on a foreign key in a 100-row table barely matters. The same missing index in a 10-million-row table destroys performance.
Some databases automatically create indexes on foreign keys. Many don’t. Know your database’s behavior and add foreign key indexes manually when needed.
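The lookup a join performs per parent row is easy to isolate. A SQLite sketch (schema invented for illustration) showing the same query before and after indexing the foreign key:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
            " customer_id INTEGER REFERENCES customers(id), total REAL)")

def plan(sql):
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# This is the per-parent-row lookup that a join performs
lookup = "SELECT * FROM orders WHERE customer_id = 42"
before = plan(lookup)  # no index on the foreign key: full scan
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(lookup)   # indexed: direct search
print(before)
print(after)
```

SQLite does not create foreign key indexes automatically; MySQL’s InnoDB does, PostgreSQL does not, which is exactly why checking your own database’s behavior matters.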
Over-Indexing Problems
More indexes aren’t always better. Each index slows down writes because the database must update every index when data changes. A table with 10 indexes on a write-heavy workload spends more time maintaining indexes than serving queries.
Redundant indexes waste space and update time without adding value. If you have an index on (last_name, first_name) and another on just (last_name), the second index is usually redundant. The composite index already supports queries filtering by last_name alone.
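You can confirm an index is redundant before deleting it for good. In this SQLite sketch (names invented), dropping the single-column index leaves last_name lookups served by the composite one:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (last_name TEXT, first_name TEXT, email TEXT)")
con.execute("CREATE INDEX idx_last ON people (last_name)")
con.execute("CREATE INDEX idx_last_first ON people (last_name, first_name)")

def plan(sql):
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

con.execute("DROP INDEX idx_last")  # remove the suspected-redundant index
# The composite index still serves last_name-only lookups
fallback = plan("SELECT * FROM people WHERE last_name = 'Ngo'")
print(fallback)
```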
Unused indexes accumulate over time as query patterns change. Someone added an index for a report that ran daily two years ago. The report was replaced, but the index remains, slowing down every INSERT and UPDATE.
Regular index usage reviews help identify unused indexes. Most databases track whether indexes are actually being used. Drop indexes that haven’t been accessed in months.
Partial/Filtered Indexes Ignored
Many databases support partial indexes that only include rows matching certain conditions. If 95% of your rows have status = 'archived' but you only query active and pending records, index only the 5% that matter.
Partial indexes are smaller, faster to scan, and cheaper to maintain. But the query’s WHERE clause must include, or provably imply, the index’s filter condition, or the planner won’t consider the index at all.
Developers often create a partial index but forget the filter in their queries, so the database can’t use it. Or they create a full index when a partial index would work better, wasting space and performance.
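SQLite supports partial indexes, so the match-the-filter requirement can be demonstrated directly (names invented; repeating the index’s predicate in the query guarantees the planner can match it, since whether the planner can prove the implication from status = 'pending' alone varies by database and version):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY,"
            " status TEXT, body TEXT)")
# Index only the minority of rows the hot queries actually touch
con.execute("CREATE INDEX idx_live ON tickets (status)"
            " WHERE status != 'archived'")

def plan(sql):
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# Repeating the index's filter term lets the planner match the partial index
matched = plan(
    "SELECT * FROM tickets WHERE status = 'pending' AND status != 'archived'"
)
print(matched)
```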
Statistics Out of Date
Indexes work well when the database’s query planner has accurate statistics about data distribution. When statistics are stale, the planner makes poor decisions about which indexes to use.
Auto-vacuum and auto-analyze help, but they’re not instant. After bulk data loads, massive deletes, or significant data changes, statistics might lag. The query planner thinks your table has 1,000 rows when it actually has 1 million, or thinks a column has unique values when they’re heavily skewed.
Manually running ANALYZE or UPDATE STATISTICS after major data changes ensures the query planner has current information. It takes seconds and can dramatically improve query plans.
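In SQLite the collected statistics are visible directly: ANALYZE populates the sqlite_stat1 table with a row count and selectivity estimate per index. A sketch with invented names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT)")
con.execute("CREATE INDEX idx_kind ON events (kind)")
con.executemany("INSERT INTO events (kind) VALUES (?)",
                [("click",)] * 900 + [("purchase",)] * 100)
con.execute("ANALYZE")  # refresh the planner's statistics

# sqlite_stat1 records total rows and average rows per distinct key
stats = con.execute(
    "SELECT tbl, idx, stat FROM sqlite_stat1 WHERE tbl = 'events'"
).fetchall()
print(stats)
```

PostgreSQL exposes the equivalent through pg_stats after ANALYZE; SQL Server through UPDATE STATISTICS and DBCC SHOW_STATISTICS.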
Looking at Actual Query Plans
Most indexing problems become obvious when you look at the query execution plan. The database tells you exactly which indexes it considered, which it used, and why it chose full table scans.
Many developers add indexes hoping for improvement but never verify whether the database actually uses them. The execution plan shows the truth. If your new index doesn’t appear in the plan, it’s not helping.
Explain plans reveal whether the database thinks your table has 100 rows or 100 million, whether it expects your filter to match 1% or 90% of rows, and whether it chose an index scan or table scan and why.
Reading execution plans isn’t intuitive at first, but it’s the most direct way to understand what’s actually happening. Every major database has an EXPLAIN command or equivalent. Use it before and after adding indexes to verify they help.
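The before-and-after check is mechanical enough to script. A minimal SQLite sketch (names invented) of the verification loop: capture the plan, add the index, capture it again, and confirm the index actually appears:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY,"
            " account TEXT, amount REAL)")

def plan(sql):
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM payments WHERE account = 'acct-9'"
before = plan(query)  # expect a table scan
con.execute("CREATE INDEX idx_account ON payments (account)")
after = plan(query)   # the new index should now appear in the plan
print(before)
print(after)
```

If idx_account never shows up in the after plan, the index is pure write overhead and should go.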
Indexing isn’t magic. It’s a specific optimization for specific query patterns. Getting it right means understanding how your database uses indexes, how your queries prevent index usage, and what your actual performance bottlenecks are.