SQL Server Performance Tuning: Practical Tips That Actually Work

Battle-tested SQL Server performance tips from years of tuning enterprise databases, focusing on the changes that make the biggest difference.

Dev Team Solutions · 7 min read

Performance tuning SQL Server is one of those skills where experience genuinely counts. You can read all the documentation Microsoft publishes, and it will help, but nothing replaces having stared at thousands of execution plans and learned to recognise the patterns that cause problems.

What follows are the techniques I reach for first when a database is not performing as it should. These are not obscure tricks — they are the fundamentals that solve the majority of real-world performance issues.

Start With the Execution Plan

Before changing anything, look at the execution plan. I cannot stress this enough. Tuning without reading the plan is guesswork.

In SQL Server Management Studio, press Ctrl+M to include the actual execution plan, then run your query. The actual plan shows you what really happened, not what the optimiser predicted would happen. The difference matters.

What to look for:

  • Table scans and clustered index scans on large tables. These mean SQL Server is reading every row. Sometimes that is unavoidable, but often it indicates a missing index.
  • Key lookups. These appear when a non-clustered index finds the rows but then has to go back to the clustered index to fetch additional columns. A few key lookups are fine. Thousands are a problem.
  • Large estimated vs actual row counts. When these diverge significantly, the optimiser has chosen a bad plan based on incorrect assumptions. This usually points to stale statistics or parameter sniffing.
  • Sort and hash match operators. These are memory-hungry. If you see them spilling to tempdb (indicated by a warning icon), you have a problem.
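You can also capture the actual plan from T-SQL, which is handy when working outside Management Studio. A minimal sketch (the query being profiled is a placeholder; substitute your own):

SET STATISTICS XML ON;

SELECT TOP (100) OrderId, OrderDate   -- any query you want to profile
FROM dbo.[Order];

SET STATISTICS XML OFF;

The plan is returned as an XML result alongside the query results, and most tools render it clickable.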

Index Analysis: Finding What Is Missing and What Is Wasted

SQL Server tracks index usage statistics and missing index suggestions internally. These are invaluable.

Find Unused Indexes

Indexes that are maintained on every write but never used for reads are pure overhead. This query identifies them:

SELECT
    OBJECT_NAME(i.object_id) AS TableName,
    i.name AS IndexName,
    i.type_desc AS IndexType,
    ius.user_seeks,
    ius.user_scans,
    ius.user_lookups,
    ius.user_updates
FROM sys.indexes i
INNER JOIN sys.dm_db_index_usage_stats ius
    ON i.object_id = ius.object_id
    AND i.index_id = ius.index_id
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    AND i.type_desc = 'NONCLUSTERED'   -- exclude heaps and clustered indexes
    AND i.is_primary_key = 0           -- never drop constraint-backed indexes
    AND i.is_unique_constraint = 0
    AND ius.database_id = DB_ID()
    AND ius.user_seeks = 0
    AND ius.user_scans = 0
    AND ius.user_lookups = 0
    AND ius.user_updates > 0
ORDER BY ius.user_updates DESC;

A word of caution: these statistics reset when the SQL Server service restarts. Make sure you are looking at data from a representative period before dropping anything.
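A quick way to check how long those counters have been accumulating is to look at the instance start time:

SELECT sqlserver_start_time
FROM sys.dm_os_sys_info;

If the instance restarted last night, the usage statistics tell you almost nothing.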

Find Missing Indexes

SQL Server records index suggestions based on queries it has processed:

SELECT
    OBJECT_NAME(mid.object_id) AS TableName,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.avg_user_impact,
    migs.user_seeks
FROM sys.dm_db_missing_index_details mid
INNER JOIN sys.dm_db_missing_index_groups mig
    ON mid.index_handle = mig.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats migs
    ON mig.index_group_handle = migs.group_handle
WHERE mid.database_id = DB_ID()
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;

Do not blindly create every index this suggests. Review each one. Check whether an existing index could be extended with an INCLUDE column instead. Think about the write overhead. But do take the high-impact suggestions seriously — they are based on real query patterns.
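As a sketch of what acting on a suggestion looks like: a row with an equality column of CustomerId and included columns of OrderDate and Total (hypothetical values, matching the example table used later in this article) would translate into roughly:

CREATE NONCLUSTERED INDEX IX_Order_CustomerId
ON dbo.[Order] (CustomerId)
INCLUDE (OrderDate, Total);

Before running something like this, check sys.indexes for an existing index on CustomerId that could simply gain the INCLUDE columns instead.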

Statistics Maintenance

Outdated statistics are one of the most common causes of poor query plans, and one of the easiest to fix.

SQL Server uses statistics to estimate how many rows a query will return, which directly influences the execution plan chosen. When statistics are stale, the estimates are wrong, and wrong estimates lead to bad plans.

By default, SQL Server auto-updates statistics only after a substantial fraction of a table's rows have changed. Under the traditional threshold (roughly 20% of rows, plus 500), a table with ten million rows needs about two million changes before statistics update automatically. SQL Server 2016 and later, at compatibility level 130 or higher, use a dynamic threshold that triggers sooner on large tables, but it still pays to be proactive. Stale statistics mean a lot of bad plans in the meantime.

The fix is straightforward. Schedule a maintenance job that updates statistics regularly:

EXEC sp_updatestats;

For more control, update statistics on specific tables with a full scan:

UPDATE STATISTICS dbo.[Order] WITH FULLSCAN;

I run sp_updatestats nightly on most production databases and full scans on the busiest tables weekly.
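To see whether a table's statistics are actually stale before forcing an update, sys.dm_db_stats_properties shows the last update time and how many modifications have accumulated since (table name from the example above):

SELECT
    s.name AS StatName,
    sp.last_updated,
    sp.rows,
    sp.modification_counter
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE s.object_id = OBJECT_ID(N'dbo.[Order]');

A large modification_counter relative to rows is a strong hint that the next query against the table is working from bad estimates.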

Parameter Sniffing: The Silent Performance Killer

Parameter sniffing is the behaviour where SQL Server creates an execution plan based on the parameter values used in the first execution of a query, then reuses that plan for all subsequent executions regardless of the parameter values.

This is usually a good thing. But when the data distribution is skewed — say, one customer has a million orders and most have fewer than a hundred — the plan optimised for the million-row customer is terrible for the hundred-row customer, and vice versa.

Signs of parameter sniffing problems:

  • A stored procedure that is sometimes fast and sometimes slow with no apparent pattern
  • Performance that dramatically improves after recompiling a procedure
  • Execution plans that look absurd for the parameters being passed

Solutions, in order of preference:

  1. OPTIMIZE FOR UNKNOWN hint. This tells SQL Server to generate a plan based on average statistics rather than the specific parameter value:

SELECT OrderId, OrderDate, Total
FROM dbo.[Order]
WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR UNKNOWN);

  2. Recompile hint on specific statements. This generates a fresh plan every time, which has a CPU cost but produces a plan tailored to the actual parameter values:

SELECT OrderId, OrderDate, Total
FROM dbo.[Order]
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);

  3. Plan guides for queries you cannot modify (third-party application code, for example).
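The plan-guide option deserves a sketch, because the syntax is easy to get wrong. Assuming the parameterised statement above is submitted by an application you cannot change (guide name and parameter type are illustrative), a guide can attach the hint externally:

EXEC sp_create_plan_guide
    @name = N'PG_Order_OptimizeForUnknown',
    @stmt = N'SELECT OrderId, OrderDate, Total
FROM dbo.[Order]
WHERE CustomerId = @CustomerId',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = N'@CustomerId INT',
    @hints = N'OPTION (OPTIMIZE FOR UNKNOWN)';

Note that @stmt must match the statement text the application actually sends, character for character, or the guide will never be applied.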

TempDB Configuration

TempDB is a shared resource used by every database on the instance. Sort operations, hash joins, temporary tables, version store — they all use tempdb. If it is not configured properly, it becomes a bottleneck.

The key recommendations:

  • Multiple data files. As a starting point, create one tempdb data file per logical processor core, up to eight files. All files should be the same size. This reduces allocation contention.
  • Pre-size the files. Do not let tempdb grow in tiny increments. Size it based on your workload and set a sensible autogrowth increment (256 MB or more).
  • Fast storage. If you can put tempdb on its own fast storage (SSD or NVMe), do so. It is the most I/O-intensive system database.
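Applying those recommendations looks roughly like this. The logical name tempdev is the installer default, but the sizes, file name, and drive path here are assumptions to adapt to your environment:

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 8GB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (
    NAME = tempdev2,
    FILENAME = N'T:\TempDB\tempdev2.ndf',  -- assumed dedicated drive
    SIZE = 8GB,
    FILEGROWTH = 512MB);

Repeat the ADD FILE step until you have one file per core, up to eight, all the same size.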

Common Anti-Patterns

A few coding patterns I see regularly that cause performance problems:

Scalar Functions in SELECT Lists

User-defined scalar functions execute once per row. On a result set of a million rows, that is a million function calls. Rewrite them as inline table-valued functions or incorporate the logic directly into the query.
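As a hedged example with hypothetical names: a scalar function that computes tax on an order total can be rewritten as an inline table-valued function, which the optimiser can fold into the main plan:

-- Inline TVF replacing a hypothetical scalar dbo.fn_GetTax(@Total)
CREATE FUNCTION dbo.fn_GetTaxInline (@Total MONEY)
RETURNS TABLE
AS
RETURN (SELECT @Total * 0.2 AS Tax);  -- assumed flat 20% rate
GO

SELECT o.OrderId, t.Tax
FROM dbo.[Order] o
CROSS APPLY dbo.fn_GetTaxInline(o.Total) t;

The CROSS APPLY version is expanded inline, so there is no per-row function invocation overhead.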

Cursor-Based Processing

Row-by-row processing is almost always slower than set-based operations. I estimate that 90% of the cursors I have encountered in production code could be replaced with a single set-based query that runs orders of magnitude faster.
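A typical case, with an assumed business rule: a cursor that fetches each order and applies a discount one UPDATE at a time collapses into a single set-based statement:

-- Instead of a cursor looping over dbo.[Order] and updating one row per iteration:
UPDATE dbo.[Order]
SET Total = Total * 0.9           -- assumed 10% discount rule
WHERE OrderDate < '2020-01-01';

One statement, one pass over the data, and the optimiser is free to choose the best access path.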

Excessive Use of NOLOCK

Sprinkling WITH (NOLOCK) on every query is not a performance strategy. It is a data accuracy strategy — specifically, a strategy for having less of it. NOLOCK reads can return duplicate rows, skip rows, and read partially updated data. If you have concurrency problems, investigate READ COMMITTED SNAPSHOT ISOLATION instead.
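If you do go the RCSI route, enabling it is a single command. The database name is a placeholder, and WITH ROLLBACK IMMEDIATE terminates open transactions, so run it in a maintenance window:

ALTER DATABASE YourDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;

Readers then see a consistent snapshot from the version store instead of blocking on writers, with none of NOLOCK's dirty-read anomalies.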

Implicit Conversions

When you compare a column of one data type against a parameter of another, SQL Server performs an implicit conversion. This can prevent index usage entirely. Check your execution plans for yellow warning triangles — they often indicate implicit conversions.
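A classic case, with an assumed schema: comparing a VARCHAR column against an NVARCHAR parameter forces the column side to be converted (nvarchar wins by data-type precedence), which defeats any index on the column. The fix is simply to match the declared types:

-- Assumes a hypothetical VARCHAR(20) column CustomerRef with an index on it.
-- DECLARE @Ref NVARCHAR(20) here would force CONVERT_IMPLICIT on the column.
DECLARE @Ref VARCHAR(20) = 'CUST-001';   -- match the column's type exactly

SELECT OrderId
FROM dbo.[Order]
WHERE CustomerRef = @Ref;

The same trap appears constantly in ORM-generated SQL, where string parameters default to nvarchar.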

A Systematic Approach

When faced with a poorly performing database, I follow a consistent sequence:

  1. Identify the slowest queries using Query Store or sys.dm_exec_query_stats
  2. Examine execution plans for the top offenders
  3. Check statistics freshness on the relevant tables
  4. Review index usage and missing index DMVs
  5. Look for parameter sniffing patterns
  6. Check tempdb configuration and contention
  7. Review server-level settings (max memory, max degree of parallelism, cost threshold for parallelism)
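Step 1 can be done with a standard DMV query that ranks cached statements by average CPU time (reported in microseconds):

SELECT TOP (10)
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_cpu_us,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
    SUBSTRING(st.text,
              (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY avg_cpu_us DESC;

Like the index usage DMVs, this only covers plans currently in cache; Query Store, where enabled, gives a more durable history.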

This methodical approach avoids the temptation to make random changes and hope for improvement. Every change should be based on evidence from the execution plan and system DMVs, and every change should be measured to confirm it actually helped.

Performance tuning is not glamorous work, but there is genuine satisfaction in taking a query from thirty seconds to thirty milliseconds. And for enterprise applications where the database underpins everything, it is some of the most valuable work you can do.
