Hands-On Scenarios
This section pairs every concept from the study guide with a real, runnable exercise using a local SQL Server instance. You'll see actual timing numbers, real execution plans, and the satisfaction of watching logical reads drop from thousands to dozens.
Prerequisites
Step 1 – Start SQL Server via Docker:
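The repo's own Docker setup isn't reproduced here; if you need to start from scratch, a minimal equivalent using the official 2022 image and the `sa` password from the connection settings below would be (the container name is illustrative):

```
docker run -d --name interview-demo-sql \
  -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=InterviewDemo@2026" \
  -p 1433:1433 \
  mcr.microsoft.com/mssql/server:2022-latest
```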
Wait ~20 seconds for SQL Server to initialize, then run the init scripts.
Step 2 – Verify it's working (VS Code MSSQL extension or DataGrip):
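A quick smoke test once the container is up (the `dbo` schema is an assumption):

```sql
SELECT @@VERSION;                                -- server is alive
USE InterviewDemoDB;
SELECT COUNT(*) AS OrderCount FROM dbo.Orders;   -- should be roughly 55,000
```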
Step 3 – Run the C# demo app:
```
cd src/SqlDemos
dotnet run           # interactive menu
dotnet run -- all    # run all 6 scenarios and print timing table
dotnet run -- 2      # run scenario 2 only
```
The Database: InterviewDemoDB
| Table | Rows (approx.) | Purpose |
|---|---|---|
| Customers | 1,000 | 1 whale (50k orders), 999 normal |
| Products | 25 | Used in OrderItems |
| Orders | ~55,000 | Skewed data – key to parameter sniffing |
| OrderItems | ~120,000 | Line items per order |
| Employees | 15 | Org chart for SELF JOIN demo |
| OrderAuditLog | ~14,000 | Wide table for SELECT * demo |
The whale: CustomerID=1 ("BigCorp Ltd") has ~50,000 orders. Every other customer has 1–8. This skew is what makes Scenario 2 (Parameter Sniffing) interesting.
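You can verify the skew directly; a sketch assuming the obvious column names:

```sql
-- CustomerID = 1 should dwarf everyone else in the top-5 list
SELECT TOP (5) CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID
ORDER BY OrderCount DESC;
```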
The 6 Scenarios
| # | Name | Antipattern | Typical Speedup |
|---|---|---|---|
| 1 | Cursor Catastrophe | Row-by-row UPDATE in a loop | 100–1000× |
| 2 | Parameter Sniffing Ghost | Cached plan wrong for data distribution | 10–50× |
| 3 | Non-SARGable Date Trap | Functions on columns in WHERE | 10–20× |
| 4 | SELECT * Flood | Pulling fat NVARCHAR(MAX) columns | 5–30× |
| 5 | Key Lookup Tax | Non-covering index | 5–15× |
| 6 | Scalar UDF Killer | Scalar function in WHERE clause | 20–100× |
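To give a flavor of what the demos contain, here is Scenario 3's trap and fix in miniature (column names are illustrative; the repo's scripts are the authoritative version):

```sql
-- Trap: wrapping the column in a function hides it from the index → scan
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2025;

-- Fix: an open-ended range on the bare column is SARGable → index seek
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '2025-01-01' AND OrderDate < '2026-01-01';
```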
How to Measure on macOS
Every scenario has a query exercise block. The SQL commands themselves (`SET STATISTICS IO ON`, `EXEC ...`) are identical across both tools. What differs is how you enable the execution plan. Pick the one you have:
Note: Azure Data Studio was retired on February 28, 2026. Use one of the two tools below instead.
Option A – VS Code with the MSSQL Extension
- Install SQL Server (mssql) by Microsoft (`ms-mssql.mssql`) from the Extensions panel if you haven't already.
- Open a `.sql` file, connect to `localhost,1433` when prompted (or use the status-bar connection picker at the bottom).
- Paste the statistics preamble (see "The SQL Commands" below) at the top of your query.
- To get the actual execution plan, right-click anywhere in the editor → "Run Query with Actual Execution Plan" (or click the dropdown arrow next to the Run button ▶ and select that option).
- The results pane shows three tabs:
  - Results – the row data
  - Messages – `STATISTICS IO` / `STATISTICS TIME` output (logical reads, CPU time, elapsed time)
  - Execution Plan – graphical plan (click any operator node to see estimated vs. actual rows, cost %, output column list)

Tip: The VS Code MSSQL extension uses the same `.sqlplan` XML format as SSMS, so all operators (Index Seek, Key Lookup, Hash Match, etc.) use identical names.
Option B – JetBrains DataGrip
- Connect to `localhost,1433` with the SQL Server driver (Login: `sa`, Password: `InterviewDemo@2026`, Database: `InterviewDemoDB`).
- Open a query console for that data source.
- Paste the statistics preamble (see "The SQL Commands" below) at the top.
- To get the actual execution plan, right-click in the editor → Explain Plan → Explain Analyzed. This runs the query and captures live row counts – equivalent to SSMS's actual execution plan.
- For a quick estimated plan without running the query: right-click → Explain Plan → Explain Plan (or `Cmd+Shift+E`).
- Results appear in separate panels at the bottom:
  - Output – the `STATISTICS IO` / `STATISTICS TIME` text (logical reads, CPU, elapsed)
  - Plan – DataGrip's graphical plan tree. Click any node to expand its full properties (actual rows, estimated rows, cost, output columns).

Tip: DataGrip's Explain Analyzed view highlights the costliest nodes in orange/red. Start your investigation there.
The SQL Commands (Same in All Tools)
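Both options above refer to a statistics preamble; it looks like this, with optional cache-clearing lines (dev instance only) for cold-cache comparisons:

```sql
SET STATISTICS IO ON;    -- per-table page reads in the Messages/Output tab
SET STATISTICS TIME ON;  -- CPU and elapsed time per statement

-- Optional, dev instance only: start each comparison from a cold cache
-- DBCC FREEPROCCACHE;      -- discard cached plans
-- DBCC DROPCLEANBUFFERS;   -- flush clean pages from the buffer pool
```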
After running, look for this pattern in the Messages/Output tab:
```
Table 'Orders'. Scan count 1, logical reads 3842, lob logical reads 0, ...
 SQL Server Execution Times: CPU time = 234 ms, elapsed time = 412 ms.
```
Logical reads is your primary metric: one logical read = one 8 KB page read from the buffer cache. Dropping logical reads from thousands to dozens is the goal for most of these scenarios.
Good habit
Always run the bad version first, write down the logical reads, then run the fixed version. The contrast is what makes it stick.
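That habit, as a before/after session using the key-lookup theme of Scenario 5 (index and column names here are illustrative, not necessarily the repo's):

```sql
-- 1. BAD: a narrow index on OrderDate forces a Key Lookup per matching row
SELECT OrderID, CustomerID, TotalAmount
FROM dbo.Orders
WHERE OrderDate >= '2025-06-01';
-- write down the logical reads from the Messages/Output tab

-- 2. FIX: a covering index carries the selected columns, no lookups needed
CREATE INDEX IX_Orders_OrderDate_Covering
ON dbo.Orders (OrderDate)
INCLUDE (CustomerID, TotalAmount);

-- 3. Re-run the same SELECT and compare logical reads
```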