Stop Flying Blind: Safely Log EF Core SQL in Production
I remember the incident vividly. P95 latency for our main API shot through the roof overnight. Our logs showed a bunch of slow HTTP requests, but nothing about why they were slow. We were flying blind. After hours of frantic debugging, we found the culprit: a seemingly innocent LINQ query was generating a monster SQL join, causing a full table scan on a massive table.
We had no visibility into what Entity Framework Core was doing. This one burned me, and I promised myself: never again.
Most developers face this. They either accept the black box and pray, or they enable dangerous logging flags that leak sensitive user data. There’s a much better way to see exactly what SQL EF Core is executing in production without compromising security.
TL;DR: Use a custom DbCommandInterceptor to log SQL, execution time, and parameter metadata (but never the values). Correlate logs with a TraceId and use sampling or thresholds to avoid log noise.
The Common Mistakes That’ll Get You Paged at 3 AM
Before we get to the right way, let’s talk about the traps. I’ve seen these “temporary fixes” make their way into production code, and the results are always painful.
- Never use EnableSensitiveDataLogging() in production. I can’t stress this enough. It logs parameter values, which means passwords, personal user info, and API keys will end up in your logs. It’s a security incident waiting to happen. Just don’t do it.
- Don’t rely on ToQueryString() for automated logging. This method is fantastic for debugging on your local machine, but it’s a trap for production logging: it doesn’t give you execution time and doesn’t integrate with your observability stack. It’s a debugging tool, not a production monitor.
- Avoid logging every single query. Turning on EF Core’s default Information-level logging for every command will flood your logging system. You’ll pay a fortune in storage and ingestion costs, and the signal-to-noise ratio will be so low that you’ll miss the actual problems.
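For the first point, the safest pattern is to gate the flag on the hosting environment so it can never ship enabled to production. Here’s a minimal sketch, assuming a typical ASP.NET Core Program.cs with a builder, a connection string, and an AppDbContext (names are illustrative):

```csharp
// Program.cs — EnableSensitiveDataLogging() is useful locally but must
// never run in production, so gate it on the hosting environment.
builder.Services.AddDbContext<AppDbContext>(options =>
{
    options.UseSqlServer(connectionString);

    if (builder.Environment.IsDevelopment())
    {
        // Parameter values become visible in logs: development only.
        options.EnableSensitiveDataLogging();
    }
});
```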
The Production-Ready Fix: A DbCommandInterceptor
Interceptors are the clean, powerful way to hook into EF Core’s execution pipeline. They let you control exactly what gets logged and how.
Here’s a production-ready interceptor I’ve used. It safely captures the SQL text, measures execution time, and logs parameter names and types without ever touching the sensitive values.
using Microsoft.EntityFrameworkCore.Diagnostics;
using Microsoft.Extensions.Logging;
using System.Data.Common;
using System.Diagnostics;
using System.Linq;

public class SqlLoggingInterceptor : DbCommandInterceptor
{
    private readonly ILogger<SqlLoggingInterceptor> _logger;
    private readonly int _slowQueryThresholdMs;

    // A threshold of 200ms is a decent starting point.
    public SqlLoggingInterceptor(ILogger<SqlLoggingInterceptor> logger, int slowQueryThresholdMs = 200)
    {
        _logger = logger;
        _slowQueryThresholdMs = slowQueryThresholdMs;
    }

    // EF Core already measures execution time for us: CommandEndEventData.Duration
    // covers the full command execution, so no stopwatch bookkeeping is needed.
    public override DbDataReader ReaderExecuted(
        DbCommand command, CommandExecutedEventData eventData, DbDataReader result)
    {
        LogCommand(command, eventData);
        return base.ReaderExecuted(command, eventData, result);
    }

    public override void CommandFailed(
        DbCommand command, CommandErrorEventData eventData)
    {
        LogCommand(command, eventData, eventData.Exception);
        base.CommandFailed(command, eventData);
    }

    private void LogCommand(DbCommand command, CommandEndEventData eventData, Exception? exception = null)
    {
        var durationMs = (long)eventData.Duration.TotalMilliseconds;

        // Safe parameter logging: names and types only, no values.
        var parameterInfo = command.Parameters
            .Cast<DbParameter>()
            .Select(p => new
            {
                Name = p.ParameterName,
                Type = p.DbType.ToString(),
                IsNull = p.Value is null or DBNull
            })
            .ToList();

        var logLevel = exception != null ? LogLevel.Error :
            durationMs > _slowQueryThresholdMs ? LogLevel.Warning :
            LogLevel.Information;

        // The TraceId gives us the magic link back to the originating HTTP request.
        _logger.Log(logLevel, exception,
            "SQL executed: {CommandText} | Duration: {DurationMs}ms | Parameters: {Parameters} | TraceId: {TraceId}",
            command.CommandText.Trim(),
            durationMs,
            parameterInfo,
            Activity.Current?.Id);
    }
}
Now, just wire it up in your dependency injection container.
// Program.cs
services.AddSingleton<SqlLoggingInterceptor>();
services.AddDbContext<AppDbContext>((serviceProvider, options) =>
{
options.UseSqlServer(connectionString);
options.AddInterceptors(serviceProvider.GetRequiredService<SqlLoggingInterceptor>());
});
Boom. You now have safe, structured, and insightful SQL logging.
Now You Can Actually Catch Slow Queries
The interceptor above automatically flags any query running longer than your threshold as a Warning. This is where the real power comes in. Don’t just log these warnings and forget them.
Set up alerts. Configure your APM tool (like Datadog, Splunk, or Seq) to trigger an alert when it sees these warning logs. A sudden spike in slow queries should page the on-call engineer. Logging is useless if nobody is looking at it.
For high-traffic systems, you might not want to log every single successful query. You can easily add sampling to reduce the noise.
private bool ShouldLogQuery(long durationMs, Exception? exception)
{
    // Always log failed or slow queries.
    if (exception != null || durationMs > _slowQueryThresholdMs) return true;

    // Sample roughly 1% of the normal, fast queries.
    return Random.Shared.NextDouble() < 0.01;
}
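To make the decision logic concrete, here’s a standalone sketch of the same threshold-plus-sampling behavior, pulled out into its own class so it can be exercised in isolation (the 200ms threshold and 1% sample rate are assumed defaults, and the injectable Random exists only to make the sketch testable):

```csharp
using System;

public static class QueryLogSampler
{
    // Decide whether a query should be logged: failures and slow queries
    // always pass; fast, successful queries are sampled.
    public static bool ShouldLog(long durationMs, bool failed,
        int thresholdMs = 200, double sampleRate = 0.01, Random? rng = null)
    {
        if (failed || durationMs > thresholdMs) return true;
        return (rng ?? Random.Shared).NextDouble() < sampleRate;
    }
}
```

Inside the interceptor you’d call this helper at the top of LogCommand and bail out early when it returns false.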
Linking It All Together: TraceId
Notice the Activity.Current?.Id in the log message? That’s the TraceId. ASP.NET Core automatically creates this for each incoming HTTP request. By including it in your SQL log, you can connect a slow API endpoint directly to the exact database query that’s holding it up. This is a massive time-saver during an incident.
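To see the correlation end to end, you can emit the same identifier from the request pipeline. This is a hypothetical inline middleware sketch for Program.cs (assumes the usual `app` builder and resolves an ILogger from the request’s service provider):

```csharp
// Hypothetical middleware: logs the request path with the same Activity id
// the SQL interceptor logs, so the two entries share one TraceId.
app.Use(async (context, next) =>
{
    var logger = context.RequestServices
        .GetRequiredService<ILogger<Program>>();

    logger.LogInformation("Handling {Path} | TraceId: {TraceId}",
        context.Request.Path, Activity.Current?.Id);

    await next();
});
```

Search your log store for one TraceId and you get the HTTP request and every SQL command it triggered, in order.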
The Right Way to Use ToQueryString() (In Dev Only!)
ToQueryString() is still your best friend for local development. When you’re writing a new query and want to see the generated SQL, it’s perfect.
var query = context.Users
.Where(u => u.TenantId == tenantId && u.IsActive)
.Include(u => u.Roles);
// This is great for debugging in your IDE.
Console.WriteLine(query.ToQueryString());
Just make sure it never becomes part of your automated production logging strategy.
Here’s When I’d Use It, Here’s When I’d Avoid It
So, should you use this interceptor?
Here’s when I’d use it: On any production application that talks to a database. Period. The visibility it provides is non-negotiable for serious applications. It helps you catch performance regressions, diagnose incidents, and understand what your ORM is actually doing. The setup is minimal, and the payoff is huge.
Here’s when I’d avoid it: I honestly can’t think of a good reason to avoid this on a production system. If you have an extremely simple, low-traffic internal app, maybe you could get away with the default logging. But for any system where performance and reliability matter, the custom interceptor is the way to go. It’s the professional standard.
Your future on-call self will thank you.