I once inherited a dashboard page that took over five seconds to load. The API endpoint behind it looked innocent enough, but digging in, I found a classic case of death by a thousand cuts: lazy loading, bloated entities, and chatty database calls. It was a textbook example of default EF Core behavior backfiring under real-world load.

After fixing that mess (and many others like it), I’ve developed a small playbook of go-to optimizations. These aren’t wild, complex tricks. They’re five fundamental patterns that I apply to almost every high-traffic ASP.NET Core project.

Here’s what I do to keep my data layers fast and lean.

1. Ditch Full Entities, Project to DTOs

This one is my golden rule for read queries. If you’re just displaying data, stop loading full-blown EF Core entities.

The problem is that when you pull a full entity, EF Core has to hydrate every single property, even the ones you don't need. That means selecting every column (and, once you start adding Include calls, joining extra tables) and pulling back way more data than your API client will ever see.

It’s pure waste.

Instead, use Select to project directly into a Data Transfer Object (DTO).

// The slow way
var users = await _context.Users
    .Include(u => u.Profile) // Pulls everything from two tables
    .ToListAsync();

// The fast, clean way
var users = await _context.Users
    .Select(u => new UserSummaryDto 
    {
        Id = u.Id,
        Name = u.Name,
        Email = u.Email
    })
    .ToListAsync();
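For reference, UserSummaryDto is nothing fancy. The exact shape is up to you; this is just a sketch matching the projection above, carrying only what the endpoint actually returns:

public class UserSummaryDto
{
    public int Id { get; set; }   // assuming an int key; use whatever your User entity uses
    public string Name { get; set; } = string.Empty;
    public string Email { get; set; } = string.Empty;
}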

On a multi-tenant app I worked on, we had a user list query that was taking around 280ms. The generated SQL was pulling 15 columns. Switching to a projection DTO dropped the query time to 85ms and the SQL was trimmed to just the 3 columns we needed. Easy win.

When I’d avoid it: Don’t bother with projections if you actually need the full entity to perform updates or run business logic. For write operations, stick with the real entity.
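For contrast, here's roughly what a write path looks like. A minimal sketch, assuming a hypothetical update-email operation, where loading the full tracked entity is exactly what you want:

// Write path: load the real entity (tracked), change it, save it
var user = await _context.Users
    .FirstOrDefaultAsync(u => u.Id == userId);

if (user is not null)
{
    user.Email = newEmail;             // the change tracker picks this up
    await _context.SaveChangesAsync(); // generates a targeted UPDATE
}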

2. Use AsNoTracking() for All Read-Only Queries

This is probably the lowest-hanging fruit in EF Core performance.

Whenever EF Core loads an entity, it keeps a snapshot of it in its change tracker. This is how it knows what UPDATE statement to generate when you call SaveChanges(). But for any query where you’re just reading data (think API GET endpoints, reports), this tracking is 100% overhead.

It burns CPU cycles and eats memory for no reason.

The fix is a single method call: AsNoTracking().

var products = await _context.Products
    .AsNoTracking() // Tell EF Core: "don't watch these for changes"
    .Where(p => p.Category == category)
    .ToListAsync();

Forgetting this once burned me in production on a reporting endpoint. The query was pulling thousands of records, and memory usage crept up with every request. Adding AsNoTracking() dropped that endpoint's memory footprint by nearly 30%.
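If a context is used almost exclusively for reads, you can also flip the default so every query is no-tracking unless you opt back in with AsTracking(). A minimal sketch, assuming you're happy with that trade-off context-wide:

// In OnConfiguring (or wherever you build your DbContextOptions)
optionsBuilder.UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking);

// Individual queries can still opt back in when you do need change tracking
var product = await _context.Products
    .AsTracking()
    .FirstOrDefaultAsync(p => p.Id == id);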

When I’d avoid it: Simple: if you’re going to modify the entities you fetched and call SaveChanges(), don’t use AsNoTracking(). EF Core won’t detect changes to untracked entities, so your updates will silently never make it to the database.

3. Batch Your Database Writes

Stop calling SaveChangesAsync() inside a loop. Please.

Every single call to SaveChangesAsync() is a separate round-trip to the database, wrapped in its own transaction. If you’re adding 100 new records, that’s 100 network hops and 100 transactions. It’s painfully slow.

The right way is to add all your entities to the DbContext first, then call SaveChangesAsync() once.

// Don't do this. It's so chatty.
foreach (var order in newOrders)
{
    _context.Orders.Add(order);
    await _context.SaveChangesAsync(); // Bad: a round-trip for every single order
}

// Do this instead. One and done.
_context.Orders.AddRange(newOrders);
await _context.SaveChangesAsync(); // Good: one trip, one transaction

We had a bulk import job that was taking forever. Moving from per-item saves to a single batched save made the process over 5 times faster.

When I’d avoid it: For huge batches (thousands of records), a single transaction can put a lot of pressure on your database’s transaction log. In those cases, I’d either chunk the work into smaller batches (e.g., save every 500 records) or look into a library like EFCore.BulkExtensions for true bulk operations.
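Here's roughly what the chunked version looks like. A sketch that leans on .NET's Chunk() helper (available since .NET 6), assuming a batch size of 500 suits your database:

// Save in chunks of 500 so no single transaction gets too big
foreach (var batch in newOrders.Chunk(500))
{
    _context.Orders.AddRange(batch);
    await _context.SaveChangesAsync();  // one round-trip per 500 orders, not per order
    _context.ChangeTracker.Clear();     // optional: keep the change tracker from growing unbounded
}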

4. Build Indexes for Your Queries, Not Just Your Tables

Just because you have an index on UserId doesn’t mean your query is fast.

A common mistake is creating single-column indexes and calling it a day. But the SQL that EF Core generates from your LINQ often filters on several columns at once. If your index doesn’t match the shape of your query, the database might just ignore it.

You need to look at the actual SQL EF Core generates and build composite indexes to match.

For example, if you often query for orders with a certain status for a specific user, you need a two-column index.

// Your LINQ query
var recentOrders = await _context.Orders
    .Where(o => o.UserId == userId && o.Status == OrderStatus.Processing)
    .ToListAsync();

// Your entity needs an index that matches that query shape
[Index(nameof(UserId), nameof(Status))] // This is the key! (the attribute lives in Microsoft.EntityFrameworkCore, EF Core 5+)
public class Order
{
    public int Id { get; set; }
    public OrderStatus Status { get; set; }
    public DateTime CreatedDate { get; set; }
    public int UserId { get; set; }
}
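If you prefer configuring the model in OnModelCreating instead of attributes (or you can't touch the entity class), the same composite index looks roughly like this:

// In your DbContext
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Order>()
        .HasIndex(o => new { o.UserId, o.Status }); // same two-column index, fluent style
}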

Turn on EF Core’s query logging in development to see the generated SQL (add sensitive data logging if you also want parameter values), or use a tool like SQL Server Profiler. Find your most frequent, expensive queries and build indexes that serve them directly.

When I’d avoid it: Don’t go crazy and index everything. Every index you add makes INSERT and UPDATE operations a little slower because the database has to do more work. Prioritize indexing for your slowest read queries on your biggest tables.

5. Be Wary of Include() on Large Collections

The Include() method seems convenient, but it can be a sneaky performance trap.

When you use Include() to load a related collection (e.g., a user and all their orders), EF Core generates a LEFT JOIN. If a user has 50 orders, you get 50 rows back from the database, with the user’s data duplicated in every single row. Add a second collection (like the order items below) and the row counts multiply. That’s the dreaded “cartesian explosion,” and it can kill your app’s performance and memory.

This works fine in dev with 2 orders per user. It dies in production when a power user has 2,000.

// This can generate a gigantic, bloated result set
var users = await _context.Users
    .Include(u => u.Orders)
        .ThenInclude(o => o.OrderItems) // This makes it even worse
    .ToListAsync();

// Better: Load the data in separate, targeted queries
var users = await _context.Users.ToListAsync();
var userIds = users.Select(u => u.Id).ToList();
var orders = await _context.Orders
    .Where(o => userIds.Contains(o.UserId))
    .ToListAsync();
// Now you can link them up in memory
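The linking step is just a few lines of LINQ. A minimal sketch, assuming Order exposes a UserId and User has a settable Orders collection:

// Group the orders once, then attach them to their users in memory
var ordersByUser = orders.ToLookup(o => o.UserId);

foreach (var user in users)
{
    user.Orders = ordersByUser[user.Id].ToList();
}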

Splitting the query into two separate, smaller queries is almost always more efficient than one massive JOIN.

When I’d avoid it: Include() is perfectly fine for one-to-one relationships or very small, predictable one-to-many collections (like a user who can only have 2-3 phone numbers). Just be deliberate and know how much data you’re pulling.

My Final Takeaway

These patterns aren’t silver bullets, but they are my default starting point for keeping EF Core fast.

  • Here’s when I use them: On any read-heavy API endpoint, background job, or data export process. Projections and AsNoTracking() are my go-to for GET requests. Batching is essential for any bulk import.
  • Here’s when I avoid them: On simple CRUD operations for a single entity, you don’t need to over-optimize. If you need the full entity for complex business rules, loading it directly is cleaner than juggling DTOs and multiple queries.

Always measure. Don’t guess. Use a profiler or just enable EF Core’s built-in logging to see what’s happening under the hood.

// In your DbContext's OnConfiguring (the Program.cs version is sketched below)
// LogLevel comes from Microsoft.Extensions.Logging
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .LogTo(Console.WriteLine, LogLevel.Information) // print generated SQL to the console
        .EnableSensitiveDataLogging();                  // show parameter values (development only!)
}
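And the Program.cs version, a sketch assuming a SQL Server provider, a context called AppDbContext, and a connection string named "Default":

// Program.cs (minimal hosting model)
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("Default"))
           .LogTo(Console.WriteLine, LogLevel.Information)
           .EnableSensitiveDataLogging(builder.Environment.IsDevelopment())); // parameter values only in dev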

Find your slowest query and start there. Often, one or two of these patterns are all you need to get things running smoothly again.


FAQ

Do these EF Core tips work in EF Core 8 and 9?

Yes. These patterns are stable across EF Core versions, though you should always review the release notes for any breaking changes.

Can I combine all five performance techniques?

Yes, they are not mutually exclusive. However, measure your workload before and after applying them to ensure net performance gains.

How much performance improvement should I expect?

It depends on your data size and queries. In high-traffic SaaS platforms, these tips typically deliver 15–40% faster queries and 20–30% lower memory usage.

When should I avoid using AsNoTracking()?

Avoid it when you need EF Core to track entity state for updates or deletes, as disabling tracking means changes won’t be detected.

What’s the best way to index EF Core queries?

Analyze your most frequent LINQ queries, inspect the generated SQL, and create composite indexes that match your WHERE clauses.

About the Author

@CodeCrafter is a software engineer who builds real-world systems, from resilient ASP.NET Core backends to clean, maintainable Angular frontends. With 11+ years in production development, he shares what actually works when shipping software that has to last.