The comfort of modern defaults
Modern .NET development has conditioned us well. When data access is involved, we rarely hesitate: Entity Framework or Dapper is the obvious choice. I followed that same reflex recently when I started building a synchronization tool for a client.

The task itself was straightforward. An external system exposed close to 200k records that needed to be transformed into a completely different structure and persisted into another data store. I started with Dapper, feeling confident it would be more than fast enough.

The problem with row-by-row inserts
As the implementation evolved, a familiar pattern emerged: each record was transformed and inserted individually. When I ran it against the full dataset, the total runtime approached an hour. Adding parallelism improved things slightly, but it also added complexity and didn't address the underlying issue. It felt like the wrong solution to the right problem.
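
The shape of that first version is easy to reconstruct. Here is a minimal sketch, assuming the source rows arrive as externalRecords and a Transform method maps them to the target structure; the table and column names are illustrative, not the real schema:

// A minimal sketch of the row-by-row pattern; 'externalRecords' and
// 'Transform' are illustrative stand-ins for the real source and mapping.
const string insertSql =
    "INSERT INTO dbo.TargetTable (Id, Name, CreatedAt) VALUES (@Id, @Name, @CreatedAt)";

foreach (var source in externalRecords)
{
    var record = Transform(source);                    // map to the target structure
    await connection.ExecuteAsync(insertSql, record);  // Dapper: one round trip per record
}

At close to 200k records, it is those individual round trips to the server that add up.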

After stepping back, I realized this wasn't a new problem at all. Years ago, before ORMs became the default, we solved this exact scenario differently. That realization brought me back to ADO.NET and, more specifically, SQL Bulk Copy.

Rediscovering SQL Bulk Copy
Using SqlBulkCopy completely changed the performance profile. Instead of executing insert statements in a loop, the data is streamed directly into SQL Server. With minimal effort, the same workload ran at around 5000 records per second. What previously took nearly an hour now completed in seconds.

The code itself is refreshingly simple:

// 'connection' is an open SqlConnection to the target database;
// 'dataTable' holds the already-transformed rows in memory.
using var bulkCopy = new SqlBulkCopy(connection)
{
    DestinationTableName = "dbo.TargetTable",
    BatchSize = 5000
};

// Map source columns to destination columns by name.
bulkCopy.ColumnMappings.Add("Id", "Id");
bulkCopy.ColumnMappings.Add("Name", "Name");
bulkCopy.ColumnMappings.Add("CreatedAt", "CreatedAt");

// Stream every row to the server in batches of 5000.
await bulkCopy.WriteToServerAsync(dataTable, cancellationToken);
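
The dataTable referenced above is just an in-memory staging table. One possible way to fill it from the transformed records, with an illustrative record shape and a transformedRecords collection that stands in for the real data:

// Illustrative only: the columns mirror the mappings above, and
// 'transformedRecords' stands in for the real transformed collection.
var dataTable = new DataTable();            // System.Data.DataTable
dataTable.Columns.Add("Id", typeof(int));
dataTable.Columns.Add("Name", typeof(string));
dataTable.Columns.Add("CreatedAt", typeof(DateTime));

foreach (var record in transformedRecords)
{
    dataTable.Rows.Add(record.Id, record.Name, record.CreatedAt);
}

The column names only need to line up with the source side of the mappings; SqlBulkCopy handles the rest.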

What makes this so effective is what happens beneath the surface. SqlBulkCopy uses SQL Server's native Tabular Data Stream (TDS) protocol, sending data in an optimized binary format. Instead of parsing thousands of individual SQL statements, SQL Server receives a continuous stream of rows designed specifically for bulk ingestion. The database engine is simply doing what it does best.
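
Buffering everything in a DataTable was fine at this scale, but SqlBulkCopy can also stream rows from an IDataReader, and a couple of options are aimed squarely at large loads. A sketch of that variant, where sourceReader is an assumed reader over the transformed rows rather than anything from the actual project:

// A streaming variant; 'sourceReader' is an assumed IDataReader over the
// transformed rows, and the options shown are optional tuning knobs.
using var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.TableLock, externalTransaction: null)
{
    DestinationTableName = "dbo.TargetTable",
    BatchSize = 5000,
    BulkCopyTimeout = 0,     // 0 means no timeout, useful for long-running loads
    EnableStreaming = true   // pull rows from the reader as they are written
};

// Column mappings can be added here exactly as before if names differ.
await bulkCopy.WriteToServerAsync(sourceReader, cancellationToken);

TableLock takes a bulk update lock on the destination table for the duration of the copy, which generally helps throughput when nothing else needs to write to the table at the same time.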

When frameworks aren't enough
Frameworks like Entity Framework and Dapper remain excellent tools, and for most applications they are exactly the right choice. But this project was a reminder that ADO.NET is far from obsolete. It is simply specialized.

For synchronization tools, large migrations, and high-volume data processing, stepping away from abstractions can lead to solutions that are faster, simpler, and more reliable. Sometimes, the best way forward is remembering the tools we already have and knowing when to use them.