Avoid unnecessary executemany subbatching#1479

Open
bkline wants to merge 1 commit into mkleehammer:master from bkline:741-slow-inserts

Conversation

@bkline
Contributor

@bkline bkline commented Apr 7, 2026

This commit implements Michael's suggestion to scan as many rows of data as we can for the fast_executemany path when we are determining how many rows we can send with the same binding. It's still possible for an application to provide data in such a way that more than one invocation of SQLExecute() is required. For example, with a table containing a single VARCHAR column, you can force a separate call to SQLExecute() for each row if you alternate back and forth between rows with string values and rows with bytes values. My advice would be: "don't do that."
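
To make the scenario concrete, here is a minimal sketch (the DSN, table, and column names are placeholders, not part of this change) showing both a homogeneous batch that can be sent with a single binding and the alternating str/bytes batch described above, which still forces per-row execution:

```python
import pyodbc

# Placeholder connection string; substitute your own DSN / driver settings.
cnxn = pyodbc.connect("DSN=mydsn")
cursor = cnxn.cursor()
cursor.fast_executemany = True

# Homogeneous rows: every value is a str, so a single parameter binding
# covers the whole batch and it can be sent with one SQLExecute() call.
good_rows = [("alpha",), ("bravo",), ("charlie",), ("delta",)]
cursor.executemany("INSERT INTO t (col) VALUES (?)", good_rows)

# Pathological rows: alternating str and bytes values mean no two adjacent
# rows share a binding, so each row requires its own SQLExecute() call.
bad_rows = [("alpha",), (b"bravo",), ("charlie",), (b"delta",)]
cursor.executemany("INSERT INTO t (col) VALUES (?)", bad_rows)

cnxn.commit()
```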

It turns out that, because all the work to scan the rows takes place in memory, the amount of time required to scan even very large sets of values is trivial (less than half a millisecond to scan 10,000 rows with 16 values each on my development machine) compared with the time needed for even a single round trip to the database.
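
For anyone who wants a rough check of the overhead on their own setup, here is a timing sketch (the table name and DSN are again placeholders; this times the whole executemany() call, which is dominated by the database round trips rather than by the in-memory scan):

```python
import time
import pyodbc

cnxn = pyodbc.connect("DSN=mydsn")  # placeholder DSN
cursor = cnxn.cursor()
cursor.fast_executemany = True

# 10,000 rows of 16 values each, the scale mentioned above.
rows = [tuple(f"value-{r}-{c}" for c in range(16)) for r in range(10_000)]
sql = "INSERT INTO wide_table VALUES ({})".format(", ".join("?" * 16))

start = time.perf_counter()
cursor.executemany(sql, rows)
print(f"executemany of {len(rows):,} rows took {time.perf_counter() - start:.3f}s")
cnxn.commit()
```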

Closes #741



Development

Successfully merging this pull request may close these issues.

Passing None to SQL Server INSERTs drastically slows down inserts #741
