Commit 9ac4846
builtin/add: enable fscache around repo_read_index_preload (#899)
Trace2 and GIT_TRACE_FSCACHE evidence on Windows ARM64 (Snapdragon X Elite, ReFS Dev Drive) shows that the heaviest lstat-bound work in `git add` happens inside repo_read_index_preload(), which currently runs *before* enable_fscache() is called. Moving the enable_fscache() call up so that it wraps the preload phase lets the existing batched NtQueryDirectoryFile cache cover the bulk of the lstat traffic. This patch gave me a ~30% performance improvement for a batched add on a large repository.

Also, at the end of cmd_add(), the cleanup site called enable_fscache(0) again instead of disable_fscache(), leaking a reference on the fscache refcount.
2 parents ed26177 + add7a58 commit 9ac4846

1 file changed: builtin/add.c (2 additions, 2 deletions)
```diff
@@ -496,13 +496,13 @@ int cmd_add(int argc,
 		       (!(addremove || take_worktree_changes)
 			? ADD_CACHE_IGNORE_REMOVAL : 0));

+	enable_fscache(0);
 	if (repo_read_index_preload(repo, &pathspec, 0) < 0)
 		die(_("index file corrupt"));

 	die_in_unpopulated_submodule(repo->index, prefix);
 	die_path_inside_submodule(repo->index, &pathspec);

-	enable_fscache(0);
 	/* We do not really re-read the index but update the up-to-date flags */
 	preload_index(repo->index, &pathspec, 0);

@@ -621,6 +621,6 @@ int cmd_add(int argc,
 	free(ps_matched);
 	dir_clear(&dir);
 	clear_pathspec(&pathspec);
-	enable_fscache(0);
+	disable_fscache();
 	return exit_status;
 }
```
