author    Nicolas Pitre <nico@cam.org>    2007-05-25 21:38:58 -0400
committer Junio C Hamano <junkio@cox.net> 2007-05-26 20:28:13 -0700
commit    843366961cf14aad6490fbeb30f7b98f37f8833a (patch)
tree      cd8615b29d397b95b28f014783b83ead5b4acc4a /.mailmap
parent    99b5a79e1329468bee26ae3bd9070c47418279d0 (diff)
download  git-843366961cf14aad6490fbeb30f7b98f37f8833a.tar.gz
          git-843366961cf14aad6490fbeb30f7b98f37f8833a.tar.xz
improve delta long block matching with big files
Martin Koegler noted that create_delta() performs a new hash lookup after
every block copy encoding, each of which is currently limited to 64KB.
In the case of larger identical blocks, the next hash lookup would normally
point to the next 64KB block in the reference buffer, and multiple block
copy operations would be consecutively encoded.

It is, however, possible for the reference buffer to be sparsely indexed
if hash buckets have been trimmed down in create_delta_index() when
hashing of the reference buffer isn't well balanced.  In that case the
hash lookup following a block copy might fail to match anything, and the
fact that the reference buffer still matches beyond the previous 64KB
block will be missed.

Let's rework the code so that buffer comparison isn't bounded to 64KB
anymore.  The match size should be made as large as possible up front,
and only then should multiple block copies be encoded to cover it all.
Also, fewer hash lookups will be performed in the end.

According to Martin, this patch should reduce his 92MB pack down to
75MB with the dataset he has.  Tests performed on the Linux kernel
repo show a slightly smaller pack and a slightly faster repack.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
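To make the reworked strategy described above concrete, here is a minimal C
sketch (not the actual diff-delta.c code): the match against the reference
buffer is first extended as far as both buffers agree, and only then is it
split into consecutive copy operations of at most 64KB each.  The
emit_copy_op() helper and the buffer/offset parameters are hypothetical
stand-ins for the real delta encoder.

    /*
     * Illustrative sketch only, not the actual diff-delta.c code.
     * Step 1: extend the match as far as the buffers agree (no 64KB cap).
     * Step 2: cover the whole match with consecutive copy operations,
     *         each limited to the 64KB a single copy op can encode.
     * emit_copy_op() is a hypothetical stand-in for the real encoder.
     */
    #include <stddef.h>

    #define MAX_COPY_SIZE 0x10000   /* 64KB limit of one encoded block copy */

    extern unsigned char *emit_copy_op(unsigned char *out,
                                       size_t ref_off, size_t size);

    static unsigned char *encode_long_match(unsigned char *out,
                                            const unsigned char *ref,
                                            size_t ref_size, size_t ref_off,
                                            const unsigned char *data,
                                            size_t data_size, size_t data_off)
    {
        /* measure the full match length up front */
        size_t len = 0;
        while (ref_off + len < ref_size && data_off + len < data_size &&
               ref[ref_off + len] == data[data_off + len])
            len++;

        /* then emit as many <=64KB copy ops as needed to cover it */
        while (len) {
            size_t chunk = len < MAX_COPY_SIZE ? len : MAX_COPY_SIZE;
            out = emit_copy_op(out, ref_off, chunk);
            ref_off += chunk;
            len -= chunk;
        }
        return out;
    }

Because the full match length is known before any copy op is emitted, a
sparsely indexed reference buffer no longer causes the match to be cut short
at a 64KB boundary, and fewer hash lookups are needed per long match.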
Diffstat (limited to '.mailmap')
0 files changed, 0 insertions, 0 deletions