path: root/builtin-pack-objects.c
Commit history (subject, author, age; newest first):
* Change 'Deltifying objects' to 'Compressing objects' (Shawn O. Pearce, 2007-10-18)

    Recently I was referred to the Grammar Police, as the git-pack-objects
    progress message 'Deltifying %u objects' is considered to be not proper
    English to at least some small but vocal segment of the English speaking
    population. Technically we are applying delta compression to these
    objects at this stage, so the new term is slightly more acceptable to
    the Grammar Police but is also just as correct.

    Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* fix const issues with some functions (Nicolas Pitre, 2007-10-17)

    Two functions, namely write_idx_file() and open_pack_file(), currently
    return a const pointer. However that pointer is either a copy of the
    first argument, or set to a malloc'd buffer when that first argument is
    null. In the latter case it is wrong to qualify that pointer as const,
    since ownership of the buffer is transferred to the caller to dispose
    of, and obviously the free() function is not meant to be passed const
    pointers.

    Making the return pointer non-const causes a warning when the first
    argument is returned, since that argument is also marked const. The
    correct thing to do is therefore to remove the const qualifiers,
    avoiding the need for ugly casts only to silence some warnings.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
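    A minimal C sketch of the ownership issue this commit describes (the
    function name and string are hypothetical, not git's actual code); both
    the parameter and the return type stay non-const so the caller can
    free() the buffer in the allocated case:

        #include <stdlib.h>
        #include <string.h>

        /* Illustrative only: returns the caller's name unchanged, or an
         * allocated buffer whose ownership passes to the caller. */
        static char *pick_output_name(char *requested_name)
        {
                if (requested_name)
                        return requested_name;      /* still owned by the caller */
                return strdup("pack-XXXXXX.idx");    /* caller must free() this */
        }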
* pack-objects.c: fix some global variable abuse and memory leaks (Nicolas Pitre, 2007-10-17)

    To keep things well layered, sha1close() now returns the file descriptor
    when it doesn't close the file.

    An ugly cast was added to the return of write_idx_file() to avoid a
    warning. A proper fix will come separately.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* pack-objects: no delta possible with only one object in the list (Nicolas Pitre, 2007-10-17)

    ... so don't even try in that case, and save another useless line of
    progress display.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* more compact progress display (Nicolas Pitre, 2007-10-17)

    Each progress can be on a single line instead of two.

    [sp: Changed "Checking files out" to "Checking out files" at Johannes
    Sixt's suggestion as it better explains the action that is taking place]

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
* Merge branch 'jc/autogc' (Junio C Hamano, 2007-10-03)

    * jc/autogc:
      git-gc --auto: run "repack -A -d -l" as necessary.
      git-gc --auto: restructure the way "repack" command line is built.
      git-gc --auto: protect ourselves from accumulated cruft
      git-gc --auto: add documentation.
      git-gc --auto: move threshold check to need_to_gc() function.
      repack -A -d: use --keep-unreachable when repacking
      pack-objects --keep-unreachable
      Export matches_pack_name() and fix its return value
      Invoke "git gc --auto" from commit, merge, am and rebase.
      Implement git gc --auto
* pack-objects --keep-unreachable (Junio C Hamano, 2007-09-17)

    This new option is meant to be used in conjunction with the options
    "git repack -a -d" usually invokes the underlying pack-objects with.
    When this option is given, objects unreachable from the refs in packs
    named with the --unpacked= option are added to the resulting pack, in
    addition to the reachable objects that are not in packs marked with
    *.keep files.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* builtin-pack-objects.c: avoid bogus gcc warnings (Junio C Hamano, 2007-09-14)

    These empty statement macros can solicit bogus "statement with no
    effect" warnings; squelch them.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* threaded delta search: proper locking for cache accounting (Nicolas Pitre, 2007-09-12)

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* threaded delta search: add pack.threads config variable (Nicolas Pitre, 2007-09-10)

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* fix threaded delta search locking (Nicolas Pitre, 2007-09-10)

    Found by Jeff King.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* threaded delta search: specify number of threads at run time (Nicolas Pitre, 2007-09-09)

    This adds a --threads=<n> parameter to 'git pack-objects' with
    documentation.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* threaded delta search: better chunk split point (Nicolas Pitre, 2007-09-09)

    Try to keep objects with the same name hash together.

    Suggested by Martin Koegler.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* threaded delta search: refine work allocation (Nicolas Pitre, 2007-09-09)

    With this, each thread gets repeatedly assigned the next available
    chunk of objects to process until the whole list is done. The idea is
    to have reasonably small chunks so that all CPUs remain busy with a
    minimum number of threads for as long as there is data to process.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
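    A minimal sketch of the "grab the next small chunk" scheme this commit
    describes (all names and the chunk size are hypothetical, not the
    actual pack-objects code); each worker thread calls claim_chunk() in a
    loop until it returns 0:

        #include <pthread.h>

        struct work_queue {
                pthread_mutex_t lock;
                unsigned int next;        /* index of the first unclaimed object */
                unsigned int total;       /* number of objects to deltify */
                unsigned int chunk_size;  /* objects handed out per request */
        };

        /* Hand the caller the next chunk of objects, if any remain. */
        static int claim_chunk(struct work_queue *q,
                               unsigned int *start, unsigned int *count)
        {
                int claimed = 0;

                pthread_mutex_lock(&q->lock);
                if (q->next < q->total) {
                        *start = q->next;
                        *count = (q->total - q->next < q->chunk_size)
                                 ? q->total - q->next : q->chunk_size;
                        q->next += *count;
                        claimed = 1;
                }
                pthread_mutex_unlock(&q->lock);
                return claimed;
        }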
* basic threaded delta search (Nicolas Pitre, 2007-09-06)

    This is still rough, hence it is disabled by default. You need to
    compile with "make THREADED_DELTA_SEARCH=1 ..." at the moment.

    Threading is done on different portions of the object list to be
    deltified. This is currently done by splitting the list into n parts
    and then spawning a thread for each of them. A better method would
    consist of splitting the list into many smaller parts and having the n
    threads pick the next part available.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* rearrange delta search progress reporting (Nicolas Pitre, 2007-09-06)

    This is to help threadification of the delta search code, with a bonus
    consistency check.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* localize window memory usage accounting (Nicolas Pitre, 2007-09-05)

    This is to help threadification of delta searching.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* straighten the list of objects to deltify (Nicolas Pitre, 2007-09-05)

    Not all objects are subject to deltification, so avoid carrying those
    along, and provide the real count to the progress display.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Keep last used delta base in the delta window (Junio C Hamano, 2007-09-01)

    This is based on Martin Koegler's idea to keep the object that was
    successfully used as the base of the delta when it is about to fall off
    the edge of the window.

    Instead of doing so only for the objects at the edge of the window,
    this makes the window an LRU eviction mechanism. If an entry is used as
    a base, it is moved to the last of the queue to be evicted.

    This is a quick-and-dirty implementation, as it keeps the original
    implementation of the data structure used for the window. This
    originally was done as an array, not as an array of pointers, because
    it was meant to be used as a cyclic FIFO buffer and a plain array
    avoids an extra pointer indirection, while its FIFOness meant that we
    are not "moving" the entries like this patch does. The runtimes of the
    three versions were comparable.

    It seems to make the resulting chain even shorter, which can only be
    good.

    (stock "master") 15782196 bytes
        chain length =  1: 2972 objects
        chain length =  2: 2651 objects
        chain length =  3: 2369 objects
        chain length =  4: 2121 objects
        chain length =  5: 1877 objects
        ...
        chain length = 46:  490 objects
        chain length = 47:  515 objects
        chain length = 48:  527 objects
        chain length = 49:  570 objects
        chain length = 50:  408 objects

    (with your patch) 15745736 bytes (0.23% smaller)
        chain length =  1: 3137 objects
        chain length =  2: 2688 objects
        chain length =  3: 2322 objects
        chain length =  4: 2146 objects
        chain length =  5: 1824 objects
        ...
        chain length = 46:  503 objects
        chain length = 47:  509 objects
        chain length = 48:  536 objects
        chain length = 49:  588 objects
        chain length = 50:  357 objects

    (with this patch) 15612086 bytes (1.08% smaller)
        chain length =  1: 4831 objects
        chain length =  2: 3811 objects
        chain length =  3: 2964 objects
        chain length =  4: 2352 objects
        chain length =  5: 1944 objects
        ...
        chain length = 46:  327 objects
        chain length = 47:  353 objects
        chain length = 48:  304 objects
        chain length = 49:  298 objects
        chain length = 50:  135 objects

    [jc: this is with code simplification follow-up from Nico]

    Signed-off-by: Junio C Hamano <gitster@pobox.com>
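    A hypothetical sketch of the LRU move this commit describes: when slot
    i of the delta window is used as a base, rotate it to the
    most-recently-used end so it is the last to fall off the window. Git's
    actual window structure and bookkeeping differ; this only illustrates
    the eviction-order idea.

        #include <string.h>

        struct unpacked { void *data; };  /* stand-in for the real window entry */

        static void mark_recently_used(struct unpacked **window,
                                       int window_size, int i)
        {
                struct unpacked *used = window[i];

                /* shift everything after slot i down by one... */
                memmove(&window[i], &window[i + 1],
                        (window_size - i - 1) * sizeof(*window));
                /* ...and park the used base in the most-recently-used slot */
                window[window_size - 1] = used;
        }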
* fix same sized delta logic (Nicolas Pitre, 2007-08-30)

    The code favoring shallower deltas when size is equal was triggered
    only when the previous delta was also cached. There should be no
    relation between cached deltas and same sized deltas.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* pack-objects: check return value from read_sha1_file() (Junio C Hamano, 2007-08-25)

    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Make thin-pack generation subproject aware. (Linus Torvalds, 2007-08-19)

    When a thin pack wants to send a tree object at "sub/dir", and the
    commit that is common between the sender and the receiver that is used
    as the base object has a subproject at that path, we should not try to
    use the data at "sub/dir" of the base tree as a tree object. It is not
    a tree to begin with, and more importantly, the commit object there
    does not have to even exist.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* pack-objects: remove bogus arguments to delta_cacheable() (Nicolas Pitre, 2007-08-15)

    Not only are they unused, but the order in the function declaration and
    the actual usage don't match.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Use xmkstemp() instead of mkstemp() (Luiz Fernando N. Capitulino, 2007-08-14)

    xmkstemp() performs error checking and prints a standard error message
    when an error occurs.

    Signed-off-by: Luiz Fernando N. Capitulino <lcapitulino@mandriva.com.br>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
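    A hedged sketch of what a "do or die" mkstemp wrapper of this kind
    might look like; the real helper lives in git's wrapper code, and the
    error text and exit code here are illustrative only:

        #include <stdio.h>
        #include <stdlib.h>

        static int xmkstemp_sketch(char *template)
        {
                int fd = mkstemp(template);

                if (fd < 0) {
                        /* report the failure in one standard place... */
                        fprintf(stderr, "fatal: unable to create temporary file: %s\n",
                                template);
                        exit(128);
                }
                /* ...so callers only ever see a valid descriptor */
                return fd;
        }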
* Pack-objects: properly initialize the depth value (Nicolas Pitre, 2007-07-12)

    Commit 5a235b5e was missing this little detail. Otherwise your pack
    will explode.

    Problem noted by Brian Downing.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* reduce git-pack-objects memory usage a little more (Nicolas Pitre, 2007-07-12)

    The delta depth doesn't have to be stored in the global object array
    structure since it is only used during the deltification pass.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Add pack-objects window memory usage limit (Brian Downing, 2007-07-12)

    This adds an option (--window-memory=N) and configuration variable
    (pack.windowMemory = N) to limit the memory size of the pack-objects
    delta search window. This works by removing the oldest unpacked objects
    whenever the total size goes above the limit. It will always leave at
    least one object, though, so as not to completely eliminate the
    possibility of computing deltas.

    This is an extra limit on top of the normal window size (--window=N);
    the window will not dynamically grow above the fixed number of entries
    specified to fill the memory limit.

    With this, repacking a repository with a mix of large and small objects
    is possible even with a very large window.

    Cleaner and correct circular buffer handling courtesy of Nicolas Pitre.

    Signed-off-by: Brian Downing <bdowning@lavos.net>
    Acked-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
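    An illustrative sketch of the eviction rule described above: drop the
    oldest window entries once the byte limit is exceeded, but always keep
    at least one so a delta can still be attempted. The list-based
    structure and names here are hypothetical; the real window is an array
    inside pack-objects.

        #include <stdlib.h>

        struct win_entry {
                struct win_entry *next;
                void *buf;           /* unpacked object data */
                size_t buf_size;
        };

        struct win_state {
                struct win_entry *oldest;  /* head of the list, evicted first */
                size_t memory_used;        /* sum of buf_size over all entries */
                size_t memory_limit;       /* --window-memory / pack.windowMemory */
                unsigned int entries;
        };

        static void enforce_window_memory_limit(struct win_state *w)
        {
                while (w->memory_used > w->memory_limit && w->entries > 1) {
                        struct win_entry *victim = w->oldest;

                        w->oldest = victim->next;
                        w->memory_used -= victim->buf_size;
                        w->entries--;
                        free(victim->buf);
                        free(victim);
                }
        }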
* Don't try to delta if target is much smaller than source (Brian Downing, 2007-07-12)

    Add a new try_delta heuristic. Don't bother trying to make a delta if
    the target object size is much smaller (currently 1/32) than the
    source, as it's very likely not going to get a match. Even if it does,
    you will have to read at least 32x the size of the new file to
    reassemble it, which isn't such a good deal.

    This leads to a considerable performance improvement when deltifying a
    mix of small and large files with a very large window, because you
    don't have to wait for the large files to percolate out of the window
    before things start going fast again.

    Signed-off-by: Brian Downing <bdowning@lavos.net>
    Acked-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
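    A minimal sketch of that size heuristic (the helper is hypothetical;
    the 1/32 factor mirrors the "currently 1/32" in the message above):

        /* Skip the delta attempt when the target is far smaller than the
         * candidate base: a match is unlikely, and even a hit would force
         * reading a much larger base just to reassemble a tiny object. */
        static int worth_trying_delta(unsigned long target_size,
                                      unsigned long source_size)
        {
                if (target_size < source_size / 32)
                        return 0;
                return 1;
        }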
* apply delta depth bias to already deltified objects (Nicolas Pitre, 2007-07-12)

    We already apply a bias on the initial delta attempt, with max_size
    being a function of the base object depth. This has the effect of
    favoring shallower deltas even if deeper deltas could be smaller, and
    therefore creating a wider delta tree (see commits 4e8da195 and
    c3b06a69).

    This principle should also be applied to all delta attempts for the
    same object and not only the first attempt. With this, the criteria for
    the best delta is not only its size but also its depth, so that a
    shallower delta might be selected even if it is larger than a deeper
    one. Even if some deltas get larger, they allow for wider delta trees,
    making the depth limit less quickly reached and therefore better deltas
    can be subsequently found, keeping the resulting pack size even
    smaller. Runtime access to the pack should also benefit from shallower
    deltas.

    Testing on different repositories showed slightly faster repacks,
    smaller resulting packs, and a much nicer curve for delta depth
    distribution with no more peak at the maximum depth level. Improvements
    are even more significant with smaller depth limits.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
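    A hedged sketch of a depth bias of this kind: shrink the size budget
    for a candidate delta as the base sits deeper in a chain, so a
    shallower base can win even if a deeper delta would be slightly
    smaller. The formula below is illustrative only and is not git's exact
    max_size computation.

        static unsigned long biased_max_size(unsigned long max_size,
                                             unsigned int base_depth,
                                             unsigned int max_depth)
        {
                if (base_depth >= max_depth)
                        return 0;  /* base already at the depth limit: no delta */
                /* linearly reduce the allowed delta size with base depth */
                return max_size * (max_depth - base_depth) / max_depth;
        }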
* pack-objects: Prefer shallower deltas if the size is equal (Brian Downing, 2007-07-08)

    Change "try_delta" so that if it finds a delta that has the same size
    but shallower depth than the existing delta, it will prefer the
    shallower one. This makes certain delta trees vastly less deep.

    Signed-off-by: Brian Downing <bdowning@lavos.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>
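    A small sketch of that tie-break, with illustrative names rather than
    the actual try_delta variables:

        /* Adopt a new candidate delta if it is smaller, or if it is the
         * same size but its chain would be shallower. */
        static int prefer_new_delta(unsigned long new_size, unsigned int new_depth,
                                    unsigned long best_size, unsigned int best_depth)
        {
                if (new_size < best_size)
                        return 1;
                return new_size == best_size && new_depth < best_depth;
        }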
* War on whitespace (Junio C Hamano, 2007-06-07)

    This uses "git-apply --whitespace=strip" to fix whitespace errors that
    have crept in to our source files over time. There are a few files
    that need to have trailing whitespaces (most notably, test vectors).
    The results still pass the tests, and the build result in the
    Documentation/ area is unchanged.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Unify write_index_file functions (Geert Bosch, 2007-06-02)

    This patch unifies the write_index_file functions in
    builtin-pack-objects.c and index-pack.c. As the name "index" is
    overloaded in git, move in the direction of using "idx" and "pack idx"
    when referring to the pack index. There should be no change in
    functionality.

    Signed-off-by: Geert Bosch <bosch@gnat.com>
    Acked-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
* fix repack with --max-pack-size (Nicolas Pitre, 2007-05-30)

    Two issues here:

    1) git-repack -a --max-pack-size=10 on the GIT repo dies pretty quick.
       There is a lot of confusion about deltas that were supposed to be
       reused from another pack but that get stored undeltified due to the
       pack limit, and the object size doesn't match entry->size anymore.
       This test is not really worth the complexity for determining when it
       is valid, so get rid of it.

    2) If the pack limit is reached, the object buffer is freed, including
       when it comes from cached delta data. In practice the object will be
       stored in a subsequent pack undeltified, but let's make sure no
       pointer to freed data subsists by clearing entry->delta_data.

    I also reorganized that code a bit to make it more readable.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
* builtin-pack-object: cache small deltas (Martin Koegler, 2007-05-29)

    Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
* git-pack-objects: cache small deltas between big objects (Martin Koegler, 2007-05-29)

    Creating deltas between big blobs is a CPU and memory intensive task.
    In the writing phase, all (not reused) deltas are redone.

    This patch adds support for caching deltas from the deltifying phase,
    so that the writing phase is faster.

    The caching is limited to small deltas to avoid increasing memory usage
    very much. The implemented limit is (memory needed to create the
    delta)/1024.

    Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
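    A sketch of that caching rule; the function and parameter names are
    hypothetical, and the /1024 divisor simply echoes the limit quoted in
    the message above:

        /* Keep the computed delta around for the writing phase only when
         * it is tiny compared to the memory it took to create it. */
        static int should_cache_delta(unsigned long delta_size,
                                      unsigned long creation_memory)
        {
                return delta_size < creation_memory / 1024;
        }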
* builtin-pack-objects: don't fail, if delta is not possible (Martin Koegler, 2007-05-29)

    If builtin-pack-objects runs out of memory while finding the best
    deltas, it bails out with an error.

    If the delta index creation fails (because there is not enough memory),
    we can downgrade the error message to a warning and continue with the
    next object.

    Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
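    An illustrative sketch of that recovery, assuming git's internal
    create_delta_index() and warning() helpers; the wrapper, the headers
    listed, and the exact warning text are assumptions rather than the
    actual patch:

        #include "cache.h"   /* git-internal: warning() */
        #include "delta.h"   /* git-internal: create_delta_index() */

        /* Returning NULL means "no delta for this pair"; the caller simply
         * stores the object undeltified instead of aborting the pack. */
        static struct delta_index *index_or_skip(const void *src_buf,
                                                 unsigned long src_size)
        {
                struct delta_index *index = create_delta_index(src_buf, src_size);

                if (!index)
                        warning("suboptimal pack - out of memory");
                return index;
        }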
* Merge branch 'dh/repack' (early part) (Junio C Hamano, 2007-05-29)

    * 'dh/repack' (early part):
      Ensure git-repack -a -d --max-pack-size=N deletes correct packs
      pack-objects: clarification & option checks for --max-pack-size
      git-repack --max-pack-size: add option parsing to enable feature
      git-repack --max-pack-size: split packs as asked by write_{object,one}()
      git-repack --max-pack-size: write_{object,one}() respect pack limit
      git-repack --max-pack-size: new file statics and code restructuring
      Alter sha1close() 3rd argument to request flush only
* pack-objects: clarification & option checks for --max-pack-size (Dana How, 2007-05-23)

    Explain the special code for detecting a corner-case error, and
    complain about --stdout & --max-pack-size being used together.

    Signed-off-by: Dana L. How <danahow@gmail.com>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
* git-repack --max-pack-size: add option parsing to enable feature (Dana L. How, 2007-05-20)

    Add --max-pack-size parsing and usage messages. Upgrade git-repack.sh
    to handle multiple packfile names, and build packfiles in
    GIT_OBJECT_DIRECTORY not GIT_DIR. Update documentation.

    Signed-off-by: Dana L. How <danahow@gmail.com>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
* git-repack --max-pack-size: split packs as asked by write_{object,one}() (Dana L. How, 2007-05-20)

    Rewrite write_pack_file() to break to a new packfile whenever
    write_object/write_one request it, and correct the header's object
    count in the previous packfile. Change write_index_file() to write an
    index for just the objects in the most recent packfile.

    Signed-off-by: Dana L. How <danahow@gmail.com>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
* git-repack --max-pack-size: write_{object,one}() respect pack limit (Dana L. How, 2007-05-20)

    With --max-pack-size, generate the appropriate write limit for each
    object and check against it before each group of writes. Update delta
    usability rules to handle the base being in a previously-written pack.
    Inline sha1write_compress() so we know the exact size of the written
    data when it needs to be compressed. Detect and return write "failure".

    Signed-off-by: Dana L. How <danahow@gmail.com>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
* git-repack --max-pack-size: new file statics and code restructuring (Dana L. How, 2007-05-20)

    Add "pack_size_limit", the limit specified by --max-pack-size,
    "written_list", the list of objects written to the current pack, and
    "nr_written", the number of objects in written_list. Put "base_name"
    at file scope again and add forward declarations. Move the
    write_index_file() call from cmd_pack_objects() to write_pack_file()
    since only the latter will know how many times to call
    write_index_file().

    Signed-off-by: Dana L. How <danahow@gmail.com>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
* builtin-pack-objects: remove unnecessary code for no-delta (Junio C Hamano, 2007-05-22)

    As we do not consider objects marked as "no-delta" early, there is no
    point in checking if the other objects already in the delta window are
    marked as such -- "no-delta" objects will not enter the window to begin
    with.

    Pointed out by Nico.

    Signed-off-by: Junio C Hamano <junkio@cox.net>
* | Teach "delta" attribute to pack-objects.Junio C Hamano2007-05-21
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This teaches pack-objects to use .gitattributes mechanism so that the user can specify certain blobs are not worth spending CPU cycles to attempt deltification. The name of the attrbute is "delta", and when it is set to false, like this: == .gitattributes == *.jpg -delta they are always stored in the plain-compressed base object representation. Signed-off-by: Junio C Hamano <junkio@cox.net>
* pack-objects: pass fullname down to add_object_entry() (Junio C Hamano, 2007-05-21)

    Instead of giving a hash for grouping, pass fullname to
    add_object_entry(). I want to add a "do not try deltifying this object"
    bit to object_entry based on the settings in .gitattributes, and
    hashing the name down too early would interfere with that plan.

    Signed-off-by: Junio C Hamano <junkio@cox.net>
* Merge branch 'dh/pack' (Junio C Hamano, 2007-05-20)

    * dh/pack:
      Custom compression levels for objects and packs
* Custom compression levels for objects and packs (Dana How, 2007-05-10)

    Add config variables pack.compression and core.loosecompression, and
    switch --compression=level to pack-objects.

    Loose objects will be compressed using core.loosecompression if set,
    else core.compression if set, else Z_BEST_SPEED. Packed objects will be
    compressed using --compression=level if seen, else pack.compression if
    set, else core.compression if set, else Z_DEFAULT_COMPRESSION. This is
    the "pack compression level".

    Loose objects added to a pack undeltified will be recompressed to the
    pack compression level if it is unequal to the current loose
    compression level by the preceding rules, or if the loose object was
    written while core.legacyheaders = true. Newly deltified loose objects
    are always compressed to the current pack compression level.

    Previously packed objects added to a pack are recompressed to the
    current pack compression level exactly when their deltification status
    changes, since the previous pack data cannot be reused.

    In either case, the --no-reuse-object switch from the first patch below
    will always force recompression to the current pack compression level,
    instead of assuming the pack compression level hasn't changed and pack
    data can be reused when possible.

    This applies on top of the following patches from Nicolas Pitre:
      [PATCH] allow for undeltified objects not to be reused
      [PATCH] make "repack -f" imply "pack-objects --no-reuse-object"

    Signed-off-by: Dana L. How <danahow@gmail.com>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
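    A compact sketch of the precedence rules spelled out above. The two
    functions and the "-1 means unset" convention are assumptions for
    illustration; the real option and config plumbing in git is more
    involved, but the fallback order matches the message.

        #include <zlib.h>   /* Z_BEST_SPEED, Z_DEFAULT_COMPRESSION */

        /* --compression=level, else pack.compression, else core.compression,
         * else zlib's default. */
        static int pack_compression_level(int cmdline, int pack_cfg, int core_cfg)
        {
                if (cmdline >= 0)
                        return cmdline;
                if (pack_cfg >= 0)
                        return pack_cfg;
                if (core_cfg >= 0)
                        return core_cfg;
                return Z_DEFAULT_COMPRESSION;
        }

        /* core.loosecompression, else core.compression, else Z_BEST_SPEED. */
        static int loose_compression_level(int loose_cfg, int core_cfg)
        {
                if (loose_cfg >= 0)
                        return loose_cfg;
                if (core_cfg >= 0)
                        return core_cfg;
                return Z_BEST_SPEED;
        }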
* deprecate the new loose object header format (Nicolas Pitre, 2007-05-10)

    Now that we encourage and actively preserve objects in a packed form
    more aggressively than we did at the time the new loose object format
    and core.legacyheaders were introduced, that extra loose object format
    doesn't appear to be worth it anymore.

    Because the packing of loose objects has to go through the delta match
    loop anyway, and since most of them should end up being deltified in
    most cases, there is really little advantage to having this parallel
    loose object format, as the CPU savings it might provide is rather lost
    in the noise in the end.

    This patch gets rid of core.legacyheaders, preserves the legacy format
    as the only writable loose object format, and deprecates the other one
    to keep things simpler.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
* allow for undeltified objects not to be reused (Nicolas Pitre, 2007-05-10)

    Currently non-deltified object data is always reused when possible.
    This means that any change to core.compression has no effect on those
    objects, as they don't get recompressed when repacking them.

    Let's add a --no-reuse-object flag to git-repack in order to force
    recompression of all objects when desired.

    Signed-off-by: Nicolas Pitre <nico@cam.org>
    Signed-off-by: Junio C Hamano <junkio@cox.net>
* Increase pack.depth default to 50 (Theodore Ts'o, 2007-05-08)

    Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
    Signed-off-by: Junio C Hamano <junkio@cox.net>