path: root/fetch-pack.h
Commit message (Author, Date)
* filter_refs(): delete matched refs from sought list (Michael Haggerty, 2012-09-12)

  Remove any references that are available from the remote from the sought list (rather than overwriting their names with NUL characters, as previously). Mark matching entries by writing a non-NULL pointer to string_list_item::util during the iteration, then use filter_string_list() later to filter out the entries that have been marked.

  Document this aspect of fetch_pack() in a comment in the header file. (More documentation is obviously still needed.)

  Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

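  A minimal standalone sketch of the mark-then-filter pattern described above. It models the sought list with a plain array whose entries carry a util pointer, rather than using git's actual string_list/filter_string_list() API; the names and refs are illustrative.

      #include <stdio.h>

      /* Each sought entry carries a util pointer; a non-NULL value set
       * during the matching pass marks the entry as found on the remote. */
      struct item {
          const char *string;
          void *util;
      };

      /* Keep only the entries that were never marked, i.e. the refs the
       * remote could not supply. */
      static void filter_unmarked(struct item *items, int *nr)
      {
          int src, dst = 0;

          for (src = 0; src < *nr; src++)
              if (!items[src].util)
                  items[dst++] = items[src];
          *nr = dst;
      }

      int main(void)
      {
          struct item sought[] = {
              { "refs/heads/main",  NULL },
              { "refs/heads/topic", NULL },
          };
          int i, nr = 2;

          sought[0].util = (void *)"matched";  /* pretend the remote advertised it */
          filter_unmarked(sought, &nr);
          for (i = 0; i < nr; i++)
              printf("still sought: %s\n", sought[i].string);
          return 0;
      }
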
* Change fetch_pack() and friends to take string_list arguments (Michael Haggerty, 2012-09-12)

  Instead of juggling <nr_heads,heads> (sometimes called <nr_match,match>), pass around the list of references to be sought in a single string_list variable called "sought". Future commits will make more use of string_list functionality.

  Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* fetch_pack(): reindent function decl and defn (Michael Haggerty, 2012-09-12)

  Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* fetch-pack: new --stdin option to read refs from stdin (Ivan Todoroski, 2012-04-02)

  If a remote repo has too many tags (or branches), cloning it over the smart HTTP transport can fail because remote-curl.c puts all the refs from the remote repo on the fetch-pack command line.

  This can make the command line longer than the global OS command line limit, causing fetch-pack to fail. This is especially a problem on Windows, where the command line limit is orders of magnitude shorter than on Linux. There are already real repos out there that msysGit cannot clone over smart HTTP due to this problem.

  Here is an easy way to trigger this problem:

      git init too-many-refs
      cd too-many-refs
      echo bla > bla.txt
      git add .
      git commit -m test
      sha=$(git rev-parse HEAD)
      tag=$(perl -e 'print "bla" x 30')
      for i in `seq 50000`; do
          echo $sha refs/tags/$tag-$i >> .git/packed-refs
      done

  Then share this repo over the smart HTTP protocol and try cloning it:

      $ git clone http://localhost/.../too-many-refs/.git
      Cloning into 'too-many-refs'...
      fatal: cannot exec 'fetch-pack': Argument list too long

  50k tags is obviously an absurd number, but it is required to demonstrate the problem on Linux because it has a much more generous command line limit. On Windows the clone fails with as little as 500 tags in the above loop, which is getting uncomfortably close to the number of tags you might see in real long-lived repos.

  This is not just theoretical: msysGit is already failing to clone our company repo due to this. It's a large repo converted from CVS, with nearly 10 years of history.

  Four possible solutions were discussed on the Git mailing list (in no particular order):

  1) Call fetch-pack multiple times with smaller batches of refs. This was dismissed as inefficient and inelegant.

  2) Add option --refs-fd=$n to pass an fd from which to read the refs. This was rejected because inheriting descriptors other than stdin/stdout/stderr through exec() is apparently problematic on Windows, plus it would require changes to the run-command API to open extra pipes.

  3) Add option --refs-from=$tmpfile to pass the refs using a temp file. This was not favored because of the temp file requirement.

  4) Add option --stdin to pass the refs on stdin, one per line. In the end this option was chosen as the most efficient and the most desirable from a scripting perspective.

  There was however a small complication when using stdin to pass refs to fetch-pack. The --stateless-rpc option to fetch-pack also uses stdin for communication with the remote server. If we are going to sneak refs on stdin line by line, it would have to be done very carefully in the presence of --stateless-rpc, because when reading refs line by line we might read ahead too much data into our buffer and eat some of the remote protocol data which is also coming on stdin.

  One way to solve this would be to refactor get_remote_heads() in fetch-pack.c to accept a residual buffer from our stdin line parsing above, but this function is used in several places so other callers would be burdened by this residual buffer interface even when most of them don't need it.

  In the end we settled on the following solution: if --stdin is specified without --stateless-rpc, fetch-pack would read the refs from stdin one per line, in a script-friendly format.

  However, if --stdin is specified together with --stateless-rpc, fetch-pack would read the refs from stdin in packetized format (pkt-line) with a flush packet terminating the list of refs. This way we can read the exact number of bytes that we need from stdin, and then get_remote_heads() can continue reading from the same fd without losing a single byte of remote protocol data.

  This way the --stdin option only loses generality and scriptability when used together with --stateless-rpc, which is not easily scriptable anyway because it also uses pkt-line when talking to the remote server.

  Signed-off-by: Ivan Todoroski <grnch@gmx.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

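  A rough standalone sketch of the packetized reading described above, assuming the usual pkt-line framing (a 4-hex-digit length that counts the 4-byte header itself, with "0000" as the flush packet). packet_read_line() here is a local stand-in written for this example, not git's pkt-line API.

      #include <stdio.h>
      #include <stdlib.h>

      /* Read one pkt-line from `in` into buf; return the payload length,
       * 0 for a flush packet, or -1 on EOF or a malformed packet. */
      static int packet_read_line(FILE *in, char *buf, size_t size)
      {
          char hdr[5] = { 0 };
          unsigned long len;

          if (fread(hdr, 1, 4, in) != 4)
              return -1;
          len = strtoul(hdr, NULL, 16);
          if (len == 0)
              return 0;               /* "0000": end of the ref list */
          len -= 4;                   /* the header counts toward the length */
          if (len >= size || fread(buf, 1, len, in) != len)
              return -1;
          buf[len] = '\0';
          return (int)len;
      }

      int main(void)
      {
          char ref[1024];
          int len;

          /* Consume exactly the packetized ref list; whatever follows the
           * flush packet is left on stdin for the next reader. */
          while ((len = packet_read_line(stdin, ref, sizeof(ref))) > 0) {
              if (ref[len - 1] == '\n')
                  ref[len - 1] = '\0';
              printf("sought: %s\n", ref);
          }
          return len < 0 ? 1 : 0;
      }
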
* standardize brace placement in struct definitions (Jonathan Nieder, 2011-03-16)

  In struct definitions, unlike functions, the prevailing style is for the opening brace to go on the same line as the struct name, like so:

      struct foo {
          int bar;
          char *baz;
      };

  Indeed, grepping for 'struct [a-z_]* {$' yields about 5 times as many matches as 'struct [a-z_]*$'.

  Linus sayeth:

      Heretic people all over the world have claimed that this inconsistency is ... well ... inconsistent, but all right-thinking people know that (a) K&R are _right_ and (b) K&R are right.

  Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Smart fetch over HTTP: client side (Shawn O. Pearce, 2009-11-04)

  The git-remote-curl backend detects if the remote server supports the git-upload-pack service, and if so, runs git-fetch-pack locally in a pipe to generate the want/have commands.

  The advertisements from the server that were obtained during the discovery are passed into git-fetch-pack before the POST request starts, permitting server capability discovery and enablement.

  Common objects that are discovered are appended onto the request as have lines and are sent again on the next request. This allows the remote side to reinitialize its in-memory list of common objects during the next request.

  Because all requests are relatively short, below git-remote-curl's 1 MiB buffer limit, requests will use the standard Content-Length header and be valid HTTP/1.0 POST requests. This makes the fetch client more tolerant of proxy servers which don't support HTTP/1.1 or the chunked transfer encoding.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  CC: Daniel Barkalow <barkalow@iabervon.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

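  Not remote-curl.c itself, just a hedged libcurl sketch of the buffered-request idea: handing the complete body to the library up front makes it send a Content-Length header instead of falling back to chunked transfer encoding. The URL and payload below are placeholders.

      #include <string.h>
      #include <curl/curl.h>

      /* POST a fully buffered request body; because the total size is known,
       * libcurl emits Content-Length rather than chunked encoding. */
      static int post_buffered(const char *url, const char *body, size_t len)
      {
          CURL *curl = curl_easy_init();
          CURLcode res;

          if (!curl)
              return -1;
          curl_easy_setopt(curl, CURLOPT_URL, url);
          curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
          curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)len);
          res = curl_easy_perform(curl);
          curl_easy_cleanup(curl);
          return res == CURLE_OK ? 0 : -1;
      }

      int main(void)
      {
          const char *body = "0000";  /* placeholder request payload */

          curl_global_init(CURL_GLOBAL_DEFAULT);
          return post_buffered("http://localhost/repo.git/git-upload-pack",
                               body, strlen(body));
      }
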
* Teach fetch-pack/upload-pack about --include-tag (Shawn O. Pearce, 2008-03-04)

  The new protocol extension "include-tag" allows the client side of the connection (fetch-pack) to request that the server side of the native git protocol (upload-pack / pack-objects) use --include-tag as it prepares the packfile, thus ensuring that an annotated tag object will be included in the resulting packfile if the object it refers to was also included in the packfile.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

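  For illustration only, a tiny pkt-line writer showing where the capability rides: in the upload-pack dialog, capabilities such as include-tag are appended to the first "want" line. The object id below is a dummy and packet_write() is a local helper, not git's API.

      #include <stdio.h>
      #include <string.h>

      /* Emit one pkt-line: 4 hex digits giving the total length
       * (header included), followed by the payload. */
      static void packet_write(FILE *out, const char *payload)
      {
          fprintf(out, "%04x%s", (unsigned int)(strlen(payload) + 4), payload);
      }

      int main(void)
      {
          const char *id = "1234567890abcdef1234567890abcdef12345678";
          char line[128];

          snprintf(line, sizeof(line), "want %s include-tag\n", id);
          packet_write(stdout, line);  /* the first want carries the capability list */
          fputs("0000", stdout);       /* flush-pkt ends the want list */
          return 0;
      }
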
* Reduce the number of connects when fetching (Daniel Barkalow, 2008-02-05)

  This shares the connection between getting the remote ref list and getting objects in the first batch. (A second connection is still used to follow tags.)

  When we do not fetch objects (i.e. either ls-remote disconnects after getting the list of refs, or we decide we are already up to date), we clean up the connection properly; otherwise the connection is left open and must be cleaned up later, to avoid getting an error message from the remote end when ssh is used.

  Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Always obtain fetch-pack arguments from struct fetch_pack_args (Shawn O. Pearce, 2007-09-19)

  Copying the arguments from a fetch_pack_args into static globals within the builtin-fetch-pack module is error-prone and may lead to cases where arguments supplied via the struct from the new fetch_pack() API may not be honored by the implementation.

  Here we reorganize all of the static globals into a single static struct fetch_pack_args instance and use memcpy() to move the data from the caller-supplied structure into the globals before we execute our pack fetching implementation. This strategy is more robust to additions and deletions of properties.

  As keep_pack is a single bit, we have also introduced lock_pack to mean not only download and store the packfile via index-pack but also lock it against repacking by creating a .keep file when the packfile itself is stored. The caller must remove the .keep file when it is safe to do so.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

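  A standalone sketch of the reorganization described above; the struct name and fields are illustrative, not the real fetch_pack_args layout. Copying the caller's struct wholesale into one static instance means newly added fields cannot be silently dropped the way individually copied globals could be.

      #include <stdio.h>
      #include <string.h>

      struct pack_args {
          int quiet;
          int keep_pack;
          int lock_pack;
          int depth;
      };

      /* A single static instance replaces a pile of loose static globals. */
      static struct pack_args args;

      static void do_fetch_pack(const struct pack_args *caller_args)
      {
          /* One memcpy keeps the internal copy complete as fields come and go. */
          memcpy(&args, caller_args, sizeof(args));
          printf("depth=%d lock_pack=%d\n", args.depth, args.lock_pack);
      }

      int main(void)
      {
          struct pack_args my_args = { .quiet = 1, .lock_pack = 1, .depth = 50 };

          do_fetch_pack(&my_args);
          return 0;
      }
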
* Use 'unsigned:1' when we mean boolean options (Shawn O. Pearce, 2007-09-19)

  These options are all strictly boolean (true/false). It's easier to document this implicitly by making their storage type a single bit.

  There is no compelling memory-space reduction reason for this change; it just makes the structure definition slightly more readable.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>

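  A tiny sketch of the idiom; the field names are made up for the example. Declaring the flags as single-bit unsigned fields documents at the type level that they only ever hold 0 or 1, while non-boolean options keep their full-width types.

      #include <stdio.h>

      struct options {
          unsigned quiet:1;        /* strictly boolean flags ... */
          unsigned keep_pack:1;
          unsigned use_thin_pack:1;
          int depth;               /* ... numeric options keep full types */
      };

      int main(void)
      {
          struct options o = { 0 };

          o.use_thin_pack = 1;
          printf("thin=%d quiet=%d\n", (int)o.use_thin_pack, (int)o.quiet);
          return 0;
      }
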
* Remove pack.keep after ref updates in git-fetch (Shawn O. Pearce, 2007-09-19)

  If we are using a native packfile to perform a git-fetch invocation and the received packfile contained more objects than the configured limits of fetch.unpackLimit/transfer.unpackLimit, then index-pack will output a single line saying "keep\t$sha1\n" to stdout. This line needs to be captured and retained so we can delete the corresponding .keep file ("$GIT_DIR/objects/pack/pack-$sha1.keep") once all refs have been safely updated.

  This trick has long been in use with git-fetch.sh and its lower-level helper git-fetch--tool as a way to allow index-pack to save the new packfile before the refs have been updated and yet avoid a race with any concurrently running git-repack process. It was unfortunately lost when git-fetch.sh was converted to pure C and fetch--tool was no longer being invoked.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

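  A hedged standalone sketch of that bookkeeping: capture the "keep\t$sha1" line, remember the lock file path, and unlink it only after the refs have been updated. The object id is a dummy and the relative .git path is a simplification of $GIT_DIR handling.

      #include <errno.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
          /* The line as index-pack would print it on stdout. */
          const char *line = "keep\t1234567890abcdef1234567890abcdef12345678\n";
          char sha1[41];
          char keep_path[512];

          if (sscanf(line, "keep\t%40s", sha1) != 1)
              return 1;
          snprintf(keep_path, sizeof(keep_path),
                   ".git/objects/pack/pack-%s.keep", sha1);

          /* ... the refs would be updated here; until then the .keep file
           * keeps a concurrent git-repack from pruning the new packfile ... */

          if (unlink(keep_path) && errno != ENOENT)
              perror(keep_path);
          return 0;
      }
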
* Make fetch-pack a builtin with an internal API (Daniel Barkalow, 2007-09-19)

  Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>