path: root/remote-curl.c
* Merge branch 'sp/fix-smart-http-deadlock-on-error' (Junio C Hamano, 2010-08-12)

  * sp/fix-smart-http-deadlock-on-error:
    smart-http: Don't deadlock on server failure
* smart-http: Don't deadlock on server failure (Shawn O. Pearce, 2010-08-06)

  If the remote HTTP server fails (e.g. returns 404 or 500) when we posted
  the RPC to it, we won't have sent anything to the background Git process
  that is supposed to handle the stream.  Because we didn't send anything,
  it is still waiting for input from remote-curl, and remote-curl cannot
  read its response payload, because doing so would lead to a deadlock.

  Send the background task EOF on its input before we try to read its
  response back; that way it will break out of its read loop and terminate.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
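The pattern behind the fix can be shown in isolation. Below is a minimal sketch using plain POSIX pipes rather than git's actual async machinery: when the POST has failed, the parent closes the pipe feeding the helper process before trying to read its reply, so the child's read loop sees EOF and terminates instead of both sides blocking forever.

    /* Minimal sketch (not git's actual code) of the EOF-before-read pattern. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        int to_child[2], from_child[2];
        pipe(to_child);
        pipe(from_child);

        pid_t pid = fork();
        if (pid == 0) {
            /* child: consume all input, then answer once */
            char buf[256];
            close(to_child[1]);
            close(from_child[0]);
            while (read(to_child[0], buf, sizeof(buf)) > 0)
                ; /* drain until EOF */
            const char *msg = "done\n";
            write(from_child[1], msg, strlen(msg));
            _exit(0);
        }

        /* parent */
        close(to_child[0]);
        close(from_child[1]);

        int http_post_failed = 1;      /* pretend the server returned 404/500 */
        if (http_post_failed)
            close(to_child[1]);        /* send EOF *before* reading the reply */

        char reply[256];
        ssize_t n = read(from_child[0], reply, sizeof(reply) - 1);
        if (n > 0) {
            reply[n] = '\0';
            printf("child said: %s", reply);
        }
        close(from_child[0]);
        waitpid(pid, NULL, 0);
        return 0;
    }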
* Merge branch 'rc/maint-curl-helper' (Junio C Hamano, 2010-05-08)

  * rc/maint-curl-helper:
    remote-curl: ensure that URLs have a trailing slash
    http: make end_url_with_slash() public
    t5541-http-push: add test for URLs with trailing slash

  Conflicts:
    remote-curl.c
* remote-curl: ensure that URLs have a trailing slash (Tay Ray Chuan, 2010-04-09)

  Previously, we blindly assumed that URLs passed to the remote-curl helper
  did not end with a trailing slash.

  Use the convenience function end_url_with_slash() from http.[ch] to
  ensure that URLs have a trailing slash on invocation of the remote-curl
  helper, and use the URL as one with a trailing slash throughout.

  It is possible for users to pass a URL with a trailing slash to
  remote-curl by, say, setting it in remote.<name>.url in their git config.
  The resulting requests would otherwise have an empty path component (//)
  and may break implementations of the http git protocol.

  Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
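The idea is simple to sketch. The helper below is a hedged, self-contained stand-in for what end_url_with_slash() accomplishes (it is not git's implementation, which works on a strbuf): normalize the base URL so that appending "info/refs" never produces an empty path component.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* return a copy of url that is guaranteed to end in '/' */
    static char *url_with_trailing_slash(const char *url)
    {
        size_t len = strlen(url);
        int need_slash = (len == 0 || url[len - 1] != '/');
        char *out = malloc(len + need_slash + 1);
        if (!out)
            return NULL;
        memcpy(out, url, len);
        if (need_slash)
            out[len++] = '/';
        out[len] = '\0';
        return out;
    }

    int main(void)
    {
        char *a = url_with_trailing_slash("http://example.com/repo.git");
        char *b = url_with_trailing_slash("http://example.com/repo.git/");
        /* both print ".../repo.git/info/refs", never ".../repo.git//info/refs" */
        printf("%sinfo/refs\n%sinfo/refs\n", a, b);
        free(a);
        free(b);
        return 0;
    }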
* Prompt for a username when an HTTP request 401s (Scott Chacon, 2010-04-01)

  When an HTTP request returns a 401, Git currently fails with a confusing
  message that says little more than that it got a 401.

  Currently, if a user wants to use Git over HTTP, they have to use one URL
  with the username in the URL (e.g. "http://user@host.com/repo.git") for
  write access and another without the username for unauthenticated read
  access (unless they want to be prompted for the password each time).
  However, since the HTTP servers will return a 401 if an action requires
  authentication, we can prompt for username and password if we see this,
  allowing us to use a single URL for both purposes.

  This patch changes http_request to prompt for the username and password,
  then return HTTP_REAUTH so http_get_strbuf can try again.  If it gets a
  401 even when a user/pass is supplied, http_request will now return
  HTTP_NOAUTH, which remote-curl can then use to display a more intelligent
  error message that is less confusing.

  Signed-off-by: Scott Chacon <schacon@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
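The retry flow described above can be illustrated with a small standalone sketch. The enum values, globals, and helper names below mirror the message but are simplified stand-ins, not git's actual http.c code.

    #include <stdio.h>

    enum http_result { HTTP_OK, HTTP_REAUTH, HTTP_NOAUTH };

    static int have_credentials;
    static int credentials_accepted = 1;  /* flip to 0 to exercise HTTP_NOAUTH */

    static enum http_result http_request(const char *url)
    {
        (void)url;
        if (!have_credentials) {
            /* got a 401: prompt for username/password, ask caller to retry */
            have_credentials = 1;      /* ...read them from the terminal... */
            return HTTP_REAUTH;
        }
        if (!credentials_accepted)
            return HTTP_NOAUTH;        /* 401 even with credentials: clearer error */
        return HTTP_OK;
    }

    static enum http_result http_get(const char *url)
    {
        enum http_result ret = http_request(url);
        if (ret == HTTP_REAUTH)
            ret = http_request(url);   /* retry once, now with credentials */
        return ret;
    }

    int main(void)
    {
        if (http_get("http://example.com/repo.git/info/refs") != HTTP_OK)
            fprintf(stderr, "fatal: authentication failed\n");
        return 0;
    }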
* Merge branch 'tc/http-cleanup' (Junio C Hamano, 2010-03-15)

  * tc/http-cleanup:
    remote-curl: init walker only when needed
    remote-curl: use http_fetch_ref() instead of walker wrapper
    http: init and cleanup separately from http-walker
    http-walker: cleanup more thoroughly
    http-push: remove "|| 1" to enable verbose check
    t554[01]-http-push: refactor, add non-ff tests
    t5541-http-push: check that ref is unchanged for non-ff test
* remote-curl: init walker only when needed (Tay Ray Chuan, 2010-03-02)

  Invoke get_http_walker() only when fetching with the dumb protocol.
  Additionally, add an invocation to walker_free() after we're done using
  the walker.

  Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
* remote-curl: use http_fetch_ref() instead of walker wrapper (Tay Ray Chuan, 2010-03-02)

  The http-walker implementation of walker->fetch_ref() doesn't do anything
  special compared to http_fetch_ref() anyway.

  Remove the init_walker() invocation before fetching the ref, since we
  aren't using the walker wrapper and don't need a walker instance anymore.

  Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
* http: init and cleanup separately from http-walker (Tay Ray Chuan, 2010-03-02)

  Previously, all our http operations were done with http-walker.  With the
  new remote-curl helper, we find ourselves using http methods outside of
  http-walker - for example, fetching info/refs.

  Accommodate this by separating the http_init() and http_cleanup()
  invocations from http-walker.

  Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Merge branch 'sp/maint-push-sideband' into sp/push-sideband (Junio C Hamano, 2010-02-05)

  * sp/maint-push-sideband:
    receive-pack: Send hook output over side band #2
    receive-pack: Wrap status reports inside side-band-64k
    receive-pack: Refactor how capabilities are shown to the client
    send-pack: demultiplex a sideband stream with status data
    run-command: support custom fd-set in async
    run-command: Allow stderr to be a caller supplied pipe
    Update git fsck --full short description to mention packs

  Conflicts:
    run-command.c
* run-command: support custom fd-set in async (Erik Faye-Lund, 2010-02-05)

  This patch adds the possibility to supply a set of non-0 file descriptors
  for async process communication instead of the default-created pipe.

  Additionally, we now support bi-directional communication with the async
  procedure, by giving the async function both read and write file
  descriptors.

  To retain compatibility and a similar "API feel" with start_command, we
  require start_async callers to set .out = -1 to get a readable file
  descriptor.  If either of .in or .out is 0, we supply no file descriptor
  to the async process.

  [sp: Note: Erik started this patch, and a huge bulk of it is his work.
  All bugs were introduced later by Shawn.]

  Signed-off-by: Erik Faye-Lund <kusmabite@gmail.com>
  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
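A hedged sketch of how a caller might use the interface this message describes. It leans on git's internal run-command API (struct async, start_async(), finish_async(), write_in_full(), die()), so it is not a standalone program; the field usage follows the commit message rather than any particular tree.

    #include "run-command.h"   /* git-internal header; sketch only */

    /* the async procedure: receives the fds it should read from / write to */
    static int produce(int in, int out, void *data)
    {
        const char *msg = "hello from the async proc\n";
        /* 'in' is unused because the caller did not request an input pipe */
        write_in_full(out, msg, strlen(msg));
        return 0;
    }

    static void demo(void)
    {
        struct async proc;

        memset(&proc, 0, sizeof(proc));
        proc.proc = produce;
        proc.out = -1;  /* ask start_async() for a readable fd back in proc.out */
                        /* leaving proc.in at 0 means: no input fd for the proc */

        if (start_async(&proc))
            die("could not start async process");

        /* ... read the result from proc.out ... */
        close(proc.out);
        finish_async(&proc);
    }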
* Merge branch 'maint' (Junio C Hamano, 2010-01-21)

  * maint:
    merge-recursive: do not return NULL only to cause segfault
    retry request without query when info/refs?query fails
* retry request without query when info/refs?query fails (Tay Ray Chuan, 2010-01-21)

  When "info/refs" is a static file and not behind a CGI handler, some
  servers may not handle a GET request for it with a query string appended
  (eg. "?foo=bar") properly.  If such a request fails, retry it sans the
  query string.

  In addition, ensure that the "smart" http protocol is not used (a service
  has to be specified with "?service=<service name>" to be conformant).

  Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
  Reported-and-tested-by: Yaroslav Halchenko <debian@onerussian.com>
  Acked-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
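The fallback logic reads roughly like the sketch below. The helper names are hypothetical stand-ins (the stub http_get simulates a server that chokes on query strings); the point is the retry without "?service=..." and the demotion to the dumb protocol.

    #include <stdio.h>
    #include <string.h>

    /* stand-in for the real HTTP GET; returns 0 on success */
    static int http_get(const char *url)
    {
        /* pretend the server cannot handle a query string on a static file */
        return strchr(url, '?') ? -1 : 0;
    }

    static int discover_refs(const char *base)
    {
        char url[1024];
        int smart = 1;

        snprintf(url, sizeof(url), "%sinfo/refs?service=git-upload-pack", base);
        if (http_get(url) != 0) {
            /* retry without the query string; the response is then dumb */
            smart = 0;
            snprintf(url, sizeof(url), "%sinfo/refs", base);
            if (http_get(url) != 0)
                return -1;
        }
        printf("fetched %s (%s protocol)\n", url, smart ? "smart" : "dumb");
        return 0;
    }

    int main(void)
    {
        return discover_refs("http://example.com/repo.git/");
    }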
* Merge branch 'jc/symbol-static' (Junio C Hamano, 2010-01-20)

  * jc/symbol-static:
    date.c: mark file-local function static
    Replace parse_blob() with an explanatory comment
    symlinks.c: remove unused functions
    object.c: remove unused functions
    strbuf.c: remove unused function
    sha1_file.c: remove unused function
    mailmap.c: remove unused function
    utf8.c: mark file-local function static
    submodule.c: mark file-local function static
    quote.c: mark file-local function static
    remote-curl.c: mark file-local function static
    read-cache.c: mark file-local functions static
    parse-options.c: mark file-local function static
    entry.c: mark file-local function static
    http.c: mark file-local functions static
    pretty.c: mark file-local function static
    builtin-rev-list.c: mark file-local function static
    bisect.c: mark file-local function static
* remote-curl.c: mark file-local function static (Junio C Hamano, 2010-01-12)

  Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Merge branch 'maint' (Junio C Hamano, 2010-01-12)

  * maint:
    remote-curl: Fix Accept header for smart HTTP connections
    grep: -L should show empty files
    rebase--interactive: Ignore comments and blank lines in peek_next_command
* remote-curl: Fix Accept header for smart HTTP connections (Shawn O. Pearce, 2010-01-12)

  We actually expect to see an application/x-git-upload-pack-result, but we
  lied and said we Accept *-response.  This was a typo on my part when I
  was writing the code.

  Fortunately the wrong Accept header had no real impact, as the deployed
  git-http-backend servers were not testing the Accept header before they
  returned their content.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
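In libcurl terms the fix is just a corrected header string. A hedged sketch follows; the media type names are the real protocol values from the message, while the surrounding request setup is simplified and omits the actual POST.

    #include <curl/curl.h>

    static struct curl_slist *rpc_headers(void)
    {
        struct curl_slist *headers = NULL;
        headers = curl_slist_append(headers,
            "Content-Type: application/x-git-upload-pack-request");
        /* before the fix this said "...-response", which no deployed server checked */
        headers = curl_slist_append(headers,
            "Accept: application/x-git-upload-pack-result");
        return headers;
    }

    int main(void)
    {
        CURL *curl = curl_easy_init();
        if (!curl)
            return 1;
        struct curl_slist *headers = rpc_headers();
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        /* ... perform the POST here ... */
        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
        return 0;
    }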
* Allow curl to rewind the RPC read buffer (Martin Storsjö, 2009-12-01)

  When using multi-pass authentication methods, the curl library may need
  to rewind the read buffers used for providing data to HTTP POST, if data
  has been output before a 401 error is received.

  This is needed only when the first request (when the multi-pass
  authentication method isn't initialized and hasn't received its challenge
  yet) for a certain curl session is a chunked HTTP POST.  As long as the
  current rpc read buffer is the first one, we're able to rewind without
  need for additional buffering.

  The curl library currently starts sending data without waiting for a
  response to the Expect: 100-continue header, due to a bug in curl that
  exists up to curl version 7.19.7.  If the HTTP server doesn't handle
  Expect: 100-continue headers properly (e.g. Lighttpd), the library has to
  start sending data without knowing if the request will be successfully
  authenticated.  In this case, this rewinding solution is not sufficient -
  the whole request will be sent before the 401 error is received.

  Signed-off-by: Martin Storsjo <martin@martin.st>
  Acked-by: Shawn O. Pearce <spearce@spearce.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
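libcurl asks its caller to rewind through the ioctl callback. The following is a hedged sketch of that mechanism; the rpc_buf bookkeeping is a simplified stand-in for what remote-curl actually tracks, but CURLOPT_IOCTLFUNCTION, CURLIOCMD_RESTARTREAD and the curlioerr return codes are real libcurl API.

    #include <curl/curl.h>
    #include <string.h>

    struct rpc_buf {
        const char *buf;   /* data for the chunked POST */
        size_t len;
        size_t pos;        /* how much has already been handed to curl */
        int first_buffer;  /* still on the first buffer, so rewind is possible */
    };

    static curlioerr rpc_ioctl(CURL *handle, int cmd, void *clientp)
    {
        struct rpc_buf *rpc = clientp;
        (void)handle;

        if (cmd == CURLIOCMD_RESTARTREAD) {
            if (rpc->first_buffer) {
                rpc->pos = 0;                /* replay from the start */
                return CURLIOE_OK;
            }
            return CURLIOE_FAILRESTART;      /* earlier data is gone; cannot rewind */
        }
        return CURLIOE_UNKNOWNCMD;
    }

    static size_t rpc_read(char *ptr, size_t size, size_t nmemb, void *clientp)
    {
        struct rpc_buf *rpc = clientp;
        size_t avail = rpc->len - rpc->pos;
        size_t want = size * nmemb;
        if (want > avail)
            want = avail;
        memcpy(ptr, rpc->buf + rpc->pos, want);
        rpc->pos += want;
        return want;
    }

    static void setup(CURL *curl, struct rpc_buf *rpc)
    {
        curl_easy_setopt(curl, CURLOPT_READFUNCTION, rpc_read);
        curl_easy_setopt(curl, CURLOPT_READDATA, rpc);
        curl_easy_setopt(curl, CURLOPT_IOCTLFUNCTION, rpc_ioctl);
        curl_easy_setopt(curl, CURLOPT_IOCTLDATA, rpc);
    }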
* remote-curl.c: fix rpc_out() (Tay Ray Chuan, 2009-11-23)

  Remove the extraneous semicolon (';') at the end of the if statement that
  allowed the code in its block to execute regardless of the condition.

  This fixes pushing to a smart http backend with chunked encoding.

  Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
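The class of bug being removed is easy to demonstrate in miniature (this is illustrative, not the actual rpc_out() code): a stray semicolon terminates the if statement, turning the braces into an unconditional block.

    #include <stdio.h>

    int main(void)
    {
        int avail = 10;

        if (avail == 0); {      /* BUG: the ';' ends the if statement here */
            printf("this still runs even though avail != 0\n");
        }

        if (avail == 0) {       /* fixed: block only runs when avail == 0 */
            printf("this correctly does not run\n");
        }
        return 0;
    }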
* Disable CURLOPT_NOBODY before enabling CURLOPT_PUT and CURLOPT_POST (Martin Storsjö, 2009-11-22)

  This works around a bug in curl versions up to 7.19.4, where disabling
  the CURLOPT_NOBODY option sets the internal state incorrectly considering
  that CURLOPT_PUT was enabled earlier.

  The bug is discussed at http://curl.haxx.se/bug/view.cgi?id=2727981 and
  is corrected in the latest version of curl in CVS.

  This bug usually has no impact on git, but may surface if using
  multi-pass authentication methods.

  Signed-off-by: Martin Storsjo <martin@martin.st>
  Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
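The workaround is purely an ordering concern when reconfiguring a reused curl handle. A hedged sketch (simplified; not the full remote-curl request setup, and the same ordering applies before CURLOPT_PUT):

    #include <curl/curl.h>

    static void prepare_post(CURL *curl)
    {
        /* clear NOBODY first, then enable the request type, so affected
         * libcurl versions (<= 7.19.4) do not confuse their internal state */
        curl_easy_setopt(curl, CURLOPT_NOBODY, 0L);
        curl_easy_setopt(curl, CURLOPT_POST, 1L);
        curl_easy_setopt(curl, CURLOPT_URL,
                         "http://example.com/repo.git/git-upload-pack");
    }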
* Merge branch 'sp/smart-http' (Junio C Hamano, 2009-11-20)

  * sp/smart-http: (37 commits)
    http-backend: Let gcc check the format of more printf-type functions.
    http-backend: Fix access beyond end of string.
    http-backend: Fix bad treatment of uintmax_t in Content-Length
    t5551-http-fetch: Work around broken Accept header in libcurl
    t5551-http-fetch: Work around some libcurl versions
    http-backend: Protect GIT_PROJECT_ROOT from /../ requests
    Git-aware CGI to provide dumb HTTP transport
    http-backend: Test configuration options
    http-backend: Use http.getanyfile to disable dumb HTTP serving
    test smart http fetch and push
    http tests: use /dumb/ URL prefix
    set httpd port before sourcing lib-httpd
    t5540-http-push: remove redundant fetches
    Smart HTTP fetch: gzip requests
    Smart fetch over HTTP: client side
    Smart push over HTTP: client side
    Discover refs via smart HTTP server when available
    http-backend: more explict LocationMatch
    http-backend: add example for gitweb on same URL
    http-backend: use mod_alias instead of mod_rewrite
    ...

  Conflicts:
    .gitignore
    remote-curl.c
* Smart HTTP fetch: gzip requests (Shawn O. Pearce, 2009-11-04)

  The upload-pack requests are mostly plain text and they compress rather
  well.  Deflating them with Content-Encoding: gzip can easily drop the
  size of the request by 50%, reducing the amount of data to transfer as
  we negotiate the common commits.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  CC: Daniel Barkalow <barkalow@iabervon.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
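Compressing the body before the POST boils down to a gzip-wrapped deflate. A hedged sketch using zlib (buffer sizing and error handling are simplified relative to what remote-curl does):

    #include <string.h>
    #include <zlib.h>

    /* gzip-compress 'in' into 'out'; returns compressed length, or 0 on failure */
    static size_t gzip_body(const char *in, size_t in_len,
                            unsigned char *out, size_t out_len)
    {
        z_stream z;
        size_t n;

        memset(&z, 0, sizeof(z));
        /* windowBits = 15 + 16 asks zlib to emit a gzip wrapper */
        if (deflateInit2(&z, Z_BEST_COMPRESSION, Z_DEFLATED,
                         15 + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
            return 0;

        z.next_in = (unsigned char *)in;
        z.avail_in = in_len;
        z.next_out = out;
        z.avail_out = out_len;
        if (deflate(&z, Z_FINISH) != Z_STREAM_END) {
            deflateEnd(&z);
            return 0;
        }
        n = z.total_out;
        deflateEnd(&z);
        /* the request then carries "Content-Encoding: gzip" plus these bytes */
        return n;
    }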
* Smart fetch over HTTP: client side (Shawn O. Pearce, 2009-11-04)

  The git-remote-curl backend detects if the remote server supports the
  git-upload-pack service, and if so, runs git-fetch-pack locally in a pipe
  to generate the want/have commands.

  The advertisements from the server that were obtained during the
  discovery are passed into git-fetch-pack before the POST request starts,
  permitting server capability discovery and enablement.

  Common objects that are discovered are appended onto the request as have
  lines and are sent again on the next request.  This allows the remote
  side to reinitialize its in-memory list of common objects during the next
  request.

  Because all requests are relatively short, below git-remote-curl's 1 MiB
  buffer limit, requests will use the standard Content-Length header and be
  valid HTTP/1.0 POST requests.  This makes the fetch client more tolerant
  of proxy servers which don't support HTTP/1.1 or the chunked transfer
  encoding.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  CC: Daniel Barkalow <barkalow@iabervon.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Smart push over HTTP: client side (Shawn O. Pearce, 2009-11-04)

  The git-remote-curl backend detects if the remote server supports the
  git-receive-pack service, and if so, runs git-send-pack in a pipe to dump
  the command and pack data as a single POST request.

  The advertisements from the server that were obtained during the
  discovery are passed into git-send-pack before the POST request starts.
  This permits git-send-pack to operate largely unmodified.

  For smaller packs (those under 1 MiB) an HTTP/1.0 POST with a
  Content-Length is used, permitting interaction with any server.  The
  1 MiB limit is arbitrary, but is sufficient to fit most deltas created by
  human authors against text sources with the occasional small binary file
  (e.g. a few KiB icon image).  The configuration option http.postBuffer
  can be used to increase (or shrink) this buffer if the default is not
  sufficient.

  For larger packs which cannot be spooled entirely into the helper's
  memory space (due to http.postBuffer being too small), the POST request
  requires HTTP/1.1 and sets "Transfer-Encoding: chunked".  This permits
  the client to upload an unknown amount of data in one HTTP transaction
  without needing to pregenerate the entire pack file locally.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  CC: Daniel Barkalow <barkalow@iabervon.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
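The size-based decision described above maps onto libcurl roughly as follows. This is a hedged sketch: post_buffer mirrors http.postBuffer conceptually, the read callback and buffer management for the chunked path are elided, and the function name is hypothetical.

    #include <curl/curl.h>

    static size_t post_buffer = 1024 * 1024;   /* default: 1 MiB */

    static void prepare_rpc_post(CURL *curl, struct curl_slist **headers,
                                 const char *body, size_t body_len)
    {
        curl_easy_setopt(curl, CURLOPT_POST, 1L);

        if (body_len <= post_buffer) {
            /* whole request fits in memory: plain HTTP/1.0-compatible POST */
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
            curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)body_len);
        } else {
            /* too big to spool: stream it with chunked transfer encoding */
            *headers = curl_slist_append(*headers, "Transfer-Encoding: chunked");
            /* a CURLOPT_READFUNCTION would feed the data incrementally here */
        }
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, *headers);
    }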
* Discover refs via smart HTTP server when available (Shawn O. Pearce, 2009-11-04)

  Instead of loading the cached info/refs, try to use the smart HTTP
  version when the server supports it.

  Since the smart variant is actually the pkt-line stream from the start of
  either upload-pack or receive-pack, we need to parse these through
  get_remote_heads, which requires a background thread to feed its pipe.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  CC: Daniel Barkalow <barkalow@iabervon.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
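For reference, a pkt-line record is a 4-hex-digit length (which counts the four length bytes themselves) followed by the payload, with "0000" acting as a flush packet. A minimal standalone parsing sketch, simplified relative to git's real pkt-line code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* returns payload length, 0 for a flush packet, -1 on error */
    static int pkt_line(const char *in, size_t in_len,
                        char *payload, size_t payload_sz)
    {
        char hex[5];
        long len;

        if (in_len < 4)
            return -1;
        memcpy(hex, in, 4);
        hex[4] = '\0';
        len = strtol(hex, NULL, 16);
        if (len == 0)
            return 0;                    /* flush-pkt: "0000" */
        if (len < 4 || (size_t)len > in_len || (size_t)(len - 4) >= payload_sz)
            return -1;
        memcpy(payload, in + 4, len - 4);
        payload[len - 4] = '\0';
        return (int)(len - 4);
    }

    int main(void)
    {
        const char *sample = "001e# service=git-upload-pack\n0000";
        char buf[64];
        int n = pkt_line(sample, strlen(sample), buf, sizeof(buf));
        if (n > 0)
            printf("payload: %s", buf);  /* "# service=git-upload-pack" */
        return 0;
    }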
* Move WebDAV HTTP push under remote-curl (Shawn O. Pearce, 2009-10-30)

  The remote helper interface now supports the push capability, which can
  be used to ask the implementation to push one or more specs to the remote
  repository.  For remote-curl we implement this by calling the existing
  WebDAV based git-http-push executable.

  Internally the helper interface uses the push_refs transport hook so that
  the complexity of the refspec parsing and matching can be reused between
  remote implementations.  When possible, however, the helper protocol uses
  the source ref name rather than the source SHA-1, thereby allowing the
  helper to access this name if it is useful.

  From Clemens Buchacher <drizzd@aon.at>:
  update http tests according to remote-curl capabilities

   o Pushing packed refs is now fixed.

   o The transport helper fails if refs are already up-to-date.  Add a
     test for that.

   o The transport helper will notice if refs are already up-to-date.
     We therefore need to update server info in the unpacked-refs test.

   o The transport helper will purge deleted branches automatically.

   o Use a variable ($ORIG_HEAD) instead of a full SHA-1 name.

  Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
  Signed-off-by: Clemens Buchacher <drizzd@aon.at>
  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  CC: Daniel Barkalow <barkalow@iabervon.org>
  CC: Mike Hommey <mh@glandium.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
* remote-helpers: Support custom transport options (Shawn O. Pearce, 2009-10-30)

  Some transports, like the native pack transport implemented by
  fetch-pack, support useful features like depth or include tags.  These
  should be exposed if the underlying helper knows how to use them.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  CC: Daniel Barkalow <barkalow@iabervon.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
* remote-helpers: Fetch more than one ref in a batch (Shawn O. Pearce, 2009-10-30)

  Some network protocols (e.g. native git://) are able to fetch more than
  one ref at a time and reduce the overall transfer cost by combining the
  requests into a single exchange.  Instead of feeding each fetch request
  one at a time to the helper, feed all of them at once so the helper can
  decide whether or not it should batch them.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  CC: Daniel Barkalow <barkalow@iabervon.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
* remote-curl: Refactor walker initialization (Shawn O. Pearce, 2009-10-30)

  We will need the walker, url and remote in other functions as the code
  grows larger to support smart HTTP.  Extract this out into a set of
  globals we can easily reference once configured.

  Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
  CC: Daniel Barkalow <barkalow@iabervon.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Allow curl helper to work without a local repository (Daniel Barkalow, 2009-11-03)

  It's okay to use the curl helper without a local repository, so long as
  you don't use "fetch".  There aren't any git programs that would try to
  use it, and it doesn't make sense to try it (since there's nowhere to
  write the results), but we may as well be clear.

  Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
* clone: Supply the right commit hash to post-checkout when -b is used (Björn Steinbrink, 2009-10-14)

  When we use -b <branch>, we may check out something other than what the
  remote's HEAD references, but we still used remote_head to supply the new
  ref value to the post-checkout hook, which is wrong.

  So instead of using remote_head to find the value to be passed to the
  post-checkout hook, we have to use our_head_points_at, which is always
  correctly set up, even if -b is not used.

  This also fixes a segfault when "clone -b <branch>" is used with a remote
  repo that doesn't have a valid HEAD, as in such a case remote_head is
  NULL, but we still tried to access it.

  Reported-by: Devin Cofer <ranguvar@archlinux.us>
  Signed-off-by: Björn Steinbrink <B.Steinbrink@gmx.de>
  Acked-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
* remote-curl: add missing initialization of argv0_path (Johannes Sixt, 2009-10-13)

  All programs, in particular also the stand-alone programs (non-builtins)
  must call git_extract_argv0_path(argv[0]) in order to help builds that
  derive the installation prefix at runtime, such as the MinGW build.
  Without this call, the program segfaults (or raises an assertion
  failure).

  Signed-off-by: Johannes Sixt <j6t@kdbg.org>
  Tested-by: Michael Wookey <michaelwookey@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>
* Use an external program to implement fetching with curl (Daniel Barkalow, 2009-08-05)

  Use the transport native helper mechanism to fetch by http (and ftp,
  etc).

  Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>