path: root/transport.c
* Merge branch 'ab/free-and-null'  (Junio C Hamano, 2017-06-24)

    A common pattern to free a piece of memory and assign NULL to the pointer that used to point at it has been replaced with a new FREE_AND_NULL() macro.

    * ab/free-and-null:
      *.[ch] refactoring: make use of the FREE_AND_NULL() macro
      coccinelle: make use of the "expression" FREE_AND_NULL() rule
      coccinelle: add a rule to make "expression" code use FREE_AND_NULL()
      coccinelle: make use of the "type" FREE_AND_NULL() rule
      coccinelle: add a rule to make "type" code use FREE_AND_NULL()
      git-compat-util: add a FREE_AND_NULL() wrapper around free(ptr); ptr = NULL

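    A minimal sketch of the macro and the pattern it replaces (the freed field below is illustrative, not taken from a particular call site):

        #define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)

        /* before: the open-coded pattern */
        free(data->url);
        data->url = NULL;

        /* after: one statement, and the pointer cannot be left dangling */
        FREE_AND_NULL(data->url);
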
  * coccinelle: make use of the "type" FREE_AND_NULL() rule  (Ævar Arnfjörð Bjarmason, 2017-06-16)

      Apply the result of the just-added coccinelle rule. This manually excludes a few occurrences, mostly things that resulted in many FREE_AND_NULL() on one line, that'll be manually fixed in a subsequent change.

      Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'bw/config-h'  (Junio C Hamano, 2017-06-24)

    Fix configuration codepath to pay proper attention to commondir that is used in multi-worktree situation, and isolate config API into its own header file.

    * bw/config-h:
      config: don't implicitly use gitdir or commondir
      config: respect commondir
      setup: teach discover_git_directory to respect the commondir
      config: don't include config.h by default
      config: remove git_config_iter
      config: create config.h

  * config: don't include config.h by default  (Brandon Williams, 2017-06-15)

      Stop including config.h by default in cache.h. Instead only include config.h in those files which require use of the config system.

      Signed-off-by: Brandon Williams <bmwill@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

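      Concretely, a file that calls the config API now includes the header itself rather than getting it via cache.h; a trivial illustration:

          #include "cache.h"
          #include "config.h"   /* no longer pulled in indirectly via cache.h */
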
* bundle: convert to struct object_id  (brian m. carlson, 2017-05-02)

    Convert the bundle code, plus the sole external user of struct ref_list_entry, to use struct object_id. Include cache.h from within bundle.h to provide the definition. Convert some of the hash parsing code to use parse_oid_hex to avoid needing to hard-code constant values.

    Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

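    For reference, a small hypothetical use of the parse_oid_hex() helper mentioned above (the input line and error handling are made up for illustration):

        struct object_id oid;
        const char *p;

        /* parse a full hex object name and get a pointer past it,
           instead of hard-coding an offset like line + 40 */
        if (parse_oid_hex(line, &oid, &p))
            die("expected object ID: %s", line);
        if (*p == ' ')
            p++;   /* continue parsing the rest of the line */
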
* Merge branch 'bw/push-options-recursively-to-submodules'  (Junio C Hamano, 2017-04-19)

    "git push --recurse-submodules --push-option=<string>" learned to propagate the push option recursively down to pushes in submodules.

    * bw/push-options-recursively-to-submodules:
      push: propagate remote and refspec with --recurse-submodules
      submodule--helper: add push-check subcommand
      remote: expose parse_push_refspec function
      push: propagate push-options with --recurse-submodules
      push: unmark a local variable as static

  * push: propagate remote and refspec with --recurse-submodules  (Brandon Williams, 2017-04-11)

      Teach "push --recurse-submodules" to propagate, if given a name as remote, the provided remote and refspec recursively to the pushes performed in the submodules. The push will therefore only succeed if all submodules have a remote with such a name configured.

      Note that "push --recurse-submodules" with a path or URL as remote will not propagate the remote or refspec and instead use the default remote and refspec configured in the submodule, preserving the current behavior.

      Signed-off-by: Brandon Williams <bmwill@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

  * push: propagate push-options with --recurse-submodules  (Brandon Williams, 2017-04-11)

      Teach push --recurse-submodules to propagate push-options recursively to the pushes performed in the submodules.

      Signed-off-by: Brandon Williams <bmwill@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Rename sha1_array to oid_array  (brian m. carlson, 2017-03-31)

    Since this structure handles an array of object IDs, rename it to struct oid_array. Also rename the accessor functions and the initialization constant.

    This commit was produced mechanically by providing non-Documentation files to the following Perl one-liners:

        perl -pi -E 's/struct sha1_array/struct oid_array/g'
        perl -pi -E 's/\bsha1_array_/oid_array_/g'
        perl -pi -E 's/SHA1_ARRAY_INIT/OID_ARRAY_INIT/g'

    Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Make sha1_array_append take a struct object_id *  (brian m. carlson, 2017-03-31)

    Convert the callers to pass struct object_id by changing the function declaration and definition and applying the following semantic patch:

        @@
        expression E1, E2;
        @@
        - sha1_array_append(E1, E2.hash)
        + sha1_array_append(E1, &E2)

        @@
        expression E1, E2;
        @@
        - sha1_array_append(E1, E2->hash)
        + sha1_array_append(E1, E2)

    Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Convert GIT_SHA1_HEXSZ used for allocation to GIT_MAX_HEXSZ  (brian m. carlson, 2017-03-26)

    Since we will likely be introducing a new hash function at some point, and that hash function might be longer than 40 hex characters, use the constant GIT_MAX_HEXSZ, which is designed to be suitable for allocations, instead of GIT_SHA1_HEXSZ. This will ease the transition down the line by distinguishing between places where we need to allocate memory suitable for the largest hash from those where we need to handle the current hash.

    Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

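    A short, hypothetical illustration of the distinction the commit draws (the variable names are made up; the constants are the ones named above):

        /* allocation: sized for the largest hash we may ever need to hold */
        char hex[GIT_MAX_HEXSZ + 1];

        /* handling the current hash: still tied to SHA-1's length */
        if (strlen(arg) != GIT_SHA1_HEXSZ)
            return error("not a full object name: %s", arg);
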
* Merge branch 'bc/object-id'  (Junio C Hamano, 2017-03-17)

    "uchar [40]" to "struct object_id" conversion continues.

    * bc/object-id:
      wt-status: convert to struct object_id
      builtin/merge-base: convert to struct object_id
      Convert object iteration callbacks to struct object_id
      sha1_file: introduce an nth_packed_object_oid function
      refs: simplify parsing of reflog entries
      refs: convert each_reflog_ent_fn to struct object_id
      reflog-walk: convert struct reflog_info to struct object_id
      builtin/replace: convert to struct object_id
      Convert remaining callers of resolve_refdup to object_id
      builtin/merge: convert to struct object_id
      builtin/clone: convert to struct object_id
      builtin/branch: convert to struct object_id
      builtin/grep: convert to struct object_id
      builtin/fmt-merge-message: convert to struct object_id
      builtin/fast-export: convert to struct object_id
      builtin/describe: convert to struct object_id
      builtin/diff-tree: convert to struct object_id
      builtin/commit: convert to struct object_id
      hex: introduce parse_oid_hex

  * Convert remaining callers of resolve_refdup to object_id  (brian m. carlson, 2017-02-22)

      There are a few leaf functions in various files that call resolve_refdup. Convert these functions to use struct object_id internally to prepare for transitioning resolve_refdup itself.

      Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'mm/fetch-show-error-message-on-unadvertised-object'  (Junio C Hamano, 2017-03-14)

    "git fetch" that requests a commit by object name, when the other side does not allow such a request, failed without much explanation.

    * mm/fetch-show-error-message-on-unadvertised-object:
      fetch-pack: add specific error for fetching an unadvertised object
      fetch_refs_via_pack: call report_unmatched_refs
      fetch-pack: move code to report unmatched refs to a function

  * fetch_refs_via_pack: call report_unmatched_refs  (Matt McCutchen, 2017-03-02)

      "git fetch" currently doesn't bother to check that it got all refs it sought, because the common case of requesting a nonexistent ref triggers a die() in get_fetch_map. However, there's at least one case that slipped through: "git fetch REMOTE SHA1" if the server doesn't allow requests for unadvertised objects.

      Make fetch_refs_via_pack (which is on the "git fetch" code path) call report_unmatched_refs so that we at least get an error message in that case.

      Signed-off-by: Matt McCutchen <matt@mattmccutchen.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

  * Merge branch 'bw/push-dry-run' into maint  (Junio C Hamano, 2017-01-17)

      "git push --dry-run --recurse-submodules=on-demand" wasn't "--dry-run" in the submodules.

      * bw/push-dry-run:
        push: fix --dry-run to not push submodules
        push: --dry-run updates submodules when --recurse-submodules=on-demand

  * Merge branch 'hv/submodule-not-yet-pushed-fix' into maint  (Junio C Hamano, 2017-01-17)

      The code in "git push" to compute if any commit being pushed in the superproject binds a commit in a submodule that hasn't been pushed out was overly inefficient, making it unusable even for a small project that does not have any submodule but has a reasonable number of refs.

      * hv/submodule-not-yet-pushed-fix:
        submodule_needs_pushing(): explain the behaviour when we cannot answer
        batch check whether submodule needs pushing into one call
        serialize collection of refs that contain submodule changes
        serialize collection of changed submodules

* Merge branch 'km/delete-ref-reflog-message'  (Junio C Hamano, 2017-02-27)

    "git update-ref -d" and other operations to delete references did not leave any entry in HEAD's reflog when the reference being deleted was the current branch. This is not a problem in practice because you do not want to delete the branch you are currently on, but caused renaming of the current branch to something else not to be logged in a useful way.

    * km/delete-ref-reflog-message:
      branch: record creation of renamed branch in HEAD's log
      rename_ref: replace empty message in HEAD's log
      update-ref: pass reflog message to delete_ref()
      delete_ref: accept a reflog message argument

  * delete_ref: accept a reflog message argument  (Kyle Meyer, 2017-02-20)

      When the current branch is renamed with 'git branch -m/-M' or deleted with 'git update-ref -m<msg> -d', the event is recorded in HEAD's log with an empty message. In preparation for adding a more meaningful message to HEAD's log in these cases, update delete_ref() to take a message argument and pass it along to ref_transaction_delete(). Modify all callers to pass NULL for the new message argument; no change in behavior is intended.

      Note that this is relevant for HEAD's log but not for the deleted ref's log, which is currently deleted along with the ref. Even if it were not, an entry for the deletion wouldn't be present in the deleted ref's log. files_transaction_commit() writes to the log if REF_NEEDS_COMMIT or REF_LOG_ONLY are set, but lock_ref_for_update() doesn't set REF_NEEDS_COMMIT for the deleted ref because REF_DELETING is set. In contrast, the update for HEAD has REF_LOG_ONLY set by split_head_update(), resulting in the deletion being logged.

      Signed-off-by: Kyle Meyer <kyle@kyleam.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

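      Roughly, the interface change described above looks like this (a sketch of the pre-object_id-era prototypes, not the exact patch; the refname in the call is illustrative):

          /* before */
          int delete_ref(const char *refname, const unsigned char *old_sha1, unsigned int flags);

          /* after: a message for HEAD's reflog can be passed through */
          int delete_ref(const char *msg, const char *refname, const unsigned char *old_sha1, unsigned int flags);

          /* existing callers keep the old behavior by passing NULL */
          delete_ref(NULL, "refs/remotes/origin/topic", old_sha1, 0);
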
* for_each_alternate_ref: replace transport code with for-each-ref  (Jeff King, 2017-02-08)

    The current method for getting the refs from an alternate is to run upload-pack in the alternate and parse its output using the normal transport code. This works and is reasonably short, but it has a very bad memory footprint when there are a lot of refs in the alternate. There are two problems:

      1. It reads in all of the refs before passing any back to us. Which means that our peak memory usage has to store every ref (including duplicates for peeled variants), even if our callback could determine that some are not interesting (e.g., because they point to the same sha1 as another ref).

      2. It allocates a "struct ref" for each one. Among other things, this contains 3 separate 20-byte oids, along with the name and various pointers. That can add up, especially if the callback is only interested in the sha1 (which it can store in a sha1_array as just 20 bytes).

    On a particularly pathological case, where the alternate had over 80 million refs pointing to only around 60,000 unique objects, the peak heap usage of "git clone --reference" grew to over 25GB.

    This patch instead calls git-for-each-ref in the alternate repository, and passes each line to the callback as we read it. That drops the peak heap of the same command to 50MB.

    I considered and rejected a few alternatives.

    We could read all of the refs in the alternate using our own ref code, just as we do with submodules. However, as memory footprint is one of the concerns here, we want to avoid loading those refs into our own memory as a whole. It's possible that this will be a better technique in the future when the ref code can more easily iterate without loading all of packed-refs into memory.

    Another option is to keep calling upload-pack, and just parse its output ourselves in a streaming fashion. Besides for-each-ref being simpler (we get to define the format ourselves, and don't have to deal with speaking the git protocol), it's more flexible for possible future changes. For instance, it might be useful for the caller to be able to limit the set of "interesting" alternate refs. The motivating example is one where many "forks" of a particular repository share object storage, and the shared storage has refs for each fork (which is why so many of the refs are duplicates; each fork has the same tags). A plausible future optimization would be to ask for the alternate refs for just _one_ fork (if you had some out-of-band way of knowing which was the most interesting or important for the current operation).

    Similarly, no callbacks actually care about the symref value of alternate refs, and as before, this patch ignores them entirely. However, if we wanted to add them, for-each-ref's "%(symref)" is going to be more flexible than upload-pack, because the latter only handles the HEAD symref due to historical constraints.

    There is one potential downside, though: unlike upload-pack, our for-each-ref command doesn't report the peeled value of refs. The existing code calls the alternate_ref_fn callback twice for tags: once for the tag, and once for the peeled value with the refname set to "ref^{}". For the callers in fetch-pack, this doesn't matter at all. We immediately peel each tag down to a commit either way (so there's a slight improvement, as we do not bother passing the redundant data over the pipe). For the caller in receive-pack, it means we will not advertise the peeled values of tags in our alternate. However, we also don't advertise peeled values for our _own_ tags, so this is actually making things more consistent.

    It's unclear whether receive-pack advertising peeled values is a win or not. On one hand, giving more information to the other side may let it omit some objects from the push. On the other hand, for tags which both sides have, they simply bloat the advertisement. The upload-pack advertisement of git.git is about 30% larger than the receive-pack advertisement due to its peeled information.

    This patch omits the peeled information from for_each_alternate_ref entirely, and leaves it up to the caller whether they want to dig up the information.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

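    A rough sketch of the streaming approach described above, using git's run-command and strbuf APIs; the callback type, the exact for-each-ref format, and the error handling here are assumptions for illustration, not the actual patch:

        #include "cache.h"
        #include "run-command.h"
        #include "argv-array.h"

        /* stand-in for git's alternate_ref_fn callback */
        typedef void alt_ref_cb(const char *refname, const struct object_id *oid, void *data);

        static void read_alternate_refs(const char *git_dir, alt_ref_cb *cb, void *data)
        {
            struct child_process cmd = CHILD_PROCESS_INIT;
            struct strbuf line = STRBUF_INIT;
            FILE *fh;

            cmd.git_cmd = 1;
            argv_array_pushf(&cmd.args, "--git-dir=%s", git_dir);
            argv_array_push(&cmd.args, "for-each-ref");
            argv_array_push(&cmd.args, "--format=%(objectname) %(refname)");
            cmd.out = -1;

            if (start_command(&cmd))
                return;   /* best-effort: quietly give up, as the commit describes */

            fh = xfdopen(cmd.out, "r");
            while (strbuf_getline_lf(&line, fh) != EOF) {
                struct object_id oid;

                /* each line is "<objectname> <refname>"; hand it to the
                   callback as soon as it is read, instead of collecting
                   every ref first */
                if (get_oid_hex(line.buf, &oid) || line.buf[GIT_SHA1_HEXSZ] != ' ')
                    break;
                cb(line.buf + GIT_SHA1_HEXSZ + 1, &oid, data);
            }

            fclose(fh);
            strbuf_release(&line);
            finish_command(&cmd);
        }
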
* for_each_alternate_ref: pass name/oid instead of ref struct  (Jeff King, 2017-02-08)

    Breaking down the fields in the interface makes it easier to change the backend of for_each_alternate_ref to something that doesn't use "struct ref" internally. The only field that callers actually look at is the oid, anyway. The refname is kept in the interface as a plausible thing for future code to want.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* for_each_alternate_ref: use strbuf for path allocation  (Jeff King, 2017-02-08)

    We have a string with ".../objects" pointing to the alternate object store, and overwrite bits of it to look at other paths in the (potential) git repository holding it. This works because the only path we care about is "refs", which is shorter than "objects". Using a strbuf to hold the path lets us get rid of some magic numbers, and makes it more obvious that the memory operations are safe.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

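    As a rough illustration of the strbuf approach (the variable name and the exact suffix handling are assumptions, not the actual patch):

        struct strbuf path = STRBUF_INIT;

        strbuf_addstr(&path, alt_objects_dir);     /* e.g. "/repo/.git/objects" */
        strbuf_strip_suffix(&path, "objects");     /* -> "/repo/.git/" */
        strbuf_addstr(&path, "refs");              /* -> "/repo/.git/refs" */
        /* ... use path.buf ... */
        strbuf_release(&path);
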
* for_each_alternate_ref: stop trimming trailing slashes  (Jeff King, 2017-02-08)

    The real_pathdup() function will have removed extra slashes for us already (on top of the normalize_path() done when we created the alternate_object_database struct in the first place).

    Incidentally, this also fixes the case where the path is just "/", which would read off the start of the array. That doesn't seem possible to trigger in practice, though, as link_alt_odb_entry() blindly eats trailing slashes, including a bare "/".

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* for_each_alternate_ref: handle failure from real_pathdup()  (Jeff King, 2017-02-08)

    In older versions of git, if real_path() failed to resolve the alternate object store path, we would die() with an error. However, since 4ac9006f8 (real_path: have callers use real_pathdup and strbuf_realpath, 2016-12-12) we use the real_pathdup() function, which may return NULL. Since we don't check the return value, we can segfault.

    This is hard to trigger in practice, since we check that the path is accessible before creating the alternate_object_database struct. But it could be removed racily, or we could see a transient filesystem error.

    We could restore the original behavior by switching back to xstrdup(real_path()). However, dying is probably not the best option here. This whole function is best-effort already; there might not even be a repository around the shared objects at all. And if the alternate store has gone away, there are no objects to show. So let's just quietly return, as we would if we failed to open "refs/", or if upload-pack failed to start, etc.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'bw/push-submodule-only'  (Junio C Hamano, 2017-01-31)

    "git submodule push" learned the "--recurse-submodules=only" option to push submodules out without pushing the top-level superproject.

    * bw/push-submodule-only:
      push: add option to push only submodules
      submodules: add RECURSE_SUBMODULES_ONLY value
      transport: reformat flag #defines to be more readable

  * push: add option to push only submodules  (Brandon Williams, 2016-12-20)

      Teach push the --recurse-submodules=only option. This enables push to recursively push all unpushed submodules while leaving the superproject unpushed.

      This is a desirable feature in a scenario where updates to the superproject are handled automatically by some other means, perhaps a tool like Gerrit code review. In this scenario, a developer could make a change which spans multiple submodules and then push their commits for code review. Upon completion of the code review, their commits can be accepted and applied to their respective submodules while the code review tool can then automatically update the superproject to the most recent SHA1 of each submodule. This would reduce the merge conflicts in the superproject that could occur if multiple people are contributing to the same submodule.

      Signed-off-by: Brandon Williams <bmwill@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'bw/grep-recurse-submodules'  (Junio C Hamano, 2017-01-18)

    "git grep" has been taught to optionally recurse into submodules.

    * bw/grep-recurse-submodules:
      grep: search history of moved submodules
      grep: enable recurse-submodules to work on <tree> objects
      grep: optionally recurse into submodules
      grep: add submodules as a grep source type
      submodules: load gitmodules file from commit sha1
      submodules: add helper to determine if a submodule is initialized
      submodules: add helper to determine if a submodule is populated
      real_path: canonicalize directory separators in root parts
      real_path: have callers use real_pathdup and strbuf_realpath
      real_path: create real_pathdup
      real_path: convert real_path_internal to strbuf_realpath
      real_path: resolve symlinks by hand

  * real_path: have callers use real_pathdup and strbuf_realpath  (Brandon Williams, 2016-12-12)

      Migrate callers of real_path() who duplicate the return value to use real_pathdup or strbuf_realpath.

      Signed-off-by: Brandon Williams <bmwill@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'bw/transport-protocol-policy'  (Junio C Hamano, 2016-12-27)

    Finer-grained control of what protocols are allowed for transports during clone/fetch/push has been enabled via a new configuration mechanism.

    * bw/transport-protocol-policy:
      http: respect protocol.*.allow=user for http-alternates
      transport: add from_user parameter to is_transport_allowed
      http: create function to get curl allowed protocols
      transport: add protocol policy config option
      http: always warn if libcurl version is too old
      lib-proto-disable: variable name fix

  * transport: add from_user parameter to is_transport_allowed  (Brandon Williams, 2016-12-15)

      Add a from_user parameter to is_transport_allowed() to allow http to distinguish between protocol restrictions for redirects versus initial requests. CURLOPT_REDIR_PROTOCOLS can now be set differently from CURLOPT_PROTOCOLS to disallow use of protocols with the "user" policy in redirects.

      This change allows callers to query if a transport protocol is allowed, given that the caller knows that the protocol is coming from the user (1) or not from the user (0), such as redirects in libcurl. If unknown, a -1 should be provided, which falls back to reading `GIT_PROTOCOL_FROM_USER` to determine if the protocol came from the user.

      Signed-off-by: Brandon Williams <bmwill@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

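      A sketch of how a caller might use the new parameter (the surrounding variables are illustrative, not the actual http.c code):

          /* initial request: the URL came directly from the user */
          if (!is_transport_allowed("http", 1))
              die("transport 'http' not allowed");

          /* building CURLOPT_REDIR_PROTOCOLS: redirects are not from the user */
          if (is_transport_allowed("https", 0))
              redir_protocols |= CURLPROTO_HTTPS;

          /* origin unknown: -1 falls back to GIT_PROTOCOL_FROM_USER */
          if (!is_transport_allowed(protocol, -1))
              die("transport '%s' not allowed", protocol);
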
  * transport: add protocol policy config option  (Brandon Williams, 2016-12-15)

      Previously the `GIT_ALLOW_PROTOCOL` environment variable was used to specify a whitelist of protocols to be used in clone/fetch/push commands. This patch introduces new configuration options for more fine-grained control for allowing/disallowing protocols. This also has the added benefit of allowing easier construction of a protocol whitelist on systems where setting an environment variable is non-trivial.

      Now users can specify a policy to be used for each type of protocol via the 'protocol.<name>.allow' config option. A default policy for all unconfigured protocols can be set with the 'protocol.allow' config option. If no user-configured default is set, git will allow known-safe protocols (http, https, git, ssh, file), disallow known-dangerous protocols (ext), and have a default policy of `user` for all other protocols.

      The supported policies are `always`, `never`, and `user`. The `user` policy can be used to configure a protocol to be usable when explicitly used by a user, while disallowing it for commands which run clone/fetch/push commands without direct user intervention (e.g. recursive initialization of submodules). Commands which can potentially clone/fetch/push from untrusted repositories without user intervention can export `GIT_PROTOCOL_FROM_USER` with a value of '0' to prevent protocols configured to the `user` policy from being used.

      Fix remote-ext tests to use the new config to allow the ext protocol to be tested.

      Based on a patch by Jeff King <peff@peff.net>

      Signed-off-by: Brandon Williams <bmwill@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

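      For illustration, a hypothetical configuration using the options described above might look like this (the chosen policies are examples only):

          [protocol]
              allow = user           # default for otherwise unconfigured protocols
          [protocol "http"]
              allow = always
          [protocol "ext"]
              allow = never
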
  * http: always warn if libcurl version is too old  (Brandon Williams, 2016-12-15)

      Always warn if the libcurl version is too old because:

        1. Even without a protocol whitelist, newer versions of curl have all non-standard protocols disabled by default.

        2. A future patch will introduce default "known-good" and "known-bad" protocols which are allowed/disallowed by 'is_transport_allowed', which older versions of libcurl can't respect.

      Signed-off-by: Brandon Williams <bmwill@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

  * Merge branch 'rs/use-strbuf-add-unique-abbrev' into maint  (Junio C Hamano, 2016-09-08)

      A small code clean-up.

      * rs/use-strbuf-add-unique-abbrev:
        use strbuf_add_unique_abbrev() for adding short hashes

  * Merge branch 'jk/push-scrub-url' into maint  (Junio C Hamano, 2016-08-08)

      "git fetch http://user:pass@host/repo..." scrubbed the userinfo part, but "git push" didn't.

      * jk/push-scrub-url:
        t5541: fix url scrubbing test when GPG is not set
        push: anonymize URL in status output

* Merge branch 'bw/push-dry-run'  (Junio C Hamano, 2016-12-16)

    "git push --dry-run --recurse-submodules=on-demand" wasn't "--dry-run" in the submodules.

    * bw/push-dry-run:
      push: fix --dry-run to not push submodules
      push: --dry-run updates submodules when --recurse-submodules=on-demand

  * push: fix --dry-run to not push submodules  (Brandon Williams, 2016-11-23)

      Teach push to respect the --dry-run option when configured to recursively push submodules 'on-demand'. This is done by passing the --dry-run flag to the child process which performs a push for a submodule when performing a dry-run.

      In order to preserve good user experience, the additional check for unpushed submodules is skipped during a dry-run when --recurse-submodules=on-demand. The check is skipped because the submodule pushes were performed as dry-runs and this check would always fail as the submodules would still need to be pushed.

      Signed-off-by: Brandon Williams <bmwill@google.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'hv/submodule-not-yet-pushed-fix'  (Junio C Hamano, 2016-12-16)

    The code in "git push" to compute if any commit being pushed in the superproject binds a commit in a submodule that hasn't been pushed out was overly inefficient, making it unusable even for a small project that does not have any submodule but has a reasonable number of refs.

    * hv/submodule-not-yet-pushed-fix:
      submodule_needs_pushing(): explain the behaviour when we cannot answer
      batch check whether submodule needs pushing into one call
      serialize collection of refs that contain submodule changes
      serialize collection of changed submodules

  * serialize collection of refs that contain submodule changes  (Heiko Voigt, 2016-11-16)

      We are iterating over each pushed ref and want to check whether it contains changes to submodules. Instead of immediately checking each ref, let's first collect them and then do the check for all of them in one revision walk.

      Signed-off-by: Heiko Voigt <hvoigt@hvoigt.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'jc/abbrev-auto'  (Junio C Hamano, 2016-10-27)

    "git push" and "git fetch" report from what old object to what new object each ref was updated, using abbreviated refnames, and they attempt to align the columns for this and other pieces of information. The way these codepaths compute how many display columns to allocate for the object names portion of this output has been updated to match the recent "auto scale the default abbreviation length" change.

    * jc/abbrev-auto:
      transport: compute summary-width dynamically
      transport: allow summary-width to be computed dynamically
      fetch: pass summary_width down the callchain
      transport: pass summary_width down the callchain

  * transport: compute summary-width dynamically  (Junio C Hamano, 2016-10-22)

      Now all that is left to do is to actually iterate over the refs and measure the display width needed to show their abbreviation.

      Helped-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

  * transport: allow summary-width to be computed dynamically  (Junio C Hamano, 2016-10-21)

      Now that we have identified three callchains that have a set of refs whose <old, new> object names they want to show in an aligned output, we can replace their reference to the constant TRANSPORT_SUMMARY_WIDTH with a helper function call to transport_summary_width() that takes the set of refs as a parameter. This step does not yet iterate over the refs and compute; that is left as an exercise to the readers.

      Signed-off-by: Junio C Hamano <gitster@pobox.com>

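      A rough sketch of the shape of this change (the prior constant's definition and the caller shown here are assumptions for illustration):

          /* before: a compile-time constant in transport.h */
          #define TRANSPORT_SUMMARY_WIDTH (2 * DEFAULT_ABBREV + 3)

          /* after: computed from the refs that are about to be shown */
          int transport_summary_width(const struct ref *refs);

          /* e.g. in transport_print_push_status(), ask for the width up front */
          int summary_width = transport_summary_width(refs);
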
  * transport: pass summary_width down the callchain  (Junio C Hamano, 2016-10-21)

      The callchain that originates at transport_print_push_status() eventually hits a single leaf function, print_ref_status(), that is used to show from what old object to what new object a ref got updated, and the width of the part that shows old and new object names used a constant TRANSPORT_SUMMARY_WIDTH.

      Teach the callchain to pass the width down from the top instead. This allows a future enhancement to compute the necessary display width before calling down this callchain.

      Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'jk/alt-odb-cleanup'  (Junio C Hamano, 2016-10-17)

    Codepaths involved in interacting with an alternate object store have been cleaned up.

    * jk/alt-odb-cleanup:
      alternates: use fspathcmp to detect duplicates
      sha1_file: always allow relative paths to alternates
      count-objects: report alternates via verbose mode
      fill_sha1_file: write into a strbuf
      alternates: store scratch buffer as strbuf
      fill_sha1_file: write "boring" characters
      alternates: use a separate scratch space
      alternates: encapsulate alt->base munging
      alternates: provide helper for allocating alternate
      alternates: provide helper for adding to alternates list
      link_alt_odb_entry: refactor string handling
      link_alt_odb_entry: handle normalize_path errors
      t5613: clarify "too deep" recursion tests
      t5613: do not chdir in main process
      t5613: whitespace/style cleanups
      t5613: use test_must_fail
      t5613: drop test_valid_repo function
      t5613: drop reachable_via function

  * alternates: use a separate scratch space  (Jeff King, 2016-10-10)

      The alternate_object_database struct uses a single buffer both for storing the path to the alternate, and as a scratch buffer for forming object names. This is efficient (since otherwise we'd end up storing the path twice), but it makes life hard for callers who just want to know the path to the alternate. They have to remember to stop reading after "alt->name - alt->base" bytes, and to subtract one for the trailing '/'.

      It would be much simpler if they could simply access a NUL-terminated path string. We could encapsulate this in a function which puts a NUL in the scratch buffer and returns the string, but that opens up questions about the lifetime of the result. The first time another caller uses the alternate, the scratch buffer may get other data tacked onto it.

      Let's instead just store the root path separately from the scratch buffer. There aren't enough alternates being stored for the duplicated data to matter for performance, and this keeps things simple and safe for the callers.

      Signed-off-by: Jeff King <peff@peff.net>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'nd/shallow-deepen'  (Junio C Hamano, 2016-10-10)

    The existing "git fetch --depth=<n>" option was hard to use correctly when making the history of an existing shallow clone deeper. A new option, "--deepen=<n>", has been added to make this easier to use. "git clone" also learned "--shallow-since=<date>" and "--shallow-exclude=<tag>" options to make it easier to specify "I am interested only in the recent N months worth of history" and "Give me only the history since that version".

    * nd/shallow-deepen: (27 commits)
      fetch, upload-pack: --deepen=N extends shallow boundary by N commits
      upload-pack: add get_reachable_list()
      upload-pack: split check_unreachable() in two, prep for get_reachable_list()
      t5500, t5539: tests for shallow depth excluding a ref
      clone: define shallow clone boundary with --shallow-exclude
      fetch: define shallow boundary with --shallow-exclude
      upload-pack: support define shallow boundary by excluding revisions
      refs: add expand_ref()
      t5500, t5539: tests for shallow depth since a specific date
      clone: define shallow clone boundary based on time with --shallow-since
      fetch: define shallow boundary with --shallow-since
      upload-pack: add deepen-since to cut shallow repos based on time
      shallow.c: implement a generic shallow boundary finder based on rev-list
      fetch-pack: use a separate flag for fetch in deepening mode
      fetch-pack.c: mark strings for translating
      fetch-pack: use a common function for verbose printing
      fetch-pack: use skip_prefix() instead of starts_with()
      upload-pack: move rev-list code out of check_non_tip()
      upload-pack: make check_non_tip() clean things up on error
      upload-pack: tighten number parsing at "deepen" lines
      ...

  * fetch, upload-pack: --deepen=N extends shallow boundary by N commits  (Nguyễn Thái Ngọc Duy, 2016-06-13)

      In git-fetch, the --depth argument is always relative to the latest remote refs. This makes it a bit difficult to cover this use case, where the user wants to make the shallow history, say, 3 levels deeper. It would work if remote refs have not moved yet, but nobody can guarantee that, especially when that use case is performed a couple months after the last clone or "git fetch --depth". Also, modifying the shallow boundary using --depth does not work well with clones created by --since or --not.

      This patch fixes that. A new argument --deepen=<N> will add <N> more (*) parent commits to the current history regardless of where remote refs are.

      Have/Want negotiation is still respected. So if remote refs move, the server will send two chunks: one between "have" and "want" and another to extend shallow history. In theory, the client could send no "want"s in order to get the second chunk only. But the protocol does not allow that. Either you send no want lines, which means ls-remote; or you have to send at least one want line that carries deep-relative to the server.

      The main work was done by Dongcan Jiang. I fixed it up here and there. And of course all the bugs belong to me.

      (*) We could even support --deepen=<N> where <N> is negative. In that case we can cut some history from the shallow clone. This operation (and --depth=<shorter depth>) does not require interaction with the remote side (and is more complicated to implement as a result).

      Helped-by: Duy Nguyen <pclouds@gmail.com>
      Helped-by: Eric Sunshine <sunshine@sunshineco.com>
      Helped-by: Junio C Hamano <gitster@pobox.com>
      Signed-off-by: Dongcan Jiang <dongcan.jiang@gmail.com>
      Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

  * fetch: define shallow boundary with --shallow-exclude  (Nguyễn Thái Ngọc Duy, 2016-06-13)

      Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

  * fetch: define shallow boundary with --shallow-since  (Nguyễn Thái Ngọc Duy, 2016-06-13)

      Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
      Signed-off-by: Junio C Hamano <gitster@pobox.com>

* transport: report missing submodule pushes consistently on stderr  (Stefan Beller, 2016-09-08)

    The surrounding advice is printed to stderr, but the list of submodules is not. Make the report consistent by reporting everything to stderr.

    Signed-off-by: Stefan Beller <sbeller@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'rs/use-strbuf-add-unique-abbrev'  (Junio C Hamano, 2016-08-12)

    A small code clean-up.

    * rs/use-strbuf-add-unique-abbrev:
      use strbuf_add_unique_abbrev() for adding short hashes