path: root/builtin/fetch.c
Commit message (Author, Date)
Merge branch 'jc/abbrev-auto' (Junio C Hamano, 2016-10-27)

    "git push" and "git fetch" report from what old object to what new
    object each ref was updated, using abbreviated refnames, and they
    attempt to align the columns for this and other pieces of
    information.  The way these codepaths compute how many display
    columns to allocate for the object names portion of this output has
    been updated to match the recent "auto scale the default
    abbreviation length" change.

    * jc/abbrev-auto:
      transport: compute summary-width dynamically
      transport: allow summary-width to be computed dynamically
      fetch: pass summary_width down the callchain
      transport: pass summary_width down the callchain

transport: allow summary-width to be computed dynamically (Junio C Hamano, 2016-10-21)

    Now that we have identified three callchains that have a set of refs
    for which they want to show their <old, new> object names in an
    aligned output, we can replace their reference to the constant
    TRANSPORT_SUMMARY_WIDTH with a helper function call to
    transport_summary_width() that takes the set of refs as a parameter.
    This step does not yet iterate over the refs and compute anything;
    that is left as an exercise for the readers.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

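    A minimal sketch of the shape such a helper takes at this step; the
    stand-in struct and the fixed width value are assumptions for
    illustration, not the actual transport.c code:

        /* Illustrative sketch only; stand-in types, not Git's transport.c. */
        struct ref {
                struct ref *next;
                /* refname, old/new object names, ... elided */
        };

        #define TRANSPORT_SUMMARY_WIDTH (2 * 7 + 3)  /* assumed fixed width */

        static int transport_summary_width(const struct ref *refs)
        {
                /*
                 * This step intentionally ignores 'refs'; a later change
                 * walks the list and sizes the column from the
                 * abbreviated object names.
                 */
                (void)refs;
                return TRANSPORT_SUMMARY_WIDTH;
        }
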
fetch: pass summary_width down the callchain (Junio C Hamano, 2016-10-21)

    The leaf function on the "fetch" side that uses the
    TRANSPORT_SUMMARY_WIDTH constant is builtin/fetch.c::format_display()
    and it has two distinct callchains.  The one that reports the
    primary result of fetch originates at store_updated_refs(); the
    other one that reports the pruning of the remote-tracking refs
    originates at prune_refs().

    Teach these two places to pass summary_width down the callchain,
    just like we did for the "push" side in the previous commit.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

Merge branch 'lt/abbrev-auto' (Junio C Hamano, 2016-10-27)

    Allow the default abbreviation length, which has historically been
    7, to scale as the repository grows.  The logic suggests using 12
    hexdigits for the Linux kernel, and 9 to 10 for Git itself.

    * lt/abbrev-auto:
      abbrev: auto size the default abbreviation
      abbrev: prepare for new world order
      abbrev: add FALLBACK_DEFAULT_ABBREV to prepare for auto sizing

abbrev: add FALLBACK_DEFAULT_ABBREV to prepare for auto sizing (Junio C Hamano, 2016-10-03)

    We'll be introducing a new way to decide the default abbreviation
    length by initialising DEFAULT_ABBREV to -1 to signal the first call
    to the "find unique abbreviation" codepath to compute a reasonable
    value based on the number of objects we have, to avoid collisions.

    We have long relied on DEFAULT_ABBREV being a positive concrete
    value that is used as the abbreviation length when no extra
    configuration or command line option has overridden it.  Some
    codepaths want to use such a positive concrete default value even
    before making their first request to actually trigger the
    computation for the auto-sized default.

    Introduce FALLBACK_DEFAULT_ABBREV and use it in the code that
    attempts to align the report from "git fetch".  For now, this macro
    is also used to initialize the default_abbrev variable, but the
    auto-sizing code will use -1 and then use the value of
    FALLBACK_DEFAULT_ABBREV as the starting point of auto-sizing.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

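    A compact sketch of the sentinel pattern this describes; the helper
    name and the placeholder auto-sizing step are assumptions, not the
    actual abbrev code:

        #define FALLBACK_DEFAULT_ABBREV 7

        /* -1 means "not decided yet; auto-size on first use". */
        static int default_abbrev = -1;

        static int effective_abbrev(void)
        {
                if (default_abbrev < 0) {
                        /*
                         * First use: start from the fallback; the real
                         * auto-sizing rule grows this based on the
                         * number of objects in the repository.
                         */
                        default_abbrev = FALLBACK_DEFAULT_ABBREV;
                }
                return default_abbrev;
        }
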
Merge branch 'jk/fetch-quick-tag-following' (Junio C Hamano, 2016-10-26)

    When fetching from a remote that has many tags that are irrelevant
    to the branches we are following, we used to waste way too many
    cycles by checking too carefully whether the object pointed at by a
    tag (that we are not going to fetch!) exists in our repository.

    * jk/fetch-quick-tag-following:
      fetch: use "quick" has_sha1_file for tag following

fetch: use "quick" has_sha1_file for tag following (Jeff King, 2016-10-14)

    When we auto-follow tags in a fetch, we look at all of the tags
    advertised by the remote and fetch ones where we don't already have
    the tag, but we do have the object it peels to.  This involves a lot
    of calls to has_sha1_file(), some of which we can reasonably expect
    to fail.  Since 45e8a74 (has_sha1_file: re-check pack directory
    before giving up, 2013-08-30), this may cause many calls to
    reprepare_packed_git(), which is potentially expensive.

    This has gone unnoticed for several years because it requires a
    fairly unique setup to matter:

      1. You need to have a lot of packs on the client side to make
         reprepare_packed_git() expensive (the most expensive part is
         finding duplicates in an unsorted list, which is currently
         quadratic).

      2. You need a large number of tag refs on the server side that are
         candidates for auto-following (i.e., that the client doesn't
         have).  Each one triggers a re-read of the pack directory.

      3. Under normal circumstances, the client would auto-follow those
         tags and after one large fetch, (2) would no longer be true.
         But if those tags point to history which is disconnected from
         what the client otherwise fetches, then it will never
         auto-follow, and those candidates will impact it on every
         fetch.

    So when all three are true, each fetch pays an extra
    O(nr_tags * nr_packs^2) cost, mostly in string comparisons on the
    pack names.  This was exacerbated by 47bf4b0
    (prepare_packed_git_one: refactor duplicate-pack check, 2014-06-30)
    which uses a slightly more expensive string check, under the
    assumption that the duplicate check doesn't happen very often (and
    it shouldn't; the real problem here is how often we are calling
    reprepare_packed_git()).

    This patch teaches fetch to use HAS_SHA1_QUICK to sacrifice accuracy
    for speed, in cases where we might be racy with a simultaneous
    repack.  This is similar to the fix in 0eeb077 (index-pack: avoid
    excessive re-reading of pack directory, 2015-06-09).  As with that
    case, it's OK for has_sha1_file() to occasionally say "no I don't
    have it" when we do, because the worst case is not a corruption, but
    simply that we may fail to auto-follow a tag that points to it.

    Here are results from the included perf script, which sets up a
    situation similar to the one described above:

        Test            HEAD^               HEAD
        ----------------------------------------------------------
        5550.4: fetch   11.21(10.42+0.78)   0.08(0.04+0.02) -99.3%

    Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

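    The flag is consumed through has_sha1_file_with_flags(); a hedged
    sketch of the kind of call-site change described above (the wrapper
    and its surroundings are assumptions, not the exact builtin/fetch.c
    hunk):

        #include "cache.h"

        static int have_peeled_object(const unsigned char *sha1)
        {
                /*
                 * HAS_SHA1_QUICK skips re-scanning the pack directory
                 * when the object is not found.  A false "no" is
                 * harmless here: we merely skip auto-following one tag.
                 */
                return has_sha1_file_with_flags(sha1, HAS_SHA1_QUICK);
        }
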
Merge branch 'km/fetch-do-not-free-remote-name' into maint (Junio C Hamano, 2016-07-11)

    The ownership rule for the piece of memory that holds references to
    be fetched in "git fetch" was screwy, which has been cleaned up.

    * km/fetch-do-not-free-remote-name:
      builtin/fetch.c: don't free remote->name after fetch

Merge branch 'nd/shallow-deepen' (Junio C Hamano, 2016-10-10)

    The existing "git fetch --depth=<n>" option was hard to use
    correctly when making the history of an existing shallow clone
    deeper.  A new option, "--deepen=<n>", has been added to make this
    easier to use.  "git clone" also learned "--shallow-since=<date>"
    and "--shallow-exclude=<tag>" options to make it easier to specify
    "I am interested only in the recent N months worth of history" and
    "Give me only the history since that version".

    * nd/shallow-deepen: (27 commits)
      fetch, upload-pack: --deepen=N extends shallow boundary by N commits
      upload-pack: add get_reachable_list()
      upload-pack: split check_unreachable() in two, prep for get_reachable_list()
      t5500, t5539: tests for shallow depth excluding a ref
      clone: define shallow clone boundary with --shallow-exclude
      fetch: define shallow boundary with --shallow-exclude
      upload-pack: support define shallow boundary by excluding revisions
      refs: add expand_ref()
      t5500, t5539: tests for shallow depth since a specific date
      clone: define shallow clone boundary based on time with --shallow-since
      fetch: define shallow boundary with --shallow-since
      upload-pack: add deepen-since to cut shallow repos based on time
      shallow.c: implement a generic shallow boundary finder based on rev-list
      fetch-pack: use a separate flag for fetch in deepening mode
      fetch-pack.c: mark strings for translating
      fetch-pack: use a common function for verbose printing
      fetch-pack: use skip_prefix() instead of starts_with()
      upload-pack: move rev-list code out of check_non_tip()
      upload-pack: make check_non_tip() clean things up on error
      upload-pack: tighten number parsing at "deepen" lines
      ...

fetch, upload-pack: --deepen=N extends shallow boundary by N commits (Nguyễn Thái Ngọc Duy, 2016-06-13)

    In git-fetch, the --depth argument is always relative to the latest
    remote refs.  This makes it a bit difficult to cover this use case,
    where the user wants to make the shallow history, say, 3 levels
    deeper.  It would work if remote refs have not moved yet, but nobody
    can guarantee that, especially when that use case is performed a
    couple of months after the last clone or "git fetch --depth".  Also,
    modifying the shallow boundary using --depth does not work well with
    clones created by --since or --not.

    This patch fixes that.  A new argument --deepen=<N> will add <N>
    more (*) parent commits to the current history regardless of where
    remote refs are.

    Have/Want negotiation is still respected.  So if remote refs move,
    the server will send two chunks: one between "have" and "want" and
    another to extend shallow history.  In theory, the client could send
    no "want"s in order to get the second chunk only.  But the protocol
    does not allow that.  Either you send no want lines, which means
    ls-remote; or you have to send at least one want line that carries
    deep-relative to the server.

    The main work was done by Dongcan Jiang.  I fixed it up here and
    there.  And of course all the bugs belong to me.

    (*) We could even support --deepen=<N> where <N> is negative.  In
    that case we can cut some history from the shallow clone.  This
    operation (and --depth=<shorter depth>) does not require interaction
    with the remote side (and is more complicated to implement as a
    result).

    Helped-by: Duy Nguyen <pclouds@gmail.com>
    Helped-by: Eric Sunshine <sunshine@sunshineco.com>
    Helped-by: Junio C Hamano <gitster@pobox.com>
    Signed-off-by: Dongcan Jiang <dongcan.jiang@gmail.com>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

fetch: define shallow boundary with --shallow-exclude (Nguyễn Thái Ngọc Duy, 2016-06-13)

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

fetch: define shallow boundary with --shallow-since (Nguyễn Thái Ngọc Duy, 2016-06-13)

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

Merge branch 'jk/push-progress' (Junio C Hamano, 2016-08-03)

    "git push" and "git clone" learned to give better progress meters to
    the end user who is waiting on the terminal.

    * jk/push-progress:
      receive-pack: send keepalives during quiet periods
      receive-pack: turn on connectivity progress
      receive-pack: relay connectivity errors to sideband
      receive-pack: turn on index-pack resolving progress
      index-pack: add flag for showing delta-resolution progress
      clone: use a real progress meter for connectivity check
      check_connected: add progress flag
      check_connected: relay errors to alternate descriptor
      check_everything_connected: use a struct with named options
      check_everything_connected: convert to argv_array
      rev-list: add optional progress reporting
      check_everything_connected: always pass --quiet to rev-list

check_everything_connected: use a struct with named options (Jeff King, 2016-07-20)

    The number of variants of check_everything_connected has grown over
    the years, so that the "real" function takes several possibly-zero,
    possibly-NULL arguments.  We hid the complexity behind some wrapper
    functions, but this doesn't scale well when we want to add new
    options.

    If we add more wrapper variants to handle the new options, then we
    can get a combinatorial explosion when those options might be used
    together (right now nobody wants to use both "shallow" and
    "transport" together, so we get by with just a few wrappers).

    If instead we add new parameters to each function, each of which can
    have a default value, then callers who want the defaults end up with
    confusing invocations like:

        check_everything_connected(fn, 0, data, -1, 0, NULL);

    where it is unclear which parameter is which (and every caller needs
    to be updated when we add new options).

    Instead, let's add a struct to hold all of the optional parameters.
    This is a little more verbose for the callers (who have to declare
    the struct and fill it in), but it makes their code much easier to
    follow, because every option is named as it is set (and unused
    options do not have to be mentioned at all).

    Note that we could also stick the iteration function and its
    callback data into the option struct, too.  But since those are
    required for each call, by avoiding doing so, we can let very simple
    callers just pass "NULL" for the options and not worry about the
    struct at all.

    While we're touching each site, let's also rename the function to
    check_connected().  The existing name was quite long, and not all of
    the wrappers even used the full name.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

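    A self-contained sketch of the named-options pattern this describes;
    the struct fields and the iterator typedef are assumptions made for
    illustration, not the exact connected.h API:

        #include <stdio.h>

        typedef int (*object_iterate_fn)(void *cb_data, char *hex_out);

        struct check_connected_options {
                int quiet;                /* pass --quiet to rev-list */
                int progress;             /* show a progress meter */
                const char *shallow_file; /* alternate shallow file, if any */
        };
        #define CHECK_CONNECTED_INIT { 0 }

        static int check_connected(object_iterate_fn fn, void *cb_data,
                                   struct check_connected_options *opt)
        {
                struct check_connected_options defaults = CHECK_CONNECTED_INIT;

                if (!opt)
                        opt = &defaults;  /* simple callers just pass NULL */
                if (!opt->quiet && !opt->progress)
                        fprintf(stderr, "Checking connectivity...\n");
                /* ... feed objects from fn(cb_data, ...) to rev-list ... */
                (void)fn; (void)cb_data;
                return 0;
        }

    A caller that cares about one option then reads, for example:

        struct check_connected_options opt = CHECK_CONNECTED_INIT;
        opt.progress = 1;   /* only the option we care about is named */
        /* check_connected(iterate_received_refs, &state, &opt); */
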
Merge branch 'mh/ref-iterators' (Junio C Hamano, 2016-07-25)

    The API to iterate over all the refs (i.e. for_each_ref(), etc.) has
    been revamped.

    * mh/ref-iterators:
      for_each_reflog(): reimplement using iterators
      dir_iterator: new API for iterating over a directory tree
      for_each_reflog(): don't abort for bad references
      do_for_each_ref(): reimplement using reference iteration
      refs: introduce an iterator interface
      ref_resolves_to_object(): new function
      entry_resolves_to_object(): rename function from ref_resolves_to_object()
      get_ref_cache(): only create an instance if there is a submodule
      remote rm: handle symbolic refs correctly
      delete_refs(): add a flags argument
      refs: use name "prefix" consistently
      do_for_each_ref(): move docstring to the header file
      refs: remove unnecessary "extern" keywords

delete_refs(): add a flags argument (Michael Haggerty, 2016-06-20)

    This will be useful for passing REF_NODEREF through.

    Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

Merge branch 'nd/fetch-ref-summary' (Junio C Hamano, 2016-07-19)

    Improve the look of the way "git fetch" reports what happened to
    each ref that was fetched.

    * nd/fetch-ref-summary:
      fetch: reduce duplicate in ref update status lines with placeholder
      fetch: align all "remote -> local" output
      fetch: change flag code for displaying tag update and deleted ref
      fetch: refactor ref update status formatting code
      git-fetch.txt: document fetch output

fetch: reduce duplicate in ref update status lines with placeholder (Nguyễn Thái Ngọc Duy, 2016-07-06)

    In the "remote -> local" line, if either ref is a substring of the
    other, the common part in the other string is replaced with "*".
    For example

        abc                -> origin/abc
        refs/pull/123/head -> pull/123

    become

        abc         -> origin/*
        refs/*/head -> pull/123

    Activated with fetch.output=compact.

    For the record, this output is not perfect.  A single giant ref can
    push all refs very far to the right and likely be wrapped around.
    We may have a few options:

     - be smarter about excluding these long lines

     - break the line after "->" and exclude it from the column width
       calculation

     - implement a new format, { -> origin/}foo, which makes the problem
       go away at the cost of being a bit harder to read

     - reverse all the arrows so we have "* <- looong-ref", which is
       again still hard to read

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

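    A standalone sketch of the substring-to-"*" idea (illustrative only,
    not the builtin/fetch.c implementation, which also handles column
    alignment):

        #include <stdio.h>
        #include <string.h>

        /* If one ref name contains the other, print the shared part as "*". */
        static void print_compact(const char *remote, const char *local)
        {
                const char *hit;

                if ((hit = strstr(local, remote)) != NULL)
                        printf("%s -> %.*s*%s\n", remote,
                               (int)(hit - local), local, hit + strlen(remote));
                else if ((hit = strstr(remote, local)) != NULL)
                        printf("%.*s*%s -> %s\n",
                               (int)(hit - remote), remote,
                               hit + strlen(local), local);
                else
                        printf("%s -> %s\n", remote, local);
        }

        int main(void)
        {
                print_compact("abc", "origin/abc");              /* abc -> origin/* */
                print_compact("refs/pull/123/head", "pull/123"); /* refs/*/head -> pull/123 */
                return 0;
        }
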
fetch: align all "remote -> local" output (Nguyễn Thái Ngọc Duy, 2016-07-06)

    We do align "remote -> local" output by allocating 10 columns to
    "remote".  That produces aligned output only for short refs.  An
    extra pass is performed to find the longest remote ref name (that
    does not produce a line longer than terminal width) to produce
    better aligned output.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

fetch: change flag code for displaying tag update and deleted ref (Nguyễn Thái Ngọc Duy, 2016-06-27)

    This makes the fetch flag code consistent with push, where '-' means
    deleted ref.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

fetch: refactor ref update status formatting code (Nguyễn Thái Ngọc Duy, 2016-06-27)

    This makes it easier to change the formatting later.  And it makes
    sure translators cannot mess up format specifiers and break Git.

    There are a couple of call sites where the length of the second
    column is TRANSPORT_SUMMARY_WIDTH instead of being calculated by
    TRANSPORT_SUMMARY(), which is now enforced.  The result should be
    the same because these call sites do not contain characters outside
    the ASCII range.

    The use of two strbuf_addf() calls instead of one is mostly to
    reduce diff noise in a future patch where "ref -> ref" is
    reformatted differently.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

Merge branch 'km/fetch-do-not-free-remote-name' (Junio C Hamano, 2016-07-06)

    The ownership rule for the piece of memory that holds references to
    be fetched in "git fetch" was screwy, which has been cleaned up.

    * km/fetch-do-not-free-remote-name:
      builtin/fetch.c: don't free remote->name after fetch

builtin/fetch.c: don't free remote->name after fetch (Keith McGuigan, 2016-06-14)

    Make fetch's string_list of remote names own all of its string items
    (strdup'ing when necessary) so that it can deallocate them safely
    when clearing.

    Signed-off-by: Keith McGuigan <kmcguigan@twopensource.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

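    A hedged sketch of the ownership rule described above, using the
    string-list API (this is not the actual builtin/fetch.c hunk):

        #include "cache.h"
        #include "string-list.h"

        static void remember_remote_name(const char *name)
        {
                /* INIT_DUP makes the list strdup() whatever is appended,
                 * so the list owns its strings ... */
                struct string_list list = STRING_LIST_INIT_DUP;

                string_list_append(&list, name);  /* stores a copy, not 'name' */
                /* ... use the list ... */

                /* ... and can free them safely, without ever touching
                 * memory such as remote->name that belongs elsewhere. */
                string_list_clear(&list, 0);
        }
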
Merge branch 'nd/error-errno' (Junio C Hamano, 2016-05-17)

    The code for warning_errno/die_errno has been refactored and a new
    error_errno() reporting helper is introduced.

    * nd/error-errno: (41 commits)
      wrapper.c: use warning_errno()
      vcs-svn: use error_errno()
      upload-pack.c: use error_errno()
      unpack-trees.c: use error_errno()
      transport-helper.c: use error_errno()
      sha1_file.c: use {error,die,warning}_errno()
      server-info.c: use error_errno()
      sequencer.c: use error_errno()
      run-command.c: use error_errno()
      rerere.c: use error_errno() and warning_errno()
      reachable.c: use error_errno()
      mailmap.c: use error_errno()
      ident.c: use warning_errno()
      http.c: use error_errno() and warning_errno()
      grep.c: use error_errno()
      gpg-interface.c: use error_errno()
      fast-import.c: use error_errno()
      entry.c: use error_errno()
      editor.c: use error_errno()
      diff-no-index.c: use error_errno()
      ...

builtin/fetch.c: use error_errno() (Nguyễn Thái Ngọc Duy, 2016-05-09)

    A couple of newlines are also removed, because both error() and
    error_errno() automatically append a newline.

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

Merge branch 'sb/submodule-parallel-update' (Junio C Hamano, 2016-04-06)

    A major part of "git submodule update" has been ported to C to take
    advantage of the recently added framework to run download tasks in
    parallel.

    * sb/submodule-parallel-update:
      clone: allow an explicit argument for parallel submodule clones
      submodule update: expose parallelism to the user
      submodule helper: remove double 'fatal: ' prefix
      git submodule update: have a dedicated helper for cloning
      run_processes_parallel: rename parameters for the callbacks
      run_processes_parallel: treat output of children as byte array
      submodule update: direct error message to stderr
      fetching submodules: respect `submodule.fetchJobs` config option
      submodule-config: drop check against NULL
      submodule-config: keep update strategy around

fetching submodules: respect `submodule.fetchJobs` config option (Stefan Beller, 2016-03-01)

    This allows configuring fetching and updating in parallel without
    the command line option.

    This moved the responsibility to determine how many parallel
    processes to start from builtin/fetch to submodule.c, as we need a
    way to communicate "the user did not specify the number of parallel
    processes in the command line options" in the builtin fetch.  The
    submodule code takes care of the precedence (CLI > config >
    default).

    Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
    Signed-off-by: Stefan Beller <sbeller@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

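    A tiny sketch of the "CLI > config > default" precedence mentioned
    above; the function name and the -1 "unset" sentinel are assumptions
    for illustration, not the actual submodule.c code:

        /* -1 in either slot means "not specified". */
        static int effective_fetch_jobs(int cli_jobs, int config_jobs)
        {
                if (cli_jobs >= 0)     /* a --jobs value on the command line wins */
                        return cli_jobs;
                if (config_jobs >= 0)  /* then submodule.fetchJobs from config */
                        return config_jobs;
                return 1;              /* default: fetch submodules one at a time */
        }
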
Merge branch 'jk/tighten-alloc' (Junio C Hamano, 2016-02-26)

    Update various codepaths to avoid manually-counted malloc().

    * jk/tighten-alloc: (22 commits)
      ewah: convert to REALLOC_ARRAY, etc
      convert ewah/bitmap code to use xmalloc
      diff_populate_gitlink: use a strbuf
      transport_anonymize_url: use xstrfmt
      git-compat-util: drop mempcpy compat code
      sequencer: simplify memory allocation of get_message
      test-path-utils: fix normalize_path_copy output buffer size
      fetch-pack: simplify add_sought_entry
      fast-import: simplify allocation in start_packfile
      write_untracked_extension: use FLEX_ALLOC helper
      prepare_{git,shell}_cmd: use argv_array
      use st_add and st_mult for allocation size computation
      convert trivial cases to FLEX_ARRAY macros
      use xmallocz to avoid size arithmetic
      convert trivial cases to ALLOC_ARRAY
      convert manual allocations to argv_array
      argv-array: add detach function
      add helpers for allocating flex-array structs
      harden REALLOC_ARRAY and xcalloc against size_t overflow
      tree-diff: catch integer overflow in combine_diff_path allocation
      ...

use st_add and st_mult for allocation size computation (Jeff King, 2016-02-22)

    If our size computation overflows size_t, we may allocate a much
    smaller buffer than we expected and overflow it.  It's probably
    impossible to trigger an overflow in most of these sites in
    practice, but it is easy enough to convert their additions and
    multiplications into overflow-checking variants.  This may be fixing
    real bugs, and it makes auditing the code easier.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

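    A hedged sketch of the kind of conversion this describes (the call
    site is invented; st_mult() and st_add() come from git-compat-util.h
    and die on size_t overflow instead of silently wrapping):

        #include "git-compat-util.h"

        static int *alloc_counters(size_t nr)
        {
                /*
                 * before: xmalloc(nr * sizeof(int)) -- the multiplication
                 * can silently overflow size_t for a hostile 'nr'
                 */
                int *counters = xmalloc(st_mult(nr, sizeof(int)));

                memset(counters, 0, st_mult(nr, sizeof(int)));
                return counters;
        }
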
Merge branch 'js/close-packs-before-gc' into maint (Junio C Hamano, 2016-02-05)

    Many codepaths that run "gc --auto" before exiting kept packfiles
    mapped and left the file descriptors to them open, which was not
    friendly to systems that cannot remove files that are open.  They
    now close the packs before doing so.

    * js/close-packs-before-gc:
      receive-pack: release pack files before garbage-collecting
      merge: release pack files before garbage-collecting
      am: release pack files before garbage-collecting
      fetch: release pack files before garbage-collecting

Merge branch 'js/fopen-harder' into maint (Junio C Hamano, 2016-02-05)

    Some codepaths used fopen(3) when opening a fixed path in $GIT_DIR
    (e.g. COMMIT_EDITMSG) that is meant to be left after the command is
    done.  This however did not work well if the repository is set to be
    shared with core.sharedRepository and the umask of the previous user
    is tighter.  They have been made to work better by calling unlink(2)
    and retrying after fopen(3) fails with EPERM.

    * js/fopen-harder:
      Handle more file writes correctly in shared repos
      commit: allow editing the commit message even in shared repos

Merge branch 'tg/git-remote' (Junio C Hamano, 2016-02-26)

    The internal API to interact with "remote.*" configuration variables
    has been streamlined.

    * tg/git-remote:
      remote: use remote_is_configured() for add and rename
      remote: actually check if remote exits
      remote: simplify remote_is_configured()
      remote: use parse_config_key

remote: simplify remote_is_configured() (Thomas Gummerer, 2016-02-16)

    The remote_is_configured() function allows checking whether a remote
    exists or not.  The function, however, only works if remote_get()
    wasn't called before calling it.  In addition, it only checks the
    configuration for remotes, but not the "remotes" or "branches"
    files.

    Make use of the origin member of struct remote instead, which
    indicates where the remote comes from.  It will be set to some value
    if the remote is configured in any file in the repository, but is
    initialized to 0 if the remote is only created in make_remote().

    Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
    Reviewed-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

Merge branch 'ew/force-ipv4' (Junio C Hamano, 2016-02-24)

    "git fetch" and friends that make network connections can now be
    told to only use ipv4 (or ipv6).

    * ew/force-ipv4:
      connect & http: support -4 and -6 switches for remote operations

connect & http: support -4 and -6 switches for remote operations (Eric Wong, 2016-02-12)

    Sometimes it is necessary to force IPv4-only or IPv6-only operation
    on networks where name lookups may return a non-routable address and
    stall remote operations.

    The ssh(1) command has equivalent switches which we may pass when we
    run it.  There may be old ssh(1) implementations out there which do
    not support these switches; they should report the appropriate error
    in that case.

    rsync support is untouched for now since it is deprecated and
    scheduled to be removed.

    Signed-off-by: Eric Wong <normalperson@yhbt.net>
    Reviewed-by: Torsten Bögershausen <tboegi@web.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

Merge branch 'js/close-packs-before-gc' (Junio C Hamano, 2016-01-26)

    Many codepaths that run "gc --auto" before exiting kept packfiles
    mapped and left the file descriptors to them open, which was not
    friendly to systems that cannot remove files that are open.  They
    now close the packs before doing so.

    * js/close-packs-before-gc:
      receive-pack: release pack files before garbage-collecting
      merge: release pack files before garbage-collecting
      am: release pack files before garbage-collecting
      fetch: release pack files before garbage-collecting

fetch: release pack files before garbage-collecting (Johannes Schindelin, 2016-01-13)

    Before auto-gc'ing, we need to make sure that the pack files are
    released in case they need to be repacked and garbage-collected.

    This fixes https://github.com/git-for-windows/git/issues/500

    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
    Reviewed-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

Merge branch 'js/fopen-harder' (Junio C Hamano, 2016-01-20)

    Some codepaths used fopen(3) when opening a fixed path in $GIT_DIR
    (e.g. COMMIT_EDITMSG) that is meant to be left after the command is
    done.  This however did not work well if the repository is set to be
    shared with core.sharedRepository and the umask of the previous user
    is tighter.  They have been made to work better by calling unlink(2)
    and retrying after fopen(3) fails with EPERM.

    * js/fopen-harder:
      Handle more file writes correctly in shared repos
      commit: allow editing the commit message even in shared repos

Handle more file writes correctly in shared repos (Johannes Schindelin, 2016-01-11)

    In shared repositories, we have to be careful when writing files
    whose permissions do not allow users other than the owner to write
    them.  In particular, we force the marks file of fast-export and the
    FETCH_HEAD when fetching to be rewritten from scratch.

    This commit does not touch other calls to fopen() that want to write
    files:

     - commands that write to working tree files (core.sharedRepository
       does not affect permission bits of working tree files), e.g. .rej
       file created by "apply --reject", result of applying a previous
       conflict resolution by "rerere", "git merge-file".

     - git am, when splitting mails (git-am correctly cleans up its
       directory after finishing, so there is no need to share those
       files between users)

     - git submodule clone, when writing the .git file, because the file
       will not be overwritten

     - git_terminal_prompt() in compat/terminal.c, because it is not
       writing to a file at all

     - git diff --output, because the output file is clearly not
       intended to be shared between the users of the current repository

     - git fast-import, when writing a crash report, because the
       reports' file names are unique due to an embedded process ID

     - mailinfo() in mailinfo.c, because the output is clearly not
       intended to be shared between the users of the current repository

     - check_or_regenerate_marks() in remote-testsvn.c, because this is
       only used for Git's internal testing

     - git fsck, when writing lost&found blobs (this should probably be
       changed, but left as a low-hanging fruit for future contributors).

    Note that this patch does not touch callers of write_file() and
    write_file_gently(), which would benefit from the same scrutiny as
    to usage in shared repositories.  Most notable users are branch,
    daemon, submodule & worktree, and a worrisome call in transport.c
    when updating one ref (which ignores the shared flag).

    Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

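    A standalone sketch of the "unlink(2) and retry after fopen(3) fails
    with EPERM" behavior this series introduces (see the branch summary
    above); Git wraps this in its own helper, so the function name and
    details here are assumptions for illustration only:

        #include <errno.h>
        #include <stdio.h>
        #include <unistd.h>

        static FILE *fopen_for_overwrite(const char *path)
        {
                FILE *fp = fopen(path, "w");

                /*
                 * In a shared repository the file may be owned by another
                 * user with a tighter umask; we may not be able to open
                 * it for writing, but we may be allowed to remove it and
                 * recreate it with our own permissions.
                 */
                if (!fp && errno == EPERM && !unlink(path))
                        fp = fopen(path, "w");
                return fp;
        }
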
submodules: allow parallel fetching, add tests and documentation (Stefan Beller, 2015-12-16)

    This enables the work of the previous patches.

    Signed-off-by: Stefan Beller <sbeller@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

Remove get_object_hash. (brian m. carlson, 2015-11-20)

    Convert all instances of get_object_hash to use an appropriate
    reference to the hash member of the oid member of struct object.
    This provides no functional change, as it is essentially a macro
    substitution.

    Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
    Signed-off-by: Jeff King <peff@peff.net>

Add several uses of get_object_hash. (brian m. carlson, 2015-11-20)

    Convert most instances where the sha1 member of struct object is
    dereferenced to use get_object_hash.  Most instances that are passed
    to functions that have versions taking struct object_id, such as
    get_sha1_hex/get_oid_hex, or instances that can be trivially
    converted to use struct object_id instead, are not converted.

    Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
    Signed-off-by: Jeff King <peff@peff.net>

Convert struct ref to use object_id. (brian m. carlson, 2015-11-20)

    Use struct object_id in three fields in struct ref and convert all
    the necessary places that use it.

    Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
    Signed-off-by: Jeff King <peff@peff.net>

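    A hedged sketch of what the conversion means for the struct; the
    field list is trimmed and renamed, not the exact layout of Git's
    struct ref:

        #include "cache.h"   /* struct object_id, oid_to_hex(), FLEX_ARRAY */

        struct ref_sketch {
                struct ref_sketch *next;
                struct object_id old_oid;  /* was: unsigned char old_sha1[20] */
                struct object_id new_oid;  /* was: unsigned char new_sha1[20] */
                char name[FLEX_ARRAY];
        };

        static void report_update(const struct ref_sketch *ref)
        {
                /* oid_to_hex() takes over from sha1_to_hex() for such fields */
                fprintf(stderr, "%s: %s -> %s\n", ref->name,
                        oid_to_hex(&ref->old_oid), oid_to_hex(&ref->new_oid));
        }
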
use alloc_ref rather than hand-allocating "struct ref" (Jeff King, 2015-10-05)

    This saves us some manual computation, and eliminates a call to
    strcpy.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

transport: use strbufs for status table "quickref" strings (Jeff King, 2015-10-05)

    We generate range strings like "1234abcd...5678efab" for use in the
    fetch and push status tables.  We use fixed-size buffers along with
    strcat to do so.  These aren't buggy, as our manual size computation
    is correct, but there's nothing checking that this is so.  Let's
    switch them to strbufs instead, which are obviously correct, and
    make it easier to audit the code base for problematic calls to
    strcat().

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

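    A hedged sketch of the strbuf version of such a range string; the
    function and the fast-forward convention shown are illustrative, not
    the actual transport.c hunk:

        #include "cache.h"
        #include "strbuf.h"

        static void show_range(const char *old_abbrev, const char *new_abbrev,
                               int forced_update)
        {
                struct strbuf quickref = STRBUF_INIT;

                strbuf_addstr(&quickref, old_abbrev);
                /* "..." marks a forced update, ".." a fast-forward */
                strbuf_addstr(&quickref, forced_update ? "..." : "..");
                strbuf_addstr(&quickref, new_abbrev);

                fprintf(stderr, "%s\n", quickref.buf);
                strbuf_release(&quickref);  /* no fixed-size buffer to overflow */
        }
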
fetch: replace static buffer with xstrfmt (Jeff King, 2015-09-25)

    We parse the INFINITE_DEPTH constant into a static, fixed-size
    buffer using sprintf.  This buffer is sufficiently large for the
    current constant, but it's a suspicious pattern, as the constant is
    defined far away, and it's not immediately obvious that 12 bytes are
    large enough to hold it.

    We can just use xstrfmt here, which gets rid of any question of the
    buffer size.  It also removes any concerns with object lifetime,
    which means we do not have to wonder why this buffer deep within a
    conditional is marked "static" (we never free our newly allocated
    result, of course, but that's OK; it's a global that lasts the
    lifetime of the whole program anyway).

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

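    A hedged sketch of the before/after shape; the constant here is a
    stand-in, and the real INFINITE_DEPTH and option plumbing live
    elsewhere in builtin/fetch.c:

        #include "cache.h"
        #include "strbuf.h"

        #define INFINITE_DEPTH_SKETCH 0x7fffffff  /* stand-in constant */

        static const char *unshallow_depth_arg(void)
        {
                /*
                 * before: static char buf[12]; sprintf(buf, "%d", ...);
                 * xstrfmt() sizes and allocates the buffer for us; it is
                 * never freed, which is fine for a value that lives for
                 * the whole run of the program.
                 */
                return xstrfmt("%d", INFINITE_DEPTH_SKETCH);
        }
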
Merge branch 'hv/submodule-config' (Junio C Hamano, 2015-08-31)

    The gitmodules API accessed from the C code learned to cache stuff
    lazily.

    * hv/submodule-config:
      submodule: allow erroneous values for the fetchRecurseSubmodules option
      submodule: use new config API for worktree configurations
      submodule: extract functions for config set and lookup
      submodule: implement a config API for lookup of .gitmodules values

submodule: allow erroneous values for the fetchRecurseSubmodules option (Heiko Voigt, 2015-08-19)

    We should not die when reading the submodule config cache since the
    user might not be able to get out of that situation when the
    configuration is part of the history.  We should handle this
    condition later when the value is about to be used.

    Signed-off-by: Heiko Voigt <hvoigt@hvoigt.net>
    Signed-off-by: Stefan Beller <sbeller@google.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

Merge branch 'jk/git-path' (Junio C Hamano, 2015-08-19)

    git_path() and mkpath() are handy helper functions, but they are
    easy to misuse, as the callers need to be careful to keep the number
    of active results below 4.  Their uses have been reduced.

    * jk/git-path:
      memoize common git-path "constant" files
      get_repo_path: refactor path-allocation
      find_hook: keep our own static buffer
      refs.c: remove_empty_directories can take a strbuf
      refs.c: avoid git_path assignment in lock_ref_sha1_basic
      refs.c: avoid repeated git_path calls in rename_tmp_log
      refs.c: simplify strbufs in reflog setup and writing
      path.c: drop git_path_submodule
      refs.c: remove extra git_path calls from read_loose_refs
      remote.c: drop extraneous local variable from migrate_file
      prefer mkpathdup to mkpath in assignments
      prefer git_pathdup to git_path in some possibly-dangerous cases
      add_to_alternates_file: don't add duplicate entries
      t5700: modernize style
      cache.h: complete set of git_path_submodule helpers
      cache.h: clarify documentation for git_path, et al

memoize common git-path "constant" files (Jeff King, 2015-08-10)

    One of the most common uses of git_path() is to pass a constant,
    like git_path("MERGE_MSG").  This has two drawbacks:

      1. The return value is a static buffer, and the lifetime is
         dependent on other calls to git_path, etc.

      2. There's no compile-time checking of the pathname.  This is OK
         for a one-off (after all, we have to spell it correctly at
         least once), but many of these constant strings appear
         throughout the code.

    This patch introduces a series of functions to "memoize" these
    strings, which are essentially globals for the lifetime of the
    program.  We compute the value once, take ownership of the buffer,
    and return the cached value for subsequent calls.  cache.h provides
    a helper macro for defining these functions as one-liners, and
    defines a few common ones for global use.

    Using a macro is a little bit gross, but it does nicely document the
    purpose of the functions.  If we need to touch them all later (e.g.,
    because we learned how to change the git_dir variable at runtime,
    and need to invalidate all of the stored values), it will be much
    easier to have the complete list.

    Note that the shared-global functions have separate, manual
    declarations.  We could do something clever with the macros (e.g.,
    expand it to a declaration in some places, and a declaration _and_ a
    definition in path.c).  But there aren't that many, and it's
    probably better to stay away from too-magical macros.

    Likewise, if we abandon the C preprocessor in favor of generating
    these with a script, we could get much fancier.  E.g., normalizing
    "FOO/BAR-BAZ" into "git_path_foo_bar_baz".  But the small amount of
    saved typing is probably not worth the resulting confusion to
    readers who want to grep for the function's definition.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

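    A hedged sketch of one memoized helper of the kind this patch adds;
    the real code generates these with a one-line macro in cache.h, and
    the hand-written form below is only for illustration:

        #include "cache.h"

        const char *git_path_merge_msg_sketch(void)
        {
                static char *ret;

                if (!ret)
                        ret = git_pathdup("MERGE_MSG");  /* computed once, owned forever */
                return ret;
        }

    Callers then use this helper instead of repeating
    git_path("MERGE_MSG") and worrying about the rotating static buffer.
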