* Merge branch 'nd/status-long'  (Jeff King, 2012-10-29)

    Allow an earlier "--short" option on the command line to be countermanded
    with the "--long" option for "git status" and "git commit".

    * nd/status-long:
      status: add --long output format option

| * status: add --long output format option  (Jeff King, 2012-10-18)

    You can currently set the output format to --short or --porcelain. There
    is no --long, because we default to it already. However, you may want to
    override an alias that uses "--short" to get back to the default.

    This requires a little bit of refactoring, because currently we use
    STATUS_FORMAT_LONG internally to mean the same as "the user did not
    specify anything". By expanding the enum to include STATUS_FORMAT_NONE,
    we can distinguish between the implicit and explicit cases. This affects
    these conditions:

      1. The user has asked for NUL termination. With NONE, we currently
         default to turning on the porcelain mode. With an explicit --long,
         we would in theory use NUL termination with the long mode, but it
         does not support it. So we can just complain and die.

      2. When an output format is given to "git commit", we default to
         "--dry-run". This behavior would now kick in when "--long" is
         given, too.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

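    A minimal usage sketch, not from the commit itself (the "st" alias is
    made up for illustration):

        git config alias.st 'status --short'   # an alias that forces short output
        git st --long                          # the later --long wins again
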
* | Merge branch 'jk/sh-setup-in-filter-branch'  (Jeff King, 2012-10-29)

    Refactoring to avoid code duplication in shell scripts.

    * jk/sh-setup-in-filter-branch:
      filter-branch: use git-sh-setup's ident parsing functions
      git-sh-setup: refactor ident-parsing functions

| * | filter-branch: use git-sh-setup's ident parsing functions  (Jeff King, 2012-10-18)

    This saves us some code, but it also reduces the number of processes we
    start for each filtered commit. Since we can parse both author and
    committer in the same sed invocation, we save one process. And since the
    new interface avoids tr, we save 4 processes.

    It also avoids using "tr", which has had some odd portability problems
    reported with Solaris's xpg6 version.

    We also tweak one of the tests in t7003 to double-check that we are
    properly exporting the variables (because test-lib.sh exports
    GIT_AUTHOR_NAME, it will be automatically exported in subprograms. We
    override this to make sure that filter-branch handles it properly
    itself).

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | git-sh-setup: refactor ident-parsing functions  (Jeff King, 2012-10-18)

    The only ident-parsing function we currently provide is
    get_author_ident_from_commit. This is not very flexible for two reasons:

      1. It takes a commit as an argument, and can't read from commit
         headers saved on disk.

      2. It will only parse authors, not committers.

    This patch provides a more flexible interface which will parse multiple
    idents from a commit provided on stdin. We can easily use it as a
    building block for the current function to retain compatibility.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

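    For context, a sketch of how the retained compatibility helper is
    typically consumed from a script that sources git-sh-setup (the commit
    chosen here is illustrative):

        . "$(git --exec-path)/git-sh-setup"

        # emits shell assignments for GIT_AUTHOR_NAME, GIT_AUTHOR_EMAIL and
        # GIT_AUTHOR_DATE, meant to be eval'ed
        eval "$(get_author_ident_from_commit HEAD)"
        echo "author: $GIT_AUTHOR_NAME <$GIT_AUTHOR_EMAIL>"
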
* | Merge branch 'nd/grep-true-path'  (Jeff King, 2012-10-29)

    "git grep -e pattern <tree>" asked the attribute system to read
    "<tree>:.gitattributes" file in the working tree, which was nonsense.

    * nd/grep-true-path:
      grep: stop looking at random places for .gitattributes

| * | grep: stop looking at random places for .gitattributes  (Nguyễn Thái Ngọc Duy, 2012-10-12)

    grep searches for .gitattributes using the "name" field in struct
    grep_source, but that field is not a real on-disk path name. For
    example, "grep pattern rev" fills the field with "rev:path", and Git
    looks for .gitattributes in the (non-existent but exploitable) path
    "rev:path" instead of "path".

    This patch passes real paths down to grep_source_load_driver() when:

     - grep on work tree
     - grep on the index
     - grep a commit (or a tag if it points to a commit)

    so that these cases look up .gitattributes at proper paths.
    .gitattributes lookup is disabled in all other cases.

    Initial-work-by: Jeff King <peff@peff.net>
    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | Merge branch 'jk/maint-http-init-not-in-result-handler'  (Jeff King, 2012-10-29)

    Further clean-up to the http codepath that picks up results after the
    cURL library is done with one request slot.

    * jk/maint-http-init-not-in-result-handler:
      http: do not set up curl auth after a 401
      remote-curl: do not call run_slot repeatedly

| * | | http: do not set up curl auth after a 401  (Jeff King, 2012-10-12)

    When we get an http 401, we prompt for credentials and put them in our
    global credential struct. We also feed them to the curl handle that
    produced the 401, with the intent that they will be used on a retry.

    When the code was originally introduced in commit 42653c0, this was a
    necessary step. However, since dfa1725, we always feed our global
    credential into every curl handle when we initialize the slot with
    get_active_slot. So every further request already feeds the credential
    to curl.

    Moreover, accessing the slot here is somewhat dubious. After the slot
    has produced a response, we don't actually control it any more. If we
    are using curl_multi, it may even have been re-initialized to handle a
    different request. It just so happens that we will reuse the curl handle
    within the slot in such a case, and that because we only keep one global
    credential, it will be the one we want. So the current code is not
    buggy, but it is misleading.

    By cleaning it up, we can remove the slot argument entirely from
    handle_curl_result, making it much more obvious that slots should not be
    accessed after they are marked as finished.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | remote-curl: do not call run_slot repeatedly  (Jeff King, 2012-10-12)

    Commit b81401c (http: prompt for credentials on failed POST) taught
    post_rpc to call run_slot in a loop in order to retry a request after
    asking the user for credentials. However, after a call to run_slot we
    will have called finish_active_slot. This means we have released the
    slot, and we should no longer look at it.

    As it happens, this does not cause any bugs in the current code, since
    we know that we are not using curl_multi in this code path, and
    therefore nobody will have taken over our slot in the meantime. However,
    it is good form to actually call get_active_slot again. It also future
    proofs us against changes in the http code.

    We can do this by jumping back to a retry label at the top of our
    function. We just need to reorder a few setup lines that should not be
    repeated; everything else within the loop is either idempotent, needs to
    be repeated, or in a path we do not follow (e.g., we do not even try
    when large_request is set, because we don't know how much data we might
    have streamed from our helper program).

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | Merge branch 'jc/grep-pcre-loose-ends'  (Jeff King, 2012-10-29)

    "git log -F -E --grep='<ere>'" failed to use the given <ere> pattern as
    an extended regular expression, and instead looked for the string
    literally. The early part of this series is a fix for it; the latter
    part teaches log to respect the grep.* configuration.

    * jc/grep-pcre-loose-ends:
      log: honor grep.* configuration
      log --grep: accept --basic-regexp and --perl-regexp
      log --grep: use the same helper to set -E/-F options as "git grep"
      revisions: initialize revs->grep_filter using grep_init()
      grep: move pattern-type bits support to top-level grep.[ch]
      grep: move the configuration parsing logic to grep.[ch]
      builtin/grep.c: make configuration callback more reusable

| * | | | log: honor grep.* configuration  (Junio C Hamano, 2012-10-09)

    Now that the grep_config() callback is reusable from other configuration
    callbacks, call it from git_log_config() so that grep.patterntype and
    friends can be used with the commands in the "git log" family.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | log --grep: accept --basic-regexp and --perl-regexp  (Junio C Hamano, 2012-10-09)

    When we added the "--perl-regexp" option (or "-P") to "git grep", we
    should have done the same for the commands in the "git log" family, but
    somehow we forgot to do so. This corrects it, but we will reserve the
    short-and-sweet "-P" option for something else for now.

    Also introduce the "--basic-regexp" option for completeness, so that the
    "last one wins" principle can be used to defeat an earlier -E option,
    e.g. "git log -E --basic-regexp --grep='<bre>'". Note that it cannot
    have the short "-G" option, as in the "log" family "-G" already means
    grepping in the patch text.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

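    A usage sketch, not from the commit (the patterns are placeholders;
    --perl-regexp needs a git built with PCRE support):

        # the later --basic-regexp wins over -E, so <bre> is taken as a BRE
        git log -E --basic-regexp --grep='<bre>'

        # Perl-compatible pattern; note there is no short -P for "git log"
        git log --perl-regexp --grep='(?i)regression'
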
| * | | | log --grep: use the same helper to set -E/-F options as "git grep"  (Junio C Hamano, 2012-10-09)

    The command line option parser for "git log -F -E --grep='<ere>'" did
    not flip the "fixed" bit, violating the general "last option wins"
    principle among conflicting options.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | revisions: initialize revs->grep_filter using grep_init()  (Junio C Hamano, 2012-10-09)

    Instead of using the hand-rolled initialization sequence, use
    grep_init() to populate the necessary bits. This opens the door to allow
    the calling commands to optionally read grep.* configuration variables
    via git_config() if they want to.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | grep: move pattern-type bits support to top-level grep.[ch]  (Junio C Hamano, 2012-10-09)

    Switching between -E/-G/-P/-F correctly needs a lot more than just
    flipping the opt->regflags bit these days, and we have a nice helper
    function buried in builtin/grep.c for the sole use of "git grep".
    Extract it so that the "log --grep" family can also use it.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | grep: move the configuration parsing logic to grep.[ch]  (Junio C Hamano, 2012-10-09)

    The configuration handling is a library-ish part of this program that is
    not specific to the "git grep" command. It should be reusable by "log"
    and others.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | builtin/grep.c: make configuration callback more reusable  (Junio C Hamano, 2012-10-09)

    The grep_config() function takes one instance of grep_opt as its
    callback parameter, and populates it by running git_config(). This has
    three practical implications:

     - You have to have an instance of grep_opt already when you call the
       configuration, but that is not necessarily always true. You may be
       trying to initialize the grep_filter member of rev_info, but are not
       ready to call init_revisions() on it yet.

     - It is not easy to enhance grep_config() in such a way to make it
       cascade to other callback functions to grab other variables in one
       call of git_config(); grep_config() can be cascaded into from other
       callbacks, but it has to be at the leaf level of a cascade.

     - If you ever need to use more than one instance of grep_opt, you will
       have to open and read the configuration file(s) every time you
       initialize them.

    Rearrange the configuration mechanism and model it after how diff
    configuration variables are handled. An early call to git_config() reads
    and remembers the values taken from the configuration in the default
    "template", and a separate call to grep_init() uses this template to
    instantiate a grep_opt.

    The next step will be to move some of this out of this file so that the
    other user of the grep machinery (i.e. "log") can use it.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | | Merge branch 'jl/submodule-add-by-name'  (Jeff King, 2012-10-29)

    When a submodule is removed from the work tree, its real repository
    stays in $GIT_DIR of the superproject so that "git checkout" of an older
    commit in the superproject history can resurrect the submodule. A later
    "git submodule add $path" to add a different submodule at the same path
    will fail. Diagnose this case a bit better, and if the user really wants
    to add an unrelated submodule at the same path, give the "--name" option
    to give it a place in $GIT_DIR of the superproject that does not
    conflict with the original submodule.

    * jl/submodule-add-by-name:
      submodule add: Fail when .git/modules/<name> already exists unless forced
      Teach "git submodule add" the --name option

| * | | | | submodule add: Fail when .git/modules/<name> already exists unless forced  (Jens Lehmann, 2012-09-30)

    When adding a new submodule it can happen that .git/modules/<name>
    already contains a submodule repo, e.g. when a submodule is removed from
    the work tree and another submodule is added at the same path. But then
    the work tree of the submodule will be populated using the existing
    repository and not the one the user provided, which results in an
    incorrect work tree.

    On the other hand the user might reactivate a submodule removed earlier,
    in which case reusing that .git directory is the Right Thing to do. As
    git can't decide which is the case, error out and tell the user that she
    should either use a different name for the submodule with the "--name"
    option or reuse the .git directory for the newly added submodule by
    providing the --force option (which only makes sense when the upstream
    matches, so the error message lists all remotes of .git/modules/<name>).

    In one test in t7406 the --force option had to be added to "git
    submodule add", as that test re-adds a formerly removed submodule.

    Reported-by: Jonathan Johnson <me@jondavidjohn.com>
    Signed-off-by: Jens Lehmann <Jens.Lehmann@web.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | Teach "git submodule add" the --name optionJens Lehmann2012-09-29
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | "git submodule add" initializes the name of a submodule to its path. This was ok as long as the .git directory lived inside the submodule's work tree, but since 1.7.8 it is stored in the .git/modules/<name> directory of the superproject, making the submodule name survive the removal of the submodule's work tree. This leads to problems when the user tries to add a different submodule at the same path - and thus the same name - later, as that will happily try to restore the submodule from the old repository instead of the one the user specified and will lead to a checkout of the wrong repository. Add the new "--name" option to let the user provide a name for the submodule. This enables the user to solve this conflict without having to remove .git/modules/<name> by hand (which is no viable solution as it makes it impossible to checkout a commit that records the old submodule and populate it, as that will still check out the new submodule for the same reason). To achieve that the submodule's name is added to the parameter list of the module_clone() helper function. This makes it possible to remove the call of module_name() there because both callers of module_clone() already know the name and can provide it as argument number two. Reported-by: Jonathan Johnson <me@jondavidjohn.com> Signed-off-by: Jens Lehmann <Jens.Lehmann@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
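    A usage sketch, not from the commit (URL and names are placeholders):

        # record the submodule under a name that will not clash with a stale
        # .git/modules/<name> left behind by a previously removed submodule
        git submodule add --name plugins2 https://example.com/plugins.git plugins
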
* | | | | | Merge branch 'jl/submodule-rm'  (Jeff King, 2012-10-29)

    "git rm submodule" cannot blindly remove a submodule directory as its
    working tree may have local changes, and worse yet, it may even have its
    repository embedded in it. Teach it some special cases where it is safe
    to remove a submodule, specifically, when there is no local changes in
    the submodule working tree, and its repository is not embedded in its
    working tree but is elsewhere and uses the gitfile mechanism to point at
    it.

    * jl/submodule-rm:
      submodule: teach rm to remove submodules unless they contain a git directory

| * | | | | | submodule: teach rm to remove submodules unless they contain a git directory  (Jens Lehmann, 2012-09-29)

    Currently using "git rm" on a submodule - populated or not - fails with
    this error:

        fatal: git rm: '<submodule path>': Is a directory

    This made sense in the past as there was no way to remove a submodule
    without possibly removing unpushed parts of the submodule's history
    contained in its .git directory too, so erroring out here protected the
    user from possible loss of data.

    But submodules cloned with a recent git version do not contain the .git
    directory anymore, they use a gitfile to point to their git directory
    which is safely stored inside the superproject's .git directory. The
    work tree of these submodules can safely be removed without losing
    history, so let's teach git to do so.

    Using rm on an unpopulated submodule now removes the empty directory
    from the work tree and the gitlink from the index. If the submodule's
    directory is missing from the work tree, it will still be removed from
    the index.

    Using rm on a populated submodule using a gitfile will apply the usual
    checks for work tree modification adapted to submodules (unless forced).
    For a submodule that means that the HEAD is the same as recorded in the
    index, no tracked files are modified and no untracked files that aren't
    ignored are present in the submodule's work tree (ignored files are
    deemed expendable and won't stop a submodule's work tree from being
    removed). That logic has to be applied in all nested submodules too.

    Using rm on a submodule which has its .git directory inside the work
    tree's top level directory will just error out like it did before to
    protect the repository, even when forced. In the future git could either
    provide a message informing the user to convert the submodule to use a
    gitfile or even attempt to do the conversion itself, but that is not
    part of this change.

    Signed-off-by: Jens Lehmann <Jens.Lehmann@web.de>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

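    A usage sketch, not from the commit (the submodule path is a
    placeholder):

        git rm vendor/libfoo      # refuses if the submodule has local changes
        git rm -f vendor/libfoo   # forced removal despite local changes
        git commit -m "remove the libfoo submodule"
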
* | | | | | | Merge branch 'jk/strbuf-detach-always-non-null'  (Jeff King, 2012-10-25)

    * jk/strbuf-detach-always-non-null:
      strbuf: always return a non-NULL value from strbuf_detach

| * | | | | | | strbuf: always return a non-NULL value from strbuf_detach  (Jeff King, 2012-10-18)

    The current behavior is to return NULL when the strbuf did not actually
    allocate a string. This can be quite surprising to callers, though, who
    may feed the strbuf from arbitrary data and expect to always get a valid
    value.

    In most cases, it does not make a difference because calling any strbuf
    function will cause an allocation (even if the function ends up not
    inserting any data). But if the code is structured like:

        struct strbuf buf = STRBUF_INIT;
        if (some_condition)
                strbuf_addstr(&buf, some_string);
        return strbuf_detach(&buf, NULL);

    then you may or may not return NULL, depending on the condition. This
    can cause us to segfault in http-push (when fed an empty URL) and in
    http-backend (when an empty parameter like "foo=bar&&" is in the
    $QUERY_STRING).

    This patch forces strbuf_detach to allocate an empty NUL-terminated
    string when it is called on a strbuf that has not been allocated.

    I investigated all call-sites of strbuf_detach. The majority are either
    not affected by the change (because they call a strbuf_* function
    unconditionally), or can handle the empty string just as easily as NULL.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | | | | | Merge branch 'js/mingw-fflush-errno'  (Jeff King, 2012-10-25)

    * js/mingw-fflush-errno:
      maybe_flush_or_die: move a too-loose Windows specific error check to compat

| * | | | | | | | maybe_flush_or_die: move a too-loose Windows specific error check to compat  (Johannes Sixt, 2012-10-17)

    Commit b2f5e268 (Windows: Work around an oddity when a pipe with no
    reader is written to) introduced a check for EINVAL after fflush() to
    fight spurious "Invalid argument" errors on Windows when a pipe was
    broken. But this check may hide real errors on systems that do not have
    this odd behavior.

    Introduce an fflush wrapper in compat/mingw.* so that the treatment is
    only applied on Windows.

    Signed-off-by: Johannes Sixt <j6t@kdbg.org>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | | | | | | Merge branch 'da/mergetools-p4'  (Jeff King, 2012-10-25)

    * da/mergetools-p4:
      mergetools/p4merge: Handle "/dev/null"

| * | | | | | | | | mergetools/p4merge: Handle "/dev/null"  (David Aguilar, 2012-10-11)

    p4merge does not properly handle the case where "/dev/null" is passed as
    a filename. Work it around by creating a temporary file for this
    purpose.

    Reported-by: Jeremy Morton <admin@game-point.net>
    Signed-off-by: David Aguilar <davvid@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

    ---
    Needs to be amended with Tested-by when a report comes...

* | | | | | | | | Merge branch 'jc/test-say-color-avoid-echo-escape'  (Jeff King, 2012-10-25)

    The recent nd/wildmatch series was the first to reveal this ancient bug
    in the test scaffolding.

    * jc/test-say-color-avoid-echo-escape:
      test-lib: Fix say_color () not to interpret \a\b\c in the message

| * | | | | | | | | test-lib: Fix say_color () not to interpret \a\b\c in the message  (Junio C Hamano, 2012-10-11)

    When running with color disabled (e.g. under prove to produce TAP
    output), the say_color() helper function is defined to use echo to show
    the message. With a message that ends with "\c", echo is allowed to
    interpret it as "Do not end the line with LF".

    Use printf "%s\n" to emit the message literally.

    Signed-off-by: Junio C Hamano <gitster@pobox.com>

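    The difference in a nutshell (a standalone shell sketch, not test-lib.sh
    itself):

        msg='done\c'
        echo "$msg"            # some shells' echo eats "\c" and the newline
        printf "%s\n" "$msg"   # prints the message literally, newline included
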
* | | | | | | | | | Merge branch 'nd/attr-match-optim'  (Jeff King, 2012-10-25)

    Trivial and obvious optimization for finding attributes that match a
    given path.

    * nd/attr-match-optim:
      attr: avoid searching for basename on every match
      attr: avoid strlen() on every match

| * | | | | | | | | | attr: avoid searching for basename on every match  (Nguyễn Thái Ngọc Duy, 2012-10-05)

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | | | attr: avoid strlen() on every match  (Nguyễn Thái Ngọc Duy, 2012-10-05)

    Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | | | | | | | Merge branch 'jk/peel-ref'  (Jeff King, 2012-10-25)

    Speeds up "git upload-pack" (what is invoked by "git fetch" on the other
    side of the connection) by reducing the cost to advertise the branches
    and tags that are available in the repository.

    * jk/peel-ref:
      upload-pack: use peel_ref for ref advertisements
      peel_ref: check object type before loading
      peel_ref: do not return a null sha1
      peel_ref: use faster deref_tag_noverify

| * | | | | | | | | | upload-pack: use peel_ref for ref advertisements  (Jeff King, 2012-10-04)

    When upload-pack advertises refs, we attempt to peel tags and advertise
    the peeled version. We currently hand-roll the tag dereferencing, and
    use as many optimizations as we can to avoid loading non-tag objects
    into memory.

    Not only has peel_ref recently learned these optimizations, too, but it
    also contains an even more important one: it has access to the "peeled"
    data from the pack-refs file. That means we can avoid not only loading
    annotated tags entirely, but also avoid doing any kind of object lookup
    at all.

    This cut the CPU time to advertise refs by 50% in the linux-2.6 repo, as
    measured by:

        echo 0000 | git-upload-pack . >/dev/null

    best-of-five, warm cache, objects and refs fully packed:

        [before]             [after]
        real    0m0.026s     real    0m0.013s
        user    0m0.024s     user    0m0.008s
        sys     0m0.000s     sys     0m0.000s

    Those numbers are irrelevantly small compared to an actual fetch. Here's
    a larger repo (400K refs, of which 12K are unique, and of which only 107
    are unique annotated tags):

        [before]             [after]
        real    0m0.704s     real    0m0.596s
        user    0m0.600s     user    0m0.496s
        sys     0m0.096s     sys     0m0.092s

    This shows only a 15% speedup (mostly because it has fewer actual tags
    to parse), but a larger absolute value (100ms, which isn't a lot
    compared to a real fetch, but this advertisement happens on every fetch,
    even if the client is just finding out they are completely up to date).

    In truly pathological cases, where you have a large number of unique
    annotated tags, it can make an even bigger difference. Here are the
    numbers for a linux-2.6 repository that has had every seventh commit
    tagged (so about 50K tags):

        [before]             [after]
        real    0m0.443s     real    0m0.097s
        user    0m0.416s     user    0m0.080s
        sys     0m0.024s     sys     0m0.012s

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | | | peel_ref: check object type before loading  (Jeff King, 2012-10-04)

    The point of peel_ref is to dereference tags; if the base object is not
    a tag, then we can return early without even loading the object into
    memory.

    This patch accomplishes that by checking sha1_object_info for the type.
    For a packed object, we can get away with just looking in the pack
    index. For a loose object, we only need to inflate the first couple of
    header bytes.

    This is a bit of a gamble; if we do find a tag object, then we will end
    up loading the content anyway, and the extra lookup will have been
    wasteful. However, if it is not a tag object, then we save loading the
    object entirely. Depending on the ratio of non-tags to tags in the
    input, this can be a minor win or minor loss.

    However, it does give us one potential major win: if a ref points to a
    large blob (e.g., via an unannotated tag), then we can avoid looking at
    it entirely.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | | | peel_ref: do not return a null sha1  (Jeff King, 2012-10-04)

    The idea of the peel_ref function is to dereference tag objects
    recursively until we hit a non-tag, and return the sha1. Conceptually,
    it should return 0 if it is successful (and fill in the sha1), or -1 if
    there was nothing to peel.

    However, the current behavior is much more confusing. For a regular
    loose ref, the behavior is as described above. But there is an
    optimization to reuse the peeled-ref value for a ref that came from a
    packed-refs file. If we have such a ref, we return its peeled value,
    even if that peeled value is null (indicating that we know the ref
    definitely does _not_ peel).

    It might seem like such information is useful to the caller, who would
    then know not to bother loading and trying to peel the object. Except
    that they should not bother loading and trying to peel the object
    _anyway_, because that fallback is already handled by peel_ref. In other
    words, the whole point of calling this function is that it handles those
    details internally, and you either get a sha1, or you know that it is
    not peel-able.

    This patch catches the null sha1 case internally and converts it into a
    -1 return value (i.e., there is nothing to peel). This simplifies
    callers, which do not need to bother checking themselves.

    Two callers are worth noting:

      - in pack-objects, a comment indicates that there is a difference
        between non-peelable tags and unannotated tags. But that is not the
        case (before or after this patch). Whether you get a null sha1 has
        to do with internal details of how peel_ref operated.

      - in show-ref, if peel_ref returns a failure, the caller tries to
        decide whether to try peeling manually based on whether the
        REF_ISPACKED flag is set. But this doesn't make any sense. If the
        flag is set, that does not necessarily mean the ref came from a
        packed-refs file with the "peeled" extension. But it doesn't matter,
        because even if it didn't, there's no point in trying to peel it
        ourselves, as peel_ref would already have done so. In other words,
        the fallback peeling is guaranteed to fail.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | | | peel_ref: use faster deref_tag_noverify  (Jeff King, 2012-10-04)

    When we are asked to peel a ref to a sha1, we internally call deref_tag,
    which will recursively parse each tagged object until we reach a
    non-tag. This has the benefit that we will verify our ability to load
    and parse the pointed-to object.

    However, there is a performance downside: we may not need to load that
    object at all (e.g., if we are simply listing peeled refs), or it may be
    a large object that should follow a streaming code path (e.g., an
    annotated tag of a large blob).

    It makes more sense for peel_ref to choose the fast thing rather than
    performing the extra check, for two reasons:

      1. We will already sometimes short-circuit the tag parsing in favor of
         a peeled entry from a packed-refs file. So we are already favoring
         speed in some cases, and it is not wise for a caller to rely on
         peel_ref to detect corruption.

      2. We already silently ignore much larger corruptions, like a ref that
         points to a non-existent object, or a tag object that exists but is
         corrupted.

         peel_ref is not the right place to check for such a database
         corruption. It is returning only the sha1 anyway, not the actual
         object. Any callers which use that sha1 to load an object will soon
         discover the corruption anyway, so we are really just pushing back
         the discovery to later in the program.

    Signed-off-by: Jeff King <peff@peff.net>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

* | | | | | | | | | | Merge branch 'bw/config-lift-variable-name-length-limit'  (Jeff King, 2012-10-25)

    The configuration parser had an unnecessary hardcoded limit on variable
    names that was not checked consistently. Lift the limit.

    * bw/config-lift-variable-name-length-limit:
      Remove the hard coded length limit on variable names in config files

| * | | | | | | | | | | Remove the hard coded length limit on variable names in config files  (Ben Walton, 2012-10-01)

    Previously while reading the variable names in config files, there was a
    256 character limit with at most 128 of those characters being used by
    the section header portion of the variable name. This limitation was
    only enforced while reading the config files. It was possible to write a
    config file that was not subsequently readable.

    Instead of enforcing this limitation for both reading and writing,
    remove it entirely by changing the var member of the config_file struct
    to a strbuf instead of a fixed length buffer. Update all of the parsing
    functions in config.c to use the strbuf instead of the static buffer.

    The parsing functions that returned the base length of the variable name
    now return simply 0 for success and -1 for failure. The base length
    information is obtained through the strbuf's len member.

    We now send the buf member of the strbuf to external callback functions
    to preserve the external api. None of the external callers rely on the
    old size limitation for sizing their own buffers so removing the limit
    should have no externally visible effect.

    Signed-off-by: Ben Walton <bdwalton@gmail.com>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

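    A quick way to see the effect, as a sketch (the key name is generated
    only to exceed the old 256-character cap):

        sub=$(printf 'x%.0s' $(seq 1 300))    # a 300-character subsection name
        git config "section.$sub.key" value   # writing such a name always worked
        git config --get "section.$sub.key"   # reading it back no longer fails
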
* | | | | | | | | | | | Merge branch 'fa/remote-svn'  (Jeff King, 2012-10-25)

    A GSoC project.

    * fa/remote-svn:
      Add a test script for remote-svn
      remote-svn: add marks-file regeneration
      Add a svnrdump-simulator replaying a dump file for testing
      remote-svn: add incremental import
      remote-svn: Activate import/export-marks for fast-import
      Create a note for every imported commit containing svn metadata
      vcs-svn: add fast_export_note to create notes
      Allow reading svn dumps from files via file:// urls
      remote-svn, vcs-svn: Enable fetching to private refs
      When debug==1, start fast-import with "--stats" instead of "--quiet"
      Add documentation for the 'bidi-import' capability of remote-helpers
      Connect fast-import to the remote-helper via pipe, adding 'bidi-import' capability
      Add argv_array_detach and argv_array_free_detached
      Add svndump_init_fd to allow reading dumps from arbitrary FDs
      Add git-remote-testsvn to Makefile
      Implement a remote helper for svn in C

| * | | | | | | | | | | | Add a test script for remote-svn  (Florian Achleitner, 2012-10-07)

    Use svnrdump_sim.py to emulate svnrdump without an svn server. Tests
    fetching, incremental fetching, fetching from file://, and the
    regeneration of fast-import's marks file.

    Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
    Acked-by: David Michael Barr <b@rr-dav.id.au>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | | | | | remote-svn: add marks-file regeneration  (Florian Achleitner, 2012-10-07)

    fast-import mark files are stored outside the object database and are
    therefore not fetched and can be lost somehow else. Marks provide a
    svn revision --> git sha1 mapping, while the notes that are attached to
    each commit when it is imported provide a git sha1 --> svn revision
    mapping.

    If the marks file is not available or not plausible, regenerate it by
    walking through the notes tree. The plausibility check tests whether the
    highest revision in the marks file matches the revision of the top ref.
    It doesn't ensure that the marks file is completely correct; this could
    only be done with an effort equal to unconditional regeneration.

    Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
    Acked-by: David Michael Barr <b@rr-dav.id.au>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | | | | | Add a svnrdump-simulator replaying a dump file for testing  (Florian Achleitner, 2012-10-07)

    To ease testing without depending on a reachable svn server, this
    compact python script mimics parts of svnrdump's behaviour. It requires
    the remote url to start with sim://.

    Start and end revisions are evaluated. If the requested revision doesn't
    exist, as is the case with incremental imports when no new commit was
    added, it returns 1 (like svnrdump).

    To allow using the same dump file for simulating multiple incremental
    imports, the highest revision can be limited by setting the environment
    variable SVNRMAX to that value. This simulates the situation where
    higher revs don't exist yet.

    Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
    Acked-by: David Michael Barr <b@rr-dav.id.au>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

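    A hypothetical test invocation (the remote name "svnsim" is made up; the
    simulator is selected by a sim:// url as described above, and SVNRMAX
    reaches it through the environment):

        SVNRMAX=3 git fetch svnsim    # pretend only revisions up to r3 exist
        SVNRMAX=6 git fetch svnsim    # later, "new" revisions become visible
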
| * | | | | | | | | | | | remote-svn: add incremental import  (Florian Achleitner, 2012-10-07)

    Search for a note attached to the ref to update and read its
    'Revision-number:' line. Start import from the next svn revision. If
    there is no next revision in the svn repo, svnrdump terminates with a
    message on stderr and a non-zero return value. This looks a little
    weird, but there is no other way to know whether there is a new revision
    in the svn repo.

    On the start of an incremental import, the parent of the first commit in
    the fast-import stream is set to the branch name to update. All
    following commits specify their parent by a mark number. Previous mark
    files are currently not reused.

    Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
    Acked-by: David Michael Barr <b@rr-dav.id.au>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | | | | | remote-svn: Activate import/export-marks for fast-import  (Florian Achleitner, 2012-10-07)

    Enable import and export of a marks file by sending the appropriate
    feature commands to fast-import before sending data.

    Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
    Acked-by: David Michael Barr <b@rr-dav.id.au>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | | | | | Create a note for every imported commit containing svn metadata  (Florian Achleitner, 2012-10-07)

    To provide metadata from svn dumps for further processing, e.g. branch
    detection, attach a note to each imported commit that stores additional
    information. The notes are currently hard-coded in refs/notes/svn/revs.

    Currently the following lines from the svn dump are directly accumulated
    in the note. This can be refined as needed.

     - "Revision-number"
     - "Node-path"
     - "Node-kind"
     - "Node-action"
     - "Node-copyfrom-path"
     - "Node-copyfrom-rev"

    Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
    Acked-by: David Michael Barr <b@rr-dav.id.au>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

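    A sketch of how the recorded metadata can be inspected after an import
    (assumes an imported commit is checked out):

        # show the svn metadata note attached to the current commit
        git notes --ref=svn/revs show HEAD
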
| * | | | | | | | | | | | vcs-svn: add fast_export_note to create notes  (Dmitry Ivankov, 2012-10-07)

    fast_export lacked a method to write notes to the fast-import stream.
    Add two new functions: fast_export_note, which is similar to
    fast_export_modify, and fast_export_buf_to_data, to be able to write
    inline blobs that don't come from a line_buffer or from delta
    application.

    To be used like this:

        fast_export_begin_commit("refs/notes/somenotes", ...)
        fast_export_note("refs/heads/master", "inline")
        fast_export_buf_to_data(&data)

    or maybe

        fast_export_note("refs/heads/master", sha1)

    Signed-off-by: Dmitry Ivankov <divanorama@gmail.com>
    Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
    Acked-by: David Michael Barr <b@rr-dav.id.au>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

| * | | | | | | | | | | | Allow reading svn dumps from files via file:// urls  (Florian Achleitner, 2012-10-07)

    For testing as well as for importing large, already available dumps,
    it's useful to bypass svnrdump and replay the svndump from a file
    directly.

    Add support for file:// urls in the remote url, e.g.

        svn::file:///path/to/dump

    When the remote helper finds an url starting with file:// it tries to
    open that file instead of invoking svnrdump.

    Signed-off-by: Florian Achleitner <florian.achleitner.2.6.31@gmail.com>
    Acked-by: David Michael Barr <b@rr-dav.id.au>
    Signed-off-by: Junio C Hamano <gitster@pobox.com>

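    A usage sketch (the dump path and target directory are placeholders; the
    "svn::" prefix mirrors the url form shown above and assumes the helper
    is installed under that name):

        # import straight from a dump file on disk, bypassing svnrdump
        git clone "svn::file:///path/to/project.svndump" project-from-dump
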