path: root/streaming.c
* open_istream(): do not dereference NULL in the error case (Junio C Hamano, 2014-02-18)

  When the stream filter cannot be attached, the helper that attaches it
  is expected to return NULL, and we should then close the stream we
  opened and signal an error by returning NULL ourselves from this
  function. However, we attempted to dereference that NULL pointer
  between the point where we detected the error and the return from the
  function.

  Brought-to-attention-by: John Keeping <john@keeping.me.uk>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

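A rough sketch of the corrected error path described above; the helper
name attach_stream_filter() and the local variable names are only
illustrative here, not quoted from streaming.c:

      if (filter) {
              struct git_istream *nst = attach_stream_filter(st, filter);
              if (!nst) {
                      close_istream(st);  /* release the stream we opened */
                      return NULL;        /* report the error; never touch nst */
              }
              st = nst;
      }
      return st;
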
* Merge branch 'ef/mingw-write' (Junio C Hamano, 2014-01-27)

  * ef/mingw-write:
      mingw: remove mingw_write
      prefer xwrite instead of write

* prefer xwrite instead of write (Erik Faye-Lund, 2014-01-17)

  Our xwrite wrapper already deals with a few potential hazards, and is
  as such more robust. Prefer it over plain write to get the robustness
  benefits everywhere.

  Signed-off-by: Erik Faye-Lund <kusmabite@gmail.com>
  Reviewed-and-improved-by: Jonathan Nieder <jrnieder@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

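For illustration only (this is not a hunk from the commit): xwrite()
takes the same arguments as write(2) but restarts the call when it is
interrupted with EINTR or told to retry with EAGAIN, so callers only
see real errors or genuinely short writes:

      ssize_t written = xwrite(fd, buf, len);  /* retries EINTR/EAGAIN internally */
      if (written < 0)
              return error("write failed");
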
* sha1_object_info_extended(): add an "unsigned flags" parameter (Christian Couder, 2013-12-12)

  This parameter is not used yet, but it will be used to tell
  sha1_object_info_extended() if it should perform object replacement
  or not.

  Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'jk/cat-file-batch-optim' (Junio C Hamano, 2013-07-24)

  If somebody wants to know only the on-disk footprint of an object,
  without having to know its type or payload size, we can bypass a lot
  of code and learn it cheaply.

  * jk/cat-file-batch-optim:
      Fix some sparse warnings
      sha1_object_info_extended: pass object_info to helpers
      sha1_object_info_extended: make type calculation optional
      packed_object_info: make type lookup optional
      packed_object_info: hoist delta type resolution to helper
      sha1_loose_object_info: make type lookup optional
      sha1_object_info_extended: rename "status" to "type"
      cat-file: disable object/refname ambiguity check for batch mode

* Fix some sparse warnings (Ramsay Jones, 2013-07-18)

  Sparse issues some "Using plain integer as NULL pointer" warnings.
  Each warning relates to the use of a '{0}' initialiser expression in
  the declaration of a 'struct object_info'. The first field of this
  structure has pointer type. Thus, in order to suppress these
  warnings, we replace the initialiser expression with '{NULL}'.

  Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
  Acked-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

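The whole change boils down to this pattern (the declaration is shown
only as an illustration of the before/after initialisers):

      struct object_info oi = {0};     /* before: sparse warns, 0 used as a null pointer */
      struct object_info oi = {NULL};  /* after: the first member is a pointer, so use NULL */
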
* sha1_object_info_extended: make type calculation optional (Jeff King, 2013-07-12)

  Each caller of sha1_object_info_extended sets up an object_info
  struct to tell the function which elements of the object it wants to
  get. Until now, getting the type of the object has always been
  required (and it is returned via the return type rather than a
  pointer in object_info).

  This can involve actually opening a loose object file to determine
  its type, or following delta chains to determine a packed file's base
  type. These effects produce a measurable slow-down when doing a
  "cat-file --batch-check" that does not include %(objecttype).

  This patch adds a "typep" query to struct object_info, so that it can
  be optionally queried just like size and disk_size. As a result, the
  return type of the function is no longer the object type, but rather
  0/-1 for success/error.

  As there are only three callers total, we just fix up each caller
  rather than keep a compatibility wrapper:

    1. The simpler sha1_object_info wrapper continues to always ask for
       and return the type field.

    2. The istream_source function wants to know the type, and so
       always asks for it.

    3. The cat-file batch code asks for the type only when
       %(objecttype) is part of the format string.

  On linux.git, the best-of-five for running:

      $ git rev-list --objects --all >objects
      $ time git cat-file --batch-check='%(objectsize:disk)'

  on a fully packed repository goes from:

      real    0m8.680s
      user    0m8.160s
      sys     0m0.512s

  to:

      real    0m7.205s
      user    0m6.580s
      sys     0m0.608s

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

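A minimal caller sketch under the new convention (the typep/sizep
fields and the 0/-1 return are taken from the message above; the
"unsigned flags" parameter from the 2013-12-12 entry does not exist yet
at this point, and sha1 is assumed to be in scope):

      struct object_info oi = {NULL};
      enum object_type type;
      unsigned long size;

      oi.typep = &type;   /* request the type only because we need it */
      oi.sizep = &size;
      if (sha1_object_info_extended(sha1, &oi) < 0)
              return error("unable to get object info");
      /* type and size are filled in; the return value only says 0/-1 */
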
* open_istream: remove unneeded check for null pointer (Stefan Beller, 2013-07-23)

  'st' is allocated via xmalloc a few lines before and passed to the
  stream opening functions. The xmalloc function is written such that
  either 'st' is assigned valid memory or xmalloc dies. The calls to
  open_istream_* do not change 'st' either, as the pointer itself is
  passed by value rather than as a pointer to a pointer. Hence 'st'
  cannot be NULL at that point in the code.

  Signed-off-by: Stefan Beller <stefanbeller@googlemail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* zero-initialize object_info structs (Jeff King, 2013-07-07)

  The sha1_object_info_extended function expects the caller to provide
  a "struct object_info" which contains pointers to "query" items that
  will be filled in. The purpose of providing pointers rather than
  storing the response directly in the struct is so that callers can
  choose not to incur the expense in finding particular fields that
  they do not care about.

  Right now the only query item is "sizep", and all callers set it
  explicitly to choose whether or not to query it; they can then leave
  the rest of the struct uninitialized.

  However, as we add new query items, each caller will have to be
  updated to explicitly turn off the new ones (by setting them to
  NULL). Instead, let's teach each caller to zero-initialize the
  struct, so that they do not have to learn about each new query item
  added.

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

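A hedged sketch of the calling pattern this establishes (at this point
sha1_object_info_extended() still returned the object type; the
initialiser spelling was later adjusted to {NULL}, as an entry further
up notes, and sha1 is assumed to be in scope):

      struct object_info oi = {0};  /* query pointers we never set stay NULL */
      unsigned long size;
      enum object_type type;

      oi.sizep = &size;             /* the only item this caller cares about */
      type = sha1_object_info_extended(sha1, &oi);
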
* avoid infinite loop in read_istream_loose (Jeff King, 2013-03-27)

  The read_istream_loose function loops on inflating a chunk of data
  from an mmap'd loose object. We end the loop when we run out of space
  in our output buffer, or if we see a zlib error.

  We need to treat Z_BUF_ERROR specially, though, as it is not fatal;
  it is just zlib's way of telling us that we need to either feed it
  more input or give it more output space. It is perfectly normal for
  us to hit this when we are at the end of our buffer.

  However, we may also get Z_BUF_ERROR because we have run out of
  input. In a well-formed object, this should not happen, because we
  have fed the whole mmap'd contents to zlib. But if the object is
  truncated or corrupt, we will loop forever, never giving zlib any
  more data, but continuing to ask it to inflate.

  We can fix this by considering it an error when zlib returns
  Z_BUF_ERROR but we still have output space left (which means it must
  want more input, which we know is a truncation error). It would not
  be sufficient to just check whether zlib had consumed all the input
  at the start of the loop, as it might still want to generate output
  from what is in its internal state.

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

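The shape of the check, sketched against plain zlib rather than git's
wrapper (variable names are illustrative):

      int status = inflate(&zs, Z_NO_FLUSH);

      if (status == Z_BUF_ERROR && zs.avail_out > 0) {
              /*
               * zlib wants more input, but the whole mmap'd object has
               * already been fed to it; the object must be truncated
               * or corrupt, so bail out instead of looping forever.
               */
              return -1;
      }
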
* read_istream_filtered: propagate read error from upstream (Jeff King, 2013-03-27)

  The filter istream pulls data from an "upstream" stream, running it
  through a filter function. However, we did not properly notice when
  the upstream stream yielded an error, and just returned what we had
  read. Instead, we should propagate the error.

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* stream_blob_to_fd: detect errors reading from stream (Jeff King, 2013-03-27)

  We call read_istream, but never check its return value for errors.
  This can lead to us looping infinitely, as we just keep trying to
  write "-1" bytes (and we do not notice the error, as we simply check
  that write_in_full reports the same number of bytes we fed it, which
  of course is also -1).

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

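A simplified sketch of the corrected loop; the buffer size, names, and
error returns are illustrative rather than quoted from streaming.c:

      char buf[1024 * 16];
      ssize_t readlen;

      for (;;) {
              readlen = read_istream(st, buf, sizeof(buf));
              if (readlen < 0)
                      return -1;      /* propagate the stream error */
              if (!readlen)
                      break;          /* end of stream */
              if (write_in_full(fd, buf, readlen) != readlen)
                      return -1;
      }
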
* pack-objects, streaming: turn "xx >= big_file_threshold" to ".. > .." (Nguyễn Thái Ngọc Duy, 2012-05-18)

  This is because all other places do "xx > big_file_threshold".

  Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* streaming: void pointer instead of char pointer (René Scharfe, 2012-05-03)

  Allow any kind of buffer to be fed to read_istream() without an
  explicit cast by making its buf argument a void pointer. It's about
  arbitrary data, not only characters.

  Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
  Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

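In prototype form, the change described above amounts to something
like the following (a sketch; the surrounding parameter names are
assumed, not quoted from the commit):

      /* before */
      ssize_t read_istream(struct git_istream *st, char *buf, size_t sz);
      /* after: any buffer can now be passed without a cast */
      ssize_t read_istream(struct git_istream *st, void *buf, size_t sz);
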
* streaming: make streaming-write-entry to be more reusable (Junio C Hamano, 2012-03-07)

  The static function in entry.c takes a cache entry and streams its
  blob contents to a file in the working tree. Refactor the logic to a
  new API function stream_blob_to_fd() that takes an object name and an
  open file descriptor, so that it can be reused by other callers.

  Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

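A hedged usage sketch of the new helper; the exact parameter list (file
descriptor, object name, optional stream filter, and a flag saying
whether fd is seekable) is assumed from this description and later
callers, and fd, path and sha1 are assumed to be in scope:

      struct stream_filter *filter = get_stream_filter(path, sha1);

      if (stream_blob_to_fd(fd, sha1, filter, 1))
              return error("failed to stream blob to '%s'", path);
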
* Merge branch 'jc/streaming-filter' (Junio C Hamano, 2011-08-01)

  * jc/streaming-filter:
      streaming: free git_istream upon closing

* streaming: free git_istream upon closing (Jeff King, 2011-07-22)

  Kirill Smelkov noticed that post-1.7.6 "git checkout" started leaking
  tons of memory. The streaming_write_entry function properly calls
  close_istream(), but that function did not actually free() the
  allocated git_istream struct.

  The git_istream struct is totally opaque to calling code, and must be
  heap-allocated by open_istream. Therefore it's not appropriate for
  callers to have to free it.

  This patch makes close_istream() into "close and de-allocate all
  associated resources". We could add a new "free_istream" call, but
  there's not much point in letting callers inspect the istream after
  close. And this patch's semantics make us match fopen/fclose, which
  is well-known and understood.

  Signed-off-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* Merge branch 'jc/zlib-wrap' (Junio C Hamano, 2011-07-19)

  * jc/zlib-wrap:
      zlib: allow feeding more than 4GB in one go
      zlib: zlib can only process 4GB at a time
      zlib: wrap deflateBound() too
      zlib: wrap deflate side of the API
      zlib: wrap inflateInit2 used to accept only for gzip format
      zlib: wrap remaining calls to direct inflate/inflateEnd
      zlib wrapper: refactor error message formatter

  Conflicts:
      sha1_file.c

* stream filter: add "no more input" to the filters (Junio C Hamano, 2011-05-26)

  Some filters may need to buffer the input and look ahead inside it to
  decide what to output, and they may consume more than zero bytes of
  input and still not produce any output. After feeding all the input,
  pass NULL as input as you keep calling stream_filter() to let such
  filters know there is no more input coming, and that it is time for
  them to produce the remaining output based on the buffered input.

  Signed-off-by: Junio C Hamano <gitster@pobox.com>

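A sketch of the draining convention described above; the
stream_filter() signature with in/out size pointers is assumed here,
not quoted from the commit:

      char out[4096];
      size_t to_receive = sizeof(out);

      /* all real input has already been fed in earlier calls */
      if (stream_filter(filter, NULL, NULL, out, &to_receive))
              return -1;  /* the filter reported an error */
      /* the filter emitted (sizeof(out) - to_receive) buffered bytes */
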
* Add streaming filter API (Junio C Hamano, 2011-05-26)

  This introduces an API to plug custom filters to an input stream.

  The caller calls get_stream_filter("path") to obtain an appropriate
  filter for the path, and then uses it when opening an input stream
  via open_istream(). After that, the caller can read from the stream
  with read_istream(), and close it with close_istream(), just like an
  unfiltered stream.

  This only adds a "null" filter that is a pass-thru filter, but later
  changes can add LF-to-CRLF and other filters, and the callers of the
  streaming API do not have to change.

  Signed-off-by: Junio C Hamano <gitster@pobox.com>

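Putting the calls named above together, a hedged caller sketch (the
open_istream() parameter list is assumed from this description and the
entries around it; path and sha1 are assumed to be in scope):

      enum object_type type;
      unsigned long size;
      struct stream_filter *filter = get_stream_filter(path, sha1);
      struct git_istream *st = open_istream(sha1, &type, &size, filter);

      if (!st)
              return error("cannot stream blob %s", sha1_to_hex(sha1));
      /* read_istream(st, buf, len) as with an unfiltered stream ... */
      close_istream(st);
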
* streaming: read loose objects incrementally (Junio C Hamano, 2011-05-20)

  Helped-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* streaming: read non-delta incrementally from a pack (Junio C Hamano, 2011-05-20)

  Helped-by: Jeff King <peff@peff.net>
  Signed-off-by: Junio C Hamano <gitster@pobox.com>

* streaming: a new API to read from the object store (Junio C Hamano, 2011-05-20)

  Given an object name, use open_istream() to get a git_istream handle
  that you can read_istream() from, as if you were using read(2) to
  read the contents of the object, and close it with close_istream()
  when you are done.

  Currently, we do not do anything fancy -- it just calls
  read_sha1_file() and keeps the contents in memory as a whole, and
  carves it out as you request with read_istream().

  Signed-off-by: Junio C Hamano <gitster@pobox.com>

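A minimal read loop over this API (a sketch; the signatures are assumed
from the later state of the interface described in the entries above,
with NULL passed where the filter parameter ends up, and sha1 assumed
to be in scope):

      enum object_type type;
      unsigned long size;
      char buf[1024 * 16];
      ssize_t readlen;
      struct git_istream *st = open_istream(sha1, &type, &size, NULL);

      if (!st)
              return error("cannot open %s", sha1_to_hex(sha1));
      while ((readlen = read_istream(st, buf, sizeof(buf))) > 0)
              ;       /* in real use, consume buf[0..readlen-1] here */
      close_istream(st);
      if (readlen < 0)
              return error("read error on %s", sha1_to_hex(sha1));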