path: root/include/mupdf/pdf/xref.h
Date | Commit message | Author
2018-03-22 | Add pdf_add_new_dict family of functions. | Tor Andersson
Create a new empty dictionary, add it to the xref and return a new indirect reference object that points to it. This indirect reference needs to be dropped, since it was created with a function using the 'new' keyword.
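A minimal usage sketch of this, not part of the commit; the signatures are assumed from the current public headers:

    #include "mupdf/fitz.h"
    #include "mupdf/pdf.h"

    /* Hedged sketch: create an empty dictionary in the xref and hand back the
     * indirect reference, which the caller is responsible for dropping. */
    static pdf_obj *
    make_example_dict(fz_context *ctx, pdf_document *doc)
    {
        pdf_obj *ref = pdf_add_new_dict(ctx, doc, 4); /* 4 = initial capacity hint */
        /* ... populate via the pdf_dict_put helpers; dictionary operations
         * resolve the indirect reference automatically ... */
        return ref; /* created by an 'add/new' function, so the caller must pdf_drop_obj it */
    }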
2018-02-02 | Signature support: separate pkcs7 specifics into a separate file. | Paul Gardiner
Previously, pdf-pkcs7.c contained a mishmash of functions required for creating and checking signatures, with no separation between the parts relating to pdf and those relating to pkcs7. This commit introduces pdf_signature.c which contains the pdf specifics, leaving pdf-pkcs7.c to be purely pkcs7 functions. This should more easily allow the use of pkcs7 solutions other than openssl. The pkcs7 api is declared in pdf-pkcs7.h. It is entirely free of mupdf specifics, other than using an fz_stream to specify the bytes to be hashed.
2017-12-13 | PDF object numbers need not be int64_t, int is sufficient. | Sebastian Rasmussen
This is true because they are now limited below PDF_MAX_OBJECT_NUMBER.
2017-11-01 | Use int64_t for public file API offsets. | Tor Andersson
Don't mess with conditional compilation with LARGEFILE -- always expose 64-bit file offsets in our public API.
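A small sketch of what the 64-bit offset API looks like in use, not part of the commit; fz_seek and fz_tell signatures are assumed from the current fitz headers:

    #include <stdio.h>
    #include "mupdf/fitz.h"

    /* Hedged sketch: public stream offsets are plain int64_t, with no
     * FZ_LARGEFILE conditional compilation. */
    static int64_t
    stream_length(fz_context *ctx, fz_stream *stm)
    {
        fz_seek(ctx, stm, 0, SEEK_END); /* only meaningful for seekable streams */
        return fz_tell(ctx, stm);       /* 64-bit position */
    }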
2016-11-16 | pdf: Add 'compressed/raw' flag to pdf_add_stream. | Tor Andersson
Also expose the argument to JS and JNI.
2016-11-14 | Add optional 'object' argument to pdf_add_stream. | Tor Andersson
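A sketch of how the pdf_add_stream shape described in this and the previous entry might be used, not part of either commit; the exact signature is assumed to be pdf_add_stream(ctx, doc, buf, dict, compressed), where 'dict' (optional, may be NULL) seeds the stream dictionary and 'compressed' says whether 'buf' already matches the dictionary's /Filter:

    #include <string.h>
    #include "mupdf/fitz.h"
    #include "mupdf/pdf.h"

    /* Hedged sketch: add an uncompressed stream with a default dictionary. */
    static pdf_obj *
    add_plain_stream(fz_context *ctx, pdf_document *doc, const char *text)
    {
        fz_buffer *buf = fz_new_buffer_from_copied_data(ctx, (const unsigned char *)text, strlen(text));
        pdf_obj *ref = NULL;
        fz_try(ctx)
            ref = pdf_add_stream(ctx, doc, buf, NULL, 0); /* 0 = raw/uncompressed data */
        fz_always(ctx)
            fz_drop_buffer(ctx, buf); /* the document keeps its own reference */
        fz_catch(ctx)
            fz_rethrow(ctx);
        return ref; /* indirect reference; caller drops it when done */
    }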
2016-09-01 | pdf: Load/open streams by indirect reference object when possible. | Tor Andersson
2016-07-06 | Fix garbage collection and page grafting for indirect reference chains. | Tor Andersson
The mark & sweep pass of garbage collection, and resolving indirect objects when grafting objects was following the full chain of indirect references. In the unusual case where a numbered object is itself only an indirect reference to another object, this intermediate numbered object would be missed both when marking for garbage collection, and when copying objects for grafting. Add a function to resolve only one step for these two uses. The following is an example of a file that would break during garbage collection if we follow full indirect reference chains: %PDF-1.3 1 0 obj <</Type/Catalog /Foo[2 0 R 3 0 R]>> endobj 2 0 obj 4 0 R endobj 3 0 obj 5 0 R endobj 4 0 obj <</Length 1>> stream A endstream endobj 5 0 obj <</Length 1>> stream B endstream endobj
2016-07-06 | pdf: Drop generation number from public interfaces. | Tor Andersson
The generation number is only needed for decryption, and is assumed to be zero or irrelevant for all other uses. Store the original object number and generation in the xref slot, so that we can decrypt them even when the objects have been renumbered, without needing to pass the original object number around through the stream loading APIs.
2016-04-28 | Refactor fz_image code cases. | Robin Watts
Split compressed images (images based on a compressed buffer) and pixmap images (images based on a pixmap) out into separate subclasses.
2016-03-01 | Rename pdf_new_ref to pdf_add_object. | Tor Andersson
2015-10-01 | Bug 696146: Improve pdf_repair to find /Root in new style XRefs. | Robin Watts
The current code never looks for /Root objects in dictionaries as it parses them. This means that 'New style' files end up without any Roots after repair.

The new code therefore updates pdf_repair_obj to look for Root objects in the same way it looks for encrypt and id objects. These go into the list of found roots.

The Root object almost certainly has indirections within it, so it is vital that the 'doc' pointer gets set. This means we have to make a slight adjustment to pdf_repair_obj so that the dict is parsed with a doc pointer. In turn this means we need to manually ensure that none of the other information read from the dict during the repair operation will cause indirections to be resolved. This is achieved by checking for !pdf_is_indirect at various points.
2015-08-27 | Support several levels of incremental xref | Paul Gardiner
This fixes bug #696123 by allowing multiple signatures each to be written to the document in a separate incremental update.

Add count num_incremental_sections to keep track of the number of incremental sections.

Add xref_base, which can be set between 0 and num_incremental_sections inclusive to access different versions of the document.

Add disallow_new_increments flag that stops new incremental sections being provoked by the creation of an xref stream.

Move the unsaved_sigs list from the document structure to the xref structure. With this commit in place, the lists will never grow beyond length one, but we've maintained the list structure in case other cases need supporting in the future.

Add an end offset field to the xref structure, so that during completion of signatures the document length of the various incremental versions of the document are available.

Factor out functions for storing unsaved signatures and for checking if an object is an unsaved signature.

Do deep copy of objects that require the holding of several versions.
2015-05-15 | Support pdf files larger than 2Gig. | Robin Watts
If FZ_LARGEFILE is defined when building, MuPDF uses 64bit offsets for files; this allows us to open streams larger than 2Gig.

The downsides to this are that:
* The xref entries are larger.
* All PDF ints are held as 64bit things rather than 32bit things (to cope with /Prev entries, hint stream offsets etc).
* All file positions are stored as 64bits rather than 32.

The implementation works by detecting FZ_LARGEFILE. Some #ifdeffery in fitz/system.h sets fz_off_t to either int or int64_t as appropriate, and sets defines for fz_fopen, fz_fseek, fz_ftell etc as required. These call the fseeko64 etc functions on linux (and so define _LARGEFILE64_SOURCE) and the explicit 64bit functions on windows.
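The conditional typedef described here might have looked roughly like the following; this is an illustrative reconstruction from the commit text only, and the mechanism was later removed by the 2017-11-01 change listed above:

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative sketch of the FZ_LARGEFILE switch described above. */
    #ifdef FZ_LARGEFILE
    typedef int64_t fz_off_t;
    #define fz_fseek fseeko64   /* linux; the windows build maps to the explicit 64-bit calls */
    #define fz_ftell ftello64
    #else
    typedef int fz_off_t;
    #define fz_fseek fseek
    #define fz_ftell ftell
    #endif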
2015-03-20 | Automatically update /Length and /Filter in pdf_update_stream. | Tor Andersson
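A hedged sketch of the behaviour this entry describes, not part of the commit; the signature is assumed to be pdf_update_stream(ctx, doc, ref, buf, compressed):

    #include "mupdf/fitz.h"
    #include "mupdf/pdf.h"

    /* Sketch: the caller only swaps in the new buffer; /Length (and /Filter,
     * when the data is plain) are maintained by pdf_update_stream itself. */
    static void
    replace_stream_contents(fz_context *ctx, pdf_document *doc, pdf_obj *ref, fz_buffer *buf)
    {
        pdf_update_stream(ctx, doc, ref, buf, 0); /* 0 = 'buf' holds uncompressed data */
    }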
2015-02-17 | Add ctx parameter and remove embedded contexts for API regularity. | Tor Andersson
Purge several embedded contexts:
* Remove embedded context in fz_output.
* Remove embedded context in fz_stream.
* Remove embedded context in fz_device.
* Remove fz_rebind_stream (since it is no longer necessary).
* Remove embedded context in svg_device.
* Remove embedded context in XML parser.
* Add ctx argument to fz_document functions.
* Remove embedded context in fz_document.
* Remove embedded context in pdf_document.
* Remove embedded context in pdf_obj.

Make fz_page independent of fz_document in the interface. We shouldn't need to pass the document to all functions handling a page. If a page is tied to the source document, it's redundant; otherwise it's just pointless.

Fix reference counting oddity in fz_new_image_from_pixmap.
2015-01-06 | Add xref_index to speed searching of sparse xrefs. | Robin Watts
Add a new index that quickly maps object number to the first xref in which an object appears. This appears to get us the speed back that we lost when moving to sparse xrefs.
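A hypothetical sketch of the idea, not part of the commit; the names and layout here are illustrative rather than mupdf's actual definitions:

    /* Map each object number straight to the first xref section that defines
     * it, rather than scanning every section on each lookup. */
    typedef struct
    {
        int num_objects;
        int *xref_index;  /* xref_index[num] = index of first xref section holding object 'num' */
    } xref_index_sketch;

    static int
    first_section_for(const xref_index_sketch *idx, int num)
    {
        return idx->xref_index[num]; /* O(1) versus walking all sections */
    }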
2014-12-29 | Performance optimisation with pdf_cache_object/pdf_get_xref_entry | Robin Watts
The recent change to holding pdf xrefs in a sparse format has resulted in a significant decrease in speed (x10). Malc points out that some of this (2x) can be recovered simply by making pdf_cache_object return the entry which it found the object in. This saves us having to immediately call pdf_get_xref_entry again afterwards. I am still thinking about ways to try and get the remaining time back.
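A hedged sketch of the calling pattern this enables, not part of the commit; the return type (pdf_xref_entry *) is assumed from the current header:

    #include "mupdf/fitz.h"
    #include "mupdf/pdf.h"

    /* Sketch: pdf_cache_object hands back the xref entry it resolved, so the
     * follow-up pdf_get_xref_entry call disappears. */
    static pdf_obj *
    cached_object(fz_context *ctx, pdf_document *doc, int num)
    {
        pdf_xref_entry *entry = pdf_cache_object(ctx, doc, num);
        return entry->obj; /* the cached object is stored in the entry itself */
    }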
2014-11-26 | Change xref representation to cope better with sparse xrefs. | Robin Watts
Currently each xref in the file results in an array from 0 to num_objects. If we have a file that has been updated many times this causes a huge waste of memory.

Instead we now hold each xref as a list of non-overlapping subsections (exactly as the file holds them). Lookup is therefore potentially slower, but only on files where the xrefs are highly fragmented (i.e. where we would be saving in memory terms).

Some parts of our code (notably the file writing code that does garbage collection etc) assume that lookups of object entry pointers will not change previous object entry pointers that have been looked up. To cope with this, and to cope with the case where we are updating/creating new objects, we introduce the idea of a 'solid' xref.

A solid xref is one with a single subsection record that spans the entire range of valid object numbers for a file. Once we have ensured that an xref is 'solid', we can safely work on the pointers within it without fear of them moving. We ensure that any 'incremental' xref is solid. We also ensure that any non-incremental write makes the xref solid.
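A hypothetical sketch of the subsection lookup described above, not part of the commit; the struct here is illustrative rather than mupdf's exact definition:

    #include "mupdf/fitz.h"
    #include "mupdf/pdf.h"

    /* Each xref section holds a list of contiguous, non-overlapping ranges. */
    typedef struct subsec_sketch
    {
        struct subsec_sketch *next;
        int start, len;           /* covers object numbers start .. start+len-1 */
        pdf_xref_entry *table;    /* entries for that contiguous range */
    } subsec_sketch;

    static pdf_xref_entry *
    lookup_in_subsections(subsec_sketch *sub, int num)
    {
        for (; sub; sub = sub->next)
            if (num >= sub->start && num < sub->start + sub->len)
                return &sub->table[num - sub->start];
        return NULL; /* not defined in this xref; fall back to an earlier one */
    }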
2014-03-18 | Fix operator buffering of inline images. | Robin Watts
Previously pdf_process buffer did not understand inline images. In order to make this work without needlessly duplicating complex code from within pdf-op-run, the parsing of inline images has been moved to happen in pdf-interpret.c. When the op_table entry for BI is called it now expects the inline image to be in csi->img and the dictionary object to be in csi->obj.

To make this work, we have had to improve the handling of inline images in general. While non-inline images have been loaded and held in memory in their compressed form and only decoded when required, until now we have always loaded and decoded inline images immediately. This has been due to the difficulty in knowing how many bytes of data to read from the stream - we know the length of the stream once uncompressed, but relating this to the compressed length is hard.

To cure this we introduce a new type of filter stream, a 'leecher'. We insert a leecher stream before we build the filters required to decode the image. We then read and discard the appropriate number of uncompressed bytes from the filters. This pulls the compressed data through the leecher stream, which stores it in an fz_buffer. Thus images are now always held in their compressed forms in memory.

The pdf-op-run implementation is now trivial. The only real complexity in the pdf-op-buffer implementation is the need to ensure that the /Filter entry in the dictionary object matches the exact point at which we backstopped the decompression.
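A hedged sketch of the 'leecher' idea, not part of the commit: fz_open_leecher is assumed from the fitz filter header, open_decode_filters is a hypothetical stand-in for building the image's real filter chain, error handling is omitted, and reference counting follows current semantics where each filter keeps its own reference to the chain below it:

    #include "mupdf/fitz.h"

    /* A tap placed below the decode filters copies the raw bytes into 'buf'
     * as a side effect of reading the decoded data. */
    static fz_buffer *
    capture_compressed_data(fz_context *ctx, fz_stream *file, size_t decoded_len)
    {
        fz_buffer *buf = fz_new_buffer(ctx, 1024);
        fz_stream *leech = fz_open_leecher(ctx, file, buf);
        fz_stream *decoded = open_decode_filters(ctx, leech); /* hypothetical helper */
        unsigned char tmp[256];

        while (decoded_len > 0)
        {
            size_t n = fz_read(ctx, decoded, tmp, decoded_len < sizeof tmp ? decoded_len : sizeof tmp);
            if (n == 0)
                break;
            decoded_len -= n; /* the decoded bytes are discarded; only the tap matters */
        }

        fz_drop_stream(ctx, decoded);
        fz_drop_stream(ctx, leech);
        return buf; /* now holds the image data in its compressed form */
    }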
2014-03-04 | Bug 691691: Add way of clearing cached objects out of the xref. | Robin Watts
We add various facilities here, intended to allow us to efficiently minimise the memory we use for holding cached pdf objects.

Firstly, we add the ability to 'mark' all the currently loaded objects. Next we add the ability to 'clear the xref' - to drop all the currently loaded objects that have no other references except the ones held by the xref table itself. Finally, we add the ability to 'clear the xref to the last mark' - to drop all the currently loaded objects that have been created since the last 'mark' operation and have no other references except the ones held by the xref table.

We expose this to the user by adding a new device hint 'FZ_NO_CACHE'. If set on the device, then the PDF interpreter will pdf_mark_xref before starting and pdf_clear_xref_to_mark afterwards. Thus no additional objects will be retained in memory after a given page is run, unless someone else picks them up and takes a reference to them as part of the run.

We amend our simple example app to set this device hint when loading pages as part of a search.
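A hedged sketch of the mark/run/clear pattern described above, not part of the commit; function names are taken from the commit text and the exact signatures (notably pdf_run_page and fz_enable_device_hints) are assumed from the current public headers:

    #include "mupdf/fitz.h"
    #include "mupdf/pdf.h"

    /* Sketch: mark the xref, run the page with FZ_NO_CACHE set on the device,
     * then drop every object loaded since the mark. */
    static void
    run_page_without_caching(fz_context *ctx, pdf_document *doc, pdf_page *page, fz_device *dev)
    {
        pdf_mark_xref(ctx, doc);
        fz_try(ctx)
        {
            fz_enable_device_hints(ctx, dev, FZ_NO_CACHE);
            pdf_run_page(ctx, page, dev, fz_identity, NULL);
        }
        fz_always(ctx)
            pdf_clear_xref_to_mark(ctx, doc); /* drop objects loaded during the run */
        fz_catch(ctx)
            fz_rethrow(ctx);
    }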
2014-01-17 | Bug 694896: Ensure that repairs don't lose trailer dict. | Robin Watts
When we find certain classes of flaw in the file while attempting to read an object, we trigger an automatic repair of the file. This leaves almost all objects unchanged; the sole exception is that of the trailer object (and its sub objects) which can get dropped and recreated.

To avoid leaving people holding handles to objects within the trailer dict high and dry, we introduce a 'pre_repair_trailer' object to each xref entry. On a repair, we copy the existing trailer object to this. As we only ever repair once, this is safe.

The only known place where this is a problem is when setting up the pdf_crypt for a document; we adapt the code here to allow for potential problems. The example file that shows this up is:

048d14d2f5f0ae31e9a2cde0be66f16a_asan_heap-uaf_86d4ed_3961_3661.pdf

Thanks to Mateusz Jurczyk and Gynvael Coldwind of the Google Security Team for providing the fuzzing files.
2013-07-19 | Initial work on progressive loading | Robin Watts
We are testing this using a new -p flag to mupdf that sets a bitrate at which data will appear to arrive progressively as time goes on. For example:

mupdf -p 102400 pdf_reference17.pdf

Details of the scheme used here are presented in docs/progressive.txt
2013-07-04 | Update pdf_write_document to support incremental update | Paul Gardiner
2013-06-28 | Ensure altered objects are moved to the incremental xref section | Paul Gardiner
2013-06-25 | Rid the world of "pdf_document *xref". | Robin Watts
For historical reasons lots of the code uses "xref" when talking about a pdf document. Now pdf_xref is a separate type this has become confusing, so replace 'xref' with 'doc' for clarity.
2013-06-18 | Split pdf.h into subheaders. | Tor Andersson