path: root/source/pdf/pdf-repair.c
Age | Commit message | Author
2016-09-22 | Bug 697015: Avoid object references vanishing during repair. | Robin Watts
A PDF repair can be triggered 'just in time', when we encounter a problem in the file. The idea is that this can happen without the enclosing code being aware of it. Thus the enclosing code may be holding 'borrowed' references (such as those returned by pdf_dict_get()) at the time when the repair is triggered.

We are therefore at pains to ensure that the repair does not replace any objects that exist already, so that the calling code will not have these references unexpectedly invalidated. The sole exception to this is when we replace the 'Length' fields in stream dictionaries with the actual lengths. Bug 697015 shows exactly this situation causing a reference to become invalid.

The solution implemented here is to add an 'orphan list' to the document, where we put these (hopefully few, small) objects. These orphans are kept around until the document is closed.
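Roughly, the orphan-list idea looks like this (a minimal sketch; the names orphan_list, keep_orphan and drop_orphans are illustrative, not MuPDF's actual API):

    #include <stdlib.h>

    typedef struct pdf_obj_sketch pdf_obj_sketch;   /* stand-in for pdf_obj */

    typedef struct
    {
        pdf_obj_sketch **items;
        int len, cap;
    } orphan_list;

    /* Instead of freeing a replaced object (callers may still hold borrowed
       pointers to it), park it on the document's orphan list. */
    static int keep_orphan(orphan_list *ol, pdf_obj_sketch *obj)
    {
        if (ol->len == ol->cap)
        {
            int cap = ol->cap ? ol->cap * 2 : 8;
            pdf_obj_sketch **items = realloc(ol->items, cap * sizeof(*items));
            if (!items)
                return -1;
            ol->items = items;
            ol->cap = cap;
        }
        ol->items[ol->len++] = obj;
        return 0;
    }

    /* The orphans are only released when the whole document is closed. */
    static void drop_orphans(orphan_list *ol, void (*drop)(pdf_obj_sketch *))
    {
        int i;
        for (i = 0; i < ol->len; i++)
            drop(ol->items[i]);
        free(ol->items);
        ol->items = NULL;
        ol->len = ol->cap = 0;
    }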
2016-09-01 | pdf: Load/open streams by indirect reference object when possible. | Tor Andersson
2016-07-06 | pdf: Drop generation number from public interfaces. | Tor Andersson
The generation number is only needed for decryption, and is assumed to be zero or irrelevant for all other uses. Store the original object number and generation in the xref slot, so that we can decrypt them even when the objects have been renumbered, without needing to pass the original object number around through the stream loading APIs.
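In sketch form, the xref slot simply remembers the pre-renumbering identity of the object so decryption can still use it later (field names here are illustrative, not the actual pdf_xref_entry layout):

    typedef struct
    {
        char type;        /* 'f' = free, 'n' = normal, 'o' = in object stream */
        long long ofs;    /* file offset, or containing object stream number */
        int num;          /* original object number, kept for decryption */
        int gen;          /* original generation number, kept for decryption */
    } xref_entry_sketch;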
2016-06-30 | Fix bug when opening small PDF-files. | Sebastian Rasmussen
The PDF repair code suffered a buffer index overflow while searching the buffer of file data if the file (and hence the buffer) was sufficiently small. This also happened while attempting to open a path pointing to a directory, as directories are treated as zero byte files.
2016-06-21 | Fix typo due to switching from int to size_t. | Sebastian Rasmussen
Commit 4a4e6adae4c1a0e9ab3b6fad477edfe26c1a2aca introduced a typo when doing the int -> size_t conversion. This caused a Coverity warning, which is now fixed.
2016-06-17 | Use 'size_t' instead of int as appropriate. | Robin Watts
This silences the many warnings we get when building for x64 on Windows. This does not address any of the warnings we get in thirdparty libraries - in particular harfbuzz. These look (at a quick glance) harmless though.
2016-04-27 | Fix 696649: remove fz_rethrow_message calls. | Tor Andersson
2015-10-01 | Bug 696146: Improve pdf_repair to find /Root in new style XRefs. | Robin Watts
The current code never looks for /Root objects in dictionaries as it parses them. This means that 'New style' files end up without any Roots after repair.

The new code therefore updates pdf_repair_obj to look for Root objects in the same way it looks for encrypt and id objects. These go into the list of found roots.

The Root object almost certainly has indirections within it, so it is vital that the 'doc' pointer gets set. This means we have to make a slight adjustment to pdf_repair_obj so that the dict is parsed with a doc pointer. In turn this means we need to manually ensure that none of the other information read from the dict during the repair operation will cause indirections to be resolved. This is achieved by checking for !pdf_is_indirect at various points.
2015-09-25 | Abort repairing if we cannot find any objects. | Tor Andersson
2015-07-27 | Do not attempt to resolve indirect objects during pdf repair | Sebastian Rasmussen
When parsing dicts or arrays while repairing objects, the xref should not be used to try to resolve indirect objects, since the xref has not been fully rebuilt yet. This was the behaviour prior to commit 07dd854.
2015-05-15 | Support pdf files larger than 2Gig. | Robin Watts
If FZ_LARGEFILE is defined when building, MuPDF uses 64bit offsets for files; this allows us to open streams larger than 2Gig. The downsides to this are that:
* The xref entries are larger.
* All PDF ints are held as 64bit things rather than 32bit things (to cope with /Prev entries, hint stream offsets etc).
* All file positions are stored as 64bits rather than 32.

The implementation works by detecting FZ_LARGEFILE. Some #ifdeffery in fitz/system.h sets fz_off_t to either int or int64_t as appropriate, and sets defines for fz_fopen, fz_fseek, fz_ftell etc as required. These call the fseeko64 etc functions on linux (and so define _LARGEFILE64_SOURCE) and the explicit 64bit functions on windows.
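The kind of #ifdeffery involved looks roughly like this (a sketch only; the real definitions live in fitz/system.h and the exact macro set may differ):

    /* _LARGEFILE64_SOURCE must be defined before any system header. */
    #if defined(FZ_LARGEFILE) && !defined(_WIN32)
    #define _LARGEFILE64_SOURCE
    #endif

    #include <stdint.h>
    #include <stdio.h>

    #ifdef FZ_LARGEFILE
    typedef int64_t fz_off_t;
    #ifdef _WIN32
    #define fz_fseek _fseeki64      /* explicit 64-bit seek/tell on Windows */
    #define fz_ftell _ftelli64
    #else
    #define fz_fseek fseeko64       /* 64-bit offset variants on linux */
    #define fz_ftell ftello64
    #endif
    #else
    typedef int fz_off_t;           /* plain 32-bit offsets otherwise */
    #define fz_fseek fseek
    #define fz_ftell ftell
    #endif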
2015-04-01 | Bug 693719: Attempt #2. Broken trailer repair. | Robin Watts
Calling pdf_is_dict causes the file to seek. This is a bad thing in a process that is running through the file. It's doubly bad, as the thing it seeks to read may not be there yet, since it might not have been repaired.

So, instead of just keeping the 'most recent root that is a dictionary', we change to keeping a list of the roots we have found while parsing the doc. At the end we then check for the most recent one that is a dictionary and use that.
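A sketch of the 'collect candidates, decide at the end' approach (the surrounding structure is invented for illustration; only the idea of postponing the dictionary check comes from the commit):

    typedef struct pdf_obj_sketch pdf_obj_sketch;   /* stand-in for pdf_obj */

    /* Walk the collected roots backwards and keep the most recent one that
       is a dictionary; no mid-parse seeking is needed to decide this early. */
    static pdf_obj_sketch *pick_root(pdf_obj_sketch **roots, int n,
                                     int (*is_dict)(pdf_obj_sketch *))
    {
        int i;
        for (i = n - 1; i >= 0; i--)
            if (is_dict(roots[i]))
                return roots[i];
        return NULL;
    }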
2015-04-01 | Bug 693719: Improve repairing of files with broken trailers. | Robin Watts
When repairing a file we keep track of the most recent 'Root' entry we have found. Only accept a new Root entry as a replacement if it is a dictionary.
2015-03-24 | Rework handling of PDF names for speed and memory. | Robin Watts
Currently, every PDF name is allocated in a pdf_obj structure, and comparisons are done using strcmp. Given that we can predict most of the PDF names we'll use in a given file, this seems wasteful. The pdf_obj type is opaque outside the pdf-object.c file, so we can abuse it slightly without anyone outside knowing.

We collect a sorted list of names used in PDF (resources/pdf/names.txt), and we add a utility (namedump) that preprocesses this into 2 header files. The first (include/mupdf/pdf/pdf-names-table.h, included as part of include/mupdf/pdf/object.h) defines a set of "PDF_NAME_xxxx" entries. These are pdf_obj *'s that callers can use to mean "A PDF object that means literal name 'xxxx'". The second (source/pdf/pdf-name-impl.h) is a C array of names.

We therefore update the code so that rather than passing "xxxx" to functions (such as pdf_dict_gets(...)) we now pass PDF_NAME_xxxx (to pdf_dict_get(...)). This is a fairly natural (if widespread) change. The pdf_dict_getp (and sibling) functions that take a path (e.g. "foo/bar/baz") are therefore supplemented with equivalents that take a list (pdf_dict_getl(..., PDF_NAME_foo, PDF_NAME_bar, PDF_NAME_baz, NULL)).

The actual implementation of this relies on the fact that small pointer values are never valid values. For a given pdf_obj *p, if NULL < (intptr_t)p < PDF_NAME__LIMIT then p is a literal entry in the name table. This enables us to do fast pointer compares and to skip expensive strcmps.

Also, bring "null", "true" and "false" into the same style as PDF names. Rather than using full pdf_obj structures for null/true/false, use special pointer values just above the PDF_NAME_ table. This saves memory and makes comparisons easier.
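A minimal sketch of the small-pointer-value scheme (the names and the limit value below are invented for illustration; the real table is generated from resources/pdf/names.txt):

    #include <stdint.h>

    typedef struct pdf_obj_sketch pdf_obj_sketch;   /* stand-in for pdf_obj */

    /* Each well-known name is a small integer cast to a pointer... */
    #define PDF_NAME_Root    ((pdf_obj_sketch *)(intptr_t)1)
    #define PDF_NAME_Encrypt ((pdf_obj_sketch *)(intptr_t)2)
    #define PDF_NAME_Length  ((pdf_obj_sketch *)(intptr_t)3)
    #define PDF_NAME__LIMIT  ((intptr_t)4)

    /* ...so a pointer below the limit is an index into the name table,
       never a real allocation. */
    static const char *name_table[] = { "", "Root", "Encrypt", "Length" };

    static int is_builtin_name(pdf_obj_sketch *p)
    {
        return (intptr_t)p > 0 && (intptr_t)p < PDF_NAME__LIMIT;
    }

    /* Comparing two built-in names is a pointer compare, no strcmp needed;
       the string form is only needed when writing the name back out. */
    static const char *builtin_name_string(pdf_obj_sketch *p)
    {
        return is_builtin_name(p) ? name_table[(intptr_t)p] : NULL;
    }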
2015-02-17 | Add ctx parameter and remove embedded contexts for API regularity. | Tor Andersson
Purge several embedded contexts:
* Remove embedded context in fz_output.
* Remove embedded context in fz_stream.
* Remove embedded context in fz_device.
* Remove fz_rebind_stream (since it is no longer necessary).
* Remove embedded context in svg_device.
* Remove embedded context in XML parser.
* Add ctx argument to fz_document functions.
* Remove embedded context in fz_document.
* Remove embedded context in pdf_document.
* Remove embedded context in pdf_obj.

Make fz_page independent of fz_document in the interface. We shouldn't need to pass the document to all functions handling a page. If a page is tied to the source document, it's redundant; otherwise it's just pointless.

Fix reference counting oddity in fz_new_image_from_pixmap.
2015-02-17 | Rename fz_close_* and fz_free_* to fz_drop_*. | Tor Andersson
Rename fz_close to fz_drop_stream.
Rename fz_close_archive to fz_drop_archive.
Rename fz_close_output to fz_drop_output.
Rename fz_free_* to fz_drop_*.
Rename pdf_free_* to pdf_drop_*.
Rename xps_free_* to xps_drop_*.
2015-01-06 | Add xref_index to speed searching of sparse xrefs. | Robin Watts
Add a new index that quickly maps object number to the first xref in which an object appears. This appears to get us the speed back that we lost when moving to sparse xrefs.
2014-11-26 | Change xref representation to cope better with sparse xrefs. | Robin Watts
Currently each xref in the file results in an array from 0 to num_objects. If we have a file that has been updated many times this causes a huge waste of memory. Instead we now hold each xref as a list of non-overlapping subsections (exactly as the file holds them).

Lookup is therefore potentially slower, but only on files where the xrefs are highly fragmented (i.e. where we would be saving in memory terms).

Some parts of our code (notably the file writing code that does garbage collection etc) assume that lookups of object entry pointers will not change previous object entry pointers that have been looked up. To cope with this, and to cope with the case where we are updating/creating new objects, we introduce the idea of a 'solid' xref.

A solid xref is one that has a single subsection record spanning the entire range of valid object numbers for a file. Once we have ensured that an xref is 'solid', we can safely work on the pointers within it without fear of them moving. We ensure that any 'incremental' xref is solid. We also ensure that any non-incremental write makes the xref solid.
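A sketch of what a subsection-based xref might look like (the layout and names are invented for illustration, not the actual pdf_xref structures):

    #include <stddef.h>

    typedef struct { char type; long long ofs; } entry_sketch;

    typedef struct subsec_sketch
    {
        int start, len;              /* covers object numbers [start, start+len) */
        entry_sketch *table;         /* len entries */
        struct subsec_sketch *next;
    } subsec_sketch;

    /* Lookup walks the (non-overlapping) subsections of one xref. */
    static entry_sketch *lookup(subsec_sketch *ss, int num)
    {
        for (; ss; ss = ss->next)
            if (num >= ss->start && num < ss->start + ss->len)
                return &ss->table[num - ss->start];
        return NULL;
    }

    /* A 'solid' xref has a single subsection spanning every valid object
       number; once solid, entry pointers cannot move under the caller. */
    static int is_solid(subsec_sketch *ss, int num_objects)
    {
        return ss && !ss->next && ss->start == 0 && ss->len >= num_objects;
    }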
2014-08-19 | try not to wrongly overwrite /ID when repairing encrypted documents | Simon Bünzli
If garbage is appended to an encrypted document, there could be another trailer with /ID but without /Encrypt. The repairing code currently always uses the last encountered values, but replacing the /ID value alone can cause decryption to break.

One possible solution is to use the /ID value only when there's either none yet, when there's no /Encrypt, or when there's a matching /Encrypt in the same trailer.

See https://code.google.com/p/sumatrapdf/issues/detail?id=2697 for a document which Adobe Reader is able to read but MuPDF isn't (it used to before pdf_lex_no_string was introduced, but that's accidental).
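That acceptance rule boils down to something like the following (a sketch; the flag names are invented):

    /* Accept a newly seen /ID only if we have no /ID yet, the document is
       not encrypted at all, or this same trailer also carries the matching
       /Encrypt entry. */
    static int should_accept_id(int have_id, int doc_has_encrypt,
                                int trailer_has_encrypt)
    {
        return !have_id || !doc_has_encrypt || trailer_has_encrypt;
    }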
2014-01-02 | Improve PDF repair logic. | Robin Watts
When we meet a broken PDF file, we attempt to repair it. We do this by reading tokens from the file and attempting to interpret them as a normal PDF stream.

Unfortunately, if the file is corrupt enough so that we start to read from the middle of a stream, and we happen to hit an '(' character, we can go into string reading mode. We can then end up skipping over vast swathes of file that we could otherwise repair. We fix this here by using a new version of the pdf_lex function that refuses to ever return a string. This means we may take more time over skipping things than we did before, but are less likely to skip stuff.

We also tweak other parts of the pdf repair logic here. If we hit a badly formed piece of data, clear the num/gen we have stored so that the next plausible piece we get does not get assigned to a random object number.
2014-01-02 | fix memory leak in pdf_repair_xref | Simon Bünzli
The 0 null object is leaked if a document refers to 0 0 obj before a delayed repair is triggered (seen e.g. with 3324.pdf.asan.3.2585).
2014-01-01 | don't fail on invalid object streams | Simon Bünzli
The document at https://code.google.com/p/sumatrapdf/issues/detail?id=2436 has an empty xref section which has recently started to trigger a repair. Repairs then stop when pdf_repair_obj_stms fails on an object which isn't even required for the document to render. Such broken object streams should rather be ignored, just as broken objects are ignored in pdf_init_document.
2013-12-24 | Bug 694810: Implement late file repair for PDFs. | Robin Watts
Currently, if we spot a bad xref as we are reading a PDF in, we can repair that PDF by doing a long exhaustive read of the file. This reconstructs the information that was in the xref, and the file can be opened (and later saved) as normal.

If we hit an object that is not in the expected place however, we cannot trigger a repair at that point - so xrefs with duff offsets in (within the bounds of the file) will never be repaired.

This commit solves that by triggering a repair (just once) whenever we fail to parse an object in the expected place.
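A sketch of the trigger-once logic (the names here are invented; only the 'repair at most once' behaviour comes from the commit):

    typedef struct
    {
        int repair_attempted;
        /* ... rest of the document state ... */
    } doc_sketch;

    static int object_is_where_expected(doc_sketch *doc, int num);  /* elided */
    static void run_exhaustive_repair(doc_sketch *doc);             /* elided */

    static int ensure_object(doc_sketch *doc, int num)
    {
        if (object_is_where_expected(doc, num))
            return 1;
        if (doc->repair_attempted)
            return 0;                  /* never trigger a second repair */
        doc->repair_attempted = 1;
        run_exhaustive_repair(doc);    /* full scan rebuilds the xref */
        return object_is_where_expected(doc, num);
    }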
2013-09-27 | stop checking if the result of fz_read is negative | Simon Bünzli
fz_read used to return a negative value on errors. With the introduction of fz_try/fz_catch, it throws an error instead and always returns non-negative values. This removes the pointless checks.
2013-09-13 | Fix various compile warnings spotted by the cluster. | Robin Watts
2013-07-19 | Initial work on progressive loading | Robin Watts
We are testing this using a new -p flag to mupdf that sets a bitrate at which data will appear to arrive progressively as time goes on. For example:

    mupdf -p 102400 pdf_reference17.pdf

Details of the scheme used here are presented in docs/progressive.txt
2013-06-28 | Ensure altered objects are moved to the incremental xref section | Paul Gardiner
2013-06-25 | Rid the world of "pdf_document *xref". | Robin Watts
For historical reasons lots of the code uses "xref" when talking about a pdf document. Now that pdf_xref is a separate type, this has become confusing, so replace 'xref' with 'doc' for clarity.
2013-06-25 | Update pdf_obj's to have a pdf_document field. | Robin Watts
Remove the fz_context field to avoid the structure growing.
2013-06-24 | fix recent regressions | zeniko
* at one place, code returns from inside an fz_try which borks up the error stack
* pdf_load_xref wrongly assumes that at least one non-empty xref has been read if there were no errors thrown during parsing
* pdf_repair_xref skips integers when object numbers are out of range
2013-06-20 | Rearrange source files. | Tor Andersson