|
Purge several embedded contexts:
Remove embedded context in fz_output.
Remove embedded context in fz_stream.
Remove embedded context in fz_device.
Remove fz_rebind_stream (since it is no longer necessary).
Remove embedded context in svg_device.
Remove embedded context in XML parser.
Add ctx argument to fz_document functions.
Remove embedded context in fz_document.
Remove embedded context in pdf_document.
Remove embedded context in pdf_obj.
Make fz_page independent of fz_document in the interface.
We shouldn't need to pass the document to all functions handling a page.
If a page is tied to the source document, it's redundant; otherwise it's
just pointless.
Fix reference counting oddity in fz_new_image_from_pixmap.
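A sketch of how the context purge and the fz_page change above look at a
call site (prototypes paraphrased, not quoted verbatim from the headers):

    /* Before: the context lived inside each object, and page
     * functions needed the owning document. */
    fz_rect *fz_bound_page(fz_document *doc, fz_page *page, fz_rect *rect);

    /* After: the context is passed explicitly, and handling a page
     * no longer requires the document. */
    fz_rect *fz_bound_page(fz_context *ctx, fz_page *page, fz_rect *rect);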
|
Rename fz_close to fz_drop_stream.
Rename fz_close_archive to fz_drop_archive.
Rename fz_close_output to fz_drop_output.
Rename fz_free_* to fz_drop_*.
Rename pdf_free_* to pdf_drop_*.
Rename xps_free_* to xps_drop_*.
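Call sites change mechanically; for example (a sketch, with the ctx
argument reflecting the context changes above):

    fz_close(stm);             /* old */
    fz_drop_stream(ctx, stm);  /* new */

    fz_free_device(dev);       /* old */
    fz_drop_device(ctx, dev);  /* new */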
|
When loading e.g. the file from bug 694567, MuPDF uses an uninitialized
variable because pdf_document::xref_index contains values relative to
the document's original multi-part xref while the actual xref is the
repaired single-part one (and thus the cached value is too large).
Properly resetting the xref_index before starting the repair fixes this
crash.
|
pdf_xref_find_subsection does indeed solidify the wrong xref section:
it should operate only on the oldest xref and not overwrite the most
recent one with older entries.
|
Add a new index that quickly maps object number to the first
xref in which an object appears. This appears to get us the
speed back that we lost when moving to sparse xrefs.
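A minimal sketch of the lookup this enables, assuming a per-object index
array as described (find_in_section is a hypothetical helper, not the
actual source):

    /* xref_index[num] remembers the first (newest) xref section in
     * which object 'num' was found, so the search can resume there
     * instead of walking every section from scratch. */
    static pdf_xref_entry *lookup(pdf_document *doc, int num)
    {
        int i;
        for (i = doc->xref_index[num]; i < doc->num_xref_sections; i++)
        {
            pdf_xref_entry *entry = find_in_section(&doc->xref_sections[i], num);
            if (entry)
            {
                doc->xref_index[num] = i;  /* cache for next time */
                return entry;
            }
        }
        return NULL;
    }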
|
Following the recent change to hold pdf xrefs in their native 'sparse'
representation, searching the xref takes longer.
Malc has investigated this slowdown and found that it can be largely
avoided by not searching the xref lists first. A modified version of
his first patch has gone in already (getting us from 10x slower to
just 5x slower).
This commit is a modified version of a second patch from him. Again
it works by avoiding searching the xref list twice. The original
version of this patch 1) appears broken to me, as it could return the
wrong xref entry when object streams have more than one object in them,
and 2) supposedly gets the speed back to the original 'pre-sparse change'
speed.
I have updated the patch to fix 1), and I hope this does not affect 2).
I am slightly suspicious that removing a search can get us a 5x speed
increase, but this is certainly an improvement.
There is scope for further reducing the search times by using a new
table to map object number -> xref number, but unless we find a case
where we are noticeably slower than before, I think we can ignore this.
|
We know i >= 0 as we've already thrown if i < 0 earlier.
Credit to Malc for spotting this.
|
The recent change to holding pdf xrefs in a sparse format has resulted
in a significant decrease in speed (10x). Malc points out that some of
this (2x) can be recovered simply by making pdf_cache_object return the
entry in which it found the object.
This saves us having to immediately call pdf_get_xref_entry again
afterwards.
I am still thinking about ways to try and get the remaining time back.
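In signature terms the change is roughly this (paraphrased, not the
verbatim header):

    /* Before: callers immediately re-searched the xref. */
    void pdf_cache_object(pdf_document *doc, int num, int gen);

    /* After: the entry the object was cached in comes back for free. */
    pdf_xref_entry *pdf_cache_object(pdf_document *doc, int num, int gen);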
|
After calling ensure_solid_xref, the pdf_xref pointer must be updated
in case ensure_solid_xref has reallocated the sections table or is
using a different section table than the one originally used. Commit
e767bd783d91ae88cd79da19e79afb2c36bcf32a fails to do so in one case.
TODO: Why does pdf_xref_find_subsection solidify xref section 0 instead
of xref section sub?
|
Currently each xref in the file results in an array from 0 to
num_objects. If we have a file that has been updated many times
this causes a huge waste of memory.
Instead we now hold each xref as a list of non-overlapping subsections
(exactly as the file holds them).
Lookup is therefore potentially slower, but only on files where the
xrefs are highly fragmented (i.e. where we would be saving in memory
terms).
Some parts of our code (notably the file writing code that does
garbage collection etc) assume that looking up one object entry
pointer will not invalidate object entry pointers previously
looked up. To cope with this, and to cope with the case where we are
updating/creating new objects, we introduce the idea of a 'solid'
xref.
A solid xref is one with a single subsection record that spans the
entire range of valid object numbers for the file. Once we have
ensured that an xref is 'solid', we can safely work on the pointers
within it without fear of them moving.
We ensure that any 'incremental' xref is solid.
We also ensure that any non-incremental write makes the xref solid.
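A rough picture of the representation described above (field names are
illustrative; the real definitions live in the pdf xref code):

    typedef struct pdf_xref_subsec_s pdf_xref_subsec;

    struct pdf_xref_subsec_s
    {
        pdf_xref_subsec *next;  /* non-overlapping, as in the file */
        int start, len;         /* covers objects start .. start+len-1 */
        pdf_xref_entry *table;  /* len entries */
    };

    /* An xref is 'solid' when it consists of a single subsection
     * with start == 0 and len == the number of objects; entry
     * pointers into a solid xref never move. */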
|
If a PDF document is encrypted but broken, repairing caches all
strings in encrypted form. Clearing the xref after repairing
ensures that strings are returned to API callers as expected.
Cf. https://code.google.com/p/sumatrapdf/issues/detail?id=2610
|
Return the null object rather than throwing an exception when parsing
indirect object references with negative object numbers.
Instead, range-check object numbers (1 .. length) at the point where
they are used.
Object number 0 is not a valid object number. It must always be 'free'.
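The rule this implies, in sketch form (helper names and the 'length'
variable are paraphrased, not the actual parser code):

    /* At parse time: tolerate a negative object number. */
    if (num < 0)
        obj = pdf_new_null(doc);
    else
        obj = pdf_new_indirect(doc, num, gen);

    /* At use time: object number 0 is always 'free', so the valid
     * range is 1 .. length. */
    if (num < 1 || num > length)
        fz_throw(ctx, FZ_ERROR_GENERIC, "object out of range (%d %d R)", num, gen);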
|
pdf_create_document leaks the trailer, and in pdf-device.c many
objects are inserted into dictionaries using pdf_dict_puts and leaked,
where pdf_dict_puts_drop should have been used.
|
...like the one Microsoft Word generates.
|
Split functions out of pdf-form.c that shouldn't be there, and make
javascript initialization explicit.
|
This avoids leaks when pdf_clear_xref etc are used.
|
We add various facilities here, intended to allow us to efficiently
minimise the memory we use for holding cached pdf objects.
Firstly, we add the ability to 'mark' all the currently loaded objects.
Next we add the ability to 'clear the xref' - to drop all the currently
loaded objects that have no other references except the ones held by the
xref table itself.
Finally, we add the ability to 'clear the xref to the last mark' - to
drop all the currently loaded objects that have been created since the
last 'mark' operation and have no other references except the ones held
by the xref table.
We expose this to the user by adding a new device hint 'FZ_NO_CACHE'.
If set on the device, then the PDF interpreter will pdf_mark_xref before
starting and pdf_clear_xref_to_mark afterwards. Thus no additional
objects will be retained in memory after a given page is run, unless
someone else picks them up and takes a reference to them as part of
the run.
We amend our simple example app to set this device hint when loading
pages as part of a search.
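From the app side the hint is set on the device before running the
page; a minimal sketch, assuming a helper along the lines of
fz_enable_device_hints (treat the exact names as assumptions):

    fz_device *dev = fz_new_draw_device(ctx, pixmap);
    fz_enable_device_hints(dev, FZ_NO_CACHE);
    fz_run_page(doc, page, dev, &fz_identity, NULL);
    /* the interpreter did pdf_mark_xref before the run and
     * pdf_clear_xref_to_mark after it, so nothing loaded for this
     * page stays cached */
    fz_free_device(dev);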
|
See https://code.google.com/p/sumatrapdf/issues/detail?id=2517 for a
document which is broken to the point where it fails to load even
after repair, but loads successfully if object 0 is implicitly defined.
|
When we find certain classes of flaw in the file while attempting to
read an object, we trigger an automatic repair of the file. This
leaves almost all objects unchanged; the sole exception is that of
the trailer object (and its sub objects) which can get dropped and
recreated.
To avoid leaving people holding handles to objects within the trailer
dict high and dry, we introduce a 'pre_repair_trailer' object to
each xref entry. On a repair, we copy the existing trailer object to
this. As we only ever repair once, this is safe.
The only known place where this is a problem is when setting up the
pdf_crypt for a document; we adapt the code here to allow for
potential problems.
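Sketch of the safeguard (ownership handling paraphrased from the
description above, not quoted from the source):

    /* On repair, keep the old trailer alive instead of dropping it,
     * so outstanding handles into it remain valid; repair happens at
     * most once, so one slot suffices. */
    xref->pre_repair_trailer = xref->trailer;  /* transfer the reference */
    xref->trailer = pdf_keep_obj(new_trailer);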
The example file that shows this up is:
048d14d2f5f0ae31e9a2cde0be66f16a_asan_heap-uaf_86d4ed_3961_3661.pdf
Thanks to Mateusz Jurczyk and Gynvael Coldwind of the Google Security
Team for providing the fuzzing files.
|
If the /Version is a single character string (say "s") then the
current code for converting this in pdf_init_document reads off
the end of the string.
Simple fix is to use fz_atof instead.
Same fix for reading the PDF version normally.
This solves:
53b830f849d028fb2d528520716e157a_asan_heap-oob_478692_5259_4534.pdf
Thanks to Mateusz Jurczyk and Gynvael Coldwind of the Google Security
Team for providing the example files.
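The fix in miniature (the old behaviour is paraphrased from the
description above, and the version field name is an assumption):

    /* Old: indexed fixed positions in the string (e.g. str[0] and
     * str[2]), reading past the end of a short value such as "s".
     * fz_atof stops at the first non-numeric character and returns 0
     * for junk, so short or malformed values are safe. */
    doc->version = (int)(fz_atof(str) * 10 + 0.5);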
|
pdf_load_obj_stm may resize the xref if it finds further objects in the
stream; that might however invalidate any pdf_xref_entry held, such as
the one in pdf_cache_object. This can be seen e.g. with
7ac3ad9ddad98d10b947a43cf640062f_asan_heap-uaf_930b78_1007_1675.pdf
Thanks to Mateusz Jurczyk and Gynvael Coldwind of the Google Security
Team for providing the example files.
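The defensive pattern this implies at the call site (sketched from
memory; exact signatures may differ):

    pdf_xref_entry *entry = pdf_get_xref_entry(doc, num);
    if (entry->type == 'o')
    {
        /* the object lives in an object stream; loading that stream
         * may resize the xref and leave 'entry' dangling... */
        pdf_load_obj_stm(doc, entry->ofs, 0, &buf);
        /* ...so look it up again before dereferencing */
        entry = pdf_get_xref_entry(doc, num);
    }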
|
We define a document handler for each file type (2 in the case of PDF, one
to handle files with the ability to 'run' them, and one without).
We then register these handlers with the context at startup, and then
call fz_open_document... as usual. This enables people to select the
document types they want at will (and even to extend the library with more
document types should they wish).
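In outline, a handler pairs a recognizer with open callbacks and is
registered once at startup (struct shape paraphrased; consult the fitz
headers for the real definition):

    typedef struct
    {
        fz_document_recognize_fn *recognize;  /* score by magic/extension */
        fz_document_open_fn *open;            /* open from a file name */
        fz_document_open_with_stream_fn *open_with_stream;
    } fz_document_handler;

    /* Register everything built in, or cherry-pick formats: */
    fz_register_document_handlers(ctx);
    fz_register_document_handler(ctx, &pdf_document_handler);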
|
These warnings are caused by casting function pointers to void*
instead of proper function types.
|
The SVG device needs rebinding as it holds a file. The PDF device needs
to rebind the underlying pdf document.
All documents need to rebind their underlying streams.
|
Thanks to Simon for spotting the original problem. This is a slight
tweak on the patch he supplied.
|
Currently, if we spot a bad xref as we are reading a PDF in, we can
repair that PDF by doing a long exhaustive read of the file. This
reconstructs the information that was in the xref, and the file can
be opened (and later saved) as normal.
If we hit an object that is not in the expected place, however, we
cannot trigger a repair at that point - so xrefs containing duff
offsets (within the bounds of the file) would never be repaired.
This commit solves that by triggering a repair (just once) whenever
we fail to parse an object in the expected place.
|
Unused field. Also tweak some comments for clarity.
|
fz_read used to return a negative value on errors. With the
introduction of fz_try/fz_catch, it throws an error instead and
always returns non-negative values. This removes the pointless
checks.
|
This was causing an infinite loop.
|
This can occur early on during xref repair.
|
By default an OCG is supposed to be visible (for a testcase, see
2011 - ocg without ocgs invisible.pdf). Also, the default visibility
value can be overridden in either direction, so pdf_is_hidden_ocg
must check the state both for being "OFF" and for being "ON" (testcase:
2066 - ocg not printed.pdf rendered with event="Print").
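The resulting check, sketched (ocg_is_in_array is a hypothetical
helper; the real pdf_is_hidden_ocg also honours usage events such as
"Print"):

    /* Visible by default; an explicit entry can flip it either way,
     * so both lists must be consulted rather than treating "absent"
     * as hidden. */
    int hidden = 0;
    if (ocg_is_in_array(ocg, config_off))
        hidden = 1;
    else if (ocg_is_in_array(ocg, config_on))
        hidden = 0;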
|
The symptom was that newly created annotations were in some cases
not saved. Some updated objects within the document were not being
moved into the incremental-save xref section. That in turn was due to
nodes within the hierarchy of those objects not having their parent_num
field set. The objects falling foul of this problem were those held in
object streams. When any one object from a stream is cached, the whole
stream is read and all other objects from that stream are also cached,
but only the initially-requested one had its parent_num set. This patch
ensures that all objects from a stream are accounted for. In fact, for
the initially-requested object, we now set parent_num twice, but that
is harmless and the code to avoid doing so would be an unnecessary
complication.
|
Use of the feature is currently enabled only when a file that already
contains xref streams is being updated incrementally. Doing so in that
case is necessary because an old-style xref is then not permitted.
This fixes bug #694527
|
We are testing this using a new -p flag to mupdf that sets a bitrate at
which data will appear to arrive progressively as time goes on. For
example:
mupdf -p 102400 pdf_reference17.pdf
Details of the scheme used here are presented in docs/progressive.txt
|
No more caching a flattened page tree in doc->page_objs/refs.
No more flattening of page resources, rotation and boxes.
Smart page number lookup by following Parent links.
Naive implementation of insert and delete page that doesn't rebalance the trees.
Requires an existing page tree to hook into; cannot be used to create a page
tree from scratch.
|
Thanks to zeniko for spotting the problem here.
Type 3 fonts contain a reference to the resources objects required
to render the glyphs. Traditionally these have been freed when the
font is freed. Unfortunately, after recent changes, freeing a PDF
object requires the pdf_document concerned to still exist.
While in most cases the type 3 resources are not used after we have
converted the type3 glyphs to display lists, this is not always the
case. For uncachable Type 3 glyphs (those that use elements of the
graphics state, such as color or line width, without completely
defining them), we end up running the glyphs at interpretation
time.
[ Interpretation time = when doing a direct render on the main thread,
or when creating a display list - so also on the main thread. No
multi-threading issues with file access here. ]
The fix implemented here is for each pdf document to keep a list of
the type3 fonts it has created, and to 'decouple' them from the
document when the document is destroyed. The sole effect of this
decoupling is to remove the resources (and the PDF operator buffers)
from the font. These are only ever used during interpretation, and
no further interpretations are possible without the document being
alive anyway, so this should have no net effect on operation, other
than allowing cleanup to proceed cleanly later on.
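Sketch of the decoupling pass at document teardown (field names follow
the description above and may not match the source exactly):

    for (i = 0; i < doc->num_type3_fonts; i++)
    {
        fz_font *font = doc->type3_fonts[i];
        /* dropping a pdf_obj needs the document alive, so do it now */
        pdf_drop_obj((pdf_obj *)font->t3resources);
        font->t3resources = NULL;  /* decoupled: safe after the doc dies */
        /* ... the PDF operator buffers are released the same way ... */
        fz_drop_font(ctx, font);
    }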
|