|
Certain optimized documents use a rather large common symbol dictionary
for all JBIG2 images. Caching these JBIG2Globals speeds up loading and
rendering of such documents.
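A minimal sketch of the idea (names and structure here are illustrative,
not the actual MuPDF code): the decoded globals are wrapped in a
reference-counted object so that every image sharing the same globals
stream reuses one decoded copy.

    #include <stdlib.h>

    /* Illustrative only: a reference-counted holder for a decoded
       JBIG2 common symbol dictionary. */
    typedef struct jbig2_globals
    {
        int refs;
        unsigned char *data;  /* decoded globals segment */
        size_t len;
    } jbig2_globals;

    static jbig2_globals *keep_jbig2_globals(jbig2_globals *g)
    {
        if (g)
            g->refs++;
        return g;
    }

    static void drop_jbig2_globals(jbig2_globals *g)
    {
        if (g && --g->refs == 0)
        {
            free(g->data);
            free(g);
        }
    }

    /* Each JBIG2 image filter keeps a reference to the shared globals,
       looked up in a cache keyed on the globals stream, instead of
       decoding the dictionary again for every image. */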
|
|
This helps debugging issues with JBIG2 images.
Conflicts:
source/fitz/filter-jbig2.c
|
|
See SumatraPDF's repo for a Windows-only implementation using WIC.
|
|
At https://code.google.com/p/sumatrapdf/issues/detail?id=2460 , there's
a file with missing /Type keys in the page tree nodes. In that case,
leaf nodes and intermediary nodes have to be distinguished in a
different way.
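A hedged sketch of such a fallback (the accessor follows the
pdf_dict_gets style of the time, but treat the exact call as an
assumption): when /Type is missing, a node carrying a /Kids array is
treated as an intermediary node and anything else as a leaf.

    /* Illustrative: classify a page tree node without trusting /Type. */
    static int node_is_intermediary(pdf_obj *node)
    {
        /* Intermediary (Pages) nodes must have a /Kids array;
           leaf (Page) nodes never do. */
        return pdf_dict_gets(node, "Kids") != NULL;
    }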
|
|
These warnings are caused by casting function pointers to void*
instead of proper function types.
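For illustration (the names are made up), the warning-prone pattern and
a clean alternative:

    void do_work(int n);
    typedef void (*work_fn)(int);

    void example(void)
    {
        /* Warning-prone: ISO C does not guarantee that a function
           pointer fits in a void *, so compilers flag this cast. */
        void *bad = (void *)do_work;

        /* Clean: cast to a compatible function pointer type instead. */
        work_fn good = (work_fn)do_work;

        (void)bad;
        (void)good;
    }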
|
|
Some warnings we'd like to enable for MuPDF and still be able to
compile it with warnings as errors using MSVC (2008 to 2013):
* C4115: 'timeval' : named type definition in parentheses
* C4204: nonstandard extension used : non-constant aggregate initializer
* C4295: 'hex' : array is too small to include a terminating null character
* C4389: '==' : signed/unsigned mismatch
* C4702: unreachable code
* C4706: assignment within conditional expression
Also, globally disable C4701 which is frequently caused by MSVC not
being able to correctly figure out fz_try/fz_catch code flow.
And don't define isnan for VS2013 and later where that's no longer needed.
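Sketched out, the compiler-specific part might look like this (where
exactly it lives in MuPDF's headers and project files is an assumption;
VS2013 corresponds to _MSC_VER 1800):

    #ifdef _MSC_VER

    /* C4701: MSVC cannot always follow the fz_try/fz_catch control
       flow, so it reports "potentially uninitialized local variable"
       spuriously. */
    #pragma warning(disable: 4701)

    /* VS2013 and later ship isnan in <math.h>, so only define a
       fallback for older compilers. */
    #if _MSC_VER < 1800
    #include <float.h>
    #define isnan(x) _isnan(x)
    #endif

    #endif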
|
|
The SVG device needs rebinding as it holds a file. The PDF device needs
to rebind the underlying pdf document.
All documents need to rebind their underlying streams.
|
|
Add a cached color converter mechanism. Use this for rendering meshes
to speed repeated conversions.
This reduces a (release build to ppm at default resolution) run from
23.5s to 13.2s.
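A rough sketch of such a cache (structure and names invented for
illustration): conversions are memoized on the source color, so a mesh
that reuses the same vertex colors pays for each distinct conversion
only once.

    #include <string.h>

    #define CACHE_SIZE 256

    typedef struct
    {
        int valid;
        float src[4];  /* source color, up to 4 components */
        float dst[4];  /* cached conversion result */
    } cc_entry;

    typedef struct
    {
        cc_entry table[CACHE_SIZE];
        void (*convert)(const float *src, float *dst);  /* slow path */
    } cached_converter;

    static unsigned cc_hash(const float *src)
    {
        const unsigned char *p = (const unsigned char *)src;
        unsigned h = 0, i;
        for (i = 0; i < 4 * sizeof(float); i++)
            h = h * 31 + p[i];
        return h % CACHE_SIZE;
    }

    static void cc_convert(cached_converter *cc, const float *src, float *dst)
    {
        cc_entry *e = &cc->table[cc_hash(src)];
        if (!e->valid || memcmp(e->src, src, sizeof e->src) != 0)
        {
            /* Miss: do the real conversion and remember the result. */
            memcpy(e->src, src, sizeof e->src);
            cc->convert(src, e->dst);
            e->valid = 1;
        }
        memcpy(dst, e->dst, sizeof e->dst);
    }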
|
|
In the existing code for meshes, we decompose the mesh down into
quads (or triangles) and then call a process routine to actually
do the work. This process routine typically maps each vertex's
position/color and plots it.
As each vertex is used several times by neighbouring patches, this
results in each vertex being processed several times. The fix in
this commit is therefore to break the processing into 'prepare' and
'process' phases. Each vertex is 'prepared' before being used in
the 'process' phase. This cuts the number of prepare operations in
half.
In testing, this reduced the time for a (release build, generating ppm
at default resolution) run from 33.4s to 23.5s.
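In outline (types and callback names are invented for this sketch),
the split looks like the following: each distinct vertex goes through
'prepare' once, and 'process' only assembles already-prepared vertices.

    typedef struct
    {
        float x, y;      /* transformed position */
        float color[4];  /* mapped color */
    } prepared_vertex;

    typedef struct
    {
        void *arg;
        void (*prepare)(void *arg, const float *vertex_in,
                        prepared_vertex *out);
        void (*process)(void *arg, const prepared_vertex *v0,
                        const prepared_vertex *v1, const prepared_vertex *v2);
    } mesh_processor;

    /* A quad shares its corner vertices with its neighbours; they are
       prepared once by the caller and only referenced here. */
    static void emit_quad(mesh_processor *p,
                          const prepared_vertex *a, const prepared_vertex *b,
                          const prepared_vertex *c, const prepared_vertex *d)
    {
        p->process(p->arg, a, b, c);
        p->process(p->arg, a, c, d);
    }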
|
|
When we meet a broken PDF file, we attempt to repair it. We do this by
reading tokens from the file and attempting to interpret them as a
normal PDF stream.
Unfortunately, if the file is corrupt enough that we start to read
from the middle of a stream and happen to hit an '(' character, we
can go into string reading mode. We can then end up skipping over
vast swathes of file that we could otherwise repair.
We fix this here by using a new version of the pdf_lex function that
refuses to ever return a string. This means we may take more time
skipping data than we did before, but we are less likely to skip
content that could otherwise be repaired.
We also tweak other parts of the pdf repair logic here. If we hit a
badly formed piece of data, we clear the stored num/gen so that the
next plausible piece we find does not get assigned to a random
object number.
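Roughly, the string-refusing lexer behaves like this sketch (token
names, the stream type and the helper calls are placeholders, not the
real pdf_lex interface):

    /* Sketch of a repair-only lexer that never enters string mode. */
    typedef enum { TOK_JUNK, TOK_OTHER /* ... */ } repair_token;

    static repair_token lex_no_string(repair_stream *f)
    {
        int c = peek_byte(f);
        if (c == '(')
        {
            /* Swallow the single '(' and report junk instead of
               scanning ahead for a matching ')', so repair keeps
               resynchronising on the bytes that follow. */
            read_byte(f);
            return TOK_JUNK;
        }
        return lex_normal(f);  /* everything else is lexed as usual */
    }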
|
|
Remove code that's not used any more as a result of the previous
fix, plus some code that was unused anyway.
|
|
The 0 null object is leaked if a document refers to 0 0 obj before
requiring a delayed repair (seen e.g. with 3324.pdf.asan.3.2585).
|
|
Thanks to Simon for spotting the original problem. This is a slight
tweak on the patch he supplied.
|
|
Replace an explicit 'i = i' with a comment in a for loop where i is
already at the correct starting value.
|
|
Use round caps and joins so as to better match the result of drawing, and also
so that single dots display. Thanks to Michael Cadilhac for the suggestion.
|
|
Avoid unnecessary copies. Minimise calls to isbigendian.
|
|
This is required for e.g. 1980_-_compressed_inline_image.pdf and
Bug690300.pdf .
|
|
At https://code.google.com/p/sumatrapdf/issues/detail?id=2436 , there's
a document with an empty xref section which has recently started to
trigger a repair. The repair then stops when pdf_repair_obj_stms fails
on an object which isn't even required for the document to render. Such
broken object streams should instead be ignored, the same way broken
objects are ignored in pdf_init_document.
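The shape of the fix, sketched with MuPDF's fz_try/fz_catch idiom (the
loop context and the repair call's name are illustrative):

    /* Inside the loop over candidate object streams during repair: a
       broken stream is reported and skipped rather than aborting. */
    fz_try(ctx)
    {
        repair_one_obj_stm(doc, num);  /* illustrative name */
    }
    fz_catch(ctx)
    {
        fz_warn(ctx, "ignoring broken object stream (%d 0 R)", num);
    }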
|
|
Avoid recursion to avoid stack overflows.
|
|
The pattern repeat calculation should be done in pattern space, but
one of the arguments in the calculation was being taken from device
space. Fix this. Also only apply the bias in the case where the
bias would make it larger.
173 progressions.
|
|
Currently, if we spot a bad xref as we are reading a PDF in, we can
repair that PDF by doing a long exhaustive read of the file. This
reconstructs the information that was in the xref, and the file can
be opened (and later saved) as normal.
If we hit an object that is not in the expected place, however, we
cannot trigger a repair at that point - so xrefs containing bad
offsets (that still lie within the bounds of the file) will never be
repaired.
This commit solves that by triggering a repair (just once) whenever
we fail to parse an object in the expected place.
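Schematically (the parsing and repair helpers named here are stand-ins
for the real routines, and repair_attempted is the once-only flag),
object loading now falls back like this:

    fz_try(ctx)
    {
        obj = parse_object_at_offset(doc, num, gen);
    }
    fz_catch(ctx)
    {
        if (doc->repair_attempted)
            fz_rethrow(ctx);  /* already repaired once; give up */
        doc->repair_attempted = 1;
        repair_whole_xref(doc);
        obj = parse_object_at_offset(doc, num, gen);
    }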
|
|
Empty Contents streams are not valid - they need at least a /Length entry.
The alternative approach would be to put /Length 0 and update it
later.
|
|
Thanks to Michael Cadilhac for spotting this.
|
|
Simple typo. Thanks to Alexander Monakov for spotting this.
|
|
Previously we were setting the blendmode in the created form XObject's
transparency group definition. This didn't work, as PDF readers don't
look for it there.
Now we set it in the calling stream's resources, and set it before
calling the group.
|
|
Thanks to Makoto Fujiwara for spotting this.
|
|
It seems that (int)-98.5 is -98, not -99. Use floorf instead.
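For reference, a tiny standalone example of the difference (plain C,
not the MuPDF call site):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        float v = -98.5f;
        printf("%d\n", (int)v);          /* -98: truncates toward zero */
        printf("%d\n", (int)floorf(v));  /* -99: rounds toward -infinity */
        return 0;
    }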
|
|
If we have a NULL page, don't attempt to pass events to it.
|
|
Unused field. Also tweak some comments for clarity.
|
|
A poorly formed string can cause us to overrun the end of the buffer.
Now we check the end of the string at each stage to avoid this.
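The pattern, in a hedged sketch (not the actual parsing code): every
step that advances through the string re-checks the end pointer before
reading further.

    /* Illustrative only: walk a '('-delimited string without reading
       past the end of the buffer, even when an escape is truncated. */
    static const char *skip_string(const char *s, const char *end)
    {
        while (s < end && *s != ')')
        {
            if (*s == '\\')
            {
                s++;
                if (s >= end)  /* escape at the very end: stop here */
                    return end;
            }
            s++;
        }
        return s;
    }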
|
|
We were miscalculating the offsets into a sampled functions table,
causing us to overrun the end. Fixed here.
|
|
The changes to fz_render_glyph cause the scissor rectangle to no longer
match the transformation matrix, which causes Type 3 glyphs to be
clipped at larger resolutions.
|
|
ft_file was removed in a2c945506ea2a2b58edbde84124094c6b4f69eac even
though it might still be needed by downstream consumers (such as
SumatraPDF) for allowing devices to load fonts again when a font has
been loaded by fz_new_font_from_file, which doesn't maintain a buffer.
|
|
fz_new_font_from_buffer keeps the buffer for the font, so callers which
no longer need the data have to drop the buffer themselves explicitly.
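A hedged usage sketch (the exact signature has varied between MuPDF
versions, and the argument values are placeholders):

    /* buf is an fz_buffer holding the raw font file data */
    fz_font *font = fz_new_font_from_buffer(ctx, "SomeFont", buf, 0, 1);
    fz_drop_buffer(ctx, buf);  /* the font keeps its own reference */
    /* ... use the font ... */
    fz_drop_font(ctx, font);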
|
|
The actual issue here is that a pixmap is dropped more times than
it should be due to an error in the rendering pipeline.
The problem arises because we fail to push a clip image mask, but
still pop the mask off the stack later. This puts us off by 1 in
the stack handling.
The simplest solution to this (that will be safe no matter what
mistakes are made by the caller too) is to add some simple tests
in the draw device to ensure we do not free too early.
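One way such a guard can look (the field names are invented; this is
not the actual draw device code):

    /* Sketch: never pop below the base of the clip/mask stack, so an
       earlier failed push cannot lead to an extra drop of a pixmap. */
    if (dev->top <= dev->base)
    {
        fz_warn(ctx, "unexpected pop in draw device");
        return;
    }
    dev->top--;
    fz_drop_pixmap(ctx, dev->stack[dev->top].mask);
    dev->stack[dev->top].mask = NULL;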
|
|
I believe the implementation for revision 3 is wrong.
From pdf_reference17.pdf, step 5 of Algorithm 3.5 says:
5. Do the following 19 times: Take the output from the
previous invocation of the RC4 function and pass it
as input to a new invocation of the function; use an
encryption key generated by taking each byte of the
original encryption key (obtained in step 1) and
performing an XOR (exclusive or) operation between
that byte and the single-byte value of the iteration
counter (from 1 to 19).
"the original encryption key (obtained in step 1)" is pwbuf
(32 bytes) not key. Even if it was key, it wouldn't be n
bytes long, but only 16.
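The quoted step, sketched in C: rc4() stands for whatever RC4 routine
is at hand (key, key length, input, length, output), okey is the
original encryption key from step 1 and n is its length in bytes.

    unsigned char xorkey[32];
    int i, j;

    for (i = 1; i <= 19; i++)
    {
        /* This round's key: each byte of the original key XORed with
           the iteration counter. */
        for (j = 0; j < n; j++)
            xorkey[j] = okey[j] ^ i;

        /* The previous round's output is the next round's input. */
        rc4(xorkey, n, buf, len, buf);
    }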
|
|
In case of an unknown function type, we free 'func'. Then we later
read func->type out of the block, and drop the block.
The simple solution is not to free the block initially, and to let the
drop of the block free it for us.
|
|
Rely on the document creator to have sorted them rather than risk
getting the wrong page order.
|
|
pdf_load_annots was leaving the tail pointer pointing at the
automatic variable head in the case of the page having no
annotations.
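Sketch of the underlying pattern (not the literal MuPDF code): the
list is built through a tail pointer that initially addresses a stack
variable, so that pointer must never remain reachable after the
function returns.

    typedef struct annot { struct annot *next; /* ... */ } annot;

    static annot *load_annots(void)
    {
        annot *head = NULL;
        annot **tail = &head;  /* points into this stack frame for now */

        /* for each annotation read from the page:
               *tail = new_annot;
               tail = &new_annot->next;                                */

        /* With zero annotations, 'tail' still addresses the local
           'head'; storing 'tail' anywhere that outlives this call (as
           the old code effectively did) leaves a dangling pointer.
           Only 'head' may escape. */
        return head;
    }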
|
|
Use fz_buffer to wrap and reference count data used in font.
|