Age | Commit message | Author |
|
Add a new index that quickly maps object number to the first
xref in which an object appears. This appears to get us the
speed back that we lost when moving to sparse xrefs.
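The message does not show the implementation; a minimal sketch of the idea, with hypothetical names (not the actual MuPDF code), might look like this:

    /* Cache, per object number, the index of the first xref the object
       appears in, so repeated lookups can skip walking the xref list. */
    typedef struct
    {
        int num_objects;
        int *first_xref_num;  /* xref index for object i, or -1 if not yet known */
    } obj_xref_index;

    static int
    first_xref_for_object(obj_xref_index *idx, int num, int (*scan_xref_list)(int num))
    {
        if (num < 0 || num >= idx->num_objects)
            return -1;
        if (idx->first_xref_num[num] < 0)
            idx->first_xref_num[num] = scan_xref_list(num);  /* slow path, taken once per object */
        return idx->first_xref_num[num];
    }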
|
|
Following the recent change to hold pdf xrefs in their native 'sparse'
representation, searching the xref takes longer.
Malc has investigated this slowdown and found that it can be largely
avoided by not searching the xref lists first. A modified version of
his first patch has gone in already (getting us from 10x slower to
just 5x slower).
This commit is a modified version of a second patch from him. Again
it works by avoiding searching the xref list twice. The original
version of this patch 1) appears broken to me, as it could return the
wrong xref entry when object streams have more than one object in them,
and 2) supposedly gets the speed back to the original 'pre-sparse change'
speed.
I have updated the patch to fix 1), and I hope this should not affect 2).
I am slightly suspicious that removing a search can get us a 5x speed
increase, but certainly this is an improvement.
There is scope for further reducing the search times by using a new
table to map object number -> xref number, but unless we find a case
where we are noticeably slower than before, I think we can ignore this.
|
|
We know i >= 0 as we've already thrown if i < 0 earlier.
Credit to Malc for spotting this.
|
|
The recent change to holding pdf xrefs in a sparse format has resulted
in a significant decrease in speed (10x). Malc points out that some of
this (2x) can be recovered simply by making pdf_cache_object return the
entry in which it found the object.
This saves us having to immediately call pdf_get_xref_entry again
afterwards.
I am still thinking about ways to get the remaining time back.
|
|
C89 code is preferable to gcc code; define variables at the start of
blocks.
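For illustration, the style this asks for (declarations at the top of a block, rather than gcc/C99 mid-block or for-loop declarations):

    static int
    sum(const int *v, int n)
    {
        int total = 0;  /* C89: all declarations at the start of the block */
        int i;

        for (i = 0; i < n; i++)  /* not "for (int i = 0; ...)", which is C99/gcc */
            total += v[i];
        return total;
    }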
|
|
Ghostscript's LZW decoder accepts the invalid LZW code 4096 if it is
immediately followed by the LZW clear code 256, in order to handle files
produced by certain broken encoders; other common PDF readers seem to
have similar error handling. This patch makes MuPDF tolerate such broken
files as well.
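A minimal sketch of that tolerance, with illustrative names (the real decoder state is more involved):

    #define LZW_CLEAR_CODE 256
    #define LZW_TABLE_FULL 4096  /* first code beyond a 12-bit code table */

    /* Return non-zero if the code pair is acceptable. A bare 4096 is
       invalid, but 4096 immediately followed by a clear code can be
       tolerated: the clear code resets the string table before the
       bogus code would ever be used. */
    static int
    lzw_code_ok(int code, int next_code)
    {
        if (code < LZW_TABLE_FULL)
            return 1;
        return code == LZW_TABLE_FULL && next_code == LZW_CLEAR_CODE;
    }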
|
|
MSVC complains about using const char** as an argument to qsort or free,
which both expect pointers to (pointers to)* non-const values. Adding
type casts fixes the warning.
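A small self-contained example of the kind of cast involved (illustrative, not the patched code itself):

    #include <stdlib.h>
    #include <string.h>

    static int
    cmp_name(const void *a, const void *b)
    {
        return strcmp(*(const char * const *)a, *(const char * const *)b);
    }

    static void
    sort_names(const char **names, size_t n)
    {
        /* qsort's first parameter is a plain void *, so passing a
           const char ** directly makes MSVC warn; cast it away. */
        qsort((void *)names, n, sizeof(*names), cmp_name);
    }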
|
|
Starting with commit 2f4cdd4fd0580e3121773e89a7c6e7a9e1ffa54b,
xps_read_part zero-terminates the read data. However, it also counts
that zero terminator towards the part's size, which confuses callers
handling non-text data.
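The fix amounts to the following pattern (sketched with simplified types, not the actual xps code):

    #include <stdlib.h>
    #include <string.h>

    typedef struct { unsigned char *data; size_t size; } part;  /* simplified stand-in */

    /* Zero-terminate for the benefit of text consumers, but report only
       the real data size, so binary consumers see exactly the bytes read. */
    static int
    set_part_data(part *p, const unsigned char *buf, size_t n)
    {
        p->data = malloc(n + 1);
        if (!p->data)
            return -1;
        memcpy(p->data, buf, n);
        p->data[n] = 0;  /* terminator... */
        p->size = n;     /* ...excluded from the size */
        return 0;
    }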
|
|
Commit 5add23c7233c3f34fdfa6387873b1d3bdb93e1d6 and commit
2f4cdd4fd0580e3121773e89a7c6e7a9e1ffa54b introduced three memory leaks
which only appear in error cases:
* unzip.c leaks if a ZIP archive uses a compression method other than
store or Deflate
* xps-zip.c leaks if fz_open_archive_with_stream throws for broken
ZIP archives
* xps-zip.c leaks also if a piece of a split file is missing
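The usual fitz idiom for avoiding such error-path leaks, sketched against the 1.x-era API (fz_close was the stream-dropping call of the time):

    #include "mupdf/fitz.h"

    static fz_buffer *
    read_whole_stream(fz_context *ctx, const char *filename)
    {
        fz_stream *stm = NULL;
        fz_buffer *buf = NULL;

        fz_var(stm);
        fz_var(buf);

        fz_try(ctx)
        {
            stm = fz_open_file(ctx, filename);
            buf = fz_read_all(stm, 1024);
        }
        fz_always(ctx)
        {
            if (stm)
                fz_close(stm);  /* runs on success and on error alike */
        }
        fz_catch(ctx)
            fz_rethrow(ctx);

        return buf;
    }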
|
|
After calling ensure_solid_xref, the pdf_xref pointer must be updated
in case ensure_solid_xref has reallocated the sections table or is now
using a different section table than before. Commit
e767bd783d91ae88cd79da19e79afb2c36bcf32a fails to do so in one case.
TODO: Why does pdf_xref_find_subsection solidify xref section 0 instead
of xref section sub?
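The underlying hazard, reduced to a self-contained sketch (illustrative types; any call that may realloc the table invalidates pointers taken before it):

    #include <stdlib.h>

    typedef struct { int len; int *table; } section;

    /* Stand-in for ensure_solid_xref: may realloc s->table, invalidating
       every pointer previously derived from it. */
    static void
    solidify(section *s, int need)
    {
        if (s->len < need)
        {
            s->table = realloc(s->table, need * sizeof *s->table);
            s->len = need;
        }
    }

    static int *
    get_entry(section *s, int num)
    {
        solidify(s, num + 1);
        return &s->table[num];  /* fetched only AFTER the possible realloc */
    }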
|
|
It's surprising and may cause unexpected effects for code that may have
saved pointers to the underlying data in read-only buffers, such as
fz_new_image_from_buffer.
|
|
Read the contents of a file into a fz_buffer in one go.
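Assuming the helper landed as fz_read_file (a hedged guess from the message), usage collapses to:

    fz_buffer *buf = fz_read_file(ctx, "input.css");
    /* ... use buf->data and buf->len ... */
    fz_drop_buffer(ctx, buf);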
|
|
Many instances just use the data and free it, so reallocing to shrink
is a waste of time. Other instances need to append a terminating zero,
such as the XML and CSS parsers.
|
|
Find the first sibling, next sibling or first child matching tag name.
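If these landed as fz_xml_find, fz_xml_find_next and fz_xml_find_down (an assumption based on fitz naming conventions), iterating all <item> children of a node would look like:

    fz_xml *node;
    for (node = fz_xml_find_down(root, "item"); node; node = fz_xml_find_next(node, "item"))
        handle_item(node);  /* hypothetical consumer */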
|
|
Currently each xref in the file results in an array from 0 to
num_objects. If we have a file that has been updated many times
this causes a huge waste of memory.
Instead we now hold each xref as a list of non-overlapping subsections
(exactly as the file holds them).
Lookup is therefore potentially slower, but only on files where the
xrefs are highly fragmented (i.e. where we would be saving in memory
terms).
Some parts of our code (notably the file writing code that does
garbage collection etc.) assume that lookups of object entry pointers
will not change previous object entry pointers that have been
looked up. To cope with this, and to cope with the case where we are
updating/creating new objects, we introduce the idea of a 'solid'
xref.
A solid xref is one that has a single subsection record spanning
the entire range of valid object numbers for the file. Once we have
ensured that an xref is 'solid', we can safely work on the pointers
within it without fear of them moving.
We ensure that any 'incremental' xref is solid.
We also ensure that any non-incremental write makes the xref solid.
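A minimal sketch of the subsection representation and lookup (illustrative names, not the real structures in pdf-xref.c):

    typedef struct { int type; long offset; } entry;  /* simplified */

    typedef struct subsec
    {
        int start, len;       /* covers object numbers [start, start + len) */
        entry *table;         /* len entries, exactly as stored in the file */
        struct subsec *next;
    } subsec;

    static entry *
    lookup_entry(subsec *s, int num)
    {
        for (; s; s = s->next)
            if (num >= s->start && num < s->start + s->len)
                return &s->table[num - s->start];
        return NULL;  /* object not present in this xref */
    }

A solid xref is then simply the case where the list holds one subsection with start 0 and len equal to the number of objects, so &table[num] pointers remain stable.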
|
|
pdf_lookup_page_loc_imp currently throws if any object in the page tree
is neither a /Pages node nor a /Page leaf. This unnecessarily rejects
slightly broken documents such as the ones from
https://code.google.com/p/sumatrapdf/issues/detail?id=2582 and
https://code.google.com/p/sumatrapdf/issues/detail?id=2608 .
pdf_count_pages_before_kid currently throws, wrongly, if a /Pages node
contains no kids and correctly declares that fact (which even seems to
be permitted by the PDF specification).
|
|
In load_sample_func, the stream is not closed and thus leaked if one of
the fz_read_byte or fz_read_bits calls throws (which might happen e.g.
on a Deflate data error).
In pdf_load_compressed_inline_image, the allocated buffer is not freed
if one of the stream initializers or the tile creation throws
(fz_open_leecher does not take ownership of the stream).
|
|
In file included from scripts/cmapdump.c:19:
scripts/../source/fitz/ftoa.c:30:23: warning: redefinition of typedef 'ulong' is a C11 feature [-Wtypedef-redefinition]
typedef unsigned long ulong;
^
scripts/../source/fitz/strtod.c:30:23: note: previous definition is here
typedef unsigned long ulong;
^
1 warning generated.
(Apparently in earlier versions of clang this is an error.)
|
|
Use the actual ranges from the cpt-to-gid cmap to optimize the
remapping of ToUnicode cmaps from cpt-to-unicode into gid-to-unicode
format.
|
|
When inverting the CMap to create a ToUnicode, first check the actual
range of input characters rather than relying only on the codespace
range list.
|
|
The dtoa function is for doubles (which is what MuJS uses) but for MuPDF
we only need and want float precision in our output formatting.
|
|
This is required for XPS, as otherwise images can be completely
omitted.
|
|
|
|
|
|
Add a whitespace preserving mode, for future use with XHTML.
Also parse XHTML entities. This is not strictly according to spec,
but for properly formed XML documents it should not matter.
|
|
Add a new class of errors and use them to abort interpretation when
the test device detects a color page.
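Assuming the new class surfaces as an FZ_ERROR_ABORT code (an assumption; the message does not name it), callers can swallow exactly that error and treat it as "color page found":

    fz_try(ctx)
        fz_run_page(doc, page, dev, &fz_identity, NULL);
    fz_catch(ctx)
    {
        if (fz_caught(ctx) != FZ_ERROR_ABORT)  /* assumed name for the new class */
            fz_rethrow(ctx);
        /* an abort here just means the test device found color early */
    }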
|
|
Even though the encryption key length isn't supposed to be taken from
the encryption dictionary's /Length for crypt version 4, other readers
such as Adobe's still use that value if a crypt filter's /Length is
missing.
See https://code.google.com/p/sumatrapdf/issues/detail?id=2710 for a
document where this makes a difference (or simply remove /Length from
the crypt filter in any document encrypted with crypt version 4 and an
AESV2 crypt filter).
|
|
With this change, all 32-bit values read from untrusted data through
read_value are compared unmodified in order to prevent unintended
integer overflows during the comparison.
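The principle, as a self-contained illustration (not the cmapdump code itself): validate an untrusted 32-bit value without doing arithmetic on it, so no wrap-around can skew the comparison.

    #include <stdint.h>

    static int
    fits(uint32_t value, uint32_t count, uint32_t limit)
    {
        /* NOT: value + count <= limit -- value + count may wrap to a small number */
        return count <= limit && value <= limit - count;
    }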
|
|
Rather than decoding the entire image only to give up when we find the
very first pixel is color, add code so that the test device can
treat the image as a stream. This means that (for most image types
at least) we can bail out without decoding everything.
This reduces the runtime of 3001Pages.pdf from 14 minutes to 18 seconds.
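A sketch of the idea against the 1.x-era stream API (fz_read took no context argument), assuming packed RGB samples and, for brevity, ignoring triplets split across read boundaries:

    #include <stdlib.h>
    #include "mupdf/fitz.h"

    static int
    stream_is_color(fz_stream *stm, int threshold)
    {
        unsigned char buf[3 * 1024];
        int n, i;

        while ((n = fz_read(stm, buf, sizeof buf)) > 0)
            for (i = 0; i + 2 < n; i += 3)
                if (abs(buf[i] - buf[i+1]) > threshold ||
                    abs(buf[i] - buf[i+2]) > threshold)
                    return 1;  /* color found: stop without decoding the rest */
        return 0;  /* every sample was (near-)gray */
    }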
|
|
Add some #definery for platforms where NAN and INFINITY aren't
defined in std headers.
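Fallbacks of the kind described (illustrative; the exact expressions vary by compiler, and some MSVC versions need different forms):

    #include <math.h>

    #ifndef INFINITY
    #define INFINITY (1.0f / 0.0f)
    #endif
    #ifndef NAN
    #define NAN (INFINITY - INFINITY)
    #endif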
|
|
We were failing to drop each pixmap after testing it, and to free
the test_device structure at the end of each run.
|
|
|
When we detect that a page is color, set the ignore-image hint
to avoid loading further images. The overhead of loading
images is not generally huge, except for JPEG2000 ones, which
currently require decoding at load time. This therefore saves
lots of time for such files.
Also, a tiny tweak to ignore page components with 0 alpha.
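The hint in question is plausibly the FZ_IGNORE_IMAGE device hint (hedged; based on the fitz hint names of the time):

    /* Once the page is known to be color, skip loading any further images. */
    fz_enable_device_hints(dev, FZ_IGNORE_IMAGE);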
|
|
The original version of the test device could characterise pages
as grayscale/color purely based on the colorspaces used.
This could easily be upset, however, by grayscale images or shadings
that happened to be specified in non-grayscale colorspaces.
We now look at the actual shading and image color values, and use
a threshold to allow for some measure of rounding error in
color values that are in practice grayscale.
|
|
Let either or both of the 'prepare' and 'process' callbacks be no-ops.
|
|
Don't rely on the csize and usize fields being set in the individual
entry headers.
|
|
win32 supports tinting, but cannot change the color from the default.
|