|
When I changed the stream implementations to use implementation
specific buffers, rather than a generic public one in every fz_stream,
I changed fz_read_byte to only get a single byte at a time.
I noted at the time that the underlying stream was free to decode
larger blocks if it wanted to, but I forgot to actually do this for
the flate decoder. Fixing this here should solve the speed issues.
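A rough sketch of the idea (hypothetical names and layout, not the actual MuPDF code): the flate stream's refill callback inflates a whole buffer of output per call, so byte-at-a-time readers only pay the zlib overhead once per block.

    /* Sketch: decode a block at a time into an implementation-specific buffer. */
    #include <zlib.h>

    #define BLOCK_SIZE 4096

    typedef struct
    {
        z_stream z;                     /* zlib state */
        unsigned char out[BLOCK_SIZE];  /* private output buffer */
        unsigned char *rp, *wp;         /* read/write pointers into out[] */
        /* ... underlying stream, input buffer, etc. ... */
    } flate_state;

    /* Refill the output buffer; return the first byte, or -1 on EOF. */
    static int flate_next(flate_state *st)
    {
        int err;

        st->z.next_out = st->out;
        st->z.avail_out = BLOCK_SIZE;   /* decode up to a full block, not 1 byte */

        /* (Input would be topped up from the underlying stream here.) */
        err = inflate(&st->z, Z_NO_FLUSH);
        if (err != Z_OK && err != Z_STREAM_END)
            return -1;

        st->rp = st->out;
        st->wp = st->out + (BLOCK_SIZE - st->z.avail_out);
        if (st->rp == st->wp)
            return -1;                  /* nothing produced: EOF */
        return *st->rp++;
    }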
|
|
Without this, comparefiles/Bug695086 renders the barcode test upside
down.
|
|
|
|
Grow the edge list using an exponential realloc pattern.
Use qsort for huge paths and only fall back to the simple
shell sort for small paths.
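A minimal sketch of both points, with a hypothetical edge struct and an arbitrary size threshold:

    #include <stdlib.h>

    typedef struct { int x0, y0, x1, y1; } edge;
    typedef struct { edge *edges; int len, cap; } edge_list;

    static int push_edge(edge_list *el, edge e)
    {
        if (el->len == el->cap)
        {
            int newcap = el->cap ? el->cap * 2 : 64;   /* exponential growth */
            edge *tmp = realloc(el->edges, newcap * sizeof(edge));
            if (!tmp)
                return -1;
            el->edges = tmp;
            el->cap = newcap;
        }
        el->edges[el->len++] = e;
        return 0;
    }

    static int cmp_edge(const void *a, const void *b)
    {
        return ((const edge *)a)->y0 - ((const edge *)b)->y0;
    }

    static void shell_sort_edges(edge *a, int n)
    {
        int gap, i, j;
        edge t;
        for (gap = n / 2; gap > 0; gap /= 2)
            for (i = gap; i < n; i++)
                for (j = i - gap; j >= 0 && a[j].y0 > a[j + gap].y0; j -= gap)
                {
                    t = a[j]; a[j] = a[j + gap]; a[j + gap] = t;
                }
    }

    static void sort_edges(edge_list *el)
    {
        if (el->len > 10000)            /* huge path: qsort */
            qsort(el->edges, el->len, sizeof(edge), cmp_edge);
        else                            /* small path: simple shell sort */
            shell_sort_edges(el->edges, el->len);
    }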
|
|
Fixes bug introduced in commit 1679c1e7a89ae62260fd84ce55c6bef376c6e6ba:
Optimize UniXXX CMap files.
|
|
|
|
This adds a custom memory management layer between libjpeg and the calling
app - in such a way that the code can be shared between mupdf and
Ghostscript/PDL.
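One way such a layer can work, shown only as a sketch: replace libjpeg's low-level jmemsys.c allocation backend (jpeg_get_small and friends) with versions that forward to a caller-supplied allocator hung off cinfo->client_data. The jpeg_custom_allocator struct here is hypothetical, not the shared code referred to above.

    #include <stddef.h>
    #include "jpeglib.h"
    #include "jmemsys.h"

    typedef struct
    {
        void *(*malloc_fn)(void *opaque, size_t size);
        void (*free_fn)(void *opaque, void *ptr);
        void *opaque;
    } jpeg_custom_allocator;

    void *
    jpeg_get_small(j_common_ptr cinfo, size_t sizeofobject)
    {
        jpeg_custom_allocator *al = (jpeg_custom_allocator *)cinfo->client_data;
        return al->malloc_fn(al->opaque, sizeofobject);
    }

    void
    jpeg_free_small(j_common_ptr cinfo, void *object, size_t sizeofobject)
    {
        jpeg_custom_allocator *al = (jpeg_custom_allocator *)cinfo->client_data;
        al->free_fn(al->opaque, object);
    }

    /* jpeg_get_large/jpeg_free_large, jpeg_mem_available, jpeg_mem_init and
     * jpeg_mem_term would be wired up the same way. */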
|
|
Don't let a glyph's bbox be too much bigger than the font bbox.
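Illustrative sketch only (the slack factor and names are made up): clamp a glyph bbox that is wildly larger than the font bbox instead of trusting it.

    typedef struct { float x0, y0, x1, y1; } rect;

    static rect
    sane_glyph_bbox(rect glyph, rect font)
    {
        float gw = glyph.x1 - glyph.x0, gh = glyph.y1 - glyph.y0;
        float fw = font.x1 - font.x0, fh = font.y1 - font.y0;

        /* Allow some slack, but not an arbitrarily huge bbox. */
        if (gw > 2 * fw || gh > 2 * fh)
            return font;
        return glyph;
    }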
|
|
See bug 693314 (file Z23-04.pdf) for an example file.
|
|
key length.
This reverts commit b1ed116091b790223a976eca2381da2875341e10.
The key length for V==2 must be 40 <= length <= 128.
The key length for V==4 is not taken from the /Length entry.
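The rules stated above, written out as a sketch (function and parameter names hypothetical):

    static int
    pdf_key_length_bits(int V, int length_entry, int cf_length_bits)
    {
        if (V == 2)
        {
            if (length_entry < 40 || length_entry > 128)
                return -1;          /* invalid /Length */
            return length_entry;
        }
        if (V == 4)
            return cf_length_bits;  /* key length comes from the crypt filter,
                                       not from /Length */
        return 40;                  /* V == 1: 40-bit RC4 */
    }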
|
|
|
|
|
|
|
|
Same as for fz_bbox_fill_image_mask, fz_bbox_clip_image_mask must
transform the unit rectangle to get the bounding bbox.
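An image mask covers the unit square [0,1]x[0,1] in image space, so its device-space bbox comes from transforming that unit rectangle by the ctm. A plain-struct sketch (not the fz_* API):

    typedef struct { float a, b, c, d, e, f; } matrix;   /* [a b 0; c d 0; e f 1] */
    typedef struct { float x0, y0, x1, y1; } rect;

    static rect
    bbox_of_unit_rect(matrix m)
    {
        /* Transform the four corners of the unit square and take min/max. */
        float xs[4] = { m.e, m.a + m.e, m.c + m.e, m.a + m.c + m.e };
        float ys[4] = { m.f, m.b + m.f, m.d + m.f, m.b + m.d + m.f };
        rect r = { xs[0], ys[0], xs[0], ys[0] };
        int i;

        for (i = 1; i < 4; i++)
        {
            if (xs[i] < r.x0) r.x0 = xs[i];
            if (xs[i] > r.x1) r.x1 = xs[i];
            if (ys[i] < r.y0) r.y0 = ys[i];
            if (ys[i] > r.y1) r.y1 = ys[i];
        }
        return r;
    }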
|
|
There are two issues where variables may be used uninitialized:
* extract_exif_resolution fails to set xres and yres for JPEG images if
there's no valid resolution unit (mainly affects XPS documents)
* xps_measure_font_glyph uses hadv and vadv uninitialized if the glyph id
isn't valid (i.e. if FT_Get_Advance fails)
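The defensive pattern for the second case, sketched with hypothetical defaults: set the outputs before any code path that can bail out without writing them.

    #include <ft2build.h>
    #include FT_FREETYPE_H
    #include FT_ADVANCES_H

    static void
    measure_glyph(FT_Face face, unsigned int gid, float *hadv, float *vadv)
    {
        FT_Fixed adv = 0;

        *hadv = 0.0f;   /* defaults in case FT_Get_Advance fails */
        *vadv = 0.0f;

        if (FT_Get_Advance(face, gid, FT_LOAD_NO_SCALE, &adv) == 0)
            *hadv = adv / (float)face->units_per_EM;
        /* vadv would be computed similarly with FT_LOAD_VERTICAL_LAYOUT. */
    }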
|
|
Split common parts into separate CMap files and include them with usecmap.
This reduces the size of the compiled-in CMap resources from 3Mb to 2Mb.
|
|
Thanks to Triet Lai.
|
|
Increasing the existing data structure to 32-bit values would bloat the data
tables too much.
Simplify the data structure and use three separate range tables for lookups --
one with small 16-bit to 16-bit range lookups, one with 32-bit range lookups,
and a final one for one-to-many lookups.
This loses the range-to-table optimization we had before, but even with the
extra ranges this necessitates, the total size of the compiled binary CMap data
is smaller than if we were to extend the previous scheme to 32 bits.
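A sketch of the layout described above (field names are illustrative, not the real structs): three separate range tables searched in turn when looking up a code.

    typedef struct
    {
        unsigned short low, high;    /* small 16-bit code range */
        unsigned short out;          /* start of the mapped output range */
    } cmap_range16;

    typedef struct
    {
        unsigned int low, high;      /* ranges needing 32-bit codes or outputs */
        unsigned int out;
    } cmap_range32;

    typedef struct
    {
        unsigned int low;            /* single code mapping to several outputs */
        unsigned short len;
        unsigned short offset;       /* index into a shared output table */
    } cmap_one_to_many;

    typedef struct
    {
        cmap_range16 *ranges;        int nranges;
        cmap_range32 *xranges;       int nxranges;
        cmap_one_to_many *mranges;   int nmranges;
        unsigned short *mtable;      /* outputs for the one-to-many entries */
    } cmap_tables;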
|
|
Remove obsolete Adobe-Japan-2 based CMaps.
|
|
pdf_write_document still writes the entire xref, with references to all
the freed objects, even if the xref has been compacted, which makes the
result of mutool clean -ggg larger than necessary.
|
|
|
|
Currently, png_read_phys always rounds the resolution down. Many images
have a resolution just slightly shy of 96 DPI and are thus rendered too
large when they're resized from 95 to match the required 96 for output.
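The fix amounts to rounding to nearest when converting the pHYs value (pixels per metre) to DPI, e.g. 3779 px/m is 95.99 DPI and should become 96, not 95. A one-line sketch:

    static int
    phys_to_dpi(unsigned int pixels_per_metre)
    {
        /* round to nearest instead of truncating */
        return (int)(pixels_per_metre * 0.0254f + 0.5f);
    }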
|
|
If the reported height is 0 or too large, use the image size reported
in the PDF itself instead (in the case of height 0, the JPEG library
is supposed to read the correct value from the DNL segment, but libjpeg
doesn't support that).
|
|
If a JPEG stream is missing valid values for width/height (usually -1),
Adobe Reader substitutes these using the values read from the PDF
object. This can be done by scanning and patching the data before
passing it to libjpeg.
Thanks to zeniko for the patch.
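A sketch of the general approach (not the actual patch): walk the JPEG marker segments up to SOS and, if the SOFn frame header carries invalid dimensions, overwrite them with the width/height from the PDF image dictionary.

    #include <stddef.h>

    static void
    patch_jpeg_dimensions(unsigned char *buf, size_t len, int pdf_w, int pdf_h)
    {
        size_t i = 2;                      /* skip the initial SOI marker */

        while (i + 9 < len && buf[i] == 0xFF)
        {
            unsigned char marker = buf[i + 1];
            size_t seglen = (buf[i + 2] << 8) | buf[i + 3];

            if (marker == 0xDA)            /* SOS: frame header must come earlier */
                break;
            if (marker >= 0xC0 && marker <= 0xCF &&
                marker != 0xC4 && marker != 0xC8 && marker != 0xCC)
            {
                /* SOFn: precision byte, then 2-byte height and width */
                int h = (buf[i + 5] << 8) | buf[i + 6];
                int w = (buf[i + 7] << 8) | buf[i + 8];
                if (h == 0 || h == 0xFFFF) { buf[i + 5] = pdf_h >> 8; buf[i + 6] = pdf_h & 0xFF; }
                if (w == 0 || w == 0xFFFF) { buf[i + 7] = pdf_w >> 8; buf[i + 8] = pdf_w & 0xFF; }
                break;
            }
            i += 2 + seglen;               /* advance to the next marker */
        }
    }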
|
|
fast_cmyk_to_rgb had a simple 1-place cache to avoid recalculating
the same conversion again and again. The implementation was broken
though, in both the C and ARM code versions. This seems to fix it.
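What a correct single-entry cache looks like, sketched with a simplified conversion formula (not the actual fast_cmyk_to_rgb code): remember the last CMYK tuple and its RGB result, and only recompute when the input changes.

    typedef struct
    {
        unsigned char c, m, y, k;       /* last input */
        unsigned char r, g, b;          /* cached output */
        int valid;
    } cmyk_cache;

    static void
    cmyk_to_rgb_cached(cmyk_cache *cache,
        unsigned char c, unsigned char m, unsigned char y, unsigned char k,
        unsigned char *r, unsigned char *g, unsigned char *b)
    {
        if (!cache->valid || c != cache->c || m != cache->m || y != cache->y || k != cache->k)
        {
            cache->c = c; cache->m = m; cache->y = y; cache->k = k;
            cache->r = 255 - (c + k > 255 ? 255 : c + k);
            cache->g = 255 - (m + k > 255 ? 255 : m + k);
            cache->b = 255 - (y + k > 255 ? 255 : y + k);
            cache->valid = 1;
        }
        *r = cache->r;
        *g = cache->g;
        *b = cache->b;
    }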
|
|
... instead convert a JPEG2000 used as a soft mask into grayscale.
This is more robust than trusting the PDF specified colorspace over
the internal JPX colorspace.
The spec implies that in a colorspace conflict, the internal JPX
colorspace should be used.
The PDF colorspace may be a DeviceN or Separation colorspace.
DeviceN and Separation colorspaces are not valid destination
colorspaces, so we may not always be able to convert the internal
JPX colorspace into the PDF specified colorspace.
Converting from the internal colorspace into grayscale is more robust,
and solves the issue that the original commit was intended to fix.
|
|
|
|
|
|
|
|
When we return the padding byte in an fz_concat stream, ensure that
we remember to increment rp to point just past it. If not, then we'll
read 2 whitespace chars out. This is fine unless we try to
fz_unread_byte the first one, when we'll leave rp pointing to
an out-of-buffer address.
Credit to Malc for the bisecting/debugging that got me to the fix.
Many thanks.
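The pattern, sketched with a hypothetical state struct rather than the real fz_concat internals: when the synthetic whitespace byte between two sub-streams is handed out, rp must be advanced past it, otherwise a later unread (rp--) would move rp before the start of the buffer.

    typedef struct
    {
        unsigned char *buf, *rp, *wp;   /* buffer and read/write pointers */
        int pad_pending;                /* emit one space before the next stream */
    } concat_state;

    static int
    concat_read_byte(concat_state *st)
    {
        if (st->pad_pending)
        {
            st->pad_pending = 0;
            st->buf[0] = ' ';           /* padding byte lives in the buffer... */
            st->rp = st->buf + 1;       /* ...and rp points just past it */
            st->wp = st->buf + 1;
            return ' ';
        }
        if (st->rp < st->wp)
            return *st->rp++;
        return -1;                      /* would refill from the next sub-stream */
    }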
|
|
OpenType CFF fonts are detected as TYPE1 by ft_kind.
Relaxing the test for when to load a CIDToGIDMap lets us load
it even for OpenType fonts.
|
|
Don't print the code point number, to let the inhibition of multiple
identical warnings kick in.
|
|
fts_5904.xps and fts_5905.xps use namespace prefixes.
Work around that by ignoring the namespace prefix for tag names.
A more robust solution would be to expand or record the tag and
attribute namespaces in the fz_xml node structure, but that's
overkill for our current needs.
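A minimal sketch of ignoring the prefix: compare tag names from the character after the last ':', if there is one.

    #include <string.h>

    static const char *
    skip_namespace_prefix(const char *tag)
    {
        const char *colon = strrchr(tag, ':');
        return colon ? colon + 1 : tag;
    }

    /* e.g. skip_namespace_prefix("x:Glyphs") and "Glyphs" now compare equal. */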
|
|
|
|
|
|
NoExport (and ReadOnly) fields shouldn't mark the document for saving.
|
|
After rushing to get the fix for a crash in, I realised the
routine could be simplified a bit.
|
|
|
|
This fixes three instances of warning C4706, allows compilation with
VS2013, and prevents an accidental va_end when va_end is defined
(which is the case for debug builds).
|
|
|
|
Michael spotted that double closing an fz_stream on an inline image
does bad things. Simple fix is not to double close.
|
|
It has no real reason to live in mudraw, and it does pull in the
javascript dependency via pdf-form.c.
|
|
Split functions out of pdf-form.c that shouldn't be there, and make
javascript initialization explicit.
|
|
|
|
Adds a simpler choice of Javascript library to the makefiles.
It will prefer, in order: MuJS, JavaScriptCore, V8, or none,
based on HAVE_MUJS, HAVE_JSCORE, and HAVE_V8.
For simplicity, we build mujstest even with no javascript implementation.
|
|
|
|
0.4 is not exactly representable using floats, and libjs uses a different
atod function than v8.
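A quick illustration of the representability point (standalone, not test code from the tree):

    #include <stdio.h>

    int main(void)
    {
        /* 0.4 has no exact binary representation; the nearest double prints as: */
        printf("%.17g\n", 0.4);   /* 0.40000000000000002 */
        return 0;
    }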
|
|
|
|
New routine to filter the content streams for pages, xobjects,
type3 charprocs, patterns etc. The filtered streams are guaranteed
to be properly matched with q/Q's, and to not have changed the top
level ctm. Additionally we remove (some) repeated settings of
colors etc. This filtering can be extended to be smarter later.
The idea of this is to both repair after editing, and to leave the
streams in a form that can be easily appended to.
This is preparatory to work on Bates numbering and Watermarking.
Currently the streams produced are uncompressed.
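One of the guarantees above, sketched with hypothetical callback names (not the actual filter): track q/Q nesting while copying operators, drop a Q that would pop the top-level graphics state, and close any q left open at the end so the output can safely be appended to.

    #include <string.h>

    typedef struct
    {
        int depth;                       /* current q/Q nesting depth */
        void (*emit)(void *user, const char *op);
        void *user;
    } filter_state;

    static void
    filter_op(filter_state *fs, const char *op)
    {
        if (!strcmp(op, "q"))
            fs->depth++;
        else if (!strcmp(op, "Q"))
        {
            if (fs->depth == 0)
                return;                  /* would unbalance the top level: drop it */
            fs->depth--;
        }
        fs->emit(fs->user, op);
    }

    static void
    filter_end(filter_state *fs)
    {
        while (fs->depth > 0)            /* close any q left open by the stream */
        {
            fs->emit(fs->user, "Q");
            fs->depth--;
        }
    }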
|
|
When you use mutool clean to subset pages out of a PDF, we already
remove the Name tree entries for named locations that aren't in the
target file. Until now, however, we have failed to remove references
to these removed names. This can cause errors (really warnings) when
reading the file back.
|