key length.
This reverts commit b1ed116091b790223a976eca2381da2875341e10.
The key length for V==2 must be 40 <= length <= 128.
The key length for V==4 is not taken from the /Length entry.

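A minimal sketch of the validation rule described above, assuming a hypothetical pdf_valid_crypt_length helper (not the actual MuPDF code):

```c
/* Hypothetical check mirroring the rules above: V==2 keys must be
 * 40..128 bits; for V==4 the key length comes from the crypt filter,
 * so the /Length entry is not consulted at all. */
static int
pdf_valid_crypt_length(int version, int length)
{
	if (version == 2)
		return length >= 40 && length <= 128;
	if (version == 4)
		return 1; /* ignore /Length */
	return length == 40; /* V==1 keys are fixed at 40 bits */
}
```
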
As with fz_bbox_fill_image_mask, fz_bbox_clip_image_mask must
transform the unit rectangle to get the bounding box.

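A hedged sketch of the pattern involved; fz_unit_rect and fz_transform_rect match the MuPDF API of this era, while the surrounding code is illustrative:

```c
/* An image mask covers the unit square in image space; its device
 * bbox is that square pushed through the current transform. */
fz_rect rect = fz_unit_rect;
fz_transform_rect(&rect, &ctm);
/* ... union rect into the bbox being accumulated ... */
```
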
There are two issues where variables may be used uninitialized:
* extract_exif_resolution fails to set xres and yres for JPEG images if
there's no valid resolution unit (mainly affects XPS documents)
* xps_measure_font_glyph uses hadv and vadv uninitialized if the glyph id
isn't valid (i.e. if FT_Get_Advance fails)

Split common parts into separate CMap files and include them with usecmap.
This reduces the size of the compiled-in CMap resources from 3 MB to 2 MB.

Thanks to Triet Lai.

Increasing the existing data structure to 32-bit values would bloat the data
tables too much.
Simplify the data structure and use three separate range tables for lookups --
one with small 16-bit to 16-bit range lookups, one with 32-bit range lookups,
and a final one for one-to-many lookups.
This loses the range-to-table optimization we had before, but even with the
extra ranges this necessitates, the total size of the compiled binary CMap data
is smaller than if we were to extend the previous scheme to 32 bits.

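A rough sketch of that three-table layout (type and field names are illustrative, not the exact MuPDF structures):

```c
/* Small ranges: code and CID both fit in 16 bits. */
typedef struct { unsigned short low, high, out; } cmap_range16;

/* Extended ranges: 32-bit codes and/or CIDs. */
typedef struct { unsigned int low, high, out; } cmap_range32;

/* One-to-many: 'out' indexes a shared table of CID runs. */
typedef struct { unsigned int low, out; } cmap_mrange;
```

Lookup would binary-search the 16-bit table first, falling back to the 32-bit and one-to-many tables only when needed, so the common case stays compact.
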
Remove obsolete Adobe-Japan-2 based CMaps.

pdf_write_document still writes the entire xref, with references to all
freed objects, even if the xref has been compacted; this makes the
result of mutool clean -ggg larger than necessary.

Currently, png_read_phys always rounds the resolution down. Many images
have a resolution just slightly shy of 96 DPI and are thus rendered too
large when they're resized from 95 DPI to match the required 96 DPI for
output.

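For example (plain arithmetic, not MuPDF code): the pHYs value commonly used for 96 DPI is 3779 pixels per metre, which is 95.99 DPI, so truncation yields 95 where rounding to nearest yields 96:

```c
int ppm = 3779;                               /* common "96 DPI" pHYs value */
int dpi_floor = ppm * 254 / 10000;            /* 95: truncates */
int dpi_round = (ppm * 254 + 5000) / 10000;   /* 96: rounds to nearest */
```
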
If the reported height is 0 or too large, use the image size reported
in the PDF itself instead. (In the case of height 0, the JPEG library
is supposed to read the correct value from the DNL segment, but libjpeg
doesn't support that.)

If a JPEG stream is missing valid values for width/height (usually -1),
Adobe Reader substitutes these using the values read from the PDF
object. This can be done by scanning and patching the data before
passing it to libjpeg.
Thanks to zeniko for the patch.

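A simplified sketch of that scan-and-patch idea (illustrative only: real code must also cope with 0xFF fill bytes and stop scanning at SOS):

```c
/* Walk the JPEG marker segments looking for SOFn; patch in the
 * width/height taken from the PDF image dictionary. */
static void
patch_jpeg_size(unsigned char *p, size_t n, int w, int h)
{
	size_t i = 2; /* skip the initial SOI marker */
	while (i + 9 <= n && p[i] == 0xFF)
	{
		unsigned char marker = p[i+1];
		size_t len = ((size_t)p[i+2] << 8) | p[i+3];
		if (marker >= 0xC0 && marker <= 0xCF &&
			marker != 0xC4 && marker != 0xC8 && marker != 0xCC)
		{
			/* SOFn payload: precision (1), height (2), width (2), ... */
			p[i+5] = (h >> 8) & 0xFF; p[i+6] = h & 0xFF;
			p[i+7] = (w >> 8) & 0xFF; p[i+8] = w & 0xFF;
			return;
		}
		i += 2 + len; /* skip marker plus its payload */
	}
}
```
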
fast_cmyk_to_rgb had a simple one-place cache to avoid recalculating
the same conversion again and again. The implementation was broken,
though, in both the C and ARM code versions. This seems to fix it.

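The intent of such a one-entry cache, sketched in C (a naive CMYK-to-RGB formula is used here purely for illustration):

```c
/* Convert n CMYK pixels to RGB, remembering the last input tuple
 * and its result so runs of identical pixels skip the arithmetic. */
static void
cmyk_to_rgb_line(const unsigned char *s, unsigned char *d, int n)
{
	int C = -1, M = -1, Y = -1, K = -1; /* impossible: first pixel misses */
	unsigned char r = 0, g = 0, b = 0;
	while (n--)
	{
		int c = s[0], m = s[1], y = s[2], k = s[3];
		if (c != C || m != M || y != Y || k != K)
		{
			C = c; M = m; Y = y; K = k; /* update the cache key; forgetting
			                               this is the classic bug */
			r = 255 - (c + k > 255 ? 255 : c + k);
			g = 255 - (m + k > 255 ? 255 : m + k);
			b = 255 - (y + k > 255 ? 255 : y + k);
		}
		d[0] = r; d[1] = g; d[2] = b;
		s += 4; d += 3;
	}
}
```
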
... instead convert a JPEG2000 used as a soft mask into grayscale.
This is more robust than trusting the PDF specified colorspace over
the internal JPX colorspace.
The spec implies that in a colorspace conflict, the internal JPX
colorspace should be used.
The PDF colorspace may be a DeviceN or Separation colorspace.
DeviceN and Separation colorspaces are not valid destination
colorspaces, so we may not always be able to convert the internal
JPX colorspace into the PDF specified colorspace.
Converting from the internal colorspace into grayscale is more robust,
and solves the issue that the original commit was intended to fix.

When we return the padding byte in an fz_concat stream, ensure that
we remember to increment rp to point just past it. If not, we'll
read two whitespace chars out. This is fine unless we try to
fz_unread_byte the first one, in which case we'll leave rp pointing to
an out-of-buffer address.
Credit to Malc for the bisecting/debugging that got me to the fix.
Many thanks.

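The shape of the fix, sketched (the one-byte buffer and field names are illustrative of the fz_stream internals of this era, not the actual code):

```c
/* Inside the concat stream's 'next' function, when emitting the
 * synthetic padding byte between two concatenated streams: rp must
 * end up pointing just past the byte we return, so that a later
 * fz_unread_byte steps back onto the pad, not out of the buffer. */
state->pad_byte = ' ';
stm->rp = &state->pad_byte;
stm->wp = stm->rp + 1;
return *stm->rp++; /* return the pad AND advance rp past it */
```
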
OpenType CFF fonts are detected as TYPE1 by ft_kind.
Relaxing the test for when to load a CIDToGIDMap lets us load
it even for OpenType fonts.

Don't print the code point number, so that the suppression of
repeated identical warnings can kick in.

fts_5904.xps and fts_5905.xps use namespace prefixes.
Work around that by ignoring the namespace prefix for tag names.
A more robust solution would be to expand or record the tag and
attribute namespaces in the fz_xml node structure, but that's
overkill for our current needs.

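The workaround amounts to something like this (a self-contained sketch; the real parser applies it wherever tag names are stored or compared):

```c
#include <string.h>

/* Return the local part of a possibly-prefixed XML tag name,
 * e.g. "x:Glyphs" -> "Glyphs". */
static const char *
skip_namespace_prefix(const char *tag)
{
	const char *p = strchr(tag, ':');
	return p ? p + 1 : tag;
}
```
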
NoExport (and ReadOnly) fields shouldn't mark the document for saving.

After rushing to get the fix for a crash in, I realised the
routine could be simplified a bit.

This fixes three instances of warning C4706, allows compilation with
VS2013, and prevents an accidental va_end for when va_copy is defined
(which is the case for debug builds).

Michael spotted that double-closing an fz_stream on an inline image
does bad things. The simple fix is not to double-close.

It has no real reason to live in mudraw, and it does pull in the
javascript dependency via pdf-form.c.

Split functions out of pdf-form.c that shouldn't be there, and make
javascript initialization explicit.

Adds a simpler choice of JavaScript library to the makefiles.
We prefer, in order: MuJS, JavaScriptCore, V8, or none, based
on HAVE_MUJS, HAVE_JSCORE, and HAVE_V8.
For simplicity, we build mujstest even with no javascript implementation.

0.4 is not exactly representable using floats, and libjs uses a different
atod function than v8.

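A quick way to see the float issue (standard C, unrelated to either engine):

```c
#include <stdio.h>

int main(void)
{
	printf("%.9g\n", 0.4f);  /* 0.400000006: nearest float to 0.4 */
	printf("%.17g\n", 0.4);  /* 0.40000000000000002: nearest double */
	return 0;
}
```
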
New routine to filter the content streams for pages, xobjects,
type3 charprocs, patterns etc. The filtered streams are guaranteed
to be properly matched with q/Q's, and not to have changed the top-level
ctm. Additionally, we remove (some) repeated settings of
colors etc. This filtering can be extended to be smarter later.
The idea of this is both to repair after editing, and to leave the
streams in a form that can easily be appended to.
This is preparatory to work on Bates numbering and watermarking.
Currently the streams produced are uncompressed.

When you use mutool clean to subset pages out of a PDF, we already
remove the Name tree entries for named locations that aren't in the
target file. We have hitherto failed to remove references to these
removed names, though. This can cause errors (really warnings) on
reading the file back.

Stupid MSVC has no strtof.

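A common workaround, sketched (VS2013 and later do provide strtof):

```c
#include <stdlib.h>

#if defined(_MSC_VER) && _MSC_VER < 1800 /* pre-VS2013 */
static float
compat_strtof(const char *s, char **end)
{
	return (float)strtod(s, end); /* double round-trip suffices here */
}
#define strtof compat_strtof
#endif
```
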
%q escapes using C syntax and wraps the string in double quotes.
%( escapes using PS/PDF syntax and wraps the string in parens.

The primary motivation for this is so that we can print floating point
values and get the full accuracy out, without having to print 1.5 as
1.500000, and without getting 23e24 etc.
We only support %c, %f, %d, %o, %x and %s currently.
We only support the zero padding qualifier, for integers.
We do support some extensions:
%C turns values >=128 into UTF-8.
%M prints a fz_matrix.
%R prints a fz_rect.
%P prints a fz_point.
We also implement an fprintf variant on top of this to allow for
consistent results when using fz_output.

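Hypothetical usage, combining these conversions with the %q escape described above (the output shown in comments reflects the intended behaviour, not verified results):

```c
fz_printf(out, "%f\n", 1.5f);         /* "1.5", not "1.500000" */
fz_printf(out, "%04d\n", 42);         /* "0042": zero padding, integers only */
fz_printf(out, "%q\n", "say \"hi\""); /* C-style quoting and escaping */
fz_printf(out, "%M\n", &ctm);         /* the six numbers of an fz_matrix */
fz_printf(out, "%C", 0x20AC);         /* code point 8364 as UTF-8 (Euro) */
```
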
Previously pdf_process buffer did not understand inline images.
In order to make this work without needlessly duplicating complex code
from within pdf-op-run, the parsing of inline images has been moved to
happen in pdf-interpret.c. When the op_table entry for BI is called
it now expects the inline image to be in csi->img and the dictionary
object to be in csi->obj.
To make this work, we have had to improve the handling of inline images
in general. While non-inline images have been loaded and held in
memory in their compressed form and only decoded when required, until
now we have always loaded and decoded inline images immediately. This
has been due to the difficulty in knowing how many bytes of data to
read from the stream - we know the length of the stream once
uncompressed, but relating this to the compressed length is hard.
To cure this we introduce a new type of filter stream, a 'leecher'.
We insert a leecher stream before we build the filters required to
decode the image. We then read and discard the appropriate number
of uncompressed bytes from the filters. This pulls the compressed
data through the leecher stream, which stores it in an fz_buffer.
Thus images are now always held in their compressed forms in memory.
The pdf-op-run implementation is now trivial. The only real complexity
in the pdf-op-buffer implementation is the need to ensure that the
/Filter entry in the dictionary object matches the exact point at
which we backstopped the decompression.

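The essence of the leecher filter, sketched (stream internals simplified; fz_write_buffer approximates the buffer-append call of this era):

```c
/* A pass-through filter: every byte served upward is also appended
 * to an fz_buffer, so the compressed form is retained even while
 * the decode chain built on top of it consumes the data. */
typedef struct
{
	fz_stream *chain;  /* underlying compressed stream */
	fz_buffer *buffer; /* accumulates all bytes read so far */
} leech_state;

static int
next_leech(fz_stream *stm, int max)
{
	leech_state *state = stm->state;
	int n = fz_available(state->chain, max);

	if (n == 0)
		return EOF;
	fz_write_buffer(stm->ctx, state->buffer, state->chain->rp, n);
	stm->rp = state->chain->rp;      /* expose the same window upward */
	stm->wp = state->chain->rp + n;
	state->chain->rp += n;
	return *stm->rp++;
}
```
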
Currently fz_streams have a 4K buffer within their header. The call
to read from a stream fills this buffer, resulting in more data being
pulled from any underlying stream than we might like. This causes
problems with the forthcoming 'leech' filter.
Here we simplify the fields available in the public stream header.
No specific buffer is given; simply the read and write pointers.
The underlying 'read' function is replaced by a 'next' function
that makes the next block of data available and returns the first
character of it (or EOF).
A caller to the 'next' function should supply the maximum number of
bytes that it knows it will need (possibly not now, but eventually).
This enables the underlying stream to efficiently decode just enough.
The underlying stream is free to return fewer, or a greater number
if it wants to.
The exact size of the 'block' of data returned will depend on the
filter in use and (possibly) the data therein.
Callers can get the currently available amount of data by calling
fz_available (but again should pass the maximum amount of data they know
they will need). The only time this will ever return 0 is if we have
hit EOF.

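The simplified header, roughly (field names follow the description above; details differ from the real struct):

```c
typedef struct fz_stream_s fz_stream;

struct fz_stream_s
{
	unsigned char *rp, *wp;  /* current window of available data */
	int (*next)(fz_stream *stm, int max); /* refill the window; return
	                            the first byte of the new block, or EOF */
	void *state;
};

/* Reading a byte: use the window if possible, else refill.
 * The '1' is the maximum this particular caller will ever need. */
static int
stream_read_byte(fz_stream *stm)
{
	if (stm->rp != stm->wp)
		return *stm->rp++;
	return stm->next(stm, 1);
}
```
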
Gridfitting can increase the required width/height of images by up to
2 pixels. This makes images that are rendered very small very
sensitive to over-quantisation.
This can produce 'mushier' images than it should, for instance on
tests/Ghent_V3.0/090_Font-Support_x3.pdf (pgmraw, 72dpi).

This avoids leaks when pdf_clear_xref etc are used.

Currently, when parsing, each time we encounter a name, we throw away
the last name we had. BDC operators are called with:
/Name <object> BDC
If the <object> is a name, we lose the original /Name.
To fix this, parsing a name when we already have a name will cause
the new name to be stored as an object.
This has various knock-on effects throughout the code, which must now
read from csi->obj rather than csi->name.
Also, ensure that when cleaning, we collect a list of the object
names in our new resources dictionary.

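In the operator parser, the fix looks roughly like this (csi fields as described above; the pdf_new_name call is indicative rather than exact):

```c
case PDF_TOK_NAME:
	if (csi->name[0])
	{
		/* We already hold a name: this one is the operand object. */
		pdf_drop_obj(csi->obj);
		csi->obj = pdf_new_name(buf);
	}
	else
		fz_strlcpy(csi->name, buf, sizeof(csi->name));
	break;
```
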
When inserting a new value into a dictionary, if replacing an existing
entry, ensure we keep the new value before dropping the old one.
This is important in the case where (for example) the existing value
is "[ object ]" and the new value is "object". If we drop the array
and that loses the only reference to object, we can find that we have
lost the value we are adding.

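The safe ordering, sketched against a typical dictionary layout (field names are illustrative):

```c
/* Replacing the value at slot i: take our reference to the new
 * value BEFORE dropping the old one, in case the old value (say,
 * an array) held the only reference keeping the new value alive. */
pdf_obj *old = dict->items[i].v;
dict->items[i].v = pdf_keep_obj(val);
pdf_drop_obj(old);
```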