Age | Commit message | Author |
|
|
pdf_parse_file_spec sometimes extracts the wrong path from a FileSpec:
For example, the /DOS path should never be returned under Unix systems,
nor should the old /Mac paths.
For consistency, this patch also converts filesystem paths under
Windows into a format applications will expect (e.g. from "/C/path/..."
to "C:\path\...").
Finally, pdf_parse_file_spec is exposed to callers (SumatraPDF requires
that for manually processing FZ_ANNOT_FILEATTACHMENT and embedded files).
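A minimal sketch of the Windows-side conversion described above (the helper name and exact checks are assumptions for illustration, not the actual MuPDF code):

    #include <ctype.h>
    #include <string.h>

    /* Turn a FileSpec-style path such as "/C/path/to/file" into the
       form Windows applications expect, "C:\path\to\file". */
    static void filespec_to_win32_path(char *path)
    {
        size_t i, n = strlen(path);

        if (n >= 2 && path[0] == '/' && isalpha((unsigned char)path[1]) &&
            (n == 2 || path[2] == '/'))
        {
            path[0] = (char)toupper((unsigned char)path[1]); /* drive letter */
            path[1] = ':';                                   /* "/C" -> "C:" */
        }
        for (i = 0; i < n; i++)
            if (path[i] == '/')
                path[i] = '\\';                              /* forward to back slashes */
    }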
|
|
When watermarking, we may want to use the PDF device on an
existing buffer. In this case, we have no 'contents' object.
|
|
In order to be able to watermark etc, we want the ability to
add more operators/resources after page cleaning.
Add a post processing hook to enable this to be done more
easily.
|
|
Add missing initialisation of glo.ctx required due to API
change.
|
|
A few casts are required within the code, along with a few #ifdef
changes.
Some tweaks to curl are required too.
|
|
Silly oversight.
|
|
We were not filling in the matrix and bbox fields for images
collected as part of the text extraction device. Fixed here.
|
|
When we meet cached tiles when rendering the display list, we need to
skip over their contents. Previously we did this by skipping
display list nodes in their entirety.
With the new display list scheme, however, we cannot simply skip
nodes completely, as the graphics state changes must be remembered.
We therefore update the list playback routine to keep track of the
clip depth and to skip the function calls as required.
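A rough sketch of the playback change (the node and helper names below are hypothetical, not the real fz_run_display_list internals): while the contents of a cached tile are being skipped, every node is still decoded so the graphics state stays in sync, but the device calls are suppressed and the clip/tile nesting is counted so we know where the skipped region ends.

    int tile_skip_depth = 0;  /* > 0 while inside a cached tile's content */

    for (node = first_node(list); node; node = next_node(node))
    {
        apply_state_changes(&gstate, node);   /* always track state deltas */

        if (tile_skip_depth > 0)
        {
            if (begins_clip_or_tile(node))
                tile_skip_depth++;
            else if (ends_clip_or_tile(node))
                tile_skip_depth--;
            continue;                         /* no device call while skipping */
        }

        if (is_begin_tile(node) && tile_is_cached(node))
        {
            tile_skip_depth = 1;              /* skip until the matching end tile */
            continue;
        }

        call_device(dev, node, &gstate);
    }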
|
|
I was testing an untransformed rectangle. This was not being picked up
as our cluster tests use the identity matrix.
|
|
Add the new source files to the solution.
Windows builds whinge about float->double conversions. Fix these
with explicit casts.
Avoid calling strtof and strcasecmp.
|
|
The example file for this bug has an invalid font bbox. The current
code uses this bbox (or some multiple of it) to clip the glyphs'
size.
In the new code, when we convert the glyphs to display lists we
watch for the bbox given in any d1 operator used. If we find one,
we gather the rectangle specified and store it as the glyph rectangle
in the fz_font.
If we then attempt to bound a glyph that used d1, it happens instantly
without needing to run the list. This seems to match Acrobat's behaviour.
Tests indicate that Acrobat never clips d0 glyphs, so our behaviour
is still different here, but I am not changing this at the moment.
Also, I note that t3flags should be an unsigned short but is currently
just a char. Fix that too.
Also fix some missing code in fz_new_font that would cause leaks if mallocs
failed.
|
|
Move the logic in pdf_show_char to use the same idiom as used
elsewhere. Specifically this ensures that empty rects are
handled correctly.
|
|
This avoids problems with a forthcoming commit.
|
|
Conflicts:
Makefile
|
|
Whenever we fz_keep a path, it's an indication that we're going to be
keeping the path around for a while (and not changing it any more).
We therefore take the opportunity to trim the path buffers down.
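A sketch of what such trimming might look like (the field and helper names below are assumptions for illustration, not the exact fz_path internals):

    static void trim_path(fz_context *ctx, fz_path *path)
    {
        /* A shared path will not grow again, so give back the slack
           left over from the exponential growth of its buffers. */
        if (path->cmd_cap > path->cmd_len)
        {
            path->cmds = fz_resize_array(ctx, path->cmds, path->cmd_len, sizeof(unsigned char));
            path->cmd_cap = path->cmd_len;
        }
        if (path->coord_cap > path->coord_len)
        {
            path->coords = fz_resize_array(ctx, path->coords, path->coord_len, sizeof(float));
            path->coord_cap = path->coord_len;
        }
    }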
|
|
Rather than a linked list of display nodes, we use a solid block of
serialised data. We send a 32-bit word, which contains various bitfields.
These bitfields indicate the command type, and the presence or absence
of various fields (such as paths, colorspaces, colors etc).
If these fields are not present, they are held to be the same as the
previous values.
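Purely for illustration, the kind of packed header word this describes might look like the following (the field names and bit widths are assumptions, not the actual fz_display_node layout):

    typedef struct
    {
        unsigned int cmd    : 5;   /* command type (fill path, clip, image, ...) */
        unsigned int size   : 9;   /* total size of this node in 32-bit words    */
        unsigned int rect   : 1;   /* a bounding rect follows                    */
        unsigned int path   : 1;   /* path data follows                          */
        unsigned int cs     : 1;   /* a colorspace follows                       */
        unsigned int color  : 1;   /* colour values follow                       */
        unsigned int alpha  : 1;   /* an alpha value follows                     */
        unsigned int ctm    : 2;   /* how much of the matrix follows             */
        unsigned int stroke : 1;   /* a stroke state follows                     */
        unsigned int flags  : 10;  /* command specific flags                     */
    } node_header;                 /* packs into a single 32-bit word */

Any field whose presence bit is clear is taken to be unchanged from the previous node.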
|
|
Thanks to malc for pointing out the problem.
|
|
Add locks around fz_path and fz_text reference counting.
|
|
fz_open_file does not return NULL on failure -- it throws an exception!
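So a caller has to use the exception machinery rather than a NULL check; roughly (treating the exact signatures of this era as an assumption):

    fz_stream *stm = NULL;

    fz_var(stm);   /* stm is assigned inside fz_try and used afterwards */
    fz_try(ctx)
    {
        stm = fz_open_file(ctx, filename);
        /* ... use the stream ... */
    }
    fz_always(ctx)
    {
        fz_drop_stream(ctx, stm);
    }
    fz_catch(ctx)
    {
        fz_warn(ctx, "cannot open %s", filename);
    }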
|
|
Purge several embedded contexts:
Remove embedded context in fz_output.
Remove embedded context in fz_stream.
Remove embedded context in fz_device.
Remove fz_rebind_stream (since it is no longer necessary).
Remove embedded context in svg_device.
Remove embedded context in XML parser.
Add ctx argument to fz_document functions.
Remove embedded context in fz_document.
Remove embedded context in pdf_document.
Remove embedded context in pdf_obj.
Make fz_page independent of fz_document in the interface.
We shouldn't need to pass the document to all functions handling a page.
If a page is tied to the source document, it's redundant; otherwise it's
just pointless.
Fix reference counting oddity in fz_new_image_from_pixmap.
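For illustration, a typical call sequence after this change passes the context explicitly everywhere and no longer needs the document in order to drop a page (function names as in the reworked API; treat the exact signatures as assumptions):

    fz_document *doc = fz_open_document(ctx, "input.pdf");
    fz_page *page = fz_load_page(ctx, doc, 0);

    /* ... run the page through a device ... */

    fz_drop_page(ctx, page);        /* no fz_document argument needed */
    fz_drop_document(ctx, doc);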
|
|
Rename fz_close to fz_drop_stream.
Rename fz_close_archive to fz_drop_archive.
Rename fz_close_output to fz_drop_output.
Rename fz_free_* to fz_drop_*.
Rename pdf_free_* to pdf_drop_*.
Rename xps_free_* to xps_drop_*.
|
|
Disallow modification of shared fz_path and fz_text objects.
They should follow a create once, consume often pattern, and as such should
be immutable once created.
|
|
We end up trying to scale the JPEG up 72 times and fail a malloc.
A better plan is to make the image handler disbelieve any xres or
yres values less than 72dpi. We take care to still preserve aspect
ratios etc.
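A sketch of the intended clamping (the variable handling is illustrative, not the exact image loading code): both axes are scaled by the same factor, so the aspect ratio survives.

    if (xres <= 0 || yres <= 0)
    {
        xres = 72;
        yres = 72;
    }
    else if (xres < 72 || yres < 72)
    {
        if (xres <= yres)
        {
            yres = yres * 72 / xres;   /* keep yres/xres constant */
            xres = 72;
        }
        else
        {
            xres = xres * 72 / yres;
            yres = 72;
        }
    }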
|
|
Add the new URW base fonts that include Greek and Cyrillic scripts.
These new fonts remove the need for DroidSans as a generic fallback font.
|
|
If a pattern is expected to be rendered exactly once and its relevant
part covers the target area, the xstep and ystep values may be far
larger than the pattern's relevant content. Due to rounding applied in
pdf_show_pattern, such patterns have been omitted so far. This issue is
exposed e.g. by the document linked from
http://forums.fofou.org/sumatrapdf/topic?id=3184639 .
|
|
At https://github.com/sumatrapdfreader/sumatrapdf/issues/66 there's a
document which contains a string (\358) which is parsed as (\360) with
the 8 overflowing instead of as (\0358) with the 8 being the first
character after the octal escape. This patch restricts octal digits to
'0' to '7' to fix that issue.
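The fix amounts to accepting only '0'..'7' while extending an octal escape; a sketch of that reading logic (next_char/peek_char/emit_char are hypothetical helpers, not the real lexer code):

    if (c == '\\')
    {
        c = next_char();
        if (c >= '0' && c <= '7')
        {
            int oct = c - '0';
            if (peek_char() >= '0' && peek_char() <= '7')
            {
                oct = oct * 8 + (next_char() - '0');
                if (peek_char() >= '0' && peek_char() <= '7')
                    oct = oct * 8 + (next_char() - '0');
            }
            emit_char(oct);      /* "\0358" -> character 035 (octal), then '8' */
        }
        /* ... handle the other escapes ... */
    }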
|
|
When loading e.g. the file from bug 694567, MuPDF uses an uninitialized
variable because pdf_document::xref_index contains values relative to
the document's original multi-part xref while the actual xref is the
repaired single-part one (and thus the cached value is too large).
Properly resetting the xref_index before starting the repair fixes this
crash.
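A sketch of the reset (the field used for the table length is an assumption):

    /* Forget the stale per-object hints before rebuilding the xref, so no
       lookup can index past the end of the repaired single-part table. */
    memset(doc->xref_index, 0, doc->max_xref_len * sizeof(int));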
|
|
pdf_xref_find_subsection does indeed solidify the wrong xref section:
it should operate only on the oldest xref and not overwrite the most
recent one with older entries.
|
|
pdfextract_main by default iterates through all objects from number 0
to the size of the document's xref table. Object number 0 is however
always supposed to be free, so pdfextract consistently fails and shows
a slightly confusing warning. Object extraction should by default start
at object 1 in order to prevent this warning.
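That is, roughly (the accessor and helper names here are illustrative):

    int o, len = pdf_count_objects(doc);   /* assumed accessor for the xref size */

    for (o = 1; o < len; o++)              /* object 0 is always free, so skip it */
        extract_object(o);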
|
|
Add a new index that quickly maps object number to the first
xref in which an object appears. This appears to get us the
speed back that we lost when moving to sparse xrefs.
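A hypothetical sketch of how such an index speeds up lookup (not the actual pdf_get_xref_entry; the section-scanning helper is assumed):

    static pdf_xref_entry *lookup_entry(pdf_document *doc, int num)
    {
        int i = doc->xref_index[num];   /* first section known to mention num */

        for (; i < doc->num_xref_sections; i++)
        {
            pdf_xref_entry *entry = entry_in_section(doc, i, num);
            if (entry && entry->type)
                return entry;
        }
        return NULL;
    }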
|
|
Following the recent change to hold pdf xrefs in their native 'sparse'
representation, searching the xref takes longer.
Malc has investigated this slowdown and found that it can be largely
avoided by not searching the xref lists first. A modified version of
his first patch has gone in already (getting us from 10x slower to
just 5x slower).
This commit is a modified version of a second patch from him. Again
it works by avoiding searching the xref list twice. The original
version of this patch 1) appears broken to me, as it could return the
wrong xref entry when object streams have more than one object in them,
and 2) supposedly gets the speed back to the original 'pre-sparse change'
speed.
I have updated the patch to fix 1), and I hope this should not affect 2).
I am slightly suspicious that removing a search can get us a 5x speed
increase, but certainly this is an improvement.
There is scope for further reducing the search times by using a new
table to map object number -> xref number, but unless we find a case
where we are noticeably slower than before, I think we can ignore this.
|
|
We know i >= 0 as we've already thrown if i < 0 earlier.
Credit to Malc for spotting this.
|
|
The recent change to holding pdf xrefs in a sparse format has resulted
in a significant decrease in speed (x10). Malc points out that some of
this (2x) can be recovered simply by making pdf_cache_object return the
entry which it found the object in.
This saves us having to immediately call pdf_get_xref_entry again
afterwards.
I am still thinking about ways to try and get the remaining time back.
|
|
C89 code is preferable to gcc-specific code; define variables at the
start of blocks.
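For example:

    /* preferred (C89): all declarations at the start of the block */
    int sum_squares(const int *v, int n)
    {
        int i, sum = 0;

        for (i = 0; i < n; i++)
            sum += v[i] * v[i];
        return sum;
    }

    /* avoided (gcc/C99): "for (int i = 0; ...)" and declarations that
       appear after the first statement of a block */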
|
|
Ghostscript's LZW decoder accepts invalid LZW code 4096 if it's
immediately followed by LZW clear code 256 for handling files produced
by certain broken encoders and other common PDF readers seem to have
similar error handling. This patch makes MuPDF tolerate such broken
files as well.
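A sketch of that tolerance in the decode loop (read_code/peek_code are hypothetical helpers, not the actual lzwd.c):

    code = read_code(lzw);
    if (code == 4096)
    {
        if (peek_code(lzw) != 256)
            fz_throw(ctx, FZ_ERROR_GENERIC, "invalid LZW code");
        fz_warn(ctx, "tolerating invalid LZW code followed by clear code");
        code = read_code(lzw);   /* consume the clear code and resynchronise */
    }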
|