Don't require the page to be run once with an identity transform
before the links can be loaded.
For use in a later commit that refactors link parsing.
Respect default widths when creating the glyph width table.
Add NaCl cross-compile rules to Makerules (together with a tiny
header tweak). Thanks to Robert Bamler for the rules to include.
The current code never looks for /Root objects in dictionaries
as it parses them. This means that 'New style' files end up
without any Roots after repair.
The new code therefore updates pdf_repair_obj to look for Root
objects in the same way it looks for encrypt and id objects.
These go into the list of found roots.
The Root object almost certainly has indirections within it, so
it is vital that the 'doc' pointer gets set. This means we have
to make a slight adjustment to pdf_repair_obj so that the dict
is parsed with a doc pointer. In turn this means we need to
manually ensure that none of the other information read from
the dict during the repair operation will cause indirections
to be resolved. This is achieved by checking for
!pdf_is_indirect at various points.
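A minimal sketch of that guard pattern (pdf_dict_gets, pdf_is_indirect
and pdf_to_int are real MuPDF calls of this era; the surrounding
fragment is illustrative, not the actual repair code):

    /* Inside the repair loop: only trust a value if it is a direct
       object, since resolving an indirect reference here would
       recurse into the very xref we are trying to rebuild. */
    pdf_obj *obj = pdf_dict_gets(dict, "Length");
    if (obj && !pdf_is_indirect(obj))
        stm_len = pdf_to_int(obj);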
Tor turned up an interesting section in the C spec about this. See
page 275 of http://open-std.org/jtc1/sc22/wg14/www/docs/n1494.pdf
regarding acceptable places for setjmp to occur.
It seems that:
    if (setjmp(buf))
    if (!setjmp(buf))
    if (setjmp(buf) {==,!=,<,>} <integer constant>)
etc. are all valid things to do, but assignments (and subsequent
testing of values) like:
    if ((code = setjmp(buf)) == 0)
are not allowed.
Further, it's not even clear that:
    if (a() && setjmp(buf))
is permissible.
We therefore recast the macros into the form:
    a();
    if (setjmp((buf)) == 0)
which should be acceptable under the C spec.
To keep try atomic, we introduce a block '{{{' around this, along
with a matching close block '}}}' in the catch clause. This has the
nifty extra effect of giving us a compile-time error if we mismatch
our try/catches.
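A simplified sketch of the resulting macro shape (the helper names are
hypothetical; these are not the actual MuPDF definitions):

    #include <setjmp.h>

    /* Hypothetical helpers: push_try_buffer pushes a fresh jmp_buf on
       the context's exception stack, top_buffer returns a pointer to
       it, and pop_caught pops it, reporting whether an error was
       thrown. */
    #define fz_try(ctx) \
        {{{ \
        push_try_buffer(ctx); \
        if (setjmp(*top_buffer(ctx)) == 0) \
        {
    #define fz_catch(ctx) \
        } \
        }}} \
        if (pop_caught(ctx))

An fz_try with no matching fz_catch leaves the '{{{' unbalanced, so
the file fails to compile.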
Modern versions of gcc have an optimiser bug that can cause
values not to be written back to memory when they should be. We
work around this by using an inline function to force the
compiler to behave.
Many thanks to Marcos Woehrmann for doing the analysis that
led to this workaround.
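One common shape for this kind of workaround, as a generic sketch
(this names the general technique, not necessarily the actual MuPDF
change):

    /* An empty asm with a "memory" clobber acts as a compiler-level
       barrier: the optimiser must assume *ptr may be read or written
       here, so pending register values are flushed to memory first. */
    static inline void force_to_memory(void *ptr)
    {
        __asm__ __volatile__("" : : "r"(ptr) : "memory");
    }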
The default profile cases (sRGB and SWOP CMYK) are indicated by
empty strings for those entries.
This fixes bug #696123 by allowing multiple signatures each to be written
to the document in a separate incremental update.
Add a num_incremental_sections count to keep track of the number of
incremental sections.
Add xref_base, which can be set between 0 and num_incremental_sections
inclusive to access different versions of the document.
Add a disallow_new_increments flag that stops new incremental sections
being provoked by the creation of an xref stream.
Move the unsaved_sigs list from the document structure to the xref
structure. With this commit in place, the lists will never grow beyond
length one, but we've kept the list structure in case other cases
need supporting in the future.
Add an end offset field to the xref structure, so that, during completion
of signatures, the lengths of the various incremental versions of
the document are available.
Factor out functions for storing unsaved signatures and for checking
whether an object is an unsaved signature.
Deep-copy objects for which several versions must be held.
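A minimal sketch of how a caller might step through those versions
(the field names follow the text above; the surrounding code is
illustrative, and it assumes both fields live on the document struct
with xref_base == 0 selecting the newest view):

    /* Illustrative only: visit each incremental version in turn. */
    int base;
    for (base = 0; base <= doc->num_incremental_sections; base++)
    {
        doc->xref_base = base;  /* select a version of the document */
        /* ... examine objects as they appeared in this version ... */
    }
    doc->xref_base = 0;  /* restore the newest view */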
This is work towards supporting several levels of incremental xref,
which, in turn, is work towards bug #696123. When several levels of
incremental xref are present, there can be objects that appear at
multiple levels and differ between those levels. This deep-copy function
will be used to create new copies before the new version is altered.
This is work towards bug #696123. It does not fix the bug because, in fact,
saving multiple signatures in one go is not permitted (they need to use
several incremental saves), but we may as well have the order correctly
held.
Use that within gproof. The existing use of fz_read_line was broken,
resulting in bad values for separations.
Get separation information out to the Java level.
Use an endianness-independent method of reading, instead of byte swapping.
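As a self-contained sketch of the technique (illustrative, not the
actual MuPDF helper):

    #include <stdint.h>

    /* Assemble the value byte by byte: this reads a little-endian
       32-bit integer correctly on any host, big- or little-endian,
       with no byte swapping required. */
    static int32_t read_int32le(const unsigned char *p)
    {
        return (int32_t)((uint32_t)p[0] |
                         ((uint32_t)p[1] << 8) |
                         ((uint32_t)p[2] << 16) |
                         ((uint32_t)p[3] << 24));
    }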
By default in MuPDF, when we render an axis-aligned image, we
'gridfit' it. This is a heuristic used to improve the rendering
of tiled images, and to avoid the background showing through on the
antialiased edges.
The general algorithm we use is to expand any image outwards so that
it completely covers every pixel that it touches any part of. This is
'safe' in that we never leave uncovered a pixel that ought to be
covered, and is important when we have images that are aligned with
(say) line art rectangles.
For gproof files, though, this gives nasty results: because we have
multiple images tiled across the page, all exactly abutting, in most
cases the edges will not be on exact integer coordinates. This means
we expand both images, and one (destination) pixel is lost. This
severely hurts the rendering (in particular on text-based pages).
We therefore introduce a new type of grid fitting, where we simply
align the edges of images to the closest integer pixel. This is safe
because we know that neighbouring images will be adjusted identically
and edges will stay coincident.
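The two strategies differ only in the rounding applied to each edge;
a self-contained sketch of the arithmetic (illustrative, not the
actual code):

    #include <math.h>

    /* Classic gridfitting: expand outwards so that every pixel the
       image touches at all ends up fully covered. */
    static void gridfit_classic(float *x0, float *x1)
    {
        *x0 = floorf(*x0);
        *x1 = ceilf(*x1);
    }

    /* Gridfit-as-tiled: snap each edge to the nearest pixel boundary.
       Neighbouring images sharing an edge snap it to the same place,
       so they stay exactly abutting. */
    static void gridfit_as_tiled(float *x0, float *x1)
    {
        *x0 = floorf(*x0 + 0.5f);
        *x1 = floorf(*x1 + 0.5f);
    }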
We enable/disable this behaviour through a new device flag, and make
the gproof interpreter set/clear this flag when generating the page;
normal rendering is thus unaffected.
We *could* have just poked the dev->flags fields directly, but that
would require magic in the display list device to check for them
being set/unset and to poke the dev->flags fields on playback, so
instead we introduce a new fz_render_flags function (that calls a
device function) to set/unset flags.
The other attraction of this is that if we ever have devices that
'filter', we can neatly handle passing flag changes on with those.
Currently the display list implementation only copes with set/clear
of the FZ_DEVFLAG_GRIDFIT_AS_TILED option. We only readily have 6
bits available to us, so we'll just extend this as required if we
add new render flags.
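Hypothetical usage, following the description above (the exact
parameters of fz_render_flags may differ):

    /* Render images gridfitted as tiles for this page only. */
    fz_render_flags(dev, FZ_DEVFLAG_GRIDFIT_AS_TILED, 0);  /* set */
    /* ... run the page through dev ... */
    fz_render_flags(dev, 0, FZ_DEVFLAG_GRIDFIT_AS_TILED);  /* clear */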
Given a document, generate a gproof file from it. This encapsulates
the name of the file, the desired resolution for proofing, and the
page dimensions of all the pages in the file.
The idea is that an app will call this when it is asked to go into
'proofing' mode, and will reinvoke itself on this file. This gives
the gprf document handler just enough information to fake up a
document of n pages of the required sizes. Each page will then be
autogenerated on demand.
Doesn't actually trigger generation from Ghostscript, or load
images from files generated by Ghostscript, yet.
In Android, we can't write to '.', and we don't have
TMP defined. Therefore use the path of the supplied file as
a hint.
Hopefully this clarifies the intent.
This way an app can query the separations on a page, turn them on/off
etc.
Simple set of functions for managing sets of separations. Separations
have names, equivalent rgb/cmyk colors, and can be enabled/disabled.
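A hypothetical shape for such an API (illustrative declarations only;
the actual MuPDF header differs):

    typedef struct fz_separations fz_separations;  /* opaque set */

    /* Hypothetical accessors: each separation has a name, equivalent
       RGB and CMYK colors, and an enabled/disabled state. */
    int seps_count(const fz_separations *seps);
    const char *seps_name(const fz_separations *seps, int i);
    void seps_set_enabled(fz_separations *seps, int i, int enabled);
    int seps_is_enabled(const fz_separations *seps, int i);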
Ensure that subsampling and caching happen in the generic image
code, not in the specific image implementations.
Previously, subsampling happened only for images that were
decoded from streams. Images that were loaded directly were never
subsampled and hence were always cached at full size. After this
change both classes of image are correctly subsampled, and
the subsampled version is kept in the cache.
This produces various image diffs in the cluster, none of which
are noticeable to the naked eye.
Previously, we had people calling image->get_pixmap directly. Now we
have them all call fz_image_get_pixmap, which will look for a cached
version in the store, and only call get_pixmap if required.
Previously, fz_image_get_pixmap itself looked for the cached version
in the store, and decoded if it was absent; hence the decoding code
is now extracted out into standard_image_get_pixmap.
This was the original intent of the code; it just somehow didn't end
up like that.
This nicely sets us up for having fz_images that use a different
get_pixmap implementation, such as the one that will be required
for the gprf code.
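A self-contained toy showing the dispatch pattern (all names here are
illustrative, not MuPDF's):

    #include <stddef.h>

    typedef struct toy_image toy_image;
    struct toy_image
    {
        void *(*get_pixmap)(toy_image *img);  /* type-specific decoder */
        void *cached;                         /* stands in for the store */
    };

    /* All callers come through here; the decoder runs only on a miss. */
    static void *toy_image_get_pixmap(toy_image *img)
    {
        if (img->cached)
            return img->cached;
        img->cached = img->get_pixmap(img);
        return img->cached;
    }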
This will be required for the gprf work.
fz_read_int16le, fz_read_int32le, fz_read_int64le.
Previously, only the Unix executable had been updated to take
command line flags; update the Windows one in line with it.
We have to cope with argv being in Unicode; add a
Windows-specific version (getoptw) for this.
Also note that fprintfs in the Windows mupdf exe won't work,
as GUI apps don't have a console window and can't write to the
parent one. Fixing that is a larger project than I have time
for right now.
We were allocating the ofs array as ints and then filling it
with fz_off_t's.
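The shape of the bug, illustratively (not the actual call site;
fz_malloc_array is the real allocator):

    /* Wrong: allocated for ints, then filled with fz_off_t values,
       overrunning the buffer when fz_off_t is 64-bit: */
    ofs = fz_malloc_array(ctx, len, sizeof(int));
    /* Fixed: */
    ofs = fz_malloc_array(ctx, len, sizeof(fz_off_t));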
I'd missed converting some ints to fz_off_t's.
Add a -U option to mupdf and mudraw to set a user stylesheet.
The stylesheet is stored on the context, just like the AA level.
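A hypothetical invocation (file names invented):

    mudraw -U user.css -o out.png input.html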
Add 'break' nodes to flow list for forced line breaks.
Firstly, when displaying a list of nested blocks, don't suppress
outputting a block just because it contains a pointer to itself.
Various valgrind fixes from the gs version of memento.
Experimental C++ operators. See writeup in memento.h comments for
how to integrate.
If FZ_LARGEFILE is defined when building, MuPDF uses 64-bit offsets
for files; this allows us to open streams larger than 2Gig.
The downsides to this are that:
* The xref entries are larger.
* All PDF ints are held as 64-bit values rather than 32-bit ones
(to cope with /Prev entries, hint stream offsets etc).
* All file positions are stored as 64 bits rather than 32.
The implementation works by detecting FZ_LARGEFILE. Some #ifdeffery
in fitz/system.h sets fz_off_t to either int or int64_t as appropriate,
and sets defines for fz_fopen, fz_fseek, fz_ftell etc. as required.
These call the fseeko64 etc. functions on Linux (and so define
_LARGEFILE64_SOURCE) and the explicit 64-bit functions on Windows.
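An illustrative sketch of that #ifdeffery (the real fitz/system.h
differs in detail):

    #include <stdint.h>
    #include <stdio.h>

    #ifdef FZ_LARGEFILE
    typedef int64_t fz_off_t;
    #define fz_fseek fseeko64  /* needs _LARGEFILE64_SOURCE before stdio.h on Linux */
    #define fz_ftell ftello64
    #else
    typedef int fz_off_t;
    #define fz_fseek fseek
    #define fz_ftell ftell
    #endif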