|
|
Previously we had a special-case hack for MacOS. Now
we call sigsetjmp/siglongjmp on all platforms that define __unix
(i.e. pretty much all of them except Windows).
|
|
When we allocate a pixmap > 2G, but < 4G, the index into that
pixmap, when calculated as an int, can be negative. Fix this with
various casts to unsigned int.
If we ever move to support > 4G images we'll need to rejig the
casting to cast each part of the index calculation to ptrdiff_t first.
|
|
The file supplied with the bug contains corrupt jpeg data on page
61. This causes an error to be thrown which results in mudraw
exiting.
Previously, when image decode was done at loading time, the error
would have been thrown under the pdf interpreter rather than under
the display list renderer. This error would have been caught, a
warning given, and the program would have continued. This is not
ideal behaviour, as there is no way for a caller to know that there
was a problem, and that the image is potentially incomplete.
The solution adopted here solves both these problems. The fz_cookie
structure is expanded to include an 'errors' count. Whenever we meet
an error during rendering, we increment the 'errors' count and
continue.
This enables applications to spot the errors count being non-zero on
exit and to display a warning.
mupdf is updated here to pass a cookie in and to check the error count
at the end; if it is found to be non-zero, then a warning is given (just
once per visit to each page) to say that the page may have errors on it.
|
|
When handling knockout groups, we have to copy the background from the
previous group in so we can 'knockout' properly. If the previous group
is a different colorspace, this gives us problems!
The fix, implemented here, is to update the copy_pixmap_rect function
to know how to copy between pixmaps of different depth.
Gray <-> RGB are the ones we really care about; the generic code will
probably do a horrible job, but shouldn't ever be called at present.
This suffices to stop the crashing - we will probably revisit this
when we revise the blending support.
|
|
Extend mupdfclean with a new -l flag that writes the file
linearised. This should still be considered experimental.
When writing a pdf file, analyse object use, flatten resource use,
reorder the objects, generate a hintstream and output with linearisation
parameters.
This is enough for Acrobat to accept the file as being optimised
for 'Fast Web View'. We ought to add more tables to the hintstream
in some cases, but I doubt anyone actually uses them; the spec is so
badly written.
Update fz_dict_put to allow for us adding a reference to the dictionary
that is the sole owner of that reference already (i.e. don't drop then
keep something that has a reference count of just 1).
Update pdf_load_image_stream to use the stm_buf from the xref if there
is one.
Update pdf_close_document to discard any stm_bufs it may be holding.
Update fz_dict_put to be pdf_dict_put - this was missed in a renaming
ages ago and has been inconsistent since.
|
|
When including fitz.h from C++ files, we must not alter the definition
of inline, as it may upset code that follows it. We only alter the
definition to enable it if it's available, and it's always available
in C++ - so simply avoiding changing it in the C++ case gives us what
we want.
|
|
Previously, before interpreting a page's content stream we would
load it entirely into a buffer, then interpret that buffer. This
has a cost in memory use.
Here, we update the code to read from a stream on the fly.
This has required changes in various different parts of the code.
Firstly, we have removed all use of the FILE lock - as stream
reads can now safely be interrupted by resource (or object) reads
from elsewhere in the file, the file lock becomes a very hard
thing to maintain, and doesn't actually benefit us at all. The
choices were to either use a recursive lock, or to remove it
entirely; I opted for the latter.
The file lock enum value remains as a placeholder for future use in
extendable data streams.
Secondly, we add a new 'concat' filter that concatenates a series of
streams together into one, optionally putting whitespace between each
stream (as the pdf parser requires this).
Finally, we change page/xobject/pattern content streams to work
on the fly, but we leave type3 glyphs using buffers (as presumably
these will be run repeatedly).
|
|
In order to (hopefully) allow page content streams to be interpreted
without having to preload them all into memory before we run them, we
need to make the stream reading code cope with other users moving
the stream pointer.
For example: consider the case where we are midway through
interpreting a contents stream and we hit an operator that
requires something to be read from Resources. This will move the
underlying stream file pointer, and cause the contents stream to
read incorrectly when control returns to the interpreter.
The solution to this seems to be fairly simple; whenever we create
a filter out of the file stream, the existing code puts in a 'null'
filter first, to enforce a length limit on the stream. This null
filter already does most of the work we need it to, in that by it
being there, the buffering of data is done in the null filter rather
than in the underlying stream layer.
All we need to do is to keep track of where in the underlying stream
the null filter thinks it is, and ensure that it seeks there before
each read (in case anyone else has moved it).
We move the setting of the offset to be explicit in the pdf_open_filter
(and associated) call(s), rather than requiring fz_seeks elsewhere.
|
|
Expose pdf_write function through the document interface.
|
|
Use _wopen on a UTF-8 -> wchar_t decoded filename to support UTF-8
filenames on win32.
|
|
Previously, we would only open files with the correct extension.
Until such time as we get file type detection by contents working,
assume that any file that doesn't end in .xps or .cbz is a pdf
file.
|
|
Allows us to render files with broken font hinting programs when hinting
is enabled (whether by no-AA or DynaLab detection).
Fix bug 692949.
|
|
Comment changes only.
|
|
Use this to reintroduce "Document Properties..." in mupdf viewer.
|
|
Restricts rendering to a sub rectangle of the supplied bbox.
|
|
In my previous commit, I forgot to initialise the variable before
using it. Thanks to Bas Weelinck for spotting this.
|
|
Bas Weelinck points out a potential problem with multiple threads
starting up at the same time, running into a race condition in
the thread debugging code. He suggests using an extra lock to
avoid this, and indeed, it would be a simple way.
I am reluctant to introduce an extra lock purely for this case
though, so I've instead reused the ALLOC lock. This has the advantage
of us not having to take the lock except in the 'first call with a
new context' case.
|
|
When cloning, ensure the locks are done on the new context, not the
old one; this makes no difference except to suppress some spurious
debugging messages.
Also ensure that DEBUG is predefined for Makefile-based debug and memento
builds.
Thanks to Bas Weelinck.
|
|
Don't reset the size of arrays until we have successfully resized them.
|
|
Depending on the comparison used (< or <=), the threshold array
should never contain either 0 or 0xff respectively. As we are
using <, it should never contain 0. Fixed here.
|
|
The default page userspace transform changed to a top-down coordinate
space, and I forgot this detail when updating the text device branch.
Also remove the final block sorting pass to give preference to the original
PDF text order.
|
|
It seems that JPX images can be supplied in indexed format, with
both a palette internal to the jpx stream, and a palette in the
PDF. Googling seems to suggest that the internal palette should
be ignored in this case, and the external palette applied.
Fortunately, since OpenJPEG-1.5 there is a flag that can be used
to tell OpenJPEG not to decode palettes. We update the code here
to spot that there is an external palette, and to set this flag.
|
|
Currently all conversions from rect to bbox are done using a single
function, fz_round_rect. This causes problems, as sometimes we want
'round, allowing for slight calculation errors' and sometimes we
want 'round slavishly to ensure we have a bbox that covers the rect'.
We therefore split these 2 cases into 2 separate functions;
fz_round_rect is kept, meaning "round outwards allowing for slight
errors", and fz_bbox_covering_rect is added to mean "give us the
smallest bbox that is guaranteed to cover rect".
No regressions seen.
|
|
When coping with missing transparency entries, fill with 255,
not 0. Simplify code slightly so we fill completely, not just
to depth.
|
|
If entries are larger than they need to be, accept just the amount
we need. If not large enough, pad out with zeros.
|
|
Move fz_stroke_state from being a simple structure whose contents
are copied repeatedly to being a dynamically allocated reference
counted object so we can cope with large numbers of entries in
the dash array.
|
|
From SumatraMuPDF.patch - Many thanks.
|
|
Taken from SumatraPDF.patch - Many thanks.
|
|
Thanks to SumatraPDF for the patch.
|
|
|
last character was across style changes.
|
|
When we have finished replacing tiff->samples, free the old samples
block. Taken from Sumatra.patch - many thanks.
|
|
Taken from Sumatra.patch - Many thanks.
|
|
Bring up to date with current APIs, including text device changes.
|
|
Also tidy up the taking of fz_context *'s, and hide an unwanted indent
param.
|
|
Fix a couple of silly problems (one gccism, and one windows specific
bug).
|
|
|
Debug printing functions: debug -> print.
Accessors: get noun attribute -> noun attribute.
Find -> lookup when the returned value is not reference counted.
pixmap_with_rect -> pixmap_with_bbox.
We are reserving the word "find" to mean lookups that give ownership
of objects to the caller. Lookup is used in other places where the
ownership is not transferred, or simple values are returned.
The rename is done by the sed script in scripts/rename3.sed
|