This makes no difference to the current operation of the code, but
ensures that 'saner' values are put into the image_params structure.
This will help pdfwrite give more aesthetically pleasing output later.
|
Unused objects could cause problems with the sort order and with
picking the object to start with; these are now handled.
If the hintstream object replaces another object that already had a
stream, pdf_open_raw_filter would get confused by the presence of a
stm_buf. Now fixed.
Fix a 64-bit problem in page_objects_list_ensure, as well as tweaking
the code for readability.
When outputting single-page files, we could end up with opts->start = 1,
which upset the offset-calculation logic.
Insist on compacting the xref when linearising.
Thanks to Sebras and Zeniko for providing test cases. This commit
should (hopefully) stop the SEGVs, but there are still cases where
Acrobat doesn't consider the output files "Optimised for Fast
Web View", and I cannot see why.
|
If the predictor, number of columns, number of colors or bits per
component is negative, assume the default value.
|
Extend mupdfclean to have a new -l flag that writes the file
linearized. This should still be considered experimental.
When writing a pdf file, analyse object use, flatten resource use,
reorder the objects, generate a hintstream and output with linearisation
parameters.
This is enough for Acrobat to accept the file as being optimised
for 'Fast Web View'. We ought to add more tables to the hintstream
in some cases, but I doubt anyone actually uses it; the spec is so
badly written.
Update fz_dict_put to allow us to add a reference to the dictionary
when that dictionary is already the sole owner of the reference (i.e.
don't drop and then keep something that has a reference count of just 1).
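As a sketch of the hazard being avoided (dict_find_slot is an
illustrative helper, not the real code):

    /* Keep the new value BEFORE dropping the old one. If the caller
     * hands us the value already stored in the slot (with a reference
     * count of just 1), dropping first would free it, and the keep
     * would then touch freed memory. */
    void fz_dict_put(fz_obj *dict, fz_obj *key, fz_obj *val)
    {
        fz_obj **slot = dict_find_slot(dict, key); /* illustrative */

        val = fz_keep_obj(val);
        if (*slot)
            fz_drop_obj(*slot);
        *slot = val;
    }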
Update pdf_load_image_stream to use the stm_buf from the xref if there
is one.
Update pdf_close_document to discard any stm_bufs it may be holding.
Update fz_dict_put to be pdf_dict_put - this was missed in a renaming
ages ago and has been inconsistent since.
|
Needs more work to use the linked list of free xref slots.
|
mupdfclean (or more correctly, the pdf_write function) currently has
a limitation, in that we cannot renumber objects when encryption is
being used. This is because the object/generation number is pickled
into the stream, and renumbering the object causes it to become
unreadable.
The solution used here is to provide extended functions that take both
the object/generation number and the original object/generation
number. The original object numbers are only used for setting up the
encryption.
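As a sketch of the idea (these names and signatures are hypothetical,
not the actual functions):

    /* Hypothetical sketch: the object is written under its NEW number,
     * but the encryption key for a PDF object is salted with the
     * object/generation numbers the stream was encrypted under, so the
     * crypt setup must use the ORIGINAL pair. */
    static void write_object_enc(pdf_xref *xref, fz_obj *obj,
        int num, int gen,           /* new (renumbered) identity */
        int orig_num, int orig_gen) /* identity used for encryption */
    {
        setup_crypt(xref->crypt, orig_num, orig_gen);
        emit_indirect_object(xref, obj, num, gen);
    }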
pdf_write now keeps track of the original object/generation number
for each object.
This fix is important if we ever want to output linearized pdf, as
that requires us to be able to renumber objects into a very specific
order.
We also make a fix in removeduplicateobjects that should only
matter in the case where we fail to read an object correctly.
|
Previously, before interpreting a page's content stream we would
load it entirely into a buffer, and then interpret that
buffer. This has a cost in memory use.
Here, we update the code to read from a stream on the fly.
This has required changes in various parts of the code.
Firstly, we have removed all use of the FILE lock - as stream
reads can now safely be interrupted by resource (or object) reads
from elsewhere in the file, the file lock becomes a very hard
thing to maintain, and doesn't actually benefit us at all. The
choices were to either use a recursive lock, or to remove it
entirely; I opted for the latter.
The file lock enum value remains as a placeholder for future use in
extendable data streams.
Secondly, we add a new 'concat' filter that concatenates a series of
streams together into one, optionally putting whitespace between each
stream (as the pdf parser requires this).
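In sketch form, opening a page whose /Contents is an array of streams
might look like this (fz_open_concat/fz_concat_push are assumed
names/signatures here, and pdf_open_contents_stream is illustrative):

    fz_stream *open_page_contents(fz_context *ctx, pdf_xref *xref, fz_obj *contents)
    {
        int i, n = fz_array_len(contents);
        /* pad=1: put whitespace between the streams, so tokens cannot
         * run together across a join. */
        fz_stream *stm = fz_open_concat(ctx, n, 1);

        for (i = 0; i < n; i++)
            fz_concat_push(stm, pdf_open_contents_stream(xref, fz_array_get(contents, i)));

        return stm;
    }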
Finally, we change page/xobject/pattern content streams to work
on the fly, but we leave type3 glyphs using buffers (as presumably
these will be run repeatedly).
|
In order to (hopefully) allow page content streams to be interpreted
without having to preload them all into memory before we run them, we
need to make the stream reading code cope with other users moving
the stream pointer.
For example, consider the case where we are midway through
interpreting a contents stream and we hit an operator that
requires something to be read from Resources. This will move the
underlying stream's file pointer, and cause the contents stream to
read incorrectly when control returns to the interpreter.
The solution to this seems fairly simple: whenever we create
a filter out of the file stream, the existing code puts in a 'null'
filter first, to enforce a length limit on the stream. This null
filter already does most of the work we need, because its presence
means that data is buffered in the null filter rather than in the
underlying stream layer.
All we need to do is to keep track of where in the underlying stream
the null filter thinks it is, and ensure that it seeks there before
each read (in case anyone else has moved it).
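In sketch form (the state layout and names here are illustrative):

    typedef struct
    {
        fz_stream *chain; /* the underlying file stream */
        int remain;       /* bytes left under the length limit */
        int offset;       /* where in the file WE think we are */
    } null_filter;

    static int read_null(fz_stream *stm, unsigned char *buf, int len)
    {
        null_filter *state = stm->state;
        int n;

        if (len > state->remain)
            len = state->remain;

        /* Someone else (e.g. a resource load) may have moved the file
         * pointer since our last read; put it back where we left off. */
        fz_seek(state->chain, state->offset, 0 /* from start */);

        n = fz_read(state->chain, buf, len);
        state->remain -= n;
        state->offset += n;
        return n;
    }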
We move the setting of the offset to be explicit in the pdf_open_filter
(and associated) call(s), rather than requiring fz_seeks elsewhere.
|
Attempt to separate public API from internal functions.
|
Currently, we are in the slightly strange position of having
the PDF specific object types as part of fitz. Here we pull
them out into the pdf layer instead. This has been made possible
by the recent changes to make the store no longer be tied to
having fz_obj's as keys.
Most of this work is a simple huge rename; to help customers who
may have code that uses such functions, we have provided a sed
script to do the renaming: scripts/rename2.sed.
Various other small tweaks are required; the store used to have
some debugging code that still required knowledge of fz_obj
types - we extract that into a nicer 'type' based function
pointer. Also, the type 3 font handling used to have an fz_obj
pointer for type 3 resources, and therefore needed to know how
to free this; this has become a void * with a function to free
it.
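The type 3 tweak amounts to something like this (field names are
illustrative):

    /* fitz holds type 3 resources opaquely; the pdf layer supplies the
     * pointer and the function that knows how to free it. */
    struct fz_font_s
    {
        /* ... other font fields ... */
        void *t3resources;
        void (*t3free)(fz_context *ctx, void *resources);
    };

    void fz_drop_font(fz_context *ctx, fz_font *font)
    {
        /* ... reference counting elided ... */
        if (font->t3resources && font->t3free)
            font->t3free(ctx, font->t3resources);
        /* ... free the remaining fields ... */
    }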
|
Introduce a new 'fz_image' type; this type contains rudimentary
information about images (such as native size, colorspace, etc.)
and a function to call to get a pixmap of that image (with a
size hint).
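Conceptually, fz_image is something like the sketch below (the exact
field set is a guess for illustration); the key point is the get-pixmap
hook and its size hint:

    typedef struct fz_image_s fz_image;

    struct fz_image_s
    {
        int w, h;                  /* native size */
        fz_colorspace *colorspace;
        /* Produce a pixmap for this image; w and h are a size hint,
         * so the implementation may subsample rather than decode at
         * full size. */
        fz_pixmap *(*get_pixmap)(fz_context *ctx, fz_image *image, int w, int h);
    };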
Instead of passing pixmaps through the device interface (and
holding pixmaps in the display list) we now pass images instead.
The rendering routines therefore call fz_image_to_pixmap to get
pixmaps to render, and fz_pixmap_drop those afterwards.
The file format handling routines therefore need to produce
images rather than pixmaps; xps and cbz currently just wrap
pixmaps as images. PDF is more involved.
The stream handling routines in PDF have been altered so that
they can recognise when the last stream entry in a filter
dictionary is an image decoding filter. Rather than applying
this filter, they read and store the parameters into a
pdf_image_params structure, and stop decoding at that point.
This allows us to read the compressed data for an image into
memory as a block. We can then restart the image decode process
later.
A pdf_image therefore holds the compressed data for an image.
When a pixmap is requested for such an image, the code checks to
see whether we already have one (of an appropriate size), and if
not, decodes it.
The size hint is used to determine whether it is possible to
subsample the image; currently this is only supported for
JPEGs, but we could add generic subsampling code later.
In order to handle caching the produced images, various changes
have been made to the store and the underlying hash table.
Previously the store was indexed purely by fz_obj keys; we don't
have an fz_obj key any more, so have extended the store by adding
a concept of a key 'type'. A key type is a pointer to a set of
functions that keep/drop/compare and make a hashable key from
a key pointer.
We make a pdf_store.c file that contains functions to offer the
existing fz_obj based functions, and add a new 'type' for keys
(based on the fz_image handle, and the subsample factor) in the
pdf_image.c file.
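A key type is essentially a small vtable; in sketch form (names
approximate, modelled on the description above):

    /* Callbacks that let the store keep/drop/compare keys and reduce
     * one to a hashable value without knowing the key's concrete type. */
    typedef struct fz_store_type_s
    {
        void *(*keep_key)(fz_context *ctx, void *key);
        void (*drop_key)(fz_context *ctx, void *key);
        int (*cmp_key)(void *key_a, void *key_b);
        void (*make_hash_key)(fz_store_hash *hash, void *key);
    } fz_store_type;

    /* e.g. pdf_image.c keys on the image handle plus subsample factor: */
    typedef struct
    {
        fz_image *image;
        int factor;
    } pdf_image_key;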
While working on this, a problem became apparent in the existing
store code: fz_obj objects had no protection on their reference
counts, hence an interpreter thread could try to alter a ref count
at the same time as a malloc caused an eviction from the store.
This has been solved by using the alloc lock as protection. This in
turn requires some tweaks to the code to make sure we don't try
and keep/drop fz_obj's from the store code while the alloc lock is
held.
A side effect of this work is that when a hash table is created, we
inform it what lock should be used to protect its innards (if any).
If the alloc lock is used, the insert method knows to drop/retake it
to allow it to safely expand the hash table. Callers to the hash
functions have the responsibility of taking/dropping the appropriate
lock, and ensuring that they cope with the possibility that insert
might drop the alloc lock, causing race conditions.
|
I made a last minute change to make pdf_open_filter do the locking,
and forgot to remove the call to lock from pdf_open_stream_with_offset.
Fixed here. Thanks to Radu Lazar for pointing this out.
|
This is a significant change to the use of locks in MuPDF.
Previously, the user had the option of passing us lock/unlock
functions for a single mutex as part of the allocation struct.
Now we remove these entries from the allocation struct, and
make a separate 'locks' struct. This enables people to use
fz_alloc_default with locking.
If multithreaded operation is required, then the user is
required to create FZ_LOCK_MAX mutexes, which will be locked
or unlocked by MuPDF calling the lock/unlock functions within
the new fz_locks_context structure passed in at context creation.
These mutexes are not required to be recursive (they may be, but
MuPDF should never call them in this way). MuPDF avoids deadlocks
by imposing a locking ordering on itself; a thread will never take
lock n if it already holds any lock i for which 0 <= n <= i.
Currently, there are 4 locks used within MuPDF.
Lock 0: The alloc lock; taken around all calls to user supplied
(or default) allocation functions. Also taken around all accesses
to the refs field of storable items.
Lock 1: The store lock; taken whenever the store data structures
(specifically the linked list pointers) are accessed.
Lock 2: The file lock; taken whenever a thread is accessing the raw
file. We use the debugging macros to insist that this is held
whenever we do a file based seek or read. We also insist that this
is never held when we resolve an indirect reference, as this can
have the effect of moving the file pointer.
Lock 3: The glyphcache lock; taken whenever a thread calls freetype,
or accesses the glyphcache data structures. This introduces some
complexities w.r.t type3 fonts.
Locking can be hugely problematic, so to ease our minds as to
the correctness of this code, we introduce some debugging macros.
These compile away to nothing unless FITZ_DEBUG_LOCKING is defined.
fz_assert_lock_held(ctx, lock) checks that we hold lock.
fz_assert_lock_not_held(ctx, lock) checks that we do not hold lock.
In addition fz_lock_debug_lock and fz_lock_debug_unlock are used
on every fz_lock/fz_unlock to check the validity of the operation
we are performing - in particular it checks that we do/do not already
hold the lock we are trying to take/drop, and that by taking this
lock we are not violating our defined locking order.
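A simplified sketch of the check made on every lock (holds_lock and
mark_lock_held stand in for the real bookkeeping):

    #ifdef FITZ_DEBUG_LOCKING
    static void fz_lock_debug_lock(fz_context *ctx, int lock)
    {
        int i;

        /* Taking lock n is illegal if we already hold any lock i with
         * n <= i; the i == n case also catches recursive taking. */
        for (i = lock; i < FZ_LOCK_MAX; i++)
            assert(!holds_lock(ctx, i));
        mark_lock_held(ctx, lock);
    }
    #endif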
The RESOLVE macro (used throughout the code to check whether we need
to resolve an indirect reference) calls fz_assert_lock_not_held to
ensure that we aren't about to resolve an indirect reference (and
hence move the stream pointer) when the file is locked.
In order to implement the file locking properly, pdf_open_stream
(and friends) now lock the file as a side effect (because they
fz_seek to the start of the stream). The lock is automatically
dropped on an fz_close of such streams.
Previously, the glyph cache was created in a context when it was first
required; this presents problems as it can be shared between several
contexts or not, depending on whether it is created before the
contexts are cloned. We now always create it at startup, so it is
always shared.
This means that we need reference counting for the glyph caches.
Added here.
In fz_render_glyph, we take the glyph cache lock, and check to see
whether the glyph is in the cache. If it is, we bump the refcount,
drop the lock and return the cached character. If it is not, we
need to render the character.
For freetype based fonts we keep the lock throughout the rendering
process, thus ensuring that freetype is only called in a single
threaded manner.
For type3 fonts, however, we need to invoke the interpreter again
to render the glyph streams. This can require reentrance to this
routine. We therefore drop the glyph cache lock, call the
interpreter to render us our pixmap, and take the lock again.
This dropping and retaking of the lock introduces a possible race
condition; 2 threads may try to render the same character at the
same time. We therefore modify our hash table insert routines to
behave differently if it comes to insert an entry only to find
that an entry with the same key is already there.
We spot this case; if we have just rendered a type3 glyph and when
we try to insert it into the cache discover that someone has beaten
us to it, we just discard our entry and use the cached one.
Hopefully this will seldom be a problem in practice; to solve it
properly would require greater complexity (probably involving
spotting that another thread is already working on the desired
rendering, and sleeping on a semaphore until it completes).
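In outline, the type 3 path looks like this sketch (the glyph_cache_*
and run_t3_glyph_stream helpers are illustrative):

    fz_pixmap *render_t3_glyph(fz_context *ctx, fz_font *font, int gid)
    {
        fz_pixmap *pix, *existing;

        fz_lock(ctx, FZ_LOCK_GLYPHCACHE);
        pix = glyph_cache_lookup(ctx, font, gid);
        if (pix)
        {
            pix = fz_keep_pixmap(ctx, pix); /* bump refcount under the lock */
            fz_unlock(ctx, FZ_LOCK_GLYPHCACHE);
            return pix;
        }

        /* Rendering a type 3 glyph re-enters the interpreter, which may
         * call back into this routine, so drop the lock meanwhile. */
        fz_unlock(ctx, FZ_LOCK_GLYPHCACHE);
        pix = run_t3_glyph_stream(ctx, font, gid);
        fz_lock(ctx, FZ_LOCK_GLYPHCACHE);

        /* Another thread may have rendered the same glyph while we were
         * unlocked; if so, discard ours and use the cached one. */
        existing = glyph_cache_insert(ctx, font, gid, pix);
        if (existing != pix)
        {
            fz_drop_pixmap(ctx, pix);
            pix = fz_keep_pixmap(ctx, existing);
        }
        fz_unlock(ctx, FZ_LOCK_GLYPHCACHE);
        return pix;
    }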
|
In error cases, ensure we free objects correctly. Thanks to Zeniko
for finding the problems (and many of the solutions!)
|
Rather than passing a stream to a close function, just pass the
context and state - that's all that is required. This enables us to
call close to clean up neatly if the stream fails to allocate.
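The change is roughly:

    /* Before (sketch): close needed a fully-formed stream. */
    void (*close)(fz_stream *stm);

    /* After (sketch): context and state suffice, so close can be used
     * to clean up even when allocating the fz_stream itself fails. */
    void (*close)(fz_context *ctx, void *state);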
|
When using exceptions (which are implemented using setjmp/longjmp), we
need to be careful to ensure that variable values get written back
before any exception happens.
Previously we've done that using volatile, but that produces nasty
warnings (and unduly limits the compiler's freedom to optimise). Here
we introduce a new macro, fz_var, that passes the address of the
variable out of scope. This means that the compiler has to ensure
that any changes to its value are written back to memory before
calling any out-of-scope function.
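Typical use looks like this minimal sketch (do_something_risky is
illustrative):

    fz_buffer *buf = NULL;

    /* 'buf' is assigned inside fz_try and used in fz_catch, so it must
     * survive the longjmp; fz_var stops the compiler caching it in a
     * register across the setjmp. */
    fz_var(buf);

    fz_try(ctx)
    {
        buf = fz_new_buffer(ctx, 1024);
        do_something_risky(ctx, buf); /* may throw */
    }
    fz_catch(ctx)
    {
        fz_drop_buffer(ctx, buf);
        fz_rethrow(ctx);
    }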
|
This frees us from passing errors back everywhere, and hence enables us
to pass results back as return values.
Rather than having to explicitly check for errors everywhere and bubble
them up, we now allow exception handling to do the work for us; the
downside to this is that we no longer emit as much debugging information
as we did before (though this could be put back in). For now, the
debugging information we have lost has been retained in comments
with 'RJW:' at the start.
This code needs fuller testing, but is being committed as a work in
progress.
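The shape of the change, as a sketch (the pdf_load_page signatures
here are from memory and may not match exactly):

    /* Before: every call site checks and forwards an error code. */
    error = pdf_load_page(&page, xref, number);
    if (error)
        return fz_error_note(error, "cannot load page %d", number);

    /* After: failures propagate automatically via fz_throw/longjmp;
     * results come back as return values, and callers that care wrap
     * the call in fz_try/fz_catch. */
    page = pdf_load_page(xref, number);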
|
Huge pervasive change to lots of files, adding a context for exception
handling and allocation.
In time we'll move more statics into there.
Also fix some for (i = 0; i < function(...); i++) loops.
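Presumably fixes of this shape (an assumed example; the actual call
sites vary), so the bound is evaluated once rather than on every
iteration:

    /* before */
    for (i = 0; i < fz_dict_len(dict); i++)
        process(fz_dict_get_val(dict, i)); /* process() is illustrative */

    /* after */
    n = fz_dict_len(dict);
    for (i = 0; i < n; i++)
        process(fz_dict_get_val(dict, i));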
|
Import exception handling code from WSS, modified to fit into the
fitz world.
With this code we have 'real' fz_try/fz_catch/fz_rethrow functions,
handling an fz_except type. We therefore rename the existing fz_throw/
fz_catch/fz_rethrow to be fz_error_make/fz_error_handle/fz_error_note.
We don't actually use fz_try/fz_catch/fz_rethrow yet...
|
The run-together words are dead! Long live the underscores!
The PostScript-inspired naming convention of using all run-together
words has served us well, but it is now time for more readable code.
In this commit I have also added the sed script, rename.sed, that I used
to convert the source. Use it on your patches and application code.