Age | Commit message | Author
|
Remove the shim indirection layer for fz_document. A little less type
safe, but a lot less boilerplate.
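Roughly, the idea is that the document struct now carries the operation
table directly, and each backend casts the generic pointer back to its
own type internally. A sketch only (simplified, illustrative member list,
not the exact structures):

    /* Illustrative sketch -- simplified, not the exact member list. */
    typedef struct fz_document_s fz_document;

    struct fz_document_s
    {
        void (*close)(fz_document *doc);
        int (*count_pages)(fz_document *doc);
        /* ... further operations ... */
    };

    /* Each backend fills the table with functions that cast the
       fz_document pointer back to its concrete type internally:
       less type safe, but no per-operation wrapper (shim) needed. */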
|
Previously, before interpreting a page's content stream we would
load it entirely into a buffer. Then we would interpret that
buffer. This has a cost in memory use.
Here, we update the code to read from a stream on the fly.
This has required changes in several parts of the code.
Firstly, we have removed all use of the FILE lock - as stream
reads can now safely be interrupted by resource (or object) reads
from elsewhere in the file, the file lock becomes a very hard
thing to maintain, and doesn't actually benefit us at all. The
choices were to either use a recursive lock, or to remove it
entirely; I opted for the latter.
The file lock enum value remains as a placeholder for future use in
extendable data streams.
Secondly, we add a new 'concat' filter that concatenates a series of
streams together into one, optionally putting whitespace between each
stream (as the PDF parser requires this).
Finally, we change page/xobject/pattern content streams to work
on the fly, but we leave type3 glyphs using buffers (as presumably
these will be run repeatedly).
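As a rough illustration of the 'concat' idea (a sketch with a simplified
stream interface and hypothetical names, not the actual fz_stream filter
code):

    #include <stddef.h>

    /* Simplified stream: read() returns bytes read, 0 at end of stream. */
    typedef struct
    {
        int (*read)(void *state, unsigned char *buf, size_t len);
        void *state;
    } my_stream;

    typedef struct
    {
        my_stream **streams;  /* streams to concatenate, in order */
        int count;            /* number of streams */
        int current;          /* stream currently being read */
        int pad_ws;           /* put whitespace between streams? */
        int need_ws;          /* a whitespace byte is pending */
    } concat_state;

    static int
    concat_read(void *state_, unsigned char *buf, size_t len)
    {
        concat_state *cs = state_;
        size_t n = 0;

        while (n < len && cs->current < cs->count)
        {
            int got;

            if (cs->need_ws)
            {
                /* Emit one whitespace byte so the PDF parser sees a
                   token break between consecutive content streams. */
                buf[n++] = ' ';
                cs->need_ws = 0;
                continue;
            }

            got = cs->streams[cs->current]->read(
                cs->streams[cs->current]->state, buf + n, len - n);
            if (got == 0)
            {
                /* Current stream exhausted: move to the next one. */
                cs->current++;
                cs->need_ws = cs->pad_ws;
            }
            else
                n += (size_t)got;
        }
        return (int)n;
    }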
|
Use this to reintroduce "Document Properties..." in the mupdf viewer.
|
Thanks to SumatraPDF.
|
Attempt to separate public API from internal functions.
|
This is a significant change to the use of locks in MuPDF.
Previously, the user had the option of passing us lock/unlock
functions for a single mutex as part of the allocation struct.
Now we remove these entries from the allocation struct, and
make a separate 'locks' struct. This enables people to use
fz_alloc_default with locking.
If multithreaded operation is required, then the user is
required to create FZ_LOCK_MAX mutexes, which will be locked
or unlocked by MuPDF calling the lock/unlock functions within
the new fz_locks_context structure passed in at context creation.
These mutexes are not required to be recursive (they may be, but
MuPDF should never call them in this way). MuPDF avoids deadlocks
by imposing a lock ordering on itself; a thread will never take
lock n if it already holds any lock i for which 0 <= i <= n.
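For example, a multi-threaded caller might hook this up with pthreads
roughly as follows (a sketch; the callback shape with a user pointer and
a lock index is an assumption based on the description here, and the
exact context-creation call is omitted):

    #include <pthread.h>
    #include "fitz.h"   /* or wherever fz_locks_context / FZ_LOCK_MAX live */

    static pthread_mutex_t mutexes[FZ_LOCK_MAX];
    static fz_locks_context locks;

    static void my_lock(void *user, int lock)
    {
        (void)user;
        pthread_mutex_lock(&mutexes[lock]);
    }

    static void my_unlock(void *user, int lock)
    {
        (void)user;
        pthread_mutex_unlock(&mutexes[lock]);
    }

    static void setup_locking(void)
    {
        int i;

        for (i = 0; i < FZ_LOCK_MAX; i++)
            pthread_mutex_init(&mutexes[i], NULL); /* non-recursive is fine */

        locks.lock = my_lock;
        locks.unlock = my_unlock;
        /* Pass &locks at context creation so MuPDF can call the
           callbacks around its internal critical sections. */
    }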
Currently, there are 4 locks used within MuPDF.
Lock 0: The alloc lock; taken around all calls to user supplied
(or default) allocation functions. Also taken around all accesses
to the refs field of storable items.
Lock 1: The store lock; taken whenever the store data structures
(specifically the linked list pointers) are accessed.
Lock 2: The file lock; taken whenever a thread is accessing the raw
file. We use the debugging macros to insist that this is held
whenever we do a file based seek or read. We also insist that this
is never held when we resolve an indirect reference, as this can
have the effect of moving the file pointer.
Lock 3: The glyphcache lock; taken whenever a thread calls freetype,
or accesses the glyphcache data structures. This introduces some
complexities w.r.t. type3 fonts.
Locking can be hugely problematic, so to ease our minds as to
the correctness of this code, we introduce some debugging macros.
These compile away to nothing unless FITZ_DEBUG_LOCKING is defined.
fz_assert_lock_held(ctx, lock) checks that we hold lock.
fz_assert_lock_not_held(ctx, lock) checks that we do not hold lock.
In addition fz_lock_debug_lock and fz_lock_debug_unlock are used
on every fz_lock/fz_unlock to check the validity of the operation
we are performing - in particular it checks that we do/do not already
hold the lock we are trying to take/drop, and that by taking this
lock we are not violating our defined locking order.
The RESOLVE macro (used throughout the code to check whether we need
to resolve an indirect reference) calls fz_assert_lock_not_held to
ensure that we aren't about to resolve an indirect reference (and
hence move the stream pointer) when the file is locked.
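One possible shape for these checks (a sketch, not the actual
implementation; a real version needs per-thread state):

    #include <assert.h>

    #define MY_LOCK_MAX 4

    /* One flag per lock for the current thread (sketch only; real code
       would need thread-local storage or a per-thread table). */
    static int held[MY_LOCK_MAX];

    static void debug_lock(int lock)
    {
        int i;

        /* Ordering rule: never take lock n while holding any lock i
           with 0 <= i <= n, so anything already held must be a
           higher-numbered lock. This also catches recursive takes. */
        for (i = 0; i <= lock; i++)
            assert(!held[i]);
        held[lock] = 1;
    }

    static void debug_unlock(int lock)
    {
        /* We can only drop a lock we actually hold. */
        assert(held[lock]);
        held[lock] = 0;
    }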
In order to implement the file locking properly, pdf_open_stream
(and friends) now lock the file as a side effect (because they
fz_seek to the start of the stream). The lock is automatically
dropped on an fz_close of such streams.
Previously, the glyph cache was created in a context when it was first
required; this presents problems as it can be shared between several
contexts or not, depending on whether it is created before the
contexts are cloned. We now always create it at startup, so it is
always shared.
This means that we need reference counting for the glyph caches.
Added here.
In fz_render_glyph, we take the glyph cache lock, and check to see
whether the glyph is in the cache. If it is, we bump the refcount,
drop the lock and return the cached character. If it is not, we
need to render the character.
For freetype based fonts we keep the lock throughout the rendering
process, thus ensuring that freetype is only called in a single
threaded manner.
For type3 fonts, however, we need to invoke the interpreter again
to render the glyph streams. This can require reentrance to this
routine. We therefore drop the glyph cache lock, call the
interpreter to render us our pixmap, and take the lock again.
This dropping and retaking of the lock introduces a possible race
condition; 2 threads may try to render the same character at the
same time. We therefore modify our hash table insert routines to
behave differently if they come to insert an entry only to find
that an entry with the same key is already there.
We spot this case; if we have just rendered a type3 glyph and, when
we try to insert it into the cache, discover that someone has beaten
us to it, we just discard our entry and use the cached one.
Hopefully this will seldom be a problem in practice; to solve it
properly would require greater complexity (probably involving
spotting that another thread is already working on the desired
rendering, and sleeping on a semaphore until it completes).
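One way the caller side of this can look (a sketch with hypothetical
names; here the insert routine reports the already-present value instead
of failing):

    /* Hypothetical types and helpers, for illustration only. */
    typedef struct my_hash my_hash;
    typedef struct my_key my_key;
    typedef struct my_glyph my_glyph;

    /* Returns NULL if the key was inserted, or the value already stored
       under this key if another thread got there first. Called with the
       glyph cache lock held. */
    my_glyph *my_hash_insert(my_hash *cache, const my_key *key, my_glyph *val);

    my_glyph *my_keep_glyph(my_glyph *g);   /* bump the refcount */
    void my_drop_glyph(my_glyph *g);        /* drop the refcount */

    static my_glyph *
    cache_type3_glyph(my_hash *cache, const my_key *key, my_glyph *rendered)
    {
        /* Called after re-taking the glyph cache lock, with the glyph we
           rendered while the lock was dropped. */
        my_glyph *existing = my_hash_insert(cache, key, rendered);
        if (existing)
        {
            /* Another thread rendered and cached the same glyph while
               the lock was dropped: discard ours and use theirs. */
            my_drop_glyph(rendered);
            return my_keep_glyph(existing);
        }
        return rendered;
    }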
|
All destructors should accept NULL.
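For example, a destructor following this convention simply returns early
(my_thing and its fields are hypothetical):

    #include "fitz.h"   /* for fz_context / fz_free */

    void
    my_drop_thing(fz_context *ctx, my_thing *thing)
    {
        if (thing == NULL)
            return;  /* accepting NULL keeps error-path cleanup simple */
        fz_free(ctx, thing->data);
        fz_free(ctx, thing);
    }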
|
xps_parse_resource_dictionary can now warn and return NULL when reading
an empty dictionary, rather than throwing an error.
Various bits of code are therefore updated to check for a NULL return
value.
We should cope better with multiple Resource dictionaries.
Tweak to the zip file handling when looking for parts.
Again, all from (or inspired by) Zeniko's patch.
|
The new fz_malloc_struct(A,B) macro allocates sizeof(B) bytes using
fz_malloc, and then passes the resultant pointer to Memento_label
to label it with "B".
This costs nothing in non-Memento builds, but gives much nicer
listings of leaked blocks when Memento is enabled.
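Based on that description, the macro is roughly of this shape (a sketch;
the actual definition may differ in detail):

    #define fz_malloc_struct(A, B) \
        ((B *)Memento_label(fz_malloc(A, sizeof(B)), #B))

In non-Memento builds Memento_label simply evaluates to its pointer
argument, which is why this costs nothing there.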
|
This frees us from passing errors back everywhere, and hence enables us
to pass results back as return values.
Rather than having to explicitly check for errors everywhere and bubble
them up, we now allow exception handling to do the work for us; the
downside to this is that we no longer emit as much debugging information
as we did before (though this could be put back in). For now, the
debugging information we have lost has been retained in comments
with 'RJW:' at the start.
This code needs fuller testing, but is being committed as a work in
progress.
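The converted code typically ends up in this shape (a sketch of the
idiom; the my_* names are hypothetical helpers):

    #include "fitz.h"   /* for fz_context, fz_try/fz_catch, fz_warn */

    static void
    render_one_page(fz_context *ctx, my_document *doc, my_device *dev, int number)
    {
        my_page * volatile page = NULL; /* volatile: set inside fz_try, used after */

        fz_try(ctx)
        {
            page = my_load_page(ctx, doc, number); /* throws instead of returning an error code */
            my_run_page(ctx, page, dev);
        }
        fz_catch(ctx)
        {
            fz_warn(ctx, "cannot render page %d", number);
        }
        my_drop_page(ctx, page); /* tolerates NULL, so this is safe either way */
    }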
|
Huge pervasive change to lots of files, adding a context for exception
handling and allocation.
In time we'll move more statics into it.
Also fix some for(i = 0; i < function(...); i++) calls.
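The loop fix mentioned here is presumably about not re-evaluating the
call in the condition on every iteration; a sketch of that kind of
change (hypothetical names):

    /* Before: my_count(list) was called on every iteration.
       for (i = 0; i < my_count(list); i++) ... */

    /* After: evaluate it once and reuse the result. */
    static void
    process_all(my_list *list)
    {
        int i, n;

        n = my_count(list);
        for (i = 0; i < n; i++)
            do_something(list, i);
    }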
|
Import exception handling code from WSS, modified to fit into the
fitz world.
With this code we have 'real' fz_try/fz_catch/fz_rethrow functions,
handling a fz_except type. We therefore rename the existing fz_throw/
fz_catch/fz_rethrow to be fz_error_make/fz_error_handle/fz_error_note.
We don't actually use fz_try/fz_catch/fz_rethrow yet...
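For reference, the general shape of setjmp/longjmp based try/catch in C
(a minimal sketch, not the imported WSS code):

    #include <setjmp.h>
    #include <stdio.h>

    typedef struct
    {
        jmp_buf buf;
        const char *message;
    } my_except;

    #define my_try(e)   if (setjmp((e)->buf) == 0)
    #define my_catch(e) else

    static void my_throw(my_except *e, const char *message)
    {
        e->message = message;
        longjmp(e->buf, 1);  /* unwind back to the matching setjmp */
    }

    static void might_fail(my_except *e, int fail)
    {
        if (fail)
            my_throw(e, "something went wrong");
    }

    int main(void)
    {
        my_except e;

        my_try(&e)
        {
            might_fail(&e, 1);
            puts("no error");
        }
        my_catch(&e)
        {
            printf("caught: %s\n", e.message);
        }
        return 0;
    }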
|
The run-together words are dead! Long live the underscores!
The PostScript-inspired naming convention of using all run-together
words has served us well, but it is now time for more readable code.
In this commit I have also added the sed script, rename.sed, that I used
to convert the source. Use it on your patches and application code.
|