If an illegal keysize is passed into the AES crypt filter, we
currently exit without setting up the AES context. This causes
us to fail in all manner of ways later on.
We now return failure and callers throw an exception.
This appears to solve all the SEGVs and memory exceptions found in
crypt_aes by Mateusz "j00ru" Jurczyk and Gynvael Coldwind of the
Google Security Team using Address Sanitizer. Many thanks!
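The shape of the fix, as a minimal sketch (the type and function names
here are illustrative, not the actual fitz API):

    typedef struct { int keylen; /* ... round keys etc. ... */ } aes_context;

    /* Return 0 on success, -1 on an illegal key size; the caller can
     * then throw an exception instead of continuing with an
     * uninitialised context. */
    static int
    aes_setkey_checked(aes_context *ctx, const unsigned char *key, int keybits)
    {
        if (keybits != 128 && keybits != 192 && keybits != 256)
            return -1;
        ctx->keylen = keybits / 8;
        /* ... expand the key schedule as normal ... */
        (void)key;
        return 0;
    }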
|
Instead of using macros for min/max/abs/clamp, we move to using
inline functions. These are more type-safe, and should produce
equivalent code on compilers that support inline (i.e. pretty much
everything we care about these days).
People can always do their own macro versions if they prefer.
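For illustration, the practical difference is that a macro like
MIN(a,b) evaluates its arguments twice, so MIN(x++, y) misbehaves;
an inline function does not. A sketch of the int variants (names
follow the fitz style, but the real header covers more types):

    static inline int fz_mini(int a, int b) { return a < b ? a : b; }
    static inline int fz_maxi(int a, int b) { return a > b ? a : b; }
    static inline int fz_absi(int a) { return a < 0 ? -a : a; }
    static inline int fz_clampi(int x, int lo, int hi)
    {
        return x < lo ? lo : x > hi ? hi : x;
    }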
|
normal_994.pdf SEGVs due to a negative stream length. Simple fix:
treat negative-length streams as zero length.
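The fix reduces to a clamp at the point where the stream's length is
taken (a sketch; the lookup of the length itself is elided):

    /* Treat a negative /Length as an empty stream. */
    static int sanitize_stream_length(int len)
    {
        return len < 0 ? 0 : len;
    }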
|
This solves the normal_87.pdf rendering issues.
|
Previously, before interpreting a page's content stream we would
load it entirely into a buffer. Then we would interpret that
buffer. This has a cost in memory use.
Here, we update the code to read from a stream on the fly.
This has required changes in various parts of the code.
Firstly, we have removed all use of the FILE lock - as stream
reads can now safely be interrupted by resource (or object) reads
from elsewhere in the file, the file lock becomes a very hard
thing to maintain, and doesn't actually benefit us at all. The
choices were to either use a recursive lock, or to remove it
entirely; I opted for the latter.
The file lock enum value remains as a placeholder for future use in
extendable data streams.
Secondly, we add a new 'concat' filter that concatenates a series of
streams together into one, optionally putting whitespace between each
stream (as the pdf parser requires this).
Finally, we change page/xobject/pattern content streams to work
on the fly, but we leave type3 glyphs using buffers (as presumably
these will be run repeatedly).
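A minimal sketch of how such a concat filter can work; the sub-stream
type and read callback here are placeholders, not the real fitz
stream API:

    typedef int (*sub_read_fn)(void *sub, unsigned char *buf, int len);

    typedef struct
    {
        void **subs;        /* sub-streams, drained in order */
        int count;
        int current;
        int pad_ws;         /* insert whitespace between streams? */
        int need_ws;        /* separator pending before more data */
        sub_read_fn read;
    } concat_state;

    /* Read up to len bytes, moving on to the next sub-stream as each
     * one is exhausted, and emitting a single space between streams
     * so the pdf lexer never sees two streams' tokens run together. */
    static int
    concat_read(concat_state *st, unsigned char *buf, int len)
    {
        int n = 0;
        while (n < len && st->current < st->count)
        {
            int k;
            if (st->need_ws)
            {
                buf[n++] = ' ';
                st->need_ws = 0;
                continue;
            }
            k = st->read(st->subs[st->current], buf + n, len - n);
            if (k < 0)
                return k;       /* propagate read errors */
            if (k == 0)         /* current sub-stream exhausted */
            {
                st->current++;
                st->need_ws = st->pad_ws;
            }
            else
                n += k;
        }
        return n;               /* 0 => everything consumed */
    }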
|
In order to (hopefully) allow page content streams to be interpreted
without having to preload them all into memory before we run them, we
need to make the stream reading code cope with other users moving
the stream pointer.
For example, consider the case where we are midway through
interpreting a content stream and hit an operator that requires
something to be read from Resources. This will move the underlying
stream file pointer, and cause the content stream to read
incorrectly when control returns to the interpreter.
The solution seems fairly simple: whenever we create a filter out
of the file stream, the existing code puts in a 'null' filter
first, to enforce a length limit on the stream. This null filter
already does most of the work we need; because it is there, the
buffering of data is done in the null filter rather than in the
underlying stream layer.
All we need to do is to keep track of where in the underlying stream
the null filter thinks it is, and ensure that it seeks there before
each read (in case anyone else has moved it).
We move the setting of the offset to be explicit in the pdf_open_filter
(and associated) call(s), rather than requiring fz_seeks elsewhere.
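In outline, the null filter's read path gains a seek-if-moved check
(a sketch, with placeholder raw_* functions standing in for the
underlying stream operations):

    typedef struct
    {
        void *file;        /* underlying seekable stream */
        int offset;        /* where this filter thinks the file is */
        int remain;        /* bytes left under the length limit */
    } null_state;

    extern int raw_tell(void *file);
    extern void raw_seek(void *file, int offset);
    extern int raw_read(void *file, unsigned char *buf, int len);

    static int
    null_read(null_state *st, unsigned char *buf, int len)
    {
        int n;
        if (len > st->remain)
            len = st->remain;   /* enforce the length limit */
        if (len == 0)
            return 0;
        if (raw_tell(st->file) != st->offset)
            raw_seek(st->file, st->offset); /* someone moved it; go back */
        n = raw_read(st->file, buf, len);
        if (n > 0)
        {
            st->offset += n;
            st->remain -= n;
        }
        return n;
    }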
|
Attempt to separate public API from internal functions.
|
One of the previous memsqueezing fixes (specifically that in
close_dctd) appears to cause the Memento fork-based squeezing
process to stop.
This appears to be because the old code would do a NULL dereference,
causing a SEGV. This would somehow NOT be picked up by the signal
handler, and the child would exit.
If the code is fixed to avoid the SEGV, it then somehow goes on to
do something (not in the close_dctd code) that makes the memory
squeezing process grind to a halt - but NOT in the same instance of
the executable. I am at a loss to explain this, but would rather the
code stays as it is (being, as far as I can see, correct) for now.
|
Rather than passing a stream to a close function, just pass context
and state - that's all that is required. This enables us to call
close to clean up neatly if the stream fails to allocate.
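So a close callback ends up with a shape along these lines (a
sketch; close_null is illustrative, and fz_free is used here in its
two-argument, context-taking form):

    typedef struct fz_context fz_context;
    extern void fz_free(fz_context *ctx, void *p);

    /* Before, the callback took the whole stream; now it takes just
     * context and state, so it can also be used to clean up when
     * allocating the enclosing stream itself fails. */
    static void
    close_null(fz_context *ctx, void *state)
    {
        fz_free(ctx, state);
    }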
|
The new fz_malloc_struct(A,B) macro allocates sizeof(B) bytes using
fz_malloc, and then passes the resultant pointer to Memento_label
to label it with "B".
This costs nothing in non-Memento builds, but gives much nicer
listings of leaked blocks when Memento is enabled.
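From that description, the macro is essentially (a sketch, not
necessarily the exact definition):

    /* Allocate sizeof(TYPE) bytes and label the block "TYPE" so
     * leaked blocks show up by name in Memento listings; in
     * non-Memento builds Memento_label just returns its pointer
     * argument. */
    #define fz_malloc_struct(CTX, TYPE) \
        ((TYPE *)Memento_label(fz_malloc(CTX, sizeof(TYPE)), #TYPE))

A typical use would then be something like
fz_malloc_struct(ctx, fz_font).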
|
Also: use 'cannot' instead of 'failed to' in error messages.
|
Huge pervasive change to lots of files, adding a context for exception
handling and allocation.
In time we'll move more statics into it.
Also fix some for (i = 0; i < function(...); i++) loops.
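The loop fix is hoisting the call out of the condition, so it is
evaluated once rather than on every iteration (count_items and
process are placeholders):

    extern int count_items(const void *list);   /* placeholder */
    extern void process(void *list, int i);     /* placeholder */

    void example(void *list)
    {
        int i, n;

        /* Before: count_items() is called on every iteration. */
        for (i = 0; i < count_items(list); i++)
            process(list, i);

        /* After: evaluate the bound once. */
        n = count_items(list);
        for (i = 0; i < n; i++)
            process(list, i);
    }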
|
Import exception handling code from WSS, modified to fit into the
fitz world.
With this code we have 'real' fz_try/fz_catch/fz_rethrow functions,
handling a fz_except type. We therefore rename the existing fz_throw/
fz_catch/fz_rethrow to be fz_error_make/fz_error_handle/fz_error_note.
We don't actually use fz_try/fz_catch/fz_rethrow yet...
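The new pattern is meant to read like this (a sketch of the intended
usage; do_work and cleanup are placeholders for real code):

    fz_try(ctx)
    {
        do_work(ctx);       /* anything in here may throw a fz_except */
    }
    fz_catch(ctx)
    {
        cleanup(ctx);       /* release any partial state... */
        fz_rethrow(ctx);    /* ...then pass the error up a level */
    }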
|
Also put the function on the same line for inline functions, so
they stick out and are easy to find with grep.
|
The run-together words are dead! Long live the underscores!
The PostScript-inspired naming convention of running words together
has served us well, but it is now time for more readable code.
In this commit I have also added the sed script, rename.sed, that I used
to convert the source. Use it on your patches and application code.
|
Make all occurrences of the code follow a common idiom.