|
If the Contents of a page is an array, we were forgetting to
write the new singleton replacement back into the page dictionary.
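A minimal sketch of the shape of the fix, assuming the string-keyed
pdf_dict_gets/pdf_dict_puts object API; page_obj and new_contents are
illustrative names, not the actual code:

    pdf_obj *contents = pdf_dict_gets(ctx, page_obj, "Contents");
    if (pdf_is_array(ctx, contents))
    {
        /* new_contents is the freshly built singleton stream object;
         * creating it is not enough, it must also be written back
         * into the page dictionary. */
        pdf_dict_puts(ctx, page_obj, "Contents", new_contents);
    }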
|
|
Otherwise files (such as bug696754.pdf) can go wrong.
|
|
The PDF spec says that a line thickness of 0 should mean "1 device
pixel". We have been using some dodgy logic whereby, if the line
thickness as scaled by the ctm is small (< 0.1f), we make it at
least 1 device pixel.
This can mean that a line does not qualify for thickening at
36dpi, but does qualify at 24dpi. The thickened line at 24dpi
is much thicker than the unthickened line at 36dpi, meaning that
we get a noticeable shift in rendering.
Why do we do this strange logic? Well, presumably it's to avoid
thin lines dropping out completely.
We therefore move to some new logic. Firstly, we create a fudged
'aa_level' value, dependent on the antialias level. With AA level
0 (no antialiasing), this corresponds to 1 device pixel. For the
maximum AA level (8), it corresponds to 1/5 of a device pixel.
Lines thinner than this aa_level are thickened up to it, so we
should get 'continuous' results across different dpis.
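A rough sketch of the clamp this implies; the variable names and the
exact mapping from AA bits to a pixel fraction are assumptions rather
than the actual source:

    float aa_level = 2.0f / (fz_aa_bits + 2); /* 1 pixel at AA 0, 1/5 pixel at AA 8 */
    float thickness = linewidth * expansion;  /* line width as scaled by the ctm */
    if (thickness < aa_level)
        thickness = aa_level;                 /* thin lines never drop below the fudge */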
|
|
Apparently neither OSX nor iOS supports unnamed semaphores,
so steal the gs versions and use them instead.
|
|
|
|
I was using fz_compressed_image when I should have been using
fz_pixmap_image.
|
|
|
|
Split compressed images (images based on a compressed buffer)
and pixmap images (images based on a pixmap) out into separate
subclasses.
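A hedged sketch of the resulting split; the field names are from
memory and the real structs carry more state:

    typedef struct
    {
        fz_image super;                /* shared fz_image header */
        fz_compressed_buffer *buffer;  /* still-encoded data, decoded on demand */
    } fz_compressed_image;

    typedef struct
    {
        fz_image super;                /* shared fz_image header */
        fz_pixmap *tile;               /* already-decoded samples */
    } fz_pixmap_image;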
|
|
Move from ints to bits where possible.
|
|
For now, just use it for controlling image decoding and image scaling.
|
|
Update the core fz_get_pixmap_from_image code to allow fetching
a subarea of a pixmap. We pass in the required subarea, together
with the transformation matrix for the whole image.
On return, we have a pixmap at least as big as was requested,
and the transformation matrix is updated to map the supplied
area to the correct place on the screen.
The draw device is updated to use this as required. Everywhere
else passes NULLs in, and so gets unchanged behaviour.
The standard 'get_pixmap' function has been updated to decode
just the required areas of the bitmaps.
This means that banded rendering of pages will decode just the
image subareas that are required for each band, limiting the
memory use. The downside is that each band will redecode the image
to extract just the section it needs.
The image subareas are put into the fz_store in the same way
as full images. Currently an image area in the store is only reused
when the request matches it exactly; a requested subarea is not
recognised as being contained within an already-stored image.
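A small usage sketch, assuming a signature along the lines of
fz_get_pixmap_from_image(ctx, image, &subarea, &ctm, &w, &h); the
helper name is illustrative:

    static fz_pixmap *
    fetch_band_area(fz_context *ctx, fz_image *image, fz_irect subarea, fz_matrix *ctm)
    {
        int w = image->w, h = image->h;
        /* On return the pixmap covers at least 'subarea', and *ctm has
         * been rewritten to map that subarea onto the page. */
        return fz_get_pixmap_from_image(ctx, image, &subarea, ctm, &w, &h);
    }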
|
|
|
|
The handling of not-decompressing images/fonts was geared towards
pdfclean usage; but now that we can create new PDF files, it makes
more sense to ask for images and fonts to be compressed, rather than
asking for them not to be decompressed, which interacted quirkily
with the 'expand' and 'deflate' flags.
If -f or -i are set, we will never decompress the corresponding font
or image streams, and we will compress them if they are uncompressed.
If -d is set, we will first decompress all streams (modulo -f or -i).
If -z is set, we will then compress all uncompressed streams.
|
|
|
|
Garbage collected languages need a way to signal that they are done
with a device other than freeing it.
It is called implicitly by fz_drop_device, so take care not to call
it again in case it has already been called explicitly.
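A minimal sketch of the intended pattern, assuming an explicit close
entry point (called fz_close_device here, which is an assumption) that
the drop also performs when it has not yet been done:

    fz_close_device(ctx, dev);  /* explicit close from the binding/finalizer */
    /* ... */
    fz_drop_device(ctx, dev);   /* implicit close is skipped if already closed */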
|
|
|
|
|
|
|
|
|
|
When we clone the context, we copy the AA levels from the
base context into the cloned context. This means that we
must set the AA levels in the base context BEFORE cloning
if we want them to be the same everywhere (or set them
explicitly in all contexts).
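For example (fz_set_aa_level and fz_clone_context are the existing
calls; the store size here is arbitrary):

    fz_context *base = fz_new_context(NULL, NULL, FZ_STORE_DEFAULT);
    fz_context *worker;

    fz_set_aa_level(base, 8);          /* set AA in the base context first... */
    worker = fz_clone_context(base);   /* ...so the clone inherits level 8 */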
|
|
|
|
Use a macro to make fz_new_document nicer (akin to
fz_malloc_struct).
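A hedged sketch of what such a macro might look like; the real
definition and the fz_new_document_of_size helper name may differ:

    #define fz_new_document(ctx, TYPE) \
        ((TYPE *)fz_new_document_of_size(ctx, sizeof(TYPE)))

    /* so a document subclass can simply write: */
    html_document *doc = fz_new_document(ctx, html_document);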
|
|
Some new files hadn't been added to the solution, and we were
calling strcasecmp instead of fz_strcasecmp.
|
|
It's a lot of extra typing to prefix everything with "mupdf.".
|
|
|
|
|
|
|
|
|
|
Resources are defined before they are used, so it's only logical to
have the resource dictionary before the content buffer in the argument
list.
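For instance, if this refers to a call like pdf_add_page (an
assumption on my part), the argument order then reads naturally:

    /* resources first, then the contents that refer to them */
    pdf_obj *page = pdf_add_page(ctx, doc, mediabox, rotate, resources, contents);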
|
|
|
|
|
|
|
|
Only supports CBZ writing for now.
Also add a zip file writer.
|
|
|
|
No images.
The default stylesheet is preliminary, and will need improvements.
|
|
|
|
fz_read_int32_be.
|
|
Does not support page-break-before/after: avoid.
|
|
|
|
|
|
|
|
svg: Implement graphics state stack.
svg: Use idmap for symbol and use elements.
svg: Put viewport and viewBox in state stack.
svg: Rebase to version 1.9 master.
|
|
Previously, we would refuse to store any object in the store that
was larger than the store limits. We'd also refuse to store any
object that took the total store size over the limit.
This was wrong.
Consider the case where we have a store of 1 byte, and a page that
repeatedly uses the same font. The first time we meet the font, we
look in the store, it isn't there, we load it, and we try to store
it. The current code refuses to store it, and we continue, putting
that font into the display list.
The next time we meet the font, we look in the store, it still
isn't there, we load it, and we try to store it. Again we refuse to
store it, and that copy of the font goes into the display list.
The net effect of this is that we end up using far more memory in
total than we would have done had we stored the first one.
The code here, therefore, changes the store to always store objects
regardless of their size. Given that we have already loaded the
objects into memory before we store them, this doesn't actually
cost us any extra memory. If an object is dropped (bringing the
reference count down to 1, that being the reference for the store's copy),
then the object is NOT freed instantly, but will be freed either
on the next attempt to store an object, or on the next scavenging
malloc.
|
|
|
|
Fixes http://bugs.ghostscript.com/show_bug.cgi?id=696687
|
|
|
|
|
|
|
|
Also remove comment with links used for reference during implementation.
|
|
|