This makes it possible to redirect the standard output and standard
error streams to output streams of your choosing.
This means that now you can, in gdb, type:
(gdb) call pdf_print_obj(ctx, fz_stdout(ctx), obj, 0)
(gdb) call fflush(0)
or when dealing with an unresolved indirect reference:
(gdb) call pdf_print_obj(ctx, fz_stdout(ctx), pdf_resolve_indirect(ctx, ref), 0)
(gdb) call fflush(0)
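In application code the same redirection might look like this (a
minimal sketch; fz_set_stdout and fz_new_output_with_path are assumed
to be the relevant entry points):
/* Route everything written to fz_stdout(ctx) into a file. */
fz_output *out = fz_new_output_with_path(ctx, "trace.txt", 0);
fz_set_stdout(ctx, out);
pdf_print_obj(ctx, fz_stdout(ctx), obj, 0); /* now lands in trace.txt */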
|
Calculations need to be done differently.
|
There was a race condition on bgprint.pagenum that could cause
the bgprint worker to close down early, leaving the main thread
waiting for notification of its closedown.
|
Add -P flag to mudraw to do 'parallel' rendering. We shift rendering
onto a background thread, so that the main thread can continue
interpreting page n+1 while page n is being rendered.
To do this, we extract the core of the drawpage routine into
'dodrawpage', and either call it directly (in the normal case)
or from a bgprint worker thread (in the parallel case).
The threading construction exactly parallels that of the threaded
band rendering. We have a semaphore to start the render process,
a semaphore to indicate when the process has stopped, and the
thread itself.
The most complex thing here is the rejigging of the printfs
required to ensure that we still get the timings displayed in a
sane way.
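The construction, as a hedged sketch in POSIX terms (bgprint,
dodrawpage and the page hand-off fields stand in for mudraw's real
ones, which use per-platform wrappers):
#include <pthread.h>
#include <semaphore.h>

static struct
{
    sem_t start;      /* main -> worker: a page is ready to render */
    sem_t stop;       /* worker -> main: rendering has finished */
    int pagenum;      /* page to render; 0 requests shutdown */
    pthread_t thread;
} bgprint;

static void dodrawpage(int pagenum)
{
    /* core of the old drawpage routine */
}

static void *bgprint_worker(void *arg)
{
    for (;;)
    {
        sem_wait(&bgprint.start);   /* wait for page n to be handed over */
        int pagenum = bgprint.pagenum;
        if (pagenum == 0)
            break;                  /* shutdown requested */
        dodrawpage(pagenum);        /* render page n while the main */
        sem_post(&bgprint.stop);    /* thread interprets page n+1 */
    }
    sem_post(&bgprint.stop);        /* acknowledge shutdown */
    return NULL;
}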
|
When decoding < 8 bpp images, we need to allow for the fact
that the data is byte aligned at the end of each row by
being careful in our calculation of r_skip.
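Concretely (numbers mine, not from the commit): each row of
sub-byte-depth data is padded up to a whole byte, so in C:
/* Row stride for a w-pixel row at bpp bits per pixel: */
int stride = (w * bpp + 7) / 8;   /* e.g. w = 10, bpp = 1 -> 2 bytes */
/* The final byte of each row holds stride*8 - w*bpp padding bits,
   and any per-row skip such as r_skip must account for them. */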
|
And improve the header file commenting.
|
If the Contents of a page are an array, we were forgetting to
write the new singleton replacement into the dictionary.
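Illustratively (the exact call site and name macro are assumed, not
quoted from the change), the fix boils down to remembering the
write-back:
/* After flattening the Contents array into one new stream object,
   store that object back into the page dictionary: */
pdf_dict_put_drop(ctx, page, PDF_NAME_Contents, new_contents);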
|
Otherwise files (such as bug696754.pdf) can go wrong.
|
Spot https URLs and pass them to curl. If curl isn't built with https
support we'll fail, but then we'd fail anyway without trying.
|
The PDF spec says that line thickness of 0 should mean "1 device
pixel". We have been doing some dodgy logic where if the line
thickness as scaled by the ctm is small (< 0.1f), make it at
least 1 device pixel.
This can mean that a line can not qualify for being thickened at
36dpi, but can be thickened at 24dpi. The thickened line at 24dpi
is much thicker than the unthickened line at 36dpi, meaning that
we get a noticable shift in rendering.
Why do we do this strange logic? Well, presumably it's to avoid
thin lines dropping out completely.
We therefore move to some new logic. Firstly, we create a fudged
'aa_level' value, dependent on the antialias level. With AA level
0 (no antialiasing), this corresponds to 1 device pixel. For
maximum AA level (8), this corresponds to 1/5 of a device pixel.
Thus we should get 'continuous' results across different dpis.
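One formula consistent with those two endpoints (my interpolation,
not necessarily the code's exact curve):
/* Minimum stroke width, in device pixels, as a function of the AA
   level: 1 at level 0, 1/5 at level 8. */
static float min_line_width(int aa_level)
{
    return 1.0f / (1.0f + aa_level / 2.0f);
}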
|
Apparently neither OSX nor iOS supports unnamed semaphores,
so steal the gs versions and use them instead.
|
I was using fz_compressed_image when I should have been using
fz_pixmap_image.
|
Split compressed images (images based on a compressed buffer)
and pixmap images (images based on a pixmap) out into separate
subclasses.
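Roughly, the split looks like this (field names assumed; each
subclass embeds fz_image as its first member so it can be passed
wherever an fz_image is expected):
typedef struct
{
    fz_image super;                /* shared fz_image fields */
    fz_compressed_buffer *buffer;  /* undecoded source data */
} fz_compressed_image;

typedef struct
{
    fz_image super;
    fz_pixmap *tile;               /* already-decoded pixels */
} fz_pixmap_image;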
|
Move from ints to bits where possible.
|
For now, just use it for controlling image decoding and image scaling.
|
Update the core fz_get_pixmap_from_image code to allow fetching
a subarea of a pixmap. We pass in the required subarea, together
with the transformation matrix for the whole image.
On return, we have a pixmap at least as big as was requested,
and the transformation matrix is updated to map the supplied
area to the correct place on the screen.
The draw device is updated to use this as required. Everywhere
else passes NULLs in, and so gets unchanged behaviour.
The standard 'get_pixmap' function has been updated to decode
just the required areas of the bitmaps.
This means that banded rendering of pages will decode just the
image subareas that are required for each band, limiting the
memory use. The downside to this is that each band will redecode
the image again to extract just the section we want.
The image subareas are put into the fz_store in the same way
as full images. Currently image areas in the store are only
matched when they match exactly; subareas are not identified
as being able to use existing images.
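A hedged sketch of a caller on the new path (the signature is
inferred from the description above):
int w = 0, h = 0;
fz_matrix ctm = fz_identity;           /* transform for the whole image */
fz_irect subarea = { 0, 0, 256, 64 };  /* the band being rendered */
fz_pixmap *pix = fz_get_pixmap_from_image(ctx, image, &subarea, &ctm, &w, &h);
/* pix covers at least subarea, and ctm now maps it to the right
   place on screen; passing NULLs instead gives the old behaviour. */
fz_drop_pixmap(ctx, pix);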
|
The handling of not-decompressing images/fonts was geared towards
pdfclean usage, but now that we can create new PDF files, it makes
more sense to ask for images and fonts to be compressed, rather than
asking for them not to be decompressed with quirky interaction with
the 'expand' and 'deflate' flags.
If -f or -i are set, we will never decompress images, and we will
compress them if they are uncompressed.
If -d is set, we will first decompress all streams (modulo -f or -i).
If -z is set, we will then compress all uncompressed streams.
|
Garbage collected languages need a way to signal that they are done
with a device other than freeing it.
It is called implicitly by fz_drop_device, which takes care not to
call it again if it has already been called explicitly.
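In caller terms (assuming fz_close_device as the explicit entry
point):
fz_close_device(ctx, dev);  /* explicit: flush and finish the device */
fz_drop_device(ctx, dev);   /* frees it; will not close a second time */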
|
When we clone the context, we copy the AA levels from the
base context into the cloned context. This means that we
must set the AA levels in the base context BEFORE cloning
if we want them to be the same everywhere (or set them
explicitly in all contexts).
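For example (a sketch; allocator and locks elided):
fz_context *ctx = fz_new_context(NULL, NULL, FZ_STORE_DEFAULT);
fz_set_aa_level(ctx, 8);                     /* set on the base context first */
fz_context *worker = fz_clone_context(ctx);  /* the clone inherits level 8 */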
|
Use a macro to make fz_new_document nicer (akin to
fz_malloc_struct).
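The pattern, sketched (the underlying allocator name is assumed):
/* Like fz_malloc_struct, the macro hides the sizeof and the cast;
   usage would be e.g. cbz_document *doc = fz_new_document(ctx, cbz_document); */
#define fz_new_document(C, M) \
    ((M *)fz_new_document_of_size(C, sizeof(M)))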
|
Some new files hadn't been added to the solution, and we were
calling strcasecmp instead of fz_strcasecmp.
|
It's a lot of extra typing to prefix everything with "mupdf.".
|
Resources are defined before they are used, so it's only logical to
have the resource dictionary before the content buffer in the argument
list.
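So a page-creation call reads in definition order (a sketch only; the
function name and exact signature are assumed):
/* Resources first, then the contents that refer to them: */
pdf_obj *page = pdf_add_page(ctx, doc, mediabox, rotate, resources, contents);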
|
Only supports CBZ writing for now.
Also add a zip file writer.
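A hedged usage sketch, assuming the fz_document_writer API this work
leads to (the mediabox-by-value signature is an assumption):
fz_rect mediabox = { 0, 0, 595, 842 };
fz_document_writer *wri = fz_new_document_writer(ctx, "out.cbz", "cbz", NULL);
fz_device *dev = fz_begin_page(ctx, wri, mediabox);
/* ... run the page's display list or content through dev ... */
fz_end_page(ctx, wri);
fz_close_document_writer(ctx, wri);
fz_drop_document_writer(ctx, wri);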