When affine plotting with linear interpolation, we need to perform
different calculations for the texture and for the edge of the shape.
The edge of the shape needs to be calculated in exactly the same way
as for non-linearly interpolated shapes.
The 'texture' position needs to be offset by 1/2 a texture unit in
each direction so that the 'pure' color is given in the middle of
each texture cell of the image, not in its top left corner.
To achieve these aims, we actually offset the u/v positions by 1/2
(32768, given the fixed point we are using) and adjust for this in
the boundary tests.
I have a test file that shows this working, which I will attach to
the bug, and add to the regression suite.
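Roughly, the idea looks like this in 16.16 fixed point (the names below
are made up for illustration; this is not the actual draw-affine code):

    #define HALF_TEXEL 32768   /* 1/2 of a texture unit in 16.16 fixed point */

    /* u and v arrive already offset by HALF_TEXEL, so that linear
     * interpolation yields the 'pure' colour at the centre of each
     * texture cell rather than at its top left corner. */
    static int sample_in_bounds(int u, int v, int tex_w, int tex_h)
    {
        /* The boundary test compensates for the offset, so the edge of
         * the shape is treated exactly as in the non-interpolated case. */
        return u >= HALF_TEXEL && u < (tex_w << 16) + HALF_TEXEL &&
               v >= HALF_TEXEL && v < (tex_h << 16) + HALF_TEXEL;
    }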
|
Add some paranoid checks to pdf_graft_object to prevent user
errors from crashing mupdf.
|
We had some code in draw-affine that we didn't really understand.
Change this for some code that seems more plausible.
The voodoo code was showing up problems with the plotter rework
(don't really know why), but the non-voodoo code seems happier.
|
Remove unnecessary extra indirect object; pdf_add_object returns
an indirect reference already, so we don't need to duplicate it.
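As a before/after sketch (ctx, doc and obj are assumed to be in scope):

    /* Before: wrap the result of pdf_add_object in a second, redundant
     * indirect reference. */
    pdf_obj *ind = pdf_add_object(ctx, doc, obj);
    pdf_obj *ref = pdf_new_indirect(ctx, doc, pdf_to_num(ctx, ind), 0);

    /* After: pdf_add_object already returns an indirect reference, so
     * the extra object (and the extra reference) goes away. */
    pdf_obj *ref = pdf_add_object(ctx, doc, obj);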
|
Language bindings sometimes require objects to be reference counted.
|
Use a comma-separated list of flags and key/value pairs, for
example: "linearize,resolution=72,colorspace=gray"
|
It is not used by mupdf itself and was added in commit
9915a386ea1dab21c5bbd4a0c8012dd13dbda301 to make it easier for
sumatrapdf, but sumatrapdf has stopped using the interface.
|
This makes it possible to redirect the standard output and standard
error streams to output streams of your liking.
This means that now you can, in gdb, type:
(gdb) call pdf_print_obj(ctx, fz_stdout(ctx), obj, 0)
(gdb) call fflush(0)
or when dealing with an unresolved indirect reference:
(gdb) call pdf_print_obj(ctx, fz_stdout(ctx), pdf_resolve_indirect(ctx, ref), 0)
(gdb) call fflush(0)
|
Calculations need to be done differently.
|
There was a race condition on bgprint.pagenum that could cause
the bgprint worker to close down early, leaving the main thread
waiting for notification of its closedown.
|
Add -P flag to mudraw to do 'parallel' rendering. We shift rendering
onto a background thread, so that the main thread can continue
interpreting page n+1 while page n is being rendered.
To do this, we extract the core of the drawpage routine into
'dodrawpage', and either call it directly (in the normal case)
or from a bgprint worker thread (in the parallel case).
The threading construction exactly parallels that of the threaded
band rendering. We have a semaphore to start the render process,
a semaphore to indicate when the process has stopped, and the
thread itself.
The most complex thing here is the rejigging of the printfs
required to ensure that we still get the timings displayed in a
sane way.
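The shape of that construction, very roughly (this sketch uses plain
POSIX semaphores and invented names rather than mudraw's own wrappers;
initialisation and error handling are omitted):

    #include <semaphore.h>

    static sem_t start_sem;  /* main -> worker: a page is ready to render */
    static sem_t stop_sem;   /* worker -> main: rendering has finished    */
    static int finished;     /* set by the main thread before shutdown    */

    static void *bgprint_worker(void *arg)
    {
        (void)arg;
        for (;;)
        {
            sem_wait(&start_sem);   /* wait for the next render request */
            if (finished)           /* set before start_sem was posted  */
                break;
            /* ... dodrawpage(...) runs here, off the main thread ... */
            sem_post(&stop_sem);    /* tell the main thread we are done */
        }
        return NULL;
    }

    /* Main thread, per page: sem_wait(&stop_sem) for the previous page,
     * hand over page n, sem_post(&start_sem), then carry on interpreting
     * page n+1 while the worker renders page n. */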
|
When decoding < 8 bpp images, we need to allow for the fact
that the data is byte aligned at the end of each row by
being careful in our calculation of r_skip.
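For example, a 1 bpp image 10 pixels wide still occupies 2 whole bytes
(16 bits) per row, so 6 padding bits have to be skipped at the end of
every row. A small illustrative helper (not the actual decode code):

    /* Padding bits at the end of each row of a < 8 bpp image, given
     * that rows are byte aligned. */
    static int row_pad_bits(int width, int bpp)
    {
        int row_bits = width * bpp;
        int row_bytes = (row_bits + 7) / 8;  /* round up to whole bytes */
        return row_bytes * 8 - row_bits;     /* 10px at 1bpp -> 6 bits  */
    }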
|
And improve the header file commenting.
|
If the Contents of a page are an array, we were forgetting to
write the new singleton replacement into the dictionary.
|
Otherwise files (such as bug696754.pdf) can go wrong.
|
Spot https and pass to curl. If curl isn't built with https
support we'll fail, but then we'd fail anyway without trying.
|
The PDF spec says that line thickness of 0 should mean "1 device
pixel". We have been doing some dodgy logic where if the line
thickness as scaled by the ctm is small (< 0.1f), make it at
least 1 device pixel.
This can mean that a line can not qualify for being thickened at
36dpi, but can be thickened at 24dpi. The thickened line at 24dpi
is much thicker than the unthickened line at 36dpi, meaning that
we get a noticable shift in rendering.
Why do we do this strange logic? Well, presumably it's to avoid
thin lines dropping out completely.
We therefore move to some new logic. Firstly, we create a fudged
'aa_level' value, dependent on the antialias level. With AA level
0 (no antialiasing), this corresponds to 1 device pixel. For
maximum AA level (8), this corresponds to 1/5 of a device pixel.
Thus we should get 'continuous' results across different dpis.
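One formula with exactly those endpoints (a sketch of the idea, not
necessarily the precise expression used in the code):

    /* Minimum rendered line width in device pixels, as a function of
     * the antialias level: 1.0 at level 0, 1/5 (0.2) at level 8. */
    static float min_line_width(int aa_level)
    {
        return 2.0f / (aa_level + 2);
    }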
|
Apparently neither OSX nor iOS supports unnamed semaphores,
so steal the gs versions and use them instead.
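The usual replacement is a counting semaphore built from a mutex and a
condition variable, along these lines (a sketch, not the code actually
borrowed from gs):

    #include <pthread.h>

    typedef struct
    {
        pthread_mutex_t mutex;
        pthread_cond_t cond;
        int count;
    } my_semaphore;

    static void my_semaphore_wait(my_semaphore *sem)
    {
        pthread_mutex_lock(&sem->mutex);
        while (sem->count == 0)
            pthread_cond_wait(&sem->cond, &sem->mutex);
        sem->count--;
        pthread_mutex_unlock(&sem->mutex);
    }

    static void my_semaphore_post(my_semaphore *sem)
    {
        pthread_mutex_lock(&sem->mutex);
        sem->count++;
        pthread_cond_signal(&sem->cond);
        pthread_mutex_unlock(&sem->mutex);
    }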
|
I was using fz_compressed_image when I should have been using
fz_pixmap_image.
|
Split compressed images (images based on a compressed buffer)
and pixmap images (images based on a pixmap) out into separate
subclasses.
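In C this sort of split is usually done by embedding the base struct at
the start of each subclass, along these lines (type and field names
here are illustrative, not the exact MuPDF definitions):

    /* Requires the fitz headers for fz_image, fz_compressed_buffer
     * and fz_pixmap. */
    typedef struct
    {
        fz_image super;               /* base class must come first  */
        fz_compressed_buffer *buffer; /* the undecoded image data    */
    } my_compressed_image;

    typedef struct
    {
        fz_image super;
        fz_pixmap *tile;              /* the already-decoded samples */
    } my_pixmap_image;

A pointer to either subclass can then be passed wherever an fz_image *
is expected, and cast back inside that subclass's own handlers.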
|
Move from ints to bits where possible.
|
For now, just use it for controlling image decoding and image scaling.
|
Update the core fz_get_pixmap_from_image code to allow fetching
a subarea of a pixmap. We pass in the required subarea, together
with the transformation matrix for the whole image.
On return, we have a pixmap at least as big as was requested,
and the transformation matrix is updated to map the supplied
area to the correct place on the screen.
The draw device is updated to use this as required. Everywhere
else passes NULLs in, and so gets unchanged behaviour.
The standard 'get_pixmap' function has been updated to decode
just the required areas of the bitmaps.
This means that banded rendering of pages will decode just the
image subareas that are required for each band, limiting the
memory use. The downside to this is that each band will redecode
the image again to extract just the section we want.
The image subareas are put into the fz_store in the same way
as full images. Currently image areas in the store are only
matched when they match exactly; subareas are not identified
as being able to use existing images.
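A call might look roughly like this (a sketch; the argument shapes
follow the description above, so check the fitz image header for the
exact prototype; ctx and image are assumed to be in scope):

    fz_irect subarea = { 0, 0, 256, 64 };      /* image pixels wanted for this band */
    fz_matrix ctm = { 100, 0, 0, 100, 0, 0 };  /* transform for the whole image     */
    int w, h;
    fz_pixmap *pix;

    pix = fz_get_pixmap_from_image(ctx, image, &subarea, &ctm, &w, &h);
    /* pix covers at least 'subarea', and ctm has been updated to map
     * that subarea to the correct place on the destination. */
    fz_drop_pixmap(ctx, pix);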
|
The handling of not-decompressing images/fonts was geared towards
pdfclean usage; but now that we can create new PDF files, it makes
more sense to ask for images and fonts to be compressed, rather than
asking for them not to be decompressed, with its quirky interaction
with the 'expand' and 'deflate' flags.
If -f or -i are set, we will never decompress images, and we will
compress them if they are uncompressed.
If -d is set, we will first decompress all streams (modulo -f or -i).
If -z is set, we will then compress all uncompressed streams.
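The per-stream effect can be summarised roughly as follows
(illustrative logic only, not the actual mutool code):

    typedef enum { STREAM_OTHER, STREAM_IMAGE, STREAM_FONT } stream_kind;

    /* Decide what happens to one stream, following the order above:
     * 'compressed' is updated in place. */
    static void process_stream(stream_kind kind, int *compressed,
        int opt_f, int opt_i, int opt_d, int opt_z)
    {
        /* -f / -i: never decompress fonts / images, and compress them
         * if they happen to be uncompressed. */
        int keep = (kind == STREAM_IMAGE && opt_i) ||
                   (kind == STREAM_FONT && opt_f);

        if (opt_d && !keep && *compressed)
            *compressed = 0;   /* -d: decompress first, modulo -f / -i  */

        if ((opt_z || keep) && !*compressed)
            *compressed = 1;   /* -z (or -f / -i): compress what is raw */
    }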
|
Garbage collected languages need a way to signal that they are done
with a device other than freeing it.
It is called implicitly from fz_drop_device, which therefore takes
care not to call it again if it has already been called explicitly.
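In code using the C API directly, the pattern looks roughly like this
(a sketch following recent MuPDF headers, with ctx, page and pix
assumed to be in scope; fz_close_device is the close entry point in
current MuPDF):

    fz_device *dev = fz_new_draw_device(ctx, fz_identity, pix);

    fz_run_page(ctx, page, dev, fz_identity, NULL);
    fz_close_device(ctx, dev);  /* explicit "I am done with this device" */
    fz_drop_device(ctx, dev);   /* dropping also closes implicitly, but
                                 * not a second time once already closed */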