Age | Commit message | Author |
|
As requested by customer 530.
|
|
Don't decompose meshes just to find their bbox.
|
|
V8_OK needs to be set before the includes. LOCAL_LDLIBS needs to be set
after them.
|
|
fz_bound_path already takes care of stroke expansion - don't apply
it twice.
|
|
|
|
Only split as many components of colors in the tensor patch as we
actually use.
|
|
Apply the same optimisations to mesh type 6 as were just applied to
mesh type 7.
|
|
|
|
Avoid needless copies. Thanks to Christophe from customer 530 for
the original version of this patch.
Also tweak clipx and clipy so that both intersection calculations
in each routine are identical.
|
|
Softmasks can be applied in two places in our code: once when starting a
group, and once when running an XObject. The two implementations had
drifted apart. To avoid this in future, pull the two together.
This solves the bug, apart from the issue of transfer functions not
working.
Also, fix another issue seen in cluster testing. For luminance smasks
the bbox is only used to clip the contents drawn - the background color
extends into the surrounding area. Fix the code to respect this.
And another problem: text in soft masks would upset text outside the
SMasks; fix this by storing/restoring the text settings in the
interpreter state around the smask rendering.
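A minimal sketch of that last fix, using stand-in types rather than the
real interpreter state: snapshot the text settings before running the
smask and put them back afterwards.

    /* Illustrative only: stand-in types, not the real pdf interpreter state. */
    typedef struct { float char_space, word_space, leading, rise; } text_state;
    typedef struct { text_state text; /* ...ctm, clip, fills, etc... */ } gstate;

    static void run_smask_with_text_state_saved(gstate *gs,
        void (*run_smask_xobject)(gstate *))
    {
        text_state saved = gs->text;   /* text inside the smask may change this */
        run_smask_xobject(gs);
        gs->text = saved;              /* so text after the smask is unaffected */
    }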
|
|
|
|
In order to be able to output images (either in the pdfwrite device or
in the html conversion), we need to be able to get to the original
compressed data stream (or else we're going to end up recompressing
images). To do that, we need to expose all of the contents of pdf_image
into fz_image, so it makes sense to just amalgamate the two.
This has knock-on effects for the creation of indexed colorspaces,
requiring some of that logic to be moved.
Also, we need to make xps use the same structures; this means pushing
PNG and TIFF support into the decoding code. We also need to be able
to load just the headers from PNG/TIFF/JPEG files, as xps doesn't
include dimension/resolution information.
Also, separate out all the fz_image stuff into fitz/res_image.c rather
than having it in res_pixmap.
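A rough sketch of the kind of structure this implies; the fields here
are illustrative, not the actual fz_image definition.

    #include <stddef.h>

    /* Illustrative layout only - not the real fz_image. */
    typedef struct
    {
        int w, h;                   /* dimensions, loadable from headers alone */
        int xres, yres;             /* resolution info needed by xps */
        void *colorspace;           /* stand-in for an fz_colorspace reference */
        unsigned char *compressed;  /* the original JPEG/PNG/TIFF/flate stream */
        size_t compressed_len;
        int compression;            /* which codec the buffer above is in */
    } image_sketch;

Keeping the compressed buffer alongside the decode parameters is what
lets pdfwrite or the html output re-emit the original stream instead of
recompressing it.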
|
|
|
|
PDFDocEncoding for crypt revisions <= 4, UTF-8 for newer.
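For illustration (the helper below is hypothetical), the rule amounts to
a simple switch on the crypt revision:

    /* Hypothetical helper: which text encoding to use for strings,
     * based on the document's crypt revision. */
    static const char *string_encoding_for_revision(int crypt_revision)
    {
        return crypt_revision <= 4 ? "PDFDocEncoding" : "UTF-8";
    }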
|
|
|
|
|
|
Previously we combined the softmask xobject->matrix with the ctm
to make gstate->softmask_ctm at load time. This meant that when
we ran the softmask xobject the xobject->matrix was reapplied a
second time.
The fix is to keep the xobject->matrix out and apply it manually
wherever we use the softmask_ctm (which is just for the bbox
transformation currently).
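A sketch of the intended ordering, with stand-in matrix types (the real
code uses fz_matrix): the xobject's matrix is no longer baked into the
stored softmask_ctm, but concatenated once, at the point of use.

    /* Illustrative 2D affine matrix and concatenation - not the fitz types. */
    typedef struct { float a, b, c, d, e, f; } mtx;

    static mtx mtx_concat(mtx m, mtx n)   /* apply m, then n */
    {
        mtx r;
        r.a = m.a * n.a + m.b * n.c;
        r.b = m.a * n.b + m.b * n.d;
        r.c = m.c * n.a + m.d * n.c;
        r.d = m.c * n.b + m.d * n.d;
        r.e = m.e * n.a + m.f * n.c + n.e;
        r.f = m.e * n.b + m.f * n.d + n.f;
        return r;
    }

    /* At load time only the ctm is stored as softmask_ctm; at use time the
     * xobject matrix is applied exactly once, e.g. for the bbox transform. */
    static mtx softmask_bbox_ctm(mtx xobject_matrix, mtx softmask_ctm)
    {
        return mtx_concat(xobject_matrix, softmask_ctm);
    }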
|
|
|
|
Otherwise loading calc.pdf and clicking buttons causes a null
pointer exception.
|
|
We have this as a local class, and the import was a hangover from old
code; it stopped things working on Froyo/Gingerbread.
|
|
Until someone can get me v8 libs that work on armeabi at least!
|
|
|
|
|
|
Android resolves references at class load time, so when MuPDFActivity
is loaded, it tries to resolve AnimatorInflater. This fails on a 2.2
system.
The fix is to push the code into 'SafeAnimatorInflater'. When
MuPDFActivity is loaded, SafeAnimatorInflater is resolved, but
it's not actually loaded until it's used. We never use it unless
we have at least Honeycomb, hence we never try to resolve the missing
class.
|
|
The "-G gamma" entry in the usage string was different in style to
all the other entries.
|
|
|
|
|
|
Less 'pretty', but more in the style of the others.
|
|
|
|
|
|
This actually turned out to be far easier than I'd feared; remove the
explicit check that stopped this working, and ensure that we pass the
correct value in for the 'indexed' param.
Add a function to check for colorspaces being indexed. Bit nasty that
this requires a strcmp...
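Presumably the check looks something like this (a guess at the shape,
with a stand-in struct, not the actual code):

    #include <string.h>

    /* Stand-in for fz_colorspace: the real struct carries much more, but the
     * indexed test boils down to comparing the name. */
    typedef struct { char name[16]; } colorspace_sketch;

    static int colorspace_is_indexed(const colorspace_sketch *cs)
    {
        return cs != NULL && strcmp(cs->name, "Indexed") == 0;
    }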
|
|
|
|
Thanks to Michael Weber.
|
|
Disable some features when in reflow mode.
Disable features when the document format prohibits them.
Add a few instructional on-screen info messages.
|
|
This won't work for documents other than PDF.
Also, we should save the file before printing if it has been changed.
|
|
|
|
Implementations remain unexposed, but this means we can safely
pass functions in shades without having to 'sample' them (though
we may still choose to do this for speed).
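One way to picture this (all names here are invented for illustration):
the function stays an opaque handle with an evaluate entry point, so a
shading can call it on demand rather than baking it into a sampled table.

    /* Illustrative opaque-function interface - not the real fitz API. */
    typedef struct function_handle function_handle;

    struct function_handle
    {
        void (*eval)(function_handle *fn, const float *in, int inlen,
                     float *out, int outlen);
        void *impl;   /* implementation details remain unexposed */
    };

    /* A shading can evaluate directly... */
    static void shade_color_at(function_handle *fn, float t, float *rgb)
    {
        fn->eval(fn, &t, 1, rgb, 3);
    }
    /* ...though a renderer may still choose to sample fn into a lookup
     * table for speed. */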
|
|
|
|
The div/spans still use table style rendering, but it's simpler
code (and html) this way.
|
|
|
|
|
|
|
|
Update the fz_text_analysis function to look for 'regions'; use these to
spot columns etc., along with width/alignment info.
"Intelligently" merge lines based on this.
Update the html output to make use of this extra information.
|
|
If a line starts with a recognised unicode bullet char, then split
the paragraph there. Don't use this line's separation from the previous
line to determine the paragraph line step.
Also attempt to spot numbered list items (digits or roman numerals).
The digits/roman numerals code is disabled by default, as while it
worked, later commits made it less useful - but it may be worth
reinstating later.
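A sketch of the bullet test; the set of codepoints here is illustrative
and the real list may differ.

    /* Is this unicode codepoint a bullet-like character that should force a
     * paragraph split at the start of a line? (Illustrative subset.) */
    static int is_bullet_char(int ucs)
    {
        switch (ucs)
        {
        case 0x2022: /* BULLET */
        case 0x2023: /* TRIANGULAR BULLET */
        case 0x2043: /* HYPHEN BULLET */
        case 0x2219: /* BULLET OPERATOR */
        case 0x25E6: /* WHITE BULLET */
            return 1;
        default:
            return 0;
        }
    }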
|
|
Rework the text extraction structures - the broad strokes are similar
but we now hold more information at each stage to enable us to perform
more detailed analysis on the structure of the page.
We now hold:
fz_text_char's (the position, ucs value, and style of each char).
fz_text_span's (sets of chars that share the same baseline/transform,
with no more than an expected amount of whitespace between each char).
fz_text_line's (sets of spans that share the same baseline (more or
less, allowing for super/subscript), but possibly with a larger than
expected amount of whitespace between them).
fz_text_block's (sets of lines that follow one another).
After fz_text_analysis is called, we hope to have fz_text_blocks split
such that each block is a paragraph.
This new implementation has the same restrictions as the current
implementation it replaces, namely that chars are only considered for
addition onto the most recent span at the moment, but this revised form
is designed to allow easier extension, and for this restriction to
be lifted.
Also add simple paragraph splitting based on finding the most common
'line distance' in blocks.
When we add spans together to collate them into lines, we record the
'horizontal' and 'vertical' spacing between them. (Not actually
horizontal or vertical, so much as 'in the direction of writing' and
'perpendicular to the direction of writing').
The 'horizontal' value enables us to more correctly output spaces when
converting to (say) html later.
The 'vertical' value enables us to spot subscripts and superscripts etc,
as well as small changes in the baseline due to style changes. We are
careful to base the baseline comparison on the baseline for the line,
not the baseline for the previous span, as otherwise superscripts/
subscripts on the end of the line affect what we match next.
Also, we are less tolerant of vertical shifts after a large gap. This
avoids false positives where different columns just happen to almost
line up.
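A cut-down model of the hierarchy described above; the field names are
illustrative, not the actual fitz definitions.

    /* Illustrative only - a simplified model of the structures described. */
    typedef struct { float x, y; } point_sketch;

    /* One character: position, ucs value and style. */
    typedef struct
    {
        int ucs;
        point_sketch origin;
        void *style;            /* stand-in for a shared style record */
    } text_char_sketch;

    /* A span: chars sharing a baseline/transform, with no more than the
     * expected amount of whitespace between them. The spacing back to the
     * previous span is recorded when spans are collated into a line. */
    typedef struct
    {
        text_char_sketch *chars;
        int len;
        float h_gap, v_gap;     /* along / perpendicular to the writing direction */
    } text_span_sketch;

    /* A line: spans sharing a baseline, give or take super/subscripts. */
    typedef struct
    {
        text_span_sketch *spans;
        int len;
    } text_line_sketch;

    /* A block: consecutive lines; after analysis, ideally one paragraph. */
    typedef struct
    {
        text_line_sketch *lines;
        int len;
    } text_block_sketch;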
|
|
|
|
|
|
Don't subtract the itemsize on error when we haven't added it
yet.
|
|
When calculating the bbox to store in display list nodes, we were
forgetting to allow for the stroke state.
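For illustration (simplified, with stand-in types): allowing for the
stroke means growing the geometric bbox by at least half the line width,
and more if miter joins can poke out further.

    /* Illustrative only - the real code also accounts for caps, joins and
     * the ctm scale. */
    typedef struct { float x0, y0, x1, y1; } rect_sketch;

    static rect_sketch expand_for_stroke(rect_sketch r, float linewidth,
                                         float miterlimit)
    {
        float expand = 0.5f * linewidth;
        if (miterlimit > 1)
            expand *= miterlimit;   /* miter joins can extend past the path */
        r.x0 -= expand; r.y0 -= expand;
        r.x1 += expand; r.y1 += expand;
        return r;
    }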
|
|
When storing tiling bitmaps from the draw_device to the store, we
frequently hit the case where we insert tile records that are already
there. (This also happens in other cases, such as an image being decoded
simultaneously on two different threads, but more rarely).
In such cases, the existing code attempts to evict store contents to
bring the size down enough to fit the new object in, only to find that
it needn't have. This patch attempts to fix that behaviour.
The only way we know if an equivalent entry is in place already is to
try to place the new one; we therefore do this earlier in the store
function. If this insertion succeeds (no equivalent entry already
exists), we are safe to evict as required.
Should the eviction be incapable of removing enough from the store to
make it fit, we now need to remove the entry we just added to the hash
table.
To avoid doing a full (and potentially expensive) linear probe, we
amend the hash table functions slightly. Firstly, we add a new function,
fz_hash_insert_with_pos, that does the insert but returns the position
within the hashtable at which the entry was inserted. Secondly, we add
a new fz_hash_remove_fast function that takes this position as an
argument.
The 'fast' removal function checks to see whether the entry is still
correct (it always should be unless we have been very unlucky with a
table rebuild, or another hashtable operation happening at the same time)
and can quickly remove the entry. If lightning has struck, it works
the old (slower) way.
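A toy model of the resulting control flow. The table below is a flat
array rather than a real hash, but insert_with_pos and remove_fast play
the roles of fz_hash_insert_with_pos and fz_hash_remove_fast described
above; all names and signatures here are illustrative, not the fitz API.

    #include <assert.h>
    #include <stddef.h>

    #define SLOTS 64

    /* Toy table: insertion reports the slot used, so removing that same
     * entry later can be done without searching. */
    typedef struct { void *key[SLOTS]; void *val[SLOTS]; int len; } table_sketch;

    static void *insert_with_pos(table_sketch *t, void *key, void *val, int *pos)
    {
        int i;
        for (i = 0; i < t->len; i++)
            if (t->key[i] == key)
                return t->val[i];        /* equivalent entry already present */
        assert(t->len < SLOTS);          /* toy model: never fills */
        t->key[t->len] = key;
        t->val[t->len] = val;
        *pos = t->len++;
        return NULL;
    }

    static void remove_fast(table_sketch *t, void *key, int pos)
    {
        if (pos < t->len && t->key[pos] == key)   /* fast path: slot unchanged */
        {
            t->key[pos] = t->key[--t->len];       /* swap last entry into hole */
            t->val[pos] = t->val[t->len];
        }
        /* (The real fz_hash_remove_fast falls back to a slow removal if the
         *  slot no longer matches, e.g. after a table rebuild.) */
    }

    typedef struct { table_sketch table; size_t size, max; } store_sketch;

    static int evict_until_room(store_sketch *s, size_t need)
    {
        /* Stub: the real store walks an LRU list evicting entries here;
         * report whether the new item now fits. */
        return s->size + need <= s->max;
    }

    static int store_item(store_sketch *s, void *key, void *val, size_t itemsize)
    {
        int pos;

        /* Insert first: if an equivalent entry already exists there is
         * nothing to make room for. */
        if (insert_with_pos(&s->table, key, val, &pos) != NULL)
            return 0;

        if (!evict_until_room(s, itemsize))
        {
            /* Could not free enough space: take the new entry out again,
             * cheaply, using the remembered position. */
            remove_fast(&s->table, key, pos);
            return -1;
        }

        s->size += itemsize;
        return 0;
    }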
|