fz_bound_path already takes care of stroke expansion - don't apply
it twice.
|
|
Only split as many components of colors in the tensor patch as we
actually use.
|
|
Apply the same optimisations to mesh type 6 as were just applied to
mesh type 7.
|
|
|
In order to be able to output images (either in the pdfwrite device or
in the html conversion), we need to be able to get to the original
compressed data stream (or else we're going to end up recompressing
images). To do that, we need to expose all of the contents of pdf_image
into fz_image, so it makes sense to just amalgamate the two.
This has knock-on effects for the creation of indexed colorspaces,
requiring some of that logic to be moved.
We also need to make xps use the same structures; this means pushing
PNG and TIFF support into the decoding code, and being able to load
just the headers from PNG/TIFF/JPEGs, as xps doesn't include
dimension/resolution information.
Finally, separate out all the fz_image stuff into fitz/res_image.c
rather than having it in res_pixmap.
|
|
This actually turned out to be far easier than I'd feared; remove the
explicit check that stopped this working, and ensure that we pass the
correct value in for the 'indexed' param.
Add a function to check for colorspaces being indexed. Bit nasty that
this requires a strcmp...
|
|
|
|
Implementations remain unexposed, but this means we can safely
pass functions in shades without having to 'sample' them (though
we may still choose to do this for speed).
|
|
The div/spans still use table style rendering, but it's simpler
code (and html) this way.
|
|
|
Update the fz_text_analysis function to look for 'regions'; use these
to spot columns etc., along with width and alignment info.
"Intelligently" merge lines based on this.
Update html output to make use of this extra information.
|
|
If a line starts with a recognised unicode bullet char, then split
the paragraph there, and don't use this line's separation from the
previous line to determine the paragraph line step.
Also attempt to spot numbered list items (digits or roman numerals).
The digits/roman numerals code is disabled by default, as while it
worked, later commits made it less useful - but it may be worth
reinstating later.
|
|
Rework the text extraction structures - the broad strokes are similar
but we now hold more information at each stage to enable us to perform
more detailed analysis on the structure of the page.
We now hold:
fz_text_char's (the position, ucs value, and style of each char).
fz_text_span's (sets of chars that share the same baseline/transform,
with no more than an expected amount of whitespace between each char).
fz_text_line's (sets of spans that share the same baseline (more or
less, allowing for super/subscript), possibly with a larger than
expected amount of whitespace between them).
fz_text_block's (sets of lines that follow one another)
After fz_text_analysis is called, we hope to have fz_text_blocks split
such that each block is a paragraph.
This new implementation has the same restriction as the one it
replaces, namely that chars are currently only considered for addition
onto the most recent span, but this revised form is designed to be
easier to extend, and to allow that restriction to be lifted.
Also add simple paragraph splitting based on finding the most common
'line distance' in blocks.
When we add spans together to collate them into lines, we record the
'horizontal' and 'vertical' spacing between them. (Not actually
horizontal or vertical, so much as 'in the direction of writing' and
'perpendicular to the direction of writing').
The 'horizontal' value enables us to more correctly output spaces when
converting to (say) html later.
The 'vertical' value enables us to spot subscripts and superscripts etc,
as well as small changes in the baseline due to style changes. We are
careful to base the baseline comparison on the baseline for the line,
not the baseline for the previous span, as otherwise superscripts/
subscripts on the end of the line affect what we match next.
Also, we are less tolerant of vertical shifts after a large gap. This
avoids false positives where different columns just happen to almost
line up.
|
|
Don't subtract the itemsize on error when we haven't added it
yet.
|
|
When calculating the bbox to store in display list nodes, we had been
forgetting to allow for the stroke state.
|
|
When storing tiling bitmaps from the draw_device to the store, we
frequently hit the case where we insert tile records that are already
there. (This also happens in other cases, such as an image being decoded
simultaneously on 2 different threads, but more rarely).
In such cases, the existing code attempts to evict store contents to
bring the size down enough to fit the new object in, only to find that
it needn't have. This patch attempts to fix that behaviour.
The only way we know if an equivalent entry is in place already is to
try to place the new one; we therefore do this earlier in the store
function. If this encaching succeeds (no equivalent entry already
exists) we are safe to evict as required.
Should the eviction be incapable of removing enough from the store to
make it fit, we now need to remove the entry we just added to the hash
table.
To avoid doing a full (and potentially expensive) linear probe, we
amend the hash table functions slightly. Firstly, we add a new function,
fz_hash_insert_with_pos, that does the insert but also returns the
position within the hashtable at which the entry was inserted. Secondly,
we add a new fz_hash_remove_fast function that takes this position as
an argument.
The 'fast' removal function checks to see whether the entry is still
correct (it always should be unless we have been very unlucky with a
table rebuild, or another hashtable operation happening at the same time)
and can quickly remove the entry. If lightning has struck, it works
the old (slower) way.
|
|
If we find that the store already contains a copy of an object, then
we don't reinsert it. We should therefore undo the addition of the
object size that we just did.
|
|
Thanks to Brian Nixon for pointing this out.
|
|
Some -Wshadow ones, plus some 'set but not used' ones.
|
|
|
|
Ensure the pointer is non-NULL before dereferencing it.
|
|
|
|
This requires a slight change to the device interface.
Callers that use fz_begin_tile will see no change (and no caching
will be done). We add a new fz_begin_tile_id function that takes an
extra 'id' parameter, and returns 0 or 1. If the id is 0 then the
function behaves exactly as fz_begin_tile does, and always returns 0.
The PDF and XPS code continues to call the old (uncached) version.
The display list code however generates a unique id for every
BEGIN_TILE node, and passes this in.
If the id is non zero, then it is taken to be a unique identifier
for this tile; the implementer of the fz_begin_tile_id entry point
can choose to use this to implement caching. If it chooses to ignore
the id (and do no caching), it returns 0.
If the device implements caching, then it can check on entry for a
previously rendered tile with the appropriate matrix and a matching id.
If it finds one, then it returns 1. It is the caller's responsibility
to then skip over all the device calls that would usually happen to
render the tiles (i.e. to skip forward to the matching 'END_TILE'
operation).
|
|
The font bbox is wrong in some fonts, so any calculations we base
on that will be wrong; in particular this affects fz_bound_glyph.
We now spot an illegal bbox, and use a 'large' default.
|
|
|
|
This fixes bug #693664, and also simplifies app code.
The example file attached to the bug produces strange results, but that
is because the QuadPoint information is incorrect.
|
|
When running under Windows, replace fopen with our own fopen_utf8
that converts from utf8 to unicode before calling the unicode
version of fopen.
|
|
Use of the bbox device to derive the area of the display list can lead
to bad results because of heuristics used to handle corners of stroked
paths.
|
|
If the colorspace given in the dictionary of a JPX image differs from
the colorspace given in the image itself, decode to the native image
format, then convert.
This goes a long way towards fixing "1439 - color softmask fails to
draw jpx image.pdf" (aka hivemind.pdf). The lack of transfer function
support hopefully explains the rest.
|
|
Also change the way we pass the text rectangles so that
non-axis-aligned ones can be permitted, and relocate the code that
calculates the strike-out lines from the bounding boxes.
|
|
|
|
Avoid heap overflow in the error case in fz_end_tile.
Avoid leaking all previously loaded annotations from pdf_load_annots
if pdf_is_dict throws an exception.
Various whitespace fixes.
Many thanks to zeniko.
|
|
Data starting with a UTF-8 BOM followed by a UTF-16 BOM was treated as
UTF-16 rather than UTF-8. Clean up the BOM detection logic.
Thanks to zeniko.
|
|
Thanks to zeniko.
|
|
Thanks to zeniko.
|
|
Thanks to zeniko.
|
|
Thanks to zeniko.
|
|
Thanks to zeniko.
|
|
Thanks to zeniko.
|
|
Thanks to zeniko.
|
|
Thanks to zeniko.
|
|
Thanks to zeniko.
|
|