Age | Commit message | Author |
|
The primary motivator for this is so that we can print floating point
values and get the full accuracy out, without having to print 1.5 as
1.5000000, and without getting 23e24 etc.
We only support %c, %f, %d, %o, %x and %s currently.
We only support the zero padding qualifier, for integers.
We do support some extensions:
%C turns values >=128 into UTF-8.
%M prints a fz_matrix.
%R prints a fz_rect.
%P prints a fz_point.
We also implement an fprintf variant on top of this to allow for
consistent results when using fz_output.
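The trailing-zero trimming described above can be sketched as follows. This is a minimal illustration, not MuPDF's actual fz_printf code; format_float is a hypothetical helper name, and it only covers values that %f represents sensibly (very large magnitudes would still need the non-exponent handling the real code provides).

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format a float in plain decimal, then trim trailing zeros (and a
 * bare trailing '.') so 1.5 prints as "1.5" rather than "1.500000". */
static void format_float(char *buf, size_t size, double value)
{
	char *dot, *end;

	snprintf(buf, size, "%f", value); /* %f never produces exponent form */
	dot = strchr(buf, '.');
	if (dot)
	{
		end = buf + strlen(buf);
		while (end > dot + 1 && end[-1] == '0')
			--end;           /* strip trailing zeros */
		if (end == dot + 1)
			--end;           /* strip a now-bare '.' too */
		*end = '\0';
	}
}
```
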
|
|
Previously, the pdf_process buffer implementation did not understand inline images.
In order to make this work without needlessly duplicating complex code
from within pdf-op-run, the parsing of inline images has been moved to
happen in pdf-interpret.c. When the op_table entry for BI is called
it now expects the inline image to be in csi->img and the dictionary
object to be in csi->obj.
To make this work, we have had to improve the handling of inline images
in general. While non-inline images have been loaded and held in
memory in their compressed form and only decoded when required, until
now we have always loaded and decoded inline images immediately. This
has been due to the difficulty in knowing how many bytes of data to
read from the stream - we know the length of the stream once
uncompressed, but relating this to the compressed length is hard.
To cure this we introduce a new type of filter stream, a 'leecher'.
We insert a leecher stream before we build the filters required to
decode the image. We then read and discard the appropriate number
of uncompressed bytes from the filters. This pulls the compressed
data through the leecher stream, which stores it in an fz_buffer.
Thus images are now always held in their compressed forms in memory.
The pdf-op-run implementation is now trivial. The only real complexity
in the pdf-op-buffer implementation is the need to ensure that the
/Filter entry in the dictionary object matches the exact point at
which we backstopped the decompression.
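The 'leecher' idea can be modelled in miniature. This is a hedged sketch, not MuPDF's filter API: leecher and leecher_read are hypothetical names, and realloc failure handling is omitted for brevity.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* A pass-through stream that appends every byte it serves to a side
 * buffer. Reading the known number of *uncompressed* bytes through the
 * decode filters stacked on top of it pulls exactly the right amount of
 * compressed data through, leaving a copy of the raw bytes behind. */
typedef struct
{
	const unsigned char *src; /* underlying (compressed) data */
	size_t src_len, src_pos;
	unsigned char *leeched;   /* grows as data flows through */
	size_t leeched_len;
} leecher;

static size_t leecher_read(leecher *l, unsigned char *out, size_t n)
{
	size_t avail = l->src_len - l->src_pos;
	if (n > avail)
		n = avail;
	memcpy(out, l->src + l->src_pos, n);
	l->leeched = realloc(l->leeched, l->leeched_len + n);
	memcpy(l->leeched + l->leeched_len, l->src + l->src_pos, n);
	l->leeched_len += n;
	l->src_pos += n;
	return n;
}
```
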
|
|
Currently fz_streams have a 4K buffer within their header. The call
to read from a stream fills this buffer, resulting in more data being
pulled from any underlying stream than we might like. This causes
problems with the forthcoming 'leech' filter.
Here we simplify the fields available in the public stream header.
No specific buffer is given; simply the read and write pointers.
The underlying 'read' function is replaced by a 'next' function
that makes the next block of data available and returns the first
character of it (or EOF).
A caller to the 'next' function should supply the maximum number of
bytes that it knows it will need (possibly not now, but eventually).
This enables the underlying stream to efficiently decode just enough.
The underlying stream is free to return fewer bytes, or more, if it
wants to.
The exact size of the 'block' of data returned will depend on the
filter in use and (possibly) the data therein.
Callers can get the currently available amount of data by calling
fz_available (but again should pass the maximum amount of data they know
they will need). The only time this will ever return 0 is if we have
hit EOF.
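A toy model of the revised interface, using hypothetical names (my_stream, mem_next) rather than MuPDF's actual fz_stream fields, shows how the read/write pointers and the 'next' refill function interact:

```c
#include <assert.h>
#include <stdio.h>

/* Public part is just a data window; 'next' refills it and returns the
 * first byte of the new block, or EOF. */
typedef struct my_stream
{
	const unsigned char *rp, *wp;                   /* current data window */
	int (*next)(struct my_stream *stm, size_t max); /* refill; first byte or EOF */
	void *state;
} my_stream;

static int my_read_byte(my_stream *stm)
{
	if (stm->rp < stm->wp)
		return *stm->rp++;
	return stm->next(stm, 1); /* ask for at least 1 byte; may get more */
}

/* A memory-backed stream can always expose everything it has left,
 * regardless of the 'max' hint. */
typedef struct { const unsigned char *data; size_t len, pos; } mem_state;

static int mem_next(my_stream *stm, size_t max)
{
	mem_state *ms = stm->state;
	(void)max;
	if (ms->pos >= ms->len)
		return EOF;
	stm->rp = ms->data + ms->pos;
	stm->wp = ms->data + ms->len;
	ms->pos = ms->len;
	return *stm->rp++;
}
```

A compressed filter's 'next' would instead decode roughly max bytes at a time, which is what lets callers that know their eventual needs avoid pulling excess data through the underlying stream.
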
|
|
Gridfitting can increase the required width/height of images by up to
2 pixels. This makes images that are rendered very small highly
sensitive to over-quantisation.
This can produce 'mushier' images than it should, for instance on
tests/Ghent_V3.0/090_Font-Support_x3.pdf (pgmraw, 72dpi)
|
|
This avoids leaks when pdf_clear_xref etc are used.
|
|
Currently, when parsing, each time we encounter a name, we throw away
the last name we had. BDC operators are called with:
/Name <object> BDC
If the <object> is a name, we lose the original /Name.
To fix this, parsing a name when we already have a name will cause
the name to be stored as an object.
This has various knock-on effects throughout the code, which must now
read from csi->obj rather than csi->name.
Also, ensure that when cleaning, we collect a list of the object
names in our new resources dictionary.
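The rule described above can be sketched with a toy state struct (hypothetical, not the real pdf_csi): a second name arriving while one is pending becomes the object operand instead of overwriting the name, so "/Name /MC BDC" keeps both operands. Only name objects are modelled here.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

typedef struct
{
	char name[64]; /* pending name operand */
	char obj[64];  /* object operand */
} parse_state;

static void push_name(parse_state *csi, const char *n)
{
	if (csi->name[0] == '\0')
		snprintf(csi->name, sizeof csi->name, "%s", n); /* first name */
	else
		snprintf(csi->obj, sizeof csi->obj, "%s", n);   /* promote to object */
}
```
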
|
|
When inserting a new value into a dictionary, if replacing an existing
entry, ensure we keep the new value before dropping the old one.
This is important in the case where (for example) the existing value
is "[ object ]" and the new value is "object". If we drop the array
and that loses the only reference to object, we can find that we have
lost the value we are adding.
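Why keep-before-drop matters can be shown with a toy refcounted object (names hypothetical): if the slot's old value holds the only reference to the incoming value, dropping first would destroy the object before we keep it. The degenerate case below replaces a value with itself.

```c
#include <assert.h>

typedef struct { int refs; int alive; } obj;

static obj *keep(obj *o) { if (o) o->refs++; return o; }
static void drop(obj *o) { if (o && --o->refs == 0) o->alive = 0; }

/* Safe replacement: take our reference first, then release the old one. */
static void dict_put(obj **slot, obj *val)
{
	keep(val);
	drop(*slot);
	*slot = val;
}
```
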
|
|
Firstly, we remove the use of global variables; this is done by
introducing a 'globals' structure for each of these files and
passing it internally between functions.
Next, split the core of pdfclean_main into pdfclean_clean, and the
core of pdfinfo_main into pdfinfo_info.
The _main functions now do the argv processing. The new functions are
entirely thread-safe, so can be called from library functions.
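The globals-struct pattern in miniature (field names hypothetical, not pdfclean's actual state): file-scope state moves into a context passed to every function, so two callers can run the same code concurrently without sharing anything.

```c
#include <assert.h>

typedef struct
{
	int objects_processed; /* was a file-scope counter before */
} clean_globals;

static void process_object(clean_globals *g)
{
	/* the real code would clean one object here; we just count */
	g->objects_processed++;
}
```
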
|
|
Pass in the 'tight' flag.
|
|
In the event that an annot is hidden or invisible, the pdf_process
would never be freed. Solve that here.
Thanks to Simon for spotting this!
|
|
We were never freeing the top-level filter_gstate, and we were losing
a reference to each new resource type dictionary when creating them.
|
|
We add various facilities here, intended to allow us to efficiently
minimise the memory we use for holding cached pdf objects.
Firstly, we add the ability to 'mark' all the currently loaded objects.
Next we add the ability to 'clear the xref' - to drop all the currently
loaded objects that have no other references except the ones held by the
xref table itself.
Finally, we add the ability to 'clear the xref to the last mark' - to
drop all the currently loaded objects that have been created since the
last 'mark' operation and have no other references except the ones held
by the xref table.
We expose this to the user by adding a new device hint 'FZ_NO_CACHE'.
If set on the device, the PDF interpreter will call pdf_mark_xref
before starting and pdf_clear_xref_to_mark afterwards. Thus no additional
objects will be retained in memory after a given page is run, unless
someone else picks them up and takes a reference to them as part of
the run.
We amend our simple example app to set this device hint when loading
pages as part of a search.
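A toy model of mark/clear-to-mark (hypothetical names and layout, not the real xref): the cache keeps one reference to each loaded entry; 'mark' records the current population, and 'clear to mark' evicts entries loaded since the mark that nobody else references.

```c
#include <assert.h>

#define CACHE_MAX 16

typedef struct { int refs; int used; } entry;
typedef struct { entry slot[CACHE_MAX]; int count; int mark; } cache;

static int cache_load(cache *c, int caller_keeps_ref)
{
	entry *e = &c->slot[c->count];
	e->used = 1;
	e->refs = 1 + (caller_keeps_ref != 0); /* cache's ref + caller's */
	return c->count++;
}

static void cache_mark(cache *c) { c->mark = c->count; }

static void cache_clear_to_mark(cache *c)
{
	int i;
	for (i = c->mark; i < c->count; i++)
		if (c->slot[i].refs == 1) /* only the cache still holds it */
			c->slot[i].used = 0;
}
```
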
|
|
Currently this knows about q/Q matching/eliding and avoiding
repeated/unnecessary color/colorspace setting.
It will also collect a dictionary of resources used by a page.
This can be extended to be cleverer in future.
|
|
Using this, we can reconstruct PDF streams from the operator calls
processed. This will enable us to do filtering when used in
combination with future commits.
|
|
Currently the only processing we can do of PDF pages is to run
them through an fz_device. We introduce new "pdf_process"
functionality here to enable us to do more things.
We define a pdf_processor structure with a set of function
pointers in, one per PDF operator, together with functions
for processing xobjects etc. The guts of pdf_run_page_contents
and pdf_run_annot operations are then extracted to give
pdf_process_page_contents and pdf_process_annot, and the
originals implemented in terms of these.
This commit contains just one instance of a pdf_processor, namely
the "run" processor, which contains the original code refactored.
The graphical state (and device pointer) is now part of private data
to the run operator set, rather than being in pdf_csi.
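The function-pointer-table idea can be sketched with a hypothetical struct far smaller than the real pdf_processor: one entry per operator, plus private state, so different processors can interpret the same operator stream for different ends. This one merely tracks q/Q nesting depth.

```c
#include <assert.h>

typedef struct processor
{
	void (*op_q)(struct processor *p); /* gsave */
	void (*op_Q)(struct processor *p); /* grestore */
	void *state;                       /* processor-private data */
} processor;

static void depth_q(processor *p) { (*(int *)p->state)++; }
static void depth_Q(processor *p) { (*(int *)p->state)--; }
```

A "run" processor would keep its graphics state and device pointer in state; a "buffer" processor would instead write each operator back out to a stream.
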
|
|
Thanks to Sebastian for spotting this.
|
|
Add a RESOLVE(obj) call in line with other such functions.
|
|
pdf_flush_text can cause the list of gstates to be extended. This
can in turn cause them to move in memory. This means that any
gstate pointers already held can be invalidated.
Update the code to allow for this.
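The bug in miniature (toy types; realloc failure handling omitted): a pointer into a growable array is invalidated when a push reallocates it, so code must hold an index and re-derive the pointer after any call that can grow the stack.

```c
#include <assert.h>
#include <stdlib.h>

typedef struct { int *items; size_t len, cap; } stack;

static size_t stack_push(stack *s, int v)
{
	if (s->len == s->cap)
	{
		s->cap = s->cap ? s->cap * 2 : 4;
		s->items = realloc(s->items, s->cap * sizeof *s->items);
	}
	s->items[s->len] = v;
	return s->len++; /* indices stay valid across growth; pointers may not */
}
```
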
|
|
|
|
Currently, pdf_new_obj_from_str returns NULL if the object can't be
parsed. This isn't consistent with how all other pdf_new_* methods
behave, which is to throw on errors.
|
|
See https://code.google.com/p/sumatrapdf/issues/detail?id=2526 for a
file which renders wrongly if no encoding is loaded.
|
|
If the expansion of a transformation matrix is huge, the path flatness
becomes so small that even simple paths consist of millions of edges
which easily causes MuPDF to hang quite long for simple documents. One
solution for this is to limit the allowed flatness.
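The fix can be sketched as follows (the threshold value is hypothetical): flatness is divided by the matrix expansion before flattening, so a huge expansion drives it toward zero and the edge count toward millions; clamping it from below bounds the work.

```c
#include <assert.h>

static float clamp_flatness(float flatness, float expansion)
{
	const float min_flatness = 0.001f; /* illustrative lower bound */
	float f = flatness / expansion;
	if (f < min_flatness)
		f = min_flatness;
	return f;
}
```
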
|
|
|
|
The following changes allow font providers to make better choices with
respect to what font to provide and under what circumstances:
* bold and italic flags are passed in so that implementors can decide
themselves whether to ask for simulated boldening/italicising
if a font claims not to be bold/italic
* is_substitute is replaced with needs_exact_metrics to make the
meaning of this argument hopefully clearer (that argument is set only
for PDF fonts without a FontDescriptor)
* the font name is always passed as requested by the document instead
of the cleaned name for the standard 14 fonts which allows
distinguishing e.g. Symbol and Symbol,Bold
|
|
|
|
|
|
|
|
and give them names more likely to be unique.
|
|
|
|
|
|
Many times, the idiom p.x = x; p.y = y; fz_transform_point() is used.
This function should simplify that use case by both initializing and
transforming the point in one call.
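The combined helper might look like this (simplified struct layouts; MuPDF's version is fz_transform_point_xy): initialize and transform the point in one call instead of assigning p.x and p.y first.

```c
#include <assert.h>

typedef struct { float a, b, c, d, e, f; } matrix; /* [a b 0; c d 0; e f 1] */
typedef struct { float x, y; } point;

static point transform_point_xy(float x, float y, const matrix *m)
{
	point p;
	p.x = x * m->a + y * m->c + m->e;
	p.y = x * m->b + y * m->d + m->f;
	return p;
}
```
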
|
|
|
|
This feature is being implemented mostly for the purpose of permitting
the addition of invisible signatures to a page.
Also change pdf_create_annot to make freshly created annotations
printable by default.
|
|
|
|
Use a fixed number for Math.random().
Return a fixed date for Date.now() and Date.UTC().
|
|
|
|
|
|
Make the scoping clearer, since Javascript doesn't have block scoping.
|
|
|
|
|
|
|
|
See https://code.google.com/p/sumatrapdf/issues/detail?id=2517 for a
document which is broken to the point where it fails to load using
repair, but loads successfully if object 0 is implicitly defined.
|
|
|
|
Patch from Thomas Fach-Pedersen. Many thanks!
Add a new format handler that copes with TIFF files. This replaces
the TIFF functionality within the image format handler, and is
better because it copes with multiple images (one image per
page).
|
|
Patch from Thomas Fach-Pedersen. Many thanks!
|
|
Patch from Thomas Fach-Pedersen to fix the operation of pdf_insert_page
when called with an empty page tree. Many thanks! As noted in the code
with a FIXME, this currently throws an error.
Also, cope with being told to add a page "at" INT_MAX as meaning to
add it at the end of the document.
Possibly this code should cope with a Root without a Pages entry, or
a Pages without a Kids too, but we can fix this in future if it ever
becomes a problem.
|
|
This makes every pdf_run_XX operator function have the same function
type. This paves the way for future changes in this area.
|
|
Acrobat honours Tc and Tw operators found while parsing TJ arrays.
We update the code here to cope. Possibly, to match completely, we
should honour other operators too, but this will do for now.
This maintains the behaviour of
tests_private/pdf/sumatra/916_-_invalid_argument_to_TJ.pdf 916.pdf
and improves the behaviour in general.
|
|
A useful utility missing from our arsenal.
|
|
Reuses the same internals as pdf_fprintf_obj etc.
|