Use a macro to make fz_new_document nicer (akin to
fz_malloc_struct).
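A minimal sketch of the sort of macro meant here, assuming an underlying size-based constructor (fz_new_document_of_size is an assumed name used purely for illustration):

    /*
     * Hypothetical sketch: allocate and initialise a document subclass in
     * one step, in the spirit of fz_malloc_struct. fz_new_document_of_size
     * is an assumed helper name for the size-based constructor.
     */
    #define fz_new_document(ctx, type) \
        ((type *)fz_new_document_of_size(ctx, sizeof(type)))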
|
|
If we rewrite a page content stream, and then drop that entire page
we shouldn't leak the buffer.
Or to put it another way, when we change the obj for an xref entry,
ditch the cached stm_buf.
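A sketch of the idea, with illustrative names rather than the exact MuPDF code:

    /* Illustrative only: when the object held by an xref entry is replaced,
     * the cached stream buffer must be dropped too, or it leaks. */
    static void
    replace_xref_entry_obj(fz_context *ctx, pdf_xref_entry *entry, pdf_obj *obj)
    {
        pdf_drop_obj(ctx, entry->obj);
        entry->obj = obj;
        fz_drop_buffer(ctx, entry->stm_buf); /* ditch the cached stm_buf */
        entry->stm_buf = NULL;
    }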
|
|
In particular for html docs we were getting the refcount wrong,
causing us to leak on closedown.
|
|
Remove void* typecasts.
|
|
Initial framework for creating pdfs
This adds a create option to mutool, to be used while working
on the API for creating content as well as adding content to
existing documents.
mutool create: Get page sizes and add them
Start parsing the contents.txt file, which may contain information
for multiple pages. Add the pages at the proper sizes.
Further work on mutool create_pdf
Remove the calls that were being made to the pdf-write device.
Clean up several issues with the reading of the page contents.
Associate the content streams for each page with page->contents.
Temporarily created a pdf_create_page_contents procedure. I will merge
this with pdf_create_page as there is significant overlap.
Next is to add the font and image resources and indirect references.
Include pdfcreate in build
Merge pdf_create_page_contents and pdf_create_page
Add support for images in pdfcreate
This adds images to the pdf document using a function stolen from pdf-device (send_image).
This was renamed pdf_add_image_res and added to pdf-image. Down the road, send_image will
be removed. Prior to that, I need to work on making sure that multiple copies of the same
image do not end up in the document.
Code was also added to create the page resources to point to the proper image in the document.
Next, fonts will be added in a similar manner; then I will work on computing the MD5 sums of
images and fonts to ensure only one copy of each ends up in the document. Then pdf-write will be
reworked to use the same code, as opposed to its current list of MD5 sums stored in
a device structure.
mutool pdfcreate: support for WinAnsiEncoded fonts
Added support for very simple fonts (WinAnsiEncoding). Methods
added in pdf-font.c. Added first_width and last_width to fz_font_s
and stem_v to pdf_font_desc_s.
Ran code through Memento with a simple test of 4 page document
creation including an image and a font. Fixed several leaks
as well as buffer corruption issues (main changes in pdfcreate).
Thanks to Robin for the help with Memento in finding leaks.
Added StemV to pdf names as it was needed for the font descriptor creation.
Fix for the rename of pdf_write_document to pdf_save_document
Add resource_ids to pdf document structure
The purpose of this structure will be to allow the search
and reuse of resources when we attempt to add new ones
to the document.
Fix name changes from recent updates
pdf_create branch updated to work with recent changes in master
Initial use of hash table for resources
To avoid adding the same resource twice, this adds a
resource_tables member to pdf_document. The
resource_tables structure consists of multiple
fz_hash_table entries, one for each resource type.
The first time a search for an existing resource is
attempted, the table is initialized by a brute-force
scan of the existing resources. Currently this
is only set up for the image resources and accessed
through pdf_add_image_res. If a match is found,
the reference object is returned. If no match is found,
NULL is returned and the ref object created in pdf_add_image_res
is added into the hash table. With this in place, a command line
such as
create -o output.pdf -f F0:font.ttf -i Im0:image.jpg -i Im1:image1.jpg \
-i Im2:image.jpg contents.txt
will avoid the insertion of two copies of image.jpg into the
output PDF document.
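A conceptual sketch of the de-duplication; the helper name, the use of an MD5 digest as the hash key, and the exact fz_hash_* / fz_md5_* signatures are assumptions for illustration, not the real MuPDF API:

    /* Conceptual sketch: look an image up by the digest of its data and
     * reuse the earlier reference object if one exists. */
    static pdf_obj *
    find_or_add_image_res(fz_context *ctx, fz_hash_table *image_table,
        fz_buffer *data, pdf_obj *new_ref)
    {
        unsigned char digest[16];
        pdf_obj *existing;

        fz_md5_buffer(ctx, data, digest);         /* key: digest of the image bytes */
        existing = fz_hash_find(ctx, image_table, digest);
        if (existing)
            return existing;                      /* reuse the earlier reference object */
        fz_hash_insert(ctx, image_table, digest, new_ref);
        return new_ref;
    }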
CID Identity-H font added for handling TTF fonts
This adds a method for adding a TTF to a PDF as a
CID font with Identity-H mapping and a ToUnicode
entry that is created using FT_Get_Char_Index.
Much care is taken in the creation of the ToUnicode
CMap to ensure that the minimum number of entries
is created, in that we try to use beginbfrange as
much as possible before using beginbfchar. The
code makes sure to limit the number of entries in
a group to 100 and not to cross first-byte boundaries
for the CID values, as described in Adobe
Technical Note #5411.
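A sketch of the two grouping constraints, using illustrative helper names rather than the actual code:

    /* A single bfrange must not cross a first-byte (256) boundary of the
     * CID, and a beginbfrange/beginbfchar block holds at most 100 entries,
     * per Adobe Technical Note #5411. */
    static int
    range_crosses_first_byte(int first_cid, int last_cid)
    {
        return (first_cid >> 8) != (last_cid >> 8);
    }

    static int
    block_is_full(int entries_in_block)
    {
        return entries_in_block >= 100;
    }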
Add missing file pdf-resources.c
pdf-resources.c was missing and should have been
committed earlier. Added to windows project file.
Not sure where else it needs to be added for the
other platforms.
Clean up names and spacing
Make sure that the visible functions have the proper namespace (e.g. pdf_xxxx)
Also make sure we have a blank line prior to comment.
Be consistent with static function naming in pdf_resources.c
pdfwrite makes use of the image resource fz_hash_table
The pdfwrite device now shares the structure that stores the
resource images for pdfcreate. With this fix, pdfwrite now
avoids duplicating the writing of the same images that are
shared across multiple pages.
Add missing file pdf-resources.c
Initial work toward having pdfwrite use Identity-H Type0 encoding for fonts
Finish the CID Type0 Identity-H font for pdfwrite
This adds in the proper widths, which may have been stored in the source font's
width table (parsed from the W entry in the pdf file); or, if the
FreeType structure has its own cmap, then we can get the width from FreeType.
Widths are restructured into the format described in section 5.6.3 of the PDF spec.
Fix issue from merge conflict and multiple definition of a structure
Clean up warnings and make mutool create use a simple font
|
|
In preparation for adding pdf_write_document, which writes a document
to a fz_output stream.
|
|
Separate naming of functions that save complete files to disk
from functions that write data to streams.
|
|
See Bug 696284. Do not set disallow_new_increments in pdf_create_document, as
this breaks following calls to pdf_sign_signature.
See also the comments in bug 696251.
|
|
In pdf_create_document set disallow_new_increments to 1. Without this, the
calls to pdf_new_ref in pdf_create_document create an incremental xref
section. The following call to pdf_set_populating_xref_trailer then does not
set the trailer of the final xref section.
|
|
When lexing a number, do NOT check for overflow; doing so causes
loss of data in some files. The current implementation matches
Acrobat's behaviour.
When lexing a startxref offset, do check for overflow, and throw
an error if it is found.
|
|
The PDF spec says that old-format xrefs should start with:
xref\n<start> <len>
The example file in question has:
xref <start> <len>
which confuses our parsing code. Update the parsing code to avoid
using fz_read_line and instead work at the character level.
Also, downgrade the error given when the first object is not free
to a warning. Now that we do 'just in time' repair, we are probably
better able to cope.
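An illustrative sketch of the more tolerant, character-level parse; the function name and the exact stream-API signatures are assumptions:

    /* Accept "xref\n<start> <len>" as well as "xref <start> <len>" by
     * reading the keyword and then skipping any run of whitespace. */
    static void
    parse_xref_keyword(fz_context *ctx, fz_stream *file)
    {
        int c;

        if (fz_read_byte(ctx, file) != 'x' ||
            fz_read_byte(ctx, file) != 'r' ||
            fz_read_byte(ctx, file) != 'e' ||
            fz_read_byte(ctx, file) != 'f')
            fz_throw(ctx, FZ_ERROR_GENERIC, "cannot find xref marker");

        do
            c = fz_read_byte(ctx, file);
        while (c == ' ' || c == '\t' || c == '\r' || c == '\n');
        fz_unread_byte(ctx, file);   /* push back the first non-whitespace char */
    }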
|
|
The current code never looks for /Root objects in dictionaries
as it parses them. This means that 'New style' files end up
without any Roots after repair.
The new code therefore updates pdf_repair_obj to look for Root
objects in the same way it looks for encrypt and id objects.
These go into the list of found roots.
The Root object almost certainly has indirections within it, so
it is vital that the 'doc' pointer gets set. This means we have
to make a slight adjustment to pdf_repair_obj so that the dict
is parsed with a doc pointer. In turn this means we need to
manually ensure that none of the other information read from
the dict during the repair operation will cause indirections
to be resolved. This is achieved by checking for
!pdf_is_indirect at various points.
|
|
This fixes bug #696123 by allowing multiple signatures to each be written
to the document in a separate incremental update.
Add count num_incremental_sections to keep track of the number of
incremental sections.
Add xref_base, which can be set between 0 and num_incremental_sections
inclusive to access different versions of the document.
Add disallow_new_increments flag that stops new incremental sections
being provoked by the creation of an xref stream.
Move the unsaved_sigs list from the document structure to the xref
structure. With this commit in place, the lists will never grow beyond
length one, but we've maintained the list structure in case other cases
need supporting in the future.
Add an end offset field to the xref structure, so that during completion
of signatures the document length of the various incremental versions of
the document are available.
Factor out functions for storing unsaved signatures and for checking if
an object is an unsaved signature.
Do deep copy of objects that require the holding of several versions.
|
|
When attempting to load pdf objects and a valid pdf object is found but
it has the wrong number, mark the xref object entry as being free before
attempting to repair the xref. This ensures that if the wanted object
cannot be found in the document then the missing object will be
considered to be null. Previously it was still assumed to be around, but
the object pointer was NULL triggering an assert in pdf_load_object().
|
|
When replacing the xref_index, lose the old one.
|
|
I'd missed converting some int's to fz_off_t's.
|
|
If FZ_LARGEFILE is defined when building, MuPDF uses 64-bit offsets
for files; this allows us to open streams larger than 2GB.
The downsides to this are that:
* The xref entries are larger.
* All PDF ints are held as 64-bit values rather than 32-bit ones
(to cope with /Prev entries, hint stream offsets etc).
* All file positions are stored as 64 bits rather than 32.
The implementation works by detecting FZ_LARGEFILE. Some #ifdeffery
in fitz/system.h sets fz_off_t to either int or int64_t as appropriate,
and sets defines for fz_fopen, fz_fseek, fz_ftell etc as required.
These call the fseeko64 etc functions on Linux (and so define
_LARGEFILE64_SOURCE) and the explicit 64-bit functions on Windows.
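A rough sketch of the shape of that #ifdeffery (simplified; the real fitz/system.h covers more platforms and functions, and on Linux _LARGEFILE64_SOURCE must be set before any system header is included):

    #ifdef FZ_LARGEFILE
    typedef int64_t fz_off_t;
    #ifdef _WIN32
    #define fz_fseek _fseeki64
    #define fz_ftell _ftelli64
    #else
    #define fz_fseek fseeko64
    #define fz_ftell ftello64
    #endif
    #else
    typedef int fz_off_t;
    #define fz_fseek fseek
    #define fz_ftell ftell
    #endif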
|
|
Add fz_has_permission function to fz_document.
Add fz_lookup_metadata function to fz_document.
Remove fz_meta function from fz_document.
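A small usage sketch; the metadata key and permission names are assumed, so check the headers for the exact conventions:

    #include <stdio.h>
    #include "mupdf/fitz.h"

    static void
    print_doc_info(fz_context *ctx, fz_document *doc)
    {
        char title[256];

        if (fz_lookup_metadata(ctx, doc, FZ_META_INFO_TITLE, title, sizeof title) > 0)
            printf("Title: %s\n", title);
        if (!fz_has_permission(ctx, doc, FZ_PERMISSION_PRINT))
            printf("printing is not permitted\n");
    }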
|
|
The new pdfclean sanitize functionality means that mutool now
needs the data files, so maintaining the split that was designed to
keep the data files out of mutool is no longer viable.
|
|
Calling pdf_is_dict causes the file to seek. This is a bad thing
in a process that is running through the file. It's doubly bad, as
the thing it seeks to read may not be there, as it might not have
been repaired yet.
So, instead of just keeping the 'most recent root that is a
dictionary', we switch to keeping a list of the roots we have found
while parsing the doc. At the end we then pick the most recent
one that is a dictionary and use that.
|
|
Simon Reinhardt points out that writexrefstream calls pdf_update_stream
on an object, rather than on a reference. The code as written fails to
do the update, and the updated file is broken.
I fix this here by updating pdf_update_stream to be able to work
with both objects and references. This is in contrast to his patch
which would create a reference for the sole purpose of performing
the update.
|
|
Currently, every PDF name is allocated in a pdf_obj structure, and
comparisons are done using strcmp. Given that we can predict most
of the PDF names we'll use in a given file, this seems wasteful.
The pdf_obj type is opaque outside the pdf-object.c file, so we can
abuse it slightly without anyone outside knowing.
We collect a sorted list of names used in PDF (resources/pdf/names.txt),
and we add a utility (namedump) that preprocesses this into 2 header
files.
The first (include/mupdf/pdf/pdf-names-table.h, included as part of
include/mupdf/pdf/object.h) defines a set of "PDF_NAME_xxxx"
entries. These are pdf_obj *'s that callers can use to mean "a PDF
object that means the literal name 'xxxx'".
The second (source/pdf/pdf-name-impl.h) is a C array of names.
We therefore update the code so that rather than passing "xxxx" to
functions (such as pdf_dict_gets(...)) we now pass PDF_NAME_xxxx (to
pdf_dict_get(...)). This is a fairly natural (if widespread) change.
The pdf_dict_getp (and sibling) functions that take a path (e.g.
"foo/bar/baz") are therefore supplemented with equivalents that
take a list (pdf_dict_getl(... , PDF_NAME_foo, PDF_NAME_bar,
PDF_NAME_baz, NULL)).
The actual implementation of this relies on the fact that small
pointer values are never valid values. For a given pdf_obj *p,
if NULL < (intptr_t)p < PDF_NAME__LIMIT then p is a literal
entry in the name table.
This enables us to do fast pointer compares and to skip expensive
strcmps.
Also, bring "null", "true" and "false" into the same style as PDF names.
Rather than using full pdf_obj structures for null/true/false, use
special pointer values just above the PDF_NAME_ table. This saves
memory and makes comparisons easier.
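A sketch of the resulting comparison trick (simplified; names_equal is an illustrative helper, not the real implementation):

    #include <stdint.h>
    #include <string.h>

    static int
    names_equal(pdf_obj *a, pdf_obj *b)
    {
        /* Both below PDF_NAME__LIMIT: literal entries in the static name
         * table, so a pointer compare is enough. */
        if ((intptr_t)a < PDF_NAME__LIMIT && (intptr_t)b < PDF_NAME__LIMIT)
            return a == b;
        /* Otherwise fall back to comparing the name strings. */
        return !strcmp(pdf_to_name(a), pdf_to_name(b));
    }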
|
|
Purge several embedded contexts:
Remove embedded context in fz_output.
Remove embedded context in fz_stream.
Remove embedded context in fz_device.
Remove fz_rebind_stream (since it is no longer necessary).
Remove embedded context in svg_device.
Remove embedded context in XML parser.
Add ctx argument to fz_document functions.
Remove embedded context in fz_document.
Remove embedded context in pdf_document.
Remove embedded context in pdf_obj.
Make fz_page independent of fz_document in the interface.
We shouldn't need to pass the document to all functions handling a page.
If a page is tied to the source document, it's redundant; otherwise it's
just pointless.
Fix reference counting oddity in fz_new_image_from_pixmap.
|
|
Rename fz_close to fz_drop_stream.
Rename fz_close_archive to fz_drop_archive.
Rename fz_close_output to fz_drop_output.
Rename fz_free_* to fz_drop_*.
Rename pdf_free_* to pdf_drop_*.
Rename xps_free_* to xps_drop_*.
|
|
When loading e.g. the file from bug 694567, MuPDF uses an uninitialized
variable because pdf_document::xref_index contains values relative to
the document's original multi-part xref while the actual xref is the
repaired single-part one (and thus the cached value is too large).
Properly resetting the xref_index before starting the repair fixes this
crash.
|
|
pdf_xref_find_subsection does indeed solidify the wrong xref section:
it should operate only on the oldest xref and not overwrite the most
recent one with older entries.
|
|
Add a new index that quickly maps object number to the first
xref in which an object appears. This appears to get us the
speed back that we lost when moving to sparse xrefs.
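A sketch of how such an index can be used; the field and helper names (find_in_section in particular) are illustrative, not the actual MuPDF structures:

    static pdf_xref_entry *
    lookup_entry(pdf_document *doc, int num)
    {
        int i;

        /* Start at the xref section the object was last found in; newer
         * sections cannot contain it. */
        for (i = doc->xref_index[num]; i < doc->num_xref_sections; i++)
        {
            pdf_xref_entry *entry = find_in_section(&doc->xref_sections[i], num);
            if (entry)
            {
                doc->xref_index[num] = i;   /* remember for the next lookup */
                return entry;
            }
        }
        return NULL;
    }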
|
|
Following the recent change to hold pdf xrefs in their native 'sparse'
representation, searching the xref takes longer.
Malc has investigated this slowdown and found that it can be largely
avoided by not searching the xref lists first. A modified version of
his first patch has gone in already (getting us from 10x slower to
just 5x slower).
This commit is a modified version of a second patch from him. Again
it works by avoiding searching the xref list twice. The original
version of this patch 1) appears broken to me, as it could return the
wrong xref entry when object streams have more than one object in them,
and 2) supposedly gets the speed back to the original 'pre-sparse change'
speed.
I have updated the patch to fix 1), and I hope this should not affect 2).
I am slightly suspicious that removing a search can get us a 5x speed
increase, but certainly this is an improvement.
There is scope for further reducing the search times by using a
new table to map object number -> xref number, but unless we find a case
where we are noticeably slower than before, I think we can ignore this.
|
|
We know i >= 0 as we've already thrown if i < 0 earlier.
Credit to Malc for spotting this.
|
|
The recent change to holding pdf xrefs in a sparse format has resulted
in a significant decrease in speed (x10). Malc points out that some of
this (2x) can be recovered simply by making pdf_cache_object return the
entry which it found the object in.
This saves us having to immediately call pdf_get_xref_entry again
afterwards.
I am still thinking about ways to try and get the remaining time back.
|
|
After calling ensure_solid_xref, the pdf_xref pointer must be updated
in case ensure_solid_xref has reallocated the sections table or uses
a different section table than originally used. Commit
e767bd783d91ae88cd79da19e79afb2c36bcf32a fails to do so in one case.
TODO: Why does pdf_xref_find_subsection solidify xref section 0 instead
of xref section sub?
|
|
Currently each xref in the file results in an array from 0 to
num_objects. If we have a file that has been updated many times
this causes a huge waste of memory.
Instead we now hold each xref as a list of non-overlapping subsections
(exactly as the file holds them).
Lookup is therefore potentially slower, but only on files where the
xrefs are highly fragmented (i.e. where we would be saving in memory
terms).
Some parts of our code (notably the file writing code that does
garbage collection etc) assumes that lookups of object entry pointers
will not change previous object entry pointers that have been
looked up. To cope with this, and to cope with the case where we are
updating/creating new objects, we introduce the idea of a 'solid'
xref.
A solid xref is one that has a single subsection record spanning
the entire range of valid object numbers for the file. Once we have
ensured that an xref is 'solid', we can safely work on the pointers
within it without fear of them moving.
We ensure that any 'incremental' xref is solid.
We also ensure that any non-incremental write makes the xref solid.
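A rough sketch of the sparse representation described above, with simplified field names:

    typedef struct xref_subsec
    {
        struct xref_subsec *next;
        int start;               /* first object number covered */
        int len;                 /* number of consecutive entries */
        pdf_xref_entry *table;   /* len entries */
    } xref_subsec;

    typedef struct
    {
        int num_objects;         /* total object numbers valid in this xref */
        xref_subsec *subsec;     /* non-overlapping subsections, as in the file */
        pdf_obj *trailer;
    } xref_section;

    /* An xref is "solid" when a single subsection spans the whole range,
     * so entry pointers cannot move while we work on them. */
    static int
    is_solid(const xref_section *x)
    {
        return x->subsec && x->subsec->next == NULL &&
            x->subsec->start == 0 && x->subsec->len >= x->num_objects;
    }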
|
|
If a PDF document is encrypted but broken, repairing caches all
strings in encrypted form. Clearing the xref after repairing
ensures that strings are returned to API callers as expected.
Cf. https://code.google.com/p/sumatrapdf/issues/detail?id=2610
|
|
Return the null object rather than throwing an exception when parsing
indirect object references with negative object numbers.
Do range check for object numbers (1 .. length) when object numbers
are used instead.
Object number 0 is not a valid object number. It must always be 'free'.
|
|
pdf_create_document leaks the trailer, and in pdf-device.c many objects
are inserted into dictionaries using pdf_dict_puts and leaked, instead
of using pdf_dict_puts_drop.
|
|
...like the one Microsoft Word generates.
|
|
|
Split functions out of pdf-form.c that shouldn't be there, and make
javascript initialization explicit.
|
|
This avoids leaks when pdf_clear_xref etc are used.
|
|
We add various facilities here, intended to allow us to efficiently
minimise the memory we use for holding cached pdf objects.
Firstly, we add the ability to 'mark' all the currently loaded objects.
Next we add the ability to 'clear the xref' - to drop all the currently
loaded objects that have no other references except the ones held by the
xref table itself.
Finally, we add the ability to 'clear the xref to the last mark' - to
drop all the currently loaded objects that have been created since the
last 'mark' operation and have no other references except the ones held
by the xref table.
We expose this to the user by adding a new device hint, 'FZ_NO_CACHE'.
If it is set on the device, then the PDF interpreter will call pdf_mark_xref
before starting and pdf_clear_xref_to_mark afterwards. Thus no additional
objects will be retained in memory after a given page is run, unless
someone else picks them up and takes a reference to them as part of
the run.
We amend our simple example app to set this device hint when loading
pages as part of a search.
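A small usage sketch of the caller's side; the exact device-hint and run-page signatures vary between MuPDF versions:

    /* With the hint set, the PDF interpreter calls pdf_mark_xref before
     * running the page and pdf_clear_xref_to_mark afterwards, so objects
     * loaded only for this run are not kept in memory. */
    static void
    search_page(fz_context *ctx, fz_page *page, fz_device *text_dev)
    {
        fz_enable_device_hints(ctx, text_dev, FZ_NO_CACHE);
        fz_run_page(ctx, page, text_dev, &fz_identity, NULL);
    }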
|