Use the new read functions within gproof. The existing use of fz_read_line
was broken and resulted in bad values for separations.

Use an endianness-independent method of reading, instead of byte swapping.

Add fz_read_int16le, fz_read_int32le and fz_read_int64le.
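
For illustration, a minimal sketch of endian-independent little-endian
decoding of this kind (not the actual MuPDF implementation) might look
like this:

    #include <stdint.h>

    /* Build the value from individual bytes instead of reinterpreting
     * memory and byte swapping, so the result is the same on big- and
     * little-endian hosts. */
    static int16_t read_int16le(const unsigned char *p)
    {
        return (int16_t)(p[0] | (p[1] << 8));
    }

    static int32_t read_int32le(const unsigned char *p)
    {
        return (int32_t)((uint32_t)p[0] | ((uint32_t)p[1] << 8) |
                         ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24));
    }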

If FZ_LARGEFILE is defined when building, MuPDF uses 64-bit offsets
for files; this allows us to open streams larger than 2 GB.
The downsides to this are that:
* The xref entries are larger.
* All PDF ints are held as 64-bit values rather than 32-bit ones
  (to cope with /Prev entries, hint stream offsets etc).
* All file positions are stored as 64 bits rather than 32.
The implementation works by detecting FZ_LARGEFILE. Some #ifdeffery
in fitz/system.h sets fz_off_t to either int or int64_t as appropriate,
and sets defines for fz_fopen, fz_fseek, fz_ftell etc. as required.
These call the fseeko64 etc. functions on Linux (and so define
_LARGEFILE64_SOURCE) and the explicit 64-bit functions on Windows.
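
A hedged sketch of the sort of #ifdeffery described here (the typedef and
macro names follow the commit text, but this is illustrative rather than
the verbatim contents of fitz/system.h):

    #include <stdint.h>
    #include <stdio.h>

    #ifdef FZ_LARGEFILE
    typedef int64_t fz_off_t;
    # ifdef _WIN32
    #  define fz_fopen fopen
    #  define fz_fseek _fseeki64
    #  define fz_ftell _ftelli64
    # else
    /* assumes _LARGEFILE64_SOURCE was defined before stdio.h was included */
    #  define fz_fopen fopen64
    #  define fz_fseek fseeko64
    #  define fz_ftell ftello64
    # endif
    #else
    typedef int fz_off_t;
    # define fz_fopen fopen
    # define fz_fseek fseek
    # define fz_ftell ftell
    #endif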
Purge several embedded contexts:
Remove embedded context in fz_output.
Remove embedded context in fz_stream.
Remove embedded context in fz_device.
Remove fz_rebind_stream (since it is no longer necessary).
Remove embedded context in svg_device.
Remove embedded context in XML parser.
Add ctx argument to fz_document functions.
Remove embedded context in fz_document.
Remove embedded context in pdf_document.
Remove embedded context in pdf_obj.
Make fz_page independent of fz_document in the interface.
We shouldn't need to pass the document to all functions handling a page.
If a page is tied to the source document, it's redundant; otherwise it's
just pointless.
Fix reference counting oddity in fz_new_image_from_pixmap.
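
A hedged illustration of the calling pattern after these context and
page/document changes (the exact signatures are assumptions based on the
commit text, and fz_try/fz_catch error handling is omitted): the context is
passed explicitly, and dropping a page no longer involves the document.

    #include "mupdf/fitz.h"

    void example(fz_context *ctx)
    {
        fz_document *doc = fz_open_document(ctx, "example.pdf");
        fz_page *page = fz_load_page(ctx, doc, 0);

        /* ... render or inspect the page ... */

        fz_drop_page(ctx, page);      /* no document argument needed */
        fz_drop_document(ctx, doc);
    }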
Rename fz_close to fz_drop_stream.
Rename fz_close_archive to fz_drop_archive.
Rename fz_close_output to fz_drop_output.
Rename fz_free_* to fz_drop_*.
Rename pdf_free_* to pdf_drop_*.
Rename xps_free_* to xps_drop_*.

Read the contents of a file into a fz_buffer in one go.

Many callers just use the data and then free it, so reallocating to shrink
the buffer is a waste of time. Other callers need to append a terminating
zero, such as the XML and CSS parsers.
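
A generic stdio sketch of the pattern the last two entries describe
(reading a whole file in one go, not shrinking the allocation afterwards,
and leaving room for a terminating zero); this is illustrative, not the
MuPDF fz_buffer code:

    #include <stdio.h>
    #include <stdlib.h>

    static unsigned char *read_whole_file(const char *filename, size_t *lenp)
    {
        FILE *f = fopen(filename, "rb");
        unsigned char *data = NULL;
        long len;

        if (!f)
            return NULL;
        fseek(f, 0, SEEK_END);
        len = ftell(f);
        fseek(f, 0, SEEK_SET);

        data = malloc((size_t)len + 1); /* +1 leaves room for a trailing zero */
        if (data && fread(data, 1, (size_t)len, f) == (size_t)len)
        {
            data[len] = '\0';   /* terminate for XML/CSS-style parsers */
            *lenp = (size_t)len;
        }
        else
        {
            free(data);
            data = NULL;
        }
        fclose(f);
        return data;
    }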

Currently fz_streams have a 4K buffer within their header. The call
to read from a stream fills this buffer, resulting in more data being
pulled from any underlying stream than we might like. This causes
problems with the forthcoming 'leech' filter.

Here we simplify the fields available in the public stream header.
No specific buffer is given; simply the read and write pointers.
The underlying 'read' function is replaced by a 'next' function
that makes the next block of data available and returns the first
character of it (or EOF).

A caller of the 'next' function should supply the maximum number of
bytes that it knows it will need (possibly not now, but eventually).
This enables the underlying stream to efficiently decode just enough.
The underlying stream is free to return fewer bytes, or more, if it
wants to. The exact size of the 'block' of data returned will depend
on the filter in use and (possibly) the data therein.

Callers can get the currently available amount of data by calling
fz_available (but again should pass the maximum amount of data they
know they will need). The only time this will ever return 0 is if we
have hit EOF.
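
A rough sketch of this scheme with hypothetical names (not the real
fz_stream internals): the public header holds only read and write pointers,
and a 'next' callback refills the window on demand.

    #include <stddef.h>

    typedef struct sketch_stream
    {
        unsigned char *rp;  /* next byte to hand out */
        unsigned char *wp;  /* one past the last valid byte */
        /* Make more data available; 'max' is the most the caller expects
         * to need eventually, so the filter can decode just enough.
         * Returns the first new byte, or -1 at EOF. */
        int (*next)(struct sketch_stream *stm, size_t max);
        void *state;        /* filter-specific state and its own buffer */
    } sketch_stream;

    /* Read one byte: serve it from the current window if possible,
     * otherwise ask the underlying filter to produce more. */
    static int sketch_read_byte(sketch_stream *stm, size_t max_needed)
    {
        if (stm->rp < stm->wp)
            return *stm->rp++;
        return stm->next(stm, max_needed);
    }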

In streams that we cannot seek in (such as flate ones) we implement
seeking forward by skipping bytes. We failed to spot that we had hit
EOF, and spent ages just looping. The fix is simply to spot that we
have hit EOF and bail with a warning.
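
A small sketch of the fix with hypothetical names: stop the skip loop as
soon as the reader reports EOF instead of looping forever.

    #include <stdio.h>

    /* returns the next byte of the stream, or -1 at EOF */
    typedef int (*read_byte_fn)(void *state);

    static void skip_forward(void *state, read_byte_fn read_byte, long n)
    {
        while (n > 0)
        {
            if (read_byte(state) == -1)
            {
                /* previously this condition was missed and we looped */
                fprintf(stderr, "warning: seek past end of stream\n");
                break;
            }
            n--;
        }
    }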

We are testing progressive loading using a new -p flag to mupdf that
sets a bitrate at which data will appear to arrive progressively as
time goes on. For example:

    mupdf -p 102400 pdf_reference17.pdf

Details of the scheme used here are presented in docs/progressive.txt.
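
A minimal sketch of the kind of throttling the -p flag simulates, assuming
the rate is in bytes per second (the real scheme is the one documented in
docs/progressive.txt): only the first 'elapsed time x rate' bytes of the
file are treated as having arrived so far.

    #include <time.h>

    static size_t bytes_available(time_t start, size_t rate, size_t file_size)
    {
        double arrived = difftime(time(NULL), start) * (double)rate;
        if (arrived > (double)file_size)
            return file_size;
        return (size_t)arrived;
    }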