Read the contents of a file into a fz_buffer in one go.

Many instances just use the data and free it, so reallocing to shrink
the buffer is a waste of time. Other instances, such as the XML and
CSS parsers, need to append a terminating zero.
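
To illustrate the pattern, here is a minimal sketch in plain C (using
stdio rather than the fz_* API; the helper name read_file_data is made
up for this example): read the whole file in a single pass, with one
spare byte so text consumers such as the XML and CSS parsers can
append a terminating zero without a further realloc.

    #include <stdio.h>
    #include <stdlib.h>

    /* Read the whole of 'filename' into one malloced block, plus a
     * terminating zero byte. Returns NULL on failure. */
    static unsigned char *
    read_file_data(const char *filename, size_t *lenp)
    {
        FILE *f = fopen(filename, "rb");
        unsigned char *data = NULL;
        long len;

        if (!f)
            return NULL;
        if (fseek(f, 0, SEEK_END) < 0 || (len = ftell(f)) < 0 || fseek(f, 0, SEEK_SET) < 0)
            goto fail;
        data = malloc((size_t)len + 1);
        if (!data)
            goto fail;
        if (fread(data, 1, (size_t)len, f) != (size_t)len)
            goto fail;
        data[len] = '\0';
        fclose(f);
        *lenp = (size_t)len;
        return data;
    fail:
        free(data);
        fclose(f);
        return NULL;
    }
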
Currently fz_streams have a 4K buffer within their header. The call
to read from a stream fills this buffer, resulting in more data being
pulled from any underlying stream than we might like. This causes
problems with the forthcoming 'leech' filter.

Here we simplify the fields available in the public stream header.
No specific buffer is given; simply the read and write pointers.

The underlying 'read' function is replaced by a 'next' function
that makes the next block of data available and returns the first
character of it (or EOF). A caller of the 'next' function should
supply the maximum number of bytes that it knows it will need
(possibly not now, but eventually). This enables the underlying
stream to efficiently decode just enough. The underlying stream is
free to return fewer bytes, or more, if it wants to. The exact size
of the block of data returned will depend on the filter in use and
(possibly) the data therein.

Callers can get the currently available amount of data by calling
fz_available (but again should pass the maximum amount of data they
know they will need). The only time this will ever return 0 is if we
have hit EOF.
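
To make the new contract concrete, here is a rough sketch of a toy
stream with a 'next' function, a byte reader and an 'available'
helper in the style described above. The structure layout and all of
the names (toy_stream, mem_next, toy_read_byte, toy_available) are
assumptions based on this message, not the real fz_stream
declarations.

    #include <stdio.h>
    #include <stddef.h>

    typedef struct toy_stream toy_stream;

    struct toy_stream
    {
        unsigned char *rp;    /* read pointer: next byte for the caller */
        unsigned char *wp;    /* write pointer: one past the last valid byte */
        /* Make the next block of data available (the caller passes the
         * maximum it knows it will need, as a hint) and return the first
         * byte of that block, or EOF. */
        int (*next)(toy_stream *stm, size_t max);
        void *state;
    };

    /* Trivial 'next' for a stream backed by a block of memory: it ignores
     * the hint and exposes everything that remains in one go. */
    typedef struct { unsigned char *data; size_t len, pos; } mem_state;

    static int
    mem_next(toy_stream *stm, size_t max)
    {
        mem_state *ms = stm->state;
        (void)max;    /* a real filter would decode only about 'max' bytes */
        if (ms->pos >= ms->len)
            return EOF;
        stm->rp = ms->data + ms->pos;
        stm->wp = ms->data + ms->len;
        ms->pos = ms->len;
        return *stm->rp++;
    }

    /* Caller-side helpers in the style the message describes. */
    static int
    toy_read_byte(toy_stream *stm)
    {
        if (stm->rp < stm->wp)
            return *stm->rp++;
        return stm->next(stm, 1);    /* we only know we need 1 byte now */
    }

    static size_t
    toy_available(toy_stream *stm, size_t max)
    {
        if (stm->rp == stm->wp)
        {
            int c = stm->next(stm, max);
            if (c == EOF)
                return 0;    /* 0 only ever means we have hit EOF */
            stm->rp--;       /* un-consume the byte 'next' returned */
        }
        return (size_t)(stm->wp - stm->rp);
    }

In this sketch a caller that knows it will eventually need, say, a
12-byte header would call toy_available(stm, 12) and then consume
bytes through the read pointer or toy_read_byte.
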
In streams that we cannot seek in (such as flate ones) we implement
seeking forward by skipping bytes. We failed to spot when we hit EOF,
and spent ages just looping. The fix is simply to spot that we have
hit EOF and bail out with a warning.
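
A sketch of the fixed skipping loop; the names here are made up
('read_byte' stands in for the stream's byte reader) and so is the
warning text.

    #include <stdio.h>

    /* Skip 'skip' bytes forward by reading and discarding them, stopping
     * (instead of looping forever) once the stream reports EOF. */
    static void
    skip_forward(long skip, int (*read_byte)(void *state), void *state)
    {
        while (skip-- > 0)
        {
            if (read_byte(state) == EOF)
            {
                fprintf(stderr, "warning: seek beyond end of stream\n");
                break;
            }
        }
    }
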
We are testing this using a new -p flag to mupdf that sets a bitrate at
which data will appear to arrive progressively as time goes on. For
example:
    mupdf -p 102400 pdf_reference17.pdf
Details of the scheme used here are presented in docs/progressive.txt
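
Purely as an illustration of the idea (the real scheme lives in
docs/progressive.txt, and the name and units used here are
assumptions), the apparent amount of data available at any moment can
be computed from the elapsed time and the configured rate:

    #include <time.h>

    /* How much of a 'file_size'-byte file appears to have arrived so far,
     * if it started arriving at time 'start' at 'rate' bytes per second. */
    static long
    apparent_length(long file_size, long rate, time_t start)
    {
        double avail = difftime(time(NULL), start) * (double)rate;
        if (avail > (double)file_size)
            return file_size;
        return (long)avail;
    }
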