Only use the PDF character widths when also stretching glyphs to match
the PDF metrics.
Ignore space-sized backward motions. Assume that these motions are
either extreme levels of kerning, or that something else fishy is going on.
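A minimal sketch of the kind of check this implies; the variable names
(adv, space_adv) are illustrative, not the actual MuPDF code:

    /* Illustrative heuristic: a leftwards advance no larger than
     * a space is assumed to be kerning noise and is skipped. */
    if (adv < 0 && -adv <= space_adv)
        return; /* ignore the space-sized backward motion */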
These formats are all almost identical to the GNU tar format.
Without guards, the device calls might end up being made with a NULL
device pointer, causing segfaults.
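A sketch of the guard pattern, using a hypothetical simplified wrapper
rather than the real device interface:

    /* Guarded dispatch: bail out rather than dereferencing a NULL
     * device or a missing function pointer. */
    void
    do_fill_path(fz_context *ctx, fz_device *dev, const fz_path *path)
    {
        if (dev == NULL)
            return;             /* no device: nothing to draw on */
        if (dev->fill_path)
            dev->fill_path(ctx, dev, path);
    }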
Hide fz_stack_slot and exception handling details too.
Also make sure we have an initialized jmp_buf so we can safely throw from
the always block even in the exception stack overflow case.
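For context, the shape of MuPDF's exception macros; the point of the
initialized jmp_buf is that cleanup in fz_always may itself throw and
needs somewhere safe to land:

    fz_try(ctx)
    {
        /* work that may fz_throw */
    }
    fz_always(ctx)
    {
        /* cleanup runs on success and failure alike; with an
         * initialised jmp_buf it is now safe for code here to
         * throw, even if the exception stack overflowed. */
    }
    fz_catch(ctx)
    {
        fz_rethrow(ctx);
    }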
Now the image size limit is 131072 x 131072 instead of 32768 x 32768.
fast_rgb_to_cmyk had || instead of &&, so it always triggered incorrectly.
Only throw; there is no need to both assert and throw.
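The original condition isn't quoted in the message, but the bug class
is a classic one; a hypothetical illustration:

    /* With ||, the guard is true for every n, since n can never
     * equal both 0 and 1 at once: */
    if (n != 0 || n != 1)   /* BUG: always true */
        fz_throw(ctx, FZ_ERROR_GENERIC, "unexpected value");

    /* With &&, it triggers only when n is neither 0 nor 1: */
    if (n != 0 && n != 1)
        fz_throw(ctx, FZ_ERROR_GENERIC, "unexpected value");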
If you want accurate CMYK, don't build with FZ_ENABLE_ICC=0.
Stops all the extra errors and warnings about missing ICC support.
Also include the "data:" scheme in the data URI for fz_write_image_as_data_uri.
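For illustration (the call shape below is assumed from the function
name, not quoted from the API): the output now carries the scheme
prefix as well as the media type and base64 payload:

    /* The written URI now starts with the scheme, e.g.
     *   data:image/jpeg;base64,/9j/4AAQSkZJRg...
     * (payload truncated for illustration). */
    fz_write_image_as_data_uri(ctx, out, image);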
This is required when embedding multiple SVG documents in a web page,
for example.
This allows the output to be more easily embedded in other HTML documents.
We pack the mask type and the color parameters into a byte. We
were unpacking it incorrectly, resulting in all masks being
treated as luminosity ones.
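A hypothetical illustration of the packing and the failure mode (the
real bit layout isn't given in the message):

    /* Pack: low bit = mask type (1 = luminosity), higher bits =
     * colour parameters. Layout is illustrative only. */
    #define PACK_MASK(is_lum, col)  (((col) << 1) | (is_lum))

    /* Correct unpack tests only the type bit: */
    #define IS_LUMINOSITY(b)        ((b) & 1)

    /* A buggy unpack such as ((b) != 0) reports luminosity for
     * any non-zero byte, i.e. for every mask carrying colour
     * parameters. */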
1) Reset the margins at the start of each PCL job to avoid offsetting
the page contents.
2) Fix height/band_height confusion in mono band writing.
Writing to wide format output was causing uncompressed lines of
more than 32K to be written to a 32K buffer.
I now recognise that there is an inherent limitation in PCL where
image data can't be larger than 32K, so we'll have to split
page output into subimages and hope they register well enough.
This new commit does that (and solves the overwrite). I am seeing
problems when feeding the output from this into gpcl due to the
delta compression. We believe this is a bug in gpcl, which is being
investigated as bug 699969.
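A sketch of the splitting idea under stated assumptions:
emit_subimage_at is a hypothetical helper, and the page is cut into
side-by-side strips so that no single image transfer exceeds the
32K limit:

    #define PCL_MAX_IMAGE_DATA 32768

    /* Split an over-wide line into strips, each narrow enough
     * for one PCL image transfer; the printed strips must then
     * register (align) with each other. */
    static void
    write_wide_line(fz_context *ctx, fz_output *out,
                    const unsigned char *line, size_t stride)
    {
        size_t x = 0;
        while (x < stride)
        {
            size_t w = stride - x;
            if (w > PCL_MAX_IMAGE_DATA)
                w = PCL_MAX_IMAGE_DATA;
            emit_subimage_at(ctx, out, line + x, w, x); /* hypothetical */
            x += w;
        }
    }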
Pull in the latest changes from mainline lcms2, and bugfixes from
gs. This should now be the definitive version.
This makes it easier to test failure inside
the succeeding fz_try().
The exception is still thrown, however. This just
ensures that the CMM is not left in an unknown state.
By setting ctx->cmm_instance to NULL we actively made sure that
fz_cmm_fin_profile() would never call ->fin_profile() to actually
clean up the ICC profiles.
This could be triggered by running mutool draw -N even without a
file name, resulting in a memory leak.
Drop the unused 'serif' argument to the CJK lookup functions.
Use the BCP 47 names for CJK scripts and languages:
zh-Hant for traditional Chinese,
zh-Hans for simplified Chinese,
ja for Japanese,
ko for Korean.
The lookup function also allows commonly used language+country codes:
zh-TW and zh-HK for traditional Chinese,
zh-CN for simplified Chinese.
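A sketch of the mapping this describes; the return constants are
illustrative, not MuPDF's actual identifiers:

    #include <string.h>

    enum { CJK_UNKNOWN, CJK_ZH_HANT, CJK_ZH_HANS, CJK_JA, CJK_KO };

    static int
    lookup_cjk_language(const char *lang)
    {
        if (!strcmp(lang, "zh-Hant") || !strcmp(lang, "zh-TW") ||
            !strcmp(lang, "zh-HK"))
            return CJK_ZH_HANT;     /* traditional Chinese */
        if (!strcmp(lang, "zh-Hans") || !strcmp(lang, "zh-CN"))
            return CJK_ZH_HANS;     /* simplified Chinese */
        if (!strcmp(lang, "ja"))
            return CJK_JA;          /* Japanese */
        if (!strcmp(lang, "ko"))
            return CJK_KO;          /* Korean */
        return CJK_UNKNOWN;
    }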
Using #ifdef FZ_ENABLE_ means we build code in, even if we have
defined FZ_ENABLE_WHATEVER to be 0 (as we do in config.h).
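A minimal illustration with FZ_ENABLE_ICC (mentioned elsewhere in
this log):

    /* config.h disables the feature by defining the flag to 0: */
    #define FZ_ENABLE_ICC 0

    #ifdef FZ_ENABLE_ICC
        /* WRONG: #ifdef only asks whether the macro is defined,
         * so this code is still built even though the flag is 0. */
    #endif

    #if FZ_ENABLE_ICC
        /* RIGHT: #if tests the value, so defining the flag to 0
         * really excludes the code. */
    #endif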
MuPDF may attempt to load a page but fail to do so, e.g. due to
a circular page tree. When this happens, the page will never be
introduced into the document's list of pages. Its next and prev
pointers are both NULL, but the code in fz_drop_page() wrongly
assumed that the prev pointer was always set.
Thanks to oss-fuzz for reporting.
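A sketch of the defensive unlink; the exact field layout is an
assumption:

    /* Only unlink the page if it was ever linked; a page that
     * failed to load has both pointers NULL. */
    if (page->prev)
        page->prev->next = page->next;
    if (page->next)
        page->next->prev = page->prev;
    page->prev = page->next = NULL;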
Keep a list of currently open pages for each document. Attempting to
load a page that is already loaded will return the same instance again.
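A minimal sketch of the idea with hypothetical names (doc->open as
the list head, load_page_from_file as the real loader):

    fz_page *
    load_page(fz_context *ctx, fz_document *doc, int number)
    {
        fz_page *page;

        /* Return the existing instance if this page is open. */
        for (page = doc->open; page; page = page->next)
            if (page->number == number)
                return fz_keep_page(ctx, page);

        page = load_page_from_file(ctx, doc, number); /* hypothetical */

        /* Link the new page into the document's open list. */
        page->prev = NULL;
        page->next = doc->open;
        if (doc->open)
            doc->open->prev = page;
        doc->open = page;
        return page;
    }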
There is a regression for 2325_-_JPX_image_with_padding_rejected.pdf.
Object 3 in that document is a JPX-encoded image. Its EOC marker is
preceded by two extra bytes of data, 0x80 0x80. This makes the file
broken according to the JPEG 2000 specification.
Acrobat Reader and the Kakadu JPX decoder accept this file without
issues, so OpenJPEG 2.1.0 added code to fix this (bug 226, commit
005e75bdc). That fix detects exactly two bytes of 0x80 0x80, which
makes it rather brittle: adding more padding or changing the padding
byte values is not accepted, even though Acrobat Reader and Kakadu
tolerate both. An unrelated fix for another problem has since broken
OpenJPEG's support for this broken image.
The upsampling code in the JPX decoder attempted to guess a
suitable upsampling factor. The guessed factor was wrong,
causing writes of samples outside of the decoded image buffer.
Simply limiting the coordinates to the image buffer would
not suffice because the factor was wrong for every upsampled
row of pixels. OpenJPEG does provide an upsampling factor,
so use that instead, and also take the component offsets into
account when decoding components into the pixmap. Combined,
this resolves the issue that previously triggered ASAN.
Thanks to oss-fuzz for reporting.
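A sketch using the per-component factors that OpenJPEG exposes
(opj_image_comp_t does carry dx/dy scaling factors and x0/y0 offsets;
the surrounding loop is illustrative, not the actual MuPDF code):

    #include <openjpeg.h>

    /* Replicate each decoded sample into the dx-by-dy block it
     * covers, honouring the component offset, and clamp to the
     * destination buffer. */
    static void
    upsample_component(opj_image_comp_t *comp, unsigned char *dst,
                       unsigned int dst_w, unsigned int dst_h)
    {
        OPJ_UINT32 x, y, i, j;
        for (y = 0; y < comp->h; y++)
            for (x = 0; x < comp->w; x++)
            {
                OPJ_INT32 v = comp->data[y * comp->w + x];
                for (j = 0; j < comp->dy; j++)
                    for (i = 0; i < comp->dx; i++)
                    {
                        OPJ_UINT32 ox = comp->x0 + x * comp->dx + i;
                        OPJ_UINT32 oy = comp->y0 + y * comp->dy + j;
                        if (ox < dst_w && oy < dst_h)
                            dst[oy * dst_w + ox] = (unsigned char)v;
                    }
            }
    }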