| Commit message | Author | Age | Files | Lines |
| |
New target sits between extract and fetch. Thus every build ensures
that each tarball is not corrupt before extraction.
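
A minimal sketch of the kind of check such a verify step implies, using
Python's hashlib (the file name and hash below are placeholders, not
real contrib values):

    import hashlib

    # Placeholder table; the real MD5 values live in the build system's
    # module definitions, not here.
    EXPECTED_MD5 = {
        'example-1.0.tar.gz': '00000000000000000000000000000000',
    }

    def tarball_ok(path, name):
        """True if the tarball's MD5 matches the recorded value."""
        digest = hashlib.md5()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                digest.update(chunk)
        return digest.hexdigest() == EXPECTED_MD5.get(name)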
|
| |
Fetch is now Python-based and runs on the same Python version as
configure. The source script is make/fetch.py. New features:

MD5 hash tracking for tarballs. MD5 values for all contribs have been
added. Upon download, the file is verified, and only then is it moved
into place inside downloads/. Files that exist before the build system
does a fetch will not be md5-checked.

Multiple URLs for tarballs. Each module may specify one or more URLs,
and by convention the official HandBrake URL should come first when
possible. Each URL is tried in sequence, and if one fails for any
reason, the next URL is tried. If no URL succeeds, a hard error is
reported.

Network fetching may be disabled via configure options. --disable-fetch
will hard-error if a fetch is attempted. --accept-fetch-url=SPEC and
--deny-fetch-url=SPEC offer an ACL-style mechanism using regexes
matched against URLs. For example,
--accept-fetch-url='.*/download.handbrake.fr/.*' would skip any
non-matching URLs (see the sketch below).

Build dependencies have been lightened: wget and curl are no longer
required. TODO: GTK packaging should also be able to remove those deps.
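
A rough sketch of the sequential-URL and accept/deny behaviour
described above (names are illustrative; this is not the actual
fetch.py API):

    import re
    import urllib.request

    def url_permitted(url, accept=(), deny=()):
        """ACL-style check: deny patterns always block; if any accept
        patterns are given, at least one must match."""
        if any(re.search(p, url) for p in deny):
            return False
        if accept:
            return any(re.search(p, url) for p in accept)
        return True

    def fetch_first(urls, dest, accept=(), deny=()):
        """Try each permitted URL in order; stop at the first success and
        hard-error if none succeed."""
        for url in urls:
            if not url_permitted(url, accept, deny):
                continue
            try:
                urllib.request.urlretrieve(url, dest)
                return url
            except OSError:
                continue  # any failure: fall through to the next URL
        raise RuntimeError('fetch failed for %s: no URL succeeded' % dest)

With this shape, --accept-fetch-url='.*/download.handbrake.fr/.*' would
correspond to accept=['.*/download.handbrake.fr/.*'], skipping any URL
that does not match.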
|
| |
It was dropping subtitles because the "end of CC" marker buffer can
have the same time as the next valid CC, which triggered the subtitle
overlap dropping code.
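
A hypothetical illustration of the failure mode, with a deliberately
naive overlap filter (the real check and the fix live in the C subtitle
code and may differ in detail):

    def drop_overlaps(events):
        """Naive filter: drop any event that does not start strictly after
        the previous kept event has ended."""
        kept, prev_stop = [], None
        for start, stop, text in events:
            if prev_stop is not None and start <= prev_stop:
                continue  # treated as an overlap and dropped
            kept.append((start, stop, text))
            prev_stop = stop
        return kept

    # A zero-length "end of CC" marker at t=90000 shares its timestamp
    # with the next valid caption, so the caption gets dropped as an
    # "overlap".
    events = [(0, 45000, 'HELLO'), (90000, 90000, ''), (90000, 180000, 'WORLD')]
    assert [t for _, _, t in drop_overlaps(events)] == ['HELLO', '']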
|
| |
Use hb_chapter_enqueue/dequeue
|
| |
* sync: correct timestamp discontinuities in sync instead of reader
This patch passes discontinuity information through the pipeline until
it reaches sync.c. The timestamps are passed through the pipeline as
read, unmodified, to sync.c (instead of attempting to correct
discontinuities in reader). In sync, when we see a discontinuity, we
know where the next timestamp should be, based on the timestamp and
duration of the previous buffer (before the discontinuity). So we
calculate an "SCR" offset from the difference between the timestamp
after the discontinuity and what we calculate it should be (see the
sketch below).
The old discontinuity handling code was broken for the following
reason. The MPEG STD timing model relies heavily on the decoder having
an STC that is phase-locked to the PCRs in the stream. When decoding a
broadcast stream, the decoder can count on the time measured between
PCRs using the STC to match to a high degree of accuracy,
i.e. STC - lastSTC == PCR - lastPCR. When a discontinuity occurs, the
decoder calculates a new PCR offset = PCR - STC, i.e. the offset is the
new PCR value minus what it would have been had there been no
discontinuity.
This does not work without a reliable STC, which we do not have. We
have been approximating one by averaging the duration of received
packets and extrapolating an "STC" from the last PTS and the average
packet duration, but that is highly variable and unreliable.
* decavcodec: fix data type of next_pts
It needs to be double so that partial ticks are not lost.
* deccc608sub: clarify comment
* sync: allow queueing more audio
Audio is small, and there is often a significant amount of audio in the
stream before the first video frame.
* sync: improve handling of damaged streams
When data is missing, the audio decoder was extrapolating timestamps
from the last PTS before the error, which caused sync issues.
Also, missing data can cause the video decoder to output a frame out of
order with the wrong SCR sequence. Drop such frames.
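
A minimal sketch of the offset arithmetic from the first item, in
Python for brevity (the real code is C in sync.c; the field names here
are illustrative):

    def apply_scr_offset(buffers):
        """Re-base timestamps across discontinuities.  Where the stream is
        flagged as discontinuous, the expected timestamp is the previous
        buffer's start + duration; the difference between the actual and
        expected timestamp becomes the new offset, which is subtracted
        from every timestamp that follows."""
        offset = 0
        prev = None
        for buf in buffers:
            if buf['discontinuity'] and prev is not None:
                expected = prev['start'] + prev['duration']
                offset = buf['start'] - expected
            buf['start'] -= offset
            prev = buf
        return buffers

    bufs = [
        {'start': 0,      'duration': 3000, 'discontinuity': False},
        {'start': 3000,   'duration': 3000, 'discontinuity': False},
        {'start': 900000, 'duration': 3000, 'discontinuity': True},   # PTS jump
        {'start': 903000, 'duration': 3000, 'discontinuity': False},
    ]
    assert [b['start'] for b in apply_scr_offset(bufs)] == [0, 3000, 6000, 9000]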
|
| |
* build: add ability to set c++ standard
* fdk-aac: Fix building with g++ 6, set c++98 standard
|
| |
... When "Add Multiple" is used.
|
| |
brainfart!
|
| |
HBVideo KVO dependencies.
|
| |
essentially an off-by-one error. OutputBuffer had to wait for one more
buffer before any output was performed, even after the queue had
already been filled to its minimum level.
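
A tiny illustration of the off-by-one (hypothetical constant and
function; the real OutputBuffer logic differs in detail):

    MIN_LEVEL = 4

    def ready_to_output(queue):
        # buggy: waited for one buffer *more* than the minimum level
        #   return len(queue) > MIN_LEVEL
        # fixed: start output as soon as the minimum level is reached
        return len(queue) >= MIN_LEVEL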
|
| |
... and not only when the version increases. This ensures that presets
from a newer version are not lost when temporarily reverting to an older
version.
|
| |
forced, default, and burned flags were getting assigned to the wrong
output tracks.
|
| |
This prevents audio from getting too far ahead of video, which improves
sync's ability to fix discontinuities.
|
| |
It's not strictly necessary because it gets done elsewhere as well. But
putting it here makes the code more understandable.
|
| |
... by allowing a deeper initial buffer when looking for the first PTS
of each stream.
|
| |
... at log level 11 ;)
|
| |
We were dropping all buffers before the start frame was found,
regardless of the buffer's start time. Now we keep track of the start
time of the last video frame seen and only drop buffers that start
before that frame.
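
An illustrative sketch of the new dropping rule (Python pseudocode of
the idea, not the actual implementation):

    def should_drop(buf_start, start_found, last_video_start):
        """Old rule: while the start frame has not been found, drop
        everything.  New rule: only drop buffers whose start time is
        earlier than the last video frame seen."""
        if start_found:
            return False
        if last_video_start is None:
            return True
        return buf_start < last_video_start

The caller would update last_video_start each time a video frame is
seen.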
|
| |
since sync interleaves its output by PTS, the stream of the incoming
buffer is usually not the same as the stream of the outgoing buffer.
This delays the data from reaching its respective fifo until the sync
work function for the output stream is next called. Writing directly to
the output fifo fixes this.
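
A rough sketch of the idea (Python, illustrative only; the real sync
code also waits for its queues to fill before emitting anything):

    import heapq
    import itertools
    from collections import deque

    class SyncSketch:
        def __init__(self, n_streams):
            self.counter = itertools.count()   # tie-breaker for the heap
            self.heap = []                     # (pts, n, stream_id, buf)
            self.out_fifo = [deque() for _ in range(n_streams)]

        def work(self, stream_id, buf):
            """Output is interleaved by PTS, so the lowest-PTS buffer usually
            belongs to a different stream than the one whose work function is
            running.  Appending it straight to that stream's fifo avoids
            waiting for that stream's next work() call."""
            heapq.heappush(self.heap, (buf['pts'], next(self.counter), stream_id, buf))
            pts, _, out_stream, out_buf = heapq.heappop(self.heap)
            self.out_fifo[out_stream].append(out_buf)   # direct write, no delay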
|
| |
If the entire stream fits in the sync queues, the first PTS was not
detected and initial offsets were not applied.
|
| |
The way the constant is defined requires an (int64_t) cast to force it
to be signed.
|