Commit messages
The angle can affect the seek position.
This broke when sync was reworked. Sync now expects job->pts_to_start
to be relative to the first frame that it sees.
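
A rough sketch of the arithmetic this implies, using hypothetical names (the
actual fix lives in libhb and is not shown here): rebase an absolute start
point onto the first timestamp that sync sees.

    #include <stdint.h>

    /* Illustrative only: make an absolute start pts relative to the first
     * frame observed, clamping so we never start before that frame. */
    static int64_t rebase_start(int64_t pts_to_start, int64_t first_pts)
    {
        int64_t rel = pts_to_start - first_pts;
        return rel > 0 ? rel : 0;
    }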
Splicing of buffers that got duplicated to multiple output fifos was
broken.
Caused https://forum.handbrake.fr/viewtopic.php?f=11&t=33666
Don't use a hard-coded array of 100 fifos; allocate what is needed.
We probably just crashed if the number of tracks was > 99, since the
limit of 100 fifos was not universally checked.
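
A sketch of the allocation change, with made-up struct and function names:
size the fifo table to the actual track count instead of a fixed 100-entry
array.

    #include <stdlib.h>

    typedef struct fifo fifo_t;          /* opaque placeholder type */

    typedef struct
    {
        int       count;                 /* number of output tracks */
        fifo_t ** fifos;                 /* sized to match, no fixed limit */
    } fifo_table_t;

    static fifo_table_t * fifo_table_init(int track_count)
    {
        fifo_table_t * t = calloc(1, sizeof(*t));
        if (t == NULL)
            return NULL;
        t->count = track_count;
        t->fifos = calloc(track_count, sizeof(t->fifos[0]));
        if (t->fifos == NULL)
        {
            free(t);
            return NULL;
        }
        return t;
    }

    static void fifo_table_close(fifo_table_t ** pt)
    {
        if (pt == NULL || *pt == NULL)
            return;
        free((*pt)->fifos);
        free(*pt);
        *pt = NULL;
    }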
We split PES packets when there is a PCR change in the middle of the
packet. This works fine for audio and video where the decoder parses
the ES to find frame boundaries. But it does not work for some decoders
such as PGS subtitles. So mark split buffers and reassemble them in
reader after processing the PCR change.
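
A sketch of the reassembly idea, with hypothetical flag and buffer types: the
demuxer marks the continuation piece of a split PES packet, and reader glues
it back onto the buffer it was split from once the PCR change has been handled.

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define FLAG_SPLIT_CONTINUATION 0x01  /* hypothetical "second half of a split packet" mark */

    typedef struct
    {
        uint8_t * data;
        size_t    size;
        unsigned  flags;
    } pes_buf_t;

    /* Append the continuation piece onto the packet it was split from.
     * Returns 1 if merged, 0 if there was nothing to do, -1 on failure. */
    static int pes_reassemble(pes_buf_t * head, const pes_buf_t * cont)
    {
        if (!(cont->flags & FLAG_SPLIT_CONTINUATION))
            return 0;
        uint8_t * merged = realloc(head->data, head->size + cont->size);
        if (merged == NULL)
            return -1;
        memcpy(merged + head->size, cont->data, cont->size);
        head->data  = merged;
        head->size += cont->size;
        return 1;
    }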
Simplify the job initialization sequence, clean up the code, and document
the dependencies in the sequence better.
Make hb_add set job->sequence_id. It is no longer necessary for the
frontend to do this. If the frontend needs the sequence_id, it is
returned by hb_add().
Clean up use of interjob. do_job() now uses sequence_id to detect when
a new sequence of related jobs is running and automatically clears
interjob.
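
A hedged sketch of the interjob reset described above: do_job() detecting a
new sequence id and clearing the shared per-sequence state. The struct layout
here is made up for illustration and is not libhb's.

    #include <string.h>

    typedef struct
    {
        int  last_sequence_id;
        long total_frames;          /* example of state shared between passes */
    } interjob_t;

    static void interjob_check(interjob_t * ij, int job_sequence_id)
    {
        if (ij->last_sequence_id != job_sequence_id)
        {
            /* New sequence of related jobs: drop the stale state. */
            memset(ij, 0, sizeof(*ij));
            ij->last_sequence_id = job_sequence_id;
        }
    }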
This brings together several independent implementations of a simple
buffer list manager.
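
A minimal sketch of the kind of buffer list manager being consolidated: a
FIFO-ordered singly linked list with append and clear. Names and fields are
illustrative, not the libhb API.

    #include <stdlib.h>

    typedef struct buf
    {
        struct buf * next;
        /* payload omitted */
    } buf_t;

    typedef struct
    {
        buf_t * head;
        buf_t * tail;
        int     count;
    } buf_list_t;

    static void buf_list_append(buf_list_t * list, buf_t * b)
    {
        b->next = NULL;
        if (list->tail)
            list->tail->next = b;
        else
            list->head = b;
        list->tail = b;
        list->count++;
    }

    static void buf_list_clear(buf_list_t * list)
    {
        buf_t * b = list->head;
        while (b)
        {
            buf_t * next = b->next;
            free(b);
            b = next;
        }
        list->head = list->tail = NULL;
        list->count = 0;
    }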
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@7332 b64f7644-9d1e-0410-96f1-a4d463321fa5
... instead of a 0-length buffer.
This fixes this issue:
https://forum.handbrake.fr/viewtopic.php?f=12&t=31959
Theora can create 0-length output. These 0-length frames indicate
duplicate frames. So we can't use 0-length buffers to indicate the end
of the stream.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@7143 b64f7644-9d1e-0410-96f1-a4d463321fa5
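
A sketch of the distinction, with a hypothetical flag name: a zero-size buffer
can legitimately mean a duplicate frame (as Theora produces), so end-of-stream
gets its own explicit mark instead.

    #include <stddef.h>

    #define BUF_FLAG_EOF 0x01          /* hypothetical end-of-stream mark */

    typedef struct
    {
        size_t   size;                 /* 0 is legal: Theora uses it for dup frames */
        unsigned flags;
    } stream_buf_t;

    static int is_end_of_stream(const stream_buf_t * b)
    {
        return (b->flags & BUF_FLAG_EOF) != 0;   /* never test size == 0 for EOF */
    }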
When a file demuxed by libav does not start at time 0, our seek
to the initial start pts tried to seek too far forward.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@7124 b64f7644-9d1e-0410-96f1-a4d463321fa5
Just invalidate the timestamps and let the decoders interpolate.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@7121 b64f7644-9d1e-0410-96f1-a4d463321fa5
We must set a start time for subtitles or the vobsub decoder (and probably
other subtitle decoders) will assign duplicate timestamps. So when a
discontinuity occurs, make the closest guess that we can.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@7072 b64f7644-9d1e-0410-96f1-a4d463321fa5
This global was shared between the CLI and libhb and used as a back door to
force scan and encode passes to use the same ffmpeg context for hardware
decoding. Aside from the fact that this context sharing should not be necessary
and needs fixing, this information belongs in the hb_handle_t that is shared
between the scan and the encode. So put it there and make sure the hb_handle_t
gets propagated to where the flag is needed.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@7028 b64f7644-9d1e-0410-96f1-a4d463321fa5
Simplifies the WinGui.
This also changes how jobs are processed. Creating the sub-jobs for
multiple passes is delayed until after scanning and immediately before
running the job.
Working status has also changed. Sub-job passes are identified in status
with an ID that allows the frontend to definitively identify what pass
is in progress.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@6976 b64f7644-9d1e-0410-96f1-a4d463321fa5
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@6852 b64f7644-9d1e-0410-96f1-a4d463321fa5
There are several changes to job and title structs that break
current Windows interop code. The interop code should be changed
such that it only uses the JSON APIs. So if there are any missing
features (or bugs) in these APIs, please let me know.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@6602 b64f7644-9d1e-0410-96f1-a4d463321fa5
The primary problem was in setting our "zero" time in reader based on a
stream that is not decoded. Since this stream never reaches sync, there
would appear to be a long initial frame from sync's perspective.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@6506 b64f7644-9d1e-0410-96f1-a4d463321fa5
scr_offset was not accounted for in the stop time.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@6372 b64f7644-9d1e-0410-96f1-a4d463321fa5
In some cases, initial data when in p-to-p mode causes libav decoder
initialization to fail. This only happens when multi-threaded encoding
is enabled.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@6229 b64f7644-9d1e-0410-96f1-a4d463321fa5
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@6042 b64f7644-9d1e-0410-96f1-a4d463321fa5
-1 is not a good value as a flag for invalid timestamps.
There are cases where small negative timestamps are useful.
So this eliminates a potential ambiguity.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@6001 b64f7644-9d1e-0410-96f1-a4d463321fa5
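
A sketch of the idea with a made-up sentinel name: reserve an out-of-band
value (here INT64_MIN) for "no timestamp", so that -1 and other small negative
values remain usable as real timestamps.

    #include <stdint.h>

    /* Hypothetical sentinel: INT64_MIN can never be a real timestamp,
     * while -1 (and other small negatives) can be. */
    #define TS_INVALID INT64_MIN

    static int ts_is_valid(int64_t ts)
    {
        return ts != TS_INVALID;
    }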
The adjustment to the start time was made incorrectly.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@5614 b64f7644-9d1e-0410-96f1-a4d463321fa5
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@5318 b64f7644-9d1e-0410-96f1-a4d463321fa5
A "fix" for another sync issue caused a regression in handling of DVD sync.
So revert the change and make other improvements.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@5153 b64f7644-9d1e-0410-96f1-a4d463321fa5
Issues with timestamps made cfr think it needed to duplicate a few thousand
frames. This leads to over-consumption of memory since all duplicates
are placed in a list at once.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@5082 b64f7644-9d1e-0410-96f1-a4d463321fa5
Cleans up several memory leaks that were unavoidable with the old API.
Clearly separates titles from jobs. Titles are set during scan and
are never modified now.
Since titles are immutable, this led to some API changes. For example,
we were setting chapter names in the title from the front ends. Now
these get set in the job.
These new APIs allow us to start moving away from our use of title->job.
Eventually, I would like to eliminate title->job completely, but the
Mac UI is too tightly tied to using this field to allow removing it
at this time. So there is temporarily a convenience function used
only by the Mac UI that allows it to continue using title->job and also
use the new APIs.
New APIs:
typedef struct hb_title_set_s hb_title_set_t;
struct hb_title_set_s
{
    hb_list_t * list_title;
    int         feature;    // Detected DVD feature title
};
hb_title_set_t * hb_get_title_set( hb_handle_t * );
This is just something I added to clean up how "feature title" info
is passed.
hb_job_t * hb_job_init( hb_title_t * title );
Initializes a new job with default settings from the title.
hb_job_t * hb_job_init_by_index( hb_handle_t *h, int title_index );
Same as hb_job_init(). For use by win Interop lib.
void hb_job_reset( hb_job_t * job );
Convenience function for the MacUi.
Clears audio, subtitle, and filter lists. The macui still uses
title->job because it is so intricately tied to it. So I created
this convenience function that it can call after adding a job.
void hb_job_close( hb_job_t ** job );
Releases the job and all resources it contains.
void hb_job_set_advanced_opts( hb_job_t *job, const char *advanced_opts );
Makes a copy of "advanced_opts" and stores in job.
Freed by hb_job_close().
void hb_job_set_file( hb_job_t *job, const char *file );
Makes a copy of "file" and stores in job.
Freed by hb_job_close().
void hb_chapter_set_title(hb_chapter_t *chapter, const char *title);
Makes a copy of "title" and stores in chapter.
Freed by hb_chapter_close().
Recommended usage (cli and lingui are updated to do this):
    job = hb_job_init( title );
    // set job settings ...
    hb_add(h, job);
    hb_job_close( &job );
I have also added new APIs for managing metadata. These are
used to add metadata to a job.
void hb_metadata_set_name( hb_metadata_t *metadata, const char *name );
void hb_metadata_set_artist( hb_metadata_t *metadata, const char *artist );
void hb_metadata_set_composer( hb_metadata_t *metadata, const char *composer );
void hb_metadata_set_release_date( hb_metadata_t *metadata, const char *release_date );
void hb_metadata_set_comment( hb_metadata_t *metadata, const char *comment );
void hb_metadata_set_genre( hb_metadata_t *metadata, const char *genre );
void hb_metadata_set_album( hb_metadata_t *metadata, const char *album );
void hb_metadata_set_coverart( hb_metadata_t *metadata, const uint8_t *coverart, int size );
Example:
    job = hb_job_init( title );
    // set job settings ...
    hb_metadata_set_artist( job->metadata, "Danny Elfman" );
    hb_add(h, job);
    hb_job_close( &job );
Some APIs have changed in order to avoid using title incorrectly and
use the new hb_title_set_t.
-void hb_autopassthru_apply_settings( hb_job_t * job, hb_title_t * title );
+void hb_autopassthru_apply_settings( hb_job_t * job );
-void hb_get_preview( hb_handle_t *, hb_title_t *, int, uint8_t * );
+void hb_get_preview( hb_handle_t *, hb_job_t *, int, uint8_t * );
hb_thread_t * hb_scan_init( hb_handle_t *, volatile int * die,
                            const char * path, int title_index,
-                           hb_list_t * list_title, int preview_count,
+                           hb_title_set_t * title_set, int preview_count,
                            int store_previews, uint64_t min_duration );
These APIs have been removed. Win Interop will need some changes.
I think what I've provided will be sufficient, but let me know if it's not.
-void hb_get_preview_by_index( hb_handle_t *, int, int, uint8_t * );
-void hb_set_anamorphic_size_by_index( hb_handle_t *, int,
-                                      int *output_width, int *output_height,
-                                      int *output_par_width, int *output_par_height );
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@5058 b64f7644-9d1e-0410-96f1-a4d463321fa5
Make reader compute subtitle scan progress based on timestamps seen and
duration instead of chapter marks.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@4864 b64f7644-9d1e-0410-96f1-a4d463321fa5
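
A sketch of the progress computation this implies, with hypothetical names:
the fraction of the title duration covered by the timestamps seen so far,
clamped to [0, 1].

    #include <stdint.h>

    /* duration and last_pts_seen share the same clock (e.g. 90 kHz ticks). */
    static double subtitle_scan_progress(int64_t last_pts_seen, int64_t duration)
    {
        if (duration <= 0)
            return 0.0;
        double p = (double)last_pts_seen / (double)duration;
        if (p < 0.0) p = 0.0;
        if (p > 1.0) p = 1.0;
        return p;
    }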
GetFifoForId() was not re-entrant (it used a static array).
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@4809 b64f7644-9d1e-0410-96f1-a4d463321fa5
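
A sketch of the general fix for this class of bug, not the actual
GetFifoForId() code: let the caller own both the lookup table and the output
array, so concurrent calls cannot trample a shared static buffer.

    #include <stddef.h>

    typedef struct
    {
        int           id;      /* stream/track id this fifo belongs to */
        struct fifo * fifo;    /* opaque payload */
    } fifo_map_entry_t;

    /* Reentrant: every caller supplies its own map and output storage. */
    static size_t fifos_for_id(const fifo_map_entry_t * map, size_t map_len,
                               int id, struct fifo ** out, size_t out_cap)
    {
        size_t n = 0;
        for (size_t i = 0; i < map_len && n < out_cap; i++)
        {
            if (map[i].id == id)
                out[n++] = map[i].fifo;
        }
        return n;
    }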
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@4737 b64f7644-9d1e-0410-96f1-a4d463321fa5
- move all subtitle hit counting to the decoders instead of reader
---> allows us to count actual subtitles rather than just packets
- always count subtitles, even when not doing a scan (may be useful in the future)
Miscellaneous improvements:
- always insert select_subtitle at the head of the output subtitle list, to make it less likely to be dropped
- when multiple subtitle tracks have forced hits, pick the track with the fewest forced hits
---> Foreign Audio Search should now work with Star Wars on Blu-ray
- logging improvements (more readable, and log job->select_subtitle configuration - Forced Only vs. All, Render vs. Passthrough)
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@4622 b64f7644-9d1e-0410-96f1-a4d463321fa5
This patch enhances the filter objects. The 2 key improvements are:
1. A filter can change the image dimensions as frames pass through it.
2. A filter can output more than one frame.
In addition, I have:
Moved cropping & scaling into a filter object
Added 90 degree rotation to the rotate filter
Moved subtitle burn-in rendering to a filter object.
Moved VFR/CFR handling into a framerate shaping filter object.
Removed render.c since all its responsibilities got moved to filters.
Improves VOBSUB and SSA subtitle handling. Allows subtitle animations.
SSA karaoke support.
My apologies in advance if anything breaks ;)
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@4546 b64f7644-9d1e-0410-96f1-a4d463321fa5
Brainfart caused start time detection in TS files to break.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@4488 b64f7644-9d1e-0410-96f1-a4d463321fa5
For TS streams that don't have PCRs, we substitute the DTS timestamp
from the video track (or PTS if we don't see DTS). But these can bounce
around or be more widely spaced in the stream than PCRs are meant to be. So I
have added a test to see if the timestamp looks like a discontinuity.
Then I only pass the timestamp as a PCR if there appears to be a
discontinuity. This prevents a lot of scr_offset thrashing.
I have also fixed an error in our scr_offset processing. It is rarely
triggered and its effects are so minor with well-behaved streams that
it would be completely unnoticed. But with the test stream I was using,
it caused a factor of 10 times more "audio went backwards" errors.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@4254 b64f7644-9d1e-0410-96f1-a4d463321fa5
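
A sketch of the kind of gate described above, with made-up names and an
arbitrary threshold (the real test and value may differ): only treat the video
timestamp as a PCR substitute when it jumps far enough from the last clock
value to look like a discontinuity.

    #include <stdint.h>

    /* 90 kHz clock; treat anything past ~0.5 s as a discontinuity.
     * The threshold is purely illustrative. */
    #define DISCONTINUITY_THRESHOLD (90000 / 2)

    static int looks_like_discontinuity(int64_t ts, int64_t last_clock)
    {
        int64_t d = ts - last_clock;
        if (d < 0)
            d = -d;
        return d > DISCONTINUITY_THRESHOLD;
    }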
Adds support for MPEG-1 PS, HDDVD EVOB, and video codecs other
than mpeg1/2 in PS
Improves probing of unknown stream types by using Libav's probing
utilities
Uses Libav to probe for the DTS profile in TS and PS files when the profile
is unknown
Improves framerate detection (improved telecine detection)
Fixes preview generation for mpeg video that has only a single sequence
header
Patches Libav to handle VC-1 pulldown flags properly
Improves PS and TS stream log information
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@4220 b64f7644-9d1e-0410-96f1-a4d463321fa5
For files that are demuxed by Libav, we must share the format context
with the decoder so that it can obtain the codec context for each stream.
The code that did this was very convoluted and difficult to understand.
This is simplified by passing the context in hb_title_t.
Reader was closing stream files before the decoder was finished with the
context. This created the need to delay the actual close and cache
the context. Changed reader so it behaves more like the rest of HandBrake's
work objects which lets us explicitly close after the decoders are finished.
Libav does some probing of the file when av_find_stream_info is called.
This probing leaves the format context in a bad state for some files and
causes subsequent reads or seeks to misbehave. So open 2 contexts in
ffmpeg_open. One is used only for probing, and the other only for reading.
decavcodec.c had 2 separate decoders for files demuxed by hb and files
demuxed by Libav. They have been combined and simplified.
Previously, it was not possible to decode one source audio track multiple
times in order to fan it out to multiple output tracks if the file is
demuxed by Libav. We were using the codec context from the format context.
Since there is only one of these for each stream, we could only do one
decode for each stream. Use avcodec_copy_context to make copies of
the codec context and allow multiple decodes. This allows removal of
a lot of special case code for Libav streams that was necessary to
duplicate the output of the decoder.
Patch Libav's mkv demux to fix a seek problem. This has been pushed
upstream, so the next time we update Libav, we must remove this patch.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@4141 b64f7644-9d1e-0410-96f1-a4d463321fa5
TrueHD and DTS-HD now show up in the audio list alongside their
AC-3 and DTS counterparts.
Note that currently the DTS-HD decoder we are using (ffmpeg) discards
the HD portion of the stream and only decodes the DTS core portion. So
there is no advantage yet to using the DTS-HD stream. In the future
I would like to add DTS-HD passthru support and hopefully ffmpeg will
improve their DTS-HD decoder.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@3950 b64f7644-9d1e-0410-96f1-a4d463321fa5
With p-to-p, the audio sync thread waits for the video sync thread to
reach the designated start point. There is a possibility that the video
decoder will drop so many frames that the audio sync fifo fills before
any frames reach the video sync thread. When this happens, drop some
audio to unplug the pipeline.
Also, to make this less likely to happen, start sending data to the video
decoder 2 seconds before the actual desired start point. This will allow
the decoder to find an initial i-frame before the audio stalls since the
audio sync thread drops any audio that is before the designated start point.
A side effect of this is that our start time is now more accurate since the decoder
is only dropping frames before the start point instead of after.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@3917 b64f7644-9d1e-0410-96f1-a4d463321fa5
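
A sketch of the pre-roll arithmetic on a 90 kHz clock, with hypothetical
names; the two-second figure comes from the message above, the rest is
illustrative.

    #include <stdint.h>

    #define PREROLL_TICKS (2 * 90000)     /* ~2 seconds at 90 kHz */

    /* Where reader starts feeding the video decoder so an I-frame is found
     * before the audio-side start gate fills the fifos. */
    static int64_t reader_start_pts(int64_t pts_to_start)
    {
        int64_t start = pts_to_start - PREROLL_TICKS;
        return start > 0 ? start : 0;     /* don't run off the front of the title */
    }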
For some reason, frames that are tagged as recovery points in many BD h.264
streams do not result in complete frames when decoded. Pushing 2 extra
frames through the decoder seems to always fix this. This patch extends
something I was already doing when generating previews from a BD structure.
This just applies the same logic to ffmpeg streams that have h.264 video.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@3895 b64f7644-9d1e-0410-96f1-a4d463321fa5
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@3797 b64f7644-9d1e-0410-96f1-a4d463321fa5
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@3685 b64f7644-9d1e-0410-96f1-a4d463321fa5
Unencrypted BD directory trees only. Doesn't support iso images.
Also, no PGS subtitle support yet.
Chapters and angles are supported.
Adds a new contrib, libbluray.
Adds a new option to hb_scan() giving the duration below which short
titles are filtered. This applies to BD and DVD multi-title scans only;
it does not apply to any single-title scans.
Fixes memory leak during scan. hb_buffer_close() was not freeing
all buffers in a chain of buffers passed to it.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@3510 b64f7644-9d1e-0410-96f1-a4d463321fa5
Some rearrangement of code that was previously done to reader caused
scr_offset to be subtracted from renderOffset twice whenever a
new scr_offset was calculated. This could cause subsequent timestamp
calculations to be way off and, in at least one known case, led to
a crash due to consuming too much memory in hb_buffer_t's.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@3399 b64f7644-9d1e-0410-96f1-a4d463321fa5
In reader, the timestamps were not being correctly adjusted for scr offset
before being compared to the start time. This could cause an early start in reader.
Then in sync, syncAudioWork stalled until the correct start of video was
found, causing the audio fifo to fill and stall the whole pipeline.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@3329 b64f7644-9d1e-0410-96f1-a4d463321fa5
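
A sketch of the comparison being fixed, with hypothetical names: rebase the
packet timestamp by the current scr offset before testing it against the
requested start point.

    #include <stdint.h>

    static int reached_start_point(int64_t pkt_pts, int64_t scr_offset,
                                   int64_t pts_to_start)
    {
        /* Compare in the rebased clock, otherwise reader can start early. */
        return (pkt_pts - scr_offset) >= pts_to_start;
    }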
Allows frame- and pts-based start points. End points were already
supported previously.
New job variables pts_to_start and frame_to_start specify the start point.
There can be a period during the encode where it has to search for
the start point. During this period, libhb sets a new state,
HB_STATE_SEARCHING, and reports progress and ETA until the start point is found.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@3039 b64f7644-9d1e-0410-96f1-a4d463321fa5
pthread_cond_timedwait can wake early. Under certain system load conditions, this
happens often. I was going ahead and adding buffers whenever it woke, regardless
of whether the condition had actually been met. So the fifo depth would
increase until memory ran out.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@3030 b64f7644-9d1e-0410-96f1-a4d463321fa5
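
The classic fix for this class of bug, shown as a small self-contained sketch
rather than HandBrake's fifo code: re-test the predicate in a loop around
pthread_cond_timedwait(), since it can return without the condition being true.

    #include <pthread.h>
    #include <time.h>

    typedef struct
    {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        int             depth;      /* current number of buffers in the fifo */
        int             max_depth;  /* producer must wait while depth >= max_depth */
    } bounded_fifo_t;

    /* Wait (with a timeout) until there is room, re-testing the predicate on
     * every wakeup so a spurious or timed-out wakeup can't overfill the fifo. */
    static int wait_for_room(bounded_fifo_t * f, const struct timespec * deadline)
    {
        int ok;
        pthread_mutex_lock(&f->lock);
        while (f->depth >= f->max_depth)
        {
            if (pthread_cond_timedwait(&f->cond, &f->lock, deadline) != 0)
                break;              /* timed out (or error): give up this round */
        }
        ok = (f->depth < f->max_depth);
        pthread_mutex_unlock(&f->lock);
        return ok;
    }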
Reader drops all buffers till it finds video or audio.
But since the video and audio fifos are null when indepth_scan is
set, we never see video or audio.
The solution is to not drop buffers in indepth-scan mode.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@3011 b64f7644-9d1e-0410-96f1-a4d463321fa5
pipeline
For HD sources on an 8-core system with hyperthreading, we were using 1.5GB
of RAM. Adding the 600MB x264 uses for rc-lookahead pushes it north of 2GB.
To reduce our memory usage, the fifo depths have been reduced and are no longer
a multiple of the CPU count. Use of hb_snooze has been eliminated in the encoding
pipeline so that performance doesn't fall as a result of the reduced fifo depths.
In sync, audio and video were each given separate threads so that each can wait on
its respective input fifo without blocking the others. In muxcommon, each stream
being muxed was given a separate thread so that each can wait on its respective fifo.
This allows the removal of hb_snooze in the sync and muxer work loops. In both sync
and muxer, there is common data that is shared by all threads, so special init
routines allocate this shared data and initialize the threads.
git-svn-id: svn://svn.handbrake.fr/HandBrake/trunk@3007 b64f7644-9d1e-0410-96f1-a4d463321fa5