1. Warn when a title is obviously interlaced because a lot of the previews show combing.
2. Do rough frame-by-frame combing detection, and route around the deinterlacer when the frame doesn't have obvious combing. This is fast, and works with ffmpeg (fast) deinterlacing as well as yadif.
Once step 2 is stable, it should be included in the app as an automated filtering option, a rebranded VFR. It'd run VFR with pullup using -1 breaks* and frame-by-frame comb detection on every source. Or, it could be separated out from VFR as a new deinterlacing mode. Right now, locally, I'm uncommitted--I still have it running all the time.
3. Do comb detection on groups of, say, 4 lines at a time. This is pretty much what the current comb detection does, except it goes through every chunk of 4 lines in the frame at once. The difference would be linking the results to yadif_filter, so it'll only call yadif_filter_line when combing is detected in those 4 lines. This is slower, and only works with yadif, since ffmpeg deinterlacing works with whole frames. Of course, we could rip the algorithm out of ffmpeg, do it in libhb like we do yadif, and then there'd be precise enough control to only deinterlace specific lines of a frame. Might be a good idea for debugging, too...allows testing of this comb detection stuff without the timesink that is yadif.
4. Do pixel-by-pixel detection. In yadif_filter_line, only generate a prediction for the current pixel when there appears to be interlacing. A function would run on every frame to generate a table of what pixels to deinterlace, and yadif_filter_line would route around the heavy lifting for ones that didn't need it, and just pass through cur[0] as dst[0]. This seems like the best way to go about things, but requires some hard-core image analysis know-how I lack. Unless we just do the current comb detection on individual pixels ("Is the current pixel more similar to the one below, or the one two below?").
5. Even cooler would be what name99 suggested in a Ponies thread...going beyond the basic differences check for combing that the algorithm from 32detect uses. Instead, break things up into macroblocks for the spatial checks. Then...and this is the part that I think would be awesome...patch libmpeg2 to pass through everything it learns from decoding the frame, and use that to help decide:
name99 wrote: Detection requires us to define anomalous fields. This is, presumably, a pair of fields that we expect to be part of a single frame but they are not. This could presumably be done through a combination of
- defining an energy for each macroblock concentrated in the difference between even and odd lines. This could be something as simple as subtract each even luma line in an 8x8 from the corresponding odd line, sum the absolute diffs, and compare with the overall energy in the block. If we have enough of these at small values, or perhaps a few at large values, we flag the frame.
- using the various flags MPEG2 provides to encode interlaced blocks (eg the alternative IDCT pattern and the alternative macroblock motion stuff). (This of course requires the ability of the decoder to annotate the decoded frame that it then sends on to the encoder stage. But this doesn't have to be too difficult. Just add a new field in the decoded frame structure that can hold a block of hints, and have the decoder just set these hints as it's doing its thing. A bunch of stores that nothing else depends on, at the point where you parse the IDCT pattern type and macroblock motion stuff, shouldn't slow down the code at all.)
Anyway, that's beyond me, but...I hope someone pursues it.
In the meantime, I'm thinking about how to schedule 1 through 4. 1 could cause issues if it tries to detect on a preview that wasn't successfully generated (DVDReadBlocks Failed or whatever). 2 is simply a matter of tuning parameters to give decent performance with interlaced material without decimating picture quality on animation. 3 and 4 will require some experimentation. I've played around with them a little, but I'm still cautious. It's tough to find ways to impinge on the yadif process without breaking things. Although, my previous attempts were made prior to diagramming it....
So. If anyone is still following this, here's a diff for 1 and 2:
http://handbrake.pastebin.ca/930493
Index: test/test.c
===================================================================
--- test/test.c (revision 1331)
+++ test/test.c (working copy)
@@ -300,6 +300,13 @@
fprintf( stderr, " + %d, %s (iso639-2: %s)\n", i + 1, subtitle->lang,
subtitle->iso639_2);
}
+
+ if(title->detected_interlacing)
+ {
+ /* Interlacing was found in 6 or more preview frames */
+ fprintf( stderr, " + interlacing artifacts detected\n");
+ }
+
}
static int HandleEvents( hb_handle_t * h )
Index: libhb/hb.c
===================================================================
--- libhb/hb.c (revision 1331)
+++ libhb/hb.c (working copy)
@@ -426,6 +426,111 @@
avpicture_free( &pic_in );
}
+ /**
+ * Analyzes a frame to detect interlacing artifacts
+ * and returns true if interlacing (combing) is found.
+ *
+ * Code taken from Thomas Oestreich's 32detect filter
+ * in the Transcode project, with minor formatting changes.
+ *
+ * @param buf An hb_buffer structure holding valid frame data.
+ * @param width The frame's width in pixels.
+ * @param height The frame's height in pixels.
+ */
+int hb_detect_comb( hb_buffer_t * buf, int width, int height)
+{
+ int j, k, n, off, block, cc_1, cc_2, cc[3], flag[3];
+ uint16_t s1, s2, s3, s4;
+ cc_1 = 0; cc_2 = 0;
+
+ /* These values are defined in hb.h */
+ int thres = THRESHOLD;
+ int eq = COLOR_EQUAL;
+ int diff = COLOR_DIFF;
+
+ int offset = 0;
+
+ for(k=0; k < 3; k++)
+ {
+ /* One pass for Y, one pass for Cb, one pass for Cr */
+ if( k == 1 )
+ {
+ /* Y has already been checked, now offset by Y's dimensions
+ and divide all the other values by 2, since Cr and Cb
+ are half-size compared to Y. */
+ offset = width * height;
+ width >>= 1;
+ height >>= 1;
+ thres >>= 1;
+ eq >>= 1;
+ diff >>= 1;
+ }
+ else if (k == 2 )
+ {
+ /* Y and Cb are done, so the offset needs to be bumped
+ so it's width*height + (width / 2) * (height / 2).
+ width and height were already halved above, so just add
+ one more chroma plane's worth. (Beware: offset *= 5/4
+ truncates to *= 1 in integer math.) */
+ offset += width * height;
+ }
+
+ /* Look at one horizontal line at a time */
+ block = width;
+
+ for(j=0; j<block; ++j)
+ {
+ off=0;
+
+ for(n=0; n<(height-4); n=n+2)
+ {
+ /* Look at groups of 4 sequential horizontal lines */
+ s1 = ((buf->data+offset)[off+j ] & 0xff);
+ s2 = ((buf->data+offset)[off+j+ block] & 0xff);
+ s3 = ((buf->data+offset)[off+j+2*block] & 0xff);
+ s4 = ((buf->data+offset)[off+j+3*block] & 0xff);
+
+ /* Note if the 1st and 3rd lines are similar in
+ color while the 1st and 2nd lines are different.*/
+ if((abs(s1 - s3) < eq) &&
+ (abs(s1 - s2) > diff)) ++cc_1;
+
+ /* Note if the 2nd and 4th lines are similar in
+ color while the 2nd and 3rd lines are different.*/
+ if((abs(s2 - s4) < eq) &&
+ (abs(s2 - s3) > diff)) ++cc_2;
+
+ /* Now move down 2 horizontal lines before starting over.*/
+ off +=2*block;
+ }
+ }
+
+ /* Compare results. The final metric is a rough rate of combed
+ line pairs per 1000 pixels, normalized to the plane size, to decide
+ whether enough lines showed alternating colors for the frame size. */
+ cc[k] = (int)((cc_1 + cc_2)*1000.0/(width*height));
+
+ /* If the plane's cc score meets the threshold, flag it as combed. */
+ flag[k] = 0;
+ if(cc[k] > thres)
+ {
+ flag[k] = 1;
+ }
+ }
+
+#if 0
+/* Debugging info */
+// if(flag)
+ hb_log("flags: %i/%i/%i | cc0: %i | cc1: %i | cc2: %i", flag[0], flag[1], flag[2], cc[0], cc[1], cc[2]);
+#endif
+
+ /* If any of the three planes shows combing, tell the caller. */
+ if (flag[0] || flag[1] || flag[2] )
+ {
+ return 1;
+ }
+
+ return 0;
+}
+
+
/**
* Calculates job width and height for anamorphic content,
*
Index: libhb/hb.h
===================================================================
--- libhb/hb.h (revision 1331)
+++ libhb/hb.h (working copy)
@@ -75,6 +75,14 @@
Returns the list of valid titles detected by the latest scan. */
hb_list_t * hb_get_titles( hb_handle_t * );
+/* hb_detect_comb()
+ Analyze a frame for interlacing artifacts, returns true if they're found.
+ Taken from Thomas Oestreich's 32detect filter in the Transcode project. */
+int hb_detect_comb( hb_buffer_t * buf, int width, int height);
+#define COLOR_EQUAL 10 // Sensitivity for detecting similar colors.
+#define COLOR_DIFF 30 // Sensitivity for detecting different colors
+#define THRESHOLD 9 // Sensitivity for flagging planes as combed
+
void hb_get_preview( hb_handle_t *, hb_title_t *, int,
uint8_t * );
void hb_set_size( hb_job_t *, int ratio, int pixels );
Index: libhb/deinterlace.c
===================================================================
--- libhb/deinterlace.c (revision 1331)
+++ libhb/deinterlace.c (working copy)
@@ -42,6 +42,8 @@
int yadif_parity;
int yadif_ready;
+ int comb;
+
uint8_t * yadif_ref[4][3];
int yadif_ref_stride[3];
@@ -58,6 +60,9 @@
AVPicture pic_out;
hb_buffer_t * buf_out[2];
hb_buffer_t * buf_settings;
+
+ int deinterlaced_frames;
+ int passed_frames;
};
hb_filter_private_t * hb_deinterlace_init( int pix_fmt,
@@ -87,6 +92,7 @@
static void yadif_store_ref( const uint8_t ** pic,
hb_filter_private_t * pv )
{
+ /* 1st entry becomes 4th, 2nd entry becomes 1st. */
memcpy( pv->yadif_ref[3],
pv->yadif_ref[0],
sizeof(uint8_t *)*3 );
@@ -94,17 +100,22 @@
memmove( pv->yadif_ref[0],
pv->yadif_ref[1],
sizeof(uint8_t *)*3*3 );
-
+
+ /* 3 color planes */
int i;
for( i = 0; i < 3; i++ )
{
+ /* Source is the input plane */
const uint8_t * src = pic[i];
+ /* Ref points to where it'll be stored, the 3rd slot in the ref array.*/
uint8_t * ref = pv->yadif_ref[2][i];
-
+
+ /* Dimensions will be halved for Cb + Cr; stride is the offset for line length. */
int w = pv->width[i];
int h = pv->height[i];
int ref_stride = pv->yadif_ref_stride[i];
-
+
+ /* Go through each horizontal line of the src plane and copy it to the buffer. */
int y;
for( y = 0; y < pv->height[i]; y++ )
{
@@ -115,6 +126,27 @@
}
}
+static void yadif_get_ref( uint8_t ** pic, hb_filter_private_t * pv, int frm )
+{
+ int i;
+ for( i = 0; i < 3; i++ )
+ {
+ uint8_t * dst = pic[i];
+ const uint8_t * ref = pv->yadif_ref[frm][i];
+ int w = pv->width[i];
+ int ref_stride = pv->yadif_ref_stride[i];
+
+ int y;
+ for( y = 0; y < pv->height[i]; y++ )
+ {
+ memcpy(dst, ref, w);
+ dst += w;
+ ref += ref_stride;
+ }
+ }
+}
+
+
static void yadif_filter_line( uint8_t *dst,
uint8_t *prev,
uint8_t *cur,
@@ -123,26 +155,83 @@
int parity,
hb_filter_private_t * pv )
{
+ /* If TFF, look at the previous and current frames, otherwise the current and next */
uint8_t *prev2 = parity ? prev : cur ;
uint8_t *next2 = parity ? cur : next;
-
+
+ /* Width varies with plane size.*/
int w = pv->width[plane];
+ /* Stride offset varies with planar width. */
int refs = pv->yadif_ref_stride[plane];
-
+
+ /* Step through the horizontal pixels in this line. */
int x;
for( x = 0; x < w; x++)
{
+ /* C: Pixel above*/
int c = cur[-refs];
+ /* D: the average of this pixel in the frames before and after.*/
int d = (prev2[0] + next2[0])>>1;
+ /* E: Pixel below*/
int e = cur[+refs];
+ /* diff0: The energy delta between this pixel in the previous and next frames.*/
int temporal_diff0 = ABS(prev2[0] - next2[0]);
+ /* diff1: The average of the deltas between:
+ The pixel above in the last frame minus the pixel above in the current frame, and
+ The pixel below in the last frame minus the pixel below in the current frame. */
int temporal_diff1 = ( ABS(prev[-refs] - c) + ABS(prev[+refs] - e) ) >> 1;
+ /* diff2: The average of the deltas between:
+ The pixel above in the next frame minus the pixel above in the current frame, and
+ The pixel below in the next frame minus the pixel below in the current frame. */
int temporal_diff2 = ( ABS(next[-refs] - c) + ABS(next[+refs] - e) ) >> 1;
+ /* diff: Choose the largest of the three:
+ Half the change in the current pixel between the previous and next frames,
+ The average vertical change since the last frame, or
+ The average vertical change in the next frame. */
int diff = MAX3(temporal_diff0>>1, temporal_diff1, temporal_diff2);
+ /* spatial_pred: Average of the pixels above and below in the current frame. */
int spatial_pred = (c+e)>>1;
+
+ /* spatial_score:
+ The difference between the pixels above and below the pixel to the left, plus
+ The difference between the pixels above and below the current pixel, plus,
+ The difference between the pixels above and below the pixel to the right,
+ ...minus one. Why? */
int spatial_score = ABS(cur[-refs-1] - cur[+refs-1]) + ABS(c-e) +
ABS(cur[-refs+1] - cur[+refs+1]) - 1;
+/* The Yadif Spatial Check Score is run 2-4 times.
+ PASS ONE, j=-1
+ The pixel 2 to the left above minus the pixel below, plus
+ The pixel 1 to the left above minus the pixel 1 to the right below, plus
+ The pixel in the line above minus the pixel 2 to the right below.
+ IF this is less than the spatial score:
+ It replaces the spatial score.
+ The spatial_pred is set to the average of the pixel 1 to the left above and the pixel 1 to the right below.
+ Run PASS 2:
+ PASS TWO j = -2
+ The pixel 3 to the left above minus the pixel 1 to the right below, plus
+ The pixel 2 to the left above minus the pixel 2 to the right below, plus
+ The pixel 1 to the left above minus the pixel 3 to the right below.
+ IF this is less than the spatial score:
+ It replaces the spatial score.
+ The spatial_pred is set to the average of the pixel 2 to the left above and the pixel 2 to the right below.
+ PASS THREE: j = 1
+ The pixel above minus the pixel 2 to the left below, plus
+ The pixel 1 to the right above minus the pixel 1 to the left below, plus
+ The pixel 2 to the right above minus the pixel below.
+ IF this is less than the spatial score:
+ It replaces the spatial score.
+ The spatial_pred is set to the average of the pixel 1 to the right above and the pixel 1 to the left below.
+ Run PASS 4:
+ PASS FOUR: j = 2
+ The pixel 1 to the right above minus the pixel 3 to the left below, plus
+ The pixel 2 to the right above minus the pixel 2 to the left below, plus
+ The pixel 3 to the right above minus the pixel 1 to the left below.
+ IF this is less than the spatial score:
+ It replaces the spatial score.
+ The spatial_pred is set to the average of the pixel 2 to the right above and the pixel 2 to the left below.
+*/
#define YADIF_CHECK(j)\
{ int score = ABS(cur[-refs-1+j] - cur[+refs-1-j])\
+ ABS(cur[-refs +j] - cur[+refs -j])\
@@ -153,18 +242,32 @@
YADIF_CHECK(-1) YADIF_CHECK(-2) }} }}
YADIF_CHECK( 1) YADIF_CHECK( 2) }} }}
-
+
+ /* The slow stuff */
if( pv->yadif_mode < 2 )
{
+ /* B: the average of the pixel 2 lines above in the frames before and after. */
int b = (prev2[-2*refs] + next2[-2*refs])>>1;
+ /* F: the average of the pixel 2 lines below in the frames before and after. */
int f = (prev2[+2*refs] + next2[+2*refs])>>1;
-
+
+ /* Which is bigger? / Which is smaller?
+ The average of this pixel in the frames before and after minus the pixel below, or
+ The average of this pixel in the frames before and after minus the pixel above, or
+ The smaller of: / The bigger of:
+ The average of the pixel 2 lines above in the frames before and after minus the pixel above, or
+ The average of the pixel 2 lines below in the frames before and after minus the pixel below */
int max = MAX3(d-e, d-c, MIN(b-c, f-e));
int min = MIN3(d-e, d-c, MAX(b-c, f-e));
-
+
+ /* For the real temporal diff, use whichever's largest.*/
diff = MAX3( diff, min, -max );
}
-
+
+ /* If the prediction is larger than the average of the pixel in frames before and after
+ plus temporal correction, replace it. Otherwise --
+ If the prediction is smaller than the average of the pixel in the frames before and
+ after minus temporal correction, replace it. */
if( spatial_pred > d + diff )
{
spatial_pred = d + diff;
@@ -173,9 +276,11 @@
{
spatial_pred = d - diff;
}
-
+
+ /* Use the prediction as the output. */
dst[0] = spatial_pred;
-
+
+ /* Bump the head for the next pixel. */
dst++;
cur++;
prev++;
@@ -190,27 +295,32 @@
int tff,
hb_filter_private_t * pv )
{
+ /* Step through the color planes. */
int i;
for( i = 0; i < 3; i++ )
{
+ /* Dimensions will be halved for Cb+Cr, stride is offset for previous planes.*/
int w = pv->width[i];
int h = pv->height[i];
int ref_stride = pv->yadif_ref_stride[i];
-
+
+ /* Step through horizontal lines.*/
int y;
for( y = 0; y < h; y++ )
{
if( (y ^ parity) & 1 )
{
+ /* Only filter second field? */
uint8_t *prev = &pv->yadif_ref[0][i][y*ref_stride];
uint8_t *cur = &pv->yadif_ref[1][i][y*ref_stride];
uint8_t *next = &pv->yadif_ref[2][i][y*ref_stride];
uint8_t *dst2 = &dst[i][y*w];
-
+
yadif_filter_line( dst2, prev, cur, next, i, parity ^ tff, pv );
}
else
{
+ /* Pass the line through unscathed.*/
memcpy( &dst[i][y*w],
&pv->yadif_ref[1][i][y*ref_stride],
w * sizeof(uint8_t) );
@@ -356,6 +466,9 @@
pv->buf_out[1] = hb_buffer_init( buf_size );
pv->buf_settings = hb_buffer_init( 0 );
+ pv->deinterlaced_frames = 0;
+ pv->passed_frames = 0;
+
pv->yadif_ready = 0;
pv->yadif_mode = YADIF_MODE_DEFAULT;
pv->yadif_parity = YADIF_PARITY_DEFAULT;
@@ -451,6 +564,8 @@
return;
}
+ hb_log("deinterlacer: filtered %i | unfiltered %i | total %i", pv->deinterlaced_frames, pv->passed_frames, pv->deinterlaced_frames + pv->passed_frames);
+
/* Cleanup frame buffers */
if( pv->buf_out[0] )
{
@@ -521,13 +636,31 @@
avpicture_fill( &pv->pic_out, pv->buf_out[0]->data,
pix_fmt, width, height );
- avpicture_deinterlace( &pv->pic_out, &pv->pic_in,
- pix_fmt, width, height );
+ /* Check for combing on the input frame */
+ int interlaced = ( hb_detect_comb(buf_in, width, height) );
+
+ if(interlaced)
+ {
+ avpicture_deinterlace( &pv->pic_out, &pv->pic_in,
+ pix_fmt, width, height );
- hb_buffer_copy_settings( pv->buf_out[0], buf_in );
+ pv->deinterlaced_frames++;
- *buf_out = pv->buf_out[0];
+ hb_buffer_copy_settings( pv->buf_out[0], buf_in );
+ *buf_out = pv->buf_out[0];
+ }
+ else
+ {
+ /* No combing detected, pass input frame through unmolested.*/
+
+ pv->passed_frames++;
+
+ hb_buffer_copy_settings( pv->buf_out[0], buf_in );
+ *buf_out = buf_in;
+
+ }
+
return FILTER_OK;
}
@@ -545,6 +678,9 @@
/* Store current frame in yadif cache */
yadif_store_ref( (const uint8_t**)pv->pic_in.data, pv );
+ /* Note down if the input frame is combed */
+ pv->comb = (pv->comb << 1) | hb_detect_comb(buf_in, width, height);
+
/* If yadif is not ready, store another ref and return FILTER_DELAY */
if( pv->yadif_ready == 0 )
{
@@ -559,32 +695,51 @@
return FILTER_DELAY;
}
-
- /* Perform yadif and mcdeint filtering */
- int frame;
- for( frame = 0; frame <= (pv->yadif_mode & 1); frame++ )
+
+ /* yadif & mcdeint work one frame behind so if the previous frame
+ * had combing, deinterlace it otherwise just output it. */
+ if( (pv->comb & 2 ) == 0 )
{
- int parity = frame ^ tff ^ 1;
+ /* previous frame not interlaced - copy cached input frame to buf_out */
+
+ pv->passed_frames++;
+
+ avpicture_fill( &pv->pic_out, pv->buf_out[0]->data, pix_fmt, width, height );
+ yadif_get_ref( (uint8_t**)pv->pic_out.data, pv, 1 );
+ *buf_out = pv->buf_out[0];
+ }
+ else
+ {
+ /* Perform yadif and mcdeint filtering */
+
+ pv->deinterlaced_frames++;
+
+ int frame;
+ for( frame = 0; frame <= (pv->yadif_mode & 1); frame++ )
+ {
+ int parity = frame ^ tff ^ 1;
- avpicture_fill( &pv->pic_out, pv->buf_out[!(frame^1)]->data,
- pix_fmt, width, height );
+ avpicture_fill( &pv->pic_out, pv->buf_out[!(frame^1)]->data,
+ pix_fmt, width, height );
- yadif_filter( pv->pic_out.data, parity, tff, pv );
+ yadif_filter( pv->pic_out.data, parity, tff, pv );
- if( pv->mcdeint_mode >= 0 )
- {
- avpicture_fill( &pv->pic_in, pv->buf_out[(frame^1)]->data,
- pix_fmt, width, height );
+ if( pv->mcdeint_mode >= 0 )
+ {
+ avpicture_fill( &pv->pic_in, pv->buf_out[(frame^1)]->data,
+ pix_fmt, width, height );
- mcdeint_filter( pv->pic_in.data, pv->pic_out.data, parity, pv );
+ mcdeint_filter( pv->pic_in.data, pv->pic_out.data, parity, pv );
- *buf_out = pv->buf_out[ (frame^1)];
+ *buf_out = pv->buf_out[ (frame^1)];
+ }
+ else
+ {
+ *buf_out = pv->buf_out[!(frame^1)];
+ }
}
- else
- {
- *buf_out = pv->buf_out[!(frame^1)];
- }
}
+
/* Copy buffered settings to output buffer settings */
hb_buffer_copy_settings( *buf_out, pv->buf_settings );
Index: libhb/scan.c
===================================================================
--- libhb/scan.c (revision 1331)
+++ libhb/scan.c (working copy)
@@ -286,6 +286,8 @@
hb_list_t * list_es, * list_raw;
hb_libmpeg2_t * mpeg2;
int progressive_count = 0;
+ int interlacing[10] = {0}; /* zeroed in case a preview isn't generated */
+
int ar16_count = 0, ar4_count = 0;
buf_ps = hb_buffer_init( HB_DVD_READ_BUFFER_SIZE );
@@ -419,7 +421,7 @@
*/
if( progressive_count == 6 )
{
- hb_log("Title's mostly progressive NTSC, setting fps to 23.976");
+ hb_log("Title's mostly NTSC Film, setting fps to 23.976");
}
title->rate_base = 1126125;
}
@@ -449,7 +451,19 @@
}
buf_raw = hb_list_item( list_raw, 0 );
-
+
+ /* Check preview for interlacing artifacts */
+ if( hb_detect_comb(buf_raw, title->width, title->height))
+ {
+ hb_log("Interlacing detected in preview frame %i", i);
+ interlacing[i] = 1;
+ }
+ else
+ {
+ interlacing[i] = 0;
+ }
+
+
hb_get_tempory_filename( data->h, filename, "%x%d",
(intptr_t)title, i );
@@ -522,7 +536,28 @@
(float) title->rate_base, title->crop[0], title->crop[1],
title->crop[2], title->crop[3],
title->aspect == HB_ASPECT_BASE * 16 / 9 ? "16:9" :
- title->aspect == HB_ASPECT_BASE * 4 / 3 ? "4:3" : "none" );
+ title->aspect == HB_ASPECT_BASE * 4 / 3 ? "4:3" : "none" );
+
+ /* Add up how many previews were interlaced.*/
+ int interlacing_sum = 0, t;
+ for(t = 0; t < 10; t++ )
+ {
+ if( interlacing[t] == 1 )
+ {
+ interlacing_sum++;
+ }
+ }
+
+ if( interlacing_sum >= 6)
+ {
+ hb_log("Title is mostly interlaced or telecined (%i out of 10 previews). You should do something about that.", interlacing_sum);
+ title->detected_interlacing = 1;
+ }
+ else
+ {
+ title->detected_interlacing = 0;
+ }
+
goto cleanup;
error:
Index: libhb/common.h
===================================================================
--- libhb/common.h (revision 1331)
+++ libhb/common.h (working copy)
@@ -426,6 +426,7 @@
int rate;
int rate_base;
int crop[4];
+ int detected_interlacing;
uint32_t palette[16];
It's rather similar to my last one. However, it comes after saintdev's vigorous Campaign Against Non-Uniform White Space, so it'll actually apply to the svn head. I've also modified the comb detection. I've gone back to transcode's default threshold of 9. Instead of returning true when 2 or more planes are above the threshold, I'm returning true when any of the three is above it -- same as transcode. I'd wanted something more sensitive, but rhester finally convinced me to give up. This gives me good results with relatively recent TV (interlaced Comedy Central shows from the late 90s like Strangers With Candy). It doesn't deinterlace anime to a horrific extent. It does a mediocre job on older, lower-quality footage like early Kids in the Hall episodes, where it misses just enough combing to be noticeable.
*Note: I've found that using the default break sensitivity for pullup will drop too many frames from fully interlaced material. I think it gets tripped up when the camera is steady and nothing is moving for a couple of frames. Not seeing interlacing, it thinks the frames are progressive and starts to anticipate a pulldown pattern that doesn't exist. Setting -1 for breaks (--detelecine="1:1:4:4:-1:0") fixes that. Since it also, as van discovered, helps with PAL->NTSC Film->NTSC Video transfers, I'll probably make that the default soon enough.