The code takes one image in YUV format and tries to deinterlace it. The original code simply applies a five-point tap filter to all odd lines; the result is close to setting each odd line to the average of the neighboring even lines. One weird thing is that this is done in U and V as well, which seems rather pointless: each pixel in U and V already covers one even and one odd line, so no filtering should be done there at all.
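To make the filter concrete, here is what it does to a single pixel, pulled out as a standalone function (my own illustration of the arithmetic used in the code below, not the FFmpeg source):

```c
#include <stdint.h>

/* Five-point tap filter across lines: a and e are two lines away,
 * b and d are the adjacent even lines, c is the odd-line pixel itself.
 * The weights (-1, 4, 2, 4, -1) sum to 8, so the result is normalized
 * with a rounding shift by 3; out-of-range sums are clamped to 0..255. */
static uint8_t tap_filter (uint8_t a, uint8_t b, uint8_t c, uint8_t d, uint8_t e)
{
    int sum = -a + 4 * b + 2 * c + 4 * d - e;
    if (sum < 0)
        return 0;
    if (sum >= 2040)                 /* 255 * 8 */
        return 255;
    return (uint8_t) ((sum + 4) >> 3);  /* round to nearest */
}
```

On a flat area (all five inputs equal) the filter is the identity, which is why it only softens the image where the lines actually differ.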
One problem with deinterlacing is that it removes sharpness from parts of the image that don't contain motion. To fix this, I try to detect whether deinterlacing is necessary. The method is quite simple: normally you would expect the difference between lines 0 and 1, or between lines 2 and 3, to be smaller than the difference between lines 0 and 2 or between lines 1 and 3, because the first set of pairs is closer together. But where the effect of interlacing is visible, this no longer holds. So my first attempt applies the deinterlacing only where that test says it is needed.
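That test can be written as a small predicate (a sketch of the condition used in the code below; the bias of 4 is the same small threshold that keeps flat areas untouched):

```c
#include <stdlib.h>

/* Returns nonzero if combing is visible at this pixel column.
 * even1/odd1/even2/odd2 are four vertically consecutive pixels
 * (lines y-1, y, y+1, y+2). If the lines that are two apart agree
 * noticeably better than adjacent lines, the two fields differ,
 * i.e. there is motion and the filter should be applied. */
static int combing_visible (int even1, int odd1, int even2, int odd2)
{
    return abs (even2 - even1) + abs (odd2 - odd1)
        <= abs (odd1 - even1) + abs (odd2 - even2) - 4;
}
```

A perfectly flat region fails the test (the left side equals the right side, so the -4 bias tips it), while an alternating light/dark comb pattern passes it.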
With this change, deinterlacing usually leaves about 90 percent of the image unchanged and affects only those bits where the interlacing is most visible. It doesn't catch everything yet; it doesn't work well where interlacing meets brightness changes along a diagonal, so I'll have to try to improve it a bit. To try it out, I added the function below to hb.c, and the calls to avpicture_deinterlace must be changed to call it instead. The commented-out line marks everything that would be filtered with black, so I can check that the algorithm neither filters too much nor misses too much.
Code:
int my_avpicture_deinterlace (AVPicture *dstPict, const AVPicture *srcPict, int pix_fmt, int width, int height)
{
    int i, x, y;
    // We only handle YUV420 with different source and destination. Call the original function in
    // all other cases.
    if ((width & 3) != 0 || (height & 3) != 0 || srcPict == dstPict || pix_fmt != PIX_FMT_YUV420P)
        return avpicture_deinterlace (dstPict, srcPict, pix_fmt, width, height);
    for (i = 0; i < 3; i++)
    {
        // Get pointers to source and destination data and the distance between consecutive lines.
        uint8_t *src = srcPict->data[i];
        uint8_t *dst = dstPict->data[i];
        int src_wrap = srcPict->linesize[i];
        int dst_wrap = dstPict->linesize[i];
        // The chroma planes have half the resolution in both directions.
        if (i == 1)
        {
            height /= 2;
            width /= 2;
        }
        // First copy everything. As a special case, use one single memcpy to improve speed if there
        // is no gap between lines.
        if (src_wrap == width && dst_wrap == width)
        {
            memcpy (dst, src, height * width);
        }
        else
        {
            for (y = 0; y < height; y++, src += src_wrap, dst += dst_wrap)
                memcpy (dst, src, width);
        }
        // Only the luma plane gets filtered.
        if (i >= 1)
            continue;
        // Reset the pointers (the line-by-line copy above may have advanced them) and start at
        // line 1, the first odd line.
        src = srcPict->data[i] + src_wrap;
        dst = dstPict->data[i] + dst_wrap;
        for (y = 0; y < height; y += 2, src += 2*src_wrap, dst += 2*dst_wrap)
        {
            // Make pointers to the previous and next two source lines; special case for the first
            // and last lines to stay inside existing data.
            uint8_t *src_m2 = (y == 0 ? src - src_wrap : src - 2*src_wrap);
            uint8_t *src_m1 = src - src_wrap;
            uint8_t *src_p1 = src + (y == height - 2 ? 0 : src_wrap);
            uint8_t *src_p2 = src + (y == height - 2 ? 0 : 2*src_wrap);
            for (x = 0; x < width; x += 1)
            {
                int even1 = src_m1 [x];
                int even2 = src_p1 [x];
                int odd1 = src [x];
                int odd2 = src_p2 [x];
                if (abs (even2 - even1) + abs (odd2 - odd1) <= abs (odd1 - even1) + abs (odd2 - even2) - 4)
                {
                    // Pixels that are two lines apart are much closer together than pixels on
                    // consecutive lines: this means we have movement and should apply the tap
                    // filter.
                    int sum = -src_m2 [x] + (src_m1 [x] << 2) + (src [x] << 1) + (src_p1 [x] << 2) - src_p2 [x];
                    dst [x] = sum < 0 ? 0 : sum >= 2040 ? 255 : (sum + 4) >> 3;
                    // dst [x] = dst [x - dst_wrap] = 0;
                }
            }
        }
    }
    return 0;
}