Started some work on deinterlacing

Archive of historical development discussions
Discussions / Development has moved to GitHub
Forum rules
*******************************
Please be aware we are now using GitHub for issue tracking and feature requests.
- This section of the forum is now closed to new topics.

*******************************
gnasher729
Posts: 3
Joined: Sun Apr 22, 2007 9:37 pm

Started some work on deinterlacing

Post by gnasher729 »

I just started some work on trying to improve the deinterlacing. It is rather easy to change; all I needed to do was change three calls to avpicture_deinterlace to call my own function, my_avpicture_deinterlace. Here is what I've found so far, in case people are interested:

The code takes one image in YUV format and tries to deinterlace it. The original code just takes a five-point tap filter and applies it to all odd lines; the result is close to setting odd lines to the average of the neighboring even lines. One weird thing is that this is done in U and V as well, which seems rather pointless: each pixel in U and V already covers one even and one odd line, so there should be no filtering done there at all.

One problem with deinterlacing is that it removes sharpness from parts of the image that don't contain motion. To fix this, I try to find out whether deinterlacing is necessary. The method is quite simple: usually you would expect that the difference between lines 0 and 1, or between lines 2 and 3, is less than the difference between lines 0 and 2 or between lines 1 and 3, because the first set of pairs is closer together. But if you can see the effect of interlacing, this wouldn't be the case. So my first attempt does the deinterlacing only where that test tells it to.

With this change, deinterlacing usually leaves 90 percent of the image unchanged and affects those bits where the interlacing is most visible. It doesn't catch everything yet; it doesn't work well in cases where interlacing meets brightness changes along a diagonal, so I'll have to try to improve it a bit. To try it out, I added this to hb.c, and the calls to avpicture_deinterlace must be changed. The commented-out line marks everything that would be filtered with black, so I can check that the algorithm doesn't filter too much or miss too much.

Code: Select all

int my_avpicture_deinterlace (AVPicture *dstPict, const AVPicture *srcPict, int pix_fmt, int width, int height)
{
	int i, x, y;
	
	// We only handle YUV420 with different source and destination. Call the original function in
	// all other cases. 
	if ((width & 3) != 0 || (height & 3) != 0 || srcPict == dstPict || pix_fmt != PIX_FMT_YUV420P)
		return avpicture_deinterlace (dstPict, srcPict, pix_fmt, width, height);
		
		
	for (i = 0; i < 3; i++)
	{
		// Get pointers to source and destination data and the distance between consecutive lines.
		uint8_t *src = srcPict->data[i];
		uint8_t *dst = dstPict->data[i];
		int src_wrap = srcPict->linesize[i];
		int dst_wrap = dstPict->linesize[i];
		
		if (i == 1)
		{
			height /= 2;
			width /= 2;
		}
		
		// First copy everything. As a special case, use one single memcpy to improve speed if there
		// is no gap between lines.
		if (src_wrap == width && dst_wrap == width)
		{
			memcpy (dst, src, height * width);
		}
		else
		{
			for (y = 0; y < height; y++, src += src_wrap, dst += dst_wrap)
				memcpy (dst, src, width);
			// The copy loop advanced src and dst past the end of the plane;
			// reset them so the filtering pass below starts at the top.
			src = srcPict->data[i];
			dst = dstPict->data[i];
		}
		
		if (i >= 1)
			continue;
		
		src += src_wrap;
		dst += dst_wrap;

		for (y = 0; y < height; y += 2, src += 2*src_wrap, dst += 2*dst_wrap)
		{
			// Make pointers to the previous and next two source lines; special case for the first
			// and last lines to stay inside existing data.
			uint8_t *src_m2 = (y == 0 ? src - src_wrap : src - 2*src_wrap);
			uint8_t *src_m1 = src - src_wrap;
			uint8_t *src_p1 = src + (y == height - 2 ? 0 : src_wrap);
			uint8_t *src_p2 = src + (y == height - 2 ? 0 : 2*src_wrap);

			for (x = 0; x < width; x += 1)
			{
				int even1 = src_m1 [x];
				int even2 = src_p1 [x];
				int odd1 = src [x];
				int odd2 = src_p2 [x];

				if (abs (even2 - even1) + abs (odd2 - odd1) <= abs (odd1 - even1) + abs (odd2 - even2) - 4)
				{
					// Pixels that are two lines apart are much closer together than pixels on
					// consecutive lines: this means we have movement and should apply the tap
					// filter.
					int sum = -src_m2 [x] + (src_m1 [x] << 2) + (src [x] << 1) + (src_p1 [x] << 2) - src_p2 [x];
					dst [x] = sum < 0 ? 0 : sum >= 2040 ? 255 : (sum + 4) >> 3;
//					dst [x] = dst [x - dst_wrap] = 0;
				}
			}
		}
	}
	
	return 0;
}
s55
HandBrake Team
Posts: 9805
Joined: Sun Dec 24, 2006 1:05 pm

Post by s55 »

Interesting!

Look forward to seeing this completed. If you haven't already, you should join the IRC chat.
jbrjake
Veteran User
Posts: 4805
Joined: Wed Dec 13, 2006 1:38 am

Post by jbrjake »

Very cool.

How does it compare to established open source deinterlacers like smartdeint, tom's motion compensator, yadif, mcdeint, kerneldeint, etc?
realityking
Veteran User
Posts: 680
Joined: Tue Apr 24, 2007 12:36 pm

Post by realityking »

If it's so easy to change, I would like to see an option to choose which de-interlacing method to apply to my DVD. Options could grow over time (VLC, for example, has a couple of nice ones). As for the UI: one could just add a drop-down menu next to the de-interlacing label.
Leo
Bright Spark User
Posts: 174
Joined: Thu Feb 22, 2007 4:39 pm

Post by Leo »

Yay, this sounds cool. Glad someone is progressing the deinterlacing issue, which is one of the biggest nuisances for me!

Your thing sounds better than the current deinterlacing algorithm, so it would be cool if it were introduced as an option in the next release, even as it is.

How about if you increase the sensitivity of detecting interlacing, even if you reduce the specificity? (i.e. so that it picks up most of the bits it's missing, even if it causes some bits to be deinterlaced (perhaps by Linear?) when they shouldn't be. It'd still be way better than the current one!)

Are you also able to easily drop in the source code for one of the already tried and tested deinterlacers, e.g. the ones that jbrjake mentioned?

Thanks!
deckeda
Regular User
Posts: 138
Joined: Thu Feb 22, 2007 8:38 am

Post by deckeda »

realityking wrote:If it's so easy to change I would like to see an option to choose what de-interlacing I want to apply to my DVD. Options could grow over time (VLC for example has a couple nice ones). And for the UI: One could just add an Drop-Down Menu behind the de-interlacing label.
I think you're a bit ahead of the game here with respect to how quickly something would/could be implemented without dedicated work. Perhaps gnasher729 is that person.

Also, I think you have a request here, so I can't resist adding my 2 cents: most of the VLC de-interlace varieties look very similar to my eyes --- but maybe I'm blind.

I can easily see where this place would get cluttered up with a billion "which de-interlace should I use ...?" questions, with much gnashing of teeth and theorizing as to which is ideal for a particular type of source. That's just the nature of being a geek, myself included.

I'm all for choices, but surely there is "one" that's about as good as a software de-interlacer is gonna get, so we could just pick it and be done with it? Where is the practical limitation here?

For example, the list of de-interlacers looks impressive in VLC, but are they all kept up to date? Does VLC have a team that doesn't do much of anything else besides this?
Leo
Bright Spark User
Posts: 174
Joined: Thu Feb 22, 2007 4:39 pm

Post by Leo »

How about looking at the code for (if it's available):

mplayer -vf yadif=3:1,mcdeint=2:1:10

which is shown on

http://guru.multimedia.cx/deinterlacing-filters/

and looks great.
s55
HandBrake Team
Posts: 9805
Joined: Sun Dec 24, 2006 1:05 pm

Post by s55 »

Leo stop trying to scare him off please ;)
realityking
Veteran User
Posts: 680
Joined: Tue Apr 24, 2007 12:36 pm

Post by realityking »

deckeda wrote:For example, the list of de-interlacers looks impressive in VLC, but are they all kept up to date? Does VLC have a team that doesn't do much of anything else besides this?
That's one I can answer ;) VLC is very short on developers, so if they are maintaining them themselves (which I don't know), they are probably not updated regularly.
gnasher729
Posts: 3
Joined: Sun Apr 22, 2007 9:37 pm

Post by gnasher729 »

Leo wrote:How about if you increase the sensitivity of detecting interlacing, even if you reduce the specificity? (i.e. so that it picks up most of the bits it's missing, even if it causes some bits to be deinterlaced (perhaps by Linear?) when they shouldn't be. It'd still be way better than the current one!)
No, I tried that. On certain images, the check for movement gets it completely wrong, typically in an area with a diagonal border moving to the right. For example, an arm in a black suit at a 45 degree angle against a white background, and moving: if I set the sensitivity to an extreme value, it will de-interlace _everything_ except that part!
Are you also able to easily drop in the source code for one of the already tried and tested deinterlacers, e.g. the ones that jbrjake mentioned?

Thanks!
I'd probably need to make some changes to the code, but it shouldn't be too difficult. I have downloaded the code for "yuvdeinterlace", just to see how it works. I'll see how long it will take me to get that working.

Just wondering: Does anyone have recommendations for DVDs with material that is interlaced but otherwise in high quality? At the moment I use a DVD from a TV series for testing, and it looks like decent quality, but it could be better. And the better the material, the more difference deinterlacing should make.
gnasher729
Posts: 3
Joined: Sun Apr 22, 2007 9:37 pm

Post by gnasher729 »

sr55 wrote:Leo stop trying to scare him off please ;)
No fear of that. I'll be looking at a few algorithms. There are a few things that limit what I can do without changing the interfaces: 1. I only have one single frame to play with. I have read about algorithms that compare with the previous frame for motion prediction, and I haven't got that. 2. U and V are already combined when I get the image, so I can't do anything useful there.

You could probably get better results if deinterlacing were done just after decoding. You could even use the motion prediction information from the decoder.
golias
Regular User
Posts: 105
Joined: Wed Jan 03, 2007 7:29 pm

Post by golias »

gnasher729 wrote:
Leo wrote:How about if you increase the sensitivity of detecting interlacing, even if you reduce the specificity? (i.e. so that it picks up most of the bits it's missing, even if it causes some bits to be deinterlaced (perhaps by Linear?) when they shouldn't be. It'd still be way better than the current one!)
No, I tried that. On certain images, the check for movement gets it completely wrong, typically in an area with a diagonal border moving to the right. For example, an arm in a black suit at a 45 degree angle against a white background, and moving: if I set the sensitivity to an extreme value, it will de-interlace _everything_ except that part!
Are you also able to easily drop in the source code for one of the already tried and tested deinterlacers, e.g. the ones that jbrjake mentioned?

Thanks!
I'd probably need to make some changes to the code, but it shouldn't be too difficult. I have downloaded the code for "yuvdeinterlace", just to see how it works. I'll see how long it will take me to get that working.

Just wondering: Does anyone have recommendations for DVDs with material that is interlaced but otherwise in high quality? At the moment I use a DVD from a TV series for testing, and it looks like decent quality, but it could be better. And the better the material, the more difference deinterlacing should make.
Buffy the Vampire Slayer, especially the later seasons, has outstanding cinematographic values, as TV shows go, but suffers horribly from interlace issues because there is so much fast action in them.
Most wide-screen TV shows, such as the Sopranos, tend to be released on progressive-scan disks, so they are not very useful for testing this feature.

One exception would be the anime series Samurai Champloo. It's in anamorphic wide-screen, it's interlaced, and the line art of the show really makes interlace problems stand out. Early in episode 1 there's a moment where they rapidly cut back and forth between a bar fight and a spinning hat which was tossed in the air. If you can capture that scene correctly, your deinterlacing method is rock-solid!
icaruscollapse
Posts: 25
Joined: Fri Mar 30, 2007 3:42 am

Post by icaruscollapse »

Handbrake really does have everything I want except a good system for deinterlacing. If we can get something that produces the same (or visually indistinguishable) results as the "blend" method (my personal favorite) of deinterlacing in VLC, I would be the happiest man alive.
Leo
Bright Spark User
Posts: 174
Joined: Thu Feb 22, 2007 4:39 pm

Post by Leo »

gnasher729 wrote:2. U and V are already combined when I get the image, so I can't do anything useful there.
My understanding of YUV (which is limited to say the least) is that you can quite easily calculate it from the RGB value. See
http://en.wikipedia.org/wiki/YUV
You could probably adjust the maths in any deinterlacing algorithm so that it would accept RGB.
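For what it's worth, the conversion on that page boils down to a few multiplies per pixel. Here is a sketch of the usual 8-bit fixed-point approximation of full-range BT.601 (illustration only, not HandBrake code):

```c
#include <stdint.h>

/* Full-range BT.601 RGB -> YUV for one pixel, using a common 8-bit
   fixed-point approximation. Sketch for illustration, not HandBrake code. */
static void rgb_to_yuv (uint8_t r, uint8_t g, uint8_t b,
                        uint8_t *y, uint8_t *u, uint8_t *v)
{
    /* The weights in each row sum to 256 (or 0 for U and V before the
       +128 offset), so results stay within 0..255. The right shift of
       a negative sum assumes an arithmetic shift, which practically
       every compiler provides. */
    *y = (uint8_t) ((  77 * r + 150 * g +  29 * b) >> 8);
    *u = (uint8_t) ((( -43 * r -  85 * g + 128 * b) >> 8) + 128);
    *v = (uint8_t) ((( 128 * r - 107 * g -  21 * b) >> 8) + 128);
}
```

White maps to (255, 128, 128) and black to (0, 128, 128), i.e. neutral chroma, which is a quick sanity check for any such conversion.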
gnasher729 wrote:1. I only have one single frame to play with. I have read about algorithms that compare with the previous frame for motion prediction, and I haven't got that.
I have no idea how hard this would be, so I'll just fire off some ideas:
Could you store the information from the previous frame (perhaps even the whole frame itself) somewhere and then access it when you're deinterlacing the next frame? E.g. somewhere in RAM, or as an array/matrix, or a very long (1-1.2MB) variable. Presumably it would be terribly slow to save each frame to the hard drive as a bitmap as you go and then read it back for the next frame (obviously deleting them as you go so you don't fill the drive!). How about even using a "RAM drive" if you were really stuck for somewhere to store the previous frame?
Cyander
Regular User
Posts: 94
Joined: Tue Mar 20, 2007 9:19 pm

Post by Cyander »

Leo wrote:
gnasher729 wrote:2. U and V are already combined when I get the image, so I can't do anything useful there.
My understanding of YUV (which is limited to say the least) is that you can quite easily calculate it from the RGB value. See
http://en.wikipedia.org/wiki/YUV
You could probably adjust the maths in any deinterlacing algorithm so that it would accept RGB.
Be very careful here... the pipeline in Handbrake was left 100% YUV for a good reason. ;)

Since the entire pipeline is YUV, you don't get conversion losses from converting back and forth.
Leo wrote:
gnasher729 wrote:1. I only have one single frame to play with. I have read about algorithms that compare with the previous frame for motion prediction, and I haven't got that.
I have no idea how hard this would be, so I'll just fire off some ideas:
Could you store the information from the previous frame (perhaps even the whole frame itself) somewhere and then access it when you're deinterlacing the next frame? E.g. somewhere in RAM, or as an array/matrix, or a very long (1-1.2MB) variable. Presumably it would be terribly slow to save each frame to the hard drive as a bitmap as you go and then read it back for the next frame (obviously deleting them as you go so you don't fill the drive!). How about even using a "RAM drive" if you were really stuck for somewhere to store the previous frame?
It would be easy, but most of your ideas are extreme... at most, he just needs to initialize a new buffer object, copy the contents of the buffer fed to him into his allocated buffer, and store it in the array of frames he keeps in his worker struct. Please be careful about what you suggest, as overthinking a development problem usually: a) leads to bad code, or b) reveals how much or little you know about programming. Both can be bad. ;)
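Something like what Cyander describes can be sketched as follows. The struct and function names here are made up for illustration; HandBrake's real worker structures and buffer API differ:

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Hypothetical per-filter state holding a copy of the last frame seen.
   Names are invented for this sketch, not HandBrake's actual structs. */
typedef struct
{
    uint8_t *prev;   /* copy of the previous frame, NULL before the first */
    size_t   size;   /* bytes per frame */
} frame_history_t;

/* Pointer to the previous frame's pixels, or NULL on the first frame.
   Valid until the next call to history_push(). */
static const uint8_t *history_prev (const frame_history_t *h)
{
    return h->prev;
}

/* After processing the current frame, copy it into the history so the
   next call can compare against it. Returns 0 on success. */
static int history_push (frame_history_t *h, const uint8_t *cur, size_t size)
{
    if (h->prev == NULL || h->size != size)
    {
        free (h->prev);
        h->prev = malloc (size);
        if (h->prev == NULL)
            return -1;
        h->size = size;
    }
    memcpy (h->prev, cur, size);
    return 0;
}
```

A deinterlacer would call history_prev() to compare fields against the prior frame, then history_push() once it is done with the current one.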

Personally, I would rather see work porting some of the better deinterlacers, by working on the support code needed to let them do their job, instead of trying to roll your own, but don't let that stop you, gnasher. :)
sych0
Posts: 1
Joined: Fri May 11, 2007 3:02 am

Post by sych0 »

icaruscollapse wrote:Handbrake really does have everything I want except a good system for interlacing. If we can get something that produces the same (Or visually indistinguishable results) from the "blend" method (My personal favorite) of deinterlacing in VLC player, I would be the happiest man alive.
I just want to emphasize this. I absolutely love handbrake, but the deinterlace system needs work. When working with interlaced material, my options are to leave the video interlaced, or to deinterlace and deal with stuttery pans. I don't like either, and usually end up doing the job on my pc with autogk. I don't know the specifics, but I'm guessing autogk blends the fields as it deinterlaces, yet does not create stuttering video.

Are there any other options I can use in the meantime?
huevos_rancheros
Posts: 12
Joined: Fri Jul 20, 2007 2:40 am

Re: Started some work on deinterlacing

Post by huevos_rancheros »

gnasher729 wrote:I just started some work trying to improve the deinterlacing. It is rather easy to change; all I needed to do is change three calls to avpicture_deinterlace to call my own function my_avpicture_deinterlace.
I was wondering if anyone was still actively looking at replacing the libavcodec deinterlacer. I haven't been really happy with the current deinterlacing either, especially with animated material, so I went ahead and hacked in the yadif deinterlacer from mencoder. This wasn't really too bad to just drop in, although it only supports the default mode zero yadif - i.e. what you would get if you used mencoder -vf yadif without any parameters. Even so, I find it to be a nice improvement, since it significantly reduces the amount of jaggies and ghost images compared to the current deinterlacer.

There are a couple small issues related to the fact that yadif needs to buffer one frame before it begins outputting frames (unless I want to duplicate the first frame which doesn't seem like a good idea). When the deinterlacer receives the first frame, it can't really do anything other than buffer it since it needs two frames to compare. Obviously, this is not particularly useful behavior for the preview screen, so I just call the normal avpicture_deinterlace there.

Inside the renderWork function, it seems to be OK to bail out on the first frame by setting buf_out to NULL if the deinterlacer is not ready and immediately returning before subtitles are applied and scaling is done. This looks fine in the main work loop, since it just avoids adding anything to the fifo queue for the next worker if the output buffer comes back NULL. However, this results in subtitles being ahead by a frame, since the frame returned from the deinterlacer to which subtitles are applied is actually the previous frame rather than the current frame. I suppose this could be solved by caching the current frame subtitle in the renderer's private work structure and then applying to the next frame, although I'm not really sure if it's that big of a deal. The other side effect of being late by a frame is that the final frame of the video is not written out. I think this could be solved without much hassle by flushing out the last frame when the render work function conveniently receives a NULL input buffer at the end of processing, although again it might not be strictly necessary.
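The one-frame latency described above reduces to a small skeleton. This is a generic sketch, not HandBrake's actual renderWork, and frame_t here is just a stand-in for hb_buffer_t:

```c
#include <stddef.h>

/* Skeleton of a filter with one frame of latency (illustrative only).
   The filter needs two frames before it can produce output, so the
   first call only buffers its input, and a NULL input at end-of-stream
   flushes the held frame. */
typedef struct
{
    int id;              /* stand-in for real frame contents */
} frame_t;

typedef struct
{
    frame_t *held;       /* the frame we are sitting on, or NULL */
} delay_filter_t;

/* Returns the frame to pass downstream, or NULL if none is ready yet. */
static frame_t *delay_work (delay_filter_t *f, frame_t *in)
{
    frame_t *out;

    if (in == NULL)      /* end of stream: flush the held frame */
    {
        out = f->held;
        f->held = NULL;
        return out;
    }
    if (f->held == NULL) /* first frame: buffer it, emit nothing */
    {
        f->held = in;
        return NULL;
    }
    out = f->held;       /* emit the previous frame, hold the new one */
    f->held = in;
    return out;
}
```

The subtitle problem described above falls out of this shape: whatever you attach to the frame going in comes back out one call later, so per-frame metadata has to be held alongside the frame.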

After putting in the yadif code, it seems feasible to port over other mencoder filters or perhaps even support the entire mencoder libmpcodecs filter chain directly by linking this in, although filters that mess with the frame rate (e.g. pullup) or frame size (e.g. scale/crop) would likely be a big headache. Dealing with variable frame latency from certain filters might suggest setting up a separate worker per filter to allow the existing Handbrake frame queuing mechanism to handle this more gracefully without needing to know the internals of each filter in a gigantic and messy render.c file. Most of the mencoder filters also rely on some frame information from the mpeg2 decoder that is not currently passed through the worker chain and would probably need to be added in - most significantly PIC_FLAG_TOP_FIELD_FIRST and PIC_FLAG_REPEAT_FIRST_FIELD from mpeg2.h. I'm assuming YES and NO respectively for these in the yadif port, which is not completely robust, although it has been kosher with all the NTSC videos I've tried thus far.

At any rate, if there is any interest in the yadif port or potentially another useful filter let me know. I realize the developers are busy with locking down and preparing the beta 2 release, so I don't want to interfere with that if there is no bandwidth at the moment.

P.S. It seems there should be a call to hb_buffer_close(&pv->buf_deint) in renderClose() of render.c to avoid a small memory leak per job?
jbrjake
Veteran User
Posts: 4805
Joined: Wed Dec 13, 2006 1:38 am

Post by jbrjake »

OMG, if you give us libmpcodecs you will be my new hero. It's the only reason I still use mencoder.

Any chance of a yadif,mcdeint combo?

HB *desperately* needs this stuff -- including the frame queuing mechanism to handle multiple-frame filters like pullup,softskip.

I'd also love to have hqdeint3d for cleaning up animation.
s55
HandBrake Team
Posts: 9805
Joined: Sun Dec 24, 2006 1:05 pm

Post by s55 »

huevos_rancheros - get yourself into our IRC channel (details: http://handbrake.m0k.org/?page_id=49 ). This isn't optional ;)

Drop by anytime. You may well turn into another HB Hero.
huevos_rancheros
Posts: 12
Joined: Fri Jul 20, 2007 2:40 am

Post by huevos_rancheros »

After evaluating things a bit more, linking in libmpcodecs directly seems like more trouble than I'm willing to deal with at the moment. However, the good news is that a number of filters from this library are relatively easy to drop directly into HandBrake. Currently, I've ported over yadif, mcdeint, pp7, and hqdn3d. This seems like a fairly good starter set since it gives you deinterlacing, block removal, and noise removal.

There are some limitations with the ported yadif and pp7, however. The yadif port supports yadif mode 0 and 2, but not mode 1 and 3 since these alter the frame rate. If mode 1 or 3 are attempted, then the yadif port falls back to mode 0 or 2 respectively - e.g. attempting something like "-vf yadif=3" would actually perform the processing of "-vf yadif=2". I did fix things up to auto-detect top-field vs. bottom-field ordering by passing through the frame flags from the mpeg2 decoder in the hb_buffer_t structure for the frame.
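Since modes 1 and 3 are just the frame-rate-doubling variants of 0 and 2, the fallback described above amounts to clearing the low bit of the mode. A sketch of that mapping (illustration only, not the actual port's code):

```c
/* Map the frame-rate-doubling yadif modes (1 and 3) down to their
   same-rate counterparts (0 and 2), leaving even modes unchanged.
   Sketch of the fallback described above, not the actual port. */
static int yadif_effective_mode (int requested)
{
    return requested & ~1;   /* 1 -> 0, 3 -> 2, 0 and 2 unchanged */
}
```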

The pp7 port only supports a fixed quantization parameter (qp > 0) and just copies the input frame if qp parameter is passed as zero (which just so happens to be the mencoder pp7 default for this parameter). This is necessary since mencoder pp7 uses the frame's quantization table as plucked from the decoder when qp=0, but unfortunately I cannot find a good way to get this out of the mpeg2 decoder in HandBrake.

The pattern of the filter code integration is very similar to the current work object system. Each filter has its own privately defined hb_filter_object_t and hb_filter_private_t similar to the hb_work_object_t and hb_work_private_t structures. For example:

Code: Select all

hb_filter_object_t hb_filter_deinterlace =
{   
    FILTER_DEINTERLACE,
    "Deinterlace (yadif)",
    hb_deinterlace_init,
    hb_deinterlace_work,
    hb_deinterlace_close,
};

Static externs for the supported filters were added to common.h and the filter parameter string (identical to those you would use with mencoder -vf filter=parms) can be set from anywhere like so:

Code: Select all

hb_filter_deinterlace.settings    = "0:-1";
hb_filter_deinterlace_mc.settings = "2:1:10";
hb_filter_deblock.settings        = "8:2";
hb_filter_denoise.settings        = "3:2:3:3";
All of the current filters support the same defaults as the mencoder versions, so the filters should work even if the settings string is empty or NULL.
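Falling back to defaults for an empty or NULL settings string can lean on sscanf's partial-match behavior, roughly like this (a sketch with made-up parameter names and defaults, not the actual filter code):

```c
#include <stdio.h>

/* Parse a "YM:FD" style settings string. Fields missing from the
   string keep their pre-set defaults, because sscanf leaves unmatched
   arguments untouched. The defaults here are invented for illustration. */
static void parse_deint_settings (const char *settings,
                                  int *yadif_mode, int *field_order)
{
    *yadif_mode  = 0;    /* hypothetical defaults */
    *field_order = -1;
    if (settings != NULL)
        sscanf (settings, "%d:%d", yadif_mode, field_order);
}
```

Passing "2:1" sets both fields, "3" sets only the mode, and NULL or "" leaves the defaults in place.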

In order to keep track of what filters you want to use for a job, the hb_job_t structure has an additional hb_list_t field added. Adding filters from code is done like so:

Code: Select all

hb_list_add( job->filters, &hb_filter_deinterlace );
hb_list_add( job->filters, &hb_filter_deinterlace_mc );
hb_list_add( job->filters, &hb_filter_deblock );
hb_list_add( job->filters, &hb_filter_denoise );
Filters are applied inside libhb's render.c in the order that they appear in the job's filter list, so supporting filter reordering from the GUI is theoretically possible. However with the current filter set, the ordering listed above is probably ideal.

Currently, I use the job->deinterlace flag inside render.c to manually put in the yadif filter, but there is no GUI support for applying any of the other filters. As per jbrjake's suggestion, I'll put in support for this in HandBrakeCLI and then we can evaluate how to deal with it in the GUI later. I don't know if there is any desire to be able to use the current libavcodec deinterlacer instead of always using yadif, but if so then that could be supported as well.
huevos_rancheros
Posts: 12
Joined: Fri Jul 20, 2007 2:40 am

Post by huevos_rancheros »

Work on the filters proceeds and you can grab the SVN diff (against revision 745) and additional files here.

There are four new files which are ports of mencoder filters: deinterlace.c (yadif/mcdeint), denoise.c (hqdn3d), deblock.c (pp7), and detelecine.c (pullup). These should be added to the libhb directory (and the Xcode project and libhb targets if you work from within Xcode).

The denoising filter should work identically to mencoder. The deblocking filter still has the limitation of only being able to use a fixed quantization parameter, rather than the mpeg2 decoder's quantization table for the frame. The deinterlacing filter contains both the yadif and mcdeint filters, and has been updated to implement the yadif frame per field modes (yadif=1, yadif=3) since those are required for mcdeint to function as intended. When in frame per field mode, the deinterlacing filter also performs a function similar to -vf framestep=2, so the frame rate is preserved. The detelecine filter is new and from testing it looks like it might not quite be finished yet. I'm seeing some jerky motion on telecined sources, especially during camera pans, but hopefully this can be fixed.

The HandBrakeCLI in this diff has options for turning on all of the various filters for testing purposes, and regardless of the order you specify the filters in the command line, they will always be run in the order: detelecine -> deinterlace -> deblock -> denoise. There is also some placeholder Mac GUI code in the diff (Controller.mm) for setting up the filters for a job.

The parameters for the deblock, denoise, and detelecine filters are identical to their respective mencoder filters (as documented here). The deinterlace filter is a little different, however, since it combines the yadif and mcdeint filters into one big tasty, yet slow running, sandwich. It takes four parameters "YM:FD:MM:QP" which are the yadif mode, field dominance, mcdeint mode, and mcdeint quantization parameters respectively.

When the deinterlace yadif mode is set to -1, the old libavcodec deinterlacer is run. This is also the default behavior if no deinterlace parameters are specified, for example:

Code: Select all

HandBrakeCLI -i ~/Movies/FOO/VIDEO_TS/ -o ~/Movies/foo.mp4 -t 1 -c "1-1" --deinterlace
When the mcdeint mode is set to -1, it is not run - this is the default behavior if you don't specify the mcdeint mode. If you want to run mcdeint, you should use it in combination with yadif mode 1 or 3 to get the best results. The field dominance parameter is shared between yadif and mcdeint, thus it is not repeated in the mcdeint parameters list. When field dominance is set to -1, it should correctly autodetect from the source, so that's what you'd usually want to pass. For example:

Code: Select all

HandBrakeCLI -i ~/Movies/FOO/VIDEO_TS/ -o ~/Movies/foo.mp4 -t 1 -c "1-1" --deinterlace="3:-1:2:10"
Here's a final example using detelecine, deblock, and denoise:

Code: Select all

HandBrakeCLI -i ~/Movies/FOO/VIDEO_TS/ -o ~/Movies/foo.mp4 -t 1 -c "1-1" --detelecine --deblock="4:2" --denoise="3:2:3:3"
Have fun!
eddyg
Veteran User
Posts: 798
Joined: Mon Apr 23, 2007 3:34 am

Post by eddyg »

huevos_rancheros wrote:Work on the filters proceeds and you can grab the SVN diff (against revision 745) and additional files here.
Good job!
huevos_rancheros
Posts: 12
Joined: Fri Jul 20, 2007 2:40 am

Post by huevos_rancheros »

eddyg wrote:Good job!
Thanks! It looks like I found my detelecine pullup problem. Apparently, I was neglecting to update the buffer start/stop times on frames "dropped" by the filter which Quicktime didn't seem terribly happy about on playback. If you grabbed the ZIP already, I recommend downloading it again to get the updated detelecine.c file.

I put frames "dropped" in quotes, since the detelecine filter as currently implemented doesn't actually drop any frames. Instead, it duplicates older frames in place of frames that would normally be dropped to go from 30 fps -> 24 fps. It'll actually work OK if you set the output frame rate to 23.976 fps or leave it as "Same as source", but you'll have significantly smoother motion if you set the output frame rate to 29.97.
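The timing fix described above can be sketched like so. The buf_t struct here is a hypothetical stand-in for hb_buffer_t, and this is illustration, not the actual detelecine.c:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical frame buffer; HandBrake's hb_buffer_t has more fields. */
typedef struct
{
    int64_t start, stop;   /* presentation interval of the frame */
    uint8_t data[8];       /* pixel data (tiny here for illustration) */
} buf_t;

/* When pullup would drop `cur`, emit a copy of `prev` instead, but
   carry over the dropped frame's start/stop times so the stream's
   timing stays consistent. Sketch of the fix described above. */
static void duplicate_in_place (buf_t *out, const buf_t *prev, const buf_t *cur)
{
    memcpy (out->data, prev->data, sizeof out->data);  /* old pixels */
    out->start = cur->start;                           /* new timing */
    out->stop  = cur->stop;
}
```

Forgetting the last two assignments reproduces the symptom described: the pixels are fine but the timestamps belong to a different frame, which players like QuickTime handle badly.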
jbrjake
Veteran User
Posts: 4805
Joined: Wed Dec 13, 2006 1:38 am

Post by jbrjake »

Incredible =)

All the stuff I've had to do with mencoder because it was telecined or interlaced or needed denoising....never again.

This is major. On par with mpeg-2 stream support. Thank you, huevos_rancheros.

The IVTC is really smooth after that last detelecine.c update. When I put in 29.97 as the frame rate and run --detelecine, is that equivalent to pullup,softskip,harddup?

Oh, and this might very well end up getting committed to the svn in 9 hours or so. ;-)
jbrjake
Veteran User
Posts: 4805
Joined: Wed Dec 13, 2006 1:38 am

=)

Post by jbrjake »
