
There is no universal way to reduce a video's file size, because not all files are created equal: how well you can compress depends on the codec, the bitrate, the container, and the frame rate. What I can present here is a script (for the bash shell, but anyone with minimal knowledge of shell scripts should be able to adapt it for other shells) that converts 4k yuv420p to 2k yuv444p, retaining all color information and using all luma data to compute the downscaled luma plane (even though the result is still 8 bits per plane). Find the working script attached, together with the test 4k image I used to render a 5-second test 4k yuv420 video (that rendering of course only retained the 2x2 chroma pattern, not the 1x1 one).

The heart of the script is a single ffmpeg call that converts the input file into the downscaled output format:

ffmpeg -i <input> \
  -filter_complex 'extractplanes=y+u+v[y][u][v];[y]scale=w=1920:h=1080:flags=print_info+bicubic+bitexact[ys];[ys][u][v]mergeplanes=0x001020:yuv444p' \
  <output options and output file>

The bad news is that I've not found a method to yield a 10-bit luma output without hitting bugs in ffmpeg: it seems that ffmpeg garbles half of the image when trying to use the split and merge filters on color planes with values larger than 8 bits (I tried pix_fmts=yuv420p10le and yuv420p16le).
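
For readers who want something they can run directly, a fully spelled-out invocation might look like the following. The file names, the choice of libx264, and its settings are placeholder assumptions of mine rather than the original script's values; any encoder that accepts yuv444p input (libx264 can, via its High 4:4:4 profile) would do.

# Hypothetical end-to-end example: extract the planes, downscale only Y, merge back as 4:4:4.
# input_4k.mp4, output_1080p.mkv and the x264 settings are placeholders.
ffmpeg -i input_4k.mp4 \
  -filter_complex 'extractplanes=y+u+v[y][u][v];[y]scale=w=1920:h=1080:flags=print_info+bicubic+bitexact[ys];[ys][u][v]mergeplanes=0x001020:yuv444p' \
  -c:v libx264 -preset slow -crf 18 \
  output_1080p.mkv

A quick sanity check with ffprobe (it ships with ffmpeg) should then report width=1920, height=1080 and pix_fmt=yuv444p:

ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height,pix_fmt \
  -of default=noprint_wrappers=1 output_1080p.mkv
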
I spent some hours experimenting, starting from a test image that makes it easy to see whether a 2x2 red/green pixel pattern is still visible as a 1x1 pixel pattern in the downscaled yuv444 image. It was less easy than I anticipated, as scaling all planes of a yuv image together does not seem to leave the chroma details as untouched as I expected/wanted. So I ended up with the ffmpeg command lines above, using "-filter_complex" to split the yuv420 planes, scale only the Y plane (bicubic still looking best), and then merge the planes into yuv444. The good news is that this works well for retaining the full chroma resolution, while still scaling the luma plane down in a reasonable and artefact-free way.

A side note, since the question of deblocking filters came up: which deblocking filter do you mean? The one that is an integral part of the h.264 standard decoder is, of course, being used, as without it the decoding would simply fail. If you mean the video filter named "pp", that filter is not a default part of the processing chain when invoking the stock ffmpeg executable. In the past, especially when MPEG-4 ASP and xvid were popular, some software (like mplayer and VLC) chose to insert the "pp" filter into the processing chain by default when playing back files through the ffmpeg library, but the ffmpeg executable on its own (and also ffplay) will not use "pp" by default, and I don't think those players do so today either, especially since most playback is now done with hardware-supported decoding anyway. (There's one irrelevant exception: the default error concealment strategy - see the -ec option - is to apply a strong deblocking to macroblocks that are known to contain damaged data. But unless your recordings suffer from bit-rot, that's never going to happen.)
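
To make the opt-in nature concrete: if you do want ffmpeg to run "pp", you have to ask for it explicitly. A minimal sketch, assuming an ffmpeg build that includes libpostproc (GPL builds usually do) and hypothetical file names:

# The "pp" postprocessor only runs when requested; pp=default enables its
# standard deblocking/deringing subfilters. Nothing like this is inserted automatically.
ffmpeg -i blocky_source.mp4 -vf pp=default -c:v libx264 -crf 20 deblocked.mp4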

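Coming back to the scaling experiments: for comparison, the straightforward single-filter downscale that resizes all planes together would look roughly like this (file names and encoder settings are again placeholders, not from the original script):

# Naive downscale for comparison: swscale resizes luma and chroma together, so the
# full-resolution chroma of the 4k source is lost before the conversion to 4:4:4 -
# which is why the 2x2 chroma test pattern no longer survives intact.
ffmpeg -i input_4k.mp4 \
  -vf 'scale=w=1920:h=1080:flags=bicubic,format=yuv444p' \
  -c:v libx264 -preset slow -crf 18 \
  naive_1080p.mkv
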
Some background on where the 10-bit luma idea comes from: downscaling an H.264 video frame by precisely 50% from UHD->1080p presents the option of working directly with the 4:2:0 YUV data decoded from the H.264 file. Since the UV chroma data is already subsampled at half the horizontal and vertical resolution of the Y luma data, the chroma data could be used directly as 8-bit 1080p 4:4:4 UV data without resampling it. The luma macroblocks would be decoded into 8-bit monochromatic pixels, and each 2x2 block of 8-bit luma pixels would then be summed to produce a single 10-bit subsampled luma pixel.

The major advantage is that the original precision of the data samples is preserved without averaging - the only thing that is downscaled is the spatial resolution of the image. The reason to use direct 2x2 summing rather than bilinear or bicubic interpolation is the geometric macroblock tiling of the H.264 frame: rather than averaging in artifacts from adjacent macroblock edges, it's better to combine 2x2 arrays of luma values within each macroblock.

To make use of this technique, you'd need a custom decoder that works directly in YUV color space on the original 8-bit 4:2:0 H.264 video to produce the equivalent of a decoded 10-bit 4:4:4 H.264 video. While the luma blocks would contain genuine 10-bit data, the chroma blocks would contain the original 8-bit values; in practice, this would look nearly indistinguishable from full 10-bit 4:4:4.
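
For the record, the arithmetic of that 2x2 sum works out exactly: four 8-bit samples add up to at most 4 x 255 = 1020, which still fits the 10-bit range 0..1023, so the summed value preserves all the information of the four original samples without any rounding; only dividing back down to 8 bits would throw the two extra bits away.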
