Hi there,
Thanks for checking out an 'Explained' blog! In these, my aim is to give you some detail on what common technical terms in digital video and film actually mean. I hope you find something useful here.
Please note: I don't consider myself an expert on these topics - if you believe I've made a mistake somewhere, please do let me know! I'd love for this to be a learning experience for me as much as anyone else.
I've written about things like colour space, GOP structure, motion compensation and more, but this one brings us back around to the basics. For those of you new to these things, or for those wanting a refresher, this is for you! Here's what we'll be covering:
- Resolution
- Frame Rate
- Interlaced & Progressive Scan
- Aspect Ratio
- Pixel Aspect Ratio
- Pixel Density
- Bit Rate
- Lossy & Lossless Compression
That’s quite a list! So without further ado, let’s get started!
Resolution
Resolution is a term thrown around left, right and centre, both in the professional content creation world and in consumer electronics. There's a high probability you'll already know most of what we'll be covering here, so feel free to skip this section or let it act as a refresher.
Resolution defines the number of vertical and horizontal lines of pixels there are in a video file or a screen. A display has a physical number of pixels while a video file can be encoded to a certain resolution.
We write resolution as a multiplication, horizontal x vertical, the most common today being 1920x1080, otherwise known as Full HD. That's 1920 pixels across each line by 1080 lines down the frame. If we make that calculation we get just over 2 million pixels in total. You may also see this resolution referred to as 1080p, where '1080' stands for the 1080 horizontal lines (implying 1920x1080 unless a different aspect ratio is in use, see below) and 'p' is for progressive scan, covered later.
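If you fancy checking that figure yourself, here's a quick sketch (Python is simply my choice of language for the examples in this post) that multiplies width by height for a few of the resolutions covered below:

```python
# Total pixel count for some common resolutions (width x height).
resolutions = {
    "HD (720p)": (1280, 720),
    "Full HD (1080p)": (1920, 1080),
    "Ultra HD": (3840, 2160),
}

for name, (width, height) in resolutions.items():
    pixels = width * height
    print(f"{name}: {width}x{height} = {pixels:,} pixels (~{pixels / 1e6:.1f} million)")

# Full HD (1080p): 1920x1080 = 2,073,600 pixels (~2.1 million)
```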
Below is a table of the most common resolutions today.
| Name | Resolution/Aspect | Typical Use |
| --- | --- | --- |
| DCI 4K | 3996x2160 / 1.85:1 | 4K is mostly only used in production as of now. Some 4K cinema projections are available, but still rare. There are various standards for 4K; this is the one for cinema projection. |
| Ultra HD | 3840x2160 / 16:9 | The resolution used by most '4K' or UHD TV sets for home consumer content. |
| DCI 2K | 1998x1080 / 1.85:1 | Standard cinema projection resolution at a 1.85:1 ratio. |
| Full HD | 1920x1080 / 16:9 | The standard for Full HD consumer content at the moment, from TV to Blu-ray and online streams. A common resolution for shooting and editing. |
| HD (720p) | 1280x720 / 16:9 | Commonly used as an alternative to 1080p when bandwidth is low; a common choice for broadcast and online streams. |
| SD | Various | Exact resolution depends on the video standard (PAL/NTSC) and signal type. Commonly used for non-HD broadcast and low-bandwidth streaming. |
4K & Ultra-HD
4K & Ultra HD (UHD) are tagged as being 'the next big thing' in display technology, but 4K recording (at least in terms of codec resolution, if nothing else) has been available in high-end cinema for some time, since the RED ONE was released in 2007.
4K & UHD both offer around 4 times the pixel count of HD (roughly doubling both the horizontal and vertical resolution), giving us a much higher level of detail compared to an equivalently sized HD display (see Pixel Density). There is an ongoing argument about the necessity of these higher resolutions, however, with professionals arguing on both sides. Some believe that at the correct viewing distance for the size of the display, 1080p is already enough to make individual pixels indistinguishable, whilst others argue that not only is 4K sharper, it offers other benefits, particularly in the production process.
My personal opinion is that 4K certainly should happen. Yes, its necessity can be debated, but there is no question that it's a substantial improvement, and only by pushing for new do we get new. 4K viewing and shooting should, and will, become standard eventually, and even whilst that's a little way off, shooting in 4K allows for greater reframing and motion stabilisation in post, among other tools, when delivering at 1080p.
That is not to say, however, that everyone should immediately start shooting in nothing but 4K. In fact, by and large, 1080p or 2K is perfectly suitable for the majority of productions for now and doesn't entail the vast resource and workflow requirements that come with 4K. Let's let the manufacturers and companies worry about 4K whilst we go about business as usual; soon enough 4K will be a far more viable and worthwhile option, and if nothing else, it'll drive the prices of 1080p equipment down further.
4K & Ultra HD, what’s the difference?
The first thing to understand here is that 2K and 4K are merely terms denoting a resolution in the region of 2,000 or 4,000 horizontal pixels; neither actually tells us an exact resolution. For example, 1080p can legitimately be referred to as 2K. Someone quoting 2K or 4K as an exact resolution is most likely referring to DCI 2K/4K.
So with that said, what is the difference between 4K and UHD?
- (DCI) 4K is the resolution set to be used in Digital Cinema, that’s 3996x2160 at a 1.85:1 ratio.
- Ultra HD/UHD is the equivalent used in TV displays at 3840x2160 with a 16:9 ratio.
As you can see, there's only really a small difference, ultimately due to the different aspect ratios used by each. The reason we have two is simple: we already have two similar ratios in use today, 1080p at 1920x1080 for TV and DCI 2K at 1998x1080 for cinema, and 4K and UHD carry exactly the same distinction up to a higher resolution. Full HD also fits directly into UHD when doubled, which makes the transition from one standard to the other easier, allowing direct rescaling with no cropping or pan and scan. Ultimately, one could argue that now would be the perfect time to merge the cinema and TV standards, but it seems that isn't what's going to happen.
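To see that 'fits directly' point in numbers, here's a small sketch showing that UHD doubles Full HD on each axis (and DCI 4K doubles DCI 2K), so the upscale is an exact multiplication with nothing cropped:

```python
# UHD is an exact 2x upscale of Full HD on each axis; DCI 4K is 2x DCI 2K.
pairs = {
    "Full HD -> UHD": ((1920, 1080), (3840, 2160)),
    "DCI 2K -> DCI 4K": ((1998, 1080), (3996, 2160)),
}

for name, ((w1, h1), (w2, h2)) in pairs.items():
    print(f"{name}: width x{w2 / w1:g}, height x{h2 / h1:g}, pixels x{(w2 * h2) / (w1 * h1):g}")

# Full HD -> UHD: width x2, height x2, pixels x4
# DCI 2K -> DCI 4K: width x2, height x2, pixels x4
```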
Frame Rate
As you probably know, video works by having a number of still frames put into a sequence to create the illusion of motion when done at a high enough rate. This rate is known as Frame Rate.
In the days of silent film, there was no standard frame rate; films were typically shot at anywhere from 12 to 24fps (frames per second) and usually played back at around 18fps. A standard was eventually put in place making 24fps the norm. There are numerous theories as to why 24fps was chosen (it's often linked to the introduction of sound in film), but I won't pretend to know exactly why. Nonetheless, the decision proved to be a historic one, as that frame rate remains the standard for cinema projection today.
Ever wondered why old film tends to look sped up when we watch it today? It's because it was shot at a lower frame rate than the one we're playing it back at. Something shot at 12fps and played back at 24fps looks fast, whilst something shot at, say, 120fps and played at 24fps looks slow (this is how slow motion is captured).
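Here's that relationship as a tiny sketch (the frame rates are just the examples from above): the apparent speed is simply the playback rate divided by the capture rate.

```python
def apparent_speed(capture_fps: float, playback_fps: float) -> float:
    """How fast the action appears on playback: >1 looks sped up, <1 looks slowed down."""
    return playback_fps / capture_fps

print(apparent_speed(12, 24))   # 2.0 -> 12fps silent film played at 24fps looks twice as fast
print(apparent_speed(120, 24))  # 0.2 -> 120fps footage played at 24fps runs at 1/5 speed
```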
What other common frame rates exist today? Let’s take a look:
| Name/Purpose | Frame Rate |
| --- | --- |
| Film | 24fps |
| PAL | 25fps |
| NTSC | 30 (29.97) fps |
| HFR (High Frame Rate) | 48 or 60 fps |
So, if 24fps is the norm for film, why do we have these other frame rates for TV? Why not just keep it the same? It's mostly down to the electrical systems used. In the UK (PAL) the mains frequency is 50Hz, whilst the USA (NTSC) uses 60Hz. As you can see, the frame rates used in each region divide evenly into the local mains frequency (25 into 50, 30 into 60). This is no coincidence; it was done to prevent flickering and to guarantee a consistent frame rate.
Finally we have HFR. This is a newly coined term, created by certain filmmakers and companies (including James Cameron and Peter Jackson) in an attempt to establish a new standard for film frame rates. The idea behind it is simple: increase the frame rate for a smoother image, on the basis that the human eye is commonly said to perceive smooth motion at around 60fps. Whilst this works in theory, many are against the idea, believing 24fps offers a more natural look, with a certain amount of motion blur considered an aesthetic choice we now naturally associate with film. Whilst it can be argued HFR allows for a better 3D experience, its place within cinema is far from decided. I'm rarely against advancements in technology within film, but in this case I generally lean towards sticking to the 'filmic look' of 24p, with 48+fps tending to look unnaturally smooth and even giving the illusion of being sped up in my experience.
Interlaced & Progressive Scan
Resolution describes the size of a frame and the frame rate tells us how many frames we have, but there’s also the case of how these frames are put in sequence. Nice and quick section here.
You're sure to have seen terms like 720p, 1080i and so on; if you never knew what those letters meant, this is it: progressive and interlaced scan. Where the number tells us how many horizontal lines of pixels there are (from which we can usually infer the full resolution), the letter indicates how the frames are put together. Progressive scan is the more obvious of the two, simply denoting that each frame is drawn in full, one after another, starting at the top and working down. You may see it expressed as 25p; that's 25 full progressive frames per second. This type of scan is used on PC displays and most TV sets today.
Interlaced, on the other hand, works a little differently. Instead of drawing each frame in full, frames are split in two: the even and the odd horizontal lines, known as 'fields'. This type of scan is typically seen in broadcast and is expressed as, for example, 50i. That's 50 'half' frames per second, resulting in 25 full frames when combined.
Which is better? It mostly comes down to the purpose of the video. The future of video is certainly leaning towards progressive scan, making interlaced less necessary, but it is still a common requirement for broadcast submissions.
So why do we have interlacing at all? It mostly comes down to need. Interlacing provided two main benefits: first, a refresh rate matching the mains frequency (50Hz in the UK, 60Hz in the USA) was necessary to remove flicker on CRT TV sets (see Frame Rate above), and second, it requires much less bandwidth, halving the amount of data needed at any one moment. Interlaced scan also brought an added benefit: although the theoretical full frame rate is the same (25 full frames per second for both 50i and 25p), 50i actually gave smoother perceived motion on interlaced TVs by drawing each half of a frame at a separate moment in time.
The main downside we as end users experience from having the two separate methods is the motion artifacting that can occur when one type of video is played on the other type of display. When we play interlaced video on a purely progressive screen (such as a PC monitor or many modern TV sets) we can often see the de-interlacing process taking place, where the separate fields are noticeably misaligned. This is particularly obvious on fast motion and paused frames. The issue can be resolved by passing the file through a de-interlacing process during production, but this will usually degrade the image quality (if only slightly) and can introduce other artifacts. When we play a progressive file on an interlaced display we can again see issues, in the form of 'juddery' playback.
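To make the field idea a little more concrete, here's a minimal sketch of the 'weave' step using NumPy, with two made-up fields as input. It interleaves the even and odd lines back into a full frame, and it's exactly this interleaving that shows up as 'combing' when the two fields were captured at different moments:

```python
import numpy as np

def weave(top_field: np.ndarray, bottom_field: np.ndarray) -> np.ndarray:
    """Interleave two fields (each half the frame height) into one full frame."""
    height = top_field.shape[0] + bottom_field.shape[0]
    frame = np.empty((height, top_field.shape[1]), dtype=top_field.dtype)
    frame[0::2] = top_field     # top field supplies the even lines
    frame[1::2] = bottom_field  # bottom field supplies the odd lines
    return frame

# Toy example: two 540-line fields weave back into a 1080-line frame.
top = np.zeros((540, 1920), dtype=np.uint8)
bottom = np.full((540, 1920), 255, dtype=np.uint8)
print(weave(top, bottom).shape)  # (1080, 1920)
```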
Aspect Ratio
The resolution of an image doesn't tell us everything about the appearance of a frame; we also need to know the aspect ratio.
This tells us the ratio between the horizontal and vertical size of a frame, which in turn tells us its shape, and we express it as width:height. For example, a square ratio would be 1:1.
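As a quick illustration, here's a small sketch that reduces a frame size to its simplest whole-number ratio using the greatest common divisor:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce a frame size to its simplest whole-number ratio."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(1920, 1080))  # 16:9
print(aspect_ratio(1024, 768))   # 4:3
print(aspect_ratio(2560, 1600))  # 8:5, conventionally written as 16:10
```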
Let’s look at some common aspect ratios:
| Ratio | Purpose |
| --- | --- |
| 4:3 | Close to square; the old standard for TV and, going further back, film. |
| 16:9 (1.78:1) | The current standard for TV. This is the ratio of your TV set and most broadcasts. |
| 1.85:1 | One of two commonly used ratios in cinema. |
| 2.39:1 | The widescreen cinema standard, known as CinemaScope (or 'scope'). |
| 16:10 | A commonly used ratio for PC monitors. |
Pixel Aspect Ratio
Not to be confused with aspect ratio, pixel aspect ratio refers to the shape of the individual pixels that make up a digital video file.
Where displays themselves use square pixels, the video files they show can be encoded with a variety of rectangular pixel shapes instead. Let's take a look:
| Name | Ratio |
| --- | --- |
| Square | 1:1 |
| DV NTSC | 0.91:1 |
| DV PAL | 1.09:1 |
| DV NTSC Widescreen | 1.21:1 |
| DV PAL Widescreen | 1.46:1 |
| HDV 1080 | 1.33:1 |
| True Anamorphic | 2:1 |
In the table above the ratio is again expressed in the form x:1, this time giving the width of each pixel relative to its height, which stays fixed at 1.
There are a few things to be aware of here. The first is that pixel aspect ratio needs to be taken into consideration when producing content - nowadays software will sort most of this out, but it's worth knowing what's going on. A good example is shooting with an HDV camera. Although most cameras nowadays can shoot full 1080 HD at 1920x1080 with square pixels, some record 1080 at 1440x1080 yet achieve the same frame size as a result. This is because they use a pixel aspect ratio of 1.33:1, where each pixel takes up more horizontal space. We can confirm this by calculating 1440 x 1.33, which gives us roughly 1915 (the exact ratio is 4:3, which works out to 1920 on the nose).
When we bring footage like this into an editing program with a timeline set to full HD, we may find the image is distorted. This is because the editing software is treating the footage as if it had square pixels. You'd solve this by changing the sequence settings, re-encoding the footage to convert it to 1920x1080, or manipulating the footage to reverse the distortion.
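Here's the calculation from above as a short sketch; note that 1.33 is a rounding of the exact 4:3 pixel ratio, which is why the precise figure lands on 1920:

```python
def display_width(stored_width: int, pixel_aspect_ratio: float) -> int:
    """Width a frame occupies on screen once the pixel shape is accounted for."""
    return round(stored_width * pixel_aspect_ratio)

# HDV 1080 stores 1440x1080 with 1.33:1 (exactly 4/3) pixels:
print(display_width(1440, 4 / 3))  # 1920 -> the same on-screen width as Full HD
print(display_width(1440, 1.33))   # 1915 -> the rounded figure quoted above
```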
Pixel Density
Closely linked to resolution, pixel density tells us how many pixels a display has per unit of distance. It's measured in pixels per inch, with the unit PPI (or DPI, dots per inch). Why's this important? Because a 5" phone screen can have 1920x1080 pixels whilst a 60" TV can have exactly the same number. The TV, of course, is less sharp, as it spreads the same number of pixels over a much larger area.
This doesn't necessarily matter though, because alongside display sharpness and resolution we also need to consider how far away we're viewing the display from. On a phone screen the higher PPI is ideal because we're looking at it from arm's length, whereas we typically sit 6ft or more from a TV.
You may have seen Apple use the term 'Retina' display; what this means in practice is that we can't distinguish individual pixels on the screen from the distance we're meant to view it at. Back to our examples: a 60" TV at full HD has a pixel density of around 37 PPI, and we can't distinguish its pixels from further than about 2.4 metres away. A 5" phone display at full HD has a massive density of around 440 PPI, with pixels indistinguishable from about 20cm. With this in mind, we can see that the main benefit of 4K for consumer TV sets is that larger displays can be viewed from closer distances with no distinguishable pixels.
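For the curious, here's roughly how those figures are arrived at (a sketch assuming a 16:9 panel; the exact results depend a little on rounding): pixel density is the number of pixels along the diagonal divided by the diagonal size in inches.

```python
from math import hypot

def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixels per inch: pixels along the diagonal divided by the diagonal length in inches."""
    return hypot(width_px, height_px) / diagonal_inches

print(round(ppi(1920, 1080, 60)))  # 37  -> 60" Full HD TV
print(round(ppi(1920, 1080, 5)))   # 441 -> 5" Full HD phone (the ~440 quoted above)
```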
Bit Rate
For more information on bits, see our blog on Bit Depth.
Bit rate is one of the most important aspects to consider when recording, converting or mastering video. Regardless of other settings, the bit rate will always have a large impact on the result of the video.
But what is it? Bit rate is fairly self-explanatory: it tells us the data rate a video file is encoded at, measured in bits per second. Different rates suit different situations; for example, submitting files for broadcast typically requires a 50Mbps encode, whilst Blu-ray tends to play back at around 25Mbps and a YouTube stream is in the region of 5Mbps.
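A handy by-product of knowing the bit rate is being able to estimate file sizes. Here's a rough sketch (video stream only; audio and container overhead are ignored, and the durations are just examples):

```python
def video_size_gb(bitrate_mbps: float, duration_minutes: float) -> float:
    """Rough video-stream size: bit rate (megabits per second) x duration, in gigabytes."""
    megabits = bitrate_mbps * duration_minutes * 60
    return megabits / 8 / 1000  # 8 bits per byte, 1000 megabytes per gigabyte

print(video_size_gb(50, 30))  # 11.25 -> a 30-minute broadcast master at 50Mbps is ~11GB
print(video_size_gb(5, 10))   # 0.375 -> a 10-minute 5Mbps stream is under 400MB
```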
There is one other thing to bear in mind besides the rate itself: whether the rate is constant or variable. Again, fairly self-explanatory: a constant bit rate (CBR) remains fixed regardless of the content, whereas a variable bit rate (VBR) fluctuates depending on the complexity of the frames. There isn't necessarily a right or wrong choice here, it depends on the purpose, but modern algorithms are making variable bit rates a more and more alluring choice in many situations. Here are a few pointers to be aware of, with a small illustration of the difference after the list:
- A variable bit rate will generally be of higher quality to the eye than a constant bit rate when a fairly conservative bit rate is used, due to its intelligent allocation of data.
- If the sole purpose of the export is high quality, using a constant high bitrate (equal to the source bitrate) would be best. If the bit rate never drops below the original anyway then the benefits of variable are lost.
- A variable bit rate can cause inconsistency in streaming. With a constant bit rate, the video system can fairly accurately guess how much buffering is required to get enough preloaded video for the internet connection to keep up; a variable bit rate makes this much harder.
- Encoding with a variable bit rate is far more time consuming and power intensive than a constant bit rate, as the encoder has to analyse the footage in order to establish where to give more or less data. The more accurate and efficient you want it, the longer it'll take. Most encoders also allow for two or more 'passes' to be made, allowing for greater quality at the same compression rate thanks to more intelligent data allocation.
- As per above: If your primary objective is a quick export, a constant bit rate is the best option.
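To show the core difference, here's a deliberately simplified sketch (a toy model, not how any real encoder's rate control works) in which both approaches spend the same total budget, but the variable scheme hands more bits to the frames marked as more complex:

```python
# Toy model only - real encoders use far more sophisticated rate control.
complexity = [1.0, 1.0, 4.0, 6.0, 2.0]  # made-up per-frame complexity scores
total_bits = 70_000                     # the same overall budget for both approaches

cbr = [total_bits / len(complexity)] * len(complexity)        # equal share per frame
vbr = [total_bits * c / sum(complexity) for c in complexity]  # share follows complexity

print([round(b) for b in cbr])  # [14000, 14000, 14000, 14000, 14000]
print([round(b) for b in vbr])  # [5000, 5000, 20000, 30000, 10000]
```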
Lossy & Lossless Compression
The end is nigh as we finally reach the last section in this (rather lengthy) blog. This one looks at the two overarching compression types: lossy and lossless.
Any encode we ever do falls into one of these two categories (never both), and in a way they tell you the purpose of the encode.
An encode that falls under the lossy compression bracket is actively trying to reduce the file size, whether for Blu-ray, online streams, broadcast or otherwise.
A lossless compression technique, on the other hand, is used when we want to compress or convert without sacrificing any of the image's quality. As the name suggests, every piece of data can be recovered from a lossless compression. This type of compression exists well beyond video encoding; a ZIP file, for example, is lossless.
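As a quick demonstration of the 'every piece of data can be recovered' point, here's a short sketch using Python's built-in zlib module, which implements the same DEFLATE compression family used by ZIP files:

```python
import zlib

original = b"static background, static background, static background " * 100
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

print(len(original), "->", len(compressed), "bytes")  # highly repetitive data shrinks a lot
print(restored == original)                           # True: every byte is recovered
```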
Broadly speaking, lossy compression is used for encodes made for the end user, whilst lossless compression may be used during production; for example, a post house or application may require a format different from the one that was recorded, but in the interest of quality preservation, no data loss is desired during the intermediate conversion from one to the other. In short:
| Type | Meaning |
| --- | --- |
| Lossy | Intentionally reduces file size during the encode. |
| Lossless | An encode that does not remove any data. |
That's A Wrap.
Ah, finally... the end. It's been a long one and there's a lot of information to absorb here, but I hope this has at least provided you with a bit of insight into the basics of digital video and compression. Yes, the world of video compression is a complex and daunting one, but remember, it's not essential to know everything; the more you do know, though, the better equipped you'll be going into production and making sure you get those technical things right.
Just remember, these things are really not all that important in film-making; it's the story, characters, narrative, performances and generally all the creative choices that make a good production, whether a fictional drama or a documentary.