What to know about 1080p vs. 4K

A topic that comes up often is the issue of video resolution. It’s become a bit of a marketing gimmick, but it is still an important consideration depending on what you plan to do with your footage and resulting video. Real quick, let’s define video resolution. It’s simply the number of pixels used to capture or display a digital image. That’s it! As you’ll see in this article, resolution is just one tiny piece of the pie when it comes to digital image acquisition, yet most of the time it gets waaaaaaay more attention than it should.
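
If you like numbers, here’s a quick back-of-the-envelope comparison in Python (just a sketch; the frame sizes are the standard consumer 1080p and UHD 4K dimensions):

```python
# Compare total pixel counts for 1080p and UHD "4K" frames.
resolutions = {
    "1080p (Full HD)": (1920, 1080),
    "4K (UHD)": (3840, 2160),
}

for name, (width, height) in resolutions.items():
    megapixels = width * height / 1_000_000
    print(f"{name}: {width} x {height} = {megapixels:.1f} megapixels")

# UHD 4K has exactly four times the pixels of 1080p:
# (3840 * 2160) / (1920 * 1080) == 4.0
```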

I’ve had clients request a 4K camera before. When that happens, I immediately start asking questions about where the video is going and how they want to manipulate the image later. After a little bit of conversation, I usually realize that what they are really asking for is the best quality I can offer for their shoot.

Image “quality” is a sticky topic, and it is not always positively correlated with camera resolution. We have been so inundated with marketing noise that a lot of customers and clients now believe simply having 4K resolution means your video will look and sound great.

Thus, it is a loaded, complex, and controversial topic. I’ll try to give you, the reader, some takeaways that might clear up a few misconceptions. Bear in mind, the conversation entails objective information lightly sprinkled with subjective opinion. You may find opinions that differ.

Video resolution: a history

Back in the nineties, most TV sets displayed 480 lines of resolution (480 pixels, measured vertically). This was what was known as “standard definition.” The only way to see an image with more detail was to go to a movie theater. There, you’d see a 35mm projection with a level of clarity, color depth, and sharpness that you couldn’t get at home. 35mm film projection was really the only way to see a “high definition” image back then.

Then “high definition” (aka HD) came out in the late ’90s. The most common HD formats were 720p, 1080i, and 1080p. (The number refers to the lines of resolution, or vertical pixels, and the “p” or “i” indicates progressive or interlaced scanning, which denotes how the image is drawn onto the screen.)

When 1080p came along, we really reached what I consider to be the apex of definition in our footage. (Remember where I talk about subjective opinion? Here it is!)

In my educated opinion, 1080p rivals 35mm film pretty well, and using film as the basis for comparison, we reached a level of image detail that will serve us well for years to come. Most people watching a screen from a normal viewing distance don’t really benefit from increased resolution past 1080p. Again, completely my opinion, but I guarantee that most people watching an image on a 32-inch screen from ten feet away could not tell the difference between 4K and 1080p.
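
For the curious, here’s the rough math behind that claim (a quick Python sketch; it assumes a 16:9 panel and the commonly cited one-arcminute resolving limit for 20/20 vision, which of course varies from person to person):

```python
import math

# How big does a single pixel look on a 32-inch 16:9 screen from ten feet away?
DIAGONAL_IN = 32.0
DISTANCE_IN = 120.0  # ten feet

# Width of a 16:9 screen, derived from its diagonal.
width_in = DIAGONAL_IN * 16 / math.hypot(16, 9)

for name, horizontal_pixels in [("1080p", 1920), ("4K UHD", 3840)]:
    pixel_pitch_in = width_in / horizontal_pixels
    # Angle one pixel subtends at the viewing distance, in arcminutes.
    arcminutes = math.degrees(math.atan2(pixel_pitch_in, DISTANCE_IN)) * 60
    print(f"{name}: one pixel spans roughly {arcminutes:.2f} arcminutes")

# Both values come out well under 1 arcminute, meaning individual pixels are
# already finer than typical visual acuity at that distance, even at 1080p.
```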

So what happened with resolution? Why did it keep increasing?

It became a competition, among other things. Digital video camera technology kept improving, and every TV manufacturer wanted to build bigger, better televisions. There was also a genuine benefit to shooting at higher resolution: it allowed more flexibility in post-production, and 4K video could even be used to grab high-quality photographic stills. All of that added capability to television production.

The problem is, it’s created a lot of confusion in the industry. Let’s demystify some of it. The first thing we need to do is understand the difference between acquisition and exhibition resolutions. This is the key to understanding when 4K gives us real benefits and when it simply isn’t necessary.

“Acquisition resolution” is different than “exhibition resolution”

“Acquisition resolution” is the resolution at which you capture raw footage. For example, a 4K camera will capture raw footage in 4K; roughly four thousand pixels across the frame, in other words.

“Exhibition resolution” is the resolution that is shown to the end viewer when they watch your video. Another term for this is “display resolution.” 

Why is this distinction made?

Knowing where your video is being shown will determine whether it’s even necessary to shoot ultra-high resolution in the first place. Why spend money on a 4K camera if your website only has the ability to display 1080p?

It also lets the shooter know what kind of flexibility they have in post. Sometimes a shooter will capture in 4K and crop the video down to 1080p in the edit, which gives them much more flexibility to fix the framing of their shots. If there’s an unwanted item in the shot, like a microphone or some person in the background giving you the finger, you can easily crop it out. That only really works, however, if the exhibition resolution is less than the acquisition resolution. (More on that below.)
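
As a rough illustration of that latitude: a full 1080p window fits inside a UHD 4K frame with room to slide around. Here’s a minimal sketch with NumPy (the frame is just a placeholder array, and crop_to_1080p is a made-up helper for illustration, not part of any editing software):

```python
import numpy as np

# A placeholder UHD 4K frame: height x width x RGB channels.
frame_4k = np.zeros((2160, 3840, 3), dtype=np.uint8)

def crop_to_1080p(frame, left, top):
    """Cut a full-resolution 1920x1080 window out of a larger frame."""
    return frame[top:top + 1080, left:left + 1920]

# A centered reframe...
centered = crop_to_1080p(frame_4k, left=960, top=540)
# ...or slide the window right to lose an unwanted object on the left edge.
reframed = crop_to_1080p(frame_4k, left=1920, top=540)

print(centered.shape, reframed.shape)  # both (1080, 1920, 3): still true 1080p
```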

4K does not automatically mean “better quality.”

Many cameras now have the ability to take a low-quality video and upscale it internally. What you are getting is a low-resolution image that has been artificially magnified. It’s fake 4K, essentially. It’s like taking a blurry old Polaroid and scanning it at 600 PPI, which is considered pretty high. Just because the image has been digitally converted to a high-resolution file doesn’t mean it’s a quality image. When you open the video in your editing software, it will say “4K,” but the image will not necessarily look any sharper.
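
If it helps to see why, here’s a toy sketch of what that kind of internal upscale amounts to (nearest-neighbour enlargement with NumPy; real cameras use smarter interpolation, but the principle is the same):

```python
import numpy as np

# A tiny stand-in for a low-resolution image: 4 x 4 grayscale pixels.
low_res = np.arange(16, dtype=np.uint8).reshape(4, 4)

# "Upscale" by repeating each pixel 2x in both directions (nearest neighbour).
upscaled = np.kron(low_res, np.ones((2, 2), dtype=np.uint8))

print(low_res.shape, "->", upscaled.shape)  # (4, 4) -> (8, 8)
# Four times as many pixels, but every new pixel is a copy of an existing one.
# The pixel count went up; the detail did not.
```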

If the camera actually possesses a true 4K sensor, then you can be assured you are getting true 4K. (A sensor is the chip that converts photons from the lens to electrical signals.)

Whether a camera’s image is perceived as “higher quality,” however, depends on a lot of things, and resolution is only a small part of that equation.

In 2007, a company by the name of Red Digital Cinema changed the world and released a camera called the RED ONE. Dubbed a “film-killer” by many in retrospect, this was one of the earliest cameras that professional filmmakers started using as a substitute for 35mm motion picture film.

While there were remarkable digital movie cameras paving the way as early as the ’90s (like the Sony F900 used in the Star Wars prequels), Red’s new camera made a much bigger splash in the community.

With its relatively low cost, the Red One offered filmmakers of all budgets the chance to shoot digital footage that could fool general viewers into thinking it was film. To understand the difference in cost: the F900 camera from Sony was upwards of 150 thousand dollars, whereas the Red One body and a decent lens cost about twenty-five thousand, not including tax.

Red touted this camera mainly on its resolution. 4.5K! This was the future, they said. (At the time, 1080p was just finding its first consumer market and was considered out of reach for the average person.) As a result, many amateur filmmakers and hobbyists came to associate the “cinematic” look with higher resolution. However, the cinematographers who were actually using it at the time knew what the rest of us didn’t: the Red One’s images looked GREAT for an entirely different set of reasons, and resolution was the least of them.

[Photo: Dreamcity Cinema’s Red One MX camera in use on set.]
Fun fact: Dreamcity Cinema still often shoots on the Red One MX camera!

What made it look so good? For starters, it lacked the artificial sharpness of most video cameras. Its digital noise mimicked film grain. It had 13+ stops of dynamic range (the ability to capture a wider range of brightness and shadow values), which was relatively unheard of for video cameras at the time. It also had 12-bit color depth (the ability to more accurately record colors and shades of light and dark), a larger sensor (video cameras at the time were using sensors roughly 1/6th the size of the Red’s), and a brand-new codec called R3D that allowed an editor or colorist to directly manipulate the raw image data from the camera’s sensor after it had been recorded to memory.
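
To put rough numbers on two of those specs (quick arithmetic only, using the figures quoted above):

```python
# Dynamic range: each "stop" is a doubling of light, so 13 stops covers a
# brightness ratio of 2**13 between the darkest and brightest detail retained.
stops = 13
print(f"{stops} stops ~ {2**stops:,}:1 brightness range")  # 8,192:1

# Bit depth: the number of distinct tonal levels recorded per channel.
for bits in (8, 12):
    print(f"{bits}-bit: {2**bits:,} levels per channel")   # 256 vs 4,096
```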

Since the Red One came out, Red Digital Cinema has continued to up the ante by increasing resolution. Each new camera they produce claims to have higher resolution or the ability to shoot faster frame-rates than the last. (Faster frame-rates allow for better slow-motion footage.)

Their cameras do indeed excel at accurate skin tones and rich, luscious color science, and they offer a data codec that allows a lot of flexibility in post. So why do they push resolution as the main selling point?

The answer is simple: pixel count is easier to advertise. More K’s is better than less K’s, right? How else do you show your camera is superior when things like “good skin tones” are difficult to quantify?

What a viewer perceives as “great image quality” or a “cinematic” look comes down to many things, both subjective and objective, quantifiable and unquantifiable.

Color science, dynamic range, lens choice, sensor size, lighting, production design, and the skill of your videographer or cinematographer will have a much more positive effect on the quality of your image.

[Photo: A sample interview setup from one of Dreamcity Cinema’s recent shoots in Cedar Rapids, Iowa.]
You might think production design is only for narrative films. This interview for a commercial might seem “natural-looking,” but we actually spent about thirty minutes arranging the furniture, plants, and other elements in the background to look aesthetically pleasing without being distracting. Production design is subtle, but it has a powerful effect on the resulting image, regardless of what you’re filming.

The bottom line: resolution is not the only thing that matters. And, as I said earlier, 1080p is more detail than most people need.

When is more resolution needed?

Here’s where it can get confusing. Despite what I said before, resolution can actually have a positive effect on the color rendition of an image. Sensors capable of capturing higher resolution do often tend to reproduce better luminance values (light and dark) and chroma values (color tones). BUT… it is not directly due to the increased resolution itself. It’s a by-product of having more photosites (areas on a sensor that gather photons) to capture color with. This is where people will often throw their hands up and run away in confusion.
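
One way to see that by-product in action: when a 4K capture is delivered as a 1080p frame, each finished pixel can be built by averaging several photosites, which smooths out noise. Here’s a toy simulation (random noise on a flat gray subject; it isn’t a model of any real sensor pipeline, just the averaging idea):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a flat gray subject captured with random sensor noise,
# once at 1080p photosite density and once at 4K density.
true_value, noise = 0.5, 0.05
capture_1080 = true_value + rng.normal(0, noise, size=(1080, 1920))
capture_4k = true_value + rng.normal(0, noise, size=(2160, 3840))

# Downsample the 4K capture to 1080p by averaging each 2x2 block of photosites.
downsampled = capture_4k.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

print("noise at native 1080p:  ", round(capture_1080.std(), 4))
print("noise after 4K -> 1080p:", round(downsampled.std(), 4))
# Averaging four photosites per output pixel roughly halves the noise, which
# reads as cleaner tone and color, even though the delivered frame is 1080p.
```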

There is one thing that higher resolution is genuinely needed for: the ability to crop or magnify an image. On film sets where a lot of visual effects will be used, having an image with greater resolution is extremely helpful for tracking movement and allowing the VFX artist to zoom in digitally. It’s also helpful for re-composing your image later, or applying virtual stabilization to shaky footage. Having more resolution is, in fact, helpful. It’s just that there are trade-offs, and oftentimes a post-production team must weigh the cost.

How can I communicate with my video production company that I want the highest professional quality without just referring to resolution?

Always look at their portfolio. If their work looks the way you imagined, tell them you want your project to look similar. Again, professional-looking images are created through the talent and expertise of the operator, and higher resolution rarely means you’ll get a better-looking image without any of that work.

If you are planning on distributing the final video on a platform that requests 4K, or if you want the ability to zoom in on footage later, you should raise these concerns with the video production company before hiring them. Chances are, they’ll know exactly how to help you.

My videographer is shooting my video in 1080p. Are you saying that it can still look awesome?

Yep! 🙂

—-

Dreamcity Cinema is a video production company in Cedar Rapids, Iowa that specializes in cinematic, high-quality work. If you’re looking for someone to create your next video, we’d love to help.

And… if this article helped you (or confused you) please leave a comment below or send us a message and we’ll clear things up.