Modern smartphones come with powerful camera systems, and there’s a lot going on behind the scenes to make your photos look great. One of those things is pixel binning.
You may have seen Samsung use terms like “nona-binning” or “Adaptive Pixel” in its marketing when referring to pixel binning, claiming it improves low-light performance. But is this really the case? Let’s learn what pixel binning is, why it’s used, and how it works.
Why smartphone cameras use pixel binning
Before learning what pixel binning is and how it works, you must first know why it exists. You see, smartphones face a big problem when it comes to cameras: size limitation. A camera sensor is basically a plate of millions of pixels that captures ambient light. So the more pixels there are, the more light they can capture to produce a better image.
When we say “pixel” in this context, we’re not talking about the pixels on the screen that emit light, but rather the photosites in the camera sensor that pick up the light. This light is then converted and used to produce the image you see on your screen.
Now here’s the problem: if we keep adding more pixels, we’ll also have to keep enlarging the sensor to accommodate them. This is difficult because a phone’s camera module is only part of its body; you also need to install the battery, motherboard, speaker, and the plethora of sensors found in a smartphone.
To overcome this limitation, tech companies came up with a clever workaround. Instead of making the sensor absurdly large, they shrunk the pixels themselves, fitting more pixels into a given space to increase the image’s maximum theoretical resolution.
For reference, the 12MP sensor in the iPhone 13 has a pixel size of 1.9µm (micrometers), while the 48MP sensor in the iPhone 14 Pro has 1.22µm pixels. And the Galaxy S22 Ultra’s 108MP sensor has pixels of just 0.8µm, one of the smallest we’ve seen.
What is pixel binning, and how does it work?
Pixel binning is an image processing technique in which four or more neighboring pixels on a camera sensor are combined into a single superpixel (Samsung calls it a “tetrapixel” or “nonapixel,” depending on how many pixels are merged) that carries the sum or average value of all the pixels it contains.
Note that pixels do not physically move or morph into each other at the hardware level; it’s just their photonic data that is combined via software to mimic a larger pixel.
Let’s understand this with an example using the iPhone 14 Pro Max and the Galaxy S22 Ultra. The iPhone 14 Pro Max performs 4-in-1 pixel binning (a 2×2 matrix) to reduce image resolution from the native 48MP to 12MP. Similarly, the S22 Ultra performs 9-in-1 pixel binning (a 3×3 matrix), reducing the resolution from 108MP to 12MP.
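The combining step described above can be sketched in a few lines of code. This is a simplified illustration, not how any phone’s image pipeline is actually implemented: real binning happens on the sensor or image signal processor and works per color channel, while here we just average each n×n block of a toy grid of brightness values.

```python
def bin_pixels(readout, n):
    """Average each n x n block of photosite values into one 'superpixel'.

    readout: 2D list of brightness values whose dimensions are divisible by n.
    Returns a smaller 2D list, one value per block.
    """
    height, width = len(readout), len(readout[0])
    binned = []
    for row in range(0, height, n):
        binned_row = []
        for col in range(0, width, n):
            # Gather the n x n neighborhood and average it
            block = [readout[row + i][col + j]
                     for i in range(n) for j in range(n)]
            binned_row.append(sum(block) / len(block))
        binned.append(binned_row)
    return binned

# Toy 6x6 "sensor readout" (a real sensor has millions of photosites)
sensor = [[r * 6 + c for c in range(6)] for r in range(6)]

print(bin_pixels(sensor, 2))  # 4-in-1 binning: 6x6 -> 3x3
print(bin_pixels(sensor, 3))  # 9-in-1 binning: 6x6 -> 2x2
```

Note how the 3×3 case shrinks the grid more aggressively than the 2×2 case, which mirrors why a 108MP sensor and a 48MP sensor can both land at 12MP.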
Lowering the resolution this way lets your phone process photos faster, so you can see a photo right after you take it. Shooting at full resolution, on the other hand, puts a much heavier load on the image pipeline and takes noticeably longer to process.
Also, remember that megapixels and megabytes are not the same thing. Megapixel count refers to the number of pixels on the sensor (a fixed property), while megabytes measure the size of the image file (a variable one), which depends on the amount of information contained in your shot.
For example, the Galaxy A53 has a 64MP camera and performs 4-in-1 pixel binning to result in 16MP photos. By default, it shoots at 4624 x 3468 resolution for a total of 16,036,032 pixels or just 16MP (one megapixel equals one million pixels). If you switch to full resolution mode, you get 9248 x 6936 resolution shots for a total of 64,144,128 pixels or 64MP.
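The arithmetic behind those figures is simple enough to check yourself. Multiplying width by height gives the total pixel count, and dividing one mode by the other confirms that exactly four photosites feed each superpixel:

```python
# Resolution-to-megapixel arithmetic from the Galaxy A53 example
binned_pixels = 4624 * 3468  # default (binned) mode
full_pixels = 9248 * 6936    # full-resolution mode

print(binned_pixels)               # 16,036,032 pixels, i.e. ~16MP
print(full_pixels)                 # 64,144,128 pixels, i.e. ~64MP
print(full_pixels / binned_pixels) # 4.0 -> four photosites per superpixel
```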
Pixel binning does not guarantee better photos
Here’s something that might be hard to swallow: pixel binning is a workaround for a self-inflicted problem. The idea behind it is to fit more but smaller pixels onto a camera sensor instead of fewer but larger ones. That trade-off isn’t necessary, because a larger individual pixel will always capture more raw light on its own.
In comparison, a superpixel of the same size, built from the photonic data of several smaller pixels, has to guess what the final shot should look like, and that guesswork doesn’t always turn out well. This is why photos from Samsung phones sometimes look over-processed, while those from iPhones look more natural and cohesive.
Tech companies love to brag about the megapixel count of their new camera sensors, and because of this, the average smartphone user has come to believe that a higher megapixel count means better image quality. This is not the case. Image quality is determined more by the size of the sensor itself than by the number of pixels it contains.
The megapixel count only determines the maximum resolution at which your phone can take photos. Its one practical benefit is that you can zoom into and crop your photos without them becoming blurry. It tells you nothing about color science, white balance, dynamic range, or anything like that.
The supposed benefits of pixel binning are not the result of the technique itself, but of the powerful image processing algorithms and your phone’s chipset. It’s the latter that does the hard work to make your photos brighter, less grainy, and more vibrant.
The reason a lower-resolution photo can sometimes look better than a full-resolution one is that image-processing algorithms are costlier to run on a larger image; the extra pixels demand more processing power. A smaller image can be processed almost instantly.
Pixel binning is a workaround, not a feature
The point of pixel binning, ultimately, is to raise the maximum theoretical resolution a smartphone camera can capture while keeping everyday photos small enough for your phone to process quickly.
Image resolution is important because you obviously want to zoom in on your photos without losing detail, but numbers like 108MP are frankly unnecessary.
The best way to make sure the phone you’re looking to buy has a good camera system is to simply check out camera samples and read reviews. Don’t get too obsessed with the technical details; if you like what you see, that’s the camera for you.