
Deep Fusion Demo: Trying Out Apple’s Computational Photography Tech


Apple has released iOS 13.2, debuting the ‘Deep Fusion’ computational photography tech that it showed off during the iPhone 11 keynote, and we decided to take it for a quick spin to see what all the fuss is about. Spoiler Alert: If you care about photography and own an iPhone 11, iPhone 11 Pro, or iPhone 11 Pro Max, update your phone ASAP.

Deep Fusion, which is only available on the latest iPhones, uses the smartphone’s powerful A13 Bionic Neural Engine to perform what Phil Schiller called “computational photography mad science” every time you take a photo in typical lighting conditions. It essentially takes the iPhone’s Smart HDR tech a step further by intelligently combining multiple exposures into a single frame with improved detail and overall clarity.

Or, in Apple’s own words, Deep Fusion is “an advanced image processing system that uses the A13 Bionic Neural Engine to capture images with dramatically better texture, detail, and reduced noise in lower light.”

As The Verge explains in this helpful deep dive, Deep Fusion kicks in automatically in almost any situation when you’re shooting with the telephoto lens, and in medium-to-low light when you’re using the wide lens. The ultra-wide lens does not support Deep Fusion at all.
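For a rough sense of those rules, here’s a tiny Swift sketch that restates them as code. It is purely conceptual: Apple exposes none of this in a public API, and every type and function name below is invented for illustration.

```swift
// Conceptual sketch only: Apple does not expose Deep Fusion's mode selection in
// any public API, and every name here is hypothetical. The rules simply restate
// the lens/light behavior described above.
enum Lens { case ultraWide, wide, telephoto }
enum LightLevel { case bright, medium, low }

func deepFusionLikelyActive(lens: Lens, light: LightLevel) -> Bool {
    switch lens {
    case .ultraWide: return false            // never supported on the ultra-wide
    case .telephoto: return true             // active in almost every situation
    case .wide:      return light != .bright // medium-to-low light only
    }
}
```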

When active, the technology captures nine frames in total: four short exposures, four standard exposures, and one long exposure. Deep Fusion first merges three of the standard frames with the long exposure into a single “synthetic long” frame, then combines that with the sharpest of the four short exposures, and finally analyzes and enhances the result pixel by pixel.
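To make that sequence a little more concrete, here’s a rough Swift sketch of the merge order. Again, this is only an illustration built from the description above: the Frame type, the naive averaging, and the function names are all invented, and Apple’s actual pixel-by-pixel processing on the Neural Engine is far more sophisticated and not publicly documented.

```swift
// Purely illustrative sketch of the merge order described above; not Apple's
// implementation. All types and names are invented for this post.
struct Frame {
    var pixels: [Float]   // simplified single-channel buffer; all frames assumed same size
    var sharpness: Float  // precomputed detail score
}

// Naive placeholder merge: element-wise average of the buffers.
func merge(_ frames: [Frame]) -> Frame {
    var out = frames[0]
    for frame in frames.dropFirst() {
        for i in out.pixels.indices { out.pixels[i] += frame.pixels[i] }
    }
    let count = Float(frames.count)
    for i in out.pixels.indices { out.pixels[i] /= count }
    out.sharpness = frames.map(\.sharpness).max() ?? 0
    return out
}

func deepFusionStyleMerge(shortFrames: [Frame],    // the four short exposures
                          standardFrames: [Frame], // the four standard exposures
                          longFrame: Frame) -> Frame {
    // 1. Merge three standard frames with the long exposure into a "synthetic long".
    let syntheticLong = merge(Array(standardFrames.prefix(3)) + [longFrame])

    // 2. Keep the sharpest of the short exposures to preserve fine detail
    //    (assumes the four short frames were actually captured).
    let sharpestShort = shortFrames.max { $0.sharpness < $1.sharpness }!

    // 3. Combine the two; the real system then refines the result pixel by pixel.
    return merge([syntheticLong, sharpestShort])
}
```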

The final product should be a photo with enhanced detail and decreased noise in all of the right places. This is why most of Apple’s own sample photos show people in textured sweaters:

Photo: Apple

But Apple’s own sample photos don’t answer the real question: just how much of a difference does this technology make compared to Smart HDR? Apple claims that the difference is noticeable, and the beta testers who have been posting Deep Fusion images online over the past couple of weeks seem to agree. So we decided to shoot a quick before-and-after comparison of our own.

Both of these straight-out-of-camera (SOOC) JPEGs were taken earlier today with the same iPhone 11 Pro, using the 2x telephoto lens to ensure Deep Fusion would be active. The first was captured before installing the iOS 13.2 update; the second was taken afterwards (click for full resolution):

As you can see from the crops below, the differences in the level of detail in the pup’s fur, the texture of her snout, and even the texture of the couch fabric are definitely noticeable when you get in close. Both of these are 1600 x 1000 pixel crops that have not been resized or otherwise manipulated (click to enlarge):

Maybe the results aren’t mind-blowing (the impromptu photos of this editor’s dog snoozing on the couch are certainly nothing special), but Deep Fusion seems to offer a noticeable jump in texture and detail rendering that can only improve what is already a very capable camera. If you own an iPhone 11, 11 Pro, or 11 Pro Max, there’s no reason not to update to iOS 13.2 right away.

The only real ‘downside’ is that, now that Deep Fusion is officially shipping, it’s just a matter of time before we see a fresh crop of those “real camera vs. iPhone” comparisons everyone loves so much.




