Nvidia's GeForce RTX 3080 graphics card symbolizes why we tell people to wait for the second generation when bleeding-edge technology appears.
The radical new-look Turing GPUs inside Nvidia's GeForce RTX 20-series packed all sorts of cutting-edge technologies designed to usher in real-time ray tracing, a long sought-after goal for the gaming industry. Not only did Turing introduce specialized RT cores devoted to processing ray tracing tasks, it also debuted tensor cores, dedicated hardware that uses machine learning to help denoise ray-traced visuals and enable AI-enhanced tools like the fantastic Deep Learning Super Sampling (DLSS) technology. Turing's improvements also extended to the traditional shader cores, introducing an overhauled processing pipeline better equipped to handle games built using the newer DirectX 12 and Vulkan graphics APIs. All of these were huge departures from the norm.
But new doesn't always mean great. While the RTX 20-series indeed birthed a new ray-traced era, games that supported ray tracing or DLSS were few and far between over its lifetime. Worse, the RTX 20-series cards offered essentially the same performance in traditional games as their older GTX 10-series predecessors at the same price point. The initial reception (and sales) weren't glowing.
The $699 GeForce RTX 3080 and Nvidia's new Ampere GPU architecture change all that. This thing freakin' smokes. It's a massive upgrade over the older $699 GeForce RTX 2080, significantly faster than the former $1,200 RTX 2080 Ti flagship, and if you've been holding onto your older GeForce GTX 1080? The GeForce RTX 3080 absolutely crushes it. This is an excellent graphics card for 4K and ultra-fast 1440p gaming, if you can afford it when it launches on September 17.
Editor's note: This comprehensive review of the GeForce RTX 3080 goes longer than most as it's our first evaluation of an Ampere-powered GeForce RTX 30-series graphics card. Check out Nvidia GeForce RTX 3080 tested: 5 key things you need to know for high-level takeaways of this in-depth info, or use this table of contents to hop between the various sections of the review.
Before we dive into what's new in the GeForce RTX 3080, here's a high-level look at the Founders Edition card's specifications. You can find more info about how it stacks up against the previous generation in our GeForce RTX 30-series vs. RTX 20-series spec comparison.
Got it? Good. Now we're going to get geeky for a bit. Skip to the next section if you aren't interested in some deeper details on how the tech inside the GeForce RTX 3080 works.
The beating heart inside the GeForce RTX 3080 is Nvidia's new GA102 Ampere GPU. Ampere is built using Samsung's 8nm process node, moving up from the TSMC 16nm and slightly modified 12nm nodes used for the GTX 10- and RTX 20-series cards, so this is a generational leap for Nvidia. The last time Nvidia leaped a node, the GeForce GTX 10-series demolished its direct predecessors. It's no different this time around, as you'll soon see.
While Turing shook up GPU design, Ampere builds on Turing's foundations. Turing's streaming multiprocessors (SMs), the building blocks of the GPU, received a significant overhaul, adding a new integer pipeline (INT32) alongside the floating point pipeline (FP32) traditionally used to process shading. The new pipeline let Nvidia's GPU handle integer instructions at the same time as traditional FP instructions, giving Turing-based graphics cards a big speed boost in games that leaned heavily on those tasks, namely well-optimized Vulkan and DirectX 12 games. Performance in traditional games stayed largely stagnant in the RTX 20-series graphics cards, however, partially because FP32 is generally more important for gaming workloads.
Ampere builds on Turing while scaling back part of that design, doubling the number of CUDA cores in each SM. The new architecture keeps a data path devoted exclusively to those crucial FP32 tasks, while the second path can process either INT or FP tasks, rather than dedicating it to INT alone. That makes Ampere much faster at traditional game rendering.
Note that overall game performance doesn't scale up perfectly with CUDA core counts, especially because INT instructions can now potentially eat into the total FP throughput on that second path. All in all, it's a good, reasonable tweak. To feed the beasts, Nvidia doubled the bandwidth and partition size of the L1 cache in each SM, along with adding 33 percent more L1 capacity.
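To put the doubled-up SM design in concrete terms, here's a back-of-envelope peak-throughput calculation. The core count and boost clock below are Nvidia's published RTX 3080 specs, not anything we measured, and as noted above, peak TFLOPS doesn't translate directly into frame rates:

```python
# Back-of-envelope peak FP32 throughput for an Ampere-style GPU.
# Each CUDA core can retire one fused multiply-add (FMA) per clock,
# which counts as two floating-point operations.
def peak_fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    flops_per_core_per_clock = 2  # one FMA = 2 FLOPs
    return cuda_cores * flops_per_core_per_clock * boost_clock_ghz / 1000

# Nvidia's published RTX 3080 figures: 8,704 CUDA cores, ~1.71GHz boost.
print(f"{peak_fp32_tflops(8704, 1.71):.1f} TFLOPS")  # 29.8 TFLOPS
```

That near-30 TFLOPS figure is roughly triple the RTX 2080's rated peak, which is exactly why the doubled FP32 path matters, even if real games land well short of the theoretical ceiling.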
Those specialized RT and tensor cores also received upgrades: the RT cores to a second-generation version, and the tensor cores to the third generation. There are more RT cores crammed into the RTX 3080, and fewer tensor cores. But these third-gen tensor cores are much more capable than their predecessors, enabling higher performance despite the reduced count.
The RT cores also gained the ability to interpolate triangle position. This particularly helps in scenes with motion blur, potentially speeding up ray traversal by 8X. Overall, Nvidia says its new RT cores are 1.7X faster than before, while the tensor cores are 2.7X more efficient, though again, that doesn't scale perfectly to actual gaming performance.
Those gains are boosted even further by Ampere's newfound ability to run tasks on the tensor and RT cores simultaneously. In Turing, ray-traced tasks ran alongside shader functions, but the GPU needed to finish a ray-traced task and hand it to the tensor cores for denoising and DLSS before spitting out the final image. In Ampere, the tensor cores can run DLSS to upscale one frame while the shader and RT cores work on the next, with all three types of cores crunching tasks simultaneously. Paired with the improved shader capabilities, a frame that took up to 13 milliseconds to render on Turing with ray tracing and DLSS enabled can now be spit out in 6.7ms on Ampere. Hot damn.
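Nvidia's frame-time figures are easy to translate into something more familiar. A quick sketch using the 13ms and 6.7ms numbers quoted above (frame rate is simply the reciprocal of frame time):

```python
# Convert per-frame render times (milliseconds) into frames per second.
def fps_from_frame_time_ms(frame_time_ms: float) -> float:
    return 1000.0 / frame_time_ms

turing = fps_from_frame_time_ms(13.0)   # RT + DLSS frame on Turing
ampere = fps_from_frame_time_ms(6.7)    # same workload on Ampere
print(f"{turing:.0f} fps -> {ampere:.0f} fps ({13.0 / 6.7:.2f}x faster)")
```

In other words, shaving a frame from 13ms to 6.7ms is the difference between roughly 77 fps and roughly 149 fps, nearly doubling the frame rate for that workload.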
Bottom line? Improvements in the Ampere GPU architecture deliver a big performance boost in both traditional games and games that leverage Nvidia's RTX capabilities.
But the GeForce RTX 3080's gains don't come solely from the new Ampere architecture. Like the GTX 1080 and RTX 2080 before it, Nvidia's latest flagship introduces cutting-edge memory, too. This time around, the GeForce RTX 3080 (and forthcoming RTX 3090) tap Micron's blazing-fast GDDR6X memory modules. Micron's memory rolls out PAM4 signaling technology, which transmits one of four possible signal values per cycle, up from the traditional two. That lets GDDR6X move data twice as fast as previous incarnations. The 10GB of GDDR6X memory in the RTX 3080 sends that data over a 320-bit bus, for a total memory bandwidth of 760GBps. By comparison, the GeForce RTX 2080 Super's 8GB of non-X GDDR6 used a 256-bit bus for 496GBps of overall bandwidth.
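Those bandwidth figures fall straight out of arithmetic: per-pin data rate times bus width, divided by eight bits per byte. The per-pin rates below (19Gbps for the RTX 3080's GDDR6X, 15.5Gbps for the RTX 2080 Super's GDDR6) are the published module speeds:

```python
# Memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8.
def memory_bandwidth_gbps(pin_rate_gbps: float, bus_width_bits: int) -> float:
    return pin_rate_gbps * bus_width_bits / 8

print(memory_bandwidth_gbps(19.0, 320))   # RTX 3080, GDDR6X: 760.0 GBps
print(memory_bandwidth_gbps(15.5, 256))   # RTX 2080 Super, GDDR6: 496.0 GBps
```

Note that the RTX 3080's advantage comes from both halves of the equation: a faster per-pin rate thanks to PAM4, and a wider 320-bit bus.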
Nvidia also tweaked how the memory works to improve its efficiency, a good thing, as the 320W GeForce RTX 3080 draws noticeably more power than even the previous-gen RTX 2080 Ti flagship. The company added 250mV voltage steps and new max transition avoidance coding to take advantage of the four-signal PAM4 memory without having the chips swing wildly from the lowest to highest values, which could cause energy arcs. Nvidia also gave the graphics core and memory system their own dedicated power rails, letting the GPU fine-tune the power levels of each separately for greater efficiency. Previous chip designs used a shared power rail for the two.
Now that we're talking power, it's a great time to shift gears to the design of the GeForce RTX 3080 Founders Edition.
Next page: RTX 3080 FE design
The GeForce RTX 3080 Founders Edition differs wildly from Nvidia’s past in-house graphics cards. A lot of the change likely stems from the increased power demands of Ampere.
Turing’s flagship, the GeForce RTX 2080 Ti, topped out at 260 watts. The GeForce RTX 3080 is rated for 320W, and the step-up RTX 3090 hits 350W. Moving up a process node, like Ampere does, can offer the same performance at much greater efficiency, offer more performance at the same power, or fall somewhere between those two extremes. Ampere is more efficient than Turing—Nvidia claims the 220W GeForce RTX 3070 offers the same performance as the 260W RTX 2080 Ti—but it’s clear that Nvidia put the pedal to the metal a bit heavily here.
Pay close attention to the power supply requirements, which now hit 750W for both the 3080 and 3090 Founders Edition cards. When introducing its ROG Strix variants of the RTX 30-series GPUs, Asus even warned that “as a result of increased power demands, users may need to re-evaluate the power rating of their PSUs.” A power supply that’s toiled hard for years might not have the voltage regulation required to deal with fast load changes. Monster GPUs need to be well-fed.
Such a powerful GPU paired with such powerful memory requires powerful cooling, too. Everything from the PCB design to the power connector design to the radical cooler itself is different in the GeForce RTX 3080.
It all revolves around the new-look cooler. Prior to the GeForce RTX 20-series, all Nvidia Founders Edition and reference cards used a blower-style cooler that shoots hot air out the back of your system. With the RTX 20-series, Nvidia moved to a dual-axial fan design traditionally found in custom boards. The GeForce RTX 3080 splits the difference.
Nvidia’s new Founders Edition design deploys a unique push-pull “flow-through” design. The fan at the rear of the system stays in its normal place, embedded in the shroud at the bottom of the card. The front fan moves to the top of the graphics card, where you’d normally see a backplate or bare PCB.
The rear fan functions like a traditional blower-style cooler, pushing hot air out of the card’s I/O bracket. The top-mounted front fan, however, pulls in air at the bottom of the GPU and through the fan atop the card, exhausting it into the top of your system. The fans at the top or rear of your case then draw the air out. Nvidia says this method also keeps the intake-to-exhaust airflow in your system more consistent, rather than disrupting everything with a big honking dual-axial card between your case fans.
In effect, the GeForce RTX 3080 Founders Edition puts part of the cooling burden on the rest of your system. We’ll need to conduct more testing to see whether this scheme affects your CPU and memory performance. Be sure your case is equipped with a rear exhaust fan, as not all cases include one.
The rest of the GeForce RTX 3080 Founders Edition bristles with large, chunky, black heatsink fins, and a wraparound metal shroud that looks surprisingly slick in an industrial way. The card also feels significantly heavier in your hand than the RTX 20-series Founders Edition models. It’s dense. But the new design works very well, keeping the beastly Ampere GPU decently cool while also creating far less noise than the last-gen dual-axial design. It may not work well in small form factor PCs, however.
The pewter-hued metal exterior of that shroud gets very hot to the touch after you load up the card for a few hours, though. Like the last-gen Founders Edition designs, the RTX 3080 FE’s heavily integrated design looks like a nightmare to disassemble, too.
In its reviewer’s notes, Nvidia PR pointed out that the card is “unique in both its design and assembly,” and warned that “disassembly without damaging the card requires some extra care” while posting “special ‘engineering-approved’ instructions.” Don’t expect to take this gorgeous graphics card apart very easily.
Of course, you can’t push hot air through a PCB. In another departure for Nvidia, the GeForce RTX 3080 Founders Edition uses a custom PCB rather than a reference board, and it’s really custom. As you can see from the image above, this card uses a teeny-tiny PCB with a long, angular notch taken off the far end, giving it a look somewhat reminiscent of Pac-Man with monstrous teeth. That cut-out leaves room for the funky new fan to suck air up and through the heatsink, though water block support will probably be slim pickings for DIY liquid enthusiasts.
Shrinking the PCB’s size required tight engineering—and compromises. Rather than using traditional 6- or 8-pin power connectors, the GeForce RTX 3080 Founders Edition uses an Nvidia-created 12-pin power connector that’s not much larger than a single 8-pin, then turns it vertically to save a ton of footprint space on the PCB.
The company includes a 2x 8-pin to 12-pin adapter in the box so you can use the new card with traditional power supply hookups. It’s a nifty bit of mechanical engineering. Nvidia says it submitted the design for standardization, so we may see it pop up on rival cards in the future.
The nifty 12-pin idea turned out to be less than ideal in actual product terms, at least if you care about cable management. The included adapter isn’t very long, and the shortened PCB means the 12-pin connector itself is positioned close to the center of the GPU’s long edge. In practice, you’ll have the ugly 8-pin connections looking cluttered front-and-center through your case’s side window. The odd three-quarter angle of the 12-pin connection also makes it very hard to keep the adapter’s dual prongs from covering some of the illuminated “GeForce RTX 3080” logo on the side.
I don’t like it. On the plus side, some power supply manufacturers are offering specialized standalone cables that plug into the 12-pin on one end and dual 8-pin connections at your power supply, removing the need for an adapter at some extra cost (if one rolls out for your particular PSU, that is). Virtually every custom GeForce RTX 3080 graphics card by third-party vendors like EVGA, MSI, and Asus sticks to a traditional reference board with normal 8-pin connectors, so you can look that way as well if the adapter bothers you.
The GeForce RTX 3080 Founders Edition comes equipped with a single HDMI connection, and in a first for graphics cards, it supports HDMI 2.1, which lets you connect an 8K display with a single cable. Previously, you needed four HDMI cables or a DisplayPort-to-HDMI 2.1 adapter to achieve the same thing. The card also packs three DisplayPort 1.4 connections. On the video technologies front, the RTX 30-series GPUs are the first to support AV1 decode (a boon for streaming 8K content) and the ability to record 8K HDR video natively at 30 fps, without the need to invest in a discrete video capture card.
Nvidia’s GeForce RTX 30-series also upgrades to PCIe 4.0, which is currently supported only on AMD Ryzen 3000 systems with an AM4 X570 or B550 motherboard. Intel does not support the blazing-fast interface. While that sounds like a conundrum, Nvidia representatives counseled that PCIe 4.0 moves performance only by a “few percent” in best-case scenarios, and CPU selection matters much more. The company measured its own performance benchmarks in an Intel Core i9 system. Fear not: Your 9900K rig isn’t dead yet.
What’s not included on the 3080 FE is also noteworthy, though irrelevant to most gamers. The RTX 20-series Founders Edition graphics cards included a VirtualLink connection to supply audio, visual, and data to VR headsets over a single USB-C port. While VirtualLink was backed by some of the biggest names in the business, it never appeared on any of the major VR headsets, and the initiative’s website is now dead. You’ll no longer find the port on the RTX 3080 FE.
Also missing: SLI connectors. Nvidia is limiting NVLink support to the GeForce RTX 3090 alone for this generation, slamming the final nail in multi-GPU’s coffin. It’s effectively been dead for years, though. Games just don’t support it anymore, and it’s usually janky in the games that do. Modern GPUs are also incredibly powerful compared to earlier ones that required SLI to hit their best frame rates.
All in all, however, Nvidia’s GeForce RTX 3080 Founders Edition is a gorgeous, cleverly engineered graphics card that looks great while staying cool and quiet. Too bad about that adapter, though.
Next page: Odds and ends, our test system configuration
Nvidia’s also supporting the GeForce RTX 30-series launch with a few things outside the scope of this review.
Nvidia Reflex combines GPU and game engine optimizations to greatly improve your latency in games that support Nvidia’s API. Activating it zeroes out the usual GPU render queue, so the GPU renders frames fed to it by the CPU as quickly as possible, reducing latency-inducing “backpressure” on your system processor. As Nvidia’s chart above shows, simply flipping it on in a supported game instantly lowers latency, while upgrading to more powerful graphics cards and faster monitors drops it even further.
Nvidia Reflex is an ingenious way to sell the insane speeds provided by this new generation of graphics cards. This isn’t just for RTX 30-series cards, either; Nvidia says Reflex will work with most cards from the GeForce 900-series on up. I’m looking forward to testing it more extensively in the future.
Call of Duty Warzone, Valorant, Fortnite, and Destiny 2 will be among the first games to support Nvidia Reflex when it launches this month. It’ll also be coming to Call of Duty: Modern Warfare, Call of Duty: Black Ops Cold War, and Apex Legends.
If you’re willing to spend up for the best esports experience, Nvidia’s monitor partners are also rolling out 360Hz G-Sync displays with built-in hardware to measure your overall system latency. Several low-latency Reflex-supporting mice will hit the streets soon, too, though the system will also work with standard peripherals.
Just as exciting is RTX IO, an innovative new technology that taps into Microsoft’s DirectStorage API to let your NVMe SSD funnel data directly to your GPU, removing the potential bottlenecks of a pokey CPU and system memory. It sounds like the drool-worthy storage tech inside the next-gen Xbox Series X and PlayStation 5 consoles. Read our article on how Microsoft and Nvidia plan to kill game-loading times on PCs if you want to know more. (We also expect AMD’s “Big Navi” Radeon RX 6000-series graphics cards to support Microsoft DirectStorage in some way when they’re revealed on October 28.)
Finally, DLSS and ray tracing denoising isn’t all that tensor cores can do. The new Nvidia Broadcast app leverages that AI hardware to provide streamers with all sorts of nifty on-the-fly effects, ranging from the magical RTX Video noise-cancellation feature to video effects that include adjustable background blur, greenscreen-style background replacement, and automatic head-tracking effects. They looked pretty darn impressive in the demo, but we haven’t had time to test the suite ourselves yet.
Phew! That was a lot of info. Let’s get into how this hot rod rides. Spoiler: It’s fast.
Our dedicated graphics card test system is a couple of years old, but it’s packed with some of the fastest complementary components available to put any potential performance bottlenecks squarely on the GPU. Most of the hardware was provided by the manufacturers, but we purchased the cooler and storage ourselves.
We were faced with a dilemma ahead of this review. As we mentioned in the previous section, the Nvidia GeForce RTX 30-series upgrades to the cutting-edge PCIe 4.0 interface, which could potentially improve performance in limited, ultra-demanding scenarios. But only AMD Ryzen 3000 systems with an AM4 X570 or B550 motherboard support PCIe 4.0. Intel does not support the blazing-fast interface yet, but it holds the single-thread performance crown so crucial to most games. What to do, other than moan about Intel’s failure to support it?
Nvidia representatives say PCIe 4.0 adds only a “few percent” more performance in the limited scenarios where it would make any difference. They claim that CPU choice matters more for overall performance. Nvidia’s own benchmarks were conducted on an Intel Core i9 system with PCIe 3.0.
With that info in hand, we decided to stick with our established test bench for now. Our overclocked 5GHz Core i7-8700K goes toe-to-toe with even the Core i9-10900K in gaming performance, as TechSpot’s testing proved last month, so it shouldn’t be a bottleneck here. While AMD Ryzen has greatly closed the gap with Intel on single-thread performance, it’s still a bit behind the Core chips. We feel we’d be more likely to see performance impacts from slower single-threaded performance than from lacking PCIe 4.0 support. If the new “Zen 3” Ryzen processors debuting in October equal or top Intel’s single-threaded performance, we’ll likely switch to that platform for future tests to get the best of both worlds. But it remains to be seen what AMD’s next-gen chips will offer.
We’re comparing the $700 GeForce RTX 3080 Founders Edition against a bunch of other FE cards: Nvidia’s $800 GeForce RTX 2080, $1,200 RTX 2080 Ti, and the older $700 GTX 1080. (MSRP prices for the 1080 and 2080 started at $100 less, but Nvidia charged a premium for the FE models.)
Because so many owners of the $700 GTX 1080 Ti decided to skip over the lackluster performance increase in the similarly priced RTX 2080, we’re also including the EVGA GTX 1080 Ti SC2 in our roundup. Our GTX 1080 Ti Founders Edition died years ago, the only Nvidia GPU ever to expire in our hands.
Why no Radeon comparisons? Simple: Nvidia is in a class of its own with high-end GPUs. Only the barely-available Radeon VII ever managed to come close to matching the GeForce GTX 1080 Ti’s performance, and the 1080 Ti is now 3.5 years old. The Radeon RX 5700 XT isn’t in the same ballpark either. Big Navi might change that, but Nvidia has defined enthusiast-class GPUs for years now. The time investment to benchmark AMD GPUs ahead of this launch, only for them to appear way at the bottom of the charts, wasn’t worthwhile.
We test a variety of games spanning various engines, genres, and graphics APIs (DirectX 11, DX12, and Vulkan). Each game is tested using its in-game benchmark at the highest possible graphics presets unless otherwise noted. We disable VSync, frame rate caps, real-time ray tracing or DLSS effects, and FreeSync/G-Sync, along with any other vendor-specific technologies like FidelityFX. We’ve also enabled temporal anti-aliasing (TAA) to push these cards to their limits. We run each benchmark at least three times and list the average result for each test. We tested the older cards using Nvidia’s publicly available 452.06 Game Ready driver, and the RTX 3080 FE using a 452.16 driver provided early to reviewers.
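For the record, the averaging step is as simple as it sounds. A minimal sketch of the bookkeeping, with made-up fps numbers standing in for real runs:

```python
# Each benchmark is run at least three times; we report the mean fps.
def average_fps(runs: list[float]) -> float:
    if len(runs) < 3:
        raise ValueError("need at least three runs per benchmark")
    return sum(runs) / len(runs)

print(round(average_fps([98.2, 97.6, 98.8]), 1))  # 98.2
```

Running each test multiple times and averaging smooths out run-to-run variance from background tasks, clock boosting behavior, and thermal state.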
Next page: Gaming benchmarks begin
Yep, Sony exclusives are hitting the PC now. Horizon Zero Dawn hit Steam with some performance issues, but the most egregious ones have been mostly cleared up thanks to hard work from the developers, and the game topped the sales charts for weeks after its release. It also seems to respond somewhat to PCIe 4.0 scaling, which will make this an interesting inclusion when we shift to a PCIe 4.0-based system in the future.
Horizon Zero Dawn runs on Guerrilla Games’ Decima engine, the same engine that powers Death Stranding. Ambient Occlusion can still offer iffy results if set to Ultra, so we test with that setting at Medium. Every other visual option is maxed out.
Just look at that. The GeForce RTX 3080 flies. At 4K it’s a ludicrous 81 percent faster than the RTX 2080, its direct predecessor. It’s also 34 percent faster than the former RTX 2080 Ti flagship, and a monstrous 151 percent faster than the GTX 1080. If you’ve been sitting tight since spending $700 on the GTX 10-series, it’s definitely safe to upgrade, as these benchmarks will continue to show.
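As an aside, every "X percent faster" figure in this review is simple relative speedup: the new card's average fps over the old card's. A hypothetical helper shows the math (the fps values below are illustrative, not our measured results):

```python
# "X percent faster" = (new_fps / old_fps - 1) * 100, rounded.
def percent_faster(new_fps: float, old_fps: float) -> int:
    return round((new_fps / old_fps - 1) * 100)

# 90.5 fps vs. 50 fps works out to "81 percent faster": the new card
# renders 1.81 frames in the time the old card renders one.
print(percent_faster(90.5, 50.0))  # 81
```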
The GeForce RTX 3080 dominates even at lower resolutions. The lead narrows as you move down to 1440p and 1080p, resolutions that shift more of the burden onto the CPU rather than the GPU, unlike 4K. That’ll be a trend in most, but not all, games as we continue through these.
Gears Tactics puts its own brutal, fast-paced spin on the XCOM-like genre. This Unreal Engine 4-powered game was built from the ground up for DirectX 12. We love being able to work a tactics-style game into our benchmarking suite. Better yet, the game comes with a plethora of graphics options for PC snobs. More games should devote such loving care to explaining what flipping all these visual knobs means.
You can’t use the presets to benchmark Gears Tactics, as it intelligently scales to work best on your installed hardware. That means “Ultra” on one graphics card can load different settings than “Ultra” on a weaker card. We manually set all options to their highest possible settings, and disable the minimum/maximum frame rate options.
Fun fact: The GeForce RTX 3080 FE is the only graphics card that doesn’t generate a “Your GPU can’t handle this” warning when enabling Glossy Reflections. Only the 3080 and the RTX 2080 Ti lack that warning for Planar Reflections. Told you these cards are monsters.
Those especially strenuous options hammer these GPUs, causing the RTX 3080 to achieve a narrower but still substantial 55 percent margin of victory over its predecessor at 4K. The RTX 3080’s extra oomph carries over to 1440p as well, where it’s 50 percent faster than the RTX 2080, 30 percent faster than the 2080 Ti, and 89 percent faster than the GTX 1080.
More interesting is the gap between the GTX 1080 Ti and RTX 2080. The two cards offer virtually identical performance in many games, but with Gears Tactics being designed around DX12, the integer pipelines introduced in the RTX 20-series get to flex their muscles. Then all those shader improvements in Ampere come to bear with the RTX 3080.
Again, the CPU comes into play more at 1080p.
One of the best games of 2019, Metro Exodus is one of the best-looking games around, too. The latest version of the 4A Engine provides incredibly luscious, ultra-detailed visuals, with one of the most stunning real-time ray tracing implementations released yet. We test in DirectX 12 mode with ray tracing, Hairworks, and DLSS disabled for our basic benchmarks.
Another whupping at 4K. The GeForce RTX 3080 FE is 63 percent faster than the RTX 2080, 23 percent faster than the RTX 2080 Ti, and 136 percent faster than the GTX 1080.
The gap narrows a bit at 1440p, but Nvidia’s new card remains over 50 percent faster than the RTX 2080 and almost 20 percent faster than the 2080 Ti.
Borderlands is back! Gearbox’s game defaults to DX12, so we do as well. It gives us a glimpse at the ultra-popular Unreal Engine 4’s performance in a traditional shooter.
This is yet another game where the RTX 3080 flies beyond its predecessor by a ridonkulous 80 percent at 4K. It’s also 37 percent faster than the 2080 Ti, and 154 percent faster than the GTX 1080. The performance lead shrinks at 1440p, but not by much. And even at 1080p, the RTX 3080 is 58 percent speedier than the 2080.
To frame it another way: The 3080 is faster at 4K than the 2080 is at 1440p!
Strange Brigade is a cooperative third-person shooter where a team of adventurers blasts through hordes of mythological enemies. It’s a technological showcase, built around the next-gen Vulkan and DirectX 12 technologies and infused with features like HDR support and the ability to toggle asynchronous compute on and off. It uses Rebellion’s custom Azure engine. We test using the Vulkan renderer, which is faster than DX12.
Here, the RTX 3080 remains roughly 80 percent faster than the RTX 2080, and over 100 percent faster than EVGA’s overclocked GTX 1080 Ti SC2. Like in Borderlands, the domination continues even as you move down in resolution. While the RTX 2080 barely tops 200 frames per second at 1080p, the RTX 3080 hits a stratospheric 330 fps.
Next page: Gaming benchmarks continue
The latest game in the popular Total War saga, Troy was given away free for its first 24 hours on the Epic Games Store, moving over 7.5 million copies before it went on proper sale. Total War: Troy is built using a modified version of the Total War: Warhammer 2 engine, and this DX11 title looks stunning for a turn-based strategy game.
As in Metro Exodus (which supports DX12, but was built for DX11), the RTX 3080 is 63 percent faster than the RTX 2080 at 4K here, and 25 percent faster than the RTX 2080 Ti. The biggest leads for Nvidia’s Ampere GPU look like they come in games running more modern graphics APIs. It’s over twice as fast as the GTX 1080 non-Ti at 4K, and nearly 50 percent faster than the GeForce RTX 2080 at 1440p.
The latest in a long line of successful racing games, F1 2020 is a gem to test, supplying a wide array of both graphical and benchmarking options, making it a much more reliable (and fun) option than the Forza series. It’s built on the latest version of Codemasters’ buttery-smooth Ego game engine, complete with support for DX12 and Nvidia’s DLSS technology. We test two laps on the Australia course, with clear skies on and DLSS off.
You know how this goes by now. At 4K, the new card is a nice 69 percent faster than the RTX 2080, 27 percent faster than the 2080 Ti, and a whopping 135 percent faster than the GTX 1080. The beat-down continues at lower resolutions, where the RTX 3080 clocks in at 57 percent faster than its predecessor, 25 percent faster than the RTX 2080 Ti, and over twice as fast as the GTX 1080.
Shadow of the Tomb Raider concludes the reboot trilogy, and it’s utterly gorgeous. Square Enix optimized this game for DX12, and recommends DX11 only if you’re using older hardware or Windows 7. We test with DX12. Shadow of the Tomb Raider uses an enhanced version of the Foundation engine that also powered Rise of the Tomb Raider and includes optional real-time ray tracing and DLSS features.
The GeForce RTX 3080 is so fast, this game struggles to keep up at 1440p, where Shadow of the Tomb Raider was CPU-bound almost half the time—hence the bunched-up results among the top-end cards at 1440p and 1080p. Nvidia’s new GPU shines at 4K, however, where it’s 70 percent faster than the RTX 2080, 30 percent faster than the 2080 Ti, and 159 percent faster than the GTX 1080.
Grand Theft Auto V isn’t a visual barn-burner like the (somewhat wonky) Red Dead Redemption 2, but this DX11 title still tops the Steam charts day in and day out, so we deem it more worthy of testing. RDR2 will melt your graphics card, sure, but GTA V remains so popular years after launch that upgraded versions of it will be available on the next-generation consoles. That’s staying power.
We test Grand Theft Auto V with all options turned to Very High, all Advanced Graphics options except extended shadows enabled, and FXAA. GTA V runs on the RAGE engine and has received substantial updates since its initial launch.
Here, the game is entirely CPU-bound with all the graphics cards (except the GTX 1080) at both 1080p and 1440p. Moving to 4K reveals differences, though Nvidia’s latest monster is straining the game engine’s capabilities even at this pixel-packed resolution. The GeForce RTX 3080 Founders Edition is 47 percent faster than the RTX 2080 (and nearing the upper limits of this game’s frame rate), 18 percent faster than the RTX 2080 Ti, and 83 percent faster than the GTX 1080.
Like GTA V, Ubisoft’s Rainbow Six Siege still dominates the Steam charts years after its launch, and it’ll be getting a visual upgrade for the next-gen consoles. The developers have poured a ton of work into the game’s AnvilNext engine over the years, eventually rolling out a Vulkan version of the game that we use to test. By default, the game lowers the render scaling to increase frame rates, but we set it to 100 percent to benchmark native rendering performance on graphics cards. Even still, frame rates soar.
There are no surprises in our final game. The RTX 3080 Founders Edition is 70 percent faster than the RTX 2080, 30 percent faster than the RTX 2080 Ti, and a flat-out stupid 165 percent faster than the GTX 1080 at 4K. The 3080’s lead over its predecessor drops to a “mere” 60 percent at 1440p.
Next page: Ray tracing and DLSS performance
Traditional gaming performance is only part of Nvidia’s value proposition with the GeForce RTX 3080. The company also packed this GPU with vastly improved RT and tensor core hardware to accelerate ray tracing and DLSS, respectively, now that those technologies are starting to gain more traction. Big-name games like Cyberpunk 2077, Watch Dogs Legion, and Vampire: The Masquerade—Bloodlines 2 will support ray tracing and DLSS as they roll out over the next few months. Fortnite adds ray tracing, DLSS, and Nvidia Reflex on Thursday, rolling out the red carpet for the RTX 3080’s launch.
Ahead of the blitz, we benchmarked this card using compatible games in our suite to see if Nvidia’s lofty promises for the RTX 30-series proved true. We didn’t include the GTX 1080 and 1080 Ti in this section because, well, they don’t include dedicated ray tracing hardware. We hope to add more titles to this section in the future, so stay tuned.
First up: Shadow of the Tomb Raider. This was one of the first games to include ray tracing and the first incarnation of DLSS. The game’s ray traced shadows aren’t as breathtaking as Control’s multiple effects, but they could definitely impact performance on the original RTX 20-series GPUs, especially at the Ultra setting we test. It’s worth noting that the first-gen DLSS implementation doesn’t support 1080p resolution—it bottoms out at 1920x1200. DLSS 2.0 is much more flexible.
At 4K, the performance impact of activating ray tracing and DLSS is a scant 14 percent on the new GeForce RTX 3080 FE. The card manages to stay well above 60 fps even with all those bells and whistles active. The RTX 2080, by comparison, can’t hit a consistent 40 fps with ray tracing and DLSS enabled, due to its hefty 26-percent performance penalty. Activating RTX impacts the 2080 Ti by just 10 percent, meanwhile, which makes us wonder if memory capacity makes a difference with these features active. The RTX 2080 Ti has 1GB more capacity than the RTX 3080, albeit of the slower GDDR6 variety.
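The "performance penalty" percentages in this section are computed the same way in reverse: how much frame rate you give up by flipping ray tracing and DLSS on. A quick sketch, again with illustrative placeholder fps values rather than our exact benchmark numbers:

```python
def rtx_penalty(fps_rtx_off: float, fps_rtx_on: float) -> float:
    """Frame-rate cost of enabling ray tracing + DLSS, as a percentage."""
    return (1.0 - fps_rtx_on / fps_rtx_off) * 100.0

# Placeholder 4K averages for illustration only (not our measured data):
print(f"{rtx_penalty(80.0, 68.8):.0f} percent penalty")
# prints "14 percent penalty"
```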
Metro Exodus was the second game ever to support ray tracing, but it remains one of the most impressive. Its ray-traced Global Illumination completely changes the vibe of the game, making it feel much more foreboding, ominous, and post-apocalyptic. It also uses the first incarnation of DLSS, which looks a lot better now than it did when the game first launched.
Activating ray tracing and DLSS makes the RTX 3080 FE run about 10 percent slower at 4K. That’s not bad at all to enable this sort of eye candy, and the average frame rate stays above the hallowed 60-fps mark.
Once again, its RTX 2080 predecessor fails to crack a 40-fps average, but its frame rate only suffers by about 13 percent with RTX on. It’s just that much slower in general performance.
The GeForce RTX 2080 Ti splits the difference with an 11-percent performance impact, but it can’t crack 60 fps even with ray tracing off at 4K. The frame rate impacts become much more pronounced at lower resolutions, where Nvidia says the CPU starts becoming a bigger factor in ray tracing performance. Still, Metro manages to hit a very nice 88-fps average at 1440p.
So much for not being able to play ray traced games at higher resolutions.
Finally, while the original version of DLSS could sometimes make games look a bit blurrier than native resolution, the fantastic DLSS 2.0 fixed that issue. It’s so good that some games are starting to include it even without ray tracing, including Death Stranding and the game we’re about to test, F1 2020. That’s because DLSS 2.0 can provide a boost in frame rates with little to no visual changes, depending on whether you use Quality or Performance mode. DLSS 2.0 is much easier to implement than the first version, and it’s such a game-changer that we hope to see more titles support it soon. Anywho, onto the benchmarks:
Activating DLSS 2.0 in F1 2020 gives the RTX 3080 an 18-percent frame rate boost at 4K, pushing its average close to 150 fps, while the RTX 2080 rides a 22-percent boost to almost 100 fps. The GeForce RTX 2080 Ti, meanwhile, sees a speedy 25-percent increase with DLSS active. The RTX 3080 sees much smaller boosts at lower resolutions, but that’s clearly because it’s nearing the frame rate limit of either the game engine or the CPU.
Nvidia touted vastly improved RT and tensor cores in the RTX 30-series. But as you can see, the older RTX 20-series cards manage to float in the same range of performance impact, aside from the RTX 2080’s major blow in Shadow of the Tomb Raider. Nvidia representatives told us that the amped-up RT and tensor cores can stretch their legs more in games with more strenuous ray tracing effects.
The best results should be seen in fully path-traced games like Minecraft RTX and Quake II RTX, Nvidia said. As triple-A games start integrating multiple ray tracing effects, the performance should grow compared to these early RTX titles with support for singular effects. Nvidia cited Control as a test-worthy example, but we weren’t able to benchmark that game in time for this review. Cyberpunk 2077 and Fortnite’s RTX update will also be absolutely loaded with various ray tracing technologies. We hope to benchmark RTX performance more soon.
Regardless, when it’s bolstered by DLSS, the GeForce RTX 3080 Founders Edition proves powerful enough to play with superb frame rates even with the penalty impact of ray tracing. That’s a major change from the RTX 20-series, where even the then-monstrous RTX 2080 Ti often needed to drop to 1440p or even 1080p to maintain 60 fps in ray traced games. If you upgrade to the RTX 3080, you needn’t be scared to turn on ray tracing anymore.
Next page: Power, thermals, and noise
We test power draw by looping the F1 2020 benchmark at 4K for about 20 minutes after we’ve benchmarked everything else. We note the highest reading on our Watts Up Pro meter, which measures the power consumption of our entire test system. The initial part of the race, where all competing cars are onscreen simultaneously, tends to be the most demanding portion.
This isn’t a worst-case test. We removed the Core i7-8700K’s overclock and specifically chose a GPU-bound game running at a GPU-bound resolution to gauge performance when the graphics card is sweating hard. If you’re playing a game that also hammers the CPU, you could see higher overall system power draws. Consider yourself warned.
Yep, feeding the Ampere beast takes quite a bit of power. Remember when we said Nvidia put the pedal to the metal with these GPUs? It shows here, where the RTX 3080 draws 50W more than the former 2080 Ti flagship, over 100W more than its direct 2080 predecessor, and an eye-opening 200W more than the GTX 1080. Spending uncomfortable levels of power to achieve uncomfortable levels of (gaming) power is well worth it by our estimation, but if you’re upgrading from a prior GeForce card, you may need to invest in a new power supply—especially if you’re jumping up from the GTX 10-series.
We test thermals by leaving GPU-Z open during the F1 2020 power draw test, noting the highest maximum temperature at the end. We observed the RTX 3080’s temperatures in EVGA’s Precision X1 tool instead, however.
Nvidia’s radical new “flow-through” cooler for the GeForce RTX 3080 Founders Edition works just fine, and it’s significantly quieter than the dual-axial fans on the RTX 20-series Founders Edition cards—especially the RTX 2080 Ti. In practice, it sounds a lot better than any reference or Founders Edition cooler we’ve seen before. Raw thermal performance is another story: keeping the GPU at 79 degrees Celsius isn’t noticeably better or worse than previous FE models, and this card actually runs slightly hotter than the RTX 2080 FE. The outer metal shroud also gets screaming hot to the touch.
But the GeForce RTX 3080 is a lot more power-hungry and higher-performance than previous-gen GPUs, and I’m not surprised it took an exotic cooling design to keep the RTX 3080 Founders Edition tame while churning out frame rates like these. Nvidia couldn’t just slap a bunch of heavy metal and endless axial fans on these without severely stepping on the toes of partners like EVGA and Asus, after all.
I mention all this because Nvidia made lofty claims about the flow-through cooler’s efficiency. The company’s GeForce RTX 30-series announcement post said it offers “nearly 2x the cooling performance of previous generation solutions,” while CEO Jensen Huang claimed that this “keeps the GPU 20 degrees cooler than the Turing design” during the card’s livestream reveal. As you can see below, those lofty numbers aren’t represented in the final product, the RTX 3080 Founders Edition that you can actually buy.
The details, as always, are more nuanced. In a presentation to reporters, GeForce product manager Justin Walker shared the above slide, which revealed that the claims were made by comparing the RTX 3090 versus the monster Titan RTX at a fixed 350W. This really is a much more efficient cooler than previous-generation solutions, and it was essential for bringing Ampere to heel. But this serves as a reminder that you shouldn’t take marketing numbers at face value.
I have no complaints about Nvidia’s flow-through design, though I haven’t tested whether the hot air pulled into the top of your system can affect CPU or memory performance. I also love how quiet it is compared to previous models. I’m very curious to see what custom board makers can do with their more traditional cooling designs.
Next page: Should you buy the GeForce RTX 3080 Founders Edition?
The GeForce RTX 3080 delivers a staggering performance upgrade over its predecessor. It lets you play at 1440p and 4K resolution without compromises, even with ray tracing and DLSS enabled. It takes a lot of power, though. Nvidia's Founders Edition model looks sleek and has a radical cooler, but it offers limited repairability and puts its 12-pin power adapter in an ugly place.
If you’re interested in spending big to get exceptional 4K or ultra-fast 1440p performance on a 144Hz+ monitor, then yes, you should buy the RTX 3080 Founders Edition. Full stop.
Nvidia CEO Jensen Huang said that Ampere is GeForce’s greatest generational leap ever, and he wasn’t kidding. Remember being blown away when the GTX 1080 was 60 to 70 percent faster than the GTX 980, even with its slightly higher price? The GeForce RTX 3080 spits out frames up to 80 percent faster in several games, and around 60 percent faster in the others. It’s roughly 30 percent faster than the GeForce RTX 2080 Ti, the $1,200 previous-gen flagship, and a ridonkulous 100 to 160 percent faster than the older GeForce GTX 1080. All at the exact same $700 price tag as the RTX 2080.
The promises were true. This thing is an absolute monster. Sometimes it’s faster at 4K than the RTX 2080 is at 1440p. Ludicrous.
Only one game keeps the GeForce RTX 3080 from clearing a 60-frames-per-second average at 4K resolution with all possible visual effects turned on: the ridiculously strenuous Total War: Troy, which averages 56 fps (and feels just fine at even lower speeds as a strategy game). Most games go significantly faster than that. The story repeats at 1440p with everything maxed out, where every game except Troy clears 100 fps, with Troy falling just shy at 98 fps and most games again going significantly faster. If you’re fine bumping graphics down to High, games fly along even faster in our off-the-cuff tests. No graphics card has come close to this level of performance before.
The “worst” (but still massive) results come in CPU-bound or older DX11 titles. The Ampere architecture screams when unleashed on properly optimized games that were built for DirectX 12 or Vulkan. More and more of those are being published these days, and all ray-traced games require DX12. The impact of ray tracing and DLSS doesn’t appear to be lessened despite the next-gen RT and tensor cores, but the RTX 3080 is so fast, it doesn’t matter. You can play ray traced games at 1440p, and even 4K now.
Just make sure you can put all this power to good use. If you don’t have a 4K monitor or a 144Hz 1440p monitor, you’re probably wasting your money here. Nvidia says the $500 GeForce RTX 3070 will be just as fast as the last-gen 2080 Ti for less than half the price when it launches in October. That should be a fine ultra-fast 1440p and 4K option too, though you’ll likely need to dial down some graphics settings in strenuous games. You don’t with the RTX 3080.
Note, too, that most games hit a CPU bottleneck playing at 1080p with this card. Unless you’re an esports pro who plans on turning down graphics settings (or playing around with Nvidia Reflex) to hit the highest possible frame rates on a 360Hz monitor, your money is better spent elsewhere if you have a 1080p screen and don’t plan on upgrading it soon.
The design, while gorgeous, is a mixed bag when you dive into the details. The radical new cooler isn’t drastically better than what came before, temperature-wise, but it works well and is significantly quieter than before. The unique flow-through cooling may not work well in small form factor PCs, though. The new 12-pin power connector is fine, but its too-short adapter for 2x 8-pin connectors looks ugly and blocks the card’s illuminated white logo, though power supply makers will sell bespoke cables to fix the issue. Nvidia has also issued a clear warning to would-be tinkerers: Don’t expect to be able to take this card apart easily.
Virtually every gripe I can raise is nitpicking. The monstrous performance increase wouldn’t be quite as impressive if the ho-hum RTX 20-series hadn’t been so disappointing. You may need to upgrade your power supply to feed this beast, but it’s worthwhile.
I also wonder whether the 10GB of onboard VRAM will be enough capacity for 4K gaming in a few years. The cutting-edge GDDR6X chips inside the RTX 3080 move data twice as fast as standard GDDR6, however, and showed no sign of slowing down in our tests.
None of these are major issues for most people.
You could hold off on a day-one purchase to see what custom versions of the GeForce RTX 3080 offer, but you should also feel free to spam F5 on your keyboard at Amazon or Newegg when the RTX 3080 hits online shelves September 17, especially considering rumors that stock of these cards could be in short supply during this crazy year.
Maybe you also want to see what AMD has up its sleeves with the “Big Navi” Radeon RX 6000 reveal on October 28. Sure, AMD’s RDNA 2 architecture looks promising. But don’t forget that Radeon has yet to topple the GTX 1080 Ti nearly 3.5 long years later. I hope Big Navi sparks competition in the enthusiast end of the market yet again, but it looks like a tall order for AMD.
Nvidia brought it this generation. Ampere is indeed Nvidia’s greatest generational leap in recent memory. The GeForce RTX 3080 Founders Edition isn’t just the fastest graphics card ever released (until next week, when the even more monstrous $1,500 RTX 3090 lands), it also makes its predecessor feel instantly obsolete—and instantly makes ray tracing feel so much more viable.
Like we said at the start: Nvidia’s GeForce RTX 3080 graphics card symbolizes why we tell people to wait for the second generation when bleeding-edge technology appears.