– Nvidia's top-of-the-line RTX 3090. (upbeat music) No, no, there's no time for that! We've gotta talk about this thing now. If you haven't watched
our RTX 3080 review, here's the background you need. RTX 3000, also known as Ampere, is what Nvidia is calling
the biggest generational leap in performance in
the company's history. Now, if you go back far enough, I'ma press X to doubt on that, but he's also not joking.
These cards, unlike the
Turing-based RTX 2000 series, are smoking fast right out of the gate, with our RTX 3080 reaching
speeds in the neighborhood of 75 to 100% faster than its predecessor, depending on the task. That, combined with
Nvidia's super impressive DLSS 2.0 upscaling technology
means that 4K gaming went from someday to today, right now. But wait, there's even more. Like this message from
our sponsor, Team Group. Team Group's Xtreem ARGB
DDR4 gaming memory features clock speeds up to 4,000 megahertz and a stunning new design with RGB. Check out their launch event showing off their three brands, and you could land a $3,000
PC at the link below. But the 3080 is last week's news. In my hands right now is what Jensen, CEO of Nvidia, lovingly
referred to as the BFGPU. And, spoiler alert, he's right. Here's a Titan RTX for scale. Heck, here's an ROG Strix OC for scale. This thing is easily the
largest, not to mention heaviest, conventional graphics card
that I have ever seen. Which means that anyone who buys one needs to also make some
additional investments.
First up, you're gonna need a
case with a solid GPU mount, and enough clearance. Being both taller and thicker
than previous flagships, it simply won't fit in most
small form factor cases. That is, unless you hot-rod them. Second, you'll need a phat power supply. Rated at 350 watts of power consumption and with Nvidia's new 12-pin connector, the 3090 could easily push your
previous unit to its limits. Last but not least, you're gonna need the fastest gaming CPU in the world because quite frankly, with anything else, you are going to be leaving
performance on the table, which is something you don't wanna do when you're paying $1,500 US for a graphics card. Yes, my friends, compared to the RTX 3080, the world's first 8K gaming GPU gets you more than double, not to mention faster, video memory; 20% more CUDA, Tensor, and RT cores; and even NVLink support via a new connector that was presumably shrunk to minimize wasted space on the physically cut-down PCB that makes Nvidia's new flow-through cooler possible.
But wait, double the video memory? Why? Well, aside from giving it much
better compute capabilities and, more importantly,
the ability to render much more complex 3D scenes
in software like Blender. As it turns out, gaming at
8K also genuinely demands this much video memory. I mean, think about it: 8K is 33 megapixels at 32 bits per pixel. That means that to display just a single frame, 128 megabytes of video memory is needed, which doesn't sound so bad, but add in textures, models, screen-space effects, and of course, RTX real-time ray tracing. I mean, come on. And you've got yourself many times that. Ideally, 60 times per second or more.
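If you want to sanity-check that napkin math yourself, here's a quick Python sketch, assuming the standard 7680 by 4320 8K UHD frame and a 32-bit pixel format:

```python
# Napkin math for the 8K frame buffer figure, assuming a standard
# 7680 x 4320 8K UHD frame with a 32-bit (4-byte) pixel format.
width, height = 7680, 4320
bytes_per_pixel = 4

pixels = width * height                 # 33,177,600 -- the "33 megapixels"
frame_bytes = pixels * bytes_per_pixel  # 132,710,400 bytes for one frame
print(f"{pixels / 1e6:.1f} megapixels")            # 33.2
print(f"{frame_bytes / 2**20:.0f} MiB per frame")  # ~127, i.e. the ~128 MB figure
```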
Wolfenstein: Youngblood clocked in at over 11 gigs of memory usage at 8K. Sounds fun, but at the end of the day, it all boils down to
performance, doesn't it? So by request, we built up
a Core i9-10900K test bench to minimize our CPU bottlenecks, and dug out our RTX 3080
along with a 2080 Ti and Titan RTX for comparison. Starting off with Minecraft RTX, we're seeing about a 22%
average frame rate improvement over the RTX 3080, and a whopping 51 to 60% over the Titan RTX, both
with DLSS on and off.
Not bad. Wolfenstein: Youngblood, meanwhile, gives us a much more modest performance improvement over the 3080 at around 15%, while still pulling off a respectable 50% improvement over the Titan RTX, a card that costs a thousand dollars more. So there you have it, guys. Buy yourself an RTX
3090 and a water bottle with all that money you saved, am I right? lttstore.com. CS:GO and Microsoft Flight Simulator tell a very similar story in 4K.
But interestingly, the
gap narrows significantly for Flight Simulator at 1440p because of CPU bottlenecks. That's right, despite the OP CPU and ultra graphics settings. Finally, the ParaLLEl N64 emulator, which does most of the
graphics and sound emulation on the GPU rather than the CPU, and recently added upscaling support, shows us that the extra
oomph brings us up to 99th-percentile frame times near 60 FPS. That eight-times upscale, by the way, works out to more pixels than 5K, emulated.
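For the curious, here's one way that pixel-count claim can shake out. The 640 by 480 base resolution below is our assumption, since the video doesn't say which internal resolution the tested games ran at:

```python
# One way the "more pixels than 5K" claim can work out. ParaLLEl upscales
# the emulated framebuffer by an integer factor; the 640 x 480 base here is
# our assumption (N64 games render at 320 x 240 or 640 x 480; the video
# doesn't say which the tested games used).
base_w, base_h = 640, 480
scale = 8

upscaled_pixels = (base_w * scale) * (base_h * scale)  # 5120 x 3840 = ~19.7 MP
five_k_pixels = 5120 * 2880                            # 5K = ~14.7 MP
print(upscaled_pixels > five_k_pixels)                 # True: more pixels than 5K
```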
But then there's the elephant in the room: 8K gaming. As Tom's Hardware reported, Nvidia's demos only showed 8K triple-A gaming with DLSS 2.0 in what they're calling Ultra Performance mode. Now, before this, DLSS 2.0 offered Performance and Quality scaling options, with Performance rendering at 1080p like DLSS 1.0, and Quality rendering at 1440p before upscaling, usually to 4K.
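To put those scaling jumps in perspective, here's a little sketch of the pixel math. Exact internal resolutions can vary per game and output target, so treat these as the ballpark figures described here:

```python
# Pixel counts for each DLSS render/output pairing mentioned in the video.
def pixels(w, h):
    return w * h

p1080 = pixels(1920, 1080)   # Performance mode's render resolution
p1440 = pixels(2560, 1440)   # Quality and Ultra Performance render resolution
p4k   = pixels(3840, 2160)
p8k   = pixels(7680, 4320)

print(p4k / p1080)   # Performance, 1080p -> 4K: a 4x pixel jump
print(p4k / p1440)   # Quality, 1440p -> 4K: a 2.25x jump
print(p8k / p1440)   # Ultra Performance, 1440p -> 8K: a 9x jump --
                     # DLSS has to invent eight of every nine output pixels
```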
Now, Ultra Performance still renders at 1440p, but instead of performing a reasonable scaling jump up to 4K, it's trying to take that data and go all the way up to 8K. I mean, that's a lot of guessing about what those smeary pixels are actually supposed to look like. So, does it look any good? Well, on a 4K display, there's
little point in using it, since the only real effect is similar to anti-aliasing, which can be had with a smaller performance penalty in other ways. But if you are one of the privileged few with an 8K display today, here are some shots of
what you can expect to see from 4K native versus 8K
DLSS ultra performance versus 8K native. You can see clearly that
8K DLSS ultra performance delivers significantly
more detail than 4K does. Even if it isn't quite up
to 8K native standards. I mean, check out the
text and distant details in this shot for example. 8K DLSS ends up much
closer to 8K native here. Even if it's still not perfect. And the same deal with
these grates on the floor in this shot.
They're almost completely wiped out in 4K, but DLSS restores them. Now, performance is roughly 25% lower than at 4K, but that still puts us above the magic 60 FPS mark on average,
which is pretty sweet if you shelled out for
one of those new 8K TVs with HDMI 2.1. Even better is that not every
game, even triple-A games, necessarily needs to use DLSS. We ran Doom Eternal at native 8K with very high detail settings
for what was, honestly, the most mind-bending experience I've had with this GPU so far.
And we barely saw dips below 60 FPS. Also impressive during that session with the 88-inch TV was HDR gameplay recording with ShadowPlay. Now, we had wanted to
include a big section about performance and image quality, but the story is sort of
already in the footage that you're looking at. It just worked: the performance hit was reasonable, and it looks as good as you can expect given the 100-megabit-per-second bit rate at such a massive resolution.
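For a rough sense of how lean that bit rate actually is at this resolution, here's the per-pixel budget, assuming 8K at 60 FPS (our assumption for the capture settings):

```python
# How lean 100 Mbps is at 8K: the bit budget per pixel per frame,
# assuming 8K capture at 60 FPS.
bitrate = 100e6            # bits per second
fps = 60
pixels_8k = 7680 * 4320
pixels_4k = 3840 * 2160

print(f"{bitrate / (pixels_8k * fps):.3f} bits per pixel at 8K60")  # ~0.050
print(f"{bitrate / (pixels_4k * fps):.3f} bits per pixel at 4K60")  # ~0.201
```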
Our entire 8K gaming
experience video yesterday was shot with it running in the background without any hiccups whatsoever. Great job. Productivity, though, is where the RTX 3090 both gets to stretch its legs and also gets a little bit dicey. Going up against the Titan in Blender, we saw anywhere from a 50% improvement with the CPU-optimized Gooseberry render all the way up past 100% more performance in the Pavilion Barcelona OptiX render. Yes, my friends, Gooseberry, a benchmark designed for CPUs, can run on all of these cards, thanks to the ridiculous amounts of VRAM. Other rendering benchmarks
like V-Ray, Redshift, and OctaneBench all provide similar numbers across the board, with the
RTX 3090 hovering around the 15 to 20% mark over the 3080 and 50 to 80% over the Titan.
Then there's SPECviewperf, where we can see that the RTX 3090 is... sorry, wait, what? No, no, go back to that. CATIA ran over 40% faster on the Titan? How does that make sense? Okay, so as it turns out, this card, while it has more raw chutzpah than the Titan RTX, and was introduced as if it were Titan-ish... it isn't. When we spoke to Nvidia,
they explained that there are driver
optimizations for the Titan that have not been
enabled for the RTX 3090, and presumably won't be. And they confirmed that these results are 100% expected behavior, which kinda sucks, because if this thing is
mostly just a giant, overgrown 3080 Ti at $1,500, what's
the Ampere Titan gonna cost? Three grand? Anyway, back to SPECviewperf,
we did get several wins for both the RTX 3090 and the Titan RTX depending on the workload, with Siemens NX not even being in the same area code, let alone ballpark.
That's kind of rough for professionals who are looking to juice
up their workstation for CAD work by day and gaming at 8K by night. Which do they choose? And there's more bad news. Remember that bit about
needing a new power supply? Well, our test bench sure did. Even a high-quality 850-watt unit couldn't keep up, with our RTX 3090 routinely breaking 375 watts with Nvidia's PCAT power meter connected throughout our testing, and a peak power draw of over 450 watts.
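If you're curious where your own card lands but don't have a PCAT, the driver exposes a rough software-side reading through nvidia-smi. Here's a minimal sketch; note this is the driver's own estimate, not the external whole-board measurement a PCAT takes, so expect it to read lower:

```python
# Sample the driver-reported GPU board power via nvidia-smi and report
# the peak and average over a one-minute window.
import subprocess
import time

def gpu_power_watts():
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=power.draw",
        "--format=csv,noheader,nounits",
    ])
    # First line corresponds to GPU 0; value is in watts.
    return float(out.decode().strip().splitlines()[0])

samples = []
for _ in range(60):          # one sample per second for a minute
    samples.append(gpu_power_watts())
    time.sleep(1)

print(f"peak: {max(samples):.1f} W, average: {sum(samples)/len(samples):.1f} W")
```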
The Titan RTX, meanwhile,
barely broke 300 watts the whole time. Thankfully, the cooler,
as with the RTX 3080, was well up to the task, and it didn't even break a
sweat through SPECviewperf, with thermals never
pushing past 70 degrees. Also, core clocks were ridiculously stable at around two gigahertz under load. So I guess all that weight
in metal is doing its job and then some. Nvidia was clearly really concerned with thermals this time around, and they just did a great job. There's no other way of putting it. And hopefully that
means some overclocking headroom as well. Early reports on the RTX
3080 seemed to suggest that the GDDR6X memory is
pretty easily bumped up, but we've also seen reports that Nvidia went for lower-speed RAM due to thermal limitations. So as always, stress-test your overclocks with a prolonged load before considering them tournament-ready. And speaking of overclocking, we have a video coming where
we look at the differences between aftermarket GPUs, so make sure you get subscribed
so you don't miss it.
For now, these results leave me, I mean, there's no real word
for it other than conflicted about the RTX 3090. On the one hand, it's a
beast of a graphics card. But on the other hand,
it's clear that Nvidia is holding it back so
they don't eat into their Titan or Quadro lineups. Kind of like how Intel
runs things with Xeon. It runs really cool with a fan that's not nearly as loud as the RTX 2000 series, but it guzzles power
and I can't help feeling like a doofus for buying
it for workstation or CAD use, knowing that there could be a Titan coming down the line.
That puts the 3090 squarely
in prosumer/enthusiast territory, so, like, super-high-end gamers, content creators, and
especially 3D modelers are gonna love it. Which isn't a bad thing;
it's just a niche thing. And I can't help but feel
that Nvidia could've done more to make the 3090 shine. Do you think that's fair? Anthony? Do you think that's fair? – [Anthony] Yeah. – How about you, Jensen? I know you watch our videos, and I also know that you don't like it when you think I'm not being fair, which is kind of funny to me, because I get to sit here imagining a man who casually buys $40 billion companies and changes computing as we know it, who still gets butthurt when some nobody YouTuber doesn't like
his gaming widget enough.
But on the other hand,
you know what, Jensen? I actually really like that about you. It's that kind of passion
that makes Nvidia so great. So I've resolved to keep making you mad, because evidently, when you got mad that I didn't like your RTX
2080 Ti, we got the 3080. So I might go even harder on you guys just to see how far
you can push PC gaming. Nah, I'm just kidding, Jensen. Sort of. Since I have your attention, please unlock the Titan
driver paths and SR-IOV, at least for the RTX 3090. This is a niche enthusiast card, and it should really enjoy
niche enthusiast features, don't you think? Just like our viewers enjoy my segue to sponsors like Privacy and 1Password, two services that prioritize
security and convenience. Starting now, Privacy
users can automatically add their virtual cards into
their 1Password vault with a single click, and 1Password users will be able to create
virtual cards in their browser and store them in 1Password. When creating a new virtual
card with 1Password, users can set a spending cap and adjust settings for a one-off payment, monthly or annual limits, or a total amount.
Users can limit cards
so they can only be used at a single service to help
keep things locked down, and when it's time to
enter payment details, 1Password will show users any cards associated with that website,
including any Privacy cards they've generated. So head to privacy.com/linus
and sign up for an account. New Privacy customers
will automatically get five dollars to spend
on their first purchase. Or if you're a new 1Password user, head to the link below to
also receive five dollars when signing up. Good stuff. So thanks for watching, guys. If you're looking for
something else to watch, go check out our review of the RTX 3080, where we happened to use a Ryzen bench instead of an Intel one.
Does it perform better with PCIe Gen 4? I'll see you over there.