
GTX1080 who is buying it?


Recommended Posts


$100 extra for premium caps and VRM array likely

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


Going to stick with my SLI 970s and sit this one out. Will get the next release most likely.

VR Cockpit (link):

Custom Throttletek F/A-18C Throttle w/ Hall Sensors + Otto switches | Slaw Device RX Viper Pedals w/ Damper | VPC T-50 Base + 15cm Black Sahaj Extension + TM Hornet or Warthog Grip | Super Warthog Wheel Stand Pro | Steelcase Leap V2 + JetSeat SE

 

VR Rig:

Pimax 5K+ | ASUS ROG Strix 1080Ti | Intel i7-9700K | Gigabyte Z390 Aorus Master | Corsair H115i RGB Platinum | 32GB Corsair Vengeance Pro RGB 3200 | Dell U3415W Curved 3440x1440


I "Should" be getting my pre-order Rift in August,I'll have to see if my Titan Black handles VR before I start thinking of a 1080.

My GTX 780 OC handles it just fine, at least for DCS 1.5. The hype is a bit overrated.

However, to answer the topic question, I'll be getting a 1080; I skipped the 9xx series.


Will it be possible to buy (not build) a complete PC with the GTX 1080 for roughly €2000? I have no interest in building a PC whatsoever; my spare time is very limited already.

 

Sure, it fits. I'd build you one and send it to Österreich ;)

Gigabyte Aorus X570S Master - Ryzen 5900X - Gskill 64GB 3200/CL14@3600/CL14 - Asus 1080ti EK-waterblock - 4x Samsung 980Pro 1TB - 1x Samsung 870 Evo 1TB - 1x SanDisc 120GB SSD - Heatkiller IV - MoRa3-360LT@9x120mm Noctua F12 - Corsair AXi-1200 - TiR5-Pro - Warthog Hotas - Saitek Combat Pedals - Asus PG278Q 27" QHD Gsync 144Hz - Corsair K70 RGB Pro - Win11 Pro/Linux - Phanteks Evolv-X 


Will it be possible to buy (not build) a complete PC with the GTX 1080 for roughly €2000? I have no interest in building a PC whatsoever; my spare time is very limited already.

 

The cheapest prebuilds I have seen are HPs with the 30% discount codes they throw around.

8700k@4.7 32GB ram, 1080TI hybrid SC2


Changed my mind. After lots of research I'll be waiting for AMD to release their next cards, and might even wait for Vega; it's not that far off and seems more future-proof. If the midrange Polaris can at least run VR, I'll be good for a while. Big changes seem to be coming, and I play far more than just DCS.

1080 ti, i7700k 5ghz, 16gb 3600 cl14 ddr4 oc


Polaris will cover the R9 480X and lower; they won't have a GPU die big enough to match Fury and NVIDIA's big dies...

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


Polaris will cover the R9 480X and lower; they won't have a GPU die big enough to match Fury and NVIDIA's big dies...

 

480X? I think you mean 490X, and if the 490X performs at or above 980 Ti levels in the $350 range, it would be a very competitive card for budget buyers. Polaris is about laptops and mid-range cards for lower prices, they say, but we will find out for sure later this month. Can't wait, as I have tended to stick with ATI/AMD cards.


Edited by OldE24

8700k@4.7 32GB ram, 1080TI hybrid SC2


Sorry, numpad typo.

 

No more Tahiti and Hawaii rebrands.

 

Polaris 10/11 will cover the R9 490X all the way down; no more GCN 1.0, 1.1, or 1.2 rebrands.

 

Everything will be 14nm FinFET GCN 1.3 with DP 1.3, HDMI 2.0a, and GDDR5X.

 

Fury/Fury X will continue to be the top GPU for the enthusiast market (at least until Vega and HBM2 are ready in 2017-ish).

Though it might get a small design update to compete with the 1070/1080 (lower price, higher clock rates, etc.),

it will be the same GPU as before.

A clock increase and a $75 price drop would keep it competitive with the 1070/80, as it's about 10% behind now in DX12 (AotS bench). That's 3 FPS at 30, 6 FPS at 60, 9 FPS at 90, 12 FPS at 120, and roughly 14 FPS at 144 Hz, easily overcome by a slight overclock.

Negligible to the Human Eye.
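A minimal sketch of that arithmetic (the ~10% figure is the poster's assumption from the AotS bench, not a measured constant):

```python
# Hypothetical illustration: FPS deficit implied by a fixed ~10% performance gap.
DEFICIT = 0.10  # assumed relative gap (the AotS DX12 figure quoted above)

for target_fps in (30, 60, 90, 120, 144):
    loss = target_fps * DEFICIT
    print(f"{target_fps:>3} Hz target: about {loss:.1f} FPS behind")
```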

 

Looking at the scope of things, AMD can get away with it easily.

 

Polaris is a small die compared to Hawaii XT, Fiji, Kepler, and Pascal...

 

Let me put it this way:

- Full 16nm Pascal die: 610 mm^2 (GP100, 3584 CUDA cores)

(This full Pascal GPU is retailing for $5000 in the pro market; it will launch later for the consumer gaming market with a max of 16GB HBM2.)

- Full 16nm Pascal die: ~436 mm^2 (est.) (GP104, 2560 CUDA cores)

(This is the GPU used in the GTX 1080 and laser-cut for the GTX 1070.)

 

- Full 28nm Fiji die: 596 mm^2

- Full 28nm Hawaii XT die: 438 mm^2

- Full 28nm Tahiti XT die: 352 mm^2

- Full 14nm FinFET Polaris die: 232 mm^2

(This would put it at about 3000-ish SPs, slightly higher than Hawaii, and AMD has been touting 2.5x density, so it may be more than that if 232 mm^2 packs the equivalent of ~580 mm^2 of 28nm logic (3700-ish SPs).)

 

Alas, we'll find out in a couple of weeks.

 

Now,

The bigger the GPU die, the fewer you can put on a wafer, and the lower the yields from said wafer due to defects in the manufacturing process and the wafers themselves.

 

The smaller the GPU die, the more you can fit on a wafer, and the higher the yields from said wafer.

 

For example:

If you can put 4 GPU dies on a wafer and the wafer has 1 defect, you lose 25% of the wafer and 1 GPU, leaving 3 viable samples.

If you can put 8 GPU dies on a wafer and the wafer has 1 defect, you lose 12.5% of the wafer and 1 GPU, leaving 7 viable samples.

 

 

So in conclusion:

Big GPU dies fit fewer per wafer and produce lower yields per wafer, forcing prices to skyrocket to cover manufacturing costs (NVIDIA and AMD both pay for the wafers).

 

Small GPU dies fit more per wafer and produce higher yields per wafer, allowing prices to stay lower while selling more units per wafer.
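A rough back-of-the-envelope sketch of that yield argument (die counts and the single-defect scenario are invented numbers, purely to illustrate the scaling):

```python
# Hypothetical wafer-yield illustration: same wafer, same number of random
# defects, different die sizes. Numbers are made up for illustration only.
def viable_dies(dies_per_wafer: int, defects: int) -> int:
    # Worst case: every defect lands on a different die and kills it.
    return max(dies_per_wafer - defects, 0)

for dies in (4, 8, 16):
    good = viable_dies(dies, defects=1)
    print(f"{dies:>2} dies/wafer, 1 defect -> {good} viable ({good / dies:.0%} of wafer)")
```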

 

Now, add about 2 years to this equation, with DX12 and Vulkan both using low-level GPU pooling (explicit multi-adapter mode, etc.).

 

Add interposers to the GPU design, and suddenly 2 to 4+ GPU dies per substrate package is more powerful and cheaper than one large chip of comparable size.

 

For example:

4 schedulers, 4 compute engines, and 4 command processors maintaining 1500 SPs each (6000 total)

would perform better than 1 scheduler/command processor/compute engine trying to drive 6000 SPs by itself,

especially in multiple-viewport renderers (i.e. VR).

 

And it would cost significantly less to manufacture.

 

 

This is where Navi takes over: one small GPU die design, with multiple dies linked on an interposer to form high-performance graphics cards.


Edited by SkateZilla

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


Skate, it is quite obvious a smaller die will be cheaper than a bigger one, for all the reasons you mentioned. The question is how you can make that smaller die do more work than the larger one. And something tells me it is impossible (given the same manufacturing process) to fit more stream units on less surface area.

Anton.

 

My pit build thread .

Simple and cheap UFC project


Skate, it is quite obvious a smaller die will be cheaper than a bigger one, for all the reasons you mentioned. The question is how you can make that smaller die do more work than the larger one. And something tells me it is impossible (given the same manufacturing process) to fit more stream units on less surface area.

 

 

AMD is moving to making smaller dies and linking them via an interposer.

 

So you can have a 600 mm^2 interposer with 4 x 232 mm^2 GPU dies linked together, plus 8 stacks of HBM2 memory, which would add up to something along the lines of 10,000+ GCN 1.3 cores and 16+ GB of memory.

 

Those four 232 mm^2 GPU dies would be cheaper and easier to manufacture than one large 470 mm^2 GPU die.

 

Low End/Entry level would be 1 GPU Die

Mid Level would be 2 GPU Dies

Upper Level would be 3 GPU Dies

Top End would be 4 GPU Dies

 

As far as your system would be concerned, it's one graphics card. It'll be the same concept as selling Tahiti GPUs with sections disabled via laser cut (i.e. 7970, 7950, 7930/7870 XT), except it would be an interposer with 1, 2, 3, or 4 GPU dies linked.
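A toy sketch of how such a stack might be segmented purely by die count (cores per die and memory per stack are assumptions picked to match the rough figures above, not announced specs):

```python
# Hypothetical product stack built from one small die design. Per-die core
# count and per-stack HBM2 capacity are assumptions chosen so the top tier
# lands near the "10,000+ cores, 16+ GB" ballpark mentioned above.
CORES_PER_DIE = 2500      # assumed
GB_PER_HBM2_STACK = 2     # assumed
STACKS_PER_DIE = 2        # assumed (8 stacks total on the 4-die part)

TIERS = {"entry": 1, "mid": 2, "upper": 3, "top": 4}

for tier, dies in TIERS.items():
    cores = dies * CORES_PER_DIE
    memory = dies * STACKS_PER_DIE * GB_PER_HBM2_STACK
    print(f"{tier:>5}: {dies} die(s), ~{cores} cores, ~{memory} GB HBM2")
```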

 

In 2 years, everything will be DX11/12 anyway, and for DX9 people will start building legacy rigs, as the new GPUs and CPUs in 2 years will require Windows 10 or above regardless.


Edited by SkateZilla

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


Once again, a cool video. The 1080's performance was always going to be about higher clock speeds, because that is what our games need most right now. There are different types of workloads, and some benefit much more from higher clocks than from memory bandwidth. Not all workloads are equal.

 

Going WAAAY back, it used to be that if the memory bandwidth was sufficient for texturing, then higher clocks translated directly into maximum framerate. Things are slightly different in the shader age, but I think MAXWELL on SPEED makes a LOT of sense even if it doesn't have the memory throughput of HBM.

 

If you need the framerate for VR, then you can tune memory throughput demands via in-game settings. Correct me if I am wrong.

 

The GTX 780 Ti has a memory throughput of 336 GB/s, which makes this old card the equal of any new card in that respect. BUT the card was still not fast enough for simple VR workloads. This is why NV has turned towards higher clock speeds: higher clocks in the 900 series, and now even higher clocks with the 1000 series.


Higher clocks don't automatically mean higher performance.

 

What's the instructions-per-clock of a Pascal CUDA core?

What's the FP Divider?

 

You can easily up the clock speed 166% while dropping the IPC 45-50%
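The relation behind that point is just multiplicative; a tiny sketch (the scaling factors below are hypothetical, not measured Pascal figures):

```python
# Effective instruction throughput scales with clock * IPC, so a big clock
# bump can be cancelled out by an IPC drop. Factors here are hypothetical.
def relative_throughput(clock_scale: float, ipc_scale: float) -> float:
    return clock_scale * ipc_scale

print(relative_throughput(1.66, 1.00))  # +66% clock, same IPC -> 1.66x
print(relative_throughput(1.66, 0.55))  # +66% clock, -45% IPC -> ~0.91x
print(relative_throughput(1.66, 0.50))  # +66% clock, -50% IPC -> 0.83x
```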

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs


Well, sign me out of the day-one purchase list. I figure it will be several months until availability is all sorted out and good partner cards are in good supply. As nice as the 1080 sounds, I'm really not looking to fight the crowd for the limited supply of the $700 version of the card.

Waited long enough; just eBayed a used 980 Ti for $450. Still plenty of GPU power, and reasonably priced.

Anton.

 

My pit build thread .

Simple and cheap UFC project


Higher clocks don't automatically mean higher performance.

What's the instructions-per-clock of a Pascal CUDA core?

What's the FP Divider?

 

I don't know. Without looking I'm gonna guess the same as Maxwell.

 

My point is that, all else being equal, higher clocks on NV cards PROBABLY mean better VR performance, and this is perhaps evident seeing as Nvidia IS using the VR perf slide (1080 vs. 980/Titan X) to pimp the 1080.

 

Of course I've got questions. Is memory throughput a big deal with current VR workloads? I do not know. Maybe it is.

 

Of course HBM is going to EVENTUALLY be a big deal, BUT when? When will all that extra memory bandwidth make a difference in an application/game? HBM on the Fury X did not turn it into a 980 Ti killer in VR workloads, so memory throughput was not THE bottleneck.

 

So the video posted earlier leaves an unanswered question: should people take Nvidia's GDDR5X solution ASAP, or be patient and wait for a card where HBM actually makes a difference?

 

Even if GDDR5X is a stopgap, Nvidia will use it to increase clocks ENOUGH to boost VR workload performance so they have performance leadership for another year. People will pay for that. I'll certainly be tempted.


All the VR performance slides in their presentation come from using the GameWorks API and VRWorks features.

 

Which, other than their tech demos, are used by ZERO games right now.

 

 

The bottleneck is DX11 and CPU overhead, and the fact that it's rendering 2 viewports.

 

In this situation "MORE CORES" is better than Faster Cores.

 

However, for DX10/11 apps in VR, they can only process and render objects as fast as the DirectX API can process those commands.

 

The Sad Part:

NVIDIA's new VR features are pretty much rip-offs of what ED already does (to some extent), and what DirectX 12 already does (and DirectX 11).

They're simply gift-wrapping pre-written code for NVIDIA GPU drivers into yet another software API layer.

 

Stereo Single Pass Rendering.

No matter how much they want to say it's single-pass, it's still rendering TWICE. lol.

It renders a scene and then shifts the eye point to render the opposite eye.

DCS's Stereo Monitor Profile does this already.

 

There is no way to render the offset left/right environments in a single pass, as the viewpoints are not the same.

Even rendering the entire world and then processing 2 separate viewports is not considered single-pass, as it's still processing 2 viewports.
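A minimal sketch of that render-twice idea (the helper names and the IPD value here are hypothetical illustrations, not the actual DCS or NVIDIA code path):

```python
# Hypothetical illustration: "single pass" stereo still ends up submitting
# the scene once per eye, with the camera shifted by half the IPD each time.
IPD = 0.064  # interpupillary distance in meters (typical value, assumed)

def render_stereo_frame(scene, camera, render_view):
    # render_view() is a stand-in for whatever actually draws one viewport.
    frames = {}
    for eye, offset in (("left", -IPD / 2), ("right", +IPD / 2)):
        eye_camera = camera.shifted_sideways(offset)  # hypothetical helper
        frames[eye] = render_view(scene, eye_camera)  # one full pass per eye
    return frames
```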

 

Multi-Projection.

The ability to render the world and project multiple viewports simultaneously, thus eliminating FoV warping (seen in Surround/Eyefinity setups).

 

ED's 2- and 3+-screen profiles all have separate viewports with the DX viewpoint shifted.

 

Same thing: no more of the warping you get when you tell DCS to run one single 5760x1080 viewport.

 

 

VR SLI.

 

Side-by-side scaling across multiple GPUs, already native to DX12, Mantle, and Vulkan.

 

Pretty sure both Oculus and HTC intend to integrate SBS multi-adapter rendering into their APIs; that way, any direct-to-Rift/HTC programs that use the Oculus runtime will automatically use a single preset CrossFire/SLI/DX12 EMA setting to have one GPU render the left eye and one render the right. Once that's done, this VR SLI part of their API will be worthless and redundant, as it will be integrated at a low level via DX12.

 

Split-screen SLI was attempted by NVIDIA before and failed miserably; this is the same thing repackaged into the GameWorks API.

 

 

VR Works AUDIO.

DCS already uses positional audio; nothing new...

Look left, hear the engine louder in the left ear than the right, etc.

The audio source is dynamic relative to head position.
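A toy sketch of that head-relative panning idea (the formula is a generic equal-power stereo pan, not DCS's or NVIDIA's actual audio model):

```python
import math

# Toy stereo pan: compass-style angles in degrees, clockwise positive.
# Illustrates the head-relative idea only; not the real audio pipeline.
def ear_gains(source_bearing_deg: float, head_yaw_deg: float) -> tuple[float, float]:
    relative = math.radians(source_bearing_deg - head_yaw_deg)  # source angle relative to the nose
    pan = math.sin(relative)                 # -1 = fully left, +1 = fully right
    left = math.sqrt((1 - pan) / 2)          # equal-power pan law
    right = math.sqrt((1 + pan) / 2)
    return left, right

print(ear_gains(0.0, 0.0))      # sound dead ahead: equal in both ears
print(ear_gains(-90.0, 0.0))    # engine off the left wing, looking ahead: louder in the left ear
print(ear_gains(-90.0, -90.0))  # turn to face it: equal in both ears again
```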


Edited by SkateZilla

Windows 10 Pro, Ryzen 2700X @ 4.6Ghz, 32GB DDR4-3200 GSkill (F4-3200C16D-16GTZR x2),

ASRock X470 Taichi Ultimate, XFX RX6800XT Merc 310 (RX-68XTALFD9)

3x ASUS VS248HP + Oculus HMD, Thrustmaster Warthog HOTAS + MFDs

