
Posted

Good link! Thx for passing that along. Now I see why you were so interested in the Lynnfield (i5) chip.

 

ATM, I'm still leaning toward the i7, as it appears to better support some things that I'm interested in (e.g., SLI) and an upgrade path to 6 cores/12 threads on the 2010 horizon.

 

If I could convince myself that it would work out right, I'd entertain the option of building on a dual-socket mobo with a Xeon processor or two. I haven't looked at how astronomical the pricing on that would be yet... I probably wouldn't do it unless I came across some good articles on how someone else did it for a gaming machine... getting too far off the beaten path can be painful.


There's no place like 127.0.0.1

Posted

hey Cy,

I'm trying to love the i5 (11xx), but the i7 (13xx) is definitely the more future-proof way to go. Several mobo makers have released more moderately priced units, so one can be bought for as little as $170.00.

I hope DCS is working on a new graphics engine, and multi-core coding for its future efforts. New GPUs from ATi and NVidia will be coming out in a few months, with more go-fast architecture. Now if Intel were to release a 6-, 8-, or 12-core i7 processor somewhere along the line, that would be great. It wouldn't hurt my feelings to see a GPU released with a 1 GB bus. Open up them pipes! :D

Flyby out

The U.S. Congress is the best governing body that BIG money can buy. :cry:

Posted

I think that article indicated that a 6-core i7 was anticipated in the 1st half of 2010. That seems reasonable.

 

Got a good link on the upcoming nVidia card(s) and projected release dates?


There's no place like 127.0.0.1

Posted

yeah man! that thing is a beast! :thumbup: With that kind of system bandwidth (40+ GB/s) and core usage, imagine our flight sims suddenly not being very CPU-intensive. At least for a while, anyway. I was surprised to see that Intel used a Tyan motherboard. You don't see Tyan on the hardware sites too often. But if they have the hookup, well, there ya are! :D

Flyby out

The U.S. Congress is the best governing body that BIG money can buy. :cry:

Posted

Interesting too that Nvidia has licensed the major mobo makers to allow SLI on the P55 mobos. Well, it's just a mental exercise now. ;)

Flyby out

The U.S. Congress is the best governing body that BIG money can buy. :cry:

Posted (edited)
Good link! Thx for passing that along. Now I see why you were so interested in the Lynnfield (i5) chip.

ATM, I'm still leaning toward the i7, as it appears to better support some things that I'm interested in (e.g., SLI) and an upgrade path to 6 cores/12 threads on the 2010 horizon.

If I could convince myself that it would work out right, I'd entertain the option of building on a dual-socket mobo with a Xeon processor or two. I haven't looked at how astronomical the pricing on that would be yet... I probably wouldn't do it unless I came across some good articles on how someone else did it for a gaming machine... getting too far off the beaten path can be painful.

 

How exactly do you intend to get a game utilizing two processors at the same time? By recoding the game?

 

SLi, and Crossfire for that matter, is poor technology. I have no idea what compels people to buy into it. Firstly, the gains added by the second GPU are horrendously inefficient, running anywhere from 0% (totally incompatible) to 40% (the best-case scenario most of the time).

 

More than that, because the GPUs render frames independently, micro-stuttering is introduced. When one GPU renders a frame faster than the other, they go out of sync and the display stutters. This absolutely destroys effective frame rates and can make a game supposedly running at 40FPS look as poor as a game running at 20FPS.

 

As it is now they are both fundamentally broken. Amazingly, one GPU running at a lower frame rate can look better than two GPUs at a higher frame rate.

 

The i7 920 is the way to go. Its multiplier is locked, but the low stock base clock leaves it a beastly overclocker. DDR3-1600 is fast, but since the latest processors from both AMD and Intel pair a low base clock with a high multiplier, all that memory speed forces a high memory ratio, which is very inefficient. Higher-speed RAM tends to have looser timings as well, so be careful about what you are putting into your system.
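To make that base-clock arithmetic concrete, here is a small worked example. The stock numbers are the i7 920's; the overclock target and the ratios are illustrative assumptions, not a tuning guide:

[code]
# Worked example of the base clock x multiplier arithmetic above.
# Stock values are the i7 920's; the overclock target is hypothetical.

BCLK = 133          # MHz, stock base clock
CPU_MULT = 20       # the 920's (locked) CPU multiplier

print(BCLK * CPU_MULT / 1000.0)        # 2.66 GHz stock

# With the multiplier locked, overclocking means raising BCLK:
print(180 * CPU_MULT / 1000.0)         # 3.6 GHz at BCLK = 180

# Memory clock = BCLK x memory multiplier, so DDR3-1600 at stock BCLK
# needs roughly a 12x ratio (133 x 12 = 1596), while DDR3-1066 only
# needs about 8x: the "high ratio" the post warns about.
print(1600 / BCLK, 1066 / BCLK)        # ~12.0, ~8.0
[/code]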

Edited by TheHeretic
Posted
How exactly do you intend to get a game utilizing two processors at the same time? By recoding the game?

SLi, and Crossfire for that matter, is poor technology. I have no idea what compels people to buy into it. Firstly, the gains added by the second GPU are horrendously inefficient, running anywhere from 0% (totally incompatible) to 40% (the best-case scenario most of the time).

More than that, because the GPUs render frames independently, micro-stuttering is introduced. When one GPU renders a frame faster than the other, they go out of sync and the display stutters. This absolutely destroys effective frame rates and can make a game supposedly running at 40FPS look as poor as a game running at 20FPS.

Never heard of buffering? Someone tried saying something along these exact same lines at the nvidia SLI forums. The software takes care of syncing the 2 video cards to prevent stuttering (and it does a damn good job, btw), and there are games that use multicore very well. I suggest talking to a professional techie and getting some info from them before naysaying. There's a reason why people spend up to 8k on a computer to get the highest benchmarks; do you think they use only 1 video card and a single CPU?

Posted (edited)
Never heard of buffering? Someone tried saying something along these exact same lines at the nvidia SLI forums. The software takes care of syncing the 2 video cards to prevent stuttering (and it does a damn good job, btw), and there are games that use multicore very well. I suggest talking to a professional techie and getting some info from them before naysaying. There's a reason why people spend up to 8k on a computer to get the highest benchmarks; do you think they use only 1 video card and a single CPU?

 

How is buffering going to help the inherent problems with AFR (alternate frame rendering)?

 

Rendering frames asynchronously is always going to run into the problem of cards producing frames at different rates, unless you can teach a GPU to predict the future. The GPUs have to deliver each frame within equal intervals. A game running at 60FPS must deliver a frame every 16ms. Multiple GPUs have a difficult time trying to deliver their frames at the correct intervals (well, the second GPU is the culprit), hence some frames come out too quickly or not quickly enough. When two frames arrive too close together, the large gap before the next one creates stuttering in the image: what you'd see if a game were running at 15FPS. The more out of whack the frame timing is, the worse the image quality. Fraps will still report a higher frame rate, but the uneven delivery of the frames lowers the effective frame rate. The worst-case scenario is the frames winding up delivered 1ms apart with a large gap in the middle until the next frame comes along, and you end up with a game running at 60FPS looking like it runs at 30FPS: hence the flaw in the tech.

 

To illustrate, imagine delivering 10 frames in 100ms. Each frame should come out at an interval of 10ms: 10ms, 20ms, 30ms, 40ms, etc. What creates a smooth image is the lack of gaps between frames, not the number of frames. Now imagine delivering 20 frames in 100ms, at 10ms, 11ms, 20ms, 21ms, etc. The effective gap between each pair of frames is still 9ms. Your frame rate is doubled, but the quality of the image is essentially identical (or worse, depending on the user). You should be getting a frame every 5ms, but because GPU A doesn't know when GPU B will finish its frame, the drivers attempt to estimate. Trouble is, without premonition there's simply no way of knowing, no matter how well the drivers themselves are written. Every game is very different, with many different variables: without knowing what lies ahead, an algorithm telling GPU B how long it should wait is almost impossible. The GPUs are trying to deliver the best frame rate possible, but spitting frames out too close together creates huge frame gaps. How long should GPU B wait after GPU A has delivered its frame? This is the inherent problem with AFR: being unable to deliver frames at the correct intervals. That was the worst-case scenario, but even with smaller deviations it's possible to make 60FPS look like 45FPS.
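That 10ms-versus-9ms argument is easy to check with a few lines of code. The sketch below is a minimal illustration of the idea; the helper names and the pessimistic "worst gap" metric are my own assumptions, not anything from Fraps or the drivers:

[code]
# Minimal sketch (my illustration, not from any driver or benchmark):
# nominal FPS counts frames; "effective" FPS is set by the worst gap.

def nominal_fps(ts_ms):
    # frames per second implied by the frame count alone
    return 1000.0 * (len(ts_ms) - 1) / (ts_ms[-1] - ts_ms[0])

def effective_fps(ts_ms):
    # pessimistic: smoothness is limited by the largest inter-frame gap
    gaps = [b - a for a, b in zip(ts_ms, ts_ms[1:])]
    return 1000.0 / max(gaps)

# Single GPU: a frame every 10 ms.
single = [10 * i for i in range(11)]                  # 0, 10, ..., 100

# AFR pair-up: twice the frames, delivered at 0, 1, 10, 11, 20, 21, ...
afr = sorted([10 * i for i in range(11)] + [10 * i + 1 for i in range(11)])

print(nominal_fps(single), effective_fps(single))     # 100.0  100.0
print(nominal_fps(afr), effective_fps(afr))           # ~208   ~111
[/code]

Nominal FPS roughly doubles while the gap-based figure barely moves, which is exactly the point being made.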

 

On top of this, the higher fluctuation in the delivery of frames is noticeable to a sharp eye. Having frames come close together and then far apart, even if the gaps between them are smaller overall than on a single-GPU setup, degrades the image quality. This doesn't affect all users, but it's very noticeable for me, and it's one more reason why I dislike SLi and Crossfire. It also has a negative effect on input lag, as does triple buffering: which is what you seem to be suggesting.

 

People buy $8000 PCs because they have money to burn and enjoy big numbers. Benchmarks do not translate directly into in-game performance: you don't get micro-stuttering when all you are looking at are numbers, but you will get it in game. The degree of severity will differ from unnoticeable to cutting your frame rate in half, but why take the leap when there are single-GPU options anyway?

Edited by TheHeretic
Posted

SLi, and Crossfire for that matter, is poor technology. I have no idea what compels people to buy into it. Firstly, the gains added by the second GPU are horrendously inefficient, running anywhere from 0% (totally incompatible) to 40% (the best-case scenario most of the time).

More than that, because the GPUs render frames independently, micro-stuttering is introduced. When one GPU renders a frame faster than the other, they go out of sync and the display stutters. This absolutely destroys effective frame rates and can make a game supposedly running at 40FPS look as poor as a game running at 20FPS.

As it is now they are both fundamentally broken. Amazingly, one GPU running at a lower frame rate can look better than two GPUs at a higher frame rate.

The i7 920 is the way to go. Its multiplier is locked, but the low stock base clock leaves it a beastly overclocker. DDR3-1600 is fast, but since the latest processors from both AMD and Intel pair a low base clock with a high multiplier, all that memory speed forces a high memory ratio, which is very inefficient. Higher-speed RAM tends to have looser timings as well, so be careful about what you are putting into your system.

 

 

You should have seen how Crysis looks on my system after having added the second GTX. :pilotfly:

 

To call SLI "poor technology" and "horrendously inefficient" is, at the least, unfair.

It's true that sometimes 2 GPUs or more are inefficient, but hey dude, that's the application's fault, not the technology itself.

 

I never faced the micro-stuttering phenomenon.

 

And yes, the i7 920 is the way to go. I'm running mine @ 3.4GHz without a single voltage change, and my RAM is at its spec timings.

 

Try to run Panzertard's map with a single card, then add a second one and tell me face to face there are no differences (I did).

 

Regards.

Posted
You should have seen how Crysis looks on my system after having added the second GTX. :pilotfly:

To call SLI "poor technology" and "horrendously inefficient" is, at the least, unfair.

It's true that sometimes 2 GPUs or more are inefficient, but hey dude, that's the application's fault, not the technology itself.

I never faced the micro-stuttering phenomenon.

And yes, the i7 920 is the way to go. I'm running mine @ 3.4GHz without a single voltage change, and my RAM is at its spec timings.

Try to run Panzertard's map with a single card, then add a second one and tell me face to face there are no differences (I did).

Regards.

 

I don't think it's unfair to call Nvidia and ATi out on releasing technology that is so expensive and has serious downsides. Of course SLi and Crossfire have advantages: they make games that were once unplayable, playable! But AFR's disadvantages aren't well reported, and the more promising tech, SFR (split frame rendering), has more technical problems but not as many inherent ones. It's also much harder to do right, which is presumably why AFR is so popular.

 

If people are plonking down twice the cost, they aren't asking for a whole new set of problems, just a performance boost. One better GPU will beat two lesser GPUs every time, simply because it introduces fewer headaches and represents true frame rates better. What good is gaining a frame rate boost of 15 in a game if so few of those frames are positioned properly? In games like Crysis, 30 FPS can look like 17 FPS this way: not something I'm fond of.

Posted
How exactly do you intend to get a game utilizing two processors at the same time? By recoding the game?

 

Multiple choice:

 

a. Some software (few games, atm, admittedly) can utilize multiple cores / multiple threads. Presumably, they would do the same, at least in most of those cases, with multiple processors in multiple sockets.

 

b. Me recoding? No. However, the clear overriding trend in PC hardware is multiple cores / multiple threads. With that being the case, what are the software producers going to do? If they wish to offer competitive products, they are going to have to get on the parallel-processing development bandwagon (see the sketch below). (Either that, or write incredibly tight code in Assembler that can run like a scalded dog on one core... and that ain't going to happen in the gaming world, since the development cycle has to be short enough to compete.)

 

- Keep in mind, in my case, my upcoming desktop purchase is not going to be solely for DCS:BS as it currently stands. I play other games/genres, and I also think that, with ED obviously being in this for the long haul, we will eventually see DCS more capably exploiting the hardware.
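As a purely illustrative aside, the multi-core split described in (b) can be pictured with a toy example. Everything here (the entity model, the numbers) is a hypothetical sketch, far simpler than any real engine:

[code]
# Toy sketch of farming independent per-entity updates out across cores.
# Hypothetical illustration only; no real engine works this simply.
from concurrent.futures import ProcessPoolExecutor
import os

def update_entity(state):
    """Stand-in for one entity's physics/AI tick."""
    x, v = state
    dt = 1.0 / 60.0            # one 60 Hz frame
    return (x + v * dt, v)

if __name__ == "__main__":
    entities = [(float(i), 1.0) for i in range(10000)]
    workers = os.cpu_count()   # scales with however many cores are present
    with ProcessPoolExecutor(max_workers=workers) as pool:
        entities = list(pool.map(update_entity, entities, chunksize=512))
    print("updated", len(entities), "entities on", workers, "workers")
[/code]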

 

SLi, and Crossfire for that matter, is poor technology. I have no idea what compels people to buy into it. Firstly, the gains added by the second GPU are horrendously inefficient, running anywhere from 0% (totally incompatible) to 40% (the best-case scenario most of the time).

 

More than that, because the GPUs render frames independently, micro-stuttering is introduced. When one GPU renders a frame faster than the other, they go out of sync and the display stutters. This absolutely destroys effective frame rates and can make a game supposedly running at 40FPS look as poor as a game running at 20FPS.

 

As it is now they are both fundamentally broken. Amazingly, one GPU running at a lower frame rate can look better than two GPUs at a higher frame rate.

 

There are inefficiencies in the use of multiple GPUs, just as there are in the use of multiple CPUs. I understand that when I buy something with two processors, it is not going to be twice as effective as only one of the same processors. I'm OK with that. If it is enough of an improvement to appeal to me, I want it and can afford it, then I'll do what I can with it.

 

Those of you with eagle eyes will keep holding the manufacturers' feet to the flames, and they will keep striving to improve to earn your future dollars (or perhaps just to shut you up, but I'm guessing it's the $$$ ;) ).

 

As someone who remembers what he spent on building his first PC back in 1983, and who remains mindful of what we managed to get men to the moon with... I'm pretty friggin' happy with what I'm able to get my hands on these days.

 

As for SLI, I may not use it... but I also want to keep my options open. Currently, I'm thinking of going with an nVidia GTX295, but not so much for the SLI as for the three outputs. The 2 DVI and the HDMI (with an adapter) can feed three monitors without using a Matrox TH2Go. I'll probably try the SLI with a single monitor first to see how I like it, but I think I keep hearing two more monitors calling my name...

 

The i7 920 is the way to go. Its multiplier is locked, but the low stock base clock leaves it a beastly overclocker. DDR3-1600 is fast, but since the latest processors from both AMD and Intel pair a low base clock with a high multiplier, all that memory speed forces a high memory ratio, which is very inefficient. Higher-speed RAM tends to have looser timings as well, so be careful about what you are putting into your system.

 

I'm definitely looking hardest at some sort of i7. I've seen more pricing info, etc. on the Xeon stuff since that earlier post... Short of hitting the lottery, I can't see myself spending that much on my rig anytime soon. Especially with it (Xeons and dual CPU mobo) not being really designed for gaming.

 

ATM, going up a notch higher than the 920, with water cooling on the CPU (probably Asetek) is looking pretty good. With DCS:BS in its current state, it appears that clock rate (more than cores) is the main key to the best performance in that program. Between the future, and other things I do, I still want plenty of cores / threads, though. So, if I can afford it when the purchase time comes, a fast, OC'd, water cooled i7 looks like the ticket. Of course, by the time I'm ready to buy, we may have a 6 core / 12 thread offering out there to look at... :)


There's no place like 127.0.0.1

Posted

hey that looks pretty convincing. I always thought Crysis was a bear to run. But apparently it's a tamer bear here. ;)

Flyby out

The U.S. Congress is the best governing body that BIG money can buy. :cry:

Posted

Not bad for a quad core w/ SLI, I might say. I can understand some people's frustration with the technology, but I don't understand why one has to bash it. It's there, it works fairly obviously, and it's here to stay. I agree it needs to be perfected, but what's the other alternative? I'll just say that the software and hardware engineers know what they are doing, and if there weren't any significant advances, they wouldn't release the technology.

Posted
Not bad for a quad core w/ SLI, I might say. I can understand some people's frustration with the technology, but I don't understand why one has to bash it. It's there, it works fairly obviously, and it's here to stay. I agree it needs to be perfected, but what's the other alternative?

 

Multiple GPUs on one "card"? That's always been my take on what's "next". It seems logical enough. Of course, then someone will run two or more Multi-GPU cards in SLI. :smilewink:

 

I'm still betting that we'll see a single-card, multi-GPU type of thing at some point. Call it "SLI for the masses". :D

 

My 2 cents is this: I like to build my own, but I've never been interested in spending extra $1000s getting those benchmarks just for the sake of getting them.

 

Why, you might ask? Easy. A PC game like Crysis is an anomaly. The developers made a PC game that fully takes advantage of the technology to make the thing look mind-blowing, but with the side effect of it not being portable to consoles. This is far from the norm. Look at some of the newer games coming out, like Modern Warfare, for example. They look pretty damn good, but it is VERY obvious that they're dumbed down so they can also run on the consoles. Consoles are where the money is, and the majority of developers know this. They'll sacrifice a PC port before they'll think of doing that for a console. I'm not seeing the point of spending extra $1000s on a PC to run games that an Xbox 360 or PS3 can run with no problems, other than just getting the benchmark for its own sake, like I said before. ;)

 

This philosophy has made me a happy camper so far, and the only problem I run into is flight sims. You have a handful of developers on small budgets making these things with what appears to be no attempt to make them less CPU-dependent (doesn't Windows 7 have a way of using the GPU for processing things OTHER than graphics?), so you basically have to throw a ludicrous amount of CPU horsepower at them to get them to run halfway playable. GPUs can already do physics; I've wondered if there's a way to get them to do other stuff, like the calculations needed for flight models.
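On that last question: flight-model math is largely the same arithmetic applied to many independent values, which is the shape of work GPGPU APIs (CUDA, or DirectCompute in Windows 7's DirectX 11) are built for. Below is a hedged sketch using NumPy on the CPU purely as a stand-in for that data-parallel style; the coefficients are invented:

[code]
# Hypothetical illustration: lift/drag over a million airspeed samples.
# NumPy on the CPU stands in for a GPU here; the point is that the same
# arithmetic runs on every element, which is what GPUs are built for.
import numpy as np

rho = 1.225                        # sea-level air density, kg/m^3
S, CL, CD = 30.0, 0.8, 0.03        # wing area and coefficients (made up)

v = np.linspace(50.0, 300.0, 1_000_000)   # airspeeds, m/s
q = 0.5 * rho * v ** 2                    # dynamic pressure
lift = q * S * CL                          # element-wise, no branching
drag = q * S * CD

print(lift[0], drag[-1])
[/code]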

Posted (edited)
Multiple GPUs on one "card"? That's always been my take on what's "next". It seems logical enough. Of course, then someone will run two or more Multi-GPU cards in SLI. :smilewink:

I'm still betting that we'll see a single-card, multi-GPU type of thing at some point. Call it "SLI for the masses". :D

My 2 cents is this: I like to build my own, but I've never been interested in spending extra $1000s getting those benchmarks just for the sake of getting them.

Why, you might ask? Easy. A PC game like Crysis is an anomaly. The developers made a PC game that fully takes advantage of the technology to make the thing look mind-blowing, but with the side effect of it not being portable to consoles. This is far from the norm. Look at some of the newer games coming out, like Modern Warfare, for example. They look pretty damn good, but it is VERY obvious that they're dumbed down so they can also run on the consoles. Consoles are where the money is, and the majority of developers know this. They'll sacrifice a PC port before they'll think of doing that for a console. I'm not seeing the point of spending extra $1000s on a PC to run games that an Xbox 360 or PS3 can run with no problems, other than just getting the benchmark for its own sake, like I said before. ;)

This philosophy has made me a happy camper so far, and the only problem I run into is flight sims. You have a handful of developers on small budgets making these things with what appears to be no attempt to make them less CPU-dependent (doesn't Windows 7 have a way of using the GPU for processing things OTHER than graphics?), so you basically have to throw a ludicrous amount of CPU horsepower at them to get them to run halfway playable. GPUs can already do physics; I've wondered if there's a way to get them to do other stuff, like the calculations needed for flight models.

 

Well, there are already X2-type cards with two GPUs on a single board. The GTX 295 and 4870X2, for example, though you may be talking about something different. A GPU really is just a specialized CPU, so I'd imagine other calculations could go through them. Problem is, GPUs already have a hard job, and asking them to do even more is going to hurt the guy with a weak GPU. You can set up the second GPU in an SLi setup to do nothing but physics, but PhysX is Nvidia tech and I don't think all that many games justify doing so.

 

What about 51 FPS?

 

DX10/Very High/x64

 

Something of a straw man, no? It only solidifies my point: in fact, achieving higher frame rates in an improper manner pretty much is my point.

 

Multiple choice:

 

a. Some software (few games, atm, admittedly) can utilize multiple cores / multiple threads. Presumably, they would do the same, at least in most of those cases, with multiple processors in multiple sockets.

 

We are talking about two different things here. An i7 has 4 cores and 8 threads, with most games using two threads and a few using four. No doubt multiple cores are the future, but multiple CPUs? Throwing two dual-core CPUs into the same setup will give you 4 cores to work with for some games, but so few use them; throwing in two i7s (and I don't know if there's a motherboard that supports this) gives you 16 threads. I think we'll be waiting a very long time to see any game utilize that sort of thing, and it seems like a bit of a waste.

 

You could do protein folding though! Help cure cancer! Might be worth it right there.

Edited by TheHeretic
Posted

 

 

 

Something of a straw man, no? It only solidifies my point: in fact, achieving higher frame rates in an improper manner pretty much is my point.

 

 

 

 

You think this is an improper manner, so give us the right one.

 

Apart from the numbers and statistics, my games are running far better, loading faster, looking prettier, and not lagging.

 

Yes, SLI is expensive, and I saved a lot to buy a second card, but IMHO the result is worth it.

 

I have a Mazda 6, and if I could afford an Aston Martin DB9, I wouldn't think twice about whether it's faster than an Enzo or cheaper than an RR; I would simply buy it because I like it!

You said you don't understand why people are using this technology... I do!

 

Did you ever try SLI? Are you a gamer?

 

Just Do It = Yes I Can :thumbup:

Posted
You think this is an improper manner, so give us the right one.

Apart from the numbers and statistics, my games are running far better, loading faster, looking prettier, and not lagging.

Yes, SLI is expensive, and I saved a lot to buy a second card, but IMHO the result is worth it.

I have a Mazda 6, and if I could afford an Aston Martin DB9, I wouldn't think twice about whether it's faster than an Enzo or cheaper than an RR; I would simply buy it because I like it!

You said you don't understand why people are using this technology... I do!

Did you ever try SLI? Are you a gamer?

Just Do It = Yes I Can :thumbup:

 

Firstly, there's a question of whether someone who's plonked down that amount of money is the most objective reference point for something like this. People tend to see what they want to see; not that you are being dishonest.

 

Technically speaking, AFR is flawed. I'm not hearing any evidence to the contrary, just people throwing around Crysis benchmarks. The solution as of now is to buy one powerful GPU over two lesser ones. If the most powerful GPU isn't cutting it, you are going to have to branch into a multi-GPU setup, obviously.

 

Your car analogy is weird. You can "like" a car because of its aesthetics, handling, and feel; at the end of the day, a GPU is there to do a job that should be as transparent as possible.

 

I've used both Crossfire and SLi. I have two 8800GTs and have since been given a 4870X2. And yes, I am a gamer, but a scrupulous one.

Posted

I had to return to the thread's beginning; I think we took a wrong turn...

 

 

I request your opinions on which processor is better suited for simming. I refer to the Core i7 versus the Core i5.

 

i7 definitely, and please put 2 GPUs or more on it... for the eye candy!
