
Posted

I've been pondering this for a while: would it be possible? From what I've seen, GPUs can use APIs like PhysX to do calculations that would otherwise be saddled on the CPU. So would it be possible for GPUs to take some of the load off the CPU?

Posted

There was a dedicated PCI physics card a few years back; I don't think it ever caught on because it was *too* specialist.

Hornet, Super Carrier, Warthog & (II), Mustang, Spitfire, Albatross, Sabre, Combined Arms, FC3, Nevada, Gulf, Normandy, Syria AH-6J

i9 10900K @ 5.0GHz, Gigabyte Z490 Vision G, Cooler Master ML120L, Gigabyte RTX3080 OC Gaming 10Gb, 64GB RAM, Reverb G2 @ 2480x2428, TM Warthog, Saitek pedals & throttle, DIY collective, TrackIR4, Cougar MFDs, vx3276-2k

Combat Wombat's Airfield & Enroute Maps and Planning Tools

 


Posted

Ageia PhysX was acquired by Nvidia, and they use the API to drive some physics on their GPUs. It's not really designed for the type of physics that flight sims would use.

Posted (edited)

Yes, in theory it would be possible to run this kind of work on the GPU. However, the nature of the program has to lend itself to it, and the gain in calculation speed depends highly on the operations to be performed: the more parallelisation you can achieve, the more you gain by moving to a GPU. The process of building such an application (or, even more so, retrofitting an existing one) is very complex in this case, however.

Edited by sobek

Good, fast, cheap. Choose any two.

Come let's eat grandpa!

Use punctuation, save lives!

Posted
Ageia PhysX was acquired by Nvidia, and they use the API to drive some physics on their GPUs. It's not really designed for the type of physics that flight sims would use.

 

You can use shaders to run parallelised programs on the GPU; PhysX is no longer required if you have enough knowledge. You are not limited to physics calculations either: you can use it in whatever application you want, as long as a high degree of parallelisation is possible. I have seen examples where AI is computed on the GPU using OpenGL shaders.

 

Regards!!



Posted (edited)

Compute Unified Device Architecture, or CUDA, is a parallel computing architecture developed by NVIDIA and enabled on most newer cards. Physics calculations can be offloaded onto the GPU using CUDA, so you don't need a dedicated physics card (PPU) or to burden the CPU, but only if the game supports such a system. This is possible because PhysX is coded and integrated into the CUDA framework, which makes the PPU redundant.

 

If ED were to use CUDA, then you could see physics calculations on the GPU. CUDA is very fast: for example, a certain neural network operation that takes about 3 hours and 27 minutes on a Core i7 takes 3 minutes and 16 seconds on a Tesla card, about 51 minutes on a 9800 GT, and about 24 minutes on a GTS 240. Those cards are slower because they are limited to single-precision calculations; the Tesla can perform double precision.

 

Physics probably aren't as complicated as a neural network, so imagine how fast those calculations can be made.

CUDA is this fast because it takes advantage of the GPU's parallel nature, and it's not very difficult to program for: it uses C (with NVIDIA extensions and certain restrictions), and there are other bindings as well.
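
To give a feel for that C-with-extensions style, here is a minimal, purely illustrative CUDA sketch (nothing from ED or PhysX; all names are made up) of the kind of data-parallel kernel CUDA is built around, where each thread updates one independent body:

```cpp
// Minimal CUDA sketch: advance many independent bodies by one time step.
// Purely illustrative; not taken from any DCS, ED or PhysX code.
#include <cuda_runtime.h>
#include <cstdio>

struct Body { float x, y, z, vx, vy, vz; };

__global__ void stepBodies(Body* b, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per body
    if (i >= n) return;
    b[i].vz -= 9.81f * dt;          // gravity
    b[i].x  += b[i].vx * dt;        // integrate position
    b[i].y  += b[i].vy * dt;
    b[i].z  += b[i].vz * dt;
}

int main()
{
    const int n = 100000;           // this only pays off for large n
    Body* d_bodies = nullptr;
    cudaMalloc(&d_bodies, n * sizeof(Body));
    cudaMemset(d_bodies, 0, n * sizeof(Body));

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    stepBodies<<<blocks, threads>>>(d_bodies, n, 1.0f / 60.0f);
    cudaDeviceSynchronize();

    cudaFree(d_bodies);
    printf("stepped %d bodies\n", n);
    return 0;
}
```

The speed-up only materialises because all 100,000 bodies are independent of each other; that caveat is exactly what the later replies get into.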

 

While it would be crazy to ask ED to do it for the A-10C on such short notice, it is still possible using PhysX middleware. Perhaps they can consider it in the future.

 

If ED is interested in this, the SDK is free: http://developer.nvidia.com/object/physx.html

Edited by thisisentchris87


Antec DF-85 Case, Asus Sabertooth Z77, Intel i5-3570k Ivy Bridge 3.4 GHz, air-cooled with Noctua NH-D14,

Corsair Vengeance 16GB, EVGA GTX 560 TI, Corsair Professional Series 750W, Creative Sound Blaster X-fi Titanium HD, ASUS EA-N66 Wireless Adapter

Posted

I think the problem is the kind of physics it computes. GPUs are designed to take hundreds or thousands of iterations of a model and compute individual particle physics or AI on each one. You don't do that in a flight sim. You don't go blowing up hundreds of AI units or ever even come across numbers like that. Flight sim physics is about aerodynamic calculations, and I'm pretty sure the next variable mostly depends on the previous one, which doesn't lend itself to parallel computing. That's not to say that other things can't run in parallel, but massive parallel computing just isn't really part of those kinds of physics calculations.
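
A tiny, hypothetical sketch of that distinction (made-up names, nothing from DCS): independent debris particles map naturally onto thousands of GPU threads, while a single aircraft's flight-model integration is a chain in which each step needs the previous result, so it stays a serial loop:

```cpp
// Illustrative only: why debris parallelises but one flight model does not.
#include <cuda_runtime.h>

struct Particle { float x, v; };
struct AircraftState { float speed, alpha, altitude; };

// Thousands of independent particles: one thread each, a good fit for a GPU.
__global__ void stepDebris(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i].x += p[i].v * dt;
}

// One aircraft: step k+1 depends on step k, so the loop cannot be spread
// across threads; it runs sequentially (typically on the CPU).
AircraftState integrateFlightModel(AircraftState s, float dt, int steps)
{
    for (int k = 0; k < steps; ++k) {
        float lift = 0.5f * s.speed * s.speed * s.alpha;  // toy aerodynamics
        s.altitude += (lift - 9.81f) * dt;
        s.speed    -= 0.01f * s.speed * dt;               // toy drag
    }
    return s;
}
```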


Intel i7 990X, 6GB DDR3, Nvidia GTX 470 x2 SLI, Win 7 x64

http://picasaweb.google.com/sweinhart

Posted

Depending on your card and your driver:

Go to System Control (sorry, I have a German Windows version, so I don't know what it's really called in English)

- Hardware & Sound

- and there might be something like "NVIDIA System Tools"

choose "PhysX" on the left side (3D settings), activate it, and see whether it improves your FPS in FC2 or Black Shark.

[screenshot: Nvidia-Geforce-Treiber-05.PNG]

Waiting to build a F/A-18C home-pit...

ex - Swiss Air Force Pilatus PC-21 Ground Crew

SFM? AFM? EFM?? What's this?

 

 

i7-5960X (8 core @3.00GHz)¦32GB DDR4 RAM¦Asus X99-WS/IPMI¦2x GTX970 4GB SLI¦Samsung 850 PRO 512GB SSD¦TrackIR 5 Pro¦TM Warthog¦MFG Crosswind Rudder Pedals

 

Posted
Depending on your card and your driver:

Go to System Control (sorry, I have a German Windows version, so I don't know what it's really called in English)

- Hardware & Sound

- and there might be something like "NVIDIA System Tools"

choose "PhysX" on the left side (3D settings), activate it, and see whether it improves your FPS in FC2 or Black Shark.

 

ED does not currently feature PhysX in their products.

Good, fast, cheap. Choose any two.

Come let's eat grandpa!

Use punctuation, save lives!

Posted

Well, I think the greater issue is that ATI and Nvidia haven't even standardized their GPU compute architectures. ATI is backing OpenCL, and Nvidia backs CUDA.

Posted
Well, I think the greater issue is that ATI and Nvidia haven't even standardized their GPU compute architectures. ATI is backing OpenCL, and Nvidia backs CUDA.

 

There is no need for them to unify their architectures; there "just" needs to be a common language that can compile for both ATI and Nvidia GPUs. The technology is still quite new, so I'm positive it's just a matter of time until we see a common programming language for GPUs.

Good, fast, cheap. Choose any two.

Come let's eat grandpa!

Use punctuation, save lives!

Posted

I could see it being used for important things like proper munitions fragmentation, and even a better aerodynamic model. Imagine stalling the aircraft more accurately.

 

I would think almost everything about a flight sim is pretty physics orientated.

Posted
There is no need for them to unify their architectures; there "just" needs to be a common language that can compile for both ATI and Nvidia GPUs. The technology is still quite new, so I'm positive it's just a matter of time until we see a common programming language for GPUs.

 

To an extent is there not already a lot in common? There are PhysX hacks that allow ATi to use it.

  • 1 year later...
Posted

Reviving the thread: if you own a second, older card, you can offload PhysX operations onto it.

 

For example, have a look at my rig: I use a GTX 570 and offload PhysX to a GTX 260.

 

I believe PhysX is greatly needed in flight sims, as smoke, cloud and wind effects and their behaviour could be much improved.

 

For example, have a look at this video:

 

DCS F16C 52+ w JHMCS ! DCS AH64D Longbow !

Posted

PhysX effects are mainly used for physically 'correct' rendered graphics gadgets, while the weather and environment in DCS are tied to the flight model. I'm not sure PhysX would be feasible; definitely not without a major rewrite of the engine.

Good, fast, cheap. Choose any two.

Come let's eat grandpa!

Use punctuation, save lives!

Posted

Hi!

 

Most of these PhysX-enabled things and other GPU-calculated physics widgets are just optical stuff, like clutter flying around and things breaking. This is done directly and finally on the GPU after the work behind the frame's rendering is already done. It has no impact on gameplay or game mechanics.

 

Now, if you calculate some intricate ballistics or flight model on the GPU, you have to allocate time for the CPU to assemble the data set for a frame, transfer it to the GPU, perform the calculations, get the results back to the CPU, work out what should be in the frame (geometry and so on), and move that to the GPU again, which then renders the frame, whereas today it renders straight from the data the CPU provides. See where I am going?

 

For us (and me too!) these transfers seem trivial, but they add some serious latency. You need to do this for every frame, 30-60 times a second, with all the data.

You would need a method to make physics get out of the way of this.
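
A hedged sketch of that round trip in CUDA terms (hypothetical names, not how DCS actually works), just to show where the per-frame copies sit: upload the frame's input state, compute, copy the results back, and only then can the CPU build the frame, 30-60 times a second:

```cpp
// Illustrative per-frame round trip: CPU -> GPU -> CPU -> (render), every frame.
#include <cuda_runtime.h>

__global__ void computePhysics(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 0.99f;   // stand-in for the real calculation
}

void runFrame(const float* hostIn, float* hostOut,
              float* devIn, float* devOut, int n)
{
    size_t bytes = n * sizeof(float);

    cudaMemcpy(devIn, hostIn, bytes, cudaMemcpyHostToDevice);    // upload (adds latency)
    computePhysics<<<(n + 255) / 256, 256>>>(devIn, devOut, n);  // compute on the GPU
    cudaMemcpy(hostOut, devOut, bytes, cudaMemcpyDeviceToHost);  // download (adds latency)
    // Only now can the CPU use the results to decide what goes into the frame,
    // and this whole round trip has to repeat 30-60 times per second.
}
```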

 

Again, mind you, I am in no way an expert on this, but that is my understanding of how things work from this perspective.

 

 

Super-

Posted
There is no need for them to unify their architectures; there "just" needs to be a common language that can compile for both ATI and Nvidia GPUs. The technology is still quite new, so I'm positive it's just a matter of time until we see a common programming language for GPUs.

 

There are OpenCL and DirectCompute (the DirectX equivalent).

"It takes a big man to admit he is wrong...I'm not a big man" Chevy Chase, Fletch Lives

 

5800X3D - 64gb ram - RTX3080 - Windows 11

Posted (edited)

The big advantage of OpenCL is that you can run the same calculations on the CPU and the GPU, offload what you want, as you want, at any time (the instruction set stays the same), or let the engine choose what fits best within the frame's rendering time. It's not like PhysX, which used to run really badly on the CPU (the CPU path of CUDA/PhysX basically used x87 instructions, which have disappeared from modern CPUs). Nvidia have announced they will open PhysX soon (it was needed: go to a computing conference and even people who use CUDA all day for scientific computing are enthusiastic about OpenCL). Additionally, the new version is more CPU-compatible and so opens up more "interaction" with the CPU. Still, we have not seen any project use this new version.

 

But yes, of course, this means reworking the engine to include it. It will not happen magically just because you open up an API layer.

 

It will be even more effective once computing can use a shared "virtual memory space" (already present in GCN, the AMD architecture in the HD 7970): basically a virtual memory space shared between GPU and CPU memory, where both can store and access data directly and the GPU can work on x86/C++ data structures. For graphics this will mainly be used for extremely large textures; for computing, it will change many things.
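
For illustration only, the nearest equivalent on the Nvidia side would be something like CUDA's unified memory (cudaMallocManaged, which appeared in later CUDA releases than were current here); this is a hedged sketch of the shared-address-space idea described above, not the GCN mechanism itself:

```cpp
// Hedged sketch of a shared CPU/GPU address space using CUDA unified memory.
// Same idea as the "virtual memory space" described above, not its implementation.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float* data, int n, float f)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= f;
}

int main()
{
    const int n = 1024;
    float* data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));     // one pointer, valid on CPU and GPU

    for (int i = 0; i < n; ++i) data[i] = float(i);  // CPU writes directly, no staging copy

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // GPU reads/writes the same allocation
    cudaDeviceSynchronize();

    printf("data[10] = %f\n", data[10]);             // CPU reads the result, no memcpy
    cudaFree(data);
    return 0;
}
```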

 


Edited by Lane

- I7 2600K @5.2ghz ( EK full Nickel waterblock )

- Gigabyte P67A-UD7 B3

- 8GB Predator 2133mhz

- 2x HD7970 - EK Nickel EN H2o block

- 2x Crucial realSSD C300 Raid0

- Black Widow Ultimate - X52 -TrackIR 5

- XIfi Titanium HD

- Win 7 x64Pro

 

Posted
open PhysX soon

 

Good. As long as it's proprietary, no one will ever make any use of it, save for the superficial graphic enhancements.

Good, fast, cheap. Choose any two.

Come let's eat grandpa!

Use punctuation, save lives!

Posted
I think the problem is the kind of physics it computes. GPUs are designed to take hundreds or thousands of iterations of a model and compute individual particle physics or AI on each one. You don't do that in a flight sim. You don't go blowing up hundreds of AI units or ever even come across numbers like that. Flight sim physics is about aerodynamic calculations, and I'm pretty sure the next variable mostly depends on the previous one, which doesn't lend itself to parallel computing. That's not to say that other things can't run in parallel, but massive parallel computing just isn't really part of those kinds of physics calculations.

Most accurate thing I've ever seen a non-developer say. This, in combination with:

Hi!

 

Most of these PhysX-enabled things and other GPU-calculated physics widgets are just optical stuff, like clutter flying around and things breaking. This is done directly and finally on the GPU after the work behind the frame's rendering is already done. It has no impact on gameplay or game mechanics.

 

Now, if you calculate some intricate ballistics or flight model on the GPU, you have to allocate time for the CPU to assemble the data set for a frame, transfer it to the GPU, perform the calculations, get the results back to the CPU, work out what should be in the frame (geometry and so on), and move that to the GPU again, which then renders the frame, whereas today it renders straight from the data the CPU provides. See where I am going?

 

For us (and me too!) these transfers seem trivial, but they add some serious latency. You need to do this for every frame, 30-60 times a second, with all the data.

You would need a method to make physics get out of the way of this.

 

Again, mind you, I am in no way an expert on this, but that is my understanding of how things work from this perspective.

 

 

Super-

explains why PhysX tends to be used only for stuff that never affects gameplay.

 

In more comical terms: a GPU is like a fast jet (F-15) and your CPU is like a car (Toyota Camry). The F-15 is quite clearly faster, but there are plenty of situations where using it will end up costing you more time - like the overhead of start/stop procedures, taking off, landing, and the inability to park outside of the grocery store... legally.

CPU: 5950x || Memory: 64GB || GPU: RTX 4090

Input: Virpil CM3, TM F/A-18 Grip on Virpil WarBRD base, WW F-16EX grip on TM Warthog base, Virpil CP1 and CP2, Cougar MFD x2 / w CubeSim screens, StreamDeck XL x2, StreamDeck 15-key, TrackIR5

Posted (edited)
Good. As long as it's proprietary, no one will ever make any use of it, save for the superficial graphic enhancements.

 

 

Completely agree with you; I mixed up CUDA and PhysX a little bit.

As for PhysX, as many say (including the developer of PhysX at Ageia, who now works for AMD on Bullet and OpenCL after having been project manager of CUDA and PhysX at Nvidia), proprietary as it is, it is running to its death.

There is no game, and there will probably never be one, that uses PhysX for real physics simulation rather than a poor graphics enhancement (which could be done by any other game-physics engine).

Even if PhysX is one day used on AMD GPUs, I can bet it will all be done by Nvidia to promote their own GPUs, and I'm not sure it would run as it should on other GPUs. (PhysX is more a marketing point to sell GPUs than a tool to increase gaming quality.)

Edited by Lane

- I7 2600K @5.2ghz ( EK full Nickel waterblock )

- Gigabyte P67A-UD7 B3

- 8GB Predator 2133mhz

- 2x HD7970 - EK Nickel EN H2o block

- 2x Crucial realSSD C300 Raid0

- Black Widow Ultimate - X52 -TrackIR 5

- XIfi Titanium HD

- Win 7 x64Pro

 
