DeltaMike

Everything posted by DeltaMike

  1. VACQ is helpful if you're at the same altitude in a turning fight (when it's not bugged). Horizontal scan mode is useful for a sweep if you're not sure of the altitude. Gun mode is useful if they are right in front of you. I have not found the HACQ modes very helpful unless you can see the bad guy.
  2. I dunno about the warthog in particular, but here's your basic functional groupings:
     1. Aircraft control. Speed brake (used a lot). Trim (used very little). NWS. Gear and flaps have to be mapped somewhere; I found it easiest to map them to HOTAS under a function switch. Fuel jett (as opposed to stores jett!)
     2. HUD mode. Nav, BVR; next/prev waypoint is worth mapping to a hat switch somewhere.
     3. Sensors. Radar elevation, radar on/off, PRF, TWS on/off, ACM modes. (You'll use vertical scan a lot, cannon a lot, bore some, flood never.) TDC slew, select, unselect.
     4. Countermeasures. Chaff, flares, ECM.
     5. Weapons. Gun, pickle, change weapon.
     6. Startup. Power, canopy, right engine start, left engine start, taxi light, anticollision lights, all used frequently. If you fly at night, you'll need console lights and HUD toggle.
     7. Comms. I'd reserve a two-way switch for SRS if I were you; most of the time all you need is the comm menu though. Might consider mapping a button somewhere for bogey dope.
     The only thing unique to the Warthog is, I hear people griping about TDC select all the time; I guess they usually map that to the joystick rather than the TDC slew mechanism. Otherwise, as long as you understand the functional groupings you should be able to come up with something that makes sense to you.
  3. Most multiplayer is happening on the open beta release. Buy now:
     - Agree the F18 is a great module. The entry barrier is not high (it's easy to get it flying) but it's hard to master (you'll never get tired of it), and it can pretty much do everything.
     - Agree SRS is way at the top of the list. Adds a great deal to the fun of MP, especially with a full-fidelity module like the F18. That said, both Discord and TeamSpeak are used frequently, and while I prefer SRS I'll admit I get more use out of Discord. Discord is also helpful for hooking up with people to fly with.
     Consider:
     - FC3 package. You might be surprised how often those modules are used in MP; they're very competitive, and the simplified systems make air-to-air a lot of fun. Lots of jets for very little money. Nice for when the F18 starts driving you bonkers (which it does, sometimes).
     - Tacview. If you're working on your technique this is invaluable; I use it all the time. Answers the "what just happened there" question.
     Watch for sales:
     - Persian Gulf. A small minority of MP servers use this map, but it's an extremely cool map and worth having.
     - Combined Arms. Gives you a helpful bird's-eye view even if you aren't interested in shoving ground units around. Pick it up when the price is right.
     Note MP will flog your system. You need lots of CPU overhead and a really fast drive.
     - Get your latest motherboard drivers.
     - You want 16GB of fast RAM; 3200MHz is fine, but you kinda want that. (Some people say you should have 32GB of RAM, but there's less general agreement about that.)
     - Get an SSD and migrate DCS to it. 240GB is enough for your needs, but note that once you are maintaining two releases, with all the maps and umpteen modules, 500GB is better.
     - Check your settings. Vis range, trees, and shadows affect the CPU; you might need to turn those down, especially if you're running an older CPU or anything that runs at less than about 4.4GHz. Turn civ traffic and grass off. (On the other hand, if your GPU is working now, it'll work in MP.)
     Georgia-At-War is the easiest way to ease into MP. They maintain a noobie server where you can practice flying around with other people while you are blowing stuff up, and their regular MP server has a lot of stuff to do, a neat dynamic campaign, and, believe it or not, they are really really nice there; you can come in as a total rank newbie and have a lot of fun. That's mostly "Player vs Environment." There are a lot of "Player vs Player" servers too; Just Guns is a good place to visit. You can also check out our Air Combat Dojo to see how a section works together in both BVR and WVR scenarios (note it *really* helps to be on comms, right?)
     It's easy to look at all the griping around here and hard to really grasp how deep and how cool DCS is until you get into it. You'll never look back. And yeah, all you really need is what you're shopping for and some way to communicate.
  4. You can't look at the headset in isolation; you also have to look at how much computer horsepower you need. Rift S is probably the sweet spot. You can make something happen with a 3.9GHz processor, 16GB of really good RAM, a 240GB SSD, and a 1070ti or Vega 56. That's the absolute minimum; if you don't have that in the box, don't bother.
  5. Haha, that's so cool. The F18 will do that too: if you do a water landing gently enough, it'll auger through the bottom and you pop out in... heaven, I guess. You're probably spawning underground; sync issues can do all kinds of weird stuff.
  6. Question is, how fast can your CPU render a scene? If it can do it in less than 11ms, you can run at 90fps. If you're content with 60fps, you have 17ms to work with. My suspicion is your render time is more than 11 but less than 17ms. So when your Rift defaults to 45fps, your CPU and GPU have plenty of time to render the scene. Either way, it seems your GPU has plenty left to give. For VR you want your CPU render times down around 9ms or less on an empty map, which isn't dreadfully difficult, but depending on your setup that's where I would look first.
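The frame-budget arithmetic above is easy to sketch. Here's a minimal example; the target frame rates are the ones mentioned in the post, and nothing else is assumed:

```python
def frame_budget_ms(target_fps):
    """Time available to render one frame at a given refresh rate."""
    return 1000.0 / target_fps

# Budgets for the frame rates discussed above:
print(round(frame_budget_ms(90), 1))  # ~11.1 ms to hold 90 fps
print(round(frame_budget_ms(60), 1))  # ~16.7 ms for 60 fps
print(round(frame_budget_ms(45), 1))  # ~22.2 ms once the Rift drops to 45 fps
```

So a render time between 11 and 17ms misses the 90fps budget but fits comfortably inside the 45fps one, which is why the headset settles there.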
  7. Yeah, I'm yanking the 480 outa my wife's computer today and replacing it with a Quadro 600. Difference between you and me is, I'm not telling her.
  8. Oh, you'll notice a difference. I noticed a difference when I installed faster RAM, and there was a huge difference going from gen1/gen2 at 3.9-4.0GHz to gen3 at 4.4GHz. Not much point overclocking Ryzen for DCS. Without knowing for sure how single-core performance compares between your present CPU and any of the Ryzens, I'll bet if you can push your current CPU to 4.6GHz+ it should be really close. I take it that's a big "if".
  9. MSI B450M Gaming Plus. Note I wasn't planning on overclocking; I dunno that it would have been my first choice if I had been. I don't think it's shipping with the new BIOS, but it has the button on the back that allows you to upgrade the BIOS without a CPU. And the BIOS does work. Worth checking the MSI website to make sure you get RAM that works at the rated speed. Gets the job done for a hundred bucks.
  10. Keep in mind, in VR you're looking to gain time, and there's more than one way to skin that cat. Easy to understand how a CPU upgrade will improve performance on a busy multiplayer map. What surprised me was, I gained 3ms in frame time even on an empty solo mission where I would have thought I was GPU limited. Which allowed me to up my settings a little.
  11. Yeah I just upgraded from a 2600 -- which wasn't any better than my 1700 -- to a 3600X. Yuge difference. Yuuuuuuge. Dude. Do it. You might even want to take the CPU for a spin before you upgrade your GPU, it's that good
  12. Then there's this: https://www.gamersnexus.net/hwreviews/3489-amd-ryzen-5-3600-cpu-review-benchmarks-vs-intel However, I feel you make good points. I think for people already invested in mobo and memory, 3rd-gen Ryzen makes some sense -- especially if you're running a 2600, which imo is a piece of garbage -- but I don't know that people should be eager to kick Intel to the curb for DCS, not by a long shot. My only thing is, that jump from 3.9GHz on a gen2 to 4.4GHz on a gen3 is a huge jump. I thought it might help a little, but holy cow.
  13. There's an inverse relationship between render time and FPS: there are 1000ms in a second, so 1000 / frametime in ms = frames per second. In VR, you're typically locked into a certain FPS; Oculus defaults to 90, 45, or 22.5, usually. If you want a more granular measurement -- and anything would be more granular than that -- you could turn ASW off and measure frame rates, or you could just look at the render time. Latency matters; arguably it's the key to the whole thing. You don't want a delay of much more than 20ms between when something happens in the virtual world and when that information hits your retina, otherwise you'll puke. Persistence, or in other words how fast you can turn a pixel off, has an effect on latency. Faster monitors have lower persistence -- they pretty much have to -- but I don't know that you have to have 90fps to have a comfortable experience. Net of everything, VR users (I would submit) are more interested in latency than framerate. The two are closely, if not perfectly, related, because render time accounts for a lot of your latency. I used to think I knew what GPU render time was; now I'm not so sure. I think it's probably total frame time.
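The 1000/frametime relationship, plus the way Oculus snaps to fixed tiers, can be sketched like this. The tier-picking logic is a simplified assumption about how ASW behaves (pick the highest of 90/45/22.5 whose budget the render time fits), not the actual Oculus implementation:

```python
def fps_from_frametime(ms):
    """Unconstrained frame rate implied by a render time in milliseconds."""
    return 1000.0 / ms

def asw_tier(ms, refresh=90.0):
    # Simplified ASW-style assumption: try the 90, 45, then 22.5 fps tiers
    # and return the first whose frame budget the render time fits.
    for divisor in (1, 2, 4):
        tier = refresh / divisor
        if ms <= 1000.0 / tier:
            return tier
    return refresh / 4  # worst case: bottom tier

print(round(fps_from_frametime(22), 1))  # ~45.5 fps unconstrained...
print(asw_tier(22))                      # 45.0 -- but locked to the 45 fps tier
print(asw_tier(30))                      # 22.5 -- miss the 45 budget and you fall hard
```

That fall from 45 straight to 22.5 is why render time is the more granular number to watch.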
  14. My thinking with the 2600 was, I can overclock that sucka and save $50! Didn't work out that way. I dunno, I kinda had the idea that the 2600X was an enhanced 2600, when the reality is, I'll bet the 2600 was dug out of the trash can. But maybe it was the silicon lottery, and maybe the cheap mobo. Hearing much the same about the 3600; haven't seen any OC results on the 3600X. Interesting question, given the modular design of the 3000 series. I'm just not sure I see the point in overclocking unless I can get it significantly above the single-core boost speed. For CPU mining, which is a parallel process, yeah, I would definitely overclock, even if the final number was a shade less than the boost speed. But for DCS, I don't think I want to leave anything on the table single-core-wise. That's also why I didn't go with the 3700X; I mean, maybe six cores is enough for load balancing, and the burst speed is the same... but I wouldn't know.
  15. Yeah, it's gotta be reporting total frametime. In VR, a lot of things have to happen sequentially: turn pixel off, figure out which way the head is pointing, render the image, turn pixel on. I get the feeling that the CPU and GPU have to do their work sequentially, at least sometimes. So, with Oculus Tray Tool anyway, "how long it takes the GPU to do its thing" does not equal "GPU render time," and it's not even exactly "GPU time - CPU time," near as I can tell. So far I've found the following:
      1. Anti-aliasing affects GPU time but not CPU time to any significant extent.
      2. Adding units to the map affects CPU time much more than GPU time.
      3. Other things, like shadows and trees, affect the two numbers similarly.
      See also https://forums.eagle.ru/showthread.php?t=200737 Bottom line is, with minimal settings I now have 3-4ms to work with. I can add a lot of units, a little bit of shadows, or a little bit of anti-aliasing. If I had a faster GPU, I would probably have more than 4ms to work with, and I could add a lot of shadows or anti-aliasing, but probably not a whole bunch more units than I can now and keep FPS at target. Likewise, if I upgrade to a Rift S, I can push my frame render time out to 25ms, so I could drive more pixels and might still have something left over.
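The "sequential" guess above is easy to illustrate with made-up numbers. If the GPU has to wait on the CPU, the stage times add; if they pipeline, the slower one dominates. Both stage times below are purely illustrative:

```python
# Illustrative only: these stage times are made up, not measurements.
cpu_ms = 9.0   # assumed CPU simulation + draw-call submission time
gpu_ms = 13.0  # assumed GPU rasterization time

overlapped = max(cpu_ms, gpu_ms)  # perfectly pipelined: slower stage dominates
serialized = cpu_ms + gpu_ms      # GPU waits on CPU: the times add up

print(overlapped)  # 13.0 ms per frame if the two overlap
print(serialized)  # 22.0 ms per frame if serialized -- right at the 45 fps budget
```

Which would explain why a tool's "GPU time" readout can look suspiciously like total frametime, and why a CPU upgrade can appear to improve "GPU" times.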
  16. Platform in sig. For reference, I typically run the same settings on my Vega as my buddies do on a 1080 or 1070ti.
      The Ryzen 5-3600X is a six-core processor with a base clock of 3.8GHz and a max boost of 4.4GHz. AMD is claiming a 15% increase over the last generation in instructions-per-cycle. That doesn't mean it's 15% faster than an Intel chip running the same clock speed, but I figure it's nothing to sneeze at. Comparison is with a Ryzen 5-2600, 3.4GHz base; it'll boost up to 3.9GHz and I've seen it do that during stress testing. I was able to overclock it to 4.0GHz, no more, and finally just wound up running it stock. I "overclocked" the memory to get it to its rated speed; the 3600X was run stock without overclocking.
      All tests done in the F18. Graphics settings were held constant; if you are dying to know I'll go over them, but it's not really relevant here, I don't think. Since we are talking about VR here, I'm going to report render times rather than FPS. To convert render time to FPS, divide 1000 by the render time. So a 22ms render time should give you about 1000/22 = 45 fps. Here's what I found.
      1. Free flight over the Caucasus. With the 2600, my CPU render times were in the 9-12ms range, GPU time 22ms. With the 3600X, CPU render times fell to 8-9ms, GPU time fell to 19ms.
      2. The real challenge, and the reason for the upgrade, is multiplayer. Previously, on Growling Sidewinder, my CPU times on the ground were ~26ms, GPU ~41ms. For those who don't want to do the math, that's a frame rate of 22.5 fps. That typically settled down to 16/22 in the air (45fps). With the 3600X, I was running 16-17ms CPU times and 22ms GPU times, for a fairly smooth 45fps even on the ground.
      3. The ultimate test, for my system anyway, is Georgia at War, a very busy map with tons of AI units. With the Gen1 and Gen2 Ryzens, my CPU times were 27ms, GPU 42ms on the ground, and it didn't improve much in the air; I was pegged at 22.5 fps. After the upgrade, I spawned in with 16-17ms CPU times and 22ms GPU times. FPS wasn't consistent on the ground (my GPU times spiked up to 42ms a couple of times), but it settled down once I took off and gave me a consistent 45fps.
      I'm actually kind of amazed at the results. Going from 22.5fps to 45fps in GAW is a game changer. I was also surprised that my GPU render times improved as much as they did. CPU time and GPU time are highly correlated, and I'm getting the feeling the GPU has to wait on the CPU for pretty much everything other than anti-aliasing (although I wouldn't know for sure).
      Upgrading was easy. I bought an MSI B450M Gaming Plus about six months ago anticipating this upgrade. An OK-ish board, probably not the best for overclocking but perhaps enough for six-core chips. I don't think they are shipping with zen-2 compatible BIOS yet, but it's an easy flash, even if you haven't installed the CPU yet. The BIOS is still in beta but I had no problems getting it running; it booted right up. Worst part about the whole thing was getting that frickin Wraith cooler bolted in. Net of everything, that's about the best $250 I've spent on this game yet.
  17. Near as I can tell, CPU and GPU render times are highly correlated for everything other than anti-aliasing. So, if dialing down MSAA and PD helps, you probably need more GPU. OTOH, if everything is fine until you log in to a busy server, and dialing down your settings doesn't help, you probably need more CPU. If you can only do one, you mainly have to figure out what you want to accomplish.
  18. You are gonna want some anti-aliasing, and for whatever reason (deferred rendering?) MSAA isn't any more efficient than PD in DCS. The effect is subtly different (I guess); some like one, some like the other, some like a little of both. Pick your poison, I guess. Main thing is, if you decide to go with supersampling, remember PD *is* supersampling; don't make two or three passes with it. The in-game adjustment is plenty.
      Weird thing about supersampling is, it's kind of a U-shaped curve. A year ago people were all about pushing it as high as it would go, 1.6 not uncommon, some running 1.8. As you get much above 1.4, text gets hard to read and objects get hard to spot, while the load on your GPU goes up with the square of PD. Which is probably why people use both. So people seem to be favoring lower PDs these days, and if they want more they add in MSAA. And I will say, MSAAx4 looks pretty awesome if you have the horsepower to drive it.
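The "don't make two or three passes" warning is just arithmetic. PD scales each axis of the render target, so the pixel count (and roughly the GPU load) grows with PD squared, and stacking two supersampling passes multiplies them together. A quick sketch, using the original Rift's 1080x1200 per-eye panel as the baseline:

```python
# Original Rift per-eye panel resolution
base_pixels = 1080 * 1200

def shaded_pixels(pd):
    # PD scales both axes, so pixel count grows with pd squared.
    return base_pixels * pd ** 2

# One pass at PD 1.4: nearly double the pixels of native.
print(round(shaded_pixels(1.4) / base_pixels, 2))        # 1.96x the work

# Stacking PD 1.4 in-game on top of 1.4 in an external tool compounds:
print(round(shaded_pixels(1.4 * 1.4) / base_pixels, 2))  # 3.84x the work
```

So two "modest" 1.4 passes cost almost as much as a single PD of 2.0, which is why the in-game adjustment alone is plenty.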
  19. https://forums.eagle.ru/showpost.php?p=3897350&postcount=12 Could be an explanation for why the Vega -- even the 56 -- is able to hold its own in DCS, why the 5700 may not be all that (although I'll bet it does OK for the money), and why the VII might be pretty good. It just needs to come down a little more in price, because the 2070S is probably going to be a very good GPU for DCS. The 1080 is a known quantity, and if the 2070S can outrun it for $500, that's gonna be hard to beat.
  20. Read a thesis (which I can't find right now) suggesting that the combo of MSAA and SSAA (supersampling, or "PD") was best for text legibility. Just wondering if a) that's true (see this) and b) if using the two together really accomplishes anything.
      Rift users love their SSAA; we think it defeats the screen-door effect, although reading Carmack's rant I wonder if it's the best solution, especially for a game like DCS. I know nothing about computers but a bit about cognition, and it seems to me that MSAA is helping with edge detection, which is what I think we need. That should be the key for text recognition, or spotting.
      MSAAx4 looks fantastic but my GPU can't choke it down. (Well, it could, but I ain't turning off shadows, they're pretty. For reference, Vega slots in between 1070ti and 1080 in DCS.) Trying to run any MSAA with any supersampling is a non-starter. Between the two -- MSAAx2, or PD 1.2 -- the effect is about the same visually. Maybe MSAA is a tad better for the world, and PD is a tad better for text. Render times are the same. Which I found a little surprising.
      That's a problem for those of us too cheap to buy a 2080ti, but it could be a problem for anybody if they upgrade to a higher-res HMD. The narrative I heard from Pimax, and for the Rift S, is: yeah, you're driving more pixels, but you don't have to supersample as much, and MSAA is sooooo much more efficient, so you should be fine! Well, I dunno about that. My hypothesis is that deferred rendering is the main thing holding VR back right now. (And what makes the Rift S work is the lower refresh rate.) But I would not know.
  21. Questions: 1. What are the performance implications of using MSAA with deferred rendering? 2. What about using MSAA along with supersampling? 3. What anti-aliasing strategy do you prefer for the Rift, and why? What about the S? What about headsets with higher pixel density?
  22. It's a really clear decision for me: I can spend $250 and go from 3.9GHz to the equivalent of 4.8GHz. For a new build it's a different equation; it kind of revolves around the mobo, I think. Most of us are angling for the upgrade path, and that is paying off big time. Whether it'll pay off in the future is another question. Clearly, with an unlimited budget, Intel is king, and I imagine they will do what it takes to stay there. And AMD will probably continue to stay focused on the margins.
  23. Yeah, I should have clarified DCS can't use more than 2 cores yet, but worth mentioning there's talk that the bigger Ryzen 3000 CPUs overclock better (e.g. at lower voltage), if that's what you're into. Between that and the IPC, it's not bad. I don't think it's any better than Intel, but ya know, if you can get close with a $250 CPU, a $100 mobo, and $200 of RAM, well... what's the marginal benefit? And what's the marginal cost? That's AMD's business model in a nutshell, and I gotta give em credit. You make a valid point; question is, what's enough? In VR at least, I've noticed a lot of variability in CPU render times; it really depends on what's going on in the map. Flying solo over the Caucasus is a lot different from spawning in GAW during prime time lol. I'm not sure what it would take to cram that through a single core. But it sure is fun to try.
  24. Oh yeah, I'm like over a hundred in pancake mode. Two different kettles of fish. A lot of us are dying to know what you come up with. What settings are you running currently?