Everything posted by some1

  1. In VR, we're always looking at distorted images. The image put on the screen is so heavily deformed that any discussion about the native resolution is moot. Basically every pixel is resized and moved from where the game engine put it. I don't have a G2 yet, but even with the G1, pushing the Steam resolution slider way above 100% greatly improves image quality. Here's what a VR screen typically shows. This is not a "through the lens" view, but a representation of the actual screen output.
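    A minimal sketch of that warp, assuming a simple radial (barrel) distortion model - the function and the k1/k2 coefficients are made up for illustration, not the actual SteamVR/WMR profile:

        # The compositor pre-distorts the rendered frame so the lens "un-distorts" it.
        # Radial polynomial approximation; k1/k2 are made-up illustration values.
        def pre_distort(u, v, k1=0.22, k2=0.24):
            """Map a normalized screen coordinate (centered at 0,0) to the spot
            in the rendered frame it should be sampled from."""
            r2 = u * u + v * v
            scale = 1.0 + k1 * r2 + k2 * r2 * r2
            return u * scale, v * scale

        # Near the centre the sampling is roughly 1:1; towards the edge one screen
        # pixel pulls from a wider area of the rendered image. Rendering above 100%
        # resolution gives this resampling step more source pixels to work with.
        for v in (0.0, 0.5, 0.9):
            print(v, pre_distort(0.0, v))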
  2. It would be interesting to see how the same scenario looks in fpsVR, and whether the frametimes are really better and more stable on a card with more VRAM. If the performance stays the same, then the whole argument about which program measures what is rather moot.
  3. Well, yeah, because ED can't afford to make more of them at that quality. We get one new AI aircraft every couple of years - if we're lucky. At this pace I will retire before they get around to replacing the dozens of old models in the game that still linger from the late 90s. Also the Tu-22. Again, it's because we hardly get any AI models any more. Even those that are previewed in the newsletter often take years before they finally land in the game. Compare the older Smerch launcher to the latest Scud model. Is that extra level of detail really needed in the sim? You'd have to play the simulator like an FPS to even notice that things like wipers or door handles now have three times more detail.
  4. There was a time when ED was pumping out a lot of decent-looking AI models for DCS. Most of the AI helicopters we have now are from that era, as are some airplanes. But then they raised the bar for AI models so high that their quality is now at the same level as flyable planes - if not higher. Ground vehicles have undersides and interiors, etc. No wonder they can't afford to release them for free any more. I think ED should step back and do a reality check. If they can't afford to make super-duper-high-quality AI models with 4K textures and put them in the game for free, they should set the target quality to the average level of the other assets that are currently in DCS. Some of the most commonly used models in the game are still straight ports from the Flanker game made in the 90s. Do we have to wait another 20 years until ED decides it's time to do something with them? Six models from this picture are still in DCS in 2020. And there are more of them.
  5. In a turboshaft helicopter engine like the ones we have in DCS, a clutch is not needed. The power output shaft is not physically connected to the rest of the engine. Once the engine spools up to high RPM and starts producing enough exhaust gas, the power turbine will start to move too. https://commons.wikimedia.org/wiki/File:Turboshaft_operation_(multilanguage).svg#/media/File:Turboshaft_operation_(multilanguage).svg There is a clutch (a freewheeling unit) somewhere in the drivetrain too, but it's only there for the case of engine failure.
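    A toy model of that coupling, just to illustrate the idea - the thresholds and names are made up, and this is not how DCS models it:

        # Free-turbine turboshaft: the gas generator and the power turbine are only
        # coupled by gas flow, and a freewheeling unit lets the rotor overrun the
        # engine (e.g. autorotation after an engine failure).
        def power_turbine_torque(gas_generator_rpm_pct):
            # Below roughly idle the gas flow is too weak to turn the power turbine.
            if gas_generator_rpm_pct < 60.0:              # made-up threshold
                return 0.0
            return (gas_generator_rpm_pct - 60.0) * 2.5   # made-up torque curve

        def torque_delivered_to_rotor(engine_side_rpm, rotor_side_rpm, torque):
            # Freewheeling unit: drives the rotor only while the engine side spins
            # at least as fast as the rotor side; otherwise it disengages.
            return torque if engine_side_rpm >= rotor_side_rpm else 0.0

        print(power_turbine_torque(50))                   # 0.0  -> rotor still during early spool-up
        print(power_turbine_torque(95))                   # 87.5 -> rotor driven at operating RPM
        print(torque_delivered_to_rotor(90, 100, 87.5))   # 0.0  -> freewheel disengaged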
  6. Second that. It would be nice to be able to mix those with a2a or SEAD payload. :)
  7. Makes no sense. The system automatically updates the target position with the TPOD in ground-stabilised mode, and suddenly stops doing that when you switch to PTRK, which by definition is used to track moving objects? Yes, it also severely affects using Mavs against moving targets. Even if the missile manages to lock a target pointed out by the TPOD, it will break lock and slew back to the old coordinates once the target moves. There's no hand-off like in the F-16.
  8. If you designate a target with the TPOD and slew the pod around in ground-stabilised mode, the target position and AUTO bombing guidance will follow the TPOD. All fine here. But once you switch to PTRK, the target position stops updating and remains at the place of the original designation. You have to manually depress the TDC to update the target guidance, but it becomes obsolete once the tracked object moves again. movingTarget.trk
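    A sketch of the observed vs. expected designation logic, purely to describe the behaviour in the track - the names are made up and this is obviously not Razbam's code:

        # "designation" is the system target point used for AUTO bombing / Maverick hand-off.
        def update_designation_current(mode, tpod_ground_point, designation):
            # Observed: only ground-stabilised mode keeps the designation glued
            # to where the TPOD is looking.
            if mode == "GROUND_STAB":
                return tpod_ground_point
            return designation            # PTRK: stuck at the original point

        def update_designation_expected(mode, tpod_ground_point, designation):
            # Expected: PTRK is for moving targets, so the designation should
            # keep following the tracked object as well.
            if mode in ("GROUND_STAB", "PTRK"):
                return tpod_ground_point
            return designation

        print(update_designation_current("PTRK", (42.1, 41.9), (42.0, 42.0)))   # stays at (42.0, 42.0)
        print(update_designation_expected("PTRK", (42.1, 41.9), (42.0, 42.0)))  # follows to (42.1, 41.9)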
  9. More examples of the issue. Many objects have been rendered incorrectly since 2.5.6; before that it worked fine, so I don't buy the "it has to be like that" excuse. Other sims don't have such an issue, and neither did DCS 2.5.5. Example one: in the F2 view, at about 8 clicks of the mouse wheel counting from the most zoomed-in position, the aircraft LODs switch unevenly. Happens maybe half of the time. Most noticeable in the Harrier, because the antennas disappear in one eye, but also in the A-10C II, because suddenly the pilot has a different helmet. Example two: a different count of bomb craters in the left and right eye. Hard to show on an image due to size compression, but instantly visible in the VR headset; just fly around some blown-up objects, and the craters will change in one eye before the other. Example three: UH-1 UN Pilot campaign mission 2, the tents on the FARP are not showing the same in both eyes. In the same mission, the problem with distant cloud rendering can also be observed.
  10. About the only way to tell whether more VRAM really helps in DCS would be to test a 3080 and a 3090 in the same scenario and measure whether the performance gain is higher than it should be from the increased GPU core processing power alone. Other than that, you're comparing apples to oranges. The difference could be from the VRAM size, it could be because of the GDDR6X memory on the newer cards, a different architecture, clocks, etc.
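    A rough back-of-the-envelope of what such a test would show, with core counts and boost clocks quoted from memory and the FPS numbers purely hypothetical:

        # Is the measured gain bigger than what raw shader throughput alone predicts?
        cores_3080, boost_ghz_3080 = 8704, 1.71     # RTX 3080, 10 GB (approximate specs)
        cores_3090, boost_ghz_3090 = 10496, 1.70    # RTX 3090, 24 GB (approximate specs)

        expected_gain = (cores_3090 * boost_ghz_3090) / (cores_3080 * boost_ghz_3080)
        print(f"expected from core power alone: ~{(expected_gain - 1) * 100:.0f}%")   # ~20%

        # Hypothetical measured numbers from the same DCS scenario (placeholders):
        fps_3080, fps_3090 = 45.0, 60.0
        measured_gain = fps_3090 / fps_3080
        print(f"measured: ~{(measured_gain - 1) * 100:.0f}%")  # ~33% -> VRAM (or bandwidth) likely matters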
  11. TPOD operation

    Yes, my guess is that it is a general problem with the Razbam MFD implementation; buttons like brightness/contrast also need to be clicked multiple times.
  12. The logic of HOTAS buttons that need to be pressed and held down (WINC, various DMS functions, etc.) is rather weird in the Harrier. How it works right now: the action is performed after the button has been released, provided it was held down for longer than 0.8 seconds. How it (probably) should work: the action is performed as soon as the button has been held down for longer than 0.8 seconds. That's how it works in every other aircraft in DCS, and it's also what the instructions in the NATOPS manual suggest. There's nothing there about the need to release the HOTAS button before the action is performed.
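    In pseudo-code, just to spell out the two behaviours (the function names are made up, this is not the module's actual code):

        LONG_PRESS = 0.8  # seconds

        def perform_long_press_action():
            print("long-press action")

        def current_behaviour(pressed_at, released_at):
            # Observed: the long-press action only fires on release,
            # after the hold has exceeded the threshold.
            if released_at is not None and released_at - pressed_at > LONG_PRESS:
                perform_long_press_action()

        def expected_behaviour(pressed_at, now, still_pressed):
            # Expected: the action fires the moment the hold time crosses
            # the threshold, while the button is still held down.
            if still_pressed and now - pressed_at > LONG_PRESS:
                perform_long_press_action()

        current_behaviour(pressed_at=0.0, released_at=1.0)               # fires, but only after release
        expected_behaviour(pressed_at=0.0, now=0.9, still_pressed=True)  # fires while still held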
  13. TPOD operation

    There is an indication next to the coordinates. It says TPOD if you're in HTS. If something else is controlling the system designation, like the INS, then that is written there.
  14. Good stuff, thanks for the write up.
  15. This search link is the closest to the "new posts" button on the old forum. It's slow to search and I haven't found a way to control the number of posts per page, but it's better than nothing. It saves you from clicking through the advanced search options every time: https://forums.eagle.ru/search?searchJSON=%7B%22date%22%3A%22lastVisit%22%2C%22type%22%3A%5B%22vBForum_Text%22%5D%2C%22channel%22%3A%5B%5D%2C%22sort%22%3A%7B%22lastcontent%22%3A%22desc%22%7D%2C%22view%22%3A%22topic%22%2C%22exclude_type%22%3A%5B%22vBForum_PrivateMessage%22%5D%7D&btnSubmit=
  16. Same. The "Latest Activity" tab is useless. It's bizarre, but the forum does not seem to have a button to simply show unread threads/posts.
  17. I have to add my voice to the opinion that the new forum is a huge step back in usability. What I miss the most is the high-level view of new topics and unread posts that the old forum had. The "Latest Activity" view is nearly useless - maybe good for a forum that gets ten posts a day, not a huge forum like this one. It doesn't even remember which topics I've already read, it doesn't bring me to the unread portion of a thread, and clicking "more" every couple of threads is very inefficient.
  18. Fixed value, whichever is bigger. Just observe how ridiculously the AI reacts to AIM-54s. The AI has mostly the same routines as 15 years ago, when computers had a fraction of the power of today's machines. So the "NASA computer" excuse is pretty weak.
  19. The HUD is not fixed to your head. You've got a reference frame because of how it reacts to your head movement. Our minds are really good at constructing a 3D scene from limited information, even without stereoscopic or depth information. It's enough just to see that the HUD imagery stays in one place while you move your head forward or sideways, and your mind already knows it's far away, even if the focal plane doesn't match what you see. An HMD is very different, because it's fixed to your head and you don't have any reference as to where it should be located. So in VR it "feels" like it's close to your head, while in real life it would "feel" like it's far away, because you could see it clearly only with your eye focused on far-away objects. A bit like the camera in this HUD photo: https://i.redd.it/jz4066uc24r51.jpg Anyway, this is drifting more and more off-topic. More options regarding the HMD display are fine, and it's great that we'll have them. I'm just pointing out that it's not really more realistic to have the HMD in one eye, as you're still subject to the same VR limitations that make it harder to use than it is in real life.
  20. There is no "3D" in VR if an object is shown only in one eye. In such a situation you can't rely on eye accommodation to judge distance, like you would in real life. Your brain can only "guess" where an object like an HMD overlay is located in space. That's why at least some people get the weird sensation, with the HMD shown in one eye, that it is not in the correct place or that it's slapped too close to their face. Of course you can "train" yourself to use it, but it's not as hard in reality as it is in VR.
  21. What you describe is depth perception from stereoscopic vision, and that doesn't work if an object is displayed only in one eye. In real life, even with one eye closed, you can still roughly estimate distance thanks to the changes in eye focus between objects. You can't do that in VR because, with one eye closed, the other eye just sees a flat screen with all objects at the same focus distance (1-2 meters; manufacturers don't give the exact number). So with the HMD screen displayed in one eye in VR, your mind has no option but to think the display is close to your head, where your eyes are focused (1-2 meters). It would not react like that in real life, with the HMD screen focused on a far-away plane, basically at infinity. Yep, and the 1-2 meters offered by VR headsets is not "infinity". It's just a compromise so that our minds can be tricked into depth perception using only stereoscopic vision. It works fine in most cases, as stereoscopic vision is our main source of depth perception in real life. But it hits the system's limitation with things like HUDs, HMDs and the like. That's why having the HMD screen in one eye only is not quite as realistic an experience as you may think, and harder on your eyes than it would be in real life.
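    A quick worked example of why the fixed focal plane mostly gets away with it, assuming a hypothetical 64 mm IPD and the ~1.5 m focal distance mentioned above:

        import math

        IPD_M = 0.064          # assumed interpupillary distance, ~64 mm
        FOCAL_PLANE_M = 1.5    # typical headset focal distance quoted above (1-2 m)

        def vergence_angle_deg(distance_m):
            # Angle between the two eyes' lines of sight when converged on a point.
            return math.degrees(2.0 * math.atan((IPD_M / 2.0) / distance_m))

        for d in (0.5, FOCAL_PLANE_M, 10.0, 1000.0):
            print(f"{d:>7.1f} m -> {vergence_angle_deg(d):.3f} deg")

        # Past a few meters the stereoscopic cue shrinks towards zero, so one fixed
        # focal plane is a workable trick for most of the scene - but a HUD/HMD
        # overlay drawn in one eye gets neither the stereo cue nor a focus cue.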
  22. Yeah, ED should really fix the propellers on their end.
  23. VR headsets lack depth of field (everything is focused on the same plane, about 1.5-2 meters away), so having the HMD in one eye is not a super-realistic experience either. It's harder on your brain than it really should be.
  24. The devs are already aware, at least according to your predecessor. https://forums.eagle.ru/showthread.php?t=213853
  25. The roof is not casting proper shadows. Other parts of the cockpit are also suspect, but the roof is the most noticeable. You can see the shadow of the rotor head passing through it onto the floor.