Everything posted by Worrazen

  1. Looks like this was addressed in the update on the 19th. Maybe you can test with RenderDoc again to verify the implementation. https://www.digitalcombatsimulator.com/en/news/changelog/openbeta/2.9.2.49629/ Perhaps the issue depends on particular scenes, which the benchmark by Hiob might not cover. Then again, small fixes aren't to be shrugged off; they all add up eventually. Still, it's a good idea to scrutinize this to determine whether it had its full effect.
  2. Oh, that's interesting, right! Just to be extra technically correct and clear for everyone else: you mean those who transmit with an in-game cockpit radio that now supports the DCS Voice Comms component/functionality, which happens to use VoIP technology for transmitting the voice data between multiplayer PCs over the internet. This discoverability and trackability of radio voice signals goes along with radio signal simulation in general, and would be needed for eavesdropping on radio comms, sighting enemy contacts by their radio emissions (AWACS, SIGINT), and of course SAR/CSAR, or just plain locating separated forces on the ground or elsewhere: squadrons regrouping, ejected pilots needing evac, etc.
  3. Whatever mod/external program it will be, it would rather be something on top, expanding functionality and interfacing with the internal components of DCS VC, working with it instead of completely replacing it. Key lines from the comments on the Wags YouTube video:
  4. I guess just lower the graphical fidelity when using multi-view then; of course perfection is impossible, and there will be a set of tradeoffs for different systems. What about DLSS and FSR? I do get your points. But you can't keep holding one end of the scale fixed forever. Well, you can, but at some point you can start to scale up in all directions with the added new performance. Using that argument strictly, games could have stayed at 30 FPS forever and kept evolving only in graphical fidelity and resolution; or you could keep getting more CPU cores and just play games, with no simultaneous recording. Playing two games at the same time should be possible, but that's not a practical scale for us humans, so all the CPU cores added to desktop CPUs in recent years have instead been used for recording and streaming on the same host machine with OBS and other solutions while also playing, without the need for additional capture hardware or a separate computer, which was one of AMD's major marketing points. So none of this is solely my idea; large companies have figured out the same possibilities. The possibility is certainly there, and may not be so far-fetched with the core engine improvements DCS is actively receiving. The developers of DCS could well agree, but the small number of users here asking for this might be one of the reasons holding the idea back, as with a lot of other ideas. The community-building aspect and the marketing of DCS may be among the stronger reasons to offer more cinematic-making tools to users, even if there's currently no huge demand for them; sometimes users themselves don't know how handy or useful something would be until they actually test it out and use it. That happens to me all the time in other places/fields.
The virtual aerobatics/air shows community could be one of the bigger users of such features. It's definitely healthy to support niches if you want them to grow larger faster and contribute to the various DCS communities, provided there's some expectation that their growth would cover the costs. The most popular modules sell themselves automatically because of many different factors (movies, people's own interest in flight, etc.) and have momentum going; they're like heavy trains that don't need much pushing to keep going, so they don't need much promotional effort. Other niches are like steam locomotives struggling uphill, and giving them a boost can make a big difference and may prevent that train from stalling.
  5. This is really only the very first multi-core engine upgrade step, and even this first step was, from what I think were official words, a preview and in no way final. The separate F-views would/should use their own thread, or a set of threads altogether. With Vulkan I also meant a set of other graphics-related performance features and optimizations. It's of course going to be more demanding naturally, but not to a grinding halt, I would allow myself to speculate. It really is heavily dependent on what kind of scenes would be rendered in those additional F-views.
  6. Hi again folks! Oh, just look what the September 2023 newsletter brought us. Are you thinking what I'm thinking? Even if it's just something out of the things discussed here, it's a great start! It's very possible that it's something else but similar, or separate yet part of the overall topic, planned well before my post here. Short answer reiterated, buried at the end of my larger post: MULTI-CORE ENGINE UPGRADES and VULKAN may solve or greatly diminish these concerns. Right. It's been a few months, so I would need to refresh on everything I discussed, but my quick response is that this doesn't need to be a feature for the wider public necessarily, but a capability for the cinematic niche. If I didn't mention it before, then this is probably something I forgot in my initial posts: you could even do something like per-frame render-export recording, similar to how you render out a scene in Blender. It doesn't have to be real-time, but the resulting video would look as if it had been running/rendering in real time; the engine would need to know this in order not to skip rendering frames, and to sync the game-time tick with the frame rendering. I think this is how it was done in the PC game Crysis, where a modder used the CryEngine 2 Sandbox Editor to explode 6000 barrels. The video was amazing and got a couple of million views, but it wasn't really recorded in real time; I think the engine had a console command where each frame would be exported to a BMP or PNG file, and the video was then created by combining these image frames. However, this would only be useful in pre-scripted cinematic scenes that don't require interaction (well, a replay is a form of script); perhaps ED does it with programming internally, and to iterate faster one could use wireframe mode to avoid spending rendering time when planning/drafting the cinematic scene.
I think I'm touching on separate subjects which don't necessarily require/involve multiple separate F-views on the same host machine/instance/game session. I'd rather stop here for now before I get tangled up in too much complexity with ideas.
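The core of the per-frame export idea above is a fixed timestep decoupled from wall-clock time. A minimal sketch (hypothetical, not the DCS or CryEngine API) of how an engine could schedule game time so that frames exported slowly still play back as real time once assembled:

```python
# Hypothetical sketch of fixed-timestep offline rendering: instead of
# advancing game time by real elapsed time, each exported frame advances
# it by exactly 1/fps, so the assembled video plays back as real time
# even if each frame takes seconds to render.

def render_offline(total_seconds: float, fps: int = 60):
    """Yield (frame_index, game_time) pairs for a fixed-timestep export."""
    dt = 1.0 / fps                      # fixed tick, decoupled from wall-clock
    frames = int(total_seconds * fps)
    game_time = 0.0
    for i in range(frames):
        # the engine would render the scene at `game_time` here and write
        # e.g. frame_00042.png; this sketch just reports the schedule
        yield i, round(game_time, 9)
        game_time += dt

# Example: a 1-second clip at 60 fps produces exactly 60 frames
schedule = list(render_offline(1.0, 60))
```

The exported image sequence could then be assembled with a standard tool, e.g. `ffmpeg -framerate 60 -i frame_%05d.png out.mp4`, which is essentially what the Crysis barrel video workflow described above amounts to.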
  7. Very good. I don't think it's that complicated a task for the sizable improvement it'll bring; waiting for the serious AI enhancements would then be much easier.
  8. I get your reservations, but first, err ... how did you manage to get two working to even test/know that it barely handles them? You probably didn't mean that literally. Considering the above, one more wouldn't take that many system resources except raw GPU horsepower, and GPUs keep getting much more powerful each generation than any other piece of hardware. Let's keep in mind Vulkan rendering is not here yet; draw calls (number of units, smokes, effects, shells, missiles, cargos, buildings on screen) won't make nearly as big an impact on performance as they do with DX11, so even with many additional AAVOs pointing their cameras at areas of high unit count, the (CPU) performance wouldn't necessarily be hobbled as it would be right now. Then, various YouTube videos wouldn't necessarily need to show full-sized alternative-angle shots; they could have two smaller-sized layers instead, and it would be a waste to record those in full resolution when they could be much lower, again making things less demanding when recording.
  9. That is indeed one big reason where replays come in useful, so in no way is this idea intended to be a replacement for replays. Replays are obviously much needed for other things and should be improved, as the community has rightfully been saying. However, in slower-paced, single-player gameplay, the time savings and the flexibility could be well worth it for those users who are serious about it; they would be willing to invest in a good controls/hotkeys recording-management setup and get accustomed to it. This does require additional work from the DCS developers: for this feature to excel it has to have such flexibility, controls, configuration, and management developed for it, and that would take a non-insignificant chunk of development time. But it's important for considerably raising the practical usefulness of the feature, at least for the lower performance tiers, because higher tiers would just keep recording multiple outputs all the time and not worry about switching/enabling/disabling mid-gameplay. There could be a whole dedicated configuration page for these, which I'm now going to more clearly call Additional A/V Outputs (AAVOs). You could first enable the feature and then add AAVOs, let's say up to 6 of them. On one hand I feel this is stretching it, but then again, for very powerful rigs that would feel like an artificial limitation.
Perhaps there could be a simple dynamic limit based on the number of CPU cores: if you have a 6-core processor, we'll only let you have up to 2 AAVOs (perhaps with an override in configs, but that would count as modding and you'd get no "warranty/support"). Currently I'm going forward with the separate window-on-the-taskbar approach, not the MFCD Export approach (one big internal resolution that's a combination of the horizontal sizes of all the connected display devices), because I don't see how you can have audio separation with that approach. There could be some workaround or trick, but I'm not going to speculate and spend time on that right now. You would then be able to select each of these AAVOs in a vertical list, for example, and configure all kinds of things for that specific AAVO: its resolution, its cropping, various applicable graphics settings such as anisotropy and anti-aliasing, things that wouldn't affect any other output or anything global, gameplay- or simulation-wise. You could also attach Camera F-View playlists/presets to specific AAVOs, selecting which Camera F-views you'd like to have on a quick-switch list, which would be mappable to controls for easy switching of your favourite views without having to scroll through all of those that DCS offers. Normally, of course, you should be able to switch to any* of the Camera F-views while in-game, irrespective of any preset settings; what I'm describing is a quick-switch list, additional functionality with its own control mappings, and it would be exactly how you configure it, respecting the order. So, for example, when a user presses a key mapped to AAVO-3_CameraFViewQuickSwitchUp (..Down), it would switch between F4, F7, and F3, in order, because that's what the user happened to configure in the preset. Why such presets? It's simple.
It's because different kinds of battles (dogfights, intercepts, patrol, CAS, sea battles, ground battles, cinematics) may each prefer a different set of Camera F-views that they use most frequently. Note: I mentioned earlier switching between F-View cameras that are on one of the outputs, or in other words Camera F-view swapping. This still holds: if you press the quick switch for an output while it's displaying a camera that is not in its quick-switch preset, it would just switch to the first Camera F-view on that output's preset list (that would be easiest), or it would remember where it was before it switched to a camera not on the preset list. Note 2: You wouldn't need to switch your main screen at all, and you wouldn't need to swap a particular Camera F-View between outputs if you have no need to; you could switch between different Camera F-views in the AAVOs freely, as long as it doesn't land on the Camera F-view you are currently viewing in the main output, in which case it would simply auto-swap. Or actually: Additional feature: there could be an option to "Disable auto-swapping" (checkbox), or rather an ability to lock specific Camera F-Views to a specific AAVO so that you can't accidentally switch away from it or swap it to some other output. Say you want your primary output to always show F2; you would be able to do this to prevent accidents. Additional feature: Reset to default. When hitting this key you would reset all AAVOs to their default Camera F-views, which would be configurable as in "Make this Camera F-View the default for this AAVO". Also, switching would work regardless of whether the output is being actively recorded or not, as recording is (at least for now) intended to be done by OBS, so there should be no technical concern there. There is a bit of a question of why it would switch at all, if we accept that you might also freeze/suspend rendering and audio output for an AAVO while it's not being recorded.
The switch action could be queued so that it takes effect when you activate rendering/outputting (not necessarily recording, but usually yes). -------------------------- DCS video content creators also do tutorials, showcases, testing, and comparisons where the gameplay is slower, even as slow as step-by-step, but also highly pre-planned and therefore predictable by the author, which means there would be much less of a problem dealing with AAVO management in-game. Also, DCS already has a lot of controls just to deal with normal gameplay, so what's a few more? Otherwise you could of course just keep the renders and recorders rolling all the time, which, for powerful machines (the target audience of this feature), wouldn't be so far-fetched to consider. For example, your machine may be powerful enough for 2 additional matching-resolution AAVOs, but you still want a third-angle scene for one of the moments; you would only need to go into the replay once or so, probably much less than usual*, still saving a lot of time in total. -------------------------- Oh, I totally forgot: AAVOs would not require simulation/physics/memory duplication! Additional A/V Outputs would most likely (99.9%) not require any additional logic/AI/physics/simulation/game-state calculations to be done just for themselves, including any duplication of assets and textures, so the CPU, RAM, and VRAM* requirements wouldn't increase linearly with every AAVO; some VRAM (and to a lesser extent RAM, for audio) is of course expected to be needed for the output itself. -------------------------- Oh, I totally forgot #2: multithreading would be a lot more important with this. Everything that's duplicated for each AAVO would preferably use its own thread and be designed fundamentally with MT in mind, and all of the MT efforts should of course develop around the AAVO feature for best results! Or at least where it makes technical sense, if there are no sync issues.
Different threads for the stuff around rendering, and of course audio. AAVOs should also have their own instance of the audio system running on a different thread, if it is still running on one thread by then (since there are MT improvements across the board, and we don't exactly know how far the original plans go, presumably not considering the AAVO feature). But yes, if the sync issues are big enough to make multi-threading less effective here, it would again be up to the power of the computer's CPU to handle all of these audio outputs in one thread, and you'd be limited by that, understandably.
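The core-count-based AAVO limit suggested above could be a trivial heuristic. A minimal sketch, with the function name and the one-AAVO-per-three-cores rule both invented here purely to match the "6 cores, up to 2 AAVOs, hard cap of 6" example:

```python
import os

def aavo_limit(cpu_cores: int, hard_cap: int = 6) -> int:
    """Hypothetical heuristic: allow one AAVO per three CPU cores,
    capped at `hard_cap` (the UI limit of 6 suggested above)."""
    return min(hard_cap, max(0, cpu_cores // 3))

# e.g. a 6-core CPU would get up to 2 AAVOs, a 16-core up to 5,
# and anything above 18 cores hits the hard cap of 6
limit_here = aavo_limit(os.cpu_count() or 1)
```

A config override could then simply bypass this function, matching the "no warranty/support" modding caveat.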
  10. Thank you for pointing that out, much-needed validation of the idea; I didn't even know that. Your concern is absolutely valid, but technically it's not fair to hobble the progress of the DCS ecosystem overall, and in other areas, just because one area of it is lagging way behind while all hands are on deck to improve it. To be fair, this feature doesn't impose any penalty on anyone; it's completely optional. To reiterate one of the major arguments: it should be developed sooner rather than later, not because of some urgency for users, but because it would make big economic and technical sense to do this properly while the MT/Vulkan programmers are still in deep development, so that before finalization (not necessarily the initial release), this kind of capability is accounted for in the core graphics implementation as a standard component and not a bolt-on. Once MT/Vulkan is relatively finalized, these programmers could either take up other tasks or be let go. What then if serious interest arises later: would those specialized programmers be re-hired? Pay them again just for this feature? Unlikely they would be available. Would there be enough other workforce available on short notice? Or do we pull less-specialized programmers off other tasks to bolt on this feature as a stop-gap? Of course we would like to avoid these scenarios. ---------------------------------------------- Found what I was looking for earlier: actual commentary on how one of the YouTubers makes DCS videos, with an important note. Grim Reapers: https://youtu.be/2-jYlE-GfVo?t=405
  11. Perhaps this could be much faster to implement than fixing (rewriting) the replay system; that may also be a major argument, but only the developers can give an idea about this. I think everything else around it (making sure it works well in practice, switching, providing each of the outputs its own graphics settings in the GUI) is more of a chore. --------------------------------------------------------------------------------------- I have created an example recording-process demonstration (a full guide, even) using MFCD exports that can be done with a single physical display device, just one monitor. No need for anything else. An overview of a "virtual monitor" in Windows: https://decontev.com/virtual-monitor/ (just an explanation, no need to download anything from there)
1. Install the Amyuni USB Mobile Monitor Virtual Display Driver. Source: https://www.amyuni.com/forum/viewtopic.php?t=3030
1.a You can download it from https://www.amyuni.com/downloads/usbmmidd_v2.zip
1.b Extract it to a dedicated location such as C:\Additional_Utilities\usbmmidd_v2
1.c Go to Start, search for Command Prompt, right-click, and Run As Administrator.
1.d Navigate to the extracted directory (the cd command).
1.e Run this command (64-bit): deviceinstaller64 install usbmmidd.inf usbmmidd
Don't run the second command just yet, but keep the CMD window open, because once you actually add a virtual display, you can't modify its list of selectable resolutions anymore.
2. Open Registry Editor and navigate to:
2.a Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WUDF\Services\usbmmIdd\Parameters\Monitors
2.b This is a list of allowable resolutions that will be available to your virtual display(s) once they're created. Only these can be switched to from Windows Settings.
2.c You can click on any of the string-value parameters and modify their data, which represents the resolution in Width[comma]Height format.
Apparently there can't be more than 9, so do not create more values than there already are.
2.d For this demonstration I am using a 3072x1440 resolution, because I'm exporting 3 MFCDs, and MFCDs in DCS can have a maximum resolution of 1024x1024 (at least according to the GUI); 3 times the width of 1024 is exactly 3072.
3. Close Registry Editor and run the second command in CMD: deviceinstaller64 enableidd 1
3.a This should add 1 (one) virtual display to your computer. For this demonstration I will not be adding any more displays; you can add up to 4 virtual monitors with this driver in total.
4. Open Windows Settings -> Display. Your virtual display should be recognized.
4.a Select it and then choose the appropriate resolution below, because the active resolution selected by default may differ from the resolution you intended to use when you made it available in the Registry Editor.
4.b You can close Windows Settings and CMD.
5. Navigate to C:\Users\username\Saved Games\DCS.openbeta\Config\MonitorSetup - create the folder if it doesn't exist. (Note: forum formatting may display the backslashes oddly; this is only visual.)
5.a Create a new text file and rename it to "1P+1V_3MFCD", for example, but also rename the extension to .lua, then open it with Notepad++ preferably, or similar. (You should have file extensions visible, as is standard when doing this kind of thing; newer versions of Windows hide them by default. If you do not see ".txt" at the end, you must enable that option, or else the resulting rename will look like ".lua.txt" and it won't work.)
5.b Copy and paste the example Lua code from below:

_ = function(p) return p; end;
name = _('1P-2560_1V-3072-3MFCD');
Description = 'Configuration for a total output resolution of 5632x1440. A main display of 1440p on the left and a secondary (virtual) display of 3072x1440 on the right for exporting 3 MFCDs in 1024x1024 each.'
Viewports = {
    Center = {
        x = 0;
        y = 0;
        width = 2560;
        height = 1440;
        viewDx = 0;
        viewDy = 0;
        aspect = 2560/1440;
    }
}
LEFT_MFCD   = { x = 2560; y = 0; width = 1024; height = 1024; }
CENTER_MFCD = { x = 3584; y = 0; width = 1024; height = 1024; }
RIGHT_MFCD  = { x = 4608; y = 0; width = 1024; height = 1024; }
UIMainView = Viewports.Center
GU_MAIN_VIEWPORT = Viewports.Center

5.c Save the file and close it, but you can keep the Explorer window open in case you need to correct or modify anything.
6. Install OBS if you do not have it already: https://obsproject.com/
6.a Create a new profile that covers a canvas size of 2560x1440 and an appropriately scaled output resolution (keep it the same if you want), plus all the other settings you wish to have for recording the main game window of DCS.
6.b Create a new profile that covers a canvas size of 1024x1024 and keep the scaled output resolution the same for a clean image.
6.c You can also create a combined 5632x1440 canvas profile if you wish to do additional cutting and positioning in a post-processing step; it should be possible to achieve similar results in different ways.
6.d Or you can download the full example OBS profiles and scene-collections package from here, for import: DCS_VirtualDisplay-MFCD-Recording-Example.zip
6.e Import the profile and scene collection, but note that you have to select the correct display devices, as these IDs are specific to device/driver/OS/config etc. and won't carry over from my computer.
7. You can now record the (invisible*) virtual display with OBS and capture those MFCDs either as one big video, or open 3 instances of OBS and record each of the MFCDs separately, each to its own video file.
Additionally: one major piece of functionality that would significantly improve the performance requirements this imposes is that DCS wouldn't really need to render out the additional outputs in most cases, but could keep them frozen/suspended until activated.
DCS and OBS could be synced together (not literally, but never say never) to record and render on a manual or other trigger, so there's no necessity to record, or even render, all of the additional F-view outputs at all times. The player/operator would, to the best of their ability, predict which moments to capture from more than just the primary view, and would control which additional outputs to activate (render resume/unfreeze) via hotkeys or otherwise. Perhaps even scripting could be introduced for creating event triggers for such recording purposes, for example "start rendering and recording if X unit fires Y weapon". There's no doubt this would help cinematic making, as the editor would then have a library of shots from many F-views and be able to start editing from a larger set, instead of spending a lot of time just recording and re-running replays.
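The event-trigger idea above ("start rendering and recording if X unit fires Y weapon") could boil down to a small rule registry. A hedged sketch; every name here (the class, `on_fire`, `handle_event`, the unit and weapon strings) is invented for illustration and is not the real DCS scripting API:

```python
# Hypothetical sketch of scriptable recording triggers: rules arm a
# specific AAVO when a matching "unit fired weapon" event arrives.

class RecordingTriggers:
    def __init__(self):
        self._rules = []          # list of (unit, weapon, aavo_id) rules
        self.active = set()       # AAVO ids currently rendering/recording

    def on_fire(self, unit: str, weapon: str, aavo_id: int):
        """Register a rule: wake AAVO `aavo_id` when `unit` fires `weapon`."""
        self._rules.append((unit, weapon, aavo_id))

    def handle_event(self, unit: str, weapon: str):
        """Called by the engine on any weapon-fire event; resumes
        rendering/recording on every matching AAVO."""
        for u, w, aavo in self._rules:
            if u == unit and w == weapon:
                self.active.add(aavo)

triggers = RecordingTriggers()
triggers.on_fire("Enfield-1", "AIM-120C", aavo_id=2)
triggers.handle_event("Enfield-1", "AIM-120C")   # AAVO 2 wakes up
```

A real implementation would also need the reverse rule (suspend after N seconds of no matching events) so outputs return to the frozen state described above.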
  12. Hello. I was wondering how various DCS video/stream creators record takes from different F-views of the same exact moment in a match/session. Suspecting the worst, it seems they really do just rely on replays, while hoping they work at all. That also means that if they want multiple angles/F-views of the same event, they have to go and run a replay for each take. And did I mention the current replay system being unreliable and buggy? Let's say a mission is many hours long: even having to go through it once (albeit with fast-forward possible in the form of game speed) just to get a 10-20 second alternative-angle shot is already in itself incredibly inefficient. I can't, or rather don't want to, imagine the effort behind all of those cinematic videos if they have to rely solely on replays. Something has to be done to improve this significantly, and I have the obvious idea, which also completely bypasses replays, but doesn't completely eliminate the practical need, depending on the case; the limit is just the user's machine horsepower. You'll be limited by CPU and GPU, also RAM. DCS should implement support for multiple F-views as completely separate viewports and output rendering windows that can be recorded with OBS simultaneously. If these separate rendering windows can't technically be separate child processes, then at least their respective output audio streams would need to be split in this fashion in order for the recently introduced OBS capability Application Audio Capture to work, so that we can capture the corresponding audio output stream for a particular rendering window or F-view. There would be no need for any additional physical displays, or for DCS to know anything about display hardware; this should all be possible on 1 monitor. The purpose is recording, not necessarily local viewing (although that's possible if you buy more monitors/display devices).
With OBS you would capture these windows, and even if you're in-game (windowed only?), it should work as long as you maximize all the separate windows and then maximize your primary one so that you can play as normal. Alternatively, if this doesn't work so well (OBS can't work with Windows virtual desktops), separation can be done using virtual displays (which require a special driver install from a third party), so you would be able to move the additional DCS F-View output windows to 1 or more virtual displays, which on Windows 10 and later automatically extend your desktop, even if you do not have a physical monitor for that additional virtual display. It's there and it just works; you just can't see it, but you can preview it in OBS with display capture, which of course means you can record it, and that's the main goal here. There could also be a rule that a particular DCS F-view can only have one output, or one output window per F-view; this means no duplication, which may reduce the technical effort of implementing this feature, and I don't see the need for duplication right now anyway. Even without this rule/limitation, I would still like to see enhanced switching: when you want to switch to F2 while you are in F1 view, for example, and there is another window with the F2 view, that one would swap with the view you were switching from. To repeat, I'm talking about switching F-views while staying in the same output rendering window. You could achieve the same by minimizing and switching the output windows around your multiple (virtual-display) desktops, but that is of course the less practical, manual way of doing it that we shouldn't rely upon for primary usage.
There would be no need for the additional output rendering windows to have input support or to function fully on their own; they would really be pure secondary outputs. You'd control them from the main window with some added GUI or hotkeys if necessary, or just preconfigure them in scripts before launch; this should make the feature technically easier to implement. Performance worries may not be that big because, first, DCS video creators usually have pretty beefy machines, and second, not all F-views are equally hard on processing requirements (take the F10 view, for example), nor is the in-game content in each: one could be a mostly-sky shot, while another could be a mostly-sea shot with few units. So this feature wouldn't be exclusive to only the most powerful PCs. Better yet, there could be different graphics settings for these outputs. Who says all the additional separate output windows need to be the same resolution as the primary one? That alleviates the performance worries right there. I think this would revolutionize DCS recording and cinematic making, in ways I, or we, can't even imagine right now. Video content creators may benefit greatly, not necessarily purely in saving time, but in being able to do alternative-angle shots at all, something they might have avoided completely. There could be recording-station machines keeping an eye on a particular player from multiple angles (F-views), even 5-6, alleviating the need for multiple PCs and multiple clients as spectators trying to keep an eye on tournaments, etc. I would seriously consider this idea especially now, when Vulkan API rendering and multi-threading are in the middle of development, so that it is fundamentally considered in the final design and developed with this in mind, for an incredible experience in the end.
There's probably quite a bit more I could go into, but I'll refrain from making the initial post too detailed, too complicated, and too precise for now; I think I've conveyed the general idea well enough for a good discussion. I should include some pictures/diagrams for better visualization of this idea, but I just didn't want to delay this any further.
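The "one window per F-view, swap on switch" rule described above is really just a map from output windows to views, with a swap instead of a duplicate. A minimal sketch of that assumed behavior (not any real DCS API; the window and view names are illustrative):

```python
# Sketch of the no-duplication switching rule: if the view you switch to
# is already shown in another output window, the two windows trade views
# rather than showing the same F-view twice.

def switch_view(outputs: dict, window: str, target_view: str) -> dict:
    """`outputs` maps window name -> current F-view; mutates and returns it."""
    holder = next((w for w, v in outputs.items() if v == target_view), None)
    if holder is not None and holder != window:
        outputs[holder] = outputs[window]   # swap instead of duplicating
    outputs[window] = target_view
    return outputs

views = {"main": "F1", "aavo1": "F2", "aavo2": "F10"}
switch_view(views, "main", "F2")   # main takes F2; aavo1 inherits F1
```

If no other window holds the target view, the function degrades to a plain switch, which matches the behavior described for views outside any preset.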
  13. It still works if you click and preview, and then zoom in to open a new tab. Otherwise you always have to click "Display as a link instead".
  14. I think I'm noticing this as well, let me do another comparison
  15. Hi There was and perhaps still is that conception regarding full-fidelity aircraft not being able to be AI-controlled because mainly it would take so much CPU horsepower that you wouldn't be able to play ... hmmm, perhaps so, perhaps not quite... How technically true is at all anyway? Is it just a DCS thing and would need an engine upgrade? Either way, it really should be attempted to try to make a special full-physics AI units, albeit not having them have to control a full-fidelity cockpit, but the full physics and flight modelling, and other things. I think the AI's right now don't work by emulating a human using input controls to control the aircraft, but perhaps in the future it kinda should, to simulate all the things a real pilot would deal with in a cockpit, while at least there should be pilot sight simulation on top of what we have now. AI's are most likely cheating in this case now by being able to see through all the % of their cockpit glass at all times, probably it's not even just glass and they can see through their cockpit nose into the forward direction, hopefully the sight cone is circular/rectangle tilted upward. So I think the AI code is interwined with the do-this-do-that interfacing with the specific physics model logic, and doing a full-fidelity AI would likely require basically making AI version for it fron scratch, and simulating an actual pilot and the inputs would I think be significantly harder and perhaps not even optimal for how to do an AI in programming efficiently, this is just speculation but I think this would be a better reason and real limitation of why there's no full fidelity AI's. Why do I sound like I'm dismissing the CPU processing requirements ... 
well, because there's a little fact that, at least to my knowledge, nobody has thought of ... err, rather I should say nobody has pointed out, and that is: what prevents us from just setting a limit of one single full-physics AI at a time in a mission, or perhaps 2-3 depending on the system's hardware? Just one would make a big difference where it would be most beneficial: dogfights. In pure dogfights you don't have 500 full-physics AIs running around; the hardware should be able to deal with just one (1), if we somehow get over the other possible limitations I mentioned. There! Solved???
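Just to sketch what such a cap could look like in code. This is a hypothetical illustration, not DCS's actual architecture; all names (`PhysicsBudget`, `FlightModel`, the unit IDs) are invented. The idea is simply that the expensive model becomes a budgeted, slot-limited resource:

```python
# Hypothetical sketch: cap the number of full-physics AI aircraft active at
# once, falling back to the simple flight model (SFM) when the budget is spent.
from enum import Enum

class FlightModel(Enum):
    SFM = "simple"          # cheap standard flight model
    FULL_PHYSICS = "full"   # expensive full-fidelity model

class PhysicsBudget:
    def __init__(self, max_full_physics: int = 1):
        self.max_full_physics = max_full_physics
        self.active = set()  # unit IDs currently holding a full-physics slot

    def request_full_physics(self, unit_id: str) -> FlightModel:
        """Grant the full model only while slots remain; otherwise fall back."""
        if unit_id in self.active or len(self.active) < self.max_full_physics:
            self.active.add(unit_id)
            return FlightModel.FULL_PHYSICS
        return FlightModel.SFM

    def release(self, unit_id: str) -> None:
        """Free a slot, e.g. when the unit is destroyed or despawned."""
        self.active.discard(unit_id)

budget = PhysicsBudget(max_full_physics=1)
first = budget.request_full_physics("bandit-1")   # gets the full model
second = budget.request_full_physics("bandit-2")  # budget spent, gets SFM
budget.release("bandit-1")
third = budget.request_full_physics("bandit-2")   # slot freed, gets full model
```

In a dogfight scenario with a budget of one, the single bandit you're actually fighting gets the expensive treatment and everything else stays cheap.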
  16. Hi, I've heard that not a small number of people, including myself, have multiple DCS versions installed on the same system, even more so now with the dedicated server installation in the mix. It might be helpful to support a common library location that could be the central place where the same (or similar) content could be read from, without having it duplicated into each specific installation. There's a bit of a question of how effective this would be if patch levels vary significantly, but even with many differences I've seen cases where ample gigabytes are still copied locally from a neighbouring installation and only the rest has to be downloaded from the internet. Of course only applicable content would be put into this location. There would just be a bit more complexity to each installation: you would have a similar folder structure in the DCS Shared and DCS Server/Client/etc installations, but different contents. Still, I don't think it would be that much of a problem for modding, while for a normal user it shouldn't be noticeable at all. There may be other issues that could arise with this approach, so the implementation has to be thought out. Perhaps, to avoid issues with installation-specific updaters working with this common shared asset library, the asset library could be its own installation with its own separate updater. Also, I am aware of the recent modular dedicated server feature; that's a big improvement in this direction, but it's not the same, and this idea should still stand on its own. It would add on top, making the overall DCS ecosystem storage requirements lighter. The more I think about it, the more issues I suspect could be uncovered; it might take quite a bit of redoing the installations and the way patching and patch updates work to make it effective, and it could even bring some inconveniences to the users.
That said, I have my doubts about how many people actually have multiple installations and use them much; I don't really know. I just had it on my mind for some time and thought I should at least mention it; perhaps someone thought about it before and figured something out discussing it. EDIT: One fairly good reason against this is beta testing, where you'd actually want to have separation, as would players and groups who want to hop between patch levels when one doesn't work out for them. Updating the shared content library would update it for everyone at once, and of course an installation on a different patch level wouldn't be able to work with it. EDIT2: This kind of feature might only have a good practical use case where a user wants to host a server of the same edition on the system where they also play. So for example OpenBeta Client and OpenBeta Server on PC1. The idea would morph into just a server installation's capability of something along the lines of "let's just use the files we can from the OpenBeta Client installation, without copying/duplicating them, and download the rest which are different to our directory". This would work out because in that scenario a user would most likely want to keep both installations on the same patch level, since they would be participating in the server they're hosting. The server's updater would have an extra dialog asking you to select the location from which it would use existing content from another applicable installation: a dropdown box to select the installation (or manual browsing). If it wants to update, it should notify the user that it has to modify that "host" installation; since that would most likely render that installation broken, it should, with a user prompt, launch the host's updater and update that first.
Of course this feature could be done vice versa, with the client having this capability and not the server, but I think that on systems where only a server is installed first, there's probably no plan to play on it, since it's a different tier of hardware.
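As a rough illustration of the space-saving idea, here is a hypothetical sketch (not an ED tool, and `dedup_into` is an invented name). It assumes both installations sit on the same volume, so identical files can be replaced by hard links and stored only once on disk:

```python
# Illustrative sketch: deduplicate identical files between two DCS-style
# installations by replacing byte-identical copies with hard links.
import hashlib
import os
from pathlib import Path

def file_digest(path: Path, chunk: int = 1 << 20) -> str:
    """SHA-256 of a file, read in 1 MiB chunks so large assets fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def dedup_into(shared_dir: Path, install_dir: Path) -> int:
    """Hard-link files in install_dir to identical files already in shared_dir.

    Returns the number of files that were replaced by links.
    """
    by_hash = {file_digest(p): p for p in shared_dir.rglob("*") if p.is_file()}
    linked = 0
    for p in install_dir.rglob("*"):
        if p.is_file():
            twin = by_hash.get(file_digest(p))
            if twin is not None and not p.samefile(twin):
                p.unlink()
                os.link(twin, p)  # both paths now share one copy on disk
                linked += 1
    return linked
```

This only works per-volume and breaks down the moment the "shared" side is patched independently, which is exactly the updater-coordination problem described above.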
  17. Ah, I realized now there's a MP/Server section on the forum I should have posted this in. Anyway, I also viewed this as a backup/fallback solution in most cases in practice. This shouldn't be a replacement for the other remedies and solutions for spotting/LODs and rendering. Again, I apologize for my habit of editing posts after submitting; there shouldn't be many changes, only additional info in my previous post. I got carried away into the general discussion surrounding that; it wasn't my intention to mix the two issues together, I admit. Speaking of controllers, I actually tried using a Nintendo GameCube controller for DCS a few times, with the proper adapter/mode and, I think, without any special drivers or extra tweaking; it surprisingly worked better than I expected (after just binding the controls in DCS properly). Even if 1 in 6 servers use this feature, it's better than nothing, and yes, it's exactly for the niche situations. For example, if a streamer invites some 4-6 ex-pilots to some kind of sponsored show and they get on a locked server, then in addition to explaining the rules, the admin might use some of these enforcements as a verification step, to remind and make sure everything is right on the client side. The few who join might have accidentally forgotten to set something up correctly; this would verify those setups and settings. I really don't see it necessary to twist arms to come up with positive examples; this is enough, and we can all see it's not optimal across the board and highly dependent on the case.
  18. Of course for the various types of servers it's not the best solution, but that should depend on the type of server, and eventually come down to the owner's and server community's philosophy and opinion (taste), which is, no offense, subjective. Which is what I was going to mention, as I saw it coming. This philosophy of trying to combine everything, including things that are clearly incompatible, is extremely prevalent across many fields and is partially adding to the severity of this problem. If this flat-monitor-vs-VR split is the biggest reason for the recent controversy, then I'm even more in disagreement with the complainers' exaggerated reaction to the IC enforcements. VR and flat monitors shouldn't be piled together in the first place, for many, many other reasons as well. It should be regarded as a mere digital convenience to allow both together; no doubt stuff like this is going to happen because of the technicalities of these systems themselves, and that becomes less of a DCS fault the more you drill into it. The IC enforcement for the dot system was absolutely the correct thing to do, even if a replacement solution hasn't been provided yet. Let's remember that this isn't just a spotting matter; it's a cheating matter first and foremost. The cheating/fairness aspect should always take precedence over what is, in this case, selective realism. Every DCS online player is important because of the limited size of the MP community, so all this is understandable, but it can't be this feature's fault. And yes, I was about to mention console games and their controllers. The PC+console crossover was something I never agreed with. The fact that you mentioned consoles and Xbox at all as an argument, i.e. mainstream gaming, also shows we come from different backgrounds and have different ideas of how things should be, so we have incompatible philosophies.
This is also yet one more example of how players coming from the technical kindergarten of mainstream console gaming want to take their bad habits (promoted by purely profit-oriented organizations) and shove them into DCS. Of course my philosophy here is going to be different: I never owned any Xbox or PlayStation, and I never played any FPS multiplayer game online on a console, with the exception of Metroid Prime 2: Echoes local split-screen multiplayer, though even that was considered an FPA*, not an FPS, if we're being super technical (* First-Person Adventure). On the other hand, I happened to moderate and co-administrate a Call of Duty 2 custom server for a few years, where I was the main modder after older generations moved on. Coincidentally (and really appropriately for this discussion), the server happened to have a strict rifles-only rule; it was a rifles-only server by definition, in its name, and that rule never changed in its 10+ year history. You could only use bolt-action rifles with 100% lethality no matter which body part was hit (instagib): no machine guns, no pump shotguns, no snipers (zoom scopes). This was absolutely nothing unusual at the time, and nothing unusual in the decades of Quake and Unreal Tournament games. We were no less welcoming than any other server. That type of game mode was obviously the one I spent the most time on, but I would say it really isn't the reasoning behind my opinion on this topic today; I almost wouldn't have remembered it. This should be a good idea for many other types of game modes that perhaps don't even exist yet. Highly competitive PvP dogfights might be one of those that would welcome such features, where there might also be a need for input control enforcement, so that both players fly with appropriate, aircraft-specific/allowed controls and not some inferior/superior unrealistic control system.
  19. I understand, but this is just mathematics; there aren't many other factors in this equation to my knowledge so far (and those there are have their own downsides, and they wouldn't make this idea any less valid; server admins could ignore it completely). That thread makes it clear (if the people there are actually correct, idk) that there's a big difference between flat monitor and VR spotting (the mod's effect), so the logical solution is to separate them and give each their own spotting rules/mods/fine-tuning. The total number of players and servers is just what it is; it's unfortunate if that makes this solution less practical, but it can't make it invalid. That's another problem.
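Since I'm claiming "it's just mathematics", here is the back-of-the-envelope version of it. The FOV and resolution figures below are made-up round numbers for illustration, not measurements of any specific monitor or headset; the point is only that the same target covers a different number of pixels on different displays:

```python
# A target subtends an angle of 2*atan(size / (2*distance)); the pixels it
# covers on screen depend on the display's pixels-per-degree.
import math

def pixels_subtended(target_m: float, distance_m: float,
                     h_fov_deg: float, h_res_px: int) -> float:
    """Approximate horizontal pixels a target covers on a given display."""
    angle_deg = math.degrees(2 * math.atan(target_m / (2 * distance_m)))
    return angle_deg * (h_res_px / h_fov_deg)  # pixels-per-degree * angle

# ~15 m wingspan fighter at 10 km, on two hypothetical displays:
monitor = pixels_subtended(15, 10_000, 90, 1920)   # 1080p monitor, 90 deg FOV
headset = pixels_subtended(15, 10_000, 100, 2000)  # ~2000 px/eye, ~100 deg FOV
```

With these example numbers, both displays put the fighter at well under two pixels across, and the flat monitor gets slightly more pixels-per-degree than the headset, which is exactly why a one-size-fits-all dot tuning can't be fair to both.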
  20. Hello. Referenced topic: https://www.reddit.com/r/hoggit/comments/13l30ah/effect_of_resolution_on_the_apparent_size_of/ While everyone seems to be talking about the dots and the dot mod, you know you can always turn an equation around and find a solution somewhere else that solves the same problem differently. At least in the meantime; and the options I'm proposing can exist separately and in general be useful for other reasons: replicating an equal experience and fairness in general, not just spotting. For example, a server admin could add and combine (with exceptions) several rules such as these:
- Require / Prohibit -> All Desktop VR
- Require / Prohibit -> VR Varjo
- Require / Prohibit -> VR Valve Index
- Require / Prohibit -> VR HTC Vive Pro 2
- Require / Prohibit -> All Flat Desktop Monitor / TV
- Require / Prohibit -> Monitor Screen Size 32''
- Require / Prohibit -> Multi-Monitor
- Require / Prohibit -> Output Render Resolution 1440p
- Require / Prohibit -> Output Aspect Ratio 16:10
Of course the server configurator would not allow combining filters that prohibit everything. On the client side, the server browser would also be updated to show display configuration requirements in appropriate columns without having to connect to the server, and attempting to join would display appropriate error messages. This would allow servers to efficiently ensure a level playing field when it comes to spotting/LODs, and ensure that whatever dot scaling system they come up with works the same for all clients. And briefly, this could be supplemented by the ability for server admins to register/serve a custom dot/spotting mod which, instead of simply being tested by IC on the client side, would be automatically downloaded and enforced for each client, rendering any user's custom dot mods irrelevant and disabled for the duration of their session on such a server.
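To make the require/prohibit idea concrete, here is a sketch of how such a filter might be evaluated server-side. The field names, rule shapes and example values are all invented for illustration; real client display data would obviously be richer (and would need trusted reporting):

```python
# Hypothetical server-side display-configuration filter.
from dataclasses import dataclass

@dataclass
class ClientDisplay:
    kind: str        # "vr" or "flat"
    device: str      # e.g. "Valve Index", "32in monitor"
    resolution: str  # e.g. "1440p"

def check_rules(client: ClientDisplay, require: dict, prohibit: dict) -> tuple[bool, str]:
    """Return (allowed, reason). Each dict maps an attribute name to a value."""
    for attr, value in require.items():
        if getattr(client, attr) != value:
            return False, f"server requires {attr}={value}"
    for attr, value in prohibit.items():
        if getattr(client, attr) == value:
            return False, f"server prohibits {attr}={value}"
    return True, "ok"

flat_client = ClientDisplay(kind="flat", device="32in monitor", resolution="1440p")
allowed, reason = check_rules(flat_client,
                              require={"resolution": "1440p"},
                              prohibit={"kind": "vr"})
```

The configurator's "don't prohibit everything" sanity check would then just be: for the advertised set of client configurations, at least one must pass `check_rules`.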
  21. A bit long ago now, but I noticed how air refuelers would change speed instantly, by 5-10 knots, and jitter up and down, not smoothly, and this would throw me off a lot. I don't think that's realistic, and I hope it's not doing this anymore. In the big picture, all air refuelers should be much more dynamic units because of the interaction with real players, not just simple SFM AIs; the boom and basket should eventually be collidable/solid. The AI logic could be improved so refuelers have some kind of dynamic pathing where they could deviate from the ME (mission-scripted) waypoints depending on whether they're called for, so they would avoid turning unless threatened (or receiving override commands from the EDDCE strategic AI), though that would also make them terminate the refueling procedure and go for evasive maneuvers, etc. And you should be able to call refuelers to GET THEMSELVES OVER HERE instead of you having to fly to them all the time (without having to specifically script that player action for each mission, if it's even technically possible right now). Right, if he has enough to spare for an RTX 4090, then he should be able to afford the correct controls for the F-16, just saying. Then that would be something optional on top, after the base simulation is accurately replicated.
  22. 1. To be precise, I was referring to various posts here and in other flight communities by people expressing the desire to fly from one terrain to another, the fear of having to completely restart custom missions once an upgrade to a map makes them incompatible, etc. 2. I don't object in general, but as @cfrag pointed out, the spherical technology alone, in its raw barebones form, isn't that major a job: once it is set up, it just works; it can't be half a sphere, it's round and you can travel around it, so it works. It's more about making everything else work with it, and you're correct that it's not ready for a community that wishes to have more than an empty ball to fly around, so I don't disagree there. Besides all of the surface terrain and assets, how many other engine components need to be made compatible, and how complex that job would be, is hard to guess; there might be other upgrades and new additions along the way that don't exist in the current public DCS builds. But if it's only about transferring what DCS has today onto a sphere, I doubt the task is as big as MT or Vulkan or EDDCE in complexity and time, because it should be about adjusting, hopefully not re-creating, the physics modelling; for every single aircraft and movable object? I doubt it. After that come all the static map objects and the surface itself, texturing, but that's a separate field that can be done concurrently (independently) by the map-making specialists. In the end I'm just responding with additional thoughts, as just thoughts, and entertaining my side of the argument; it's not an absolute belief that what I say must happen or else. I'm not part of the kind of discussions you guys would be used to from the rest of the community, so I hope mine didn't sound so demanding; if it does, it's not intentional.
Having, for example, Caucasus and Syria on a preview spherical map, linked by a few key roads and railways, with 3-5 airports, while much of the in-between surface (Turkey ...) is low quality, wouldn't be too much to ask in my opinion, because everything else has been released in such early-access fashion (that doesn't mean the future has to be so, I get it, but that shouldn't nullify my argument completely). It doesn't even necessarily have to be circumnavigable all the way around, and could have its play area artificially limited to this region, but it would still be spherical under the hood and provide a good beta test nonetheless. That said, I actually have no idea what the community thinks about having these low-quality gaps: how would it work out for gameplay, would it be acceptable for the community, worthwhile, or would it look broken and end up being annoying? I honestly don't know, and I admit this has been a sort of less conscious speculation of mine that I only realized just now. The community would have to vote on whether they would find this kind of merging of existing regional maps tolerable.
  23. I didn't want the "ASAP" to feel like I want things rushed, but since I already edited the post and title, I'd rather just leave it as it is; everyone who reads into it will see my explanation regarding that, so begging isn't intentional. On your main point: didn't I hear that quite recently some priorities were changed with the F-16's or F-18's sensor pod development, so that one scheduled for later is now going to come sooner, to make it not feel like a downgrade? But I do understand that it depends on other factors, and we can't expect to adjust every bit of development back and forth by just making a few arguments in threads. Many things in such threads are meant as discussions, ideas, thoughts and opinions; I would say threads like this are far less of a nuisance than some of the low-effort bickering that happens in gaming community discussions in general. The "community's needs" in this case was true, yes, but not in a way where I would speak for the community. I kept reading various posts over the months around maps, the splitting, server admins, etc ... and then I thought that many of those complaints could be solved by the whole-world spherical Earth map that was mentioned further back in interviews, so it was something that was going to happen. Then the January 2023 newsletter finally mentioned more details about it, some of which I had my own comments on, and that's how this thread got made, with good intentions. Yes, I'd rather make such big, well-done threads than sprinkle it over 50 posts all over the forum or elsewhere.
  24. The several different half-finished notes I had in many places all used different terms for the same thing, such as "zones / area / border / boundary / sector" or "sphere, globe, whole, 3D"; I just had to rewrite and normalize them into one coherent piece so that even I could find head and tail in there; the content was expanded and separated into bullet points and paragraphs. Indeed, and I appreciated the lengthy dive into the end-result effects of just this change affecting or not affecting gameplay. Perhaps my OP gave the impression that I'm trying to prove some significant gameplay effects/benefits/realism with this; that wasn't my true intention, apart from just being enthusiastic about the subject, admittedly focusing more on the benefits than the challenges, yes. However, I did in fact isolate every other factor on purpose, and I was aware that I was disregarding air/temperature/wind and surface-feature factors, in order to first get an idea of the raw difference of curvature alone. You are correct that I should have added context to that sentence: it is indeed a raw test condition, and the difference between a hit and a miss would exist only under those test conditions; of course the actual surface is rough on both flat and spherical types, not perfectly smooth. The distribution and revenue model is something I think I included briefly in the original OP; either way, it was in my notes, but I removed it along with some other parts I didn't feel were ready to be discussed yet, as I had to finish the main points we discussed so far. So I definitely agree, and have foreseen that there might have to be changes and negotiations around this. At this exact moment I'm recovering from deep focus on these primary discussions, so I'm kinda out of ideas and a good writeup for the distribution/updates/revenue/prices side of things, but certainly anyone can discuss this in the meantime while I take a bit of a break.
On the other hand, perhaps it isn't that necessary for us to speculate on distribution; if the technicals, wishes, and economy pan out, they'll surely figure it out and adapt the distribution and revenue model to it, not the other way around ... rrright? Mainly, we've established some understanding around this topic, myself included, as I was getting more familiar and re-checking my knowledge while writing the OP; the expectations and wishes should help the developers shape their priorities. So I'm glad I finally managed to get this one written down, and the added discussions from everyone else here should also give the rest of the community an idea of what's going on around the spherical map technology.
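For anyone who wants to put that "raw difference of curvature alone" into numbers: over a distance d on a sphere of radius R, the surface drops away from the flat tangent plane by approximately d²/(2R). A quick sketch (standard small-angle approximation, mean Earth radius assumed):

```python
# How far the surface falls below a flat tangent plane over distance d.
EARTH_R_M = 6_371_000  # mean Earth radius in meters

def curvature_drop_m(distance_m: float) -> float:
    """Small-angle approximation of the drop below a tangent line: d^2 / (2R)."""
    return distance_m ** 2 / (2 * EARTH_R_M)

drops = {km: curvature_drop_m(km * 1000) for km in (10, 50, 100)}
# roughly: ~8 m at 10 km, ~200 m at 50 km, ~785 m at 100 km
```

So over typical weapon ranges the difference between flat and spherical geometry is meters to tens of meters, which is exactly the kind of hit-or-miss margin the isolated test above was probing.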
  25. I can of course apologize and agree this shouldn't be a habit, but this was a special case of an idea brewing for a couple of months, with me combining several temporary notes, merging them and constructing the final post I always wanted. It took me a considerable amount of time to finish (over a span of 3-4 days), as I was not satisfied with the initial post. To me, however, it doesn't seem that it would severely affect your post, except maybe 10% of it; even the direct quote you made of one of my previous sentences has practically remained the same in effect, just worded differently. More than half of your initial post is about other topics I didn't bring up in my original OP, mainly the issue of memory management for such a large map, and other challenges. Your worries are all valid. My goal with this post wasn't to chase the challenges associated with the idea too much, but if I can, I will address them. I'm not actually an expert, and my familiarity with the potential challenges and their significance may not be the best, though I have done extensive troubleshooting with DCS and its memory management in the past, and there should still be old threads on this forum about it. Those are quite unrelated in this case, though, and don't really help when speculating about spherical maps; the engine is going through a transformation to multi-threading and the rest, so things will change dramatically for the better, but yes, the developers have to take everything into account. My position currently isn't that worried: multi-threading should give a good boost to the dynamic memory management, which was already functioning better in the last few years. Hardware itself is largely to blame; thankfully AMD has upped the standard amount of VRAM on Radeon GPUs, so that will help a lot going forward.
Memory management could use sectorization for help: simply ignore anything in a sector designated "disabled" or something, roughly speaking; you could improvise easily. You could "put" Normandy 2.0 into a spherical model today, disable all other areas/sectors, and it would result in practically the same experience when it comes to RAM/VRAM. They surely will upgrade the memory management aspect along the way for a full-globe experience.
1. Yes, "ASAP" doesn't carry the common meaning of doing it so fast you ignore everything else, just faster; certainly much faster if you don't set a goal of finishing the whole world in high/full quality.
2. If the barebones spherical technology is complete, that itself either works or not: correct distances, measurements, and coordinate displays either work or they don't. But yes, it's the other systems becoming sphere-aware that would take more time before it's ready for release and/or beta testing ... In programming, generally speaking, there are lots of common functions and common libraries, and what we're talking about here are the lower levels (coordinate system, gravity), which do affect many, many things; but at the same time many, many things can use that same common library and function. You change the function once, you change it for everything, without having to go to an individual unit and change it for that unit (though it depends, long story). Many (ground) units could be using common functions responsible for sensor/positional data/indicators which depend on the coordinate system; it's not that hard for a skilled programmer to adjust such a function, and if it works there, it'll work for all units/things using it. You test it once, twice, but that's enough; there's nothing more to test, you're done. You could have it even earlier if you just had a subset (build) of it with, errr, a "spherical Caucasus" and only a few compatible units, but the smaller it is, the less worthwhile it is for the community.
3. No dates, of course. The proposition isn't about an actual barebones sphere being released, but one which would have functional but low-definition coverage of the whole Earth, with at least one major region in high definition (Caucasus), and I would consider that good enough for closed beta. I think one or two third parties would be able to jump on the bandwagon and convert their flat maps to spherical a few months prior to closed beta; some 6 months later, somewhere in the middle there, could be an open beta, and the players would get ~2 regions of high quality, though just one would be enough for starters. Isn't that reasonable? I can't know for sure if it is reasonable enough; perhaps the whole thing would take much, much more time, and they might agree there should be an earlier preview, and we might get half the functioning coverage, with the other half of the Earth hardcoded off-limits. There are gaps I'm not familiar with in terms of time/effort/research, so I'm of course avoiding any such guesses.
4. Not that worrisome, since the same newsletter mentions the adaptation of the fog system for spherical awareness, and the integration of the cloud system with the dynamic weather engine is ongoing; complex, but it's happening, and they're surely not going to stop until they make it work. I'm not sure Vulkan's rendering itself would be that much of a factor in the spherical efforts.
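To illustrate what I mean in point 2 about one shared low-level function serving everything, here is the kind of single conversion a sphere-aware engine's systems could all call. This is a simplified sketch assuming a perfect sphere (no WGS-84 flattening) and is not DCS code; the function name is my own:

```python
# One shared geodetic-to-Cartesian conversion: change it once, every system
# that positions things on the globe picks up the change.
import math

EARTH_R_M = 6_371_000.0  # mean Earth radius, spherical-Earth assumption

def geodetic_to_ecef(lat_deg: float, lon_deg: float, alt_m: float = 0.0):
    """Convert latitude/longitude/altitude to Earth-centered X, Y, Z meters."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = EARTH_R_M + alt_m
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

x, y, z = geodetic_to_ecef(42.0, 43.0, 2000.0)  # roughly the Caucasus region
```

If sensors, indicators and positional data all route through one function like this, testing it once really does cover every caller, which is the crux of the "adjust the common library, not every unit" argument.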