Worrazen

Members
  • Posts

    1823
  • Joined

  • Last visited

2 Followers

About Worrazen

  • Birthday 05/05/1994

Personal Information

  • Location
    Slovenia
  • Interests
    many!
  • Website
    http://www.techpowerup.com


  1. Looks like this was addressed in the update on the 19th. Maybe you can test with RenderDoc again to verify the implementation. https://www.digitalcombatsimulator.com/en/news/changelog/openbeta/2.9.2.49629/ Perhaps the issue depends on particular scenes, which the benchmark by Hiob might not cover. Then again, small fixes aren't to be shrugged off; they all add up eventually. Still, it's a good idea to scrutinize this and determine whether it had the full intended effect.
  2. Oh, that's interesting! Just to be extra technically correct and clear for everyone else: you mean those who transmit with an in-game cockpit radio that now supports the DCS Voice Comms component/functionality, which happens to use VoIP technology for transmitting the voice data between multiplayer PCs over the internet. This discoverability and trackability of radio voice signals goes along with radio signal simulation in general and would be needed for eavesdropping on radio comms, radio-sighting enemy contacts (AWACS, SIGINT) and of course SAR/CSAR, or just plain locating separated forces on the ground or elsewhere, squadrons regrouping, ejected pilots needing evac, etc.
  3. Whatever mod/external program it ends up being, it should rather be something on top, expanding functionality and interfacing with the internal components of DCS VC, working with it instead of completely replacing it. Key lines from the YouTube Wags video comments:
  4. I guess just lower the graphical fidelity when using multi-view then; perfection is of course impossible, and there will be a set of tradeoffs for different systems. What about DLSS and FSR? I do get your points. But you can't keep holding one end of the scale fixed forever. Well, you can, but at some point you can start to scale up in all directions with the added new performance. Using that argument strictly, games could have stayed at 30 FPS forever and kept evolving only in graphical fidelity and resolution, or you could keep getting more CPU cores and just play games with no simultaneous recording. Playing two games at the same time should be possible, but that's not a practical scale for us humans, so the added CPU cores in desktop CPUs in recent years have also been used for recording and streaming on the same host machine with OBS and other solutions while playing, without the need for additional capture hardware or a separate computer, which was one of the major marketing points for AMD. So none of this is solely my idea; large companies have figured out the same possibilities. The possibility is certainly there and may not be so far-fetched given the core engine improvements DCS is actively receiving. The developers of DCS could well agree, but the small number of users here asking for this might be one of the reasons holding the idea back, as it is for a lot of other ideas as well. The community-building aspect and the marketing of DCS may be among the stronger reasons they would offer more cinematic-making tools to users, even if there's currently no huge demand for them; sometimes users don't know themselves how handy or useful something would be until they actually test it and use it. That happens to me all the time in other places/fields. The virtual aerobatics/air show community could be one of the bigger users of such features, and it's definitely healthy to support niches if you want those niches to grow larger faster and contribute to the various communities of DCS, provided of course there's some expectation that their growth would cover the costs. The most popular modules sell themselves automatically because of many different factors (movies, people's own interest in flight, etc.) and have momentum going; they're like heavy trains and don't need much pushing to keep going, so they don't need that much promotional effort. Other niches are like steam locomotives struggling uphill, and giving them a boost can make a big difference and may prevent that train from stalling.
  5. This is really only the very first multi-core engine upgrade step, and even this first step was, from what I think were official words, a preview and in no way final. The separate F-views would/should use their own thread or a set of threads altogether. With Vulkan I also meant a set of other graphics-related performance features and optimizations. It's still of course going to be more demanding, naturally, but I would allow myself to speculate that it wouldn't grind things to a halt. It really is heavily dependent on what kind of scenes would be rendered on those additional F-views.
  6. Hi again folks! Oh, just look what the September 2023 newsletter brought us: are you thinking what I'm thinking? Even if it's just something close to what was discussed here, it's a great start! It's very possible it's something else but similar, or separate yet still part of the overall topic, planned well before my post here. Short answer reiterated, buried at the end of my larger post: MULTI-CORE ENGINE UPGRADES and VULKAN may solve or greatly diminish these concerns. ... Right. It's been a few months, so I would need to refresh on everything I discussed, but my quick response is that this doesn't necessarily need to be a feature for the wider public, but rather a capability for the cinematic niche, where they could, and if I didn't mention it then this is probably something I forgot in my initial posts, even do something like PER-FRAME RENDER/EXPORT RECORDING, similar to how you render out a scene in Blender. It doesn't have to be real-time, but the resulting video would look as if it had been running/rendering in real time; the engine would need to know this in order to not skip rendering frames and to sync the game-time tick with frame rendering. I think this is how it was done in the PC game Crysis, where a modder used the CryEngine 2 Sandbox Editor to explode 6000 barrels; the video was amazing and got a couple of million views, but it wasn't actually recorded in real time. I think the engine had a console command where each frame would be exported to a BMP or PNG file, and then the video was created by combining these image frames together (a rough sketch of this fixed-timestep idea follows after this post). However, this would only be useful in pre-scripted cinematic scenes that don't require interaction (well, a replay is a form of script), though perhaps ED does it with programming internally, and to iterate faster you would probably use wireframe mode to not waste rendering time when planning/drafting the cinematic scene. I think I'm touching on separate subjects which don't necessarily require or involve multiple separate F-views on the same host machine/instance/game session, so I'd rather stop here for now before I get tangled up in too much complexity with ideas.
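(Illustration, not part of the post above.) A minimal Lua sketch of the per-frame export idea, under the assumption of purely hypothetical engine hooks: advance_simulation, render_frame and save_frame_png do not exist in DCS or CryEngine and only stand in for whatever an engine would expose internally. The key point is that game time advances by a fixed step per rendered frame instead of by wall-clock time, so even very slow offline rendering yields a smooth, real-time-looking video once the numbered frames are assembled:

-- Purely illustrative fixed-timestep export loop; none of these functions exist.
local fps      = 60                 -- target playback frame rate
local dt       = 1 / fps            -- simulation advances exactly this much per frame
local duration = 30                 -- seconds of cinematic to export
for frame = 1, duration * fps do
  advance_simulation(dt)                                           -- hypothetical: step game state by dt
  local image = render_frame()                                     -- hypothetical: render, however long it takes
  save_frame_png(image, string.format("frame_%06d.png", frame))    -- hypothetical: write the numbered frame
end
-- The numbered PNG sequence is then combined into a 60 fps video with an external tool,
-- which is essentially how the Crysis barrel-explosion video was reportedly made.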
  7. Very good. I don't think it's that complicated a task for the sizable improvement it'll bring, and then waiting for the serious AI enhancements would be much easier.
  8. I get your reservations, but first: how did you manage to get two working to even test and know that it barely handles them? You probably didn't mean that literally; considering the above, one more wouldn't take that many system resources except raw GPU horsepower, and GPUs keep getting far more powerful each generation than any other piece of the hardware. Let's keep in mind Vulkan rendering is not here yet; draw calls (the number of units, smokes, effects, shells, missiles, cargos and buildings on screen) won't make nearly as big an impact on performance as they do with DX11, so even with many additional AAVOs pointing their cameras at areas of high unit count, the (CPU) performance wouldn't necessarily be hobbled as it would be right now. Also, various YouTube videos wouldn't necessarily need to show full-sized alternative-angle shots; they might instead use two smaller-sized layers, and it would be a waste to record those at full resolution when they could be much lower, again making things less demanding when recording.
  9. That is indeed one big reason replays come in useful, so in no way is this idea intended to be a replacement for replays. Replays are obviously much needed for other things and should be improved, as the community has rightfully been saying. However, in slower-paced, single-player gameplay, the time savings and the flexibility could be well worth it for those users who are serious about it, and they would be willing to invest in a good controls/hotkeys recording-management setup and get accustomed to it. However, this requires additional work from the DCS developers. For this feature to excel it has to have such flexibility, controls, configuration and management developed for it, and that would take a non-insignificant chunk of development time, but it's important for considerably raising the practical usefulness of the feature, at least for the lower performance tiers, because higher tiers would just be able to keep recording multiple outputs all the time and not worry about switching/enabling/disabling mid-gameplay.
There could be a whole dedicated configuration page for these, which I'm now going to more clearly call Additional A/V Outputs (AAVOs). You could first enable the feature and then add AAVOs, let's say up to 6 of them; on one hand I feel this is stretching it, but then again for very powerful rigs it feels like an artificial limitation. Perhaps a simple dynamic limit based on the number of CPU cores: if you have a 6-core processor we're going to let you have only up to 2 AAVOs (perhaps with an override in configs, but that would involve modding and you get no "warranty/support"). Currently I'm going with the separate window-on-the-taskbar approach, not the MFCD Export approach (one big internal resolution that is a combination of the horizontal size of all connected display devices), because I don't see how you can have audio separation with that approach; there could probably be some workaround or trick, but I'm just not going to speculate and spend time on that right now.
You would then be able to select each of these AAVOs in a vertical list, for example, and configure all kinds of things for that specific AAVO: its resolution, its cropping, and various applicable graphics settings such as anisotropy and anti-aliasing, things that wouldn't affect any other output or anything global gameplay- or simulation-wise. You would also be able to attach camera F-view playlists/presets to specific AAVOs, selecting which camera F-views you would like to have on a quick-switch list, which would be mappable to controls for easy switching of your favourite views without having to scroll through all of the views DCS offers. Normally, of course, you should be able to switch to any* of the camera F-views while in-game irrespective of any preset settings; what I'm describing is a quick-switch list, additional functionality with its own control mappings, and it would cycle exactly how you configured it, respecting the order. So for example when a user presses a key mapped to AAVO-3_CameraFViewQuickSwitchUp (..Down), it would switch between F4, F7 and F3, in order, because that's what the user happened to configure in the preset (a rough illustrative sketch of such a preset follows after this post). Why such presets? It's simple: different kinds of battles such as dogfights, intercepts, patrol, CAS, sea battles, ground battles or cinematics may each prefer a different set of camera F-views that they would use most frequently.
Note: I mentioned earlier switching between F-view cameras that are on one of the outputs, in other words camera F-view swapping. This still holds: if you press the quick switch for an output again while it's displaying a camera that is not in its quick-switch preset, it would just switch to the first camera F-view on that output's preset list (which would be the easiest), or it would remember where it was before it switched to a camera not on the preset list.
Note 2: You wouldn't need to switch your main screen at all, and you wouldn't need to swap a particular camera F-view between outputs if you have no need to; you could switch between different camera F-views in the AAVOs freely, as long as it doesn't land on the camera F-view you are currently viewing in the main output, in which case it would simply auto-swap as well. Or actually:
Additional Feature: There could be an option to "Disable auto-swapping" (checkbox), or rather an ability to lock specific camera F-views to a specific AAVO so that you can't accidentally switch away from it or swap it to some other output. Say you want to keep your primary output always showing F2; you would be able to do this to prevent accidents.
Additional Feature: Reset to default. When hitting this key you would reset all AAVOs to their default camera F-views, which would be configurable as in "Make this camera F-view the default for this AAVO". Also, switching would work regardless of whether the output is being actively recorded or not, as recording is at least for now intended to be done by OBS, so there should be no technical concern there. But there is a bit of a question of why it would switch if we accept that you might also freeze/suspend rendering and audio output for an AAVO when it's not being recorded; the switch action could be queued so that it takes effect when you activate rendering/outputting (not necessarily recording, but usually yes).
--------------------------
DCS video content creators also do tutorials, showcases, testing and comparisons where the gameplay is slower, or even as slow as step-by-step, but also highly pre-planned and therefore predictable by the author, which means there would be much less of a problem dealing with AAVO management in-game. Also, DCS already has a lot of controls just to deal with normal gameplay, so what's a few more? Otherwise you could of course just keep the renders and recorders rolling all the time, which, for the powerful machines that are the target audience of this feature, wouldn't be so far-fetched to consider. And for example, your machine may be powerful enough for 2 additional matching-resolution AAVOs but you still want a third-angle shot of one of the moments; you would only need to go into the replay once or so, probably much less than usual*, still saving a lot of time in total.
--------------------------
Oh, I totally forgot: AAVOs would not require simulation/physics/memory duplication! Additional A/V Outputs would most likely (99.9%) not require any additional logic/AI/physics/simulation/game-state calculations to be done just for themselves, including any duplication of assets and textures, so the CPU, RAM and VRAM* requirements wouldn't increase linearly with every AAVO; some VRAM (and to a lesser extent RAM for audio) is of course expected to be needed for the output itself.
--------------------------
Oh, I totally forgot #2: multithreading would be a lot more important with this. Everything that's duplicated for each AAVO would preferably use its own thread and be designed fundamentally with MT in mind, and all of the MT efforts should of course develop around the AAVO feature for best results, or at least where it makes technical sense, if there are no sync issues. Different threads for the stuff around rendering, and of course audio: AAVOs should also have their own instance of the audio system running on a different thread, if it is still running on one thread by then (since there are MT improvements across the board and we don't exactly know how far they plan to go in the original plans, presumably not considering the AAVO feature). But yes, if the sync issues are big enough that they make multithreading less effective here, it would again be up to the power of the computer's CPU to handle all of these audio outputs in one thread, and you'd be limited by that, understandably.
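To make the quick-switch presets from the post above a bit more tangible, here is a small illustrative Lua sketch; nothing like this exists in DCS, and names such as AAVO_Presets, cycle_view and set_aavo_camera are invented for the example. Each AAVO carries an ordered list of camera F-views, and the Up/Down bindings simply cycle through that list in the configured order:

-- Hypothetical AAVO quick-switch presets, for illustration only.
AAVO_Presets = {
  [1] = { views = { "F2", "F6", "F10" }, current = 1 },  -- e.g. a dogfight-oriented set
  [3] = { views = { "F4", "F7", "F3" },  current = 1 },  -- the example preset from the post
}

-- Bound to e.g. AAVO-3_CameraFViewQuickSwitchUp (direction = 1) and ...Down (direction = -1).
function cycle_view(aavo_id, direction)
  local preset = AAVO_Presets[aavo_id]
  if preset == nil then return end
  local n = #preset.views
  preset.current = ((preset.current - 1 + direction) % n) + 1  -- wrap around the ordered list
  set_aavo_camera(aavo_id, preset.views[preset.current])       -- hypothetical engine call
end

cycle_view(3, 1)   -- AAVO 3: F4 -> F7
cycle_view(3, 1)   -- AAVO 3: F7 -> F3
cycle_view(3, 1)   -- AAVO 3: F3 -> wraps back to F4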
  10. Thank you for pointing that out, for some much-needed validation of the idea; I didn't even know that. Your concern is absolutely valid, but technically it's not fair to hobble progress of the DCS ecosystem overall and in other areas just because one area of it is lagging way behind while all hands are on deck to improve it. To be fair, this feature doesn't impose any penalty on anyone; it's completely optional. To reiterate one of the major arguments: it should be developed sooner rather than later, not because of some urgency for users, but because it would make big economic and technical sense to do this properly while the MT/Vulkan programmers are still in deep development, so that before finalization (not necessarily the initial release) this kind of capability is accounted for in the core graphics implementation as a standard component and not a bolt-on. Once MT/Vulkan is relatively finalized, these programmers could either take up other tasks or be dismissed to go elsewhere. What then if serious interest in this rises later? Would those specialized programmers be re-hired? Paid again just for this feature? It's unlikely they would be available. Would there be enough other workforce available on short notice? Or do we pull less field-specialized programmers off other tasks to bolt on this feature as a stop-gap? Of course we would like to avoid these scenarios.
----------------------------------------------
Found what I was looking for earlier, an actual commentary on how one of the YouTubers makes DCS videos, with an important note: Grim Reapers: https://youtu.be/2-jYlE-GfVo?t=405
  11. Perhaps this could be much faster to implement than fixing (rewriting) the replay system; that may also be a major argument, but only the developers can give an idea about this. I think everything else around it, making sure it works well in practice, switching, providing each of the outputs its own graphics settings in the GUI, is more of a chore.
---------------------------------------------------------------------------------------
I have created an example recording process demonstration (a full guide even) using MFCD exports, which can be done with a single physical display device, just one monitor. No need for anything else. An overview of a "virtual monitor" in Windows -> https://decontev.com/virtual-monitor/ (just an explanation, no need to download anything from there)
1. Install the Amyuni USB Mobile Monitor Virtual Display Driver. Source: https://www.amyuni.com/forum/viewtopic.php?t=3030
1.a You can download it from https://www.amyuni.com/downloads/usbmmidd_v2.zip
1.b Extract it, preferably to some dedicated location such as C:\Additional_Utilities\usbmmidd_v2
1.c Go to Start, search for Command Prompt, right-click and Run As Administrator.
1.d Navigate to the extracted directory (the cd command).
1.e Run this command (64-bit): deviceinstaller64 install usbmmidd.inf usbmmidd
Don't run the second command just yet, but keep the CMD window open, because once you actually add a virtual display, you can't modify its list of selectable resolutions anymore.
2. Open Registry Editor and navigate to:
2.a Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WUDF\Services\usbmmIdd\Parameters\Monitors
2.b This is a list of allowable resolutions that will be available to your virtual display(s) once they're created. Only these can be switched to from Windows Settings.
2.c You can click on any of the string value parameters and modify their data, which represents the resolution in Width[comma]Height format. Apparently there can't be more than 9, so do not create more values than there already are.
2.d For this demonstration I am using a 3072x1440 resolution, because I'm exporting 3 MFCDs, the MFCDs in DCS can have a maximum resolution of 1024x1024 (at least according to the GUI), and 3 times a width of 1024 is exactly 3072.
3. Close Registry Editor and run the second command in CMD: deviceinstaller64 enableidd 1
3.a This should add 1 (one) virtual display to your computer. For this demonstration I will not be adding any more displays; you can add up to 4 virtual monitors with this driver in total.
4. Open Windows Settings -> Display. Your virtual display should be recognized.
4.a Select it and then choose the appropriate resolution below, because the active resolution selected by default may differ from the resolution you intended to use when you made it available in the Registry Editor.
4.b You can close Windows Settings and CMD.
5. Navigate to C:\Users\username\Saved Games\DCS.openbeta\Config\MonitorSetup and create the folder if it doesn't exist. (Due to forum italicization the backslashes may appear straight; this is only a visual effect.)
5.a Create a new text file and rename it to "1P+1V_3MFCD" for example, but also rename the extension to .lua, and then open it preferably with Notepad++ or similar. (You should have "Show file extensions for known file types" enabled, as is standard when doing these kinds of things; newer versions of Windows do not enable this by default. If you do not see ".txt" at the end, then you must enable this option, or else the resulting rename will look like ".lua.txt" and it won't work.)
5.b Copy and paste the example Lua code from below:
_ = function(p) return p; end;
name = _('1P-2560_1V-3072-3MFCD');
Description = 'Configuration for a total output resolution of 5632x1440. A main display of 1440p on the left and a secondary (virtual) display of 3072x1440 on the right for exporting 3 MFCDs in 1024x1024 each.'
Viewports =
{
  Center =
  {
    x = 0;
    y = 0;
    width = 2560;
    height = 1440;
    viewDx = 0;
    viewDy = 0;
    aspect = 2560/1440;
  }
}
LEFT_MFCD =
{
  x = 2560;
  y = 0;
  width = 1024;
  height = 1024;
}
CENTER_MFCD =
{
  x = 3584;
  y = 0;
  width = 1024;
  height = 1024;
}
RIGHT_MFCD =
{
  x = 4608;
  y = 0;
  width = 1024;
  height = 1024;
}
UIMainView = Viewports.Center
GU_MAIN_VIEWPORT = Viewports.Center
5.c Save the file and close it, but you can keep the Explorer window open in case you need to correct or modify anything.
6. Install OBS if you do not have it already: https://obsproject.com/
6.a Create a new profile that covers a canvas size of 2560x1440 and an appropriately scaled output resolution (keep it the same if you want), plus all the other settings you wish to have for recording the main game window of DCS.
6.b Create a new profile that covers a canvas size of 1024x1024 and keep the scaled output resolution the same for a clean image.
6.c You can also create a combined 5632x1440 canvas profile if you wish to do additional cutting and positioning in a post-processing edit step; it should be possible to achieve similar results in different ways.
6.d Or you can download the full example OBS profiles and scene collections package from here, for import: DCS_VirtualDisplay-MFCD-Recording-Example.zip
6.e Import the profile and scene collection, but note that you have to select the correct display devices, as these IDs are specific to device/driver/OS/config etc. and won't carry over from my computer.
7. You can now record the (invisible*) virtual display with OBS and capture those MFCDs either as one big video, or open 3 instances of OBS and record each of the MFCDs separately to its own video file.
Additionally: One major piece of functionality that would significantly reduce the performance requirements this would impose is that DCS wouldn't really need to render the additional outputs in most cases, but could keep them frozen/suspended until activated. DCS and OBS could be synced together (not literally, but never say never) to record and render on a manual or other trigger, so there's no necessity to record, or even render, all of the additional F-view outputs at all times. The player/operator would, to the best of their abilities, predict which moments to capture from more than just the primary view and would control which additional outputs to activate (render resume/unfreeze) via hotkeys or otherwise. Perhaps even scripting could be introduced for creating event triggers for such recording purposes, for example "start rendering and recording if X unit fires Y weapon" (see the sketch after this post). There's no doubt this would help cinematic making, as the editor would then have a library of various shots from many F-views and could start editing from a larger set instead of spending a lot of time just recording and re-running replays.
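For what it's worth, the triggering half of that last idea can already be sketched with the existing DCS mission scripting environment. Below is a minimal illustration using the real scripting-API names world.addEventHandler and world.event.S_EVENT_SHOT; the start_aavo_recording() call is hypothetical and only stands in for whatever would actually resume rendering and recording of an AAVO (or signal OBS externally):

-- Sketch: fire a recording trigger when a specific unit fires a weapon.
-- world.addEventHandler / S_EVENT_SHOT exist in DCS mission scripting;
-- start_aavo_recording() is hypothetical.
local recordTrigger = {}
function recordTrigger:onEvent(event)
  if event.id == world.event.S_EVENT_SHOT
     and event.initiator ~= nil
     and event.initiator:getName() == "Player-1" then  -- the "X unit" from the example
    start_aavo_recording()  -- hypothetical: unfreeze the AAVO and start capturing
  end
end
world.addEventHandler(recordTrigger)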
  12. Hello. I was wondering how various DCS video/stream creators record takes from different F-views of the same exact moment in a match/session, and, suspecting the worst, it seems they really do just rely on replays while hoping they work at all. That also means that if they want multiple angles/F-views of the same thing taking place, they have to go and run a replay for each take. And did I mention the current replay system being unreliable and buggy? But let's say a mission is many hours long: even having to go through it once (albeit with fast forward possible in the form of game speed) just to get a 10-20 second alternative-angle shot is already in itself incredibly inefficient. I can't, or rather don't want to, imagine the effort one has to go through for all of those cinematic videos if they have to rely solely on replays. Something has to be done to improve this significantly, and I have the obvious idea, which also completely bypasses replays but doesn't completely eliminate the practical need, depending on the case, which is just the user's machine horsepower: you'll be limited by CPU and GPU, and also RAM.
DCS should implement support for multiple F-views as completely separate viewports and output rendering windows that can be recorded with OBS simultaneously. If these separate rendering windows can't technically be separate child processes, then at least their respective output audio streams would need to be split in this fashion in order for the recently introduced OBS capability Application Audio Capture to work, so that we can capture the corresponding audio output stream for a particular rendering window or F-view. There would be no need for any additional physical displays or for DCS to know anything about display hardware; this should all be possible on 1 monitor, since the purpose is recording, not necessarily local viewing (although that is possible if you buy more monitors/display devices). With OBS you would capture these windows, and even while in-game (windowed only?) it should work as long as you maximize all the separate windows and then maximize your primary one so that you can play as normal. Alternatively, if this wouldn't work so well, and since OBS can't work with Windows virtual desktops, separation can be done using virtual displays (which require a special driver install from a 3rd party), so you would be able to move the additional DCS F-view output windows to 1 or more virtual displays, which on Windows 10 and later automatically extend your desktop even if you do not have a physical monitor for that additional virtual display. It's there and it just works; you just can't see it, but you can preview it with OBS display capture, and that of course means you can record it, which is the main goal here.
There could also be a rule that a particular DCS F-view can only have one output, or one output window per F-view; this means no duplication, which may be easier on the technical effort of implementing this feature, and I also don't see the need for duplication right now. Even without this rule/limitation, I would still like to see enhanced switching: when you want to switch to the F2 view while you are in the F1 view, for example, and there is another window with the F2 view, that one would swap with the view you were switching from. To repeat, I'm talking about switching F-views while staying in the same output rendering window. You could achieve the same by minimizing and switching the output windows around your multiple (virtual display) desktops, but that is of course the less practical, manual way of doing it that we shouldn't rely upon for primary usage. There would be no need for the additional output rendering windows to have input support and function on their own equally; they would really be pure secondary outputs, and you'd control them from the main window with some added GUI or hotkeys if necessary, or just preconfigure them in scripts before launch, which should make this technically easier to implement.
Performance worries may not be as big because, first, DCS video creators usually have pretty beefy machines in the first place, and second, not all F-views would be equally hard on the processing requirements (take the F10 view for example), and neither would the different in-game content in them: one could be a mostly sky shot, while another could be a mostly sea shot with few units. So this feature wouldn't be exclusive to only the most powerful PCs. Better yet, there could be different graphics settings for these outputs; who says all the additional separate output windows need to be the same resolution as the primary one? That completely alleviates the performance worries right there (a hypothetical configuration sketch follows after this post).
I think this would revolutionize DCS recording and cinematic making in ways I, or we, can't even imagine right now. Video content creators may benefit greatly, not necessarily purely in saving time, but in being able to do alternative-angle shots at all, something they might have avoided completely. There could be recording-station machines keeping an eye on a particular player from multiple angles (F-views), even 5-6, alleviating the need for multiple PCs and multiple clients as spectators trying to keep an eye on tournaments, etc. I would seriously consider this idea especially now, when Vulkan API rendering and multi-threading are in the middle of development, so that it is fundamentally considered in the final design and developed with this in mind for an incredible experience in the end. There's probably quite a bit more I could go into, but I'll refrain from making the initial post too detailed, too complicated and too precise for now; I think I've conveyed the general idea well enough for a good discussion. I should include some pictures/diagrams for better visualization of this idea, but I just didn't want to delay this any further.
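As a purely speculative illustration of the "different graphics settings per output" point, one could imagine the proposal being configured in the same spirit as the existing MonitorSetup Lua files shown in the MFCD guide above. DCS reads nothing like the AdditionalOutputs table below today; every field name here is invented, and the block only shows how per-output resolution and quality overrides might be expressed:

-- Entirely hypothetical; not a real DCS configuration format.
AdditionalOutputs =
{
  {
    view       = "F2";                  -- default camera F-view for this secondary output
    width      = 1280; height = 720;    -- can be far lower than the main view's resolution
    anisotropy = 4;                     -- per-output graphics overrides
    msaa       = 0;
    audio      = "separate";            -- own stream for OBS Application Audio Capture
  },
  {
    view   = "F10";
    width  = 1920; height = 1080;
    msaa   = 2;
    audio  = "muted";
  },
}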
  13. It still works if you click and preview, and then zoom in to open a new tab. Otherwise you have to always click "Display as a link instead"
  14. I think I'm noticing this as well, let me do another comparison
  15. Hi. There was, and perhaps still is, that notion regarding full-fidelity aircraft not being able to be AI-controlled, mainly because it would take so much CPU horsepower that you wouldn't be able to play ... hmmm, perhaps so, perhaps not quite. How technically true is that at all, anyway? Is it just a DCS thing that would need an engine upgrade? Either way, it really should be attempted to make special full-physics AI units, not by having them control a full-fidelity cockpit, but with the full physics and flight modelling, and other things. I think the AIs right now don't work by emulating a human using input controls to fly the aircraft, though perhaps in the future they kind of should, to simulate all the things a real pilot deals with in a cockpit; at the very least there should be pilot-sight simulation on top of what we have now. AIs are most likely cheating in this case by being able to see through all of their cockpit glass at all times; probably it's not even just glass and they can see through the cockpit nose in the forward direction. Hopefully the sight cone is at least circular/rectangular and tilted upward. So I think the AI code is intertwined with the do-this-do-that interfacing with the specific physics model logic, and doing a full-fidelity AI would likely require basically making an AI version for it from scratch; simulating an actual pilot and the inputs would, I think, be significantly harder and perhaps not even an efficient way to program an AI. This is just speculation, but I think this would be a better reason and the real limitation for why there are no full-fidelity AIs. Why do I sound like I'm dismissing the CPU processing requirements? Well, because of a little fact that, at least to my knowledge, nobody has thought of, or rather nobody has pointed out: what prevents us from just setting a limit of one single full-physics AI at a time in a single mission, or perhaps 2-3 depending on the system's hardware (a small sketch of that idea follows below)? Just one would make a big difference where it would be most beneficial: dogfights. In pure dogfights you don't have 500 full-physics AIs running around; the hardware should be able to deal with just one (1), if we somehow get over the other possible limitations I mentioned. There! Solved???
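A tiny illustrative Lua sketch of the hardware-dependent cap idea, with invented names (get_logical_core_count, spawn_full_physics_ai, spawn_standard_ai) and arbitrary thresholds, just to show that the limit could scale with the host machine rather than being a hard yes/no:

-- Purely illustrative; nothing like this exists in DCS.
local cores = get_logical_core_count()           -- hypothetical query of the host CPU
local max_full_physics_ai
if     cores >= 16 then max_full_physics_ai = 3
elseif cores >= 12 then max_full_physics_ai = 2
else                    max_full_physics_ai = 1  -- one is already enough for a dogfight
end

local active = 0
function try_spawn_full_physics_ai(unitType)
  if active >= max_full_physics_ai then
    return spawn_standard_ai(unitType)            -- fall back to the current AI flight model
  end
  active = active + 1
  return spawn_full_physics_ai(unitType)          -- full physics / flight modelling
end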