Everything posted by Worrazen

  1. Can you post a screenshot of the actual VRAM usage? It's probably not normal if no mission or the editor was started. But yes, quitting a mission will not completely free the RAM/VRAM.
  2. Keep in mind that the more memory you have, the lower the speed you will usually be able to run it at. But for DCS it's primarily the amount that matters, so I wouldn't worry about it; forget about XMP, IMO.
  3. Oh, I keep looking at things from a tester's eye; of course it's great. But I would like to point out that for it to look correct, the animations have to be appropriate and synced up. If you have the standard bomb-hits-terrain explosion overlapping the unit-hit explosion, or vice versa, it would look a bit out of place, with different kinds of smoke stacked on top of each other. I'm not sure if this is happening right now, but take a look at the slow-mo. When a bomb hits a certain vehicle head on, it needs its own animation of that unit exploding and being blown up. It's possible to do it another way, but IMO it's probably harder to juggle the timing of multiple animations and smoke effects. The explosion a bomb makes when hitting something directly is probably a bit different from one hitting just dirt, enough for it to be visually distinct IMO. Splash damage, where the bomb hits the ground or air the same as if there were no unit there, is different; of course you'd have a separate animation there, since it has to be timed with the shockwave and then the blast wave, and the shockwave should also impart some g-force on the units, if it doesn't already. The best way would be for each weapon-and-unit combination to have its own specific direct-hit damage/death profile. Yes, that's quite a load of animation work at first glance, but there's no need to do so many animations from scratch for each combination: use grouping/templates and have individual units in a group vary slightly for some aesthetic difference. So all MBTs would be one group/template, all APCs another, all fuel trucks another, based on the shape/size and armor type of the unit, with some kind of parameters choosing what fits into a group. Country differences matter too; Russian fuel trucks don't look anything like the US ones, for example.
On the other hand, I wouldn't be surprised if there is already some real-time randomness or pre-scripted variance in the explosion and smoke animations that isn't particularly tied to a unit and/or weapon type, and that's great; you just can't be as bold with that, since it has to fit the size/shape of the unit. Most units are small enough from high above that this kind of detail doesn't have to matter for now, but it does in cases where you watch with the TGP and other things. It's not high priority, of course. And all of this ground-unit explosion/effect/model/texture/realism/rearm/refuel/repair work will help Combined Arms in turn as well, so it's hitting two birds with one stone. I'm being a perfectionist again; there, I said it for the future's sake. I've had a few years of things going on while getting familiar with DCS and haven't been able to sit down and just enjoy it. I'm going to get back to it from now on; after the A-10C cockpit upgrade lands, I'm going full-blown into campaigns and more.
  4. Well guys, if this was filmed with a GoPro or similar, then unless it was outside of its enclosure (the one with the clips so it can be attached), it's built for waterproofing, which means there is no direct air contact, rendering the audio completely invalid for such a purpose.
  5. But I don't think all the weapons are in on it yet. Because of the complexity, I think each weapon is handled separately, IMO; probably with some grouping and shared properties, but mostly it's all specific to each weapon, so it's probably all WIP. Splash damage isn't that easy either: it's not just distance, it has to work dynamically with terrain, obstacles (and their type), and the armor types of the unit.
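To illustrate that idea, here is a purely hypothetical sketch of my own (the function name, formula, and parameters are invented for illustration; this is not DCS's actual damage model): splash damage as a function of distance falloff, line-of-sight occlusion by terrain/obstacles, and an armor factor.

```python
# Hypothetical splash-damage sketch. All names and the formula are my own
# illustration, not how DCS actually computes damage.

def splash_damage(base_damage, distance, blast_radius,
                  occluded=False, armor_factor=1.0):
    """Damage falls off linearly with distance, is blocked entirely by
    an occluding obstacle, and is scaled down by the target's armor."""
    if occluded or distance >= blast_radius:
        return 0.0
    falloff = 1.0 - distance / blast_radius  # 1.0 at ground zero, 0.0 at the edge
    return base_damage * falloff * armor_factor

# A soft target 10 m from a 50 m-radius blast takes most of the damage;
# the same target behind cover takes none; armor scales it down.
print(splash_damage(100.0, 10.0, 50.0))                     # 80.0
print(splash_damage(100.0, 10.0, 50.0, occluded=True))      # 0.0
print(splash_damage(100.0, 10.0, 50.0, armor_factor=0.25))  # 20.0
```

Even this toy version shows why it's more than distance: occlusion needs a terrain/obstacle query per target, and the armor factor would differ per weapon-unit pairing.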
  6. Sometimes the changes are more significant than the changelog makes them look. SysInfo has been overhauled, great!!! Load times seem better, snappier in some parts, more responsive, and it also loads faster when reloading with the same data already in RAM.
  7. I really agree with that. I think DCS is complex enough to deserve an improvement in that area, possibly even as a standalone utility or integrated into the launcher. Among other things I can think of right now.
  8. Let me say this before anything else, as I can't fit that amount of context into the title directly. The "blame" in the title is meant for what unfamiliar people think when they experience various performance issues in DCS, but also for everyone at large, even me. It's something ever-present, constant for the past many years, so we may be forgetting about it, getting used to it, adapting; we should not. I wrote this partly to remind us about it, but also to mention some good news ahead, as it's the video below that sparked me to write this. When you do blame the industry, though, the blame is real, maybe not directly, but consequentially; this semantics/meaning stuff is complicated, so I won't go deep into it here, but in this case I chose not to put "quotes" on it in the title. We *all* know it was never ED's focus to be an engine developer, so you can't blame something that was chosen not to be a focus, basically. The TL;DR is the bottom paragraph. Also, this thread is a bit more relaxed, a bit ranty but in a good way; I'm actually writing it with enthusiasm, not an angry tone. Okay, the first thing is the CPUs themselves: single-core performance has been disastrously stagnant for almost a decade. Yes, there have been small improvements, but they are ridiculous compared to what we should have by now if the industry had actually innovated, or if we had moved away from this horribly inadequate PC-ATX standard of a little case you can barely fit your hands into to connect cables civilly. I've had an EATX full-tower case for 6 years now and I'm never moving to anything smaller again, and even this isn't spacious enough, not at all; I have SSDs dangling down and had to use strings attached to the top ceiling to hold things in mid-air, because there are no slots left to put things anywhere. This obsession with power saving and form factor has severely slowed down actual performance improvements, IMO.
Every time a new product is released they say "look at what we can do at this XYZ wattage." Okay, sure, whatever; now show me what your CPU does at 500 watts, please! They keep going lower and lower. What if they just stayed at some manageable wattage that most people can afford to cool, like 150-200 W? Is there a law that power has to go down? Nope. Sure, saving is good; yes, the cores draw less when idle; yes, I turn off the PC when I'm not home. But that's about it; the rest is for the lower end, for different use cases, fields, etc. I would have said okay if it were meant for only those segments, but it is not; we're all forced into it. There is nothing for the use case that just wants performance. All that cooling and power is a cost I'm willing to pay, and I have much better solutions to cool and avoid noise anyway, mainly involving the enclosure and form factor, which makes all those arguments invalid, as these things become a total non-issue. The segments aren't really there: once you get up into the "enterprise" segment, it's not a gaming segment anymore; it's workstation only, server only, laboratory only. There is no higher gaming/workstation/everyday-use segment, and that's the biggest problem, because those higher segments don't actually care about single-threaded performance. Also, I didn't say TDP, because TDP has little to do with electrical watts (power consumption). Whatever that standard would be, it could be kept per market segment: say the low end would come with maybe 100 W, the mid range 200 W, and the high end 300-400 W, and for each segment there would be cooling solutions tied to it, all cherry-picked to be a good balance between cooling and performance, and you'd just be done with it. No more fiddling over which cooler and all the drama; if a power standard were picked, it would make things so much easier.
When you increase performance per watt, YOU ARE ALREADY SAVING POWER. But they purposely design it so it lowers the total power consumption versus the predecessor AT THE COST OF SOME PERFORMANCE, and they label it as some kind of power-saving feature. No, it's all PR: they BUILT it with fewer cores, or the cores were smaller; they could have built it bigger. They're not honest in PR. Sure, the AMD Reddit page is more open than I would have expected, but IMO it's about YIELDS: smaller chips give better yields per wafer, so our performance suffers for their maximization of profit. TDP explanation is here: The second thing is the huge inefficiencies created by OpenGL and DX, which overstayed their welcome IMO, partly because of people who talked but didn't do much about it, like John Carmack, for crying out loud. I followed everything he said over the past 10 years, interviews, etc., from way back before Mantle was announced. He would always talk about "coding to the metal" and how consoles are efficient and the PC isn't, and I would be so excited, but he was way too deep into his own stuff and didn't really pressure the industry. He did some things, like pushing for Adaptive VSYNC; that happened because of him pestering NVIDIA about it, and that's the reality of some of this innovation. For what one would expect from a studio like that, Carmack was clearly more invested in his career than in the PC platform; I guess he was only there where it suited his development experience.
Basically, from logic alone, I knew how fundamental the change was, and I didn't know how to program squat, but I followed tech news and read a lot; just from that kind of research I was able to work out that this was a big deal, that it was taking them too long, and I was getting tired of it. And so many of these so-called "industry buffs" out there so ridiculously didn't get it; there was a LOAD of almost targeted spam and hatred against the Mantle API when it was announced and released. https://www.bit-tech.net/reviews/tech/graphics/farewell-to-directx/1/ Notice the DATE: March 2011. So everyone got tired and fed up of waiting, and DICE's Johan Andersson teamed up with AMD. This did not happen out of the blue; as I mentioned, it had been brewing for a long time among the actual programmers in game studios. Johan and a few less well-known others took it upon themselves to break this stalemate, but it should have happened earlier. All of the draw-call, multi-threading, and low driver-overhead stuff are of course major things, but not the only ones; there are huge benefits with Vulkan and possibly other newer APIs on the lesser-known developer side. IMO it's also important for the end-user experience in terms of stability (fewer bugs), quality of a fix (fewer side effects), and response to a bug (how fast a fix can come). A quote from the Wikipedia Mantle API page, said by Firaxis Games: (I contributed to that page myself at the time, so I can vouch for the accuracy; of course Wikipedia is not friendly toward original information unless it's "reported by a reputable 3rd-party source".)
I could go on in depth, but basically, what DX11 and OGL drivers are is hacks. When the GPU manufacturers release a fix, they aren't writing real code the way it should be written; the developer would need to write actual source code in the application to fix it properly, but the older API doesn't allow that access. So the GPU manufacturer has to write instructions that tell the GPU "do this when you run DCS.exe and hit that unit that uses that"... That's why the drivers are so big, with so many scenarios. Of course it's probably heuristics, so it doesn't have to be super specific, but it's just not a quality fix; that's why you get so many side effects and compatibility issues, where some machines work and others don't. With the Vulkan API, when there's a bug in that area, it's fixable by the developer most of the time; there's no need for a GPU driver update, and of course it lets every developer care for their own game, rather than the GPU manufacturer having to do it for every single PC game released. This is where the difficulty comes from: a lot becomes the application's responsibility, and most PC-only developers weren't experienced with this stuff in general, so there's a learning curve to get up to speed. It's a major transition and of course it takes time, but this added more fuel for the stupid anti-Mantle trolls back then. The third thing: the rest comes down to efficient multi-threading, which is up to the developer.
There is quite a bit of room for splitting off things that are splittable. What is splittable? In basic terms: multiple non-interdependent serial workloads (if I got that term right) that are all running on one thread. DCS does have some separation; some things are split in "half", some here, some there. It is better than I thought more than a year ago (I don't think I did a good enough test back then); from what I can see, a quad core should be worthwhile for DCS. Perhaps, if resources are the issue, why couldn't there be some kind of one-time "major upgrade" paid campaign that unlocks it for all once it reaches its goal? Something like this could be used for other major upgrades that pertain to CORE DCS and not to any particular module, at least depending on what the logic/technicals and community think about it. Anything can of course be made into a module if one wants; I do agree that for now the beginnings of a new ATC are part of a carrier module, but when it comes to the whole of DCS, maybe a different kind of model is more appropriate. It's not really an established model, but hey, DCS does things differently, so why not. A major upgrade being too big to be free, this method of compensation would allow the more hardcore fans to help out the rest of the community by funding more. Once a major upgrade of the engine or core content is ready to release, a special upgrade-specific fund page would open where users could give, let's say, 5 to 1000 €. For more privacy, the exact goal amount could be hidden and the amount completed shown only as a percentage. Once the goal is reached, it would release for all for *free*, and both sides would be fulfilled; the upgrade would not have to be forced into a separate module, which is suboptimal both technically (devs) and practically (users) (e.g.
all users needing unit X in the same session). The goal amount would need to be smartly picked; I think it works best if used to cover raw costs (break even), otherwise it might not work if set too high (profit), with many low-end people just waiting and waiting for others to drop the big sums that never come. During the upgrade fund campaign, to remind folks, there could be checkpoints or milestones which would drop more bits of info about the upgrade (a quick YouTube preview of one feature, screenshots, etc.), and it could scroll on the special web page like a timeline... etc. But not crowdfunding; you'd use your same DCS account the same way you buy existing stuff, and I don't think there's a need to bother with any "awards" or "signed copies". So maybe that's how we could get to better perf in that regard, possibly faster. Occasional boosts like this would be welcome so DCS doesn't lag behind. The bottom line is that the home PC platform should be like 10 times more powerful than it is; the industry is kind of purposely shifting focus to the world of holdables, wearables, and cloud stuff. Unless it's a genuine bug, in general you would need to go down a checklist to figure out who a perf issue should be addressed to. But it looks like AMD is apparently looking to break that stalemate with quite a surprising Zen 2 debut, although the reviews are still a bit early. The industry knows it's good, because motherboard prices went up quite a bit; the partners know it's a big deal, and apparently they aren't treating AMD as the budget option anymore. It's still only reaching Intel, not quite going over it in terms of IPC and frequencies, but it's very close, and people didn't expect it. If the Ryzen IPC uplift continues, in addition to increased core counts, then there's a good future ahead, if they start recognizing these types of workloads and giving them equal support, rather than treating them as if they don't exist.
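A toy illustration of that "splittable workloads" point (my own sketch; the task names and timings are invented and have nothing to do with DCS's actual code): independent tasks that share no data can run concurrently, while a dependent chain cannot.

```python
# Sketch: independent workloads can be split across threads; a dependent
# chain is inherently serial. Tasks and timings are invented for illustration.
import time
from concurrent.futures import ThreadPoolExecutor

def independent_task(n):
    time.sleep(0.05)  # stands in for some self-contained unit of work
    return n * n

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(independent_task, [1, 2, 3, 4]))
parallel_time = time.perf_counter() - start

# Four independent 50 ms tasks finish in roughly 50 ms, not 200 ms,
# because nothing in one task depends on another.
print(results)  # [1, 4, 9, 16]

# A dependent chain gives no such option: each step needs the previous
# result, so it stays serial no matter how many cores are available.
value = 0
for step in [1, 2, 3, 4]:
    value = value + step  # must wait for the previous iteration
print(value)  # 10
```

That second pattern is the kind of workload where only faster single-core performance, or restructuring the algorithm itself, helps.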
  9. That spread across the cores in Task Manager's CPU view (or similar tools) does not necessarily indicate the parallelization of the software in question. This is due to the normal default behavior of most thread-scheduling algorithms, which do not keep threads tied to a particular core; it isn't a feature or a purposeful thing, it's a phenomenon, and without it you would probably get slowdowns in many situations. I initially called this "thread bouncing". That's the whole point: threads are bouncing all the time, but those of us who aren't the makers, or really familiar with this, don't know the little fact that only rarely is a certain core responsible for a set of threads, and that this bouncing happens at a very fast rate inside the CPU. Because the per-core utilization graphs refresh significantly more slowly than these changes happen, and the sample data is averaged out, this gives the appearance of "two or more threads working on multiple cores" (work evenly spread out), while in reality it may just be one thread that was "thread bouncing" between two or more CPU cores. So this kind of data presentation should not be used to gauge the parallelization of software at all, as it will be inaccurate most of the time.
A different kind of data presentation would be needed, which does not exist in the Task Manager so far. Unfortunately there has been little to no effort from the industry, such as MS and the CPU manufacturers, probably because the industry actually benefits from customers thinking that everything they run on their new 8-core machines gets magically "parallelized". There is no such thing, and I believe they abuse this confusion in marketing and PR when they say "oh, that shows your CPU is efficiently spreading the load". It is spreading it, yes, but if it's only one thread, that one thread isn't going to run any faster in practice (there is a separate debate about whether it makes some small difference; that has to do with caches and architecture type). An application needs to be built for parallelization (multi-threading), depending on the complexity and type of work even from scratch, for a new multi-core CPU to be of any actual worth. Then it's a question of how well that parallelization is implemented, and, just as important, how parallelizable the workloads are that the application needs to reach its goal. The per-core CPU utilization graphs do show accurate information in a technical sense, depending on their refresh/sample rate and how much rounding/averaging is used, but if the person viewing them is not fully qualified in this area and doesn't know exactly what they're looking at, they will give a false impression; they will be wrongly perceived. Most end-users who are not industry insiders expect these CPU utilization graphs to also show activity on a per-thread level; they do not show anything on a per-thread level at all. I wouldn't be surprised if even popular reviewers are fooled by this.
I'm not saying it's necessarily MS's fault, but given the (IMO massive) confusion out there, I think they should provide a warning or some kind of disclaimer by adding a note directly into the Task Manager's CPU view, visible by default, with an option to hide it or truncate it to an "i" info icon. One or two sentences wouldn't hurt; people need to be reminded about this. But that is only a fast, cheap remedy; the real solution is to actually provide another type of graph. There is another behavior, described as "core preference", where threads that are very active most of the time have a tendency to stay on a particular core for that session, or at least for some amount of time. Whether this is only a scheduler thing or also an application-software thing, I don't know for sure, but it's what makes one core of a CPU stand out in the Task Manager at 100%: there's no need to bounce that very active thread; rather, the lesser threads bounce off that core so the one thread has all the space on that one core. That's the idea, although it may not always be the case; there could be a number of factors I have no idea about. Scheduling is a complicated thing, and AFAIK the code is proprietary, at least on Windows. I haven't gone deeper into that area since I got to the bottom of the local issue and turned my focus back to DCS, but I might go and learn more about thread scheduling in depth in the future: https://docs.microsoft.com/en-us/windows/desktop/ProcThread/scheduling-priorities This is normal; DCS isn't that parallelized yet, but it's more than I thought over a year ago, or it has changed since. When single-threaded operations are required for the work at hand, SMT (HT) will generally not help.
It will only help if there are many other smaller threads, which can move away so the most important thread has the most resources available on its physical core (I can't say one particular core, since the scheduler may bounce things around, of course). Also, with Process Lasso, I recently learned one thing that should have been obvious (others here may have known it much longer): there is a CPU-affinity trick for SMT (HT) where you can "disable" it per-process without having to do it in the BIOS. All you have to do is disable half the cores, alternating ones (CORE 1, CORE 3, CORE 5, CORE 7); that way you prohibit the scheduler from putting the threads of a process onto multiple "hardware threads" (logical cores) on the same physical core.
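A small sketch of that selection logic (assuming the common layout where logical CPUs 2n and 2n+1 are SMT siblings on physical core n; that's typical on Windows but not guaranteed, so verify the real topology on your own machine):

```python
# Sketch: pick one logical CPU per physical core so a process can't have
# two of its threads scheduled onto SMT siblings of the same core.
# ASSUMPTION: logical CPUs 2n and 2n+1 are siblings; tools like Process
# Lasso or coreinfo can confirm the actual topology.

def one_logical_per_physical(logical_count):
    """Return every other logical CPU index (0, 2, 4, ...)."""
    return [cpu for cpu in range(logical_count) if cpu % 2 == 0]

def affinity_mask(cpus):
    """Bitmask form, as used by affinity APIs (bit n = logical CPU n)."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

cpus = one_logical_per_physical(8)  # 8 logical = 4 physical with SMT
print(cpus)                         # [0, 2, 4, 6]
print(hex(affinity_mask(cpus)))     # 0x55 (binary 01010101)
```

In Process Lasso you would just tick the alternating cores in the affinity dialog; the mask is what that selection amounts to under the hood.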
  10. That's what I was saying... you need to look 10 years ahead, so it's better that all the core engine and tech gets done ASAP, so you have a base that is expandable and updatable without having to redo it all, which avoids situations like this. There should be infrastructure in place now for all the HDR/WCG stuff, so the textures are ready for when that reaches users' monitors; textures could then be developed earlier and not become outdated so quickly.
  11. I'm not sure whether this is related to the supposed terrain-engine bug that was officially mentioned as being behind the VR performance issues, but here is an example from this Normandy run where it happened. Since I got some new goodies in the summer specials, I went and did a quick test, but right off the bat I noticed this and fired up the tools. This isn't from the start of the gameplay; it's some minutes in, after a few such stutters had occurred. I was flying the Spitfire on a generated mission, smaller than usual. Just to explain for interested people who aren't familiar with such analysis tools: the busiest DCS thread at the top is further broken down to show the amount of work each module spent in that thread; that's what all the additional colors at the top of the columns are showing. You can see when the FPS drop happens in the GPU Activity graph below; there's a big change above it. First, the other CPU threads don't happen to do much of anything; that isn't necessarily the problem. Even if the other threads were as active as before the FPS drop, it might not have made a difference; it's all about how far over the budget of one core a thread goes, and how much of that work is required for the engine to run. The other threads probably depend on the main thread to give them work, while the engine deals with something pretty big in edterrain4.dll. The important part of this graph is that the main thread completely saturates one of the cores on this particular CPU, which in this case is a quad core without SMT/HT enabled (100% divided by 4 is 25%). But it's even more important to note, when looking at the main thread in per-module mode, how much work edterrain4.dll is doing when the FPS drop happens compared with the rest of the time; that proportion is quite big.
Note how much edterrain4.dll was working at the time, and the complete lack of other modules' work in that same thread; edterrain4.dll is the highlighted dark-red colored bar (which is why the other colors look a bit greyed out). The fact that the other threads are stalled and not doing anything during that time reinforces the suspicion that this is a bug and not some calculation required for proper simulation; if it were, the other threads normally wouldn't be stalled. Of course, that alone wouldn't prove whether it's a valid amount of calculation, but these are some of the ways one can figure out what is a valid slowdown and what is a bug/optimization issue. One thread can only go up to 25% here, as that's where it saturates one hardware core on this CPU. These graphs stack things on top of each other, so another thread that can use other CPU cores is shown stacked above. I could have removed those from the view, but for this purpose it's necessary to keep them for proper context; it depends on what is being analyzed, so it may sometimes be okay to remove things for clarity and only look at one area. This is of course relative to the DCS.exe process, as all activity from the other 30+ processes has been removed from these graphs, but they were all idling in this test (no recording). Now, this graph does not include a per-CPU-core column; this is on purpose. It's where the thread-bouncing thing comes in, but it doesn't come in literally...
What we do here is simply ignore the bouncing (scheduling). If the scheduling is correct and as optimal as it can be, it won't matter how much each thread ran on CPU0 or on CPU3 during any of this, because one thread can only run on one CPU at a time, and other threads won't be in its way; in theory it's the same as if these threads were on their own CPU cores. Let's leave those small "load balancing" cache/optimization differences aside for now, because that isn't a settled debate and the industry has nothing to show, only a bunch of claims. Showing per-CPU data would make the graph far harder to interpret, and much more complicated for no reason. All that crazy brainstorming I did a year ago in that thread (and I have a few regrets about how it went) was kind of needed just to learn how to use these analysis tools and how to present data properly, so it accurately shows the relevant area we're trying to troubleshoot. Per-CPU-core analysis would of course be a good way to look for scheduling issues, where quite-busy threads get scheduled onto one CPU; none of that would be DCS's area of responsibility, though, unless some kind of freak bug were messing with the OS. To remind everyone of that point: that would all come down to some combination of hardware, firmware, manufacturer, drivers, and the operating system. The FPS doesn't just drop, it goes down to zero; this wasn't just a stutter, it's quite a freeze, but it's an extreme case. Usually they don't last as long or aren't as severe, so I caught the worst example; you can see there's a smaller one right after it. It happens roughly every 2-3 minutes, though sometimes more than once a minute, but later it stopped and went 20 minutes straight without anything; it might be something specific that some units do when interacting with terrain, when there was less action. This is the 2.5.4 Release version...
which is current at the time of this test and writing. The bug might have been fixed in 2.5.5 OB, but I don't know yet; I might consider switching when I do more testing, but I first want to finish more stuff queued on Stable. I don't have space for both versions (I could make space, but don't want to bother with it, as I had big motherboard SATA port issues, and the PC is full of disks anyway). If it's related to the VR one and already fixed, then no problem; this is just for reference, and for anyone interested in seeing how it looks in the graphs. EDIT: THIS IS NOT VR - THIS IS NOT VR - it's 1440p on high settings.
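The "one thread saturates at 100% divided by core count" point above can be demonstrated with a tiny experiment of my own (nothing to do with DCS): a process with a single busy thread accumulates CPU time at roughly one core's worth, which on a quad core is a 25% ceiling of total capacity.

```python
# Sketch: one busy thread can consume at most one core, so on an
# N-core machine it tops out at 100/N percent of total CPU capacity.
import threading
import time

def one_thread_ceiling(total_cores):
    """Max share of whole-machine CPU a single thread can ever show."""
    return 100.0 / total_cores

print(one_thread_ceiling(4))  # 25.0 -- the cap seen on the quad core above

# Measure it: run one busy thread briefly and compare the process CPU
# time consumed against the wall time that passed.
stop = False

def busy():
    while not stop:
        pass

t = threading.Thread(target=busy)
cpu0, wall0 = time.process_time(), time.perf_counter()
t.start()
time.sleep(0.2)
stop = True
t.join()
cpu_used = time.process_time() - cpu0
elapsed = time.perf_counter() - wall0

# cpu_used / elapsed comes out near 1.0: one core fully used, no matter
# how many cores the OS bounced the thread across in the meantime.
print(round(cpu_used / elapsed, 1))
```

This is also why per-core graphs mislead: the OS may have spread that single thread over all four cores at ~25% each, while the thread itself never ran any faster.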
  12. My latest findings show DCS does use more than two threads at once under certain circumstances, and this happens all the time during gameplay, so you'd better bump the affinity mask up to 3 cores. And if you have SMT/HT disabled, then you might not need an affinity mask at all, unless, as you said, it's for another reason, like keeping recording software at bay; for example, when I did some tests I put DCS on cores 0, 1, 2 and OBS on core 3.
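For reference, affinity masks can be read the other way too (standard convention: bit n set means the process may run on logical CPU n; the particular core split is just the example from this post):

```python
# Decoding affinity masks: which logical CPUs does a given mask allow?
# Example values match the split described above: DCS on cores 0-2
# (mask 0b0111 = 7), OBS on core 3 (mask 0b1000 = 8).

def cores_in_mask(mask):
    """List the logical CPU indices whose bits are set in the mask."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

print(cores_in_mask(0b0111))  # [0, 1, 2]  -> DCS
print(cores_in_mask(0b1000))  # [3]        -> OBS
```

Since the two masks share no set bits, the game and the recorder can never contend for the same core, which is the whole point of the split.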
  13. It's even possible the DCS dev team was doing something with thread affinity in response to our discussions about optimizations, and it may have had an unintended side effect; just speculating. I think people really need to check with Process Lasso what kind of affinity is set on all of their other programs/games, so we at least know whether or not it's something DCS-specific.
  14. But you need to understand first that you have an outsider's viewpoint. You assume "DCS is stuck in a niche"; if the developers and community think the way things are is right and doesn't need changing, then that would be an invalid assertion. And what we're talking about here is a standalone game (MAC) which is intended for the market you're describing, so if MAC fills that void, how is the community being split DCS's problem? If it is designed that way, then from DCS's point of view MAC does not exist, and vice versa; DCS is just sitting there minding its own business. So when standalone MAC releases, it's only your call what perception of DCS you form, and I wouldn't jump to conclusions blaming either one based on that perception when comparing them. They're split exactly so that they are not compared against each other. You can compare them at a feature level, just in terms of what offers what, but the usual mainstream comparisons are always biased toward this all-in-one idea; the whole entertainment/social industry is obsessed with one-solution-for-all. Many review sites, on anything, don't get that they may be biased from the get-go, because they always search for the ultimate-perfect-best choice, and there is no such thing; it's all split into sectors, and you can only objectively compare examples within exactly the same field, scope, and purpose. None of that takes into account the capabilities and resources that were even available, or happened to be allocated, and other real-life factors, which is a ball of 1000 factors and circumstances, as well as the emotional attachment (motivation) of the people who made it. A reviewer is truly just looking at the peak of the mountain, which is what matters to the end-user; that may be a 100% objective, scientifically correct product review, but it's really artificial, not human, and could even be part of the reason why, in everything out there, some parts of the
community are exceptionally tough; not necessarily completely wrong, but just rough. I'm not sure how big the split is technically, as full details weren't released, but it will be treated as a separate game install, as I understood it, and I think that's better in the long run. Will it be better for DCS? That's another story. Will people who were in DCS all the time flock to MAC? Probably not; initially, to check it out, maybe. The people who will stay with MAC longer or forever are the people who were never truly DCS fans, or couldn't get past the first huge wall of ice when first introduced to it. Others forced themselves to stay with DCS while waiting for something like MAC. I'm not defending DCS, just saying I think the reasons lie elsewhere first, before DCS. Maybe people don't really know what they want at first, and once offered many choices they finally find their place; you might have been a MAC fan from the beginning, it just didn't happen to exist before. Practically, when reality comes in, resources and so on, that may have an impact on changing that relationship; if a lot of people disagree with this separation, then I guess it could have an impact later on, but I'm not sure what the problem would be right now. Multiplayer compatibility, sure, if it's possible, why not, but it may not be philosophically correct, the two being simply incompatible in terms of fairness; no result would be valid in such a scenario, unless it's some kind of limited cooperation where applicable. But if you were a MAC fan and not a DCS fan, then why would you need DCS anyway? Because people want to fly together?
You can't really approach this issue from a server admin/moderator standpoint, where the whole point of management is to grow in size and keep everyone happy. You can't just combine and morph incompatible products/ideas in order to inflate the numbers of a particular community, which seems to be this obsession out there, as if people are playing Civilization in real life, building their communities. As games become more real and more expansive in everyday life, more social, it seems people are literally building their own cottages and castles within the gaming ecosystem. Gaming may have become too social. You don't really need a big popular server to succeed in life; the tech giants have created this idea that success means a huge number of followers. Don't take that bait: normal numbers should keep you happy, and you don't have to prove to your friends, or yourself, who's going to have more likes or more karma points. EDIT: BTW, I just offered a viewpoint; I'm nowhere near being a DCS guru myself either.
  15. While I do believe the Vulkan API will be a big deal, it won't be everything, but it may well be enough to buy time; there's no need for everything to happen all at once. @Nagilem - When we say CPU we really mean "the CPU side of DCS"; there's confusion because sometimes the distinction isn't necessary. These are the kinds of communicational shortcuts I'm guilty of as well, since it can get complicated. Don't get too hyped up, really; the focus was on the game itself. Projects in the middle of development probably have to be carried out; they can't just stop, fire people, and hire 10 programmers all of a sudden. Every season of new players, people ask for this. I was the same, we all know it, so let's not dramatize and let it play out smoothly. Besides, if the devs already know this, they don't need to be banged over the head repeatedly. Not much was stated because they don't know the what/when/how-much; the statement was "it will change" AFAIK. Your thread's point is completely correct: there is a limit to how much this can help, and the CPU industry's stagnating single-core performance is also worth noting. Making the game multi-threaded, and making that worth it, requires rewriting so much code that it's effectively rewriting the engine; that's the industry's word on it, nothing DCS-specific.
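As a toy illustration of why retrofitting multithreading is invasive (all names here are hypothetical, nothing from the actual DCS codebase): a frame loop that updates subsystems one after another can only be split across worker threads once those subsystems stop sharing mutable state, and untangling that shared state is usually the bulk of the rewrite.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical, independent subsystems; each must own its own data
# for the parallel split below to be safe.
def update_physics(dt):
    return ("physics", dt)

def update_ai(dt):
    return ("ai", dt)

def update_audio(dt):
    return ("audio", dt)

SUBSYSTEMS = [update_physics, update_ai, update_audio]

def frame_serial(dt):
    # Classic single-threaded engine loop: one subsystem after another.
    return [update(dt) for update in SUBSYSTEMS]

def frame_parallel(dt, pool):
    # Only valid once subsystems no longer read/write each other's state;
    # getting to that point is what effectively forces an engine rewrite.
    return list(pool.map(lambda update: update(dt), SUBSYSTEMS))

pool = ThreadPoolExecutor(max_workers=3)
serial = frame_serial(0.016)
parallel = frame_parallel(0.016, pool)
pool.shutdown()
```

This is only a structural sketch; in a real engine the hard part is that `update_physics` and friends would all touch the same world state, which is exactly what the serial loop quietly relies on.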
  16. Probably not; they wouldn't invest in something like that only to stop in the middle. The last update was about the shader work, and that's a large part of it.
  17. I've seen many bugs I reported get taken care of, though over half of them didn't receive a reply or acknowledgement. Yes, it's a business, but I also think that in most such software development it's more of a "mind my own business" thing, of not getting too personal; it may just be one way of dealing with a large global audience. Funny that I say stuff like this when I'm guilty of some things on here as well, and I'm ashamed now that I was a bit rough for the wrong reasons at first. In general, developers need calm to do their job. They aren't your neighbour, your fitness coach, or your sidekick for cleaning the kitchen (I could use one right about now), so I think it's also about not wanting to get personal with the issue; even valid criticism can be demotivating and stressful, so keep that in mind. Plus, when they do talk, most of them talk on the Russian forums AFAIK. You want more spent on the game, not on a team of PR people making the forum look nice, right :) I do technically somewhat agree with this thread, just not entirely and not in the exact same way, so I'm avoiding ticking any option.
  18. You guys would need to provide more details for this to be taken up, because if it's too vague it could be anything. Browsers take a lot of RAM even when not playing video, and it can easily be confusing, so we need good data: take screenshots of the Task Manager Memory view, the whole thing, so we can see Commit as well, and also check how big the pagefile is.
  19. Mention the idea on the DCS Wishlist.
  20. Some LOD details change slightly, as if the zoom level were changed, when units are selected; it pops back once nothing is selected. I forget now which way is the correct one, selected or unselected. It's subtle but visible at higher zoom levels up close: the trees, buildings, certain features on a unit, not the whole object, just part of it, LOD. EDIT: Actually, the zoom level may be partially changing under the hood when something is selected, without affecting the camera, thus giving the impression that the LODs were changed.
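The guess above, that selection nudges an internal zoom factor that feeds LOD selection without moving the camera, can be sketched generically. The thresholds, names, and formula below are made up for illustration, not taken from DCS:

```python
def pick_lod(distance, zoom, thresholds=(50.0, 200.0, 800.0)):
    """Pick a LOD index: 0 is highest detail, len(thresholds) is lowest.

    Perceived size grows with zoom and shrinks with distance, so a
    hidden zoom change moves objects across LOD thresholds even
    though the camera never moved.
    """
    effective_distance = distance / max(zoom, 1e-6)
    for lod, limit in enumerate(thresholds):
        if effective_distance < limit:
            return lod
    return len(thresholds)  # beyond the last threshold: lowest detail
```

With a scheme like this, the same object at the same camera distance changes LOD when an internal zoom factor is bumped, e.g. `pick_lod(300.0, 1.0)` and `pick_lod(300.0, 2.0)` land on different LOD indices, which would match the "pop" observed on selection.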
  21. I guess it went past me; I thought the first update didn't have it, and there were some speedups even without it, but I guess I wasn't paying attention.
  22. So you OB guys didn't get the VR improvements yet, right? Chill, chill. We're in a season of performance boosts for DCS, with all the future stuff announced, and I'd just let them do it fair and square; no need to lose sleep over it.
  23. I wouldn't bump it to 3.0 with just the Vulkan API and/or the Dynamic Campaign ... not without some CPU multithreading improvements, though, plus the way modules are integrated, perhaps the livery submodule idea, file-structure and modding improvements (isn't a "mod folder" supported already?), proper FLIR rendering, ... just saying. I just don't like fast versioning; it really makes things dull and takes the credit away. Just look at Chrome/Firefox: the version numbers became meaningless. Hey, I just installed Chrome 252.32.4282.39.78179 ... yeah, whatever.
  24. Actually ... I keep figuring this out and it keeps going, and I keep learning more and more, so back to square one: I won't be doing any quick screenshots. Three times now I've wanted to post just a quick example, but in the cause of doing it right, I keep raising the bar of what counts as an okay, correct-enough "quick example with just basic details", and I keep getting back to the point I reached earlier: I really shouldn't be doing anything quick with this. I wanted something quick just to add to this thread's discussion, but by the time I finish it'll be way over scope, and it'll be more suitable for its own proper forum thread. The whole point of this will be just tinkering with where some optimizations could be done, not that the developers don't know, but as something to fiddle with and talk about while we wait; and what a good way to learn these tools, so I can use them on all the other non-DCS stuff when debugging/troubleshooting in general.