Everything posted by Bosun

  1. [Photos located a few posts down] Air-to-air refueling is by far the most difficult task in the game for me. It takes so much focus and control calibration that I managed to plug the basket maybe 4-5 times, for a total of 5 to 7 minutes, out of my first 20 hours of trying in an F-14. While I am getting better at it and can reliably plug the basket now, there remain some issues with the bounding box of 'Pre-Contact' for drogue engagement being too far forward and inboard for a proper approach. Also difficult is when the tanker suddenly turns to a new course with no warning while I'm about to hook up. Would it be possible to have the tanker make a friendly call when it is about to turn? The main issue, though, is when I'm slowly approaching the basket, less than 5 ft away from it and on track, and they tell me to return pre-contact and retract the basket, presumably because I've 'taken too long.' That's utterly maddening when I've spent an extra few minutes slowly notching the plane in and finally getting it on track at a steady speed, only to have them retract it. So my wish here would be that the area wherein the tanker 'is aware' of a plane lining up after the ready call be increased or adjusted to reflect the ideal position of the aircraft prior to pre-contact, and to reflect a more realistic and logical application of "Cleared Contact" calls, so that aircraft holding station just aft and just outboard of the drogue's final extended position are included in that bounding box. Also - expand the area, and the timing, of 'return pre-contact' triggers so that when the tanker suddenly turns, for example, it doesn't knock me out of pre-contact simply for falling a few more meters behind while adjusting to the turn. In a real scenario, would a tanker crew really retract the drogue each time an aircraft that was actively attempting to refuel fell outside the radius of the tanker's wingspan or tail length?
I'm willing to admit they might, as I'm not an actual pilot - but I doubt they would. It's trouble enough to practice this skill without the tanker making it unnecessarily difficult, so perhaps there could be 'Difficulty Settings' for the tanker that implement different pre-contact bounding box distances and behaviors. 1. Easy Fueling - the pre-contact bounding box is larger, and the basket or boom stays deployed until an abort is called or a quarter-mile of separation from the tanker is gained. The tanker gives a friendly call on the channel when it is turning - 'Coming left to new heading XXX,' for example. 2. Realistic Refueling - the bounding box is smaller, but still more appropriately placed than it is currently; the distance at which the drogue or boom remains deployed is shorter, as is the current 'timer' that exists for when the tanker crew gives up on you. (lol.) Finally - the tanker does not give any warning for heading changes. I think if it were possible to implement that, it might make learning to refuel a little more palatable, and make realistic refueling feel a little more authentic and organic.
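The two presets proposed above could be sketched as plain data driving a couple of checks. This is a toy illustration only - every name and number here is hypothetical, not a value from DCS:

```python
from dataclasses import dataclass

@dataclass
class TankerPreset:
    precontact_box_m: float    # half-width of the pre-contact bounding box
    retract_distance_m: float  # separation at which the drogue/boom retracts
    patience_s: float          # time before the crew calls "return pre-contact"
    announce_turns: bool       # friendly "coming left to heading XXX" call

# Hypothetical values for the two proposed difficulty levels
EASY = TankerPreset(precontact_box_m=60.0, retract_distance_m=400.0,
                    patience_s=300.0, announce_turns=True)
REALISTIC = TankerPreset(precontact_box_m=25.0, retract_distance_m=150.0,
                         patience_s=120.0, announce_turns=False)

def should_retract(preset: TankerPreset, separation_m: float, elapsed_s: float) -> bool:
    """Retract only when the receiver has truly given up: too far away,
    or taking longer than the preset's patience timer allows."""
    return separation_m > preset.retract_distance_m or elapsed_s > preset.patience_s
```

The point of the sketch is that the whole wish list reduces to a handful of tunable numbers per preset, which is why a difficulty setting seems cheap to offer.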
  2. Not sure if this has been brought up - but more fidelity in damage modeling would be cool to see. I realise this isn't a priority, since a missile hit has the same effect on a plane no matter where it strikes - but it would be neat to see the actual body of the plane damaged and broken, instead of the insta-wing-pop-off no matter where the missile hits. There is also rudimentary modelling of bullet damage, but it would be neat to see more systems independently affected by bullet strikes or missile damage. Having a wing partly sawn off, for example, instead of it breaking cleanly at the root... or having the back quarter of the jet mangled and damaged from a missile, instead of just the wings falling off. This game has a lot of immersion, and this is one area where it breaks for me sometimes.
  3. I've been having an issue where, if I spawn on a carrier in the cat position (hot-start), I cannot set destination waypoints. I can make a map marker, and then select "From Map Steerpoints" and select my waypoint, but the DEST display doesn't vector me towards it. Am I doing something wrong with them perhaps?
  4. Thought I'd add to the discussion how cool it would be if Jester were an actual AI program. Each pilot has certain characteristics to our flight and play style that Jester would begin to adapt to. For example, with a LANTIRN pod, I always fly a right-hand circle around the target area. What if I could just tell Jester to 'Search for Targets' without any previous input on where to start looking, and, because the AI has learned that I normally circle that direction, he begins his search in the direction I'm likely circling? What if Jester learned that when I flip master arm on, I'm likely going to want him fenced in on countermeasures, and he prompts me by asking which mode I'd like? Or if I've already flown a scenario a few times, maybe he'd confirm he's setting it up the same way this flight? What if I'm prone to forgetting to raise the flaps on takeoff, and one day he suddenly says, "Don't forget the flaps" as we lift off? None of these would be programmed responses. They would be the AI program learning how to prompt me toward the outcome I most often try to engineer. We're a long way from that, but imagine if, on return from a strike, the AWACS pre-emptively called you to check your fuel state and then informed you it had already vectored a tanker to intercept - just because it had learned that the strike area was a long way out, and several planes that flew that route before you had called for tankers halfway back? No one told it to do so; it just picked up that this was common, and incorporated that information into how it checked in on the planes it was tracking, to help generate the most desired outcome it had been trained on. This is where AI can get results of a much higher fidelity than hard programming could ever approximate.
Say the AWACS example above happened once, and a tanker diverted to help a plane, at the cost of another aircraft that was vectoring towards it losing the ability to catch it and running out of fuel - or having to expend too much energy catching up to it. The AI would begin to factor that negative result into its continued calculations, and the next time it went to divert a tanker somewhere, it would take other aircraft and their states into account before doing so, and possibly choose not to vector based on that evaluation. Now you've got a very high-fidelity model of flight direction and command, and it would only get more complex and dynamic as time went on. To the point, even, where one day we may ask, "Why is the tanker not responding?" The answer wouldn't be 'Oh, it's a bug.' The answer would, at that point, more likely be a real-world answer: "They've probably got more info than we have, and are managing a bigger fish than our situation represents." Imagine the day when you can curse the AI command chain for failing to prioritize you, for very real-world reasons of difficult prioritization! High-fidelity sim, here we come! haha.
  5. While that is true, somewhat, it fails to address an actual human's ability to generate new parameters on their own, without programmer input, based on changing information - to 'think on the fly' and re-evaluate strategy and approach through subjective analysis born of experience. Or the fact that the 'finite' number of parameters we operate under is much higher, and is structured under hierarchies of priorities that we continually and dynamically reshuffle as the situation changes, based on that same subjective analysis. While there are programs that have approximated this, the depth to which you would need to build the logic trees to really make this approach work well is not reasonable or feasible for most development teams. In a turning dogfight, a human can 'assume' or 'guess' what an opponent pilot may do, based on the evaluated energy state and the previous maneuvers they've observed, and fly to better take advantage of that assumed strategy. The computer-controlled pilots cannot 'think ahead' like that. They only ever react to what you have already done. None of their parameters include planning out iterations of what you may do and choosing the best one based on their own subjective analysis. In a BVR situation, a computer can only intercept the course you're on. It cannot know, nor even venture a guess, that you're taking a circuitous route to avoid interception somewhere else, or that your intended target is not in your flight path. That means it can only vector toward your current intercept, and cannot actually think ahead and simply be at your intended target to effectively stop you.
You can program computer-controlled planes to patrol a target when a plane gets within "X" distance of a spot, as with triggers in the Mission Editor - but you cannot get a computer pilot to interpret the routes it's seeing, surmise likely target destinations, and change its flight goals or mission parameters to reflect that new information... until you do something it can react to. Meanwhile, AI programs are fast learning to look at sets of parameters and previous outcomes and build ahead to generate outcomes that will likely be desired, but which have not been programmed as responses by any human input. I.e., they're learning to think ahead, and to create profiles of action or output based on what the likely desired outcomes would be, drawn from 'learned' or 'trained' experience. That's a defining characteristic of AI that the computer-controlled aircraft in this game do not have, and likely will not have any time soon. Computer-controlled example: you are flying north to reach a target to the east, to approach it from the north rather than the west. Each time you turn towards the target after vectoring to the north of it, your flight path falls across the target area. This triggers the computer pilots to react to you, and they vector from the target area towards your plane, as they have been specifically programmed to do based on that criterion. They cannot deviate from that programming if there is no further programmed function allowing them to. No matter how many times you fly that path, the computer pilot will never react until your flight path crosses that trigger-defined airspace within a certain range. AI example: you are flying that same flight path described above.
The 2nd time you fly it, the computer has 'learned' from your previous example that being 'to the north of it,' regardless of your course, may indicate that you intend to attack, and the planes from the target area vector towards you within a certain range of the designated airspace they're protecting, regardless of your course. The 3rd time you fly the route, the AI has picked up that it is your specific plane attacking, and as soon as you come in range of their intercept fighters, they vector toward you - even though you're still flying north, not approaching the target, and not within any range of their designated airspace. No one has programmed the AI to respond this way; it has simply generated the responses from an over-arching goal to protect the target in the most efficient way possible. This is much more similar to how humans set and think about goals, and it mimics what an online human player might do, using their experience playing against another human pilot. That's the difference between what real AI would be, versus what we have now - computer-controlled.
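The "computer-controlled" half of that comparison really does boil down to a single fixed geometric check, along these lines (a toy sketch; the zone, radius, and tuple layout are invented for illustration):

```python
import math

def within_trigger(own_pos, zone_center, radius_km):
    """Rules-based behavior in a nutshell: the defenders react if, and only
    if, the intruder is inside the trigger-defined circle right now.
    Past passes, inferred intent, and likely destinations play no part."""
    dx = own_pos[0] - zone_center[0]
    dy = own_pos[1] - zone_center[1]
    return math.hypot(dx, dy) <= radius_km
```

However many times you fly the feint, this function returns the same answer for the same position - which is exactly the "never reacts until your path crosses the trigger airspace" limitation described above.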
  6. Edit - nm. DLC toggle and adjusting up and down: I'm not sure how, but my DLC up/down had been mapped to a joystick switch that threw it all the way up when I toggled it. Fixed.
  7. Discussion? Ok. That trailer - they were flying F-4s. And I saw a 15E in there. You're all thinkin it. I'm just sayin' it. "Soon."
  8. Except that when Steam itself has authentication issues, you can't play any of your games at all - not even offline. At least with DCS, if they have issues, I can still get it to run offline and fly solo. I will never, ever return to Steam as a game platform if I can get the game directly from the developers. I've been burned by Steam too many times.
  9. I hear you on the navigation side. As others have commented, VoiceAttack and VAICOM are both very good options to mitigate (though not entirely eliminate) that effect. 'AI' is a general term commonly used. Prior to the modern day, I wouldn't have been bothered by it. In recent years, however, with the rise of actual-form AI algorithms, I think it is worth noting the differences, and perhaps coming up with a different term for the style of computer-controlled unit programming you see in most games. That is only due to the growing prevalence of true-to-form AI, and the ease with which the public increasingly associates that term with certain styles of algorithmic programming, and possibility. In short, we should consider evolving the terms we use, based on the new and actual AI that is being created and used, so that realistic ideas of its capabilities are implied, and inherent, in its name, and fewer people will have a disconnect over why it isn't as intuitive as actual AI. I can call a house fly a bird all day long - until an actual bird shows up. Then I need to re-evaluate how I label it, so that when I speak of it, the reader or listener has the proper context in which to place their frames of reference. (Crude analogy, sorry!)
  10. Yes - because a pilot continually evaluates as the situation progresses, whereas the computer-controlled aircraft can only re-evaluate within specific parameters, at a much lower fidelity than the human brain, due to its lack of actual awareness and the limited fidelity of the inputs it is given to react to. For example, a computer-controlled aircraft cannot look at the route you're flying and make judgement calls about what it believes your intended target will be, or your likely reaction to being engaged. It can only respond to what you do. It cannot generate its own behavior patterns based on subjective analysis, the way a real pilot can. Nor can it use a black-box AI algorithm to 'train' its responses over and over and approximate that.
  11. I think what you lubbers are referencing is the difference between 'swell' and 'wind-waves,' which are both valid parameters. Swell can be quite large, but still hits limits, without wind. Wind-waves can be quite choppy, but still hit limits, without swell. They are intertwined in each other's states, and while you could set them separately, there would need to be parameters for what is possible. You will not see 30 ft swell on a dead-calm day. You won't see 5 ft swell in a 60-knot gale. The reason is energy. Think of a wave in the ocean as a wave of pure energy - because it is. The water isn't moving: a duck floating on top of the water doesn't travel with the swell. The swell passes underneath it as an undulation, and if you're swimming beneath the surface, you won't feel that swell pass by, because the swell is simply potential energy, raw and pure. The only time a wave becomes kinetic and tangible is when it piles that energy up in shallow water and then breaks, usually close to shore, but sometimes during high-wind, high-intensity storms. That also means that without a lot of energy, swell and waves do not form. Storms and wind put a great deal of energy into the ocean, and without that energy, swells only form through reverberation of energy from other places, and currents. Short of an earthquake, those are nowhere near as immediately impactful on sea state as a storm, which dumps energy into the ocean in a very short time and can whip up a sea state in a localized area like no energy wave bouncing off a coastal shelf and travelling back a thousand miles over 3-mile-deep water ever could. I think what would be easier, and better: implement the Beaufort Scale as a collection of presets, and have clouds be a separate, but still related, parameter. You can have 40 knots of breeze on a clear day, with 15 ft seas.
You can have a thundercloud that only generates 5 ft seas and mild winds - but even within those, there are scientifically studied norms that certain weather patterns fall within, and create, and it is necessary to adhere to them. Actually, what we're really asking for here is to have pressure differences organically generate wind and cloud conditions that mimic real-life permutations. Which, I imagine, is every developer's dream.
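The Beaufort-preset idea above could look something like this. The numbers are rough open-sea figures for a few forces, included only to show the shape of the data - treat them as approximate illustrations, not authoritative values:

```python
# Approximate open-sea Beaufort presets: wind range (knots) coupled to a
# plausible wave-height range (feet), so wind and sea state stay consistent.
BEAUFORT = {
    4:  {"wind_kts": (11, 16), "wave_ft": (3.5, 6)},   # moderate breeze
    7:  {"wind_kts": (28, 33), "wave_ft": (13, 19)},   # near gale
    10: {"wind_kts": (48, 55), "wave_ft": (29, 41)},   # storm
}

def valid_sea_state(force: int, wind_kts: float, wave_ft: float) -> bool:
    """Reject combinations the scale says cannot co-occur,
    e.g. 30 ft swell on a moderate-breeze day."""
    p = BEAUFORT[force]
    return (p["wind_kts"][0] <= wind_kts <= p["wind_kts"][1]
            and p["wave_ft"][0] <= wave_ft <= p["wave_ft"][1])
```

A mission editor built this way would simply refuse (or clamp) the physically impossible wind/swell combinations the post warns about, while still leaving clouds as a separate, loosely coupled parameter.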
  12. While I hear the desire for more human interaction here, I think it is worth noting a few things: 1. They had a live, real voice actor record the Jester audio. That isn't a computer speaking; it's a human voice. 2. They've layered and strung the commands together in a way that is far more organic than any simulation I have ever played. When Jester is already hyped, he has different filler and connecting phrases to link statements and cues: 'Nails, 5 o'clock... uhh, and nails, 6 o'clock!' While it may sound repetitive after a lot of play, it is worth recognizing how much intricate work was done to make the dialogue cues respond to situations in an organic way, as opposed to 'Nails 6 o'clock. Nails 7 o'clock. Nails 10 o'clock. Missile launched, evade.' That would be boring and truly robotic. Instead, you have 'Nails, 5 o'clock, and missile missile MISSILE break right!' I think, when you say it is too robotic, what you're really highlighting is the repetitiveness of continual play - hearing the same phrases. While that's understandable, it is currently unavoidable with any possible technology, and even in real life, cues and communication in aircraft are both routine and repetitive - for a reason. The only way to increase the variety of what you're hearing, currently, would be to have several full sets of Jester cues for each individual call-out, with the program randomly stringing them together each time the cue was called for by the situation. That means having 4 or 5 different ways for Jester to say '5 o'clock,' with one iteration selected at random each time it was called in game. It would also require the program to string them together organically, as it does now, for each iteration. Those organic strings were crafted individually by the designers across 10,500 individual sound recordings.
Making it 'more realistic' would require - at minimum - double that number, meaning a designer would have to sift through 21,000 individual sound bites and program them to work and sound correct in each permutation. That would take years. Until we get a 'general AI' with the awareness to properly string these things together on its own, and the situational context to do it organically, I have to say the current Jester format is about as close to a real human as you're likely to see. Can it be improved? Certainly - and they're already doing that. Can it get a lot less repetitive and respond to your funny jokes? Likely not. All in all, it was a herculean effort to individualize call-outs across game scenarios and situations, especially in the scripted mission campaigns. I'd be hard pressed to say many folks have their immersion broken by Jester before the plastic joysticks, computer screen border, comfy office chair and pixelation do. Looking forward to Jester 2.0. On another note - grinds my gears: calling any computer-controlled unit in a computer game "AI." There is no unit or character in a video game that is both computer-controlled and AI. Doesn't exist. It can't yet. (Though we're getting closer.) All the computer-controlled aircraft, units and characters are simply rules-based trees of logic with very specific programming allowing them to mimic behaviors seen in real life, without the context, awareness or responsiveness of an actually-aware entity. A computer opponent doesn't turn and fire because you're nearby and it sees you're a good target. It does so because you're inside a prescribed range, you meet a certain set of criteria, and its own status matches compatible parameters for the series of engagement actions it's been programmed to perform.
Any algorithms involved are only factoring one or two things at a time, like speed and distance calculations, and outputting a number that gets referenced continuously as the engagement goes on, balanced against criteria for certain triggers to perform certain maneuvers. There's no "AI logic," no 'black box' of calculations minimizing deviation from trained results, nothing that would be considered any iteration of what we currently call 'AI' - and it is leagues below anything a 'general AI' would embody. The same goes for Jester: it's not an AI, and we don't yet have the technology to implement Jester as an actual AI, so the closest we can get is this manual approach to fidelity, where you need 10,500 sound cues, individually recorded and painstakingly paired in the game. If you can invent a way for an AI algorithm to run the Jester program, that would be the next leap forward. The AI could then take those recordings and approximate, interpolate and generate new responses based on them. Theoretically possible - but I don't think anyone has tried it, and the reason is computing power. You'd be flying that machine on an online server while massive algorithms ran continuously in the background, like having a ChatGPT server running on your machine all the time. It would wreck your frames, to say the least.
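The variant scheme sketched above (several recordings per cue, one picked at random each time the cue fires) is mechanically trivial; the hard part is the recording volume, not the code. A minimal sketch, with entirely made-up cue and file names:

```python
import random

# Hypothetical cue bank: each call-out maps to several alternate takes.
CUE_VARIANTS = {
    "nails_5":     ["nails_5_a.ogg", "nails_5_b.ogg", "nails_5_c.ogg"],
    "break_right": ["break_right_a.ogg", "break_right_b.ogg"],
}

def pick_cue(cue: str, rng: random.Random = random) -> str:
    """Return one recorded take for this cue, chosen at random,
    so repeated calls don't always play the same file."""
    return rng.choice(CUE_VARIANTS[cue])
```

Doubling the variants per cue doubles the recording and QA work linearly, which is the post's point about why "just add variety" is a multi-year ask.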
  13. I've never understood why the DCS replay system is different from other sims'. I've flown flight sims for decades, and even back in the early aughts I could record a flight and play it back in other simulations. I couldn't jump in and 'take over' the flight, like you can here when replaying - but that's never, ever why I would record a flight. I record it to watch and evaluate, not to take back over and fly again. I have felt for a long time that they'd have been better off creating a system that simply records position information and aircraft status at regular intervals and interpolates between the samples for a smooth video replay from different angles, rather than trying to directly track control inputs - just like many sims have done before. It may not be elegant, but it would work. I can respect that they wanted to try something innovative, but not all innovation works out.
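The snapshot-and-interpolate replay described above can be sketched in a few lines: sample the aircraft state at regular intervals, then linearly interpolate between the bracketing samples at playback time. The state dictionary and field names here are illustrative, not anything from DCS:

```python
def record(samples, t, state):
    """Append one timestamped state snapshot (taken at a fixed interval)."""
    samples.append((t, state))

def lerp(a, b, f):
    """Linear interpolation between scalars a and b at fraction f in [0, 1]."""
    return a + (b - a) * f

def playback(samples, t):
    """Return the state at time t, interpolated between the two samples
    that bracket it; past the last sample, hold the final state."""
    for (t0, s0), (t1, s1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return {k: lerp(s0[k], s1[k], f) for k in s0}
    return samples[-1][1]
```

Because playback depends only on the recorded snapshots, it stays correct across sim updates - unlike input-replay, where any change to the flight model desynchronizes old tracks.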
  14. BringThe Reign, Any plans on going back and touching up the missing calls that are still the original jester?
  15. I feel like modern sim development does not need to trip over the same issues gaming had 15-20 years ago. Imagine a solid wall getting a door, but instead of using it, someone just keeps walking straight into the wall for no reason other than, "it's always been a solid wall... since forever." The "it's always been that way" argument is a side-slip in the debate, made by folks who are themselves frustrated about something in the topic but have completely missed the point the poster was making. I may be naive, but I would think that simulation technology has progressed past the 90s? Correct me if I'm wrong here, but folks saying "it's been that way forever" as an excuse for games still coded with this kind of wide discrepancy ignore the last 20 years of game development. The point is not to argue that things have never been this way before. The point is to wonder why they still are, given the progress we've made in graphics over the last decades. If no one ever changed anything for the better when they were able to, because it had 'always been that way,' we'd never have computers. Someone thought an electronic computer could solve a computation faster, and someone, somewhere, with a slide rule said, "Meh, math is slow. It's always been that way" as an attempt to dismiss the argument. I suppose folks should start tossing out their computers and going back to caves if they're going to throw that argument around. It adds nothing to the discourse, as it fails to address the original point. Will variation in the graphical ability of machines always exist and need to be accounted for? Yes. Literally no one claimed it won't, or hasn't. Read the original post. Does it need to be as wide a discrepancy as is currently coded into this game? Absolutely not, by modern standards. It could be tightened up while still allowing for gaps in performance across many machines.
There are plenty of PC titles out there that do not have the variability of rendering options this game does. We can all appreciate the lengths the developers go to, to keep older systems in the game, while still understanding the need for a more uniform experience; you can empathize with both at the same time. Believe it or not, there are ways to code things so that: - shadows and fog are more universally experienced; - distant LOD objects take up the same physical space on the monitor, whether at Standard, High, or Ultra settings. The developers have chosen not to pursue these items, either for lack of knowledge of how, or for lack of monetary need or justification. And you'll likely not see this addressed any time soon as their work on a new engine progresses, since any significant update to this one would rather be a waste. The new engine is several years away, so hang in there; perhaps it will solve some of this. We can all hope the last 20 years of technological progress haven't been for naught. For myself, it is frustrating to buy a super-machine for 4K play, only to have the developers continue to promote 1080p as the ideal play resolution - especially here in the 20s, when we're about to be overtaken by 8K-resolution cards. I say 'promote' because, as stated, it's entirely possible to make the ideal play resolution 4K, or 2K and 4K, but that is a choice the studio has to make, and then act on. They've not done that yet, and that represents a choice. I believe it's due to the new engine coming, and I believe that new engine may solve some of these problems.
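One established way to get the "same physical space on the monitor" behavior mentioned above is screen-space LOD selection: pick the detail level from the object's projected size in pixels rather than from raw distance, so the footprint stays consistent across resolutions. A toy sketch - the thresholds, field of view, and LOD names are all invented:

```python
import math

def projected_pixels(size_m, distance_m, vfov_deg, screen_h_px):
    """Approximate on-screen height in pixels of an object of the given
    physical size, seen at the given distance, for a pinhole camera."""
    ang = 2 * math.atan(size_m / (2 * distance_m))  # angular size, radians
    return ang / math.radians(vfov_deg) * screen_h_px

def pick_lod(size_m, distance_m, vfov_deg=60.0, screen_h_px=2160):
    """Choose detail by projected size, so a 4K screen and a 1080p screen
    see the same proportional footprint at each LOD switch."""
    px = projected_pixels(size_m, distance_m, vfov_deg, screen_h_px)
    if px > 40:
        return "high"
    if px > 8:
        return "medium"
    return "low"
```

Whether this fits an existing engine's rendering pipeline is another matter, but it shows the technique itself is neither new nor exotic.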
  16. It isn't really, though. While you could have an AI pilot that issued you commands, what you really want in an AI pilot is for it to make its own executive decisions, based not only on what you are doing, but on what is going on outside. Jester's level of interaction is "Jester, press this switch." "Ok, switch pressed." The pilot's interaction would be "I need this switch pressed" - and then every permutation of you hitting it in a timely fashion, or not hitting it, or accidentally cycling through something else, even by mistake, would need to be taken into account. It is more complex by far. While someone could code all that with current narrow-AI abilities, it would be a herculean task to get it to feel dynamic enough to be playable. However, for this new Jv2 we're getting, who knows? Perhaps they've built in more adaptability from the get-go?
  17. Draconus - yes, AI planes and pilots fly. I was referring to having the wider dynamic range of human-level awareness to create goals and then create input data to match them. Even the current plane AI is run by rulesets and does not fly with any 'awareness' outside of very defined sets of parameters that guide its actions. It can be complex enough to mimic fairly well, but it is a well-crafted illusion, which is why it is very difficult to give AI a wide range of flight profiles and responses. You can have that, and DCS does, but it comes not from any innate awareness the AI has, or any sense of self and grounding in the world it is in, but rather from another set of defined inputs and parameters meant to produce certain outcomes. Until an AI can generate its own priorities and goals and create its own desired outcomes, it will be difficult to make one 'human' enough to solidly mimic a real pilot that you, as a RIO, can interact with in that way, given the more dynamic nature of the pilot's duties. In short, it is much easier to create a realistic facsimile of a first mate than of a captain, in terms of making them autonomous - because, like a first mate, the RIO takes inputs and desired outcomes from the captain, or player, who is in essence controlling the AI. Turn that on its head, and you have an AI needing to control a player, which is a much more complex and challenging task requiring a level of fidelity that just isn't there yet. When you fly against AI aircraft - again - you're actually controlling them. By maneuvering your aircraft, you're inputting parameters for that AI to respond to. It reacts based on that and on its programmed abilities.
  18. It is important to note that all the AI that currently exists in the world is 'narrow,' meaning it does not, and cannot, have a greater awareness - or any awareness. What it can do is evaluate input parameters and calculate results that minimize error relative to trained results. For anyone worrying about whether AI can take over, or whether you can train it to fly a fighter craft: not yet, and not likely in the next decade at least - maybe not ever. The reason AI works for the RIO is that you can have more defined parameters for it to respond to, with more set and defined outcomes to train it on (see target - lock it on radar, provided x, y and z). This is entirely different when you take into account the variables involved in tight dogfights, or broader situational analysis against multiple threats. (This is why Iceman is a more limited AI - it is as dynamic as Jester, but the role it has to fill requires a dynamic range that no AI could truly deliver.) That kind of awareness requires something no AI can yet perform: the ability to create its own goals, and its own set of trained outcomes that have not been input by anyone else, nor would be foreseen by anyone else. The moment an AI manages to do that, you've created a general AI, with capabilities more in line with what we picture when we envision sci-fi characters. Right now we simply do not have the technology - specifically, I would say, the processing power and storage capacity - for such a unit. Humanity also does not have the maturity, discipline or wherewithal to properly host such a technology, and it is not likely we will ever develop that level of conscience. So if you want a more immersive AI in DCS, it would be a module similar to Bing or ChatGPT, which still only learns from trained outputs and constrained parameter input.
And I would give it about 12 hours before it was ruined by people purposefully feeding it maladaptive data and feedback, so that it became ineffective, abusive, or otherwise a detriment to flight.
  19. That's exactly the kind of picture I was looking for - just a really good comparison gif.
  20. Mike, Thanks! Saw that mod, and it just sparked the idea - "what if it didn't take recording 10,500 samples individually to change the voice?" This wasn't so much a feature request as a more academic ponder on what would be possible given current AI tech.
  21. Posited this in another response, but thought I'd start a thread on it. Currently, the AI tech exists to 'deepfake' voices. That means you could type out a phrase and have an AI render it in a voice, with tonal inflection and pacing that resemble the sample data. So, does anyone think it would be possible to create a plug-in for the Jester AI that would use an AI algorithm to read and listen to the Jester audio as a sample, then choose from a bank of other sample audio to 'swap voices' with? Essentially, being able to 'hot swap' different voices for Jester, after running all those files through a processor AI to 'deepfake' a new voice from the example audio. I'm imagining being able to take famous characters from movies, your best friend, etc., and after a small sample of maybe 40-50 lines, having the AI deepfake the rest of the 10,500 samples. Possible? Or pipe dream?
  22. I'll look forward to it! Also - what is the possibility of having Jester use multiple voices in the F-4? Will that still require a labor-heavy mod to re-record all the audio, or has anyone pioneered a fancy AI program that will take a sample of someone's voice and process it through the 10,500 commands? Thinking of something similar to how you can type out a text and have an AI program generate a famous person's voice over a pre-recording of someone else, with the same inflection and wording, à la deepfaking. (If not - for someone in AI, there's a project there for you.) Having my best bud record a simple audio sample that I could then translate into Jester, for a more personalized, and funny, experience, would be cool haha.
  23. Is it possible to elaborate on 'Mass Dynamic Systems' and Jester v2 (how it's different/better)? Great update!
  24. I've also noted the loading screen images at higher resolutions are fairly noisy and low quality. Is there somewhere these are stored in the game folders? I want to run them through Topaz Photo AI to denoise them and up their resolution.