Everything posted by nikoel
-
Thank you boys, appreciate the advice. @Gunfreak love the profile photo! The Yin to the Yang
-
There is a recommendation from Virtual Desktop to use a specific version of the Nvidia drivers. I cannot recall which one, but it's plastered all over their Discord. Maybe try that and see if you're affected.
-
Hey folks, I have recently delved into the WWII aircraft scene with the purchase of the Spitfire (and the entry wall of the WWII asset pack and map). Loving the plane and everything about it.

There are countless threads about the Spitfire snout being a little in the way. I'm reasonably okay with deflection shooting, but I would like to improve—especially since the poor ol' girl, in typical British spy fashion, brings only a couple of magazines to the show.

So far, I have been setting up a 2 vs. 4 Dora mission against AI with unlimited rounds to get the hang of it and keep frustrations to a minimum. It's going well, and I can shoot them all down eventually.

One thing I've found a little frustrating, especially in VR, is the lack of tracers—which, of course, is realistic, and I fully support it. However, I was hoping to see if there is a mod you'd recommend that lights up the tracers so I can have better feedback on where my rounds are going while the training wheels are on. So far my search has not turned up anything. As I get better and more confident, I'm planning on taking away the tracers and then the unlimited ammo. Once I can shoot down five planes in a row without running out of ammo, I will be happy.

P.S. In most threads I've read here and "elsewhere," the people answering say they get the bandit a lot closer than the 250-yard range where the rounds converge. Makes sense—closer = larger target = fewer ways to miss. Yet they still set up their gun sights. Can someone explain the point of setting one's sight to 250/35 for a dogfight on the rotaries if the recommendation is to shoot closer than that? That really confused me, because at those ranges the sight calibration will be wildly off.
-
Download and replace the DLSS files inside the bin folder with the new version. OVGME advisable, but you do you, boo.

You need a slightly modified version of Nvidia Profile Inspector. If you trust me, here is my current version and you can just skip to the last step. If you don't, create a CustomSettingNames.xml file in the same folder as Nvidia Profile Inspector and enter the following beside the other presets:

<CustomSettingValue>
  <UserfriendlyName>Preset J</UserfriendlyName>
  <HexValue>0x0000000A</HexValue>
</CustomSettingValue>

Then open Profile Inspector and, under the global profile, select J. (Don't worry if you have other games; if they don't have J they will use another profile.)

ProfileInspector.zip
-
I have a 4090, so no point going up against a 3080. I appreciate how hard you're trying.

As of a few hours ago, with the release of DLSS 4, we can now run DLSS Performance with very little loss in visuals. Some have reported that it looks better than DLAA did before. I still run DLAA and settings above DCS default maximum settings, with the use of mods.

My true advice to you: download the DLL package, enable Profile J, activate DLSS, and see for yourself. If you're then CPU limited (borderline likely in some scenarios, but the GPU will probably be holding you back in most), you can deactivate eye tracking to gain back CPU frametime.

Also remember that some of us have gone to complete extremes of overclocking and deactivating hundreds of background tasks and features to claw back every last FPS. Everything from automatic update checks, RGB, the disk defragmentation schedule, OneDrive, Copilot AI BS, every piece of background monitoring, and Xbox and gaming services. Don't get me started on the new NVIDIA App and how, if you're not careful, it can cost you 5%. Anyway, on average I have around 55 services running in the background. Hit Ctrl+Alt+Del, open Task Manager, and see how many you're running.

Individually these tweaks might gain you 2-3 FPS, maybe, which is why so many don't bother. But claw back 2 FPS here and another 3 there, and you're soon above 10. That's no joke.

You're welcome to join the VR4DCS Discord and have a look at what others may have done to get the experience you're looking for.

Also, I will tell you straight up: while my experience is pretty much locked above refresh rate, I do get the occasional hitch here and there. Don't let perfect be the enemy of good and enjoyable.
-
You're not limiting your FPS in any way? I run mine with no cap. From memory you can set caps in OpenXR Toolkit, Nvidia settings, and within DCS itself. Also, you may benefit from Turbo mode.
-
Thanks mate, glad it's working for you.
-
Hi Baltic,

Thanks for the campaign. It's been a mixed bag so far, but I'll reserve my opinion until I've completed it and had some time to reflect. Like squeaks and rattles in an otherwise luxurious car, I think it's the combination of small details that is affecting what is otherwise a decent campaign.

I wanted to let you know that, in Virtual Reality, the popup that instructs you to switch PRI to Chainsaw and SEC to Base, etc., is overlaid directly on top of the kneeboard. This prevents the player from clearly reading the frequency list, or seeing the map, or anything else the player has on their kneeboard throughout the campaign.

Although not strictly realistic, I really appreciate the approach taken by Sedlo, who, within the comms, simply states something like, "Flight, push Chainsaw on 8 left." You actually do this a couple of times yourself. In DCS, navigating through pages to find frequencies and then returning to your previous page can be cumbersome. I believe the graphics used in your implementation, while visually appealing, are less effective than the default DCS popup system. The default system dynamically resizes and adapts to aspect ratios and player settings, allowing control over elements like font size, which greatly improves usability and readability.

Additionally, while also not strictly realistic, it would be helpful for VR users if default messages with coordinates and nine-lines remained displayed a little longer. In VR, transcribing information takes longer and can easily slip out of short-term memory. I know you've implemented a popup for this, but it can be very hard to read when overlaid on the kneeboard, and it isn't always available. I have been forced a number of times to hit Escape and scroll to 'Messages' to review the info.

Lastly, I recommend revisiting the kneebriefs. Many of them are very difficult to read in VR, even with a high-end setup running a high pixel count and with good vision. For specific examples, consider the map in the earlier missions. Its low-contrast monochrome design made it difficult to discern, and during the first (second?) mission, I found myself backtracking multiple times just to figure out where I was supposed to park the jet, because I didn't even see the writing on the other side of the map; it too was dark grey and in a small font. In later missions you started using a blue sharpie, which was much more helpful.

Consider giving the player a question from Magic at the end of the main mission: do you want to RTB, or do you need to refuel? I actually did an A2A refuel on the mission where I started WW3, because the flight lead I was following had a bad habit of going in and out of burner the entire time, and the Tupolev had legs behind it too when I jinked and had to catch back up. It semi-bricked the mission, with the triggers re-aligning when I landed and the flight lead already shut down on the ramp. I still got the 100% and moved on to the next mission.

There are more points I could mention, like blasting the Eurobeat at the end of the mission, but I believe the issues above would significantly improve the experience for players. Some, like the frequency presentation, may come down to personal preference, but others, like the popup covering the kneeboard, are higher-priority concerns.

I understand this campaign is your passion project, and I don't want to diminish your efforts. However, I wanted to voice these concerns in case you'd like to address them in this or future campaigns. I trust you'll decide how best to handle this feedback.

All the best,
-
Something exciting might be coming soon. I bought access to a Whisper fine-tuning training repo and have been running experiments.

For those who have offered, I will take you up on transcription and training of the model. I only need 10 minutes of training audio and 10 minutes of validation audio. I will transcribe it, then we will need to correct the transcription, and I will push it through training to produce a new model. I will then upload it to Hugging Face for everyone to use.

As an example, in the screenshot below you can see the WER (Word Error Rate) going from 55 down to 38 using LoRA.
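For the curious, roughly what a LoRA pass looks like with the Hugging Face transformers and peft libraries. The model size, rank, and target modules here are just illustrative, not the exact recipe from the repo:

# Sketch only: LoRA-wrapping Whisper for fine-tuning (hyperparameters are illustrative)
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model
import evaluate

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small.en")

lora_config = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only a tiny fraction of weights train

# WER before/after is what tells you whether the fine-tune helped
wer_metric = evaluate.load("wer")
print(wer_metric.compute(
    predictions=["enfield one one ready pre contact"],
    references=["Enfield 1-1 ready pre-contact"],
))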
-
Pimax conducts its beta testing on paying customers by releasing products that look good on paper but are unfinished, buggy, misadvertised, over-promised, and incomplete. These products are often delivered well after the date promised when pre-orders were taken, yet somehow still feel rushed, with promises to "fix it down the line." Once the products finally reach customers, Pimax promptly announces another new product, reallocates resources to it, and abandons the previous one—often leaving promised features and inclusions unfulfilled. All of this is wrapped in a support network that ranges from indifferent at best to downright illegal (in some countries) at worst—only for the same cycle to repeat again and again.

There's a guy running around claiming he's joined Pimax to fix this, but he clearly hasn't realised he's not the first. It doesn't take much digging to see that it's likely all talk.

Nothing screams "confidence in the brand and the products we make" quite like an extended one-year warranty for—checks notes—$250 AUD. Well, nothing except the realisation that even at that price, it's now "out of stock." How terrible must their average quality control be if they're losing money on a warranty that is legally mandated in some countries? https://pimax.com/products/pimax-crystal-1-year-extended-warranty

If you're planning to share this with your local shady used car salesman, just remind them to consult a healthcare professional if it lasts more than four hours.

It doesn't help that one of the two VR content creators I actually trusted (Wolta) has sadly deleted his YouTube account. The rest of the landscape are paid promoters, shills, and salesmen peddling whatever this week's sponsor has given them, too scared to say anything bad about a product in case they don't receive the sponsorship next time or it affects their affiliate sales. Or, my favourite, throwing tantrums because a headset company did not send them a headset for review. Eleven-year-old children, all of them. I digress.

P.S. The company reminds me of another startup in the UK called Hill's Helicopters. They're a little behind Pimax, but the grift is eerily similar.
-
A quick edit, since people are treating 70% as a given. This is an estimation—an educated guess—based on the advertised resolution of the Pimax with DLAA. It also comes down to the preference of the end user. With lower-resolution headsets, such as the Quest Pro, the percentage will be relatively higher (even though the final resolution is lower) because the headset starts from a lower native resolution.

There's also an elephant in the room named MSAA, which, granted, produces a sharper image but will likely require a higher resolution, as the shimmer is on another level and highly distracting. That said, I know many people prefer it over the other options.

Here's a screenshot of the settings I was using. They're probably not the most optimal, but with a 4090, being in the ballpark is good enough.
-
Okay, I'll bite. This is why it's called CPU and GPU pairing.

Within DCS, it's not difficult to create scenarios where one is CPU-limited or GPU-limited, even with the same hardware. It depends on the scene being rendered. For instance, if we were to use a carrier deck population template, you would have seen a massive performance uplift by upgrading from the 8700K to your 13th-gen chip. However, in scenarios where you are GPU-bottlenecked, like a 1v1 knife fight, a faster processor only brings very minor gains. It all depends on what is causing the bottleneck. For people with identical hardware, the experience can vary greatly depending on what they are doing within DCS, and this is why opinions vary so much.

There are tools like QuadViews that can shift some of the load from the GPU to the CPU. This is one of the reasons it's incorrect to say it "gives you more performance." It provides more performance in GPU-limited scenarios but reduces performance (or performance headroom) in CPU-limited scenarios. In other words, it increases CPU frametimes and decreases GPU frametimes.

Now, regarding your question about where you're going wrong: this brings me to the topic of resolution. Pimax's new headsets are indeed high resolution, but it's unwise not to run them with foveated rendering, as they otherwise bring the GPU to its knees. With foveated rendering, the total number of pixels being rendered is far lower than you might think, and the uplift in the headset's resolution is much greater than the increase in pixels the GPU actually has to render. This is solely because of QuadViews. It allows you to significantly reduce the number of peripheral pixels without noticing a difference. It's not small, either: depending on the end user's taste, the difference is over 70% fewer pixels rendered for roughly the same picture quality. No visible difference, because our eyes perceive sharpness and focus within roughly a 3° cone, so the peripheral pixels matter far less than the ones inside that 3-degree arc. You can test this for yourself with the Quest Pro in your signature: eye tracking allows you to lower GPU overhead at the cost of slightly higher CPU frametimes.

I run mine on a 4090 with all settings dialed to the moon with the use of QuadViews, something I could not do without it. It comes at a cost when I am on a fully populated Supercarrier deck, and this is where performance can suffer. So, because of the CPU bias created by QuadViews, compared to someone running a Quest Pro at 5.5k x 2.8k, I am running a lower overall resolution but a sharper, higher-resolution image within the zone I can actually see, at higher overall settings, at the cost of some stuttering where I am severely CPU limited (usually on that said Supercarrier deck in campaigns).

Finally, you're right to doubt the precision of the frametime tools on offer. If you're looking for a proper utility to measure frametimes, check out Fred's OpenXRFrameTools here: https://github.com/fredemmott/XRFrameTools
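To put a rough number on that 70% figure, here's some back-of-envelope arithmetic. The render target and region sizes below are made-up but plausible values, not actual QuadViews settings:

# Back-of-envelope pixel budget for eye-tracked foveated rendering (QuadViews-style)
# All numbers are illustrative assumptions, not real QuadViews defaults
full_w, full_h = 3840, 3744      # hypothetical per-eye render target
focus_frac = 0.35                # focus region spans ~35% of each axis, full density
periph_scale = 0.4               # periphery rendered at 40% resolution per axis

full = full_w * full_h
focus = full * focus_frac**2                             # sharp pixels you look at
periphery = full * (1 - focus_frac**2) * periph_scale**2 # the rest, downscaled
rendered = focus + periphery

print(f"{rendered / full:.0%} of native, i.e. {1 - rendered / full:.0%} fewer pixels")
# -> roughly 26% of native, so ~74% fewer pixels, in line with the 70%+ figure above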
-
It's difficult to tell, as it's a black box. However, 10 hours would be a good starting point from what I have read, especially if one were to use LoRA to nudge the model in the right direction.

Hilariously, the way it works is Whisper eating its own dog food. We take the voice file and use Whisper to transcribe it. Then we edit the parts it got wrong and feed the corrected transcript back in to create a new model.
-
VAICOM Pro integration and instructions by @sleighzy are now live: https://github.com/nikoelt/WhisperAttack/blob/add-vaicom-integration-instructions/VAICOM PRO/VAICOM_INTEGRATION.md

New version is now up - WhisperAttack Changelog

VAICOM Support
Thanks to @sleighzy. For integration instructions, see the VAICOM PRO Integration Guide.

Script Will Now Auto-Install Missing Dependencies
A mechanism has been introduced to automatically check for and install any missing Python packages required by the script, ensuring seamless setup and execution. Just double click and you're good to go. Buuut you still need Python and FFmpeg!

Support for External Word Matching and Replacement
Added configuration to dynamically load fuzzy matching terms (fuzzy_words.txt) and word mappings (word_mappings.txt) for direct word replacement - thank you @sleighzy for the external file handling. This enhances flexibility for text correction and fuzzy matching, allowing dynamic updates without modifying the script itself. Feel free to edit either file with more keywords! We have pre-populated examples.

Fuzzy Matching
Implemented fuzzy matching of DCS callsigns and phonetic alphabet terms. Integrated RapidFuzz for more robust and accurate text correction, using configurable thresholds for both phonetic terms and DCS callsigns. Introduced weighting adjustments to control the level of correction interference:
High thresholds: minimal correction, preserves user input closely.
Low thresholds: aggressive correction, may be "trigger-happy."
Configuration example:
dcs_threshold = 85
phonetic_threshold = 80

Improved Clipboard and Kneeboard Handling
Revised logic to distinguish text destined for the clipboard from text forwarded to VoiceAttack. To transcribe speech directly into the DCS kneeboard and clipboard, users now need to say "Copy" followed by the text they want in the clipboard/kneeboard. This bypasses VoiceAttack entirely for faster processing and will only copy to the clipboard and DCS kneeboard! End users can change this key phrase to whatever they like within the code. For standard VoiceAttack commands, the script no longer copies text to the clipboard or DCS kneeboard, improving overall performance.

Enhanced AI Initial Prompt
Whisper now has an optimized initial prompt for voice recognition, significantly improving its handling of DCS-specific callsigns like Deathstar and Enfield. This ensures more accurate transcription from the start.

Bug Fixes
Coordinates starting with 0 will now populate VoiceAttack commands - thank you @sleighzy.
Many more code revisions to keep the code tidy.
-
Since you asked: if you guys really want to help fine-tune this model specifically for DCS, it involves work. Simple but boring work. Basically, we need a sound file (i.e. a guy speaking lots of DCS commands, talking to ATC and wingmen, VAICOM, etc...) and a transcript for that audio - this is usually a VTT file. I then need your permission to train a new fine-tuned Whisper model on these files.

Basically, it will be a file of you speaking a bunch of words and frequently used phrases, like:
'Ragnarok 1-1 ready pre-contact'
'Sochi Tower, this is Enfield 1-1 in company with Enfield 1-1, inbound'
etc...

Then we take this audio, push it through Whisper, and let it transcribe it into a VTT file. Then we open that VTT file in a word editor and make changes where Whisper made mistakes. Then we feed that data back into the model (there's a rough sketch of the transcription step at the end of this post).

Glad it's better for you; however, you're not really using or testing the model correctly. Whisper is different from a simple voice transcriber. By just feeding single words into it, you're denying it its biggest advantage over dumb V2T apps like Windows Voice Recognition. The model takes into account what you have spoken and then uses the meaning of the sentence to retroactively correct words. By saying "Sochi Tower, this is Uzi 1-1, inbound" vs just "Uzi" you will see a better transcription, because the model can derive what you're trying to say. See below for an example of what is happening behind the scenes. Make sure you're sitting down.
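And here's roughly what the transcribe-to-VTT step looks like with the open-source whisper package. File names are placeholders; the training repo's own tooling may differ:

# Sketch only: transcribe a clip and dump it as a VTT file for hand-correction
import whisper

model = whisper.load_model("small.en")
result = model.transcribe("dcs_comms_sample.wav")   # placeholder file name

def ts(seconds: float) -> str:
    # seconds -> "HH:MM:SS.mmm", the timestamp format VTT expects
    ms = int(seconds * 1000)
    return f"{ms // 3600000:02}:{ms // 60000 % 60:02}:{ms // 1000 % 60:02}.{ms % 1000:03}"

with open("dcs_comms_sample.vtt", "w", encoding="utf-8") as f:
    f.write("WEBVTT\n\n")
    for seg in result["segments"]:
        f.write(f"{ts(seg['start'])} --> {ts(seg['end'])}\n{seg['text'].strip()}\n\n")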
-
Can you please test this pre-release? I will delete it when it goes live so we don't have double-ups.

Adjust how trigger-happy you want FuzzyWuzzy to be via these two thresholds. It's a pre-release; I have no idea if 70/80 is a good number, but I need more data. The lower the number, the more FuzzyWuzzy will correct. Set it too loose and it will start changing things it really should not; too tight and it won't do its job. Don't get carried away, as these will move to an external file for friendly editing by the end user.

Now for things that Whisper gets *consistently* wrong. For instance, if Axeman always comes out as X-Men, I have included a text replacement feature. Use this instead of trying to somehow change your pronunciation or the AI prompting. See below for examples.
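To give you an idea, the replacement pass amounts to something like this. The mappings here are examples only; add whatever Whisper consistently gets wrong for you:

# Sketch of the consistent-mistake replacement pass (what word_mappings.txt feeds)
import re

word_mappings = {
    "X-Men": "Axeman",   # example mapping only
    "Inter": "Enter",
}

def apply_mappings(text: str) -> str:
    for wrong, right in word_mappings.items():
        # whole-word, case-insensitive, so e.g. "winter" is left alone
        text = re.sub(rf"\b{re.escape(wrong)}\b", right, text, flags=re.IGNORECASE)
    return text

print(apply_mappings("inter comms: x-men 1-1 is airborne"))
# -> "Enter comms: Axeman 1-1 is airborne"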
-
This project builds on what Bojote has developed. It's not an overhead; the server-based approach actually provides a 10-20x speedup for the script, because the AI model doesn't need to reload every time a command is sent. It minimizes stuttering and hitching for those not using the latest or most powerful hardware and allows us to integrate it with VAICOM and VoiceAttack profiles without the lengthy wait.
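The gist of the approach: load the model once, then keep serving commands. The port and message handling below are simplified placeholders, not WhisperAttack's actual wiring:

# Sketch only: a persistent server that loads Whisper once and serves commands
import socket
import whisper

model = whisper.load_model("small.en")   # the expensive part, done exactly once

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 65432))           # placeholder port
print("model loaded, waiting for commands...")

while True:
    data, _ = srv.recvfrom(1024)
    cmd = data.decode().strip()
    if cmd == "start":
        pass                                                 # real script starts mic capture here
    elif cmd == "stop":                                      # capture done, transcribe it
        text = model.transcribe("latest_recording.wav")["text"]  # placeholder path
        print(text)                                          # hand off to VoiceAttack / kneeboard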
-
It might look like that because of the terminal, but it isn't; there are only two copy-and-paste commands to run there. The readme on GitHub looks advanced because I spelled out every step, but it's actually simple once you get your head around it.

All we are doing is installing Python and ffmpeg on PATH (you will see a checkbox to add them to PATH as part of the installer, so make sure to tick it). Then, in the same terminal where you installed ffmpeg, you enter the two lines of code and it will install the dependencies.

Then you download the version you want from the releases page here: https://github.com/nikoelt/WhisperAttack/releases/tag/v0.2.2-beta. With a 3090, VRAM is not an issue, so small.en is a good version to start with.

That's it. It's working. The remaining steps are to make VoiceAttack start and stop voice recognition, which is again a simple task: when you press a button it sends a 'Start' command, and when you release it, it sends a 'Stop' command, letting Whisper know when to start and stop transcribing. If you take your time, you will have it done in 15 minutes.
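For reference, the press/release wiring boils down to something like this. The port number and command strings are placeholders, not necessarily what the script actually uses:

# Sketch only: what the push-to-talk side sends on key press/release
import socket

def send(cmd: str, port: int = 65432) -> None:   # placeholder port
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(cmd.encode(), ("127.0.0.1", port))

send("start")   # bound to key press in VoiceAttack
# ... speak ...
send("stop")    # bound to key release; the server then transcribes the clip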
-
Can you please expand on that? I have so far used Ford and Uzi without issues.

For your information, the next release will have two types of direct entries to steer voice recognition in the right direction. The first is something called FuzzyWuzzy, which uses an algorithm and a 0-100 threshold. That's just a fancy way of saying how sensitive or broad you want the algorithm to be when it changes what you said into what you think you meant. Here it is in action: Gudalta Tower becomes Gudauta Tower. It ain't Pokémon, so it won't catch them all, but it should rescue a few commands that would otherwise have been misunderstood because of a "typo" or two. Set it too tight and it won't correct much; set it ultra loose and it will start being a hammer and seeing nails. It's currently set around 70-90 (out of 100); see the sketch at the end of this post for the idea in code.

Then it also uses direct replacements. This is a 100% foolproof catch, good for when Whisper consistently misunderstands you. For instance, when I say 'Enter' it interprets it as 'Inter', so instead of trying to change how I speak, I simply put it into this list. So now I have a direct replacement that looks like this: if there is a word that keeps intruding, you can simply replace it with the one you want.
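In sketch form, using RapidFuzz (which is what the release ended up integrating). The term list and cutoff here are examples, not the script's actual configuration:

# Sketch of the fuzzy pass: snap a misheard phrase to the closest known term
# if the match score clears the threshold
from rapidfuzz import process, fuzz

known_terms = ["Gudauta Tower", "Sochi Tower", "Enfield", "Uzi", "Axeman"]

def fuzzy_fix(phrase: str, threshold: int = 80) -> str:
    # extractOne returns (match, score, index), or None below the cutoff
    match = process.extractOne(phrase, known_terms,
                               scorer=fuzz.ratio, score_cutoff=threshold)
    return match[0] if match else phrase   # leave it alone below the cutoff

print(fuzzy_fix("Gudalta Tower"))   # -> "Gudauta Tower"
print(fuzzy_fix("zebra"))           # no match above 80 -> returned unchanged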
-
Latency is one, yes, and it is a major issue within Virtual Reality, as it can really make you feel sick within a few seconds if it's off.

Frame time precision is another. OpenXR has a way of pacing frames so that they are held back to enable a better, more consistent experience. Some titles work better than others. This is partially why Turbo Mode was introduced by Mbucchia—it ignores this and gives us BRRRT, which can have its own drawbacks too.

Stereoscopic vision is another challenge. Synchronizing synthetic frames across both images is very difficult because each eye has its own scene.

ASW is not strictly frame generation, because it reuses the last known good frames and blends them smoothly with head-tracking data to maintain a consistent experience. The key difference is that ASW doesn't use AI generation, whereas Nvidia Frame Generation does. Frame Generation essentially creates an in-between frame but still assumes that both the base frame and the generated frame are valid and align perfectly. If, in some parallel universe, humans could hold their heads perfectly still, had better sea legs, and didn't want a VR experience where they moved, it might have been possible to utilise. However, it simply isn't developed for VR; otherwise, Nvidia would have shipped it there straight out of the gate.
-
No. Frame Generation, as trademarked by Nvidia, does not work in VR.

Like Speed of Heat already said, our mum already has frame generation at home; it's called Reprojection/Asynchronous Time Warp/ASW and a whole bunch of other names.

Apart from that, unfortunately, it's only real frames that matter. Worse still, that bus width is looking rather tight on the new cards (5090 excepted).
-
You likely haven't added Python to PATH. You should see the following on PATH (google how to check it):

C:\Users\YourUsername\AppData\Local\Programs\Python\Python311\
C:\Users\YourUsername\AppData\Local\Programs\Python\Python311\Scripts\

If you're confident about the above and are sure pip is missing, you can install it using Python's ensurepip module:

python -m ensurepip --default-pip