Everything posted by tom_19d

  1. Great point there, thanks for bringing it back to the OP!
  2. @Vitormouraa Thanks, I totally agree that there is a change in thrust over speed, but as I said, in this very specific case I don't find it playing enough of a role to consider. In your examples in the other post, static thrust vs. something around Mach 1, the difference is much more pronounced (which I totally agree with). Here, even if we could somehow induce a 10% increase in speed, we are only talking about a difference of ~30 knots IAS. The engines go through a difference of that magnitude 5 times just taking off. That's why I decided to assume thrust as a constant in this small window so drag alone could be considered. But just to make sure I am following, the loss of thrust is not exponential, correct? Just a geometric reduction against speed? Cheers though, it was a good discussion on efficiency. And like you said, so many other factors at play here; in addition to the ones you listed, controllability issues after losing an engine during takeoff would have to be addressed, probably with bigger horizontal stabs, adding weight and drag, etc., etc., haha! As you said, there is a lot more to it than bolting on new engines...
  3. EDIT: As I preview this I see Vitormouraa has sniped me but I think our points are different enough (he has focused on the economy issue) that I will post also... @Shadow, I take some disagreement here. Why is it not reaching overspeed limits? It's not just the wings. As has been correctly alluded to above, it is the drag. When your total drag curve meets your thrust available curve, that is your max speed in level flight. -Total drag is the sum of induced drag and parasite drag. At high speed, parasite drag is the limiting factor and it increases with the square of your airspeed. -Thrust available isn't quite constant with airspeed at a given density altitude for turbofans but we will say it is for our purpose, it is close enough. Vertical axis is pounds, horizontal is knots airspeed. Where the red line (total drag) meets the blue line (available thrust), that is max speed. You can see at that point that the drag line is increasing very rapidly with airspeed (since it grows with the square of airspeed), so it takes much more thrust to continue to increase the speed. It has been a long, long time for me, but I believe at the high speed end it takes an approximate 30% increase of thrust to increase speed 10%. More power will help runway performance, climb rates, OEI performance, and sustained turning performance. But if you want to make an aircraft faster, you really are spending your time and money more efficiently by making it have less drag, not more power.
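The drag-versus-thrust tradeoff above can be sketched numerically. This is a toy model, not real aircraft data: the coefficients below are invented purely to illustrate the shape of the curves, with parasite drag growing as the square of airspeed and induced drag falling off with it.

```python
# Toy level-flight drag model; k_p and k_i are made-up coefficients
# chosen only to show the shape of the curves, not any real aircraft.
def total_drag(v, k_p=0.02, k_i=200_000.0):
    parasite = k_p * v ** 2  # parasite drag grows with the square of airspeed
    induced = k_i / v ** 2   # induced drag falls with the square of airspeed
    return parasite + induced

# In level flight, thrust required equals total drag, so compare the
# thrust needed at some high cruise speed v with the thrust at 1.1 * v.
v = 500.0
t_now = total_drag(v)
t_faster = total_drag(1.1 * v)

print(f"thrust ratio for +10% speed: {t_faster / t_now:.2f}")       # ~1.21
print(f"power ratio for +10% speed: {t_faster * 1.1 / t_now:.2f}")  # ~1.33
```

In this toy model a 10% speed increase costs about 21% more thrust; counting propulsive power (thrust times speed) it is about 33%, in the same ballpark as the roughly 30% figure recalled above.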
  4. Good deal, thanks for checking.
  5. Away from my game machine so I can't see the track but in his first post OP said he had ground power on -- I agree with the posts calling for enough engine rotation to sufficiently power the gen, but isn't that a moot point with ground power? USAF 1F-86F-1 shows that on the -35 series, the radio is attached to the primary bus. The primary bus receives power when the battery-starter switch is at BATTERY OR the generator is operating OR ground power is applied (pages 1-27 and 1-28). IIRC, when running on ship power in the game, the radio DOES NOT work when the generator isn't making power (as has been said), but according to the -1 that is incorrect behavior. However, if you can't make the radio work on ground power something isn't right...
  6. Hi Avantar, The ME and the F10 map ruler tool display bearings relative to true north. You are correct that on the Caucasus map the difference between magnetic and true is 6 degrees (I always use 7, but I'm not going to claim I can fly a compass heading within +/- 1 degree so no big deal). So if you measure a true course of 276 in the ME, you would need to fly a 270 magnetic course to follow that line (caveat-- this is true for the A10, the F5, and the F86. I have not explored the F18 enough to know how it handles this, although traditionally American aircraft display navigation data referencing magnetic north). It is also my understanding that wind direction is entered relative to true north. Also, I am sure you have noticed, but DCS is a little odd in that the wind direction entered in the ME is where the wind is blowing TO, rather than where the wind is blowing FROM as it is reported in the US (it seems you probably already know this since you have your carrier moving along a 360 and a 180 wind, but I thought it worth mentioning).
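The true-to-magnetic arithmetic above ("east is least": subtract easterly variation from true to get magnetic) can be written as a one-line helper. The function name is hypothetical; the 276-true, 6-degrees-east, 270-magnetic numbers come straight from the post.

```python
def true_to_magnetic(true_course, variation_east):
    """Convert a true course to a magnetic course.

    variation_east is positive for easterly variation; subtract it
    from the true course ("east is least") and wrap to 0-359.
    """
    return (true_course - variation_east) % 360

# The example from the post: a 276 true course with 6 degrees easterly
# variation on the Caucasus map is flown as a 270 magnetic course.
print(true_to_magnetic(276, 6))  # 270
```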
  7. I agree, hopefully some day there will be some resolution to this question.
  8. As I literally said, I don’t have a side in this and these same conversations with the same references have been going on since the inception of the F5, to no effect. I was merely pointing out that without something new, this is merely us banging our heads against a collective wall because this has been gone over. The ball is in the dev’s court for better or worse.
  9. +1, I have wished on several occasions for such an ability. I would think the simpit crowd in particular could make great use of this.
  10. Hi Tom, There are much more qualified people here on the A10C than me, but a couple quick questions... -Super obvious, but you are using a TMS short up (or forward) to try to initiate the track, correct? -Your reticle is showing the INR-P annunciation, which to my understanding means that the pod is reaching the limits of its travel/is nearly masked/etc., and is attempting to track by dead reckoning rather than visually. This is backed up by your situational awareness cue, the little dot that shows your TGP is looking back well to your low 4:30 or so. What happens if you try to track a target out in front of you?
  11. Yes, manuals can be unclear. But right in that thread it was an ED team member saying the issue is not a bug by referencing 1F-5E-34-1-1, the non-nuclear weapons delivery manual for the USAF F5E. So unless someone has better documentation, this has been gone over many times. And not to disparage the opinions of the Swiss technician, but you can go through the whole section of the above referenced document about AN/ASG-31 LCOSS, which is what the BST (now ED I guess) game manual says our F5 has, and never does it mention a pipper that can slew to show where the AIM-9's seeker is looking. (It does seem to say that in MSL mode, with a locked target, the pipper should follow the radar antenna [page 1-47] but I believe that one has been talked into the ground too.) My point, however, is that while I am sure the gentleman is recalling his F5 experience correctly, I don't think it was with our F5's combination of equipment. I'm not trying to pick a side here, but unless more clear documentation comes up I am just saying this might be a fruitless chase.
  12. This one has been kicked around a little bit, here is a good example, post 28 starts the discussion. (Thread features SUI-Mustang, also referenced by Ramsey above). I don’t get the feeling a change is in the cards on this issue...
  13. Thanks Gliptal -- I'm a little floored a question like that only brought one response and no argument, I was speechless for a little bit haha.
  14. Hi Scuby, sorry for the long delay, I have been dogged with some hardware problems that are eating up my time. Upon looking again I did notice what you were saying and Frederf confirmed - the FIXED HI remaining on the profile, even after the bombs were changed in inventory. After digging around I found an old thread on the 476th forum that described using PLT OPT 1 when wanting to have the ability to use both types of fuses. I loaded up an A10 with 4 MK82 AIRs and went straight to the inventory and set them "PLT OPT 1". If I edit the MK82APO profile that comes in the DSMS, it still says "FIXED HI" in the config, no matter what changes I make (just like you said). However, if you just select one of the bombs in the DSMS and bring up its manual profile (m/MK82APO), enter a new name, and create your profiles off of that, it will display the correct config. So for example, using the 476th's naming conventions, I made a profile called A1 20HD2 (2 high drag bombs for a 20 degree delivery) and another called A1 30DB2 (2 low drag bombs, 30 degree delivery). You can set your parameters for both profiles (the config will say PLT OPT 1) to include the ripple settings, spacing, fuzing (obviously), and DTOF. Then I turned off the MK82APO profile in the DSMS. That way, when I pull up CCIP in the HUD and use the rotary, it will alternate between the high drag and low drag configurations (WPN SAFE/A1 20HD2/A1 30DB2/WPN SAFE). Each profile can access all the bombs on the aircraft, which is what you want, and I also confirmed the IFFCC is properly computing for the selected parameters (I did a few passes, but even watching the PBIL swing back and forth to account for the wind when switching between high drag and low drag profiles was enough to convince me it was going to be correct). I realize this is all a lot to visualize but hopefully my point is coming across. And thanks for getting me to dig much deeper into the inventory system than I ever had!
  15. Thanks to everyone at the 476th for sharing so much of their knowledge and work for free, this product and your other work is priceless. I have been getting into CCIP deliveries in the A10C mostly for the airmanship challenge and have been trying to do lots of reading. In a couple of memorable threads going way back, several of the 476th SMEs stated that the MRS is not reliable. (Here, Here, and Here, threads from 2013-2014). In the more recent TTP, however, page 157 states that the MRS should be utilized. I wasn't able to find any threads stating that the MRS issues were fixed, but given that its use made it into the TTP, is it the position of the SMEs at the 476th that the MRS is working correctly? Before I start really working on this I want to be sure a pipper X'ed by the MRS is a valid indication and not just a sim problem. Thanks in advance.
  16. As Eagle said, if you enter the mission planner before the mission you can see the weights. Specifically for this case, BFT02 (the EFATO mission you are playing) has nothing mounted on the hardpoints and a full load of CM gun ammunition and countermeasures. It gives you these amounts (pounds): Empty Weight 24967; Fuel 7489; Weapons 1775; Total 34231. I don't think you will be expending any ordnance on that mission so as has been mentioned you only have to worry about how much fuel you burn off. Depending on how quickly you get going, 153-154 KIAS should be a safe number. And if you can fly +/- 1 knot single engine you probably don't need advice from anyone here... Also, just in case you weren't aware, each DLC campaign has its own dedicated topic in the forums. Here is the one for BFT.
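The mission planner figures quoted above do add up, as a trivial sanity check shows:

```python
# Weights from the BFT02 mission planner, as listed above (pounds).
empty_weight = 24_967
fuel = 7_489
weapons = 1_775

total = empty_weight + fuel + weapons
print(total)  # 34231, matching the mission planner's total
```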
  17. Thanks, as stated in post 37 of the thread I am aware, Bitmaster was just pointing out there was a point of failure I had failed to cover. Thanks for the advice about CMOSclear though. Flew a 1:45 campaign sortie this morning, no issues. So hopefully I am in the clear. Thanks everyone for all the help!
  18. I have run the mission 3 times now, at least 30 minutes at a go, with no crashes. Hopefully I am not speaking too soon but this will be very anticlimactic. I did pull the DIMMs out of A2 and B2, but before I did I booted in safe mode and ran DDU. It then ran fine on 2 DIMMs. Then 4 DIMMs. Then on all 4 sticks and with my graphics settings where I like them. I had performed a clean install through the NVIDIA driver but apparently that wasn’t good enough...if the problem is indeed solved. And I feel like I have wasted lots of good people’s effort.
  19. I believe I have ruled out the RAM question. Mobo is an ASUS Z-97 and if you only have 2 sticks it recommends running them in slots A2 and B2. So I pulled the pair out of A1 and B1, set them aside in their packaging, and ran the game. About 25 minutes into the same mission I have been using, hard crash. GPU-Z log attached. I pulled out the pair from A2 and B2 and placed them aside. Reinstalled the previously removed RAM into slots A2 and B2, and ran the game. Same mission, same crash. Log attached. After the second crash I combed through the Windows event log carefully. Not a single system message posted in the 5 minutes before the crash. There is a 5 minute gap, then an "operating system started at XX:XX time" event for when the machine started back up. I have a hard time believing there is a bad stick in both pairs of RAM that finally decided to manifest itself just when I installed a new GPU. PSU next is my plan, any thoughts? EDIT Note on the first crash. The log shows a stretch of reduced load on the card right before the crash. This was because I had cleared out the road my friendly tanks were to advance down and I was "heads down" in the F10 map ordering my units forward. I got them moving, clicked F1 to get back into the cockpit, looked over my shoulder and BAM, crash. The second time I was just setting up for a mav shot and it crashed. GPU-Z Sensor Log 30 August Crash 1.txt GPU-Z Sensor Log 30 Aug Crash 2.txt
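The timestamp-gap check described above (logging simply stops with no shutdown events, then resumes after the reboot) can be sketched in a few lines. The timestamp handling here is an assumption for illustration, not GPU-Z's or Windows' actual log format:

```python
from datetime import datetime, timedelta

# Scan a list of log timestamps and flag any jump larger than a
# threshold. A hard power-off shows up as logging simply stopping,
# with no shutdown events recorded before the gap.
def find_gaps(timestamps, threshold=timedelta(minutes=2)):
    gaps = []
    for earlier, later in zip(timestamps, timestamps[1:]):
        if later - earlier > threshold:
            gaps.append((earlier, later))
    return gaps

# Hypothetical example: one log entry per minute, then a 6-minute hole
# where the machine crashed and rebooted.
times = [datetime(2018, 8, 30, 21, m) for m in (0, 1, 2)]
times.append(datetime(2018, 8, 30, 21, 8))  # logging resumes after reboot
print(find_gaps(times))
```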
  20. David- no overclocking, now or ever Goa- it is an EVGA. And I am using two cables and from my understanding of rails I am doing what you suggest. In post 25 I have a picture of my PSU and explain how I am plugged into it (as well as at the very end of my last post). If I don't have this correct please let me know (no sarcasm here, just acknowledgment that I might be way off and I want to get this fixed haha).
  21. Alright everybody, I ended up getting home more than a day early this week so I had some time tonight to try some things. NVIDIA had a new driver out so I started with that as a clean install. Then I deleted my FXO and metashaders2 (because I saw somewhere where someone thought that helps with something) and started playing. Once again, on the 2nd mission of Georgian Hammer CA, about 30 minutes in, watching one of my mavs making a nice smoke trail towards a T72, and hard reboot. This makes the 3rd crash, all during DCS. This time however, I had GPU-Z logging to a fresh file so I will attach that. Additionally, I have my GPU-Z from the 2 hour RealBench stress test. Finally, I will attach a 3rd log, this one from another benchmark called Valley. I ran this after the computer rebooted tonight, I thought it might help as kind of a control. Some things I have noticed: -on the GPU-Z from the benchmarks I notice my card's memory is hardly touched, but on the DCS log it keeps becoming more and more filled. I have no idea if this is normal. -I notice that while I am playing DCS most of the time the PerfCap reason is 4, which I am guessing is vRel, with a voltage of 1.05. However, there were times it went to 16 which I am guessing is idle (I'm going to self-edit right here, I just remembered I ran the first mission of the campaign again. I'm sure that ~5 minute period of idle was between missions while I was planning and editing my loadout. However, I do notice it briefly goes to idle during the second gaming period. I have "max power" or whatever set in the NVIDIA control panel so I don't know why that would have happened). -I went into the Windows event logs after the crash, because after the second crash I couldn't find any errors or messages. I think (although using the Windows event viewer is new to me) that I was mistaken after the first crash. 
Because after the last two crashes I have examined the time stamps very closely and it appears to me the machine is trucking along fine and just suddenly dropping. So not the "graceful" shutdown as we previously thought. My apologies for the false symptom there, I realize that makes all of your lives harder as you help me diagnose this. All of that said, I'm open to whatever insight anyone has about these logs. In the meantime, my plan of attack is to follow the advice of many of you and start trying to isolate components. Thanks to everyone pointing out RAM. I have 4 8GB sticks, each identical. However, I only added the 2nd pair earlier this year. I will pull that pair first and try to replicate, then repeat with the second pair installed. If that fails to reveal a bad stick, I have a friend in town with an identical PSU (we essentially have twin machines) so I will try to talk him out of his PSU for a little bit, test, and go from there. If that all doesn't reveal an answer, I will be really looking at the GPU at which point I will have to decide if I have a case for an RMA or if I should try toutenglisse's suggestion of getting more voltage to the card. Thanks again everyone. And to answer a couple questions -hansangb I will check my BIOS setting for that. As I said early on, to me the behavior when it happens is reminiscent of someone triggering the reset switch on the case while I am playing. -David- yes they are 2 discrete cables that came with the PSU. They are labeled PCI-E on the end that plugs into the card. They are the style that can either be 6 pins or 8 at the load end, the plug clips together kind of like a lego at the end if you need all 8 pins (which I do- so 2 discrete cables with 8 pin plugs on the end of each). GPU-Z Sensor Log - 2 Hour Stress Test 27 Aug.txt GPU-Z Sensor Log 29 August Crash.txt GPU-Z Sensor Log Valley Benchmark 29 Aug.txt
  22. Bitmaster-- sorry, I just reread your posts about what you were getting at with Linux. If I am understanding it right, you are saying to essentially do all of my "normal" stuff with Linux to see if I can generate a crash? If so, I have two problems. -I have only ever seen a crash in DCS (twice now), no crashes during any of the stress testing -This is probably because I use that machine solely for gaming. I won't even open a browser on it under most circumstances
  23. Wow, thanks everyone. I travel for work so it makes it a little hard to keep up right away sometimes, particularly when the issue is hardware and my machine is hundreds or thousands of miles away. I'll start with Automan. I attached the (stock) photo of my PSU. The single/multiple switch isn't present. Initially after installing the card, I had both PCIE cables plugged into the top two blue connectors. While replugging all the power connections, I moved one of the PCIE cords to the bottom blue plug, not that I think that would matter. Also not that I think it would matter, but I am using two of the peripheral plugs. Bitmaster, thanks for your detailed write up on booting from a Linux drive. I have zero Linux experience so I guess my question would be if I boot from one USB drive, will I still have access to all of the software I need for testing (DCS obviously, TrackIR, etc.)? Toutenglisse and BitMaster- I have never done any overclocking but I think I am mostly following you here. (And just to confirm, I am air cooled on the GPU). But if I may ask some questions to make sure I am following you. -I have always used GPU-Z to monitor new cards/setups/etc. but I never really understood the vRel which always shows up as the "capping" factor on the card. Also, I can confirm that when the card is being pushed I almost always see 1.043 as the voltage, although I have seen it on occasion go to 1.050, never higher than that. So are you saying that as the card senses it is warming up, it is limiting itself to 1.043 volts, when it is capable of handling more? I guess I just don't have the background to understand what vRel really means. -Second, not that I am questioning anyone's expertise, but if the PSU/CPU/GPU were all able to run through a 2 hour stress test without a hiccup, I would think that points to hardware not being the problem. Am I missing something? 
-Finally, and again not to seem disrespectful to anyone's time since you are all being so generous, but I see all sorts of people with 1080ti's in the signature lines, lots (or most, idk) of which I am sure are air cooled. If the 1080ti has such a potential for instability in its natural state, wouldn't there be more people in my shoes? Essentially, I have always been taught in airplanes if something suddenly stops working, undo the last thing you did...the last and only thing I did here was install a factory new GPU and suddenly a once robust system is failing. Not to be a defeatist but would attempting an RMA with EVGA be out of line? (I don't think I am quite at this point yet, I also did do the very basic step of running a DCS repair after I ran the last stress test. I would like to give it another chance at actually running the game before I make another move when I get back home).
  24. Precisely what I have been saying. I guess I need to apologize for being unclear, I thought my last post laid out most clearly that I have no false notions of the "magic" properties of the antiskid switch. Yet twice in this thread you have set up and knocked down the strawman argument that I am claiming antiskid has anything at all to do with how a tire grips the runway (where the rubber meets the road, quite literally in this case) when I have clearly never said that. If something I said made you infer that, then my mistake for being less than succinct. That would be a welcome addition IMO.
  25. No, respectfully they are not. Leaving a 2500 foot skid mark from locked tires and being able to simply refuel, rearm, and go is not realistic. And I don't think that being able to consistently and repeatedly stop shorter with the antiskid off, as documented by bbrz in post #4, and doing so with an airplane that is fit to immediately fly again because such effects are ignored, is realistic. The points are one and the same. And once that tire would blow, the pilot would be faced with learning how much braking action can be achieved on a bare wheel. Suffice it to say, the landing distance required calculation will be exceeded. All that said, while an interesting issue it isn't something I would advocate ED dropping everything to fix when there are other pressing issues; this is a relatively minor thing. And if you think that aircraft tires can stand that sort of abuse without ill effect, then party on.
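For what it's worth, basic physics backs the skepticism here: a sliding (locked) tire generates less friction than one braked near the peak of the slip curve, so a locked-wheel stop should be longer, not shorter. A back-of-the-envelope comparison, using the constant-deceleration model d = v^2 / (2 * mu * g), with round-number friction coefficients that are illustrative assumptions, not measured tire data:

```python
# Braking distance from speed v to a stop at constant deceleration mu * g.
# The mu values below are round-number assumptions for illustration only.
G = 9.81  # m/s^2

def stop_distance(v, mu):
    return v ** 2 / (2 * mu * G)

v = 70.0                             # m/s, roughly a jet's touchdown speed
d_peak = stop_distance(v, mu=0.7)    # braking near the peak of the slip curve
d_locked = stop_distance(v, mu=0.4)  # skidding on locked tires

print(f"{d_peak:.0f} m vs {d_locked:.0f} m")  # 357 m vs 624 m
```

Even with these rough numbers, the locked-wheel stop comes out roughly 75% longer, which is why consistently shorter antiskid-off stops in the sim look wrong.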