Free Falcon 5.5

Great that you wanted to step in here, Mav-jp. I have never seen the equations myself, but made the assumption that the calculation of drag/lift coefficients could not be reasonably simplified/approximated as two-variable functions (your 2D tables).

 

Naturally you can make N-D tables and end up with results just as good as a solver's, but the problem then is: what if you have a resolution of 10,000 points per dimension and 4 dimensions? You won't even have space on your hard drive for that many points.

 

In order to solve any such equation (assuming it requires 4 or more variables), you need to solve it in real time and not use table data.

 

Now, assuming that some of these equations can be reasonably simplified to 2-variable functions (again, I have never seen these equations or studied aero/fluid dynamics, so I am just drawing conclusions for the two different cases), then of course the tables work :P.

 

Basically

 

There are 6 “global” aerodynamic coefficients to completely describe a model:

 

The Forces:

 

• CL: lift coefficient (airflow reference frame)
• CD: drag coefficient (airflow reference frame)
• CY: Y-axis force coefficient (body reference frame)

 

The Moments:

 

• Cm: pitching moment
• Cn: yawing moment
• Cl: rolling moment

 

 

List of Variables:

 

α = angle of attack
β = sideslip angle
p = roll rate
q = pitch rate
r = yaw rate
δsb = speed brake deflection
δlef = leading-edge flap deflection
δh = elevator deflection
δr = rudder deflection
δa = aileron deflection

 

 

If you can get the tabular data for the dependencies below, you will have a VERY accurate description of the forces and torques. That data can be calculated with a 3rd-party Navier-Stokes (NS) solver (not in real time) or obtained from wind tunnel testing.

 

 

Variable dependence

CL / CD ........ retro-engineered
CY ............. α, β, r, p, δa, δlef, δr
Cm ............. α, β, q, δh, δlef, δsb
Cl ............. α, β, r, p, δh, δa, δr, δlef
Cn ............. α, β, r, p, δh, δa, δr, δlef



 

Quoting mav-jp's dependency table above (CL/CD retro-engineered; CY, Cm, Cl, Cn functions of up to 8 variables):

 

Well that is a problem, isn't it?

If you have a function Cn = F(α, β, r, p, δh, δa, δr, δlef) as tabular data, you will require more storage space than is available on all the hard drives in the world together.

One dimension per variable, 8 variables, 1000 steps per dimension: that requires 1000^8 = 10^24 entries.

 

I'm saying tabular data would be fine if we could store it, but I'm also saying there is no way we can store all that data. Or am I missing something?
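
For concreteness, here is that arithmetic as a small C sketch (the 1000 steps and 8 dimensions are just the example figures above, not real FM numbers):

#include <stdio.h>
#include <math.h>

/* Back-of-envelope storage cost of a full 8-D grid: steps^dims entries,
   one 4-byte float each. Example figures from the post above. */
int main(void) {
    double entries = pow(1000.0, 8.0);   /* 1000^8 = 1e24 entries */
    double bytes   = entries * 4.0;      /* one float per entry   */
    printf("entries = %.3g, storage = %.3g TB\n", entries, bytes / 1e12);
    return 0;
}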

 

---------

 

Edit: You know what would be a cool idea for a flight simulator? You have a small miniature USB wind tunnel and you put in the model of the plane you want to fly :P



Quoting =RvE=Yoda above (an 8-variable Cn table at 1000 steps per dimension would need 10^24 entries):

 

Ha, no, LOL!!

you don't need 1000 steps per dimension, you FOOL!! ;)

for alpha, a step of 1 degree is largely sufficient in the -20/+35° range, and a step of 5° or more beyond that;

beta: no need for more than -30/+30;

deltah: between -20/+20, a step of 1° or more is largely sufficient;

etc. etc. ... it is doable with sufficient precision :)


Quoting mav-jp above (coarse, non-uniform steps per dimension are sufficient):

 

1000 points was just an example.

Let's say you want to spend 1 GB of RAM on FM data. Each entry is one float, so 4 bytes. That means you can have at best 250 million entries.

250M^(1/8) gives (under the assumption that you have equally many points on each dimension) that you can have about 11 points per dimension.

Your plan, then, is to have varying density across the dimensions, using some algorithm to find where points are most needed? And then you plan on having some pretty interpolation?
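
As a sanity check of that 11-points figure, a small C sketch (assuming a decimal 1 GB and 4-byte floats):

#include <stdio.h>
#include <math.h>

/* Budget / sizeof(float) entries, spread uniformly over an 8-D grid. */
int main(void) {
    double entries = 1e9 / sizeof(float);      /* ~250 million entries */
    double per_dim = pow(entries, 1.0 / 8.0);  /* 8th root             */
    printf("max uniform points per dimension = %.1f\n", per_dim); /* ~11.2 */
    return 0;
}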



You missed the point then... and yes, I know where your mind is going.

Why make it easy if you can make it complicated? :D

No, I would like to have Mav-jp explain how he would use the 1 GB of data to create a table-based flight model. I am not saying it isn't possible. But what I am saying is that it likely isn't going to give you what you want if you use AF's linear interpolations.


ask X-Plane (mav-jp pointed to it)... or even better, ask FO (I'm curious myself how they put all that into that so-called "magic black box")

or even better... just open the stuff I sent you, because there you shall find the answer :)

but I'm eager to see where this thread goes... :book:

 

I think Mav-jp already stated that X-Plane doesn't use table data but solves simplified equations instead. I am eagerly awaiting his answer on how he would use 1 GB of FM space. I think it should be possible, but not as simple as he has made it sound before. Basically he needs to get the average points per dimension somewhere around 10, then use some nice interpolation (and possibly extrapolation) on that data.



I'm really curious how FO will do that after the hints they have given, especially the fact that all the necessary variables are even calculated on different surfaces... etc. etc.

Nice, I was suggesting such a flight model also, but they have to be careful to remember that each flight surface also affects the others.

 

 

I don't know, Yoda; I think the answer is really in the documents I sent you, where professionals face these questions of computational modelling and simulation of aircraft and their environment.

 

I cannot give you specifics, but it so happens that the commercial sims have more accurate flight models than the military ones ;).

 

 

But that reminds me: what made you say that BMS is "dynamic"? :huh: What did you mean by that?

 

Maybe I was wrong, but when I flew it (;)) and from those I spoke to, I got the impression that table data was not used in the same way, but that the flight parameters were instead calculated in real time, like ED's AFM.



The same as everyone else who's making FMs in this day and age ... since the underlying method is based on pretty much that idea ... for everyone.

 

I'm really curious how FO will do that after the hints they have given, especially the fact that all the necessary variables are even calculated on different surfaces... etc. etc.


Yoda, drop it. There is no such thing as a 1 GB necessity of data. In engineering, we solve the problem; we are not modelling metaphysical "reality". The parameter equations you want spelled out would generate useless, *excess* data.

Compare it to anti-aliasing: we eliminate high-resolution data to render images in such a way that our brain can optimally process them, and this way we create a more natural visual experience. The same has to be done in *any* flight model.

Whereas real-time calculation is essential for rapid prototyping of *new* designs, it is very interesting that for existing designs we have a lot of measurement data that can be interpolated. Against real measurements, any functional modelling will always be a simplification.

You can "fly on rails" just as well using real-time functions, of course; it has nothing to do with that.


Yoda, drop it. There is no such thing as a 1 GB necessity of data. [...]

 

I never said you required 1 GB of data. I said that with 1 GB of data you can store 11 steps per dimension of an 8-parameter function, if you want the values calculated from 8-dimensional tables.

 

You can "fly on rails" just as well using real-time functions of course, it has nothing to do with that.

And this is what I've said all along... no, in fact, I never said anything about "flying on rails".

 

 

Whereas real-time calculation is essential for rapid prototyping of *new* designs, it is very interesting that for existing designs we have a lot of measurement data that can be interpolated. [...]

 

The question is not whether the principle works. The question is: can we store enough data in RAM to use efficient enough interpolation to represent the flight regimes of interest?

For example, in Falcon terms, how many alpha breakpoints would you consider enough? Now do the same for the remaining 7 parameters, assigning more points to the parameters you consider more important. Multiply them all together (like 3*4*5*6*7*8*25*36 = N_table_entries). Now multiply by the number of aircraft for which you wish to have this type of flight model, and multiply by how many such functions you need; you will quickly run out of space if you are not careful.
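
Multiplying out that example in C (the breakpoint counts 3..36 are the figures above; the 6 coefficient functions and 10 aircraft are my illustrative assumptions, not numbers from the thread):

#include <stdio.h>

/* Product of per-dimension breakpoint counts, scaled by a hypothetical
   number of coefficient functions and aircraft. */
int main(void) {
    const int n[8] = {3, 4, 5, 6, 7, 8, 25, 36};
    long long entries = 1;
    for (int i = 0; i < 8; i++) entries *= n[i];        /* 18,144,000 */
    long long bytes = entries * 4LL * 6 /* functions */ * 10 /* aircraft */;
    printf("%lld entries -> %.2f GB\n", entries, bytes / 1e9);
    return 0;
}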



Quoting =RvE=Yoda above (1 GB of RAM gives at best ~11 uniform points per dimension for an 8-D table):

 

Your assumptions are incorrect:

for all elevator positions, 5 points are quite sufficient to get correct interpolation;

for some alpha cases, 20 are sufficient;

for beta, 19 are okay;

like this:

 

# deltah "m" BREAKPOINTS

#

5 # Number of deltah "m" Breakpoints

-25 -10 0 10 25

#

# deltah "m2" BREAKPOINTS

#

7 # Number of deltah "m2" Breakpoints

-25 -10 0 10 15 20 25

#

# deltah "m3" BREAKPOINTS

#

3 # Number of deltah "m3" Breakpoints

-25 0 25

#

#

# alpha "m" BREAKPOINTS

#

20 # Number of alpha "m" Breakpoints

#

-20 -15 -10 -5 0 5 10 15 20 25

30 35 40 45 50 55 60 70 80 90

#

# alpha "m2 - LEF" BREAKPOINTS

#

14 # Number of alpha "m2" Breakpoints

#

-20.0 -15.0 -10.0 -5.0 0.0 5.0 10.0 15.0 20.0 25.0

30.0 35.0 40.0 45.0

#

# beta "m" BREAKPOINTS

#

19 # Number of beta "m" Breakpoints

#

-30 -25 -20 -15 -10 -8 -6 -4 -2 0

2 4 6 8 10 15 20 25 30

#

 

of course, for CL/CD more breakpoints are necessary, like:

 

# MACH BREAKPOINTS

#

22 # Num MACH

0 0.2 0.3 0.4 0.5 0.6 0.7 0.8

0.9 0.95 1.0 1.05 1.1 1.2 1.3 1.4 1.5 1.6

1.7 1.8 1.9 2.0

#

# ALPHA BREAKPOINTS

#

46 # Num Alpha

-20 -15 -10 -5 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 14.5 15 15.5

16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 40 55 70 90

#

# BETA BREAKPOINTS

#

19 # Num Beta

-30 -25 -20 -15 -10 -8 -6 -4 -2 0

2 4 6 8 10 15 20 25 30

#

 

 

all in all, I can guarantee you that you don't need 1 GB of RAM to simulate a very accurate tabular model :)

if you want more explanations about tabular data, read:

 

http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19800005879_1980005879.pdf

 

and feel free to count the number of values :)
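
One practical detail those non-uniform breakpoint lists imply: a lookup first has to find the bracketing interval before it can interpolate. A minimal C sketch (the function name and interface are mine, not from any FF/BMS source):

/* Find i such that bp[i] <= x < bp[i+1], clamping at the ends, plus the
   fractional position t inside that interval. Works directly on lists
   like the alpha/beta/deltah breakpoints above. */
static int bracket(const float *bp, int n, float x, float *t) {
    if (x <= bp[0])     { *t = 0.0f; return 0; }      /* clamp low  */
    if (x >= bp[n - 1]) { *t = 1.0f; return n - 2; }  /* clamp high */
    int i = 0;
    while (x >= bp[i + 1]) i++;        /* linear scan; n is only ~5..46 */
    *t = (x - bp[i]) / (bp[i + 1] - bp[i]);
    return i;
}

For example, with the 20-point alpha list above, bracket(alpha_bp, 20, 12.5f, &t) returns interval 6 (10 to 15 degrees) with t = 0.5.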


Additionally, some parameters have an importance of second order; for example Cl vs. δr. Therefore a simple table of around 20 coefficients is all that is required to handle the coupling...

second-order parameters don't need as many values as first-order ones :)

 

 

I can GUARANTEE you that our computers today have FAR more than enough RAM to handle the tabular data necessary to go down to second-order influences, which is already more than enough.

of course, getting those coefficients is quite tricky and time- and cost-consuming.

For the F-16, luckily, wind tunnel data exist, and of course the experiments were done at a Reynolds number that covers most of the flight regime... for Cl, Cm, Cn (Mach 0.6).

And HFFM covers other Reynolds numbers for CL/CD because it is retro-engineered :)

the time when real-time aero parameter computation beats tabular data is not here yet; maybe in a dozen years?



Quoting mav-jp's breakpoint post above (5 deltah / 20 alpha / 19 beta breakpoints, the NASA report link, "feel free to count the number of values"):

 

I already wrote that you can of course distribute them freely; I merely mentioned 11 for the case where you use a uniform distribution...

How am I incorrect in mentioning an example? I don't understand. Let's go back to your original post (I'm sorry, but I cannot understand what your post above means):

 

Cn ............. α, β, r, p, δh, δa, δr, δlef
(the function) .. (the independent variables it depends on)

 

To map from variable space to the resulting function space using table data, each independent variable is given one dimension.

Then we must give each dimension a set of steps (of course the step size can vary a lot).

Then we must calculate how many table entries we get; it is
N_entries = N_α * N_β * N_r * N_p * N_δh * N_δa * N_δr * N_δlef

My question to you is: how many steps do you believe each of the N_[]s requires independently?

 

Then the total memory usage for the lookup table becomes

N_bytes = N_entries * sizeof(entry)
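
Filling in that formula for the direct 8-D Cn grid in C (the 20/19/5 counts are mav-jp's figures from above; the 10s for the rates and remaining surfaces are my placeholder assumptions):

#include <stdio.h>

/* Entry count and size for a direct 8-D Cn(α, β, r, p, δh, δa, δr, δlef) grid. */
int main(void) {
    const int n[8] = {20, 19, 10, 10, 5, 10, 10, 10};
    long long entries = 1;
    for (int i = 0; i < 8; i++) entries *= n[i];
    printf("N_entries = %lld, N_bytes = %.0f MB\n",
           entries, entries * 4.0 / 1e6);   /* ~190M entries, ~760 MB */
    return 0;
}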



I'm not saying who does it better; just giving you my opinion: FMs created in the same epoch are generally of the same quality (not fidelity; some may be more accurate than others, if you catch my drift: data vs. model). The reason for this is that the basic technology and knowledge for programming these FMs is essentially the same and available to everyone. Some programmers might use different solutions to the same problem, and one might solve it better than the other, but in general the implementations will 'average out'. Just my opinion.

I am, of course, not considering greatly simplified FMs like, eh, HAWX, just to give an extreme example.


True. That's why most paid coders are mostly the ones who can replicate stuff in depth and detail in the most efficient ways.

I think a better way to say it is: ...those who find a good compromise between fidelity and efficiency. Because you obviously can't have both.


Quoting =RvE=Yoda above (how many steps does each N_[] require, and N_bytes = N_entries * sizeof(entry)):

 

Your assumption is incorrect. In my previous post I was stating the dependencies, but I should have added their order of importance, because in order to correctly and accurately compute a variable dependent on a second-order parameter you don't need N but more likely N/50 values.

Read the NASA document I provided; the simulation used in it was used to develop the FLCS of the real aircraft. As you can see, there are indeed thousands of values there, but nothing close to 1 GB :) and believe me, this was accurate enough to test and develop the deep-stall situations, which means a lot (extreme envelope).

cheers


Quoting mav-jp above (second-order parameters need far fewer values; see the NASA document):

 

What assumption? I don't know which one. You say I made an incorrect assumption, but I don't know what is incorrect, nor even what the assumption is.

"Compute a variable"? Hang on, what variables? We were talking about functions made up of independent variables, right? In order to calculate a function of N independent variables you cannot use N/50; NASA cannot change that. It is math.

You may say that some variables have less impact on the overall function value than others, but that is just dancing around the question. The question is whether flight models can be developed that account for multi-variable solutions of the coefficients mentioned before using only data tables.

I browsed through the document, but at first glance I see only 2-dimensional data tables. I don't think this document has anything to do with our case.

Strictly mathematical question
Are you saying Cn is an unknown function of 8 independent variables which are all in tables, or not?

 

Example: if an unknown mathematical function such as F = F(A,B,C) is to be measured empirically and the results stored in tables (in order to describe the function over a specific regime), then you need to create a 3-dimensional table, such as
double[N_a][N_b][N_c].
Nothing can change that, because we have said the variables are all independent and the function is completely unknown.

 

Consider:
A 1-variable function F = F(A): to describe this function numerically you need a 1-dimensional data set.
A 2-variable function F = F(A,B): to describe this function numerically you need a 2-dimensional data set.
An N-variable function F = F(A_1, A_2, ..., A_N): to describe this function numerically, including all the effects, you need an N-dimensional data set.

 

Other example:
Falcon calculates (afaik) some CL (a lift coefficient of some sort) from a function CL(AoA, Mach). This function is a 2D linear interpolation of a 2-dimensional data table. The real CL value changes with a lot more parameters than just AoA and Mach, but because working with only 2 parameters is easy, generates a reasonably low data volume, and requires little computation, it can be used to reasonably describe the basic flight characteristics of an aircraft.
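
That 2D scheme, sketched in C (a sketch of the general technique, not Falcon's actual code; bracket() is the helper sketched earlier in the thread):

/* Bilinear CL(AoA, Mach) lookup over a flat row-major table cl[alpha][mach]. */
float cl_lookup(const float *alpha_bp, int na,
                const float *mach_bp,  int nm,
                const float *cl, float alpha, float mach) {
    float ta, tm;
    int ia = bracket(alpha_bp, na, alpha, &ta);
    int im = bracket(mach_bp,  nm, mach,  &tm);
    const float *r0 = cl + ia * nm;      /* row at alpha_bp[ia]   */
    const float *r1 = r0 + nm;           /* row at alpha_bp[ia+1] */
    float lo = r0[im] + tm * (r0[im + 1] - r0[im]);  /* along Mach */
    float hi = r1[im] + tm * (r1[im + 1] - r1[im]);
    return lo + ta * (hi - lo);                      /* along AoA  */
}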

 

Now assume a new situation:
We wish to create a new flight sim based on data tables. We come up with a kick-ass interpolation algorithm and another kick-ass method to determine which parameters a function mostly relies on and which ones need the highest data resolution (breakpoint spacing).

Let us say we want to know a basic parameter (such as some force on the aircraft) F, and our algorithms say we have F = F(A,B,C,D,E), so mostly a function of 5 parameters. Now what wind tunnel data do we need?

We must go back and see what our second kick-ass algorithm did and get each parameter's individually required resolution. Let us say they are 5, 6, 7, 8, 9 entries respectively. So we once again ask our fictional kick-ass algorithm to tell us where to best place our measurements, and we run them for all the possible combinations.

This generates 5*6*7*8*9 = 15,120 data points, which we store in a data table. This data table, together with our kick-ass interpolation algorithm, is our source for determining the function F while flying the sim.

 

The total memory required for this function was about 60 kB when using floats to store the data.

Now increase the parameter count of F and you run into trouble.

 

 

Lastly
You cannot say that an unknown function F(A,B,C,D) can be simplified to F(A,B,C,D) = F1(A,B) (+,-,*,/,...) F2(C,D), because then it is no longer unknown. You have assumed a lot about the function, and you can no longer claim that your model is entirely data-based.

 

 

My question to you
What is the highest-dimensional data table you would need for your sim?



It would be nice to get back on topic. All these formulas, etc., we understand, are needed to compute code for all sorts of reasons, not just game simulations. This is why there are teams set up to work on projects, with a team leader to keep the engineers, programmers and everyone else involved on the right track. They always get caught up in their own environment and lose track of the intended project. So let's just say it's a win-win situation and look back at the first thread. Cheers. :thumbup:


Quoting the "FMs created in the same epoch are generally of the same quality" post above:

 

Yeah, I have to say the VRS Super Hornet has an awesome FM, but what shocks me is their plan to make a Pro version with an extreme flight model (EFM)! :music_whistling: Just how far can they go is the question I have to ask, although it has been mentioned that the Pro has departures (as difficult as that situation is to get into with a Super Hornet), so it's going to be very interesting :D

Would I take it loaded with weapons down the Jet Canyon race? Probably not, as my FPS won't handle it. :doh:


Quoting =RvE=Yoda's long post above (a completely unknown N-variable function needs an N-dimensional data set; what is the highest-dimensional table you would need?):

 

Your assumption of equal dimensions is incorrect:

As the parameters have different orders of importance, the dimensional tables that describe their effects are not identical.

Stop thinking math or code; think physics. You need to know the order of importance.

To give you an example: if you take Cm(α, β, δh, δlef, δsb, q, δds),

Cm can be expressed as (read page 38, Appendix B):

 

Cm(α, β, δh, δlef, δsb, q, δds) =
    Cm(α, β, δh) * η_δh(δh)
  + ΔCm_lef(α, β) * (1 - δlef/25)
  + ΔCm_sb(α) * (δsb/60)
  + (c̄q / 2V) * [ Cm_q(α) + ΔCm_q,lef(α) * (1 - δlef/25) ]
  + ΔCm(α)
  + ΔCm_ds(α, δh)

 

 

* Cm(α, β, δh) has 20 alpha Bkpts, 19 beta Bkpts, 5 deltah Bkpts: a table of 20 * 19 * 5 = 1900 float values

 

 

* ΔCm_lef(α, β) has 20 alpha Bkpts, 19 beta Bkpts: a table of 380 float values

 

 

* ΔCm_sb(α) has 20 alpha Bkpts: a table of 20 float values

 

 

* Cm_q(α) has 20 alpha Bkpts: a table of 20 float values

 

* η_δh(δh) has 5 deltah Bkpts: a table of 5 float values

 

* ΔCm_q,lef(α) has 14 alpha Bkpts: a table of 14 float values

 

* ΔCm(α) has 20 alpha Bkpts: a table of 20 float values

 

Finally, to accurately describe Cm, you need 1900 + 380 + 20 + 20 + 5 + 14 + 20 = 2359 float values here (the ΔCm_ds(α, δh) table, not counted above, would add only 20 * 5 = 100 more).

Size in RAM: 2359 * 4 = 9436 bytes ≈ 9.4 kB!!

Assuming you need more or less the same size for the 5 other dimensionless force and moment coefficients (an overestimate, because you don't need as much for Cn or Cl, for instance)...

you would get: 9.4 kB * 6 ≈ 57 kB.

Now, in the extreme case, you need all those coefficients for, let's say, 20 Mach values (Reynolds):

overall, about 1.13 MB is needed....

 

NOT A BIG DEAL for the RAM.

Believe me, the problem is not storing those coefficients in RAM; the problem is calculating those coefficients and WRITING THEM to a file LOL ;)
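
Reading that decomposition as code makes the saving concrete: each term is a small 1-D/2-D/3-D lookup summed at runtime, instead of one giant 7-D grid. A C sketch (all table handles, names and the lookupN() helpers are mine, standing for interpolated table reads such as ones built on bracket() from earlier; this is not code from the NASA report):

typedef struct Table Table;   /* opaque: breakpoints + values */
extern const Table *Cm_abd, *Eta_dh, *dCm_lef, *dCm_sb,
                   *Cm_q, *dCm_q_lef, *dCm_a, *dCm_ds;
extern float lookup1(const Table *t, float x);
extern float lookup2(const Table *t, float x, float y);
extern float lookup3(const Table *t, float x, float y, float z);

/* Cm build-up following the equation above, term by term. */
float cm_total(float alpha, float beta, float dh, float dlef,
               float dsb, float q, float cbar, float V) {
    float cm = lookup3(Cm_abd, alpha, beta, dh) * lookup1(Eta_dh, dh);
    cm += lookup2(dCm_lef, alpha, beta) * (1.0f - dlef / 25.0f);
    cm += lookup1(dCm_sb, alpha) * (dsb / 60.0f);
    cm += (cbar * q / (2.0f * V)) *
          (lookup1(Cm_q, alpha) +
           lookup1(dCm_q_lef, alpha) * (1.0f - dlef / 25.0f));
    cm += lookup1(dCm_a, alpha);          /* ΔCm(α)        */
    cm += lookup2(dCm_ds, alpha, dh);     /* ΔCm_ds(α, δh) */
    return cm;
}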


 

Nothing can change that, because we have said the variables are all independent, and the function is completely unknown.

 

 

This assumption is very incorrect: saying that the function is unknown is saying that we have absolutely no knowledge of what aerodynamics is. Of course we know what kind of function best interpolates reality (see the equation I wrote above), and THANK GOD people worked throughout the 20th century to understand the order of importance of the parameters!!!!

Your way of thinking reminds me of myself when I was studying pure math at university (before studying engineering).

 

Think ENGINEERING, think PHYSICS :)


Your assumption of equal dimensions is incorrect: as the parameters have different orders of importance, the dimensional tables that describe their effects are not identical.

 

As I've been saying all along... Again, I merely gave one of my examples where they were equal; some of my other examples were non-equal. (Here comes the "kick-ass algorithm for finding parameter importance out of empirical data" :P)

 

 

Quoting the "think physics" post and the Cm decomposition above:

 

And this answers why you can do it. You were able to write down an expression for the equation, which made it no longer "completely unknown"; so it is no longer one entirely empirical table of Cm, but rather empirical tables of its individual sub-functions.

 

 


Yes, you were able to separate the equations into smaller functions of at most 3 parameters, thus it is easily feasible to use data tables. No disagreement here :)

 

What I'm saying is that if you had a completely unknown function of 8 variables, there would be no guarantee you could split it, and in the worst case you would end up with an 8-dimensional table that would likely be too large.

 

Again, in this case your function isn't completely unknown (the opposite of the case I've been talking about all along), so it isn't an 8-D table.

 

 

Quoting mav-jp above ("saying that the function is unknown is saying that we have absolutely no knowledge of what aerodynamics is... Think ENGINEERING, think PHYSICS :)"):

 

This is not an assumption; this is the general case. You could say "an example". The general case is never "incorrect", but it can in special cases be simplified. In our case you have just proven that we DO have knowledge about this function: enough knowledge to simplify it into several sub-functions, making the table approach possible.



I think Yoda is just trying to describe conditions using more complex calculations (including all the important variables given) and to achieve a more precise result through math, whereas engineers and others just use the results for certain conditions, in the form of data gained from wind tunnel tests or flight tests, and implement those as boundary conditions or parameters.

The 2nd solution seems to appear more "static" in terms of flying, but I think it is still a good solution in terms of CPU resources, provided the relations between certain variables and their dependence on each other are included and interact accordingly.

 

hmm :book:

 

Well, what people tend to forget is that EVERYTHING IS A MODEL; even the Navier-Stokes equations are a model, the turbulence equations are a model... so saying you can find the analytic solution of the problem is pure mathematical mind-masturbation.

 

The reality :

 

* The Navier-Stokes equations + turbulence + boundary-condition equations are a model, and they cannot be used in real time at the moment.

* For real-time simulation you can use:
    * simplified real-time computation models
    * physics models with tabular data (pre-calculated by an NS solver or by testing)

 

I'm saying that when you are able to get sufficient data, a tabular model is far, far better than simplified real-time computation. However, getting that tabular data is very time-consuming and can be costly.

 

NOW, this is only for the aero forces; of course, after that you need a real EOM (equations of motion) solver to compute the trajectory.
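
For readers wondering what that last step amounts to, a deliberately minimal C sketch (translational axes only, explicit Euler; a real solver would be 6-DOF with a better integrator, so this is purely illustrative and not from any sim mentioned here):

typedef struct { float x[3], v[3]; } State;   /* position, velocity */

/* One EOM step: turn the forces (from the coefficient tables, plus
   thrust and gravity) into updated velocity and position. */
void eom_step(State *s, const float force[3], float mass, float dt) {
    for (int i = 0; i < 3; i++) {
        float a = force[i] / mass;   /* Newton's second law */
        s->v[i] += a * dt;           /* integrate velocity  */
        s->x[i] += s->v[i] * dt;     /* integrate position  */
    }
}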

 

Falcon 4 has very poor physics modelling and tabular data. Falcon 4 has no EOM solver.

 

End of story

 

cheers

