
Posted (edited)

I’ve not used the programs, but theoretically could you use ChatGPT or other AI to write mission code?

Or maybe one step further, could you integrate it to rewrite mission scripts on the fly for a Dynamic Campaign?

 

ETA: I asked it; it suggested that if you had access to its API it could possibly learn, but it wouldn’t know how to code within games.

Edited by Mr_Burns
Posted
6 hours ago, Mr_Burns said:

I’ve not used the programs, but theoretically could you use ChatGPT or other AI to write mission code?

In a manner of speaking, yes. You can use ChatGPT and similar AI to write some rudimentary, trivial mission code.

That being said, anything worth your while can't be created with ChatGPT, because these programs aren't specifically designed for creating code (they are designed to wow and entertain the masses). Their training DB is a jumble of many, many scripts that all use different libraries, and since these AI bots have no understanding of context, they can't separate them, and do not know whether what they use to 'learn' actually works.

You therefore end up with a script that might work (on a long shot). That's when you get lucky. The problems start when you either encounter a bug or need to change an aspect: you have no idea what the code does, and you end up losing whatever effort you may have invested.

7 hours ago, Mr_Burns said:

Or maybe one step further, could you integrate it to rewrite mission scripts on the fly for a Dynamic Campaign?

Yes. But if you add the constraints that the missions should a) work and b) be interesting, the answer becomes no. (You merely said 'rewrite missions' - you don't seem to know in which direction they should be rewritten, which is vital. Why do you want to rewrite missions for a Dynamic Campaign? Which aspects do you want to change, and why?) Mission designers don't rewrite missions for the sake of rewriting them; they design a mission with an idea in mind. These chat bots don't have that; they don't create new things, they merely react to keywords in your input string and then collect bits from a vast library to string together, giving the appearance of conversational / coding ability. You can't point a bot AI at a game and tell it 'make it more fun', and you can't just tell an AI to take a mission and 'make it dynamic'. Someone has to write the definition and parameters of what that means, and AI is not (yet) capable of that.

7 hours ago, Mr_Burns said:

ETA: I asked it; it suggested that if you had access to its API it could possibly learn, but it wouldn’t know how to code within games.

It wouldn't indeed - it strings together parts of code from its training DB to make something that appears to match what you mentioned in your input string. That's it. There is no inherent understanding of the context of your question; it's pattern matching and the weighing of keywords. It's our interpretation of the bot's answer that suggests there may be intelligence involved - a linguistic version of pareidolia, so to speak. It doesn't understand what it's doing (no reflection), hence it can't help you create something.

AI may help you shorten your work if you know what you are doing and have a deep understanding of what is involved - that is where AI is already creating a lot of added value. However, if you don't know how to create good missions, AI can't help: it is a multiplier for your ability, so you need to build up that ability first for meaningful gain.

 

Posted

I think one thing it could be useful for is helping people who do not understand coding at all do some simple things, or take some simple things and go from having to set up multiple triggers to having it done in a script.

An example: I have been kicking around an idea for some missions/a campaign and wanted a script where the AI calls their missile shots. I had a version of this already, but experimented with having ChatGPT make its own version. Doing that, I was able to get something that not only has the AI calling missile shots, but can also call out its target. For someone who knows coding well that's not overly impressive (and they could likely make something better, as the current version is reliant on unit names), but for someone who doesn't know programming well it gives some ability to expand ideas.

Posted

I have had mild success using the AI to generate simple Python scripts. The thing is that to do so, one has to have a good understanding of the code itself. ChatGPT is a language model: it constructs phrases by probabilities. It's quite impressive at times and quite underwhelming at others, but being a language model, it is particularly good with code.

 

 

Posted (edited)
22 hours ago, Chenstrap said:

I think one thing it could be useful for is helping people who do not understand coding at all do some simple things,

Except for the most trivial things, the current versions of AI fail at that task. I think that if you have no understanding of coding at all, it could be more time-efficient to learn a bit of coding and then try your hand at it based on other people's (as opposed to AI's) working examples. We all learned by doing that - I distinctly remember, way back when dinosaurs roamed the earth, typing into my computer (an Apple II) the code for a game from a magazine (to clear this up: "magazines" were publications made from paper that you could purchase in a brick & mortar store).

22 hours ago, Chenstrap said:

I had a version of this already, but experimented with having ChatGPT make its own version. Doing that, I was able to get something that not only has the AI calling missile shots, but can also call out its target.

And you obviously already have some understanding of coding. AI can be very helpful in pulling in bits of code to feed you for experimenting. I see AI as a great potential aid for mission design in other areas: creating organic ground/air traffic, making background chatter (with text-to-speech built in), creating realistic and balanced group compositions for a time period, suggesting real/historic engagements and adapting your mission template to them, and many things more. AI is a multiplier for your talent/ability. It can't replace it. So if you can't code, AI coding will not be helpful. If you can code, it can be very helpful for unearthing some obscure code that did something similar, for you to adapt and expand on.

22 hours ago, Chenstrap said:

for someone who doesn't know programming well it gives some ability to expand ideas.

I think this is where I somewhat disagree: if you don't have the means (i.e. no coding knowledge) to expand, AI won't enable you to, either. As soon as you do have the ability, AI indeed may become a multiplier.

Edited by cfrag
Posted

I think it can be summed up as this:

ChatGPT is no substitute for learning for yourself. It can be an effective tool for those who already know their field, and a misleading tool for those who use it in areas where they don't understand what they're doing.

It's very helpful for me to throw code at it when I've been looking at a screen too long and can't see why something doesn't work (especially if it's down to a single case being incorrect - I hate that about Lua).

It's like talking to a salesman. It'll say anything. You need to know enough to tell when it's BS'ing its way through - or when it's actually on point. 😉

 

Posted

You know, you've got to be careful with ChatGPT; it's not a reliable source of information.

I wanted to check how smart it is and asked it a misleading question about the "famous Ju 88 strategic bomber".

Here's what I got:

Ju88.jpg

Can it be used for generating missions? I dunno, judge yourself:

 

Posted (edited)
2 hours ago, Hog_driver said:

judge yourself

For those who have watched the video and did not notice it because they don't yet have the required experience in DCS Lua scripting (i.e. they are exactly the people who would try this): the code the chat bot produces (it's a chat bot, not a coding bot, so what it produces is supposed to be small talk) doesn't work and contains multiple errors, because the code is copied willy-nilly from different sources using different frameworks. If you know how to code for DCS, it's obvious. If you don't, the answer seems as convincing as the (exceedingly nice! 🙂) bit about the Ju-88 being a prime 4-engine bomber. Chat bots are 'yes' bots, and their answers are accordingly airheaded.

The narrator of this video appears to have a lamentably shallow understanding of DCS scripting and ChatGPT: he can't spot obvious mistakes, and asking the AI (the 'yes' bot) if it understands you isn't proof of the AI's reflective ability - it's not reflective at all - it's a test of the bot's ability to say yes. It does not understand what you want. It understands that it has to grab code pieces tagged 'DCS', 'Lua' and 'mission', and string them together in a manner similar to other code pieces; that's just like telling an art bot to draw an image in the style of Picasso.

Worse, the video suggests that the code would work, but never tests the code that the bot creates. Let's be generous and say that the narrator overlooked this part, perhaps because he ran out of time, and not because it would have severely diminished the video's attractiveness.

So both generated scripts contain bad, obvious errors, for what are indeed essentially trivial things. Of course, @Hog_driver knows this, and I believe "I dunno, judge yourself" was exceedingly tongue-in-cheek. The problem here, to me, is that the latter part goes over the heads of those to whom it may be relevant.

Apologies if I'm (again) elaborating the obvious.

 

Edited by cfrag
Posted (edited)

I was asked by someone - who was slightly taken aback by my flat-out dismissing AI as not understanding what it does - if I was able to show an example from the video. I believe it's fair to challenge me when I just bad-mouth something without offering up some evidence, so my apologies for having been assertive without being factual.

What I mean by 'AI doesn't know what it's doing' is well illustrated by the following code example, taken from the video, where the aim of the code is to determine if a unit has landed on a carrier:

image.png

The obvious mistakes here are in line 1:

  • you can't subtract vectors in plain Lua (dist = unit:getPoint() - carrier:getPoint()), and
  • the :get2D() invocations don't exist in vanilla DCS - that's copied from code that uses a framework, and
  • get2D() returns a vector, not a scalar.

That's 3 breaking errors in a single line. But that's not the insidious part. Far, far worse is the killing blow to "AI" in the following - proof positive that the AI does not understand what it is doing: it commits a rookie logic blunder in line 2:

The code (incorrectly, but let's assume that it works) sets up 'dist' as the difference between the unit's position and the carrier's position. To do that, it subtracts the carrier's location from that of the unit, which indeed gives us the difference.

What it attempts to do next is determine whether the distance between the two locations (unit and carrier) is less than 200, to see if the unit has landed. What it actually codes in line 2 is

if dist < 200 and dist > 0 then

This code will fail 50% of the time, because dist can well be smaller than zero while the distance between the two locations is less than 200: for example, if the unit sits at 5 and the carrier at 7, dist will be -2 - only 2 away from the carrier's center, directly on top of it. In this case, however, the code will say 'not on carrier' because dist is smaller than zero. Proof positive that the bot can't internally verify the integrity of the code that it puts out - that it does not know the difference between, nor the significance of, a difference and a distance.

So ChatGPT makes 3 severe semantic/syntactic errors plus a show-stopping logic error in 2 lines - the two lines that are the heart of the code (deciding whether the unit landed on the carrier).

This only leaves the conclusion that the bot has no idea what it is doing (the correct way would be to see if the absolute value of dist is smaller than 200):

if math.abs(dist) < 200 then

The above is rudimentary logic, coding 101. Anyone who knows programming can see this immediately and knows that this bot produces trash code. Someone who trusts this AI's code is up the creek with neither a paddle nor, I surmise, a boat.
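The sign blunder is easy to demonstrate in plain Lua, outside DCS, using hypothetical 1D positions:

```lua
-- Toy 1D reproduction of the bug discussed above; positions are
-- made-up scalars, not real DCS API values.
local function buggyOnCarrier(unitPos, carrierPos)
    local dist = unitPos - carrierPos   -- a signed difference, not a distance
    return dist < 200 and dist > 0      -- fails whenever the unit is 'behind' the carrier
end

local function fixedOnCarrier(unitPos, carrierPos)
    local dist = unitPos - carrierPos
    return math.abs(dist) < 200         -- compare the magnitude instead
end

print(buggyOnCarrier(5, 7))  -- false: the unit is only 2 away, yet the check rejects it
print(fixedOnCarrier(5, 7))  -- true
```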

That is why I'm claiming that ChatGPT produces singularly unhelpful code, and that your time would be much better spent learning to code than entrusting the writing of code to a party novelty.

Edited by cfrag
Posted
1 hour ago, cfrag said:

I was asked by someone who was slightly taken aback by my flat-out dismissing AI as not understanding

I think this is one of the bigger issues we face. People don't understand that AI doesn't understand (even though it says it does). It's not cognitive. It doesn't think. I guess decades of sci-fi TV series and movies have conditioned us to expect more than what it really is, and when it says it understands, we trust it.

Don't get me wrong - what it's producing is darn impressive, and it can be very helpful. But the moment people make the mistake of thinking they're talking to a machine that actually understands (in the way we humans do), they've lost sight of what ChatGPT really is.

I found this to be a good intro video for those who seem to think AI is more than it is. I liked your example above as well, with the coding that was done. (I've seen it make up functions that it thinks should exist out of thin air, too.)

 

Posted

I understand now - I read this and used it for a bit. I was being vague by saying 'rewriting missions'; what I meant was: assuming a dynamic campaign needs some form of AI, maybe simply “you have blocked that route with infantry and tanks, reroute to the bridge - oh, that's been blown, let's send an Su-24 to clear that other route”. I assumed that with the deep pockets of Microsoft, Google or whoever releasing a free AI code base, could it be better than in-house ED code, and be used to speed up delivery of the dynamic campaign or other missions, with coders focused on “teaching it” and implementing it in DCS? That way, the AI code could be improved out of house, letting the team focus on the simulation of mil tactics.

On a similar note: I headed a team that used the MS Power BI suite to create a complex contractor onboarding system. I am no coder, but I designed what it needed to do to meet the business unit requirements. The Aussie Microsoft rep was so impressed with how we had used their tool that we presented it at a BI user meeting and shared it with MS.

Of course, these AI systems are new to the open market, so probably not right now.

It reminds me of WOPR: it calculates its response to your response, and so on, to work out how to win!
 

Just musing, not asking for it to be done, just thought it might be an interesting topic.

Posted
On 4/28/2023 at 10:45 PM, Dangerzone said:

I think this is one of the bigger issues we face. People don't understand that AI doesn't understand (even though it says it does). It's not cognitive. It doesn't think. I guess decades of sci-fi TV series and movies have conditioned us to expect more than what it really is, and when it says it understands, we trust it.

Don't get me wrong - what it's producing is darn impressive, and it can be very helpful. But the moment people make the mistake of thinking they're talking to a machine that actually understands (in the way we humans do), they've lost sight of what ChatGPT really is.

I found this to be a good intro video for those who seem to think AI is more than it is. I liked your example above as well, with the coding that was done. (I've seen it make up functions that it thinks should exist out of thin air, too.)

 

I asked it if I could drink the Holy Spirit; after a bit of to and fro, I asked: if I captured the Holy Spirit in a bag, could I ransom it to have God grant 3 wishes?

Its response cracked me up:

God doesn't grant 3 wishes like a genie; people seek God's divine intervention (or something) by praying. I suggest that instead of trying to capture the Holy Spirit in a bag, you pray to God!

Posted
7 hours ago, Mr_Burns said:

It reminds me of WOPR, it calculates their response to your response and so on to calculate how to win!

... just to conclude that the only way to win is not to play at all. How fun is that? No, we need better game AI than that 🙂

I think that yes, AI can have a great deal of input to make games better: suggest fun yet realistic responses to the current situation, make it balanced and challenging (instead of crushing you with unrelenting logic) - in short, make sure that the game remains fun. That would be a great help - people tend to enjoy engaging games over pure realism, and since AI can take into account so many more factors in a short time span than humans can, taking the fun route over the logical one could be a boon (as always, this should be an option).

Just don't name it 'Joshua'. 

Posted (edited)
7 hours ago, Mr_Burns said:

On a similar note, I headed a team that used MS Power BI suite to create a complex contractor onboarding system,

Last year I was involved in reviewing if and how AI can help with a (big) bank's employee hiring process. The idea was that AI would not be prejudiced and would give equal opportunity to every applicant based on their merits and history alone: their CV. These hires would not be tainted by racism, misogyny, prejudice, nepotism or other influences. The problem was that the AI was trained on the bank's history from the past 10 years of applications and hiring, which was tainted. Badly. Even worse, because nobody understood the 'black box' of decision-making inside the AI, we could not devise a good strategy for de-contaminating it, and the project is now on hold. Which I find somewhat disappointing, because I feel that the project itself is sound; it gets punished for what people did wrong in the past. But that's currently where we are: a system that we don't sufficiently understand, fed questionable sustenance. We need to control both before we can make significant progress in AI.

Edited by cfrag
Posted
8 hours ago, Mr_Burns said:

I understand now - I read this and used it for a bit. I was being vague by saying 'rewriting missions'; what I meant was: assuming a dynamic campaign needs some form of AI, maybe simply “you have blocked that route with infantry and tanks, reroute to the bridge - oh, that's been blown, let's send an Su-24 to clear that other route”

It would require an "AI" in the old-style RTS gaming sense: basically a logical algorithm system that appears to make sensible military decisions. That was achieved as far back as Falcon 4.0, albeit at great cost (the aforementioned sim was notoriously over budget). This does not require neural networks or anything fancy, though that doesn't change the fact that AI coding in this sense is notoriously difficult to do right. Now, given that ED is slowly working on the plane AI not acting like total morons, and what they've got so far is pretty neat, I'm assuming they do have a talented AI coder on the team, maybe even more than one. A DC will require another type of decision-making algorithm altogether, but I'm pretty sure it'll work out OK. If nothing else, it should be more competent than the actual humans running the Russian army. 🙂
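For illustration only, that kind of old-style decision logic can be as simple as a prioritized rule table - a heavily simplified toy sketch in Lua, not anything ED actually uses:

```lua
-- Hypothetical rule-based 'commander': walk a prioritized rule list
-- and take the first action whose condition holds for the situation.
-- Field names (routeBlocked, bridgeIntact) are invented for this sketch.
local rules = {
    { when = function(s) return s.routeBlocked and s.bridgeIntact end,
      act  = "reroute via bridge" },
    { when = function(s) return s.routeBlocked and not s.bridgeIntact end,
      act  = "send strike flight to clear alternate route" },
    { when = function(s) return true end,   -- fallback rule
      act  = "continue on planned route" },
}

local function decide(state)
    for _, rule in ipairs(rules) do
        if rule.when(state) then return rule.act end
    end
end

print(decide({ routeBlocked = true, bridgeIntact = false }))
-- -> send strike flight to clear alternate route
```

The point is that "appearing to make sensible decisions" only needs deterministic rules plus good authoring; the hard part, as noted above, is writing enough good rules, not the machinery.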

  • 1 month later...
Posted (edited)

Okay, another angle.  I'd like to use the LLMs for their ability to interface with natural speech.  I'd like to use a (currently fictitious) database of procedures like startup and missile shooting, and have the AI (ahem) respond with the proper steps - one step at a time, waiting for a go or no-go response.  Ultimately, I'd like to have this run in the background, waiting for me to ask a question verbally (usually in a panicked tone) and then respond verbally, but that's just a stretch goal.  I've been playing with Langchain and Flowise, but all I gots is ideas and time.  I need talent as well.  Anyone interested in this type of project?  DCS Jarvis?

image.jpeg

Edited by shortdood
  • 11 months later...
Posted (edited)

At this point in time, any script worth writing for a DCS mission (i.e. anything but the most trivial of scripts) is beyond the abilities of the current crop of chat bots. This may change in a couple of years. Even then you'll face a similar problem: if you don't understand enough about scripting (Lua) and DCS's scripting environment to check what the chat bot created for you, that code must not only be perfect, it must also come with perfect instructions on how to put it into a mission. Otherwise you wouldn't know how to use it.

Today it's IMHO easier and more time-efficient to simply learn the damn thing (DCS mission scripting) and let a chat bot be a chat bot. When there are mission-scripting bots, trained on creating DCS mission scripts, then there's a chance.

Edited by cfrag
Posted (edited)

I don't think ChatGPT could handle writing mission code or dynamically rewriting mission scripts for a campaign. It's good, but not that good.

Sure, it's not flawless and may occasionally throw unexpected responses, but overall, it's incredibly helpful in finding solutions to problems, especially when you need some guidance. I use ChatGPT for my airplane blog, where it helps me brainstorm ideas, draft posts, and polish my writing.

In addition to ChatGPT, I also use Word Hero for the content of my blog. It assists in generating engaging copy, saving time and effort. I pay $49 per month for it. That may sound like much, but my content is polished and resonates with my audience. Some AI writing tools cost $9-$12, but they don't do the job as well as the more expensive ones. There's a list of them at https://writingtools.co.uk/pricing.html that shows their features and prices.

Edited by Lee19822
Posted (edited)

I highly doubt it'll ever get there. DCS is too specific for a general-purpose model to work, and there's not enough content to train a special-purpose model. Also, DCS is an environment that's constantly changing as ED develops it. Humans can make use of documentation, changelogs and other descriptive sources, something which AI inherently can't do; it needs raw code that's 100% correct.

On 4/30/2023 at 11:19 AM, cfrag said:

Which I find somewhat disappointing because I feel that the project itself is sound, it gets punished for what people did wrong in the past. 

AI, ultimately, does nothing more than regurgitate what people did over the years. It learns from an astounding amount of data (more than any human could ever hope to use), but it's never going to go beyond it. This is the starkest indication that it's not actually thinking: a human could be told something like "try that again, but without racism". AI can't; it has no concept of fairness, racism, bias or anything like that. It can't critically assess past data and correct for known biases. GIGO still applies, and deciding what is garbage and what isn't is simply out of scope for the whole paradigm modern AI runs on.

LLMs are good at pretending, and that'll be their primary niche, IMO: interactivity and presentation. There are several places (way too many, in fact) where acting like a human is enough, thinking not required. It's great in those roles. 

Edited by Dragon1-1
Posted
8 hours ago, bertog said:

Sorry for reviving this thread, but has anyone succeeded in doing this?

Just put the effort in. It's not hard. It's scripting. Basic logic and typing are all you need.

Posted

Freaken microshaft...  Instead of this silly Recall thing they should make their Copilot (the proper name is already there! 😄 ) get involved in our gaming activities... jump in the back seat, monitor online/offline traffic and keep track of things,  monitor how DCS manages GPUs, cores and memory... yeah!

Posted

Even the dedicated coding bots like Copilot are pretty stupid. I don't recommend asking an LLM to "write me code that does x" unless you know what you are doing. I had to write a small bit of Python which involved some graphical output, and I had never done it before, so I got GPT to write it. This gave me an idea of what libraries and functions I had to use, but the code it spat out was hilariously bad. ChatGPT is actually quite helpful when constructing PowerShell commands, though (it is written by Microsoft, after all).
