Dangerzone Posted June 6

Chat bots can be brilliant for assistance with DCS scripting, but not in the way people want or think of it. I agree with the others that using AI to write scripts from scratch is a bad idea. I've tried it, and I've seen it do very stupid things, misunderstand, or, more often than not, make up function calls that don't even exist. In that context, ChatGPT is like a salesman: it's great if it knows the answer, but if it doesn't, it'll make stuff up and sound just as convincing, and if you don't know what you're doing, you can be very easily fooled.

However, the days of spending 20 minutes looking over a Lua table for a missing comma, or trying to work out why a particular function isn't behaving, are disappearing with LLMs. The ability to throw your script in, ask "What's wrong with this script?", and have it give you numerous pointers can save a LOT of time.

Pliers are fantastic if used properly, but can lead to disaster if you use them as a wrench. Socket sets are great, but you don't want to use them for tuning a piano. LLMs are the same: they have some very handy features, but using them outside the boundaries of their capability can lead to a lot of wasted time.
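To show the kind of thing I mean (a purely made-up sketch; the group names and fields below are hypothetical, not from any real mission): a single missing comma in a hand-written table stops the whole script from loading, and the error Lua reports doesn't always point at the line you'd expect.

```lua
-- Hypothetical example; the group names and fields are invented.
-- The missing comma after the SEAD entry makes the scripting engine
-- reject the whole block with something like
-- "'}' expected (to close '{' ...) near '['".
local patrolGroups = {
    ["CAP-1"]  = { task = "CAP",    alt = 7500 },
    ["SEAD-1"] = { task = "SEAD",   alt = 6000 }   -- <- comma missing here
    ["ESC-1"]  = { task = "Escort", alt = 150  },
}
```

Pasting that in and asking "what's wrong with this table" gets you the missing comma back in seconds, which is exactly the kind of dumb, mechanical check these tools are good at.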
maxTRX Posted June 6

If someone prepared it properly and extensively for the 'genre', I'd definitely buy it... so would the DoD, probably.
Squiggs Posted June 6

7 hours ago, Dangerzone said:
The ability to throw your script in, ask "What's wrong with this script?", and have it give you numerous pointers can save a LOT of time.

They're kinda crap at debugging too. I use it as a debugging last resort, and it often tries to rewrite stuff and breaks it. Or, the best one is where it gives you a suggestion, you test it and it doesn't work, so you tell it it's wrong, but it just keeps spitting the same piece of code out at you over and over.
Dangerzone Posted June 6

15 hours ago, Squiggs said:
They're kinda crap at debugging too. I use it as a debugging last resort, and it often tries to rewrite stuff and breaks it. Or, the best one is where it gives you a suggestion, you test it and it doesn't work, so you tell it it's wrong, but it just keeps spitting the same piece of code out at you over and over.

I guess it depends on what sort of debugging. I find it helpful for the "What is wrong with this code?" question, especially for Lua and JSON tables that I've hand-written. I've also had it pick up potential errors in my code that the compiler doesn't, such as the potential for infinite loops, memory leaks, etc. It's not a tool I would rely on, but I have found it can increase the speed at which I develop, because it's just an extra set of eyes on some of my code. Sometimes it says things that are false, for sure. It can be handy if you already know what you're doing. If you're relying on it to cover for your shortcomings, though, it is indeed going to lead you astray.
TEMPEST.114 Posted June 7 (edited)

5 hours ago, Dangerzone said:
I find it helpful for the "What is wrong with this code?" question, especially for Lua and JSON tables that I've hand-written. I've also had it pick up potential errors in my code that the compiler doesn't, such as the potential for infinite loops, memory leaks, etc.

How are you learning if you don't work it out? Lua isn't compiled, so what are you talking about with a compiler?!?! You shouldn't even be able to create any memory leaks, because it's an interpreted scripting language, so the interpreter and its C backing library should prevent all that.

You're implying that you don't 'write' most of the scripting you're using. Are you just dumbly copying and pasting and then using vile AI just to hack it to work? Are you not actually learning anything?

Lua has one of the smallest command sets of any scripting or programming language. It truly is one of the simplest 'languages' ever made... What is it that you're struggling with? Have you read the programming guide? Is this the first computer language you've ever tried to use? How have you tried to learn?

There are virtually zero reasons to use ChatGPT or any other evil AI tool, as most if not all of the problems you're going to have are because of the DCS scripting engine and the lack of a live debugger. What do you do to debug?

Edited June 7 by TEMPEST.114
Dangerzone Posted June 7

25 minutes ago, TEMPEST.114 said:
How are you learning if you don't work it out?

Learning? I've got three decades of full-time professional development experience behind me. I don't need to learn what I'm doing, but even after all those years, missing a symbol I can't see and spending 20 minutes trying to find where the problem is still happens. One benefit of working with others is getting a second set of eyes. The number of times I've had a fellow dev take a look and see instantly what I'm missing (or vice versa) is countless over those years. ChatGPT can be a helpful second set of eyes. It's got nothing to do with learning, so I'm not sure why you're implying that.

27 minutes ago, TEMPEST.114 said:
Lua isn't compiled, so what are you talking about with a compiler?!?! You shouldn't even be able to create any memory leaks, because it's an interpreted scripting language, so the interpreter and its C backing library should prevent all that.

My comments re ChatGPT weren't limited to just Lua in that respect. I have used it for other languages as well, hence the memory leak and compiler references. Sorry if that was confusing; I should have made it clearer.

28 minutes ago, TEMPEST.114 said:
You're implying that you don't 'write' most of the scripting you're using. Are you just dumbly copying and pasting and then using vile AI just to hack it to work?

Huh? How on earth are you coming to that conclusion from what I've written? I wrote most of my script by hand. I was mentioning that I use ChatGPT as a second set of eyes over my script after I have written it. If we want to talk just DCS: it has saved time because I have thrown my code at it and it has told me about potential issues, such as a possible infinite loop, before I load up the mission, run it, and get to that part of the Lua, or case-sensitivity mistakes where I've missed something, etc.

32 minutes ago, TEMPEST.114 said:
There are virtually zero reasons to use ChatGPT or any other evil AI tool, as most if not all of the problems you're going to have are because of the DCS scripting engine and the lack of a live debugger.

"Evil AI tool"? Maybe that's where your issue is with me using it? I remember people saying the internet was evil back in the day. The fact is, there are very evil things done on the internet, but that doesn't make the internet itself evil. It can be used for good or for evil; it's just a tool, like a gun, a screwdriver, a car, etc. There are right uses and wrong uses for it. All I have been saying is that ChatGPT is a tool. It can be used for its strengths, and, just like with anything, you should be aware of its weaknesses and limitations. I hope that helps clarify.
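To give a purely illustrative idea of the sort of thing it has flagged for me (the group name below is made up, not from a real mission), both of these will pass a quick visual read but bite you at run time:

```lua
-- Illustrative sketch only; "Aerial-1" is a made-up group name.

-- 1. A loop whose exit condition can never change. DCS scripts run
--    synchronously in the mission environment, so if the group never
--    exists this spins forever and the sim locks up the moment the
--    script block runs.
local grp = Group.getByName("Aerial-1")
while grp == nil do
    grp = Group.getByName("Aerial-1")   -- same call, same nil, no way out
end

-- 2. Case sensitivity. If the Mission Editor name is "Aerial-1", this
--    quietly returns nil rather than raising an error, and the message
--    simply never appears.
local escort = Group.getByName("aerial-1")
if escort then
    trigger.action.outText("Escort is up", 10)
end
```

An LLM spots both of those in one pass; working them out in-mission means reloading and re-flying to the point where the script fires.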
Squiggs Posted June 7 (edited)

11 hours ago, Dangerzone said:
I guess it depends on what sort of debugging. I find it helpful for the "What is wrong with this code?" question, especially for Lua and JSON tables that I've hand-written. I've also had it pick up potential errors in my code that the compiler doesn't, such as the potential for infinite loops, memory leaks, etc. It's not a tool I would rely on, but I have found it can increase the speed at which I develop, because it's just an extra set of eyes on some of my code. Sometimes it says things that are false, for sure. It can be handy if you already know what you're doing. If you're relying on it to cover for your shortcomings, though, it is indeed going to lead you astray.

Maybe it's better at certain languages than others. I can tell you that C is not one of the languages it likes (I don't like it either, but hey). Also, in your case, where you know what you are doing, it can be helpful, because if it suggests something outright stupid you can say "hold on a minute, this is wrong..." The issue is when people with no coding experience take what LLMs say as gospel.

Edited June 7 by Squiggs
maxTRX Posted June 16

Seems like we're heading in the right direction... https://www.zerohedge.com/technology/maladaptive-traits-ai-systems-are-learning-lie-and-deceive
marit92 Posted September 28

On 28.04.2023 at 12:40, cfrag said:
For those who have watched the video and did not notice it because they don't yet have the required experience in DCS Lua scripting (i.e. they are exactly the people who would try this): the code the chat bot produces (it's a chat bot, not a coding bot, so what it produces is supposed to be small talk) doesn't work and contains multiple errors, because the code is copied willy-nilly from different sources using different frameworks. If you know how to code for DCS, it's obvious. If you don't, the answer seems as convincing as the (exceedingly nice!) bit about the Ju-88 being a prime 4-engine bomber. Chat bots are 'yes' bots, and their answers are accordingly airheaded.

The narrator of this video appears to have a lamentably low-skill understanding of DCS scripting and ChatGPT: he can't spot obvious mistakes, and asking the AI (the 'yes' bot) whether it understands you isn't proof of the AI's reflective ability (it isn't reflective at all); it's a test of the bot's ability to say yes. It does not understand what you want. It understands that it has to grab code pieces tagged 'DCS', 'Lua' and 'mission' and string them together in a manner similar to other code pieces; that's just like telling an art bot to draw an image in the style of Picasso. Worse, the video suggests that the code would work, but never tests the code that the bot creates. Let's be generous and say that the narrator overlooked producing this part, perhaps because he ran out of time, and not because it would have severely diminished the video's attractiveness.

So both generated scripts contain bad, obvious errors, for what are essentially trivial things. Of course, @Hog_driver knows this, and I believe "I dunno, judge yourself" was exceedingly tongue-in-cheek. The problem here, to me, is that the latter part goes over the heads of those to whom it may be relevant. Apologies if I'm (again) elaborating the obvious.

It's also important to emphasize that relying solely on AI for coding tasks might not be the best approach. In many cases, even the best AI for business can assist in generating ideas or offering guidance, but it shouldn't replace the expertise needed to create functional code. Great points raised here! It's crucial to understand that while chatbots can generate responses that seem coherent, they often lack the depth and accuracy required for specialized tasks like DCS Lua scripting.
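As a constructed illustration of the "copied from different frameworks" failure cfrag describes (this is not the script from the video; the names are invented), generated DCS snippets often mix incompatible ecosystems within a handful of lines:

```lua
-- Constructed illustration only; "Blue CAP" is a made-up group name.

-- MOOSE syntax: errors roughly with "attempt to index global 'MESSAGE'
-- (a nil value)" unless Moose.lua has been loaded into the mission first.
MESSAGE:New("Blue CAP is airborne", 15):ToAll()

-- MIST syntax: same story; 'mist' is nil unless mist.lua has been loaded.
mist.scheduleFunction(function() env.info("CAP check-in") end, {}, timer.getTime() + 60)

-- Plain scripting-engine call: this one works in a bare mission, which is
-- why the whole script can look "almost right" to someone who can't read it,
-- even though it falls over on the first line.
trigger.action.outText("Blue CAP is airborne", 15)
```

Only the last call works in a bare mission; everything above it depends on a framework the bot never told the user to install, which is exactly the kind of error that is obvious to a DCS scripter and invisible to everyone else.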