

Everything posted by Someone
-
@VirusAM: The coding is easier than you might expect. You should try walking through some of the tutorials on the TensorFlow or PyTorch websites (do yourself a favor and pick PyTorch, though. TensorFlow is a nightmare). Re: "ML is just stats", yea, that's not an entirely unreasonable characterization, but it's worth pointing out that neural networks are where that generalization stops being true. There are entire classes of problems that were mostly intractable until less than 5 years ago, and were only cracked because of these approaches. Specifically, if you want to nerd out: LSTMs and Transformers. They are the dominant architectures right now in the NLP space. CNNs (convolutional neural nets) are an important part of the story too, as pertains to image recognition, and have applications elsewhere. You might keep your friend honest by pointing out that statisticians got precisely nowhere on these problems, and it wasn't for lack of effort. :)
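The coding really is approachable. As a taste of what those tutorials cover, here's the whole idea of supervised training boiled down to a dependency-free toy (my own sketch for illustration, not from any tutorial): a single sigmoid neuron learning the OR function by gradient descent. The frameworks' job is to automate exactly this loop for networks with millions of weights:

```python
import math
import random

random.seed(0)
X = [(0, 0), (0, 1), (1, 0), (1, 1)]   # inputs
y = [0, 1, 1, 1]                        # targets: the OR function

w1, w2, b = random.gauss(0, 1), random.gauss(0, 1), 0.0
lr = 0.5                                # learning rate
for _ in range(3000):                   # repeat: predict, measure error, nudge weights
    for (a, c), t in zip(X, y):
        p = 1 / (1 + math.exp(-(w1 * a + w2 * c + b)))   # sigmoid prediction
        g = p - t                                        # cross-entropy gradient
        w1 -= lr * g * a
        w2 -= lr * g * c
        b -= lr * g

preds = [int(1 / (1 + math.exp(-(w1 * a + w2 * c + b))) > 0.5) for a, c in X]
print(preds)   # the neuron has learned OR
```

PyTorch's autograd computes that gradient line for you, for arbitrarily deep networks; learning to let it do so is most of what the intro tutorials teach.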
-
I'm glad you found them useful!
-
@Aurelius, for you, perhaps a masters thesis. For me, 20 minutes well spent. You really should read it all though. Most of it isn't about you in particular. You might learn something. Otherwise, you have nothing else to say?
-
@Anklebiter, I'm glad I've found an audience. As to who is the fraud here, I think it should be pretty clear by the end of this post :) So, I'm going to split this response into two sections: 1) a final response to @Aurelius, and 2) a discussion of how ML/AI research actually works, the relationship between that research and open source software, and the nature of value as it pertains to machine learning applications. I promise that this will be more interesting than it sounds :)

Section 1

@Aurelius, my goodness, we've come a long way! Let's examine how we got here. First, @Aurelius posted some irrelevant gobbledygook about Hamiltonian operators, and discussed neural networks in a way that bore no resemblance to the way that researchers in the field discuss them. Given that he had previously represented himself as an electrical engineer running a media lab, I raised an eyebrow at his insistence that he is "a neural network specialist." (Note: again, not the way people discuss such things... a person who does this work would characterize themselves as an AI researcher, who may or may not employ neural networks in their work, but I digress.)

In response to his post, I pointed out the above: that @Aurelius is very clearly not a neural network specialist, even per his own previous statements. I also directed readers to a strange page on his website where he gives thanks to anonymous experts for assistance in the writing of a yet-to-be-completed book, but goes to otherwise great lengths to ensure that the logos of these anonymous experts' institutions are pictured.

Across the next few messages, @Aurelius:
- Calls me a troll
- Implies that I am simply angry about being shot down by his AI bot (I still find this part of the exchange just incredible. Life really can be stranger than fiction)
- Writes some weird stuff about moustaches, St. Jude's, and children's toys
- Claims that I could easily google my way to his identity (You can't. I tried.
Occam's razor would conclude that it's because he's, you know, not who he says he is)
- Crucially, fails to dispute any of the assertions of my original post: specifically, that he has represented himself as a member of an entirely different field, and has most certainly not produced the AI bot described in his first post.

Attempting to offer him a way out, I proposed the following mechanism for verification: 1) show the code that was used to train the bot, 2) disclose which university he is affiliated with, so that we can verify that he does in fact run a media lab, and 3) demonstrate that the bot exists. Preferably by slaughtering me in aerial combat.

@Aurelius then pivots: citing the need to protect valuable proprietary software, he implies I want to steal his code, weirdly introduces the name of my employer (while simultaneously questioning whether that company does in fact employ me), and asserts that unless I am Mark Zuckerberg (which, maybe I am :)), he will not share his work product.

Side note to readers: @Aurelius, not I, introduced the fact that I work at Google. He knows this because some time ago, he posted a request for qualified collaborators on yet-to-be-defined DCS software projects. I messaged him, privately, and by way of qualification shared my employer and first name. After the first exchange, I never heard back. It's also worth pointing out here that, despite what @Aurelius is trying to argue, I am not the person making extraordinary claims here, and my identity is not really relevant. Finally: my pseudonym is neither more nor less opaque than that of everyone else here.

At this point, we're a few thousand words in, and @Aurelius has still offered zero explanation or response to the central argument of my original post. Which... weird, right? I mean, who spills that much ink in self-defense, while not actually offering a defense?
And why go to such lengths to re-categorize what was originally described as a hobby project into something so secretive? And so valuable? When all he's gotta do is show the receipts? I don't know, but I'd expect that the commercial value of a DCS AI bot is exactly zero dollars to anyone not employed by Eagle Dynamics. There's a saying that extraordinary claims demand extraordinary evidence, and though I generally agree with this principle, I am only requesting ordinary evidence. And yet, we've got nothing.

So who the hell is this guy? Here's my guess. @Aurelius IS:
A) Probably an electrical engineer of some type. I read his review of the VKB joystick and he seems to have at least some expertise in that field. How much, I am not qualified to say.
B) Probably a staff member, though not a researcher or faculty member, at a university media lab. He's been pretty consistent on this point, and it would be a weird lie.

@Aurelius IS NOT:
A) A person who knows a damn thing about neural networks.

I recognize, however, that to some readers the discussion of sharing code, the value of such code, and the need to protect "proprietary information" may seem, on the surface at least, compelling. Following his blanket refusal to produce a shred of evidence, @Aurelius wrote a longer follow-up post to @Anklebiter wherein he characterized the state of AI research as "a bit like the guilds of medieval times in Western Europe" and stated that engineers at places like Apple will often steal a piece of code and then collect license fees from the derivative work. He also writes that the value of such models is a function of the "mathematics and physics behind the network and how it is implemented exactly", and that a company like Google would be interested in running such things on "large mainframes" or "supercomputers."
It would be difficult, if I were trying, to conjure a more misinformed view of how AI research works, the appropriateness of sharing code, the physical machines upon which such models run, and the nature of the US patent system. So this is going to be our focus in Section 2: disassembling the characterization of AI research, and the reasonableness of sharing one's code, as offered by @Aurelius.

Section 2

Let's start with a question: what, exactly, is a neural network? We know there's code involved, but what else? As it turns out, the process of training a neural network to predict something is somewhat straightforward. The reason for this is that, despite what @Aurelius writes regarding "exact implementations", virtually all AI researchers today use one of two open source frameworks (there are a few others, but these are the only two that really matter). They are:
a) TensorFlow, a project funded, open sourced, and given away for free by... you guessed it: Google. https://www.tensorflow.org/
b) PyTorch, a project funded, open sourced, and given away for free by... you guessed it again: Facebook! https://pytorch.org/

So when a researcher has an idea for a different type of neural network, the basic building blocks that they use to assemble it are very standardized. To be clear on this point: no one is re-implementing anything. That's the job of the framework. Every once in a while, though, someone comes along and advances the state of the art in the field. Which might sound a little bit like what @Aurelius is describing, and ya know, maybe he's got a point? Maybe researchers really are worried about their work being stolen and appropriated by others? The good news is that, to answer this question, we don't actually have to guess at all! We can just look at what happens when these advances are made, who makes them, and in what way they are disclosed! And you know what?
It turns out that everyone does the same thing:
1) Publish a paper in an academic journal describing what you did, why it's different, how good it is, and how much smarter than everyone else you, the author, are.
2) Release the code used to train the model.
3) Release the model itself.

But don't just take my word for it. Google researchers, in late 2018, made a giant advancement in a subfield of AI called NLP. NLP stands for Natural Language Processing, and is basically the field whose work allows Siri or Alexa to understand and answer your questions (note: I don't mean the part where the speech gets turned into text; that's called Speech Recognition). This new, groundbreaking model architecture was named BERT (a nerdy joke... the previous state-of-the-art model was named ELMo... so yea... AI researchers love Sesame Street, I guess). And you know what the researchers did? Immediately published a paper disclosing all the details, shared the model, and put the code on GitHub for anyone to see. You can look at it here: https://github.com/google-research/bert

And if you think that this advance maybe was less recent than Google was letting on, and that BERT was old news by the time they told the rest of the world about it, that would be wrong too. Here's an article from a few months ago announcing that Google Search engineers had JUST finished incorporating the BERT language model into the core search algorithm: https://www.theverge.com/2019/10/25/20931657/google-bert-search-context-algorithm-change-10-percent-langauge

To be clear about what happened here: Google spilled the beans about a game-changing AI advance a full year before their own co-workers could even put it to use! On purpose! And this is not a weird anomaly. It literally happens all the time.
A few months later, researchers at Facebook announced they'd improved upon the BERT model with a new variation called RoBERTa (I know, the names... ugh), and did the same thing: shared the code, the model, and a paper about the details. Code here: https://github.com/pytorch/fairseq/tree/master/examples/roberta

This is just how science works. But hey, I get it, nobody likes to give away valuable stuff for free, so maybe @Aurelius has a point, and sharing his code would be the same as giving away something really valuable. So how do we square this with the fact that Google and Facebook are constantly giving away their code? Aren't they profit-motivated businesses? Surely they aren't just doing this out of the goodness of their hearts! And you're right! They aren't! The thing is: the code is not worth all that much, and that's why they share it. What's valuable, and what these companies would not share, is the data that they used to train their models. And you will notice that I did NOT ask @Aurelius to share his data, either.

So what's the relationship between the data, the code, and the thing we call the neural network? Here's an analogy; it might seem a little weird at first, but stick with me. Imagine you are in a kitchen, and you'd like to cook yourself a hamburger. You know what a hamburger is and how one should taste, but you don't know how to make it. So you open up a cookbook, turn to the Hamburger Section, and take a look at the recipe. And the recipe tells you all sorts of information about how the end product should turn out: it should be juicy, topped with pickles, tomato, and lettuce, served on a bun, etc. Easy! That is the code: the recipe. Having the code/recipe does not mean you have a hamburger, though. To actually make the hamburger, you'll need the ingredients: the meat, tomato, lettuce, pickles, and bun. That's the data: the ingredients. To make the actual hamburger/neural net, you need both.
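The recipe/ingredients point can be made concrete with a toy sketch (my own illustration, plain Python, not anyone's real model): the exact same training code, fed two different datasets, produces two genuinely different models.

```python
import math
import random

def train(data, targets, steps=3000, lr=0.5):
    """One sigmoid neuron trained by gradient descent: this is the 'recipe'."""
    random.seed(0)
    w1, w2, b = random.gauss(0, 1), random.gauss(0, 1), 0.0
    for _ in range(steps):
        for (x1, x2), t in zip(data, targets):
            p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
            g = p - t   # gradient of the cross-entropy loss
            w1 -= lr * g * x1
            w2 -= lr * g * x2
            b -= lr * g
    return lambda x1, x2: int(1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b))) > 0.5)

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
model_or = train(X, [0, 1, 1, 1])    # "ingredients": OR data
model_and = train(X, [0, 0, 0, 1])   # identical recipe, AND data
print([model_or(*x) for x in X], [model_and(*x) for x in X])
```

Same `train` function both times; only the data changed, and you get two different "hamburgers". Sharing `train` (the code) gives away nothing about either dataset.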
And different data/ingredients, processed using the same recipe/code, will produce different-tasting hamburgers/neural nets. Or maybe totally different things entirely: imagine you substitute turkey for ground beef! You could still follow the hamburger recipe! So, the code is the commodity, and not worth all that much. AI code is shared freely, and written using the same frameworks. Only the data is sacred. To drive this point home, consider: if I posted the entire source code of Google Search right here, you would not be any closer to building a competing search engine, because to do so you would need the zillions of terabytes of data that Google has collected about web search over the past 15 years (not that I'd know... I would be in jail). And that data is not the code.

A few small notes, to wrap up:
1) Machine learning/AI models/neural networks do not run on "mainframes" or "supercomputers". All work in this field is done either using GPUs (literally the same kind of GPU as the ones you use to run DCS), or specialized processors designed exclusively for these applications, e.g. Google TPUs: https://cloud.google.com/tpu/ There are lots of technical reasons why this is the case (i.e., why we don't train these models using normal CPUs). If anyone cares to know more, PM me and I'm more than happy to elaborate.
2) Regarding @Aurelius's claim that, e.g., Apple regularly takes free code, modifies it some, patents it, and then "licenses" the final product: despite the fact that the US patent system is a horror show, you cannot patent math. And neural networks are math. And all of this code is written using open source frameworks, which are not patentable. If someone, anyone, can show me a single example of a company that has patented and is successfully licensing a neural network building block, I will, I dunno, eat my hat, or something.

...And that's all I've got. I hope this was at least somewhat interesting.
But even if it wasn't, in a world of internet charlatans, at least we've caught one. :)
-
Jay, it seems that you are answering my questions, in order: 1) No 2) No 3) No. Am I misunderstanding anything? You discuss open sourcing in strange terms. People open source code that was time-consuming to write all the time! That's the whole point! It's also worth pointing out that, if what you say is true, the valuable asset you have is the data collected from thousands of hours of simulation, not the code. I'm not asking for the data. So again, I don't see the concern. But anyway, just to be clear: are you saying that if I identify myself by my full name and perhaps a link to my LinkedIn profile, along with some code I wrote but have not previously published, that you will do the same?
-
So, this is generally not responsive to the points I made above, although it actually does a pretty good job of distracting from the actual argument at hand. Let's go through it point by point.

1. Hamiltonians: What you wrote is 100% not relevant to this discussion. This discussion is not about whether you know more about physics than I do. It's about whether you are misrepresenting yourself here.

2/3. Number of hours necessary to train a model / "Who says you have to iterate in the same environment": I'm combining these two because they are related, and they do a better job than I ever could of demonstrating your lack of experience in this field. But, good news, we have a way to resolve this: share the code! Just post it publicly on GitHub, or wherever you'd like.

4. The book: This one is interesting. You write that I am making an attack on you, and that it's very reasonable to give thanks to experts consulted in the course of writing a book. And that's right! It is normal! It's also not what I was pointing out in my comment, though. What I pointed out was the conspicuous use of university logos, and the very existence of a "thank you" page for a book that, as you point out, is not yet finished. The fact that the experts are not named makes it all the more conspicuous. Are you saying that you expect these people to come to your website to receive your gratitude? I dunno... feels weird to me. Regarding the end of this section, where you discuss St. Jude's, moustaches, catapults, and kids' toys: I have absolutely no idea what you're talking about here.

5. I am angry at you because your AI shot down my plane in DCS: Hahahaha, oh my goodness, what are you talking about? Up until this point, I thought you were actually doing a good job of skillfully deflecting my argument while casting me as an angry person making personal attacks on the internet (I am/was not). But here, we just go completely off the rails.
The thing, though, is that I have actually never wondered whether the enemy who killed me was an AI bot. Do you know why? Because it is always an AI bot that shoots me down! Literally, every time I get killed in DCS, I am killed by an AI bot. The reason I know this is that I play on GAW/PGAW, and that environment is PvE. So yea, I am for sure not angry at your AI bot for killing me. But even after you suggest that I have this weird ax to grind with you and your imaginary AI bot, you go on about... humans and dogfighting and the singularity? I mean, sure, I 100% agree with you... automation is coming for most things, that's true, including fighter jet pilots. That's what makes the field so exciting right now. It's what I work on every day, although unfortunately not fighter jet AI... that would be neat.

So, to wrap up, instead of writing more paragraphs in the forums of a Russian-made fighter jet simulation, let's just resolve this amicably. All I need are three things:
1) The code you wrote to train your model.
2) The name of the university where you are a faculty member. Given that academics are very social, you have no doubt published, your website solicits business, and it has a photo of your face, I assume this won't be a problem.
3) A private 1v1 session against the AI bot, so that it can slaughter me and I can gain appreciation for my coming robot overlords.

Looking forward to your response!
-
@Aurelius: What in god's name are you talking about? Two points:

1. "Hamiltonian operators" have absolutely nothing to do with the AI application described above. This is word salad, designed to impress. I am not fooled. You've written elsewhere that you teach electrical engineering. How is it that you now claim to teach "neural networks" to undergraduates? You write that you have "engineered several networks". This is not remotely the way that people working in this field describe their work. Before posting this comment, and in an effort to give you the benefit of the doubt, I reviewed the machine learning section of your website, where you describe "improving networks designed by others." This simply makes no sense. A person who actually does this work would never discuss improving a neural network, as if it were a physical object.

2. As an actual machine learning engineer, I find it utterly implausible that you have deployed trained neural networks to IL-2 or DCS multiplayer. Even more unlikely is your claim that the trained models exceeded or even approached human-level performance. And there's a simple reason why I know this: neither game supports the extremely high level of time compression needed to simulate the millions of flight hours necessary to train such a model without, ya know, waiting for millions of actual hours. What's amusing about this is that the original poster recognized this exact problem. And what was his solution? To train the model in Unity, which does have APIs for such things! But before you suggest that you did the same, I'll point out that while Unity might provide a physics environment robust enough to train a flight model, it does not provide any such equivalent for the complex reality of DCS combat. In short, what you are describing is impossible, although I doubt you knew it when you authored your comment.
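The time-compression objection is easy to sanity-check with back-of-envelope arithmetic. Assuming (my illustrative number, not a measured one) that an agent needs on the order of a million simulated flight hours:

```python
# Wall-clock time needed to gather 1,000,000 simulated flight hours
# at various time-compression factors. The 1M-hour figure is an assumption
# chosen purely for illustration.
needed_hours = 1_000_000
hours_per_year = 24 * 365
for speedup in (1, 10, 100):
    years = needed_hours / speedup / hours_per_year
    print(f"{speedup:>3}x compression: {years:8.1f} years")
```

Even at a hypothetical 100x faster than real time, you'd be simulating for over a year of wall-clock time; dedicated headless training environments sidestep this by running many lightweight simulator instances in parallel, which a retail game client can't do.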
Beyond the above, lest anyone think that I'm being unnecessarily mean-spirited and failing to give the benefit of the doubt, I'd direct you, the reader, to the absurd "technical assistance" page on @Aurelius's website: http://jaytheskepticalengineer.com/books/the-song-of-kiri/song-of-kiri-technical-assistance/. This is just a bridge too far. I mean, seriously, what kind of person devotes an entire page of a personal website to anonymous, but mostly self-congratulatory, statements of gratitude, while plastering said webpage with images from an assortment of institutions designed to impress? All of this in the context of a book that appears not to actually exist? I don't know who you are, and I don't much care, but people lying on the internet is grating. Do us all a favor and keep it off these forums.
-
So I guess what you’re trying to say is that you don’t know the answer to my question?
-
On the F/A-18 SA page, airborne contacts currently render under the rings displaying surface-to-air threats. As a result, in an environment with many SAM threats, it can be very difficult to see the enemy air contacts, as they are covered by the rings displaying ground-based threats. It would seem that rendering the airborne threat markers on top of the ground-based/SAM threats would make the display easier to read. I often find myself enabling DCLTR just to see enemy air contacts. Is this the correct behavior?
-
SZS : F/A-18 Super Hornet Project
Someone replied to SkateZilla's topic in Flyable/Drivable Mods for DCS World
I'm curious; could you explain the reasoning for the removal of cockpit/systems?
-
@Nineline All of this makes sense and is surely the best approach for the longer term. At the same time, the infamous "Kegetys VR" mod reliably delivers 50% performance improvements with minimal graphical quality compromises. Any chance you could officially support a similar approach, so that such a mod would be usable in multiplayer?
-
Ultimate setup for Buttkickers?
Someone replied to _outcast_'s topic in PC Hardware and Related Software
Superjoe, how did you mount the Buttkicker to the joystick and throttle? Could you share a picture? I've mounted my throttle with a Monstertech table mount, and am somewhat unsure of how I'd attach it.
-
Can someone explain to me what the effect of these modifications is? I understand that, in theory, you wouldn't want unnecessary re-scaling going on in the render pipeline, but I haven't seen anyone describe the outcome. Are people getting better frame rates after making these changes? Better clarity at the same FPS? Thanks to anyone who can clarify this for me.
-
For @ram0506 and others getting the fatal error message: as strange as this sounds, that is expected. You will have to dismiss the various error messages a number of times (>10) over perhaps 2-5 minutes. It's extremely strange, but if you keep dismissing the errors, DCS will eventually load. If you open Task Manager, you will be able to see that DCS.exe is running, despite the application window not being visible.
-
PointCTRL - Finger Mounted VR Controller
Someone replied to MilesD's topic in PC Hardware and Related Software
Firecat, if you could add me to the list as well, I'd be grateful. Thanks!