
gwheeler

Members
  • Posts: 7
  • Joined

  • Last visited

About gwheeler

  • Birthday 11/30/1977

Personal Information

  • Flight Simulators
    DCS World
    Falcon 4 (BMS)
    Arma 3
  • Location
    Paris, KY
  • Occupation
    Technical Consultant


  1. I am also exploring a browser-based client/server display and input system, so I've thought about this scenario some. One possibility I'm looking into involves using a dummy display dongle to create an extra virtual monitor on the simulation host machine, which receives the exported display renders. Then, something like OBS captures the virtual display contents and streams them over the network (I think people have made recent strides toward getting OBS to output an HLS stream that could be referenced directly in an HTML <video> tag). It may also be possible to use modern native browser APIs (as seen here: https://www.jitbit.com/screensharing) to share the contents of the virtual screen from the source PC to a browser running on the destination machine.

     In essence, the "client" PC's browser connects to a web server running on the DCS host and uses WebSockets for comms between the client UI and the server app, which bridges to DCS via DCS-BIOS or other direct Lua socket hooks. The client UI also accesses the video stream(s) from the server and, using various bits of CSS trickery, shows the bits and pieces of the source video in the correct areas. I've not really had much of a chance to experiment with either method yet, but I will keep an eye on this thread and update you if I do.
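The "CSS trickery" in the post above mostly boils down to a little arithmetic: scale the full video so the wanted region fills a fixed-size container, then slide it so the region's corner sits at the container's origin. A minimal sketch (function name, units, and the example numbers are all hypothetical; a real client would emit CSS width/height and translate values from this):

```python
def crop_transform(stream_w, stream_h, region, container_w, container_h):
    """Compute how to show one sub-rectangle of a streamed video.

    region = (x, y, w, h) in stream pixels.
    Returns (video_w, video_h, offset_x, offset_y): size the <video>
    element to (video_w, video_h) and shift it by the (negative)
    offsets inside an overflow-hidden container.
    """
    x, y, w, h = region
    scale_x = container_w / w          # stretch so the region fills the container
    scale_y = container_h / h
    video_w = stream_w * scale_x       # full video element size after scaling
    video_h = stream_h * scale_y
    offset_x = -x * scale_x            # slide the region's corner to (0, 0)
    offset_y = -y * scale_y
    return video_w, video_h, offset_x, offset_y

# Example: a 1920x1080 stream where one exported MFD occupies the
# rectangle (1280, 0, 640, 640), shown in a 320x320 on-screen container.
print(crop_transform(1920, 1080, (1280, 0, 640, 640), 320, 320))
# -> (960.0, 540.0, -640.0, 0.0)
```

The same numbers drop straight into CSS: set the video to 960x540 and translate it by (-640, 0) inside a 320x320 `overflow: hidden` div.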
  2. Where are these $5 cameras of which you speak? They'd have to be high-res and wide-angle to capture a console's worth of controls without getting into the complexity of an array with overlapping visual fields.
  3. My model isn't using QR codes for individual controls, but rather as a registration for the panel itself. QR codes in opposite corners identify the panel, and that ID maps to a profile containing information about the panel's dimensions and controls (control type, position, state-to-action mapping). Each panel has a local coordinate system where the QR represents an origin corner, and there's a global "console" representing the camera's total FOV. When you drop a panel into a slot in the console, the software registers which panel was placed, uses the QR codes to determine the bounding box within the camera FOV to associate with the panel, and uses the known panel dimensions to figure out the offset values needed to translate panel-relative component coordinates to absolute viewport-relative coordinates, re-generating a global lookup table. I think this would be necessary because if the camera is centered underneath the console, an inch in panel space will take up a different number of pixels in the camera's FOV depending on how close to the center the panel is located. This de-skewing is of course more complicated from a software standpoint, but once the functionality is established it would allow unlimited permutations of drop-in panel arrangements, and the only coordinates you have to worry about when mapping controls are panel-relative.

     As far as tracking the state of individual components, I think it comes down to simple visual indicators. Rotary switches could have a high-contrast arrow or similar on the underside that directly shows the panel-relative angle of the knob (albeit mirrored). Rotary encoder functionality could be achieved by comparing adjacent frames for state changes to determine the direction of rotation. Buttons could be designed with some sort of push-rod mechanical apparatus that reveals or changes a visual flag when pressed/latched (I could see a bolt rotating as its cams engage with spiral grooves in an outer housing as one design that would be compact in the horizontal plane). Ideally, production versions of controls would derive from a library of just a few universal 3D-printed mechanisms, with interchangeable caps/knobs/etc. Obviously none of this is practical for a cockpit that's only meant to closely replicate one specific platform, but the up-front complexity of the image-processing solution could yield a system that allows incredible versatility and the ability to change between fairly high-fidelity cockpit layouts within minutes.
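The encoder idea above (compare the marker angle in adjacent frames) has one subtlety worth writing down: the angle wraps at 360 degrees, so a raw difference of +350 is really a -10 degree turn. A minimal sketch, with hypothetical function names and a made-up jitter deadband (note the camera sees the underside mirrored, which in practice flips the sign convention):

```python
def angle_delta(prev_deg, curr_deg):
    """Smallest signed angular change from prev to curr, in (-180, 180]."""
    delta = (curr_deg - prev_deg) % 360.0
    if delta > 180.0:
        delta -= 360.0
    return delta

def encoder_direction(prev_deg, curr_deg, deadband=1.0):
    """'cw', 'ccw', or 'idle'; the deadband suppresses camera jitter."""
    d = angle_delta(prev_deg, curr_deg)
    if abs(d) <= deadband:
        return "idle"
    return "cw" if d > 0 else "ccw"

print(encoder_direction(358.0, 4.0))   # crossed 0 going clockwise -> 'cw'
print(encoder_direction(10.0, 350.0))  # -> 'ccw'
print(encoder_direction(90.0, 90.3))   # below deadband -> 'idle'
```

The same `angle_delta` doubles as the readout for rotary switches: accumulate it frame to frame and you get absolute knob position without ever losing track across the wrap.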
  4. So definitely underneath, then, where the lighting and other visual characteristics can be more uniform and tightly controlled to allow for faster image processing. If the panel ID is readable on the underside (as a QR code), it could reference a profile that specifies the panel dimensions, while the QR marker also functions as the origin point of the panel-local coordinate system. That way you can correct for perspective issues caused by the camera being relatively close to the panel. Prototyping new panel layouts could be really fast and cheap this way, because there's no wiring involved and you can experiment with component placement before producing a final version in fancier, more expensive materials like laser-engraved acrylic.
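The origin-QR-plus-known-dimensions idea maps neatly onto a small coordinate transform. A minimal sketch, assuming the two corner QR positions and panel size are given (all names and numbers are hypothetical): treating the mapping as a similarity transform (uniform scale plus rotation) via complex arithmetic. Real perspective de-skew for a close camera would need a full homography, e.g. OpenCV's `getPerspectiveTransform` with four reference points, but this shows the shape of the bookkeeping:

```python
def panel_to_camera(origin_px, corner_px, panel_w, panel_h, point):
    """Map a panel-local control position to camera pixels.

    origin_px / corner_px: (x, y) detected pixel positions of the origin
    QR and the diagonally opposite QR. panel_w / panel_h: the panel's
    physical size from its profile. point: (x, y) in panel-local units.
    """
    o = complex(*origin_px)
    c = complex(*corner_px)
    # One complex factor encodes both the px-per-unit scale and the
    # panel's rotation within the camera frame.
    z = (c - o) / complex(panel_w, panel_h)
    p = o + z * complex(*point)
    return (p.real, p.imag)

# Example: a 6x4 panel seen upright at 50 px/unit, origin QR at (100, 200).
print(panel_to_camera((100, 200), (400, 400), 6, 4, (3, 2)))
# approximately (250.0, 300.0)
```

Because everything hangs off the detected QR positions, re-seating a panel a few millimeters off (or slightly rotated) costs nothing: the transform is simply recomputed when the markers are next read.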
  5. Now I'm imagining a system where the buttons, knobs, and switches have no electrical connections AT ALL, but merely change some visible property on the underside of the panel, which is seen by a camera underneath the console pointing upward. For example: rotating a knob topside would rotate a marker at the other end of the shaft, and the camera captures the precise position of the control to within a degree or so. Modular sections could be swapped out, and they could be hot-pluggable, because each one could have a QR code or some such in the corner to cue the system as to which one was installed, mapping to a profile describing the inputs and their locations. Of course, it's probably possible to do this from above as well, with a CV solution that simply looks at the actual panel state the same way you see it. That would introduce a bit more latency, since it would be doing more image processing in a less controlled space, but man, the possibilities...
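The hot-plug flow described above is mostly bookkeeping: the QR payload is a panel ID that keys into a library of profiles (dimensions plus per-control type and panel-local position), and "plugging in" a panel just records which profile now occupies which console slot. A minimal sketch; every name, ID, and measurement here is invented for illustration:

```python
# Hypothetical profile library, keyed by the ID encoded in each panel's QR.
PANEL_PROFILES = {
    "UFC-01": {
        "size": (6.0, 4.0),  # panel width/height in inches (assumed units)
        "controls": {
            "master-arm": {"type": "toggle", "pos": (1.0, 2.0)},
            "brightness": {"type": "rotary", "pos": (4.5, 2.0)},
        },
    },
}

class Console:
    """Tracks which panel profile currently occupies each console slot."""

    def __init__(self):
        self.slots = {}  # slot name -> (panel_id, profile)

    def register(self, slot, qr_payload):
        """Called when the camera decodes a QR code in a console slot."""
        profile = PANEL_PROFILES.get(qr_payload)
        if profile is None:
            raise KeyError(f"unknown panel ID {qr_payload!r}")
        self.slots[slot] = (qr_payload, profile)
        return profile

    def controls_in(self, slot):
        """Names of the controls the installed panel provides."""
        panel_id, profile = self.slots[slot]
        return sorted(profile["controls"])

console = Console()
console.register("left-1", "UFC-01")
print(console.controls_in("left-1"))  # -> ['brightness', 'master-arm']
```

Unregistering on QR loss (panel pulled out) and rebuilding the global lookup table on each register call would complete the hot-plug behavior.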
  6. Update: FIXED the problem by using the NVIDIA control panel to set the primary Windows display back to the left monitor (which is probably the one plugged into the primary physical port on the video card).
  7. KEYWORDS
     --------------------------------------
     DCS Black Shark v1.0, GUI, displays, video, screen

     DESCRIPTION
     --------------------------------------
     When video mode is set to fullscreen, or resolution is set to the display's native resolution, the BS GUI jumps between displays each time the mouse is clicked anywhere within the GUI. Furthermore, it appears that I have to click on the same item on each display in turn in order for the click to actually register. Great fun for a practical joke, but rather irritating if one is trying to accomplish anything within one's lifespan. I suspect this has something to do with using DualView as the multi-monitor mode, and/or setting the display order in the NVIDIA control panel as opposed to using the order in which they are connected to the DVI ports.

     CONFIGURATION
     --------------------------------------
     Processor: Intel Q6600
     Motherboard: Gigabyte GA-P35-DS3L
     RAM: 8GB (4x2GB) G-Skill DDR2-800
     HDD1: WD 500GB 7200rpm SATA
     HDD2: WD 360GB 7200rpm SATA
     Display Adapter: EVGA NVIDIA 8800GT
     Display #1: Samsung Syncmaster 931B (1280x1024)
     Display #2: ViewSonic VP 912b (1280x1024)
     Controllers: TM Cougar, Logitech 3D Pro, TrackIR v4/TrackClip Pro
     OS: Vista Ultimate x64 SP1, DirectX 10