Gaussian Splatting - the future of 3D and DCS? Revolutionary photoreal 3D rendering, even in your browser


winghunter

Recommended Posts

Gaussian Splatting is a new breakthrough method for photorealistic rendering of point cloud data. It uses less GPU than conventional rendering while achieving photo-realism with ease. I'm not very deep into 3D engines, so I don't know whether the point cloud data means it's unusable for a game at the scale of DCS. But nevertheless, we can expect some great stuff from the rapid development of these applications.
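For the curious, the core of the technique is simple at heart: each splat is a Gaussian blob with a color and an opacity, and a pixel is shaded by blending the depth-sorted splats front-to-back. Here's a toy sketch in Python (my own simplification for illustration, not code from any real renderer; real implementations use anisotropic 2D covariances rather than a single sigma):

```python
import math

def gaussian_alpha(dx, dy, sigma, opacity):
    """Opacity contribution of one splat at pixel offset (dx, dy) from its center."""
    return opacity * math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def shade_pixel(splats, px, py):
    """Composite depth-sorted splats (near to far) at one pixel."""
    color, transmittance = [0.0, 0.0, 0.0], 1.0
    for s in sorted(splats, key=lambda s: s["depth"]):
        a = gaussian_alpha(px - s["x"], py - s["y"], s["sigma"], s["opacity"])
        for i in range(3):
            color[i] += transmittance * a * s["rgb"][i]
        transmittance *= 1.0 - a
        if transmittance < 1e-4:  # early exit once the pixel is effectively opaque
            break
    return color

# Two overlapping splats: a near red one and a farther blue one.
splats = [
    {"x": 0, "y": 0, "sigma": 2.0, "opacity": 0.9, "rgb": (1, 0, 0), "depth": 1},
    {"x": 1, "y": 0, "sigma": 2.0, "opacity": 0.9, "rgb": (0, 0, 1), "depth": 2},
]
print(shade_pixel(splats, 0, 0))  # mostly red: the nearer splat dominates
```

This per-pixel blend is "plain old math", which is part of why splat rendering is so cheap compared to neural approaches.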

Browser demos:

Hilltop church
https://gsplat.tech/hilltop-small-church/

3D Boat wreck
https://poly.cam/capture/35c1c8f4-a904-408f-8b25-90680fc1f143

3D Castle
https://poly.cam/capture/be67b0d3-38d1-4e09-b15a-cfe5ef76c2a4

Scout Helicopter

https://poly.cam/gaussian-splatting?capture=78c91eeb-f78d-4db2-af72-80fd321030ab

More scenes:

https://poly.cam/explore

 

Intro to Gaussian Splatting

 


Edited by winghunter
  • Like 1
  • Thanks 1

DCS Web Editor - New 3D Mission Editor for DCS that runs in your browser

DCS Web Viewer free browser based mission planner / viewer

dcs web editor new(2).png
4090 RTX, 13700KF, water cooled


Impressive, but ultimately, the thing with getting photorealistic art is how long it takes to create it. This is a rendering technique, but it does nothing for the modeler side of 3D work, which is the current bottleneck in DCS. Fancy rendering techniques won't magically change the Tu-95 model from Flanker 2.0 into a photorealistic one. That takes a lot of modeling work. Sure, maybe our rendering engine could use it at some point, but ultimately, it's only as good as the assets, and with good assets, DCS looks pretty darn impressive already.

  • Like 1

1 hour ago, Dragon1-1 said:

Impressive, but ultimately, the thing with getting photorealistic art is how long it takes to create it. This is a rendering technique, but it does nothing for the modeler side of 3D work, which is the current bottleneck in DCS. Fancy rendering techniques won't magically change the Tu-95 model from Flanker 2.0 into a photorealistic one. That takes a lot of modeling work. Sure, maybe our rendering engine could use it at some point, but ultimately, it's only as good as the assets, and with good assets, DCS looks pretty darn impressive already.

It's not just rendering, though. The method creates 3D scenes from photos or videos, so it's photogrammetry on steroids. That can already help modelers: e.g. you take a walk-around video of a jet, turn it into a full photorealistic 3D model, and import that into your modeling software for reference. Ultimately, it could replace the existing modeling and texturing pipelines entirely.

E.g. this Scout Helicopter was created from a few photos:

https://poly.cam/gaussian-splatting?capture=78c91eeb-f78d-4db2-af72-80fd321030ab

Or check out this APC:

https://poly.cam/capture/3417AAA9-AA38-45D6-A625-1644D22D3AE1

You can upload your own photos to create such scenes on that website. Hopefully more tools will come to clean up the results.


Edited by winghunter
  • Like 1


Those scenes are not DCS quality, though. If so, the question is, can those models be worked on further, and can it produce high quality results? As it stands, it doesn't seem so, plus it has a major weakness: hidden parts. Look at the underside of the tracks on that APC. A modeler would easily have extrapolated that the track links that we can't see look exactly like the ones we can. A computer doesn't know that and produces gibberish. I suppose a dedicated walkaround would be free of those issues, but it also has no idea which parts move and which don't, so the model is a slab. How hard would it be to separate those bits out and make them move?

This method can't replace the current pipelines. It can augment them, but in the end, photos only get you so far. As you said, photogrammetry on steroids: it'll have its limits a little further out, but you'll still need a modeler to clean the model up and make it look like the real thing. UE demos always look good, but that doesn't mean the results will be as good in reality. Aircraft are complex beasts, and a modeler with an understanding of mechanics can never be replaced by an algorithm, because there are just too many moving parts involved. All the scenes in the demo are very static. I can see some uses for making terrains, but that assumes you've got photos and documentation of how the area looks, or looked, in the period you are modeling.

  • Like 2

The examples are static, yes. But there's already a paper which turns a video of a man walking into an animated 3D splat.

It won't replace modelers, but the pipelines could all be different; this is just the beginning. One month after the paper and we're seeing new stuff almost daily. So in 1-2 years' time we may have the tools to build complex, high-quality models and clean up the results.

For now it's a nice reference if you can import these into 3ds Max rather than working from blueprints only.

  • Like 1


I don't think it's going to beat laser scanning, which is the current gold standard; you get sub-millimeter accuracy with that. It's not for everything, obviously, but I can't see it being displaced. I can see this being a replacement for photogrammetry, but not a whole lot more. Also, turning a video into an animation is one thing, but turning a static model into something that can move its control surfaces in an arbitrary way is another. Besides, you're unlikely to be allowed to fire up the hydraulics on a vintage F-100, for instance, so many of those things can't even be seen in motion for real. They have to be made from blueprints and static pictures. It remains to be seen how good a tool this will be in a modeler's toolbox; I suspect it'll have its place, but won't transform the way aircraft are made. It could be a lot of help with terrains, but we'll have to see.

  • Like 1

Although real-time reconstruction with such consistent quality is impressive, it's not a new thing. Gaussian Splatting and NeRFs (Neural Radiance Fields) have been used for photogrammetry for a while. It's not a replacement for photogrammetry; it's still AI photogrammetry. We could already do photorealistic scanning in controlled environments, it just took longer. It will only get faster and more available, requiring less and less user input, and less dedicated professional equipment, for a better result.

No matter how "photorealistic" it looks at the surface level, a lot of the "realism" comes directly from natural lighting and shading, because everything was sampled in real life. It only looks good in that specific lighting and environment. The 3D mesh underneath is still very ugly, and the texture has the natural lighting pre-baked in. If you took these objects straight out of the scene, they would not fit elsewhere. There are tasks that need to be done afterward, like cleaning the 3D mesh, or rescanning the object with a laser scanner, then resampling and retouching the texture under controlled lighting. Only then can you take those assets out of the original scanned scene and use them for more universal purposes.

People already use photogrammetry as part of their 3D scanning/modeling work, to save time or for better quality, before you even realize it: CGI, geospatial surveying, industry, anywhere you need to reconstruct a 3D thing. A lot of modern game modeling and scene creation is the result of modern 3D scanning and photogrammetry, followed by cleaning the mesh, retouching the texture, and importing it into the engine to fit the style.

3D scanning for mesh reconstruction is still superior to AI photogrammetry alone. A lot of your 3D cockpits and DCS models are 3D-scanned; the meshes are 👌.

But DCS lighting is not "photorealistic" and doesn't mimic nature, so the scanned textures must be retouched. You have to lose some "photorealism" to fit the DCS world.

If you want DCS to work this way, you need to go out and "scan the world".

Most DCS maps use satellite imagery at roughly 50 m resolution or worse; no photogrammetry is going to fill in the missing details.

And the price of sub-meter satellite imagery over 200,000 sq km, enough for a reasonably detailed map, isn't cheap. It could easily cost you millions to obtain the imagery alone from geospatial service providers and brokers, not to mention street-level images and copyright concerns. And a lot of labor is still needed to remove all the signs and such.

You want a new model right now? Good. Go find a Tu-95 on the ground, ask for clearance, drone-scan it, go back, and import the work. Let the software do its job, then retouch the 3D mesh and texture. There is still a lot of intense labor to be done, but it's definitely faster than sculpting manually from pure polygons. You could do it now, import the result into 3ds Max, convert it to EDM, and you'd have a new Tu-95 model in DCS.

It just means someone still has to do all of that. It's still labor. And a lot of textures, probably gigabytes of them, so it can finally be photorealistic (in that very specific lighting).


Edited by Insonia
  • Like 1

Gaussian Splatting is not AI-based, though; that's why it's 10x faster than NeRFs. It's plain old math.

You can't put an F-14 into a 3D scanner, but you can now create a 3D scan just from images or photos. This puts the workflow in the hands of everyone, not just people with commercial GPU racks or 3D scanners.

Also true; it may be better at scanning large objects which are too large for a 3D scanner. E.g. turning drone footage into terrain with houses, trees etc. should be useful.

As for lighting and shadows, yes, these are baked (for now). But development has just started; we will see tools for removing light/shadows from GS scenes to light them dynamically.


Edited by winghunter
  • Like 1


Another advantage is that the method doesn't require LODs: it can render fine detail at roughly constant GPU cost. It's currently limited mainly by VRAM, which leaves a lot of room for optimization in the coming months. Still early days, but it seems to have tremendous potential.
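A rough back-of-envelope shows why VRAM is the constraint. Assuming the uncompressed per-splat layout from the original 3DGS paper's exports (fp32 floats: 3 position, 3 scale, 4 rotation, 1 opacity, 48 spherical-harmonic color coefficients; real viewers compress this heavily, so treat these as upper bounds):

```python
# Uncompressed per-splat layout, roughly matching the original 3DGS export.
FLOATS_PER_SPLAT = 3 + 3 + 4 + 1 + 48   # position, scale, rotation, opacity, SH color
BYTES_PER_SPLAT = FLOATS_PER_SPLAT * 4  # fp32

def scene_size_gb(num_splats):
    """Approximate raw scene size in gigabytes."""
    return num_splats * BYTES_PER_SPLAT / 1e9

for n in (1_000_000, 10_000_000, 100_000_000):
    print(f"{n:>11,} splats ~ {scene_size_gb(n):5.2f} GB")
```

At ~236 bytes per splat, a 100-million-splat scene is already ~24 GB uncompressed, which is why compression and streaming are active research topics.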

If we had a gigabyte scene running locally, instead of a megabyte scene running in the browser, the potential would be more evident.


Edited by winghunter


It's not usable for DCS. Not at all. Gaussian Splatting is a neat way to view point clouds. You cannot relight it, you cannot interact with it, only move in/around it.
If you want to use the point cloud data for interactive experiences, you have to extract meshes and texture data, and with that you are back to the workflow that is already well established and used by many DCS developers.

You could use GS to explore spaces or objects in a VR headset for example, but you could only look at it or move through it. You cannot use it in the way you are probably thinking about. Currently there isn't really an alternative to using textured meshes for truly interactive experiences.


Edited by twistking
  • Like 1

I'm aware that DCS currently can't render splats. Other game engines like Unity or Unreal already have plugins, though. It can still be useful for DCS content / terrain creation.

Check this out: automatically turning a 2D image into a 3D textured object. Could be useful for small map objects etc.

 


For someone who is

On 10/1/2023 at 11:47 AM, winghunter said:

... not very deep into 3D engines ...

you are quite persevering. I have already tried to explain, and I don't know what to add.

Photogrammetry is already used in games and is already used in DCS. Maybe GS will allow artists to generate a mesh a little bit quicker. So what?
The interesting aspect of GS is that you can visualize point clouds in a photorealistic way WITHOUT needing to create a mesh from them. But this way of rendering a point cloud is utterly pointless (ha!) for DCS.
It is cool that GS may improve photogrammetry workflows by making them compute quicker, but in the end it's a tool that a dev can decide to utilize or not, and it would be utilized very early in the production pipeline. Wishing for GS in DCS is exactly the same as wishing for the newest version of Photoshop, or Blender, or whatever in DCS. I would think that the devs will always choose the tools best suited to the problems they face. Displaying point clouds in DCS via Gaussian Splatting makes absolutely no sense, and I can't see any scenario where it would make sense in the future.

  • Like 2

I agree, it has no use in the "current" rendering pipeline in DCS, hence why I was referring to the "future".

Aka, you'd have to write a new graphics engine to take full advantage. GS is an ultra-efficient rendering technique, requiring fewer GPU cycles per frame than traditional triangles while allowing more visual fidelity. It also requires no LODs.

However, for a game at the scale of DCS you'd have to figure out an efficient method for streaming all that data, and that's where I wonder how much this can actually scale. Research in this area is progressing really fast, but there is still a lot to be done before traditional polygon pipelines could be replaced.


Edited by winghunter


2 hours ago, winghunter said:

I agree, it has no use in the "current" rendering pipeline in DCS, hence why I was referring to the "future".

Aka, you'd have to write a new graphics engine to take full advantage. GS is an ultra-efficient rendering technique, requiring fewer GPU cycles per frame than traditional triangles while allowing more visual fidelity. It also requires no LODs.

However, for a game at the scale of DCS you'd have to figure out an efficient method for streaming all that data, and that's where I wonder how much this can actually scale. Research in this area is progressing really fast, but there is still a lot to be done before traditional polygon pipelines could be replaced.

 

Sure, I agree that research is progressing fast and that this particular technique is super interesting, but I just don't expect it to ever be usable for real-time lighting. The thing is, it does not need to be usable for real-time lighting to be impressive, so I'm not arguing against the merits of Gaussian Splatting. But to be usable in modern games with a high degree of interactivity, you need to be able to (re)light the game objects, and you need some form of geometry for physics and other interaction. Gaussian Splatting does not provide either (it's just a visualisation of a point cloud), so I really don't see that future. Especially now that real-time graphics are on the verge of being "solved" with path tracing and the like.

  • Like 2

10 hours ago, winghunter said:

while allowing for more visual fidelity

What? Where? Unless you put the time to clean everything up it's a mess.

Remember that apart from static scenes and models, DCS has to have PBR textures, normals, IR textures, radar reflection, DM on models, frequently with moving parts/animations.


Edited by draconus

🖥️ Win10  i7-10700KF  32GB  RTX3060   🥽 Rift S   🕹️ T16000M  TWCS  TFRP   ✈️ FC3  F-14A/B  F-15E   ⚙️ CA   🚢 SC   🌐 NTTR  PG  Syria


He doesn't seem to be aware of the limitations of this technique, which I pointed out in the second post in this thread. This is useful, but only for model creation; as a rendering method for DCS it's useless. While geometry for physics interactions is easily handled with collision meshes (which can be vastly simpler than visible geometry), the inability to respond to light is a big problem that makes this technique unsuitable even for rendering terrain, which is possibly the only place a point cloud could be used in DCS.

If it ever becomes possible to illuminate it dynamically, I can see it getting consideration for rendering static parts of the terrain. It's efficient, so high view distances and high detail up close could be realized.


Edited by Dragon1-1

12 hours ago, draconus said:

What? Where? Unless you put the time to clean everything up it's a mess.

Remember that apart from static scenes and models, DCS has to have PBR textures, normals, IR textures, radar reflection, DM on models, frequently with moving parts/animations.

 

Well, you won't need normal maps for GS 😉. Everything else you mentioned is game-engine territory rather than rendering.

Sure, you may also need collision models for the physics engine, but those are typically low-poly and separate from the rendering side.

GS is re-inventing step 0 of the rendering pipeline, as mentioned in the video I posted. It doesn't mean that it's the end of the pipeline; you can build all the other steps, like animation, effects and lighting, on top of it.

But still, my question is: what are the scaling limitations of having the entire DCS map as splat data? Is there a limit which can't be overcome by the hardware advances of the next 5-10 years? E.g. it could turn out to be limited by disk I/O rather than the GPU, as it renders efficiently but ultimately needs to stream more data. Assuming 1 TB per map, it's all about the efficiency of the streaming algorithm. Ideally you want to load only the splats which are visible to the player, but determining those is going to be tricky. It's different from loading an entire terrain texture tile into the GPU; it has to be more granular than that to be efficient for GS.
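To sketch what "more granular" streaming could look like, here's a hypothetical chunk-culling pass: the map is partitioned into grid cells of splats, and only cells within range and inside a crude view cone get queued for loading. All names, thresholds, and the cell layout are illustrative, not taken from any real engine:

```python
import math

def visible_cells(cam_pos, cam_dir, cells, max_dist=5000.0, fov_deg=90.0):
    """Return cell IDs passing a crude distance + view-cone test.

    cam_dir must be a unit vector; cells maps cell_id -> cell center (x, y, z).
    """
    cos_half_fov = math.cos(math.radians(fov_deg) / 2)
    out = []
    for cell_id, center in cells.items():
        dx = [c - p for c, p in zip(center, cam_pos)]
        dist = math.sqrt(sum(d * d for d in dx))
        if dist > max_dist:
            continue  # too far away to stream yet
        if dist > 0 and sum(d * f for d, f in zip(dx, cam_dir)) / dist < cos_half_fov:
            continue  # outside the view cone
        out.append(cell_id)
    return out

# One cell ahead of the camera, one far away, one behind.
cells = {(0, 0): (100.0, 0.0, 0.0),
         (1, 0): (6000.0, 0.0, 0.0),
         (0, 1): (-100.0, 0.0, 0.0)}
print(visible_cells((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), cells))  # only (0, 0) survives
```

A real system would also need per-cell LOD-like splat subsets and prefetching along the flight path, but the per-cell test is the granular piece the paragraph above is asking about.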

And how to get map data with this much detail is an entirely different topic, but I believe it's one that can be solved.

And ultimately the good old triangles may still win the race to real-time photorealism, with techniques like Nanite and Lumen, as mentioned by others.


Edited by winghunter
  • Like 1


23 minutes ago, winghunter said:

It doesn't mean that it's the end of the pipeline, you can build all the other steps like animation, effects and lighting on top of it.

No, you can't, at least not the way it's been implemented so far. This is what we've been trying to tell you. You can't do these things with splats, they're neither polygons nor voxels. Triangles are doing a good job already, and stuff like Nanite and Lumen is way more integrable with existing meshes and model data. Plus, plain old quality modeling, of course. If you look at HB's latest Phantom screenshots, you could mistake them for a photo. 

Right now the big challenge for triangles is performance, followed by the relative difficulty of making a photorealistic model in the first place. Improved photogrammetry can help slightly with the latter.


26 minutes ago, Dragon1-1 said:

No, you can't, at least not the way it's been implemented so far.

Correct, but my point is that it's not theoretically impossible, right?

You'd essentially have to reverse-calculate the effects of a single light source (the sun) and how it bounces around and affects each splat. Sounds like a great task for some ML model, similar to the tools that remove baked lighting from textures.

This sounds like 1-2 years away to me, not like something that is fundamentally impossible. And once you have such an unlit splat scene, you can light it any way you want by changing the HSL of each splat, right?
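To make that concrete, here's a purely hypothetical sketch of the "unlit splat" idea: if some future ML tool estimated the baked-in irradiance per splat, you could divide it out to recover an approximate albedo and then apply new lighting. Every function and value here is invented for illustration; as far as I know no such GS relighting tool exists yet:

```python
def relight(baked_rgb, baked_irradiance, new_irradiance):
    """Divide out the captured lighting, then apply new lighting per channel."""
    albedo = [c / max(i, 1e-6) for c, i in zip(baked_rgb, baked_irradiance)]
    return [min(a * n, 1.0) for a, n in zip(albedo, new_irradiance)]  # clamp to 1.0

# A splat captured under warm afternoon light, re-lit under bluish dusk light:
print(relight([0.8, 0.6, 0.4], [1.0, 0.9, 0.7], [0.3, 0.35, 0.5]))
```

The hard part is of course estimating `baked_irradiance` in the first place, since shadows and bounce light are entangled with the surface color in the captured data.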


Edited by winghunter
  • Like 1


7 minutes ago, winghunter said:

And once you have such an unlit splat scene you can light it any way you want by changing the HSL of each splat, right ?

I'm not quite sure it works that way. A point cloud is not a solid mesh, and as such, many things that work with meshes won't work here. How would it cast shadows, for instance? I strongly suspect that any lighting solution that would work would also hog performance so much it'd be comparable to the old approach, if it's even possible at all. Remember, lighting is not just light but also shadows, and as it is, figuring out how those shadows should look takes quite a bit of processing power.

Also, in most photos we're talking more than a single light source. You'd have to very precisely know how the splats were originally lit, and that's not a trivial question. So far, you've shown a lot of marketing material designed to showcase good-looking best case scenarios. Those are not real world applications. Stop looking at ads, they never tell you the whole story. Give me a video made by someone who doesn't like this technique (because then all its problems will be highlighted instead). 

  • Like 2

Makes sense, I would have thought something like raytracing is possible.

Or maybe one of the UE5 Lumen tricks (quite performant) adapted for GS. Or do you think it would require an entirely new method?
 

Quote

By default, Lumen uses software ray-tracing (doesn’t utilize RT cores/accelerators), a highly optimized form of it. It uses multiple forms of ray-tracing including screen-tracing (SSRT), Signed Distance Fields (SDFs), and Mesh Distance Fields (MDFs) in parallel to calculate the global illumination of the scene depending on the objects, their distance from the screen, and certain other factors.

 
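For context, the SDF ray-tracing the quote mentions can be illustrated with a generic sphere-tracing loop: march along the ray by the distance the field reports until you touch a surface. This is the textbook technique, not Lumen's actual implementation:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside, zero on the surface."""
    return math.dist(p, center) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4):
    """March along a unit-direction ray by the SDF value until a surface is hit."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t  # hit: distance along the ray
        t += d        # safe step: the SDF guarantees no surface is closer than d
        if t > 100.0:
            break
    return None  # miss

print(sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf))  # ≈ 4.0
```

Whether a comparable distance-field trick could be built for a splat field, which has no well-defined surface, is exactly the open question in this thread.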


Edited by winghunter
  • Like 1


I love this hilltop church

https://gsplat.tech/hilltop-small-church/

and this Nike shoe, where the fabric is see-through

https://gsplat.tech/nike-next/


Edited by winghunter

