Additional heightmap compression.
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Additional heightmap compression. Wed Aug 07, 2013 6:32 am
Warning: random thoughts below. Don't worry if it doesn't all make sense.

I don't know whether this is even worth it or not, but I saw something in the latest video out of Voxel Farm that caught my eye. First of all, the amount of difference the texturing makes is astounding, but the main thing here is how simple the geometry actually looks when the texture is stripped away. Sure, there are a couple of places with high detail, but for the most part it is smooth rolling hills with gradual variations in base colors and heights. To me it looks like a prime target for wavelet-style compression, the sort you might find in jpeg compression. Rather than represent a block of heights or colors as absolute values, you apply multiple patterns at different magnitudes which, when combined, can reproduce the original perfectly. Taken naively, this new format takes just as much information to reproduce, but you can do a trick which shifts all the strong pattern responses to one side and the weak ones to the opposite end. Once you have that, you can quantize the table to a smaller space and round the least visible details to zero, depending on how much compression you need. Even a minor reduction in quality, say to 85-90%, can result in a massive improvement in storage size.

If the distant terrain heightmaps could be stored like this, natural variations in height (natural terrain is complex but decidedly non-random) could be represented with much less data than otherwise possible, making it possible to stream more data onto a planet without exploding the memory footprint.
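Since the quantization trick is the heart of this, here is a minimal, hypothetical sketch of it on a single 8x8 tile of heights. (Strictly speaking, jpeg uses a block DCT rather than wavelets, but the shift-the-strong-responses-to-one-corner-and-round-the-rest-to-zero idea is the same.) The tile data, the quantizer step, and all names are made up for illustration:

- Code:

#include <cmath>
#include <cstdio>

const int N = 8;
const float PI = 3.14159265358979f;

// Naive 2D DCT-II of one N x N block of heights. The strong low-frequency
// responses land near out[0][0]; the weak high-frequency ones end up in the
// far corner, which is exactly what quantization rounds away.
void dct2d(const float in[N][N], float out[N][N]) {
    for (int u = 0; u < N; ++u)
    for (int v = 0; v < N; ++v) {
        float sum = 0.0f;
        for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            sum += in[x][y]
                 * std::cos((2 * x + 1) * u * PI / (2 * N))
                 * std::cos((2 * y + 1) * v * PI / (2 * N));
        float cu = (u == 0) ? std::sqrt(0.5f) : 1.0f;
        float cv = (v == 0) ? std::sqrt(0.5f) : 1.0f;
        out[u][v] = 0.25f * cu * cv * sum;
    }
}

int main() {
    // Smooth rolling-hill heights: the kind of tile that compresses well.
    float tile[N][N], coeff[N][N];
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            tile[x][y] = 60.0f + 4.0f * std::sin(x * 0.4f) + 3.0f * std::cos(y * 0.5f);

    dct2d(tile, coeff);

    // Quantize: divide by a step size and round. Most coefficients collapse
    // to zero and cost (almost) nothing to store.
    const float q = 2.0f;
    int nonzero = 0;
    for (int u = 0; u < N; ++u)
        for (int v = 0; v < N; ++v)
            if (std::lround(coeff[u][v] / q) != 0) ++nonzero;

    std::printf("%d of %d coefficients survive quantization\n", nonzero, N * N);
}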
The question, though, is whether this would truly be the case. The downside of jpegs is the workspace required to process them: tons of lookup tables and buffers for the many tiers of decompression and decoding which must be done. Plus, it generally needs to be decoded to a raw format before it can actually be used for anything. Questions like whether it is viable to dynamically decompress the terrain on the GPU, whether this will even save memory (since it still needs to be unpacked), and what sort of impact decompression will have on performance remain to be seen. I haven't given it a whole lot of thought yet, and it might be foolish to even bother trying to do it on the GPU.

On the other hand, it may be perfect for a new coarse-level terrain layer. Google Maps uses tiles of jpg images at different levels of detail to show aerial photographs. We could use the same method to stream not only surface images of the terrain, but also the heightmap and all the different layers and materials. The coarsest layers won't do a terribly accurate job of portraying the terrain details, but thanks to the nature of the jpg, the errors show up in the places we are least likely to notice them. Plus, we aren't restricted to using the native jpg format. What I'd really like to try is a continuous pyramid of iterative improvements, where each level is the difference from the levels above it. Jpg consists of 8x8 blocks of pixels where all the compression takes place; the only global values are the decoder tables for the blocks. Because of this, any update in a tree structure only affects a local area of the map. You could also add multiple layers at whatever level of detail they become significant, creating a natural way for the renderer to decide whether to fill in 3D gaps or show them, which shows up only when that detail becomes prominent for the target resolution.

There is also the matter of expanding heightmaps to 3D, which I still haven't fully worked out. The easiest thing would be to mark various cubes in a virtual oct-tree as having different orientations, each with one heightmap assigned to it, based on whichever orientation produces the least error, but I think there ought to be a better way that doesn't make such discriminations. Again, it all depends on whether the coarse structure can be safely used as an acceleration structure. In the past I had been customizing lower mips of the heightmaps to store the max of the area each one represented, so that the sample ray always knows for sure whether it is safe to make a large hop. Basing the mips on something which is usable as a standalone heightmap may give false negatives when combined with a more accurate heightmap.

I keep coming back to the tree concept, in which each layer links to other layers in the texture rather than at redundant vertices. The shader should know how deep it needs to go, and should be able to operate whether or not that level of detail is present. Furthermore, if there is a max delta between resolution layers, we can make assumptions about safe buffer distances in the acceleration structure without relying on nearly redundant data, and we can further compress the representation of the delta layers based on that variation. If it is, say, ±4 scale blocks, you only need 3 bits to encode the next layer, and you know that a height of +4 above the base layer is definitely safe to traverse. If the deviance is greater than that, the layer can be split in two at a coarser resolution until everything is represented. This starts to merge back into the concept of oct-trees once distilled, so I am not sure whether it is a better idea or not. One of the nice facets of the heightmap style is that all the lower resolution data is directly available as mips without pointer magic. That wouldn't continue to be the case if the texture contained the tree structure, but without it, the layers must be rendered independently.

If we end up using cubes as guidelines, I would like to make the rays interpolate naturally across the same texture resolution regardless of the size of the initial cube. The idea here is that the cube determines what octave the heightmap represents and that the rays know when to bail out. The problem comes from the fact that the rays are supposed to be in world coordinates and aren't aware of the scale of the polygon which spawned them, only their position on its surface. Maybe if all the blocks of a single scale are grouped together, a different shader can be used for each. An 8192-grid planet would have 13 possible octaves, and most likely only a handful would be in use at any given time. Given that the highest resolution heightmaps will appear closest to the camera, they could be drawn first and worked backwards from there, with the CPU carefully pruning the list as the camera moves.

Just realized multiple layers can also overlap. Normally, you would have z-fighting if two polygons overlapped perfectly, due to imperfections in the depth calculation, but so long as the steps are identical and the camera doesn't move, two consecutive layers will match perfectly, and you can make the ray bail out at depth > depthbuffer or >= depthbuffer, depending on which layer you want on top. They don't even need to be heightmaps in the same orientation. Holes would be more difficult, but not impossible.

So that's some food for thought. I spent too much time on this, so that's all for now. I plan to revisit this soon.
At first glance, there is nothing stopping the jpeg bit from being useful. The heightmap part needs some more planning to account for the new ideas.
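One piece of the above that is concrete enough to sketch already is the max-mip acceleration structure: each lower mip stores the max of the area it covers, so a sample ray that is above that max knows a hop across the whole cell is safe. A hypothetical CPU-side builder (layout and names are mine, not settled design):

- Code:

#include <algorithm>
#include <vector>

// Builds a chain of max-mips from a square heightmap (size must be a power of two).
std::vector<std::vector<float>> buildMaxMips(const std::vector<float>& base, int size) {
    std::vector<std::vector<float>> mips { base };
    while (size > 1) {
        const std::vector<float>& prev = mips.back();
        int half = size / 2;
        std::vector<float> next(half * half);
        for (int y = 0; y < half; ++y)
            for (int x = 0; x < half; ++x) {
                // Max of the 2x2 area below: a ray above this value can
                // safely stride across the whole cell.
                float a = prev[(2 * y) * size + (2 * x)];
                float b = prev[(2 * y) * size + (2 * x + 1)];
                float c = prev[(2 * y + 1) * size + (2 * x)];
                float d = prev[(2 * y + 1) * size + (2 * x + 1)];
                next[y * half + x] = std::max(std::max(a, b), std::max(c, d));
            }
        mips.push_back(std::move(next));
        size = half;
    }
    return mips;  // mips[0] is full resolution; mips.back() is a single global max
}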
Dux Tell31 Recruit
Posts : 379 Join date : 2012-01-05 Location : That is out of the question
Subject: Re: Additional heightmap compression. Mon Aug 12, 2013 9:58 pm
Wow, that made me stop and think! Using the jpg-style approach to viewing planets seems cool. But can you clarify what your "octaves" are? From your description I think they're sort of like the layers of the atmosphere; is that right? And why would there be 13 of them?
As always, thanks for keeping us in the loop, Fr0stbyte.
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Wed Aug 14, 2013 2:55 pm
Octaves can be thought of as something like an order of magnitude. In music, you go up an octave by doubling the frequency of the pitch. In a quad-tree structure, an octave is a square made up of a 2x2 set of smaller squares, and that in turn makes up a 2x2 square of the next octave. In oct-trees, it is 2x2x2.
Another place you see it is in noise for terrain generation. Here, you are applying the same type of noise at different octaves, producing small, high-frequency variations and large, low-frequency variations. For instance, an island might be a high point in one of the lower octaves, but the higher octaves give it tiny variations when you zoom in. By tweaking each octave individually, you can get different effects. Technically, Minecraft uses 3D noise over (I think) 5 octaves, which translates to a density: past a certain threshold the density becomes rock, and below it you have air. To make it look like land, the density is more heavily weighted the lower you go, until eventually it is all rock. Once that is done, the rock is converted into other materials in a second pass.
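Here is the octave loop as a minimal sketch; the cheap hash-based value noise below is only a stand-in for whatever noise function Minecraft actually uses, and the names are made up:

- Code:

#include <cmath>

// Cheap hash-based value noise in roughly [-1, 1]; a stand-in for Perlin/simplex.
static float hashNoise(int x, int y) {
    int n = x + y * 57;
    n = (n << 13) ^ n;
    return 1.0f - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0f;
}

// Bilinear interpolation between the four surrounding lattice values.
static float smoothNoise(float x, float y) {
    int xi = (int)std::floor(x), yi = (int)std::floor(y);
    float fx = x - xi, fy = y - yi;
    float a = hashNoise(xi, yi),     b = hashNoise(xi + 1, yi);
    float c = hashNoise(xi, yi + 1), d = hashNoise(xi + 1, yi + 1);
    float top = a + (b - a) * fx, bottom = c + (d - c) * fx;
    return top + (bottom - top) * fy;
}

// Each octave doubles the frequency (finer detail) and halves the amplitude,
// so low octaves shape the islands and high octaves add the tiny variations.
float fractalHeight(float x, float y, int octaves) {
    float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum += amplitude * smoothNoise(x * frequency, y * frequency);
        frequency *= 2.0f;
        amplitude *= 0.5f;
    }
    return sum;
}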
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Wed Aug 14, 2013 3:47 pm
Just read an interesting paper on voxel representations. This one is in the sparse voxel oct-tree (SVO) family: a data structure in which a node is subdivided into 8 sub-nodes. If a sub-node is completely empty, you don't need to represent it any further. Each sub-node with data has a pointer to another node with the same structure, only this one represents a region 1/8 the size. In this way, SVOs can efficiently encode scenes with large empty spaces. Another perk is the built-in level of detail: to reduce the detail of distant geometry, just chop off a whole sub-tree.
However, to efficiently use this structure in realtime, the scene must be stored in onboard memory on the GPU. Additionally, the amount of data required to represent a particularly detailed scene can get into gigabytes, even with SVO.
And this brings me to the paper I found. It is a compressed method of storing an SVO by turning the tree into a directed acyclic graph (DAG). The gist of this technique is that you still have SVO-structured nodes, but each node can be re-used if there are two places with identical sub-trees. Nodes can also be reused between octaves, but the pointers can't point to a sub-tree which contains itself (that's the acyclic part). How it is constructed isn't terribly interesting, but what is interesting is the effectiveness. As far as the ray traversal is concerned, it is the same as an SVO, but in the test cases presented in the paper, the node reduction ranged from 28x (1 DAG node averages 28 SVO nodes) up to 576x, varying with the noisiness of the scene. That's freaking huge, no matter how you look at it.
The structure of a node in this case is an 8-bit occupancy mask (1 bit for each sub-node, representing whether it is empty), followed by 0-8 pointers to other nodes. Because of the variance in the number of pointers and which one corresponds to which bit, there is more logic than you normally find in a GPU routine, which unfortunately makes it difficult to implement outside of a compute shader (and those are relatively new and would increase the minimum requirements).
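A sketch of what that variable-pointer layout boils down to (my own guess, not the paper's exact encoding): the mask says which children exist, and a popcount of the bits below the queried octant finds the matching pointer in the packed list. In the DAG case, several parents may simply hold the same child index:

- Code:

#include <cstdint>
#include <vector>

struct Node {
    uint8_t mask;                    // bit i set => sub-node i is non-empty
    std::vector<uint32_t> children;  // one entry per set bit, in bit order;
                                     // in a DAG these indices may be shared
};

// Returns the index of sub-node `octant` (0-7), or -1 if that octant is empty.
int32_t childIndex(const Node& n, int octant) {
    if (!(n.mask & (1u << octant))) return -1;       // empty: nothing stored
    // Count the set bits below `octant` to find its slot in the packed array.
    // (__builtin_popcount is the GCC/Clang intrinsic.)
    int slot = __builtin_popcount(n.mask & ((1u << octant) - 1));
    return static_cast<int32_t>(n.children[slot]);
}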
Another pretty serious downside is that this implementation doesn't describe any way to get unique information once you start traversing a subtree. In other words, you can't encode block types or mix in higher level data like heightmaps because the sub-tree is shared by who knows how many other things. The only obvious way to use it is to encode a single type of voxel all the way down.
But that's only the obvious way. I don't have a solution for how to use this yet, but the memory efficiency is too high to ignore. Maybe it can be used as an acceleration structure to get the rays close to the surface of heightmaps, or to help with occlusion testing, or maybe it can be used by itself for sunlight shadows (since that doesn't require block material). The lowest levels are the most noisy, so if we were using, say, 8x8 heightmaps, that is 3 octaves of nodes that can be skipped, resulting in more coherency from there up and thus more savings (assuming, of course, we've found a way to identify the correct heightmap for a ray).
This will require more investigation.
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Wed Aug 14, 2013 9:55 pm
You know, for the life of me, I can't see this working for Minecraft. Every re-used sub-tree needs to be identical to save any space, but looking at arbitrary screenshots of terrain, just how much of that can really take advantage of this? 2 octaves? 3? It's not enough that the entire box exists identically somewhere else; it has to be aligned to the oct-tree grid, which bars even most self-similar arrangements. One thing I didn't mention was that the scenes being used in the experiment were seriously high resolution. I thought that should be fine, because a planet is seriously high resolution too, but the difference is that in those scenes, the voxels become tiny enough that they are basically representing flat or smooth surfaces, particularly in the 576x test (which was the CryTek Sponza model, for the curious). I suspect the merging opportunities mostly come from the over-saturated geometry of these smooth surfaces. In Minecraft, however, we are dealing with particularly coarse geometry, and while there might be repeating features, it will only be over small areas. If we could go the other direction, merging higher-order parts of the tree structure, that might be handy, but due to the nature of merging the trees, you lose track of where you are, which is why we couldn't combine this technique with any other.

So I'm leaning back toward heightmaps again. In fact, I am beginning to question the need for multi-layer heightmaps. The mip-accelerated variety is faster to traverse than the sparse oct-tree, because your rays can travel very quickly over the tops of the terrain, and trees, water, and artificial structures are nearly the only places where multiple layers pay off. I'll probably want to do something special for trees and water anyway, and cities will definitely do better with SVO. The only other places for multi-layer heightmaps are natural caves and overhangs, and representing patches of different material in rock walls. For this, I think I have another solution, and this one also involves SVOs.

Say you have a cave in the wall of a mountain, represented by the heightmap. To "carve" into the mountain, so to speak, you don't change the heightmap. Instead, you introduce an SVO object just for the local vicinity. In this SVO, you have a special block type we'll call "force-empty", which is treated as air regardless of what was in the screen buffer before. Regular empty blocks will let the rays be stopped by existing structures, as usual. Now say, instead of a cave, it is a natural land bridge which you can see out the other side. You need to render the heightmap first so that you can carve stuff out of it, but once you carve out the hole, you have no idea what was originally on the other side of the mountain. So what do you do? The answer is actually pretty simple: you draw it again. Specifically, you set the depth buffer to the exit points of the SVO and mask them in a separate channel, so that you will only draw in the masked area and the rays will already start right where they need to be. If this reveals another area which must be decorated with an SVO, then you just keep going. You can efficiently use software occlusion culling over these large hulls to work out how many passes you need and what will probably be visible each time. This hybrid heightmap/SVO method not only resolves some complications from the layering, it should also be simpler to implement.
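To pin down the force-empty rule, here is a minimal sketch written as plain CPU-side logic for clarity (the real thing would happen in the SVO drawing pass; the types and names are hypothetical):

- Code:

enum VoxelType { EMPTY, SOLID, FORCE_EMPTY };

struct Hit { bool solid; float depth; };

// fromHeightmap: what the heightmap pass already produced for this pixel.
// voxel:         what the carving SVO contains at the ray's current position.
Hit resolve(Hit fromHeightmap, VoxelType voxel, float rayDepth) {
    switch (voxel) {
        case SOLID:        // SVO geometry wins only if it is closer
            return (rayDepth < fromHeightmap.depth) ? Hit{ true, rayDepth }
                                                    : fromHeightmap;
        case FORCE_EMPTY:  // carve: discard the heightmap's hit, re-trace behind
            return Hit{ false, 1.0e30f };
        case EMPTY:        // ordinary air: keep whatever was already there
        default:
            return fromHeightmap;
    }
}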
Since the SVO can correct errors in the heightmap, there is no need to run them in the same pass or keep a unified data structure, and I can use the compressed textures knowing that the SVO will cover the discrepancies when it gets close enough to matter. Once you are down to one surface per planet face, texture storage becomes really easy, and you can do meshing over the planet surface to get a really tight fit and save even more refinement steps (especially if tessellation is added). Maybe I can even add the multiple-pass trick to avoid the worst case on the edges of hills, where the ray misses the hill altogether and has to keep flying until it reaches the next one. In this version, the surface hint mesh will mark the max traversal distance before the ray exits the mesh (but only the exit, so the other geometry will need to be drawn again).

The last thing I would like to do is try to implement a decal system. With this pass, you could project arbitrary textures or modifiers onto individual faces using polygons, like you can normally do in the polygon layer, but it only applies if it is flush with the applied surface. With this, you can apply custom textures, make patches in rock walls without doing a full SVO, or do more interesting things like large moving shadows under ships (if dynamic shadows aren't being used). In the distance, this is the only way to accomplish such effects, but even in polygon range it can be useful because, being a post-processing effect, it doesn't need to be baked into the chunk geometry, and you can move it around like an entity.
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Fri Aug 16, 2013 5:10 pm
The trick to ray tracing height fields is to step through the volume as efficiently as possible. A technique often employed for this is to vary the step size based on some heuristic.

https://www.shadertoy.com/view/MdX3Rr

This is a ray-traced version of the Elevated demo. It runs a max of 128 steps, and each step size is a function of the distance between the sample point and the ground: the higher it is off the ground, the greater the stride becomes. And that works pretty well here. However, if you look closely, you can occasionally see an interesting artifact. Over the lips of some of the mountains, when the mountain behind is really far away, you can sometimes see a blurry white outline. These are points where the raytrace gave up after 128 steps and declared the pixel to be sky, even though pixels above it correctly located the distant mountain. The reason is that this particular ray skimmed too close to the first mountain range, which shrank its step size down too far, and it couldn't make any progress towards the second mountain. As it turns out, this is a big problem with all raytracers using variable step sizes. They are efficient in wide open spaces but lose efficiency where they brush close to other objects (which is by design, because they need the extra resolution there). The Elevated demo alleviates the problem a little bit with a hack: if the ray gives up without hitting anything, it will check the distance to the ground, and if it is within 10 units, it will still consider it a "hit". This will produce a little distortion and lead to false positives, but in practice it is at least in the right ballpark, and it looks much better than white holes.

Ideally, you would want to avoid long strides as much as you could, especially if you are reading from a texture, for reasons of cache coherency. But what if you could somehow ensure that the mesh spawning the rays would have a guaranteed maximum distance to the virtual surface, or, perhaps more applicably, a maximum number of steps? Imagine with the mountain scenario that the mesh is never more than 20 steps distant from the surface at any point. Once you hit those steps, you bail out, but then you immediately draw a polygon further down the mesh which lies beyond the aborted region. Polygons in a mesh are dispatched in the order they exist in memory, but you can't reliably predict which one will finish first. However, if you test against the depth buffer, you will get the same result either way. I'm working on a way to design a mesh with this sort of guarantee.

"But Fr0stbyte," you might ask, "what if your ray is parallel to a gap between two blocks? It could legitimately travel for miles before hitting anything." And this is true. That would make it impossible for an isosurface mesh to guarantee a max distance to the surface. However, we don't need to restrict ourselves to isosurface meshes. The GPU doesn't care; hell, it can't even tell. What we can do instead is lay down polygon borders wherever the step size gets too large, and a new ray can start there, even if that is below the top surface of the mesh. If we generalize it, we could mark the borders at the 32x32 chunk boundaries at whatever scale is in use, ensuring that all the threads working on a specific polygon keep the entire polygon in cache. If we do that, a rough block mesh might even be more efficient than tessellation for getting the mesh closer.
Once a texture is cached, its access time is a single clock cycle, putting it on par with any arithmetic operation. If that is the case, transforming additional vertices could be the slower route. I had investigated this route before, but I didn't think it would work with the primary heightmap. Now I am starting to think it will, and if it does, it will work great.
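Here is roughly what that variable-step loop plus the Elevated-style bail-out looks like; the step heuristic, the constants, and the stand-in terrainHeight() are all hypothetical:

- Code:

#include <cmath>

// Stand-in for the real heightmap lookup.
float terrainHeight(float x, float z) {
    return 10.0f * std::sin(x * 0.1f) * std::cos(z * 0.1f);
}

// Marches from (ox,oy,oz) along the normalized direction (dx,dy,dz).
// Returns the distance to the surface, or -1.0f for sky.
float trace(float ox, float oy, float oz,
            float dx, float dy, float dz, int maxSteps) {
    float t = 0.0f;
    float h = 0.0f;
    for (int i = 0; i < maxSteps; ++i) {
        float x = ox + dx * t, y = oy + dy * t, z = oz + dz * t;
        h = y - terrainHeight(x, z);   // clearance above the ground
        if (h < 0.01f) return t;       // at (or just under) the surface
        t += 0.4f * h;                 // stride grows with altitude, shrinks when skimming
    }
    // Ran out of steps. If still within 10 units of the ground, accept it as a
    // hit anyway (the demo's hack); otherwise declare it sky and risk the hole.
    return (h < 10.0f) ? t : -1.0f;
}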
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Mon Aug 19, 2013 4:27 pm
I mentioned this in the shadertoy thread, but the ability to access neighboring voxels looks to be more important than I anticipated. On top of that, implementing sparse virtual textures for shits and giggles is a really bad idea. Handling not only rendering but also streaming and management of the memory is a full-time job in itself, and while it is not impossible, it definitely should not be attempted until proven necessary. What I would prefer is to have a basic height() function you can call from anywhere that gives you the local height regardless of the underlying texture data structure. With SVT, that means a ton of indirection, even for adjacent texels, which is something I wanted to avoid. At the same time, a single-patch texture can't be guaranteed, and if it were required, fake AO and ray-traced shadows would be impossible. The shadows might not be high priority, but the fake AO is too valuable to overlook.

So what I want to try first is an older terrain technique called clipmaps. Clipmaps are a bunch of maps sandwiched on top of one another, each one covering 2x the area of the one before it. This is similar to the concept of mip-maps, except here each octave has an identical resolution. Besides having a reliable footprint, there are some nice properties to this. For one, you can "scroll" a map by copying quadrants of it and leaving the rest of the texture in place; texture addressing simply involves wrapping around the coordinate system. Additionally, you can do trilinear filtering efficiently. Bilinear filtering is where you take the weighted average of a 2x2 patch of texels depending on the absolute point of the texture pointer; trilinear does the same, only it also blends two adjacent mips. The result is a smooth transition between LODs with no visible popping. Since the shader will turn even smooth terrain into equal-sized cubes, this is a good behavior to have. But the biggest reason is that clipmapping, though less flexible than some of the alternatives, will run much faster on older hardware, and if we want to make this a proper engine replacement for Minecraft, it has to be able to run at least as well as the original.

I also want to try some tricks with the trilinear filtering, such as using the lower mips to extend the dynamic range of the height field. Normally, an 8-bit channel can only go ±127 blocks, but if you include a few more heights, you can get ranges in the thousands, assuming the local variation isn't massive.

So I'm trying this next. If it works, it will reduce the complexity of the render pipeline considerably, and more importantly, it will separate the height field access method from the raytracing algorithm, allowing us to experiment much more easily. I still don't know how to integrate this with the oct-tree structures, but that will come once this one is solidified.

Additionally, I've decided to include trees in the global heightfield again. This way, aside from water, every exposed surface will by definition have a 15-level sunlight value, meaning one less thing that needs mapping. Also, leaves are only usefully transparent at a relatively close range, so it is better to make the general version opaque. Any areas which are shadowed will need to be corrected by some other mechanism, such as the oct-tree.
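Going back to the clipmap scroll trick a couple of paragraphs up, a minimal CPU-side sketch of the wrap-around addressing (the names, the std::vector standing in for the texture, and the update policy are all hypothetical):

- Code:

#include <vector>

struct ClipLevel {
    int size;                  // e.g. 512; every octave keeps this same resolution
    int originX, originY;      // world texel coordinate of the level's corner
    std::vector<float> texels; // size*size heights, toroidally addressed
};

// Wrapped lookup: world coordinates map into the ring buffer modulo `size`.
float sample(const ClipLevel& c, int worldX, int worldY) {
    int u = ((worldX % c.size) + c.size) % c.size;
    int v = ((worldY % c.size) + c.size) % c.size;
    return c.texels[v * c.size + u];
}

// Scrolling by dx only rewrites the newly exposed columns; the rest of the
// texture stays in place, which is what makes clipmap updates so cheap.
void scrollX(ClipLevel& c, int dx, float (*heightAt)(int, int)) {
    int from  = (dx > 0) ? c.originX + c.size : c.originX + dx;
    int count = (dx > 0) ? dx : -dx;
    for (int i = 0; i < count; ++i)
        for (int y = c.originY; y < c.originY + c.size; ++y) {
            int x = from + i;
            int u = ((x % c.size) + c.size) % c.size;
            int v = ((y % c.size) + c.size) % c.size;
            c.texels[v * c.size + u] = heightAt(x, y);
        }
    c.originX += dx;
}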
*Edit* The numbers are looking promising, too. Assuming a DXT-compressed texture channel, which can either be an alpha or a 16-bit color of sorts, the compression rate is 16 texels (4x4) in 64 bits, or 4 bits/texel.

If we make the base texture 64x64, so the draw distance only goes ~2 chunks out before splitting, there will be 8 mips of 64x64 each. That comes to 2KB per channel per mip, or 16KB for raw height. If we do 128x128, so resolution halves at 4 chunks, that is 7 mips of 8KB each, or 56KB for raw height. To split at the vanilla "Far" draw distance of 16 chunks, you need 512x512, in which case it is 5 mips of 128KB each, for a total of 640KB for raw height. In other words, if you have a 6-year-old gaming GPU with 512MB of memory, you are using 0.12% of it. Granted, this will be several times higher including the other necessities, such as block types and lighting. Another way to look at 640KB is 5,120 vanilla block faces' worth of memory, which is not much considering there can be hundreds of thousands of faces on "Far". Because the raytrace can accept interpolated values, making it difficult to spot the lower resolution texture, higher than 512x512 is probably not beneficial, and even 256x256 (6 mips @ 32KB = 192KB) might produce results comparable to existing vanilla (not identical, but you'd be hard-pressed to identify which was the original). Starting out with raw 8-bit values, 512x512 x 5 mips is 1.25MB. Twice as large, but nothing to write home about. The fact is, this is still a terrifically efficient way to encode the vast majority of terrain data, enough that we can afford to be a little greedy.

I'm also working on a new acceleration structure called cone step mapping (CSM). It represents a cone of safe space centered over the sampled texel, which defines how far the next step can be. It is more difficult to generate than the max mipmap from before, but there are two advantages. The first is that the mip is no longer hijacked, so you can have natural averages and proper blending between mips. The other is that there will no longer be any complicated resolution hopping, meaning more lightweight logic in the marching loop and better branch behavior. Additionally, even if the cones are only comparable to the max-mip technique for regular traces, because they all point upward, they will be exceptionally effective at bouncing rays back towards the sky for shadow tests.
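As a sanity check, the arithmetic above is easy to reproduce (sizes and level counts taken straight from this post):

- Code:

#include <cstdio>

int main() {
    const int bitsPerTexel = 4;  // DXT: 16 texels (4x4) in 64 bits
    const struct { int size; int levels; } configs[] =
        { { 64, 8 }, { 128, 7 }, { 512, 5 } };
    for (const auto& cfg : configs) {
        long perLevel = (long)cfg.size * cfg.size * bitsPerTexel / 8;  // bytes
        long total = perLevel * cfg.levels;
        std::printf("%dx%d x %d mips: %ld KB per mip, %ld KB raw height\n",
                    cfg.size, cfg.size, cfg.levels, perLevel / 1024, total / 1024);
    }
    // Prints 2/16, 8/56 and 128/640 KB: the figures quoted above.
    // 640 KB is ~0.12% of a 512 MB card.
}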
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 4:06 pm
I really need to stop doing this every day. I have other work to attend to dammit! But since I'm already here...
Not sure if I mentioned relaxed cone step mapping, but it is really neat. The idea behind cone step mapping is that you have a cone which represents empty space above a height field texel, and the traveling ray can cross to the border of this cone with no worry of hitting anything. To cross large spans while the ray is sufficiently high above the height field, the steps are large and the trace is efficient. However, once you start getting close to the ground, the steps get smaller and more conservative. It gets even worse with Minecraft, because the walls are 90 degrees, which makes it difficult to represent the useful slope.
Enter relaxed cone step mapping. It is a variation of CSM with a more liberal cone size. Instead of describing a volume where the ray will not intersect the surface, it describes a volume where the ray can only intersect the surface once. All those 90 degree corners are now inside the cone and don't matter, and once the ray passes below the surface, you know without a doubt that the intersection must be somewhere between there and the previous stride.
The NVIDIA article describing this technique suggests a binary search to home in on the final point, but I don't think that will work since we aren't only trying to hit the top surface. Instead, we just start raymarching linearly like normal.
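For concreteness, here is a hypothetical sketch of the relaxed-cone march with that linear fallback. The cone-map layout, the stand-in sampleCone(), and the descending-ray assumption (as in relief mapping) are all mine:

- Code:

#include <cmath>

struct ConeTexel { float height; float ratio; };  // ratio = cone radius per unit height

// Stand-in for the cone-map texture fetch.
ConeTexel sampleCone(float x, float z) {
    return { 10.0f * std::sin(x * 0.1f) * std::cos(z * 0.1f), 1.5f };
}

// Assumes a normalized, descending ray (dy < 0), as in relief mapping.
// Returns the parameter t where the ray first passes under the height field.
float relaxedConeTrace(float px, float py, float pz,
                       float dx, float dy, float dz, int maxSteps) {
    float t = 0.0f;
    float dirXZ = std::sqrt(dx * dx + dz * dz);
    for (int i = 0; i < maxSteps; ++i) {
        ConeTexel c = sampleCone(px + dx * t, pz + dz * t);
        float h = (py + dy * t) - c.height;  // clearance above the field
        if (h <= 0.0f) break;  // under the surface: the single crossing is behind us
        // Largest stride that stays inside a cone of slope `ratio` over this texel.
        t += h * c.ratio / (dirXZ - c.ratio * dy);
    }
    return t;  // refine between the last two strides with a short linear march
}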
There are some possible optimizations here: 1) Tetrahedron instead of cone. Minecraft is blocky, so why not make the cone blocky as well? It will be easier to generate the cone field with this shape, and it doesn't have many downsides. If we wanted to get fancy, we could represent the X and Z slopes separately to make the cone step faster in one direction than the other. Probably not worth the extra memory, though.
2) Maximum altitude. Because the height field will be covered by a surface mesh (necessary for defining the starting positions of the rays), if we can guarantee a maximum error between the height of the surface mesh and the heightmap, then any ray passing above that height has without a doubt exited the mesh. While it is possible the ray could re-enter the mesh at some point, you could just as easily start a new ray from that place and avoid the long strides.
3) Limit effective cone range. Essentially, you would cut off how far away the cone considers blocks. It would cap the max stride distance, but if we are doing (2), then the max stride shouldn't get all that big to begin with. Instead, you can make even wider cones in the plains and take full advantage of whatever the new stride distance actually is.
4) Slightly more conservative cones. Instead of being as wide as possible without letting a ray intersect the surface twice, cones can be arranged to limit the number of steps required to march to any place on the intersected surface, which is a simple Manhattan distance. Then you have a defined exit point in the shader, which helps even more.
On the downside, self-shadowing becomes a little different. Instead of tracing a new ray toward the sun like you normally would, you have to start a new ray from the sun and check whether it hits at the same mark, meaning the surface mesh won't help and (2) is invalidated, and (3) becomes a bad idea. On the plus side, due to how the cones work, if you ever cross under the height field, the target point is guaranteed obstructed, and if the target point is inside of the cone, it is guaranteed visible. Because of this, you'll never need to do a local search, which actually is a big deal.
Shadowing for everything else still must be done with shadow mapping, meaning we don't get to save on memory, but it's still an improvement. To generate a shadow map, everything needs to be re-drawn from the perspective of the sun, so that you can get depth samples, and these samples are compared against the camera POV to determine which areas are in shade. Objects in the shadow map will cast shadows on any object in the scene, regardless of whether they are in the shadow map or not. To cast shadows on those objects from the height field, you just do it the same way as before: step along the ray and see if you intersect the terrain before reaching the target depth.
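For reference, the comparison against the shadow map is tiny; a hypothetical sketch with stand-ins for the sun-space transform and the depth fetch:

- Code:

struct Vec3 { float x, y, z; };

float shadowDepthAt(float u, float v);  // depth recorded from the sun's POV (assumed)
Vec3 toSunSpace(Vec3 worldPos);         // world -> sun space, xy mapped to [0,1] (assumed)

// A point is shaded if something else was closer to the sun along its texel.
// The small bias guards against self-shadowing from depth imprecision.
bool inShade(Vec3 worldPos, float bias) {
    Vec3 s = toSunSpace(worldPos);
    return s.z > shadowDepthAt(s.x, s.y) + bias;
}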
In other words, the full search is only needed for drawing the terrain itself, not shadows. Special cases need to be made for "carving", but those should be infrequent enough that we don't need to worry about it yet.
*Edit* Actually, I think we can save a step and do the shadow tracing at the same time as reading from the shadow map.
*Edit Edit* Hmm, it occurs to me that the cones will necessarily violate their rule of one intersection when the ray is traveling upward, resulting in the need for potentially multiple linear searches. This technique was designed for relief mapping, where you would only ever have downward rays, but here that's not guaranteed. The way to avoid this is either to rely on (2) and hope for the best on shallow traces, or to include the conservative cones as well. If we did have both cones, we could accelerate shadow tracing further by starting at both ends and meeting halfway. The downward ray terminates early, while the upward ray accelerates the higher it gets, so which one is faster would depend on the terrain.
ACH0225 General
Posts : 2346 Join date : 2012-01-01 Location : I might be somewhere, I might not.
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 5:51 pm
But fr0st, what if the dual generation perlin engines don't fluctuate the noise properly!?
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 8:42 pm
Throw more perlins at it till it do.
ACH0225 General
Posts : 2346 Join date : 2012-01-01 Location : I might be somewhere, I might not.
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 8:56 pm
But fr0st, what if the variables don't actuate the cross-reciprocator correctly!?
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 8:57 pm
I might be overthinking the importance of the CSM. Not for shadows (it'll be awesome for that), but for regular forward rendering. The surface mesh will mostly be flush with the surface, or sloped over a few blocks. If it's more than that, we can just increase the geometry in that area to get a tighter fit. At this point, the mesh contains nothing but a coordinate, which is nothing. It's definitely worth it to mess with that to avoid the worst case in the shader.
ACH0225 General
Posts : 2346 Join date : 2012-01-01 Location : I might be somewhere, I might not.
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 8:59 pm
But fr0st, what if the dual band processing overpasses collapse because of workload!?
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 9:28 pm
Then we make it singleton and run it out-of-core. Honestly, what are they even teaching in schools these days?

- ACH0225 wrote:
- But fr0st, what if the variables don't actuate the cross-reciprocator correctly!?
Some error is to be expected in any reciprocator, but the trick is to not let it accumulate in the field variance. So long as the operator is deterministic (which can be assumed in shader model 2.0+), we can error correct the variance in post-processing. It won't affect the occlusion results, so the ordering doesn't really matter. Realistically, though, I don't think it will even be noticeable.
MercurySteam Infantry
Posts : 543 Join date : 2013-06-22
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 10:45 pm
But Fr0st, what happens if the soup can won't open?
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 10:57 pm
- MercurySteam wrote:
- But Fr0st, what happens if the soup can won't open?
See what you can do with a secondary can opener, preferably a manual one. Failing that, use a flat-head screwdriver to finish the job. If you have no hole or ability to punch a hole, carefully work your way around the rim with needle-nose pliers, unrolling the metal as you go. This will let you remove the lid.
Tiel+ Lord/Lady Rear Admiral 1st
Posts : 5497 Join date : 2012-02-20 Age : 27 Location : AFK
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 11:01 pm
I've also heard reverse-firing handguns seem to do the trick. Never seen anyone complain about the results.
ACH0225 General
Posts : 2346 Join date : 2012-01-01 Location : I might be somewhere, I might not.
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 11:04 pm
But fr0st, what if the if-then statements all are proven false!?
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 11:07 pm
That is a good thing. Less to render.
Joel Marine
Posts : 1473 Join date : 2012-04-01 Age : 27 Location : A Death World, stopping a Waaagh!
Subject: Re: Additional heightmap compression. Tue Aug 20, 2013 11:12 pm
But Fr0st, what if the cross-compatibility actuator's induction powered latch keys for the circuit housing fail?!?!?!?!
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Wed Aug 21, 2013 3:51 am
I don't know what that means.
ACH0225 General
Posts : 2346 Join date : 2012-01-01 Location : I might be somewhere, I might not.
Subject: Re: Additional heightmap compression. Thu Aug 22, 2013 7:22 pm
But fr0st, what if the computing nacelles don't properly dispense raytracing equations?
fr0stbyte124 Super Developrator
Posts : 1835 Join date : 2011-10-13
Subject: Re: Additional heightmap compression. Thu Aug 22, 2013 8:15 pm
That's a risk I am willing to take.
ACH0225 General
Posts : 2346 Join date : 2012-01-01 Location : I might be somewhere, I might not.
Subject: Re: Additional heightmap compression. Thu Aug 22, 2013 10:33 pm
But fr0st, what if
- Code:

enum State { INITIAL, THING1, THING2, THING3 };

void DoThing1(void);
void DoThing2(void);
void DoThing3(void);

void Next(enum State state) {
    switch (state) {
        case INITIAL: DoThing1(); break;
        case THING1:  DoThing2(); break;
        case THING2:  DoThing3(); break;
        case THING3:  break; /* do nothing -- exit */
    }
}

void DoThing1(void) { /* Do "thing 1" */ Next(THING1); }
void DoThing2(void) { /* Do "thing 2" */ Next(THING2); }
void DoThing3(void) { /* Do "thing 3" */ Next(THING3); }

?
Last_Jedi_Standing Moderator
Posts : 3033 Join date : 2012-02-19 Age : 112 Location : Coruscant
Subject: Re: Additional heightmap compression. Thu Aug 22, 2013 10:55 pm
Every time I see a post in the ideas section, I hope it's something interesting actually happening to the mod, but instead it's ACH spamming nonsense.