22nd May 2003, 3:14 PM
Z-Buffer is pretty close to actually drawing everything. It checks every single polygon's location on the screen to see if anything has been drawn there yet. If not, it draws the polygon and keeps track of how close it is to the screen. If another polygon has already been drawn there, then it compares which is closer to the screen. If the old one is closer, then it doesn't draw the new one. But if the new polygon is closer, then it draws the new one over the old one. So either way it has to run through every polygon and will draw a lot of them, so that sounds kind of like what you were talking about, lazy. The depth-sort technique actually sorts the polygons from furthest to closest, then uses that order to draw. So it's a little better. It could also be that new techniques have been developed and are now widely used, or that Nintendo, being the super geniuses that they are, developed their own way of finding what to draw.
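To make the two ideas concrete, here's a toy sketch in Python (my own illustration, not how any actual console or graphics card implements it): a z-buffer keeps one depth value per pixel and only draws a fragment if it's closer than what's already stored, while depth-sort just orders whole polygons back to front and paints over.

```python
# Toy z-buffer: one depth value per pixel, start "infinitely far" away.
def make_buffers(width, height):
    color = [[None] * width for _ in range(height)]
    depth = [[float("inf")] * width for _ in range(height)]
    return color, depth

def plot(color, depth, x, y, z, c):
    """Draw pixel (x, y) with color c only if its depth z is closer
    (smaller) than whatever was drawn there before."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c

color, depth = make_buffers(4, 4)
plot(color, depth, 1, 1, 5.0, "far poly")   # pixel empty: drawn
plot(color, depth, 1, 1, 2.0, "near poly")  # closer: overwrites
plot(color, depth, 1, 1, 9.0, "farther")    # farther: thrown away
print(color[1][1])  # near poly

# Toy depth-sort ("painter's algorithm"): sort polygons back to front,
# then draw in that order so nearer ones paint over farther ones.
polys = [("mid", 3.0), ("near", 1.0), ("far", 7.0)]
back_to_front = sorted(polys, key=lambda p: p[1], reverse=True)
print([name for name, _ in back_to_front])  # ['far', 'mid', 'near']
```

Note the trade-off the paragraph describes: the z-buffer does a comparison at every pixel of every polygon, while depth-sort pays once up front to order the polygons.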
ABF- I thought you said before that it was a new idea, so I was disagreeing with that. Implementation, however, is an entirely different matter. I have no idea what algorithms are used in what graphics cards or how long they have been in use for standard machines. I'm a college student, we only learn theory! :) As an example, they've had ray tracing algorithms since the 60's, but it STILL is not something you can do in a real time environment, and probably won't be for another 5-10 years.