Intel shows off Raytraced Quake 4
INTEL WAS SHOWING off Raytraced Quake 4 at IDF, and its creator, Daniel Pohl, gave us a little tour of the game. There is a lot here that will be in the next generation of games, or the ones after that; this is much more than a demo.
You see it as soon as you watch demos like the one above: it does things that other games don't. They cheat and cut corners; this one does not have to. See the multiple reflective surfaces that are all accurate? Try that with standard raster graphics.
Raytraced Quake 4 (RQ4) casts one ray per pixel by default, and using four ganged 4C machines, it can run at an almost playable frame rate. This means a Tigerton system should be more than able to render this on the CPU, with a GPU there simply to throw pixels on the screen.
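To make "one ray per pixel" concrete, here is a minimal sketch of the idea, assuming nothing about Pohl's actual code: the camera, the single-sphere scene and the helper names below are invented purely for illustration. Each pixel gets exactly one primary ray, and the colour (here just a hit/miss character) comes from whatever that ray strikes first.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

def render(width, height):
    """One primary ray per pixel; camera at the origin looking down -z."""
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0  # toy scene, not Quake 4
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel centre onto a virtual screen plane at z = -1.
            u = (x + 0.5) / width * 2.0 - 1.0
            v = 1.0 - (y + 0.5) / height * 2.0
            hit = ray_sphere_hit((0.0, 0.0, 0.0), (u, v, -1.0),
                                 sphere_center, sphere_radius)
            row.append('#' if hit is not None else '.')
        image.append(''.join(row))
    return image

print('\n'.join(render(24, 12)))
```

A real tracer shades the hit point and spawns secondary rays for reflections and shadows, but the per-pixel loop is the same shape: the frame cost is rays times work per ray, which is why ganging machines, or cores, scales it so directly.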
There is another raytracer at Intel that will do the same job on a single QC CPU, so this is not a job for a Tyan Typhoon anymore; you can do it on a Kentsfield box now. In a year, this will be quite acceptable on a mid-range machine, and in three, you will have to wonder why you need GPUs anymore. Oh wait, that is no surprise, it is already faster on a Kentsfield than on a G80.
If you want to do the same colored-lights trick that Epic pulled off in UE3 a few GDCs ago, you can, and you get 100% correct shadows thrown in for free. You can also do physically correct glass shaders, reflection and refraction.
If you have the CPU power to spare, you can also cast more than one ray per pixel and get the equivalent of AA and AF. With 16x AA, you obviously need 16x the horsepower, but that is already on the roadmaps.
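The "16x the horsepower" arithmetic falls straight out of how supersampled AA works: instead of one ray through the pixel centre, you cast a grid of rays through sub-pixel positions and average them. A small sketch, assuming a simple stratified (grid) sample pattern rather than whatever Intel's tracer actually uses:

```python
def stratified_offsets(n):
    """Sub-pixel sample offsets for n x n stratified supersampling.

    Each offset is a (u, v) position inside the unit pixel, so n * n
    primary rays are cast per pixel instead of one.
    """
    return [((i + 0.5) / n, (j + 0.5) / n) for j in range(n) for i in range(n)]

# 16x AA is a 4x4 grid of samples per pixel: 16x the primary rays,
# hence 16x the horsepower for the primary-ray pass.
print(len(stratified_offsets(4)))  # 16
```

The cost is exactly linear in the sample count, which is why AA quality on a raytracer is a straightforward "spend more cores" knob.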
Without getting too much into the math, let's just say there are some scaling advantages to using raytracing over normal raster graphics. Where you would have to up the geometry and take a non-linear scaling hit with a rasteriser, raytracing will do the same job with a much gentler, near-linear increase in complexity.
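The intuition behind that scaling claim, under the usual assumptions (the tracer keeps its triangles in an acceleration structure such as a balanced BVH, and the rasteriser touches every triangle each frame), can be sketched with a toy cost model. These formulas are illustrative, not measurements:

```python
import math

PIXELS = 1600 * 1200  # primary rays per frame at one ray per pixel

def raster_geometry_work(num_triangles):
    """Toy model: the rasteriser transforms and sets up every triangle, O(n)."""
    return num_triangles

def raytrace_traversal_work(num_triangles):
    """Toy model: each ray walks a balanced BVH, about log2(n) steps, O(log n)."""
    return PIXELS * math.ceil(math.log2(num_triangles))

for n in (1_000_000, 2_000_000, 4_000_000):
    print(n, raster_geometry_work(n), raytrace_traversal_work(n))
```

Doubling the triangle count doubles the rasteriser's geometry work, but only adds one BVH level (one extra step per ray) to the tracer. That is the sense in which raytracing scales gracefully as scenes get more detailed.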
In a neat bending of technology to an unintended use, Daniel Pohl did one really cool thing: he used the same rays that you use for graphics to do collision detection. You cast rays out from the player, and everything they hit may be an object.
Since the math is being done already, collision detection, one of the harder problems in 3D games, is done for you. It isn't free, but considering how many millions of pixels there are on a screen (1600x1200 is almost 2 million), a few hundred more rays per object are a rounding error. You can do much more accurate collisions for every bullet and bit of debris spinning around, and do it right.
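The trick is that the same intersection routine the renderer already runs can answer "can I move this far?". A minimal sketch, not Pohl's implementation: the sphere obstacles, the function names and the unit-length direction convention are all assumptions made for this example.

```python
import math

def ray_sphere_distance(origin, direction, center, radius):
    """Distance along a unit-length ray to a sphere, or None on a miss."""
    ox = origin[0] - center[0]
    oy = origin[1] - center[1]
    oz = origin[2] - center[2]
    dx, dy, dz = direction  # assumed normalized, so a = 1
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def would_collide(player_pos, move_dir, step, obstacles):
    """Cast one ray along the move direction; collide if anything is nearer than the step."""
    for center, radius in obstacles:
        t = ray_sphere_distance(player_pos, move_dir, center, radius)
        if t is not None and t <= step:
            return True
    return False

walls = [((5.0, 0.0, 0.0), 1.0)]  # one spherical obstacle, 4 units from its surface
print(would_collide((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 1.0, walls))  # False
print(would_collide((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 5.0, walls))  # True
```

One such ray per moving bullet or debris chunk is exactly the "few hundred more rays" the article is talking about: trivial next to the two million primary rays the frame already costs.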
Basically, this is the future. AMD and ATI have both said this is the path they are on, and Intel has demoed it at a few IDFs. With the work Daniel Pohl did, combined with a few Intel programs, it is on the short-term horizon for gamers. Now that Pat Gelsinger has uttered the 'L' word, you know things are