- This topic has 5 replies, 4 voices, and was last updated 12 years, 3 months ago by Anonymous.
August 18, 2008 at 7:35 am #6886 Anonymous (Inactive)
Check this out! Works on shite PCs, in a browser, with no plugins! Allegedly!
August 18, 2008 at 9:34 am #42047 Anonymous (Inactive)
That's ridiculous. Perfectly sharp shadows etc. No way is this real, and it ain't gonna be without major technology advances.
As the article says, you would need a super graphics card for each player, or be able to batch the scene, and with all that light interaction they don't say how that could be done.
The only way in my mind to currently achieve something like this is a pre-rendered database, QuickTime VR style, on a Google farm, like the way Meez render their avatars, but clearly that's not what's going on.
City bits sure are purdy though. Good modellers. Which is, I think, what they want to hear from people watching the video.
August 18, 2008 at 2:13 pm #42061 Anonymous (Inactive)
I was thinking recently about whether it was possible to make something broadly like this. (Stream video of a 3D world.)
I haven't studied the video in detail, but I wouldn't be so quick to call BS on it (as a concept, if not the actual quality of the graphics you see there).
Quoting the reply above: "As the article says, you would need a super graphics card for each player, or be able to batch the scene, and with all that light interaction they don't say how that could be done."
The article seems to assume they are going for a rasterisation-based approach... There doesn't seem to be any reason to think this, and it wouldn't seem very smart off the bat...
What if you consider some sort of global illumination based approach instead?
Could you potentially store a big, complex, globally illuminated world on your servers, and stream viewpoints to players as they moved through it? With a good few player viewpoints moving around your complex scene, and maybe specialist hardware in your servers, this might work out cheaper than rasterising a viewpoint for each player..? Particularly if you don't have to render avatars for the viewpoints themselves, or if you fudge that?
High bandwidth requirements, of course, to make it work, but that might be acceptable.
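To put a rough number on the "high bandwidth requirements" mentioned above, here is a back-of-envelope sketch. All figures (720p, 30 fps, a ~100:1 codec ratio) are illustrative assumptions, not from the thread:

```python
# Back-of-envelope bandwidth for streaming one rendered viewpoint
# from a server to a thin client. All figures are assumptions.

def stream_bandwidth_mbps(width, height, fps, bytes_per_pixel=3, compression_ratio=1):
    """Return the bandwidth needed for one video stream, in megabits per second."""
    raw_bytes_per_sec = width * height * bytes_per_pixel * fps
    return raw_bytes_per_sec * 8 / compression_ratio / 1e6

# Uncompressed 720p RGB at 30 fps:
raw = stream_bandwidth_mbps(1280, 720, 30)  # ~663.6 Mbit/s, hopeless over 2008 links
# With an assumed ~100:1 video-codec ratio:
compressed = stream_bandwidth_mbps(1280, 720, 30, compression_ratio=100)  # ~6.6 Mbit/s
print(round(raw, 1), round(compressed, 2))
```

So per viewpoint the cost is a few Mbit/s of compressed video, which is why the idea is bandwidth-bound rather than obviously impossible.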
August 18, 2008 at 3:20 pm #42062 Anonymous (Inactive)
The bit at the start with moving cars etc. is purportedly pre-rendered video, but the 'real-time' stuff is not too far away from what ATI are demoing here for their new card:
and Nvidia are showing here (albeit on 4 futuristic Quadros):
August 18, 2008 at 3:21 pm #42063 Anonymous (Inactive)
Well, I get the terminal concept (and I think it's cool). I have thought about it for mobile phones, where the rendering of 3D graphics is constrained but the device can draw streamed images and pass inputs back to a server.
My problem was with the actual quality of the graphics shown; it didn't look like they are fudging on quality at all. They had dynamic lights, shadows etc., so I am calling BS that these guys have achieved what they are saying.
I have no doubt we will have dumb terminals receiving fancy graphics soon, just not unconstrained flight through massive cityscapes, for a few years at least. :wink:
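The dumb-terminal idea described above (the server owns the scene and renders; the client only displays frames and forwards input) can be sketched as a minimal loop. All class names and the message shapes here are hypothetical, standing in for a real renderer and network transport:

```python
# Minimal sketch of the "dumb terminal" round trip: the client forwards
# raw input upstream, the server updates scene state and renders, and the
# client just displays whatever frame comes back. Names are hypothetical.

class RenderServer:
    def __init__(self):
        self.camera = [0.0, 0.0, 0.0]  # server-side scene state

    def apply_input(self, event):
        # Move the camera in response to a client input event (dx, dy, dz).
        self.camera = [c + d for c, d in zip(self.camera, event)]

    def render_frame(self):
        # Stand-in for real rendering: return an encoded "frame" token.
        return f"frame@{tuple(round(c, 1) for c in self.camera)}"

class ThinClient:
    def __init__(self, server):
        self.server = server
        self.last_frame = None

    def tick(self, input_event):
        # 1. Send input upstream; 2. receive and "display" the rendered frame.
        self.server.apply_input(input_event)
        self.last_frame = self.server.render_frame()
        return self.last_frame

client = ThinClient(RenderServer())
print(client.tick((1.0, 0.0, 0.0)))  # frame@(1.0, 0.0, 0.0)
```

The point of the split is that the client needs no GPU at all; its cost is fixed regardless of scene complexity, which is exactly why the phone case is attractive.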
August 29, 2008 at 11:21 am #42141 Anonymous (Inactive)
OTOY (the original links) are sponsored by AMD/ATI, and the opposition are NVIDIA, of course, who own Mental Images, who are now showing RealityServer:
A little bit more realistic perf (it downgrades as the viewpoint moves), as they are doing the whole scene, not just the ATI 'Cinema 2.0'-style compositing that OTOY were originally demoing.
EDIT: for the original OTOY demos, not composite-based, see: