CUDA is definitely one of the coolest technologies to come along in a long time. It’ll be really interesting to see what ingenious ways the power of the GPU can be harnessed to speed up games, and indeed computing at large.
There’s one thing about it I find slightly worrying, though. From the CUDA docs:
3.5 Mode Switches
GPUs dedicate some DRAM memory to the so-called primary surface, which is used to refresh the display device whose output is viewed by the user. When users initiate a mode switch of the display by changing the resolution or bit depth of the display (using NVIDIA control panel or the Display control panel on Windows), the amount of memory needed for the primary surface changes. For example, if the user changes the display resolution from 1280x1024x32-bit to 1600x1200x32-bit, the system must dedicate 7.68 MB to the primary surface rather than 5.24 MB. (Full-screen graphics applications running with anti-aliasing enabled may require much more display memory for the primary surface.) On Windows, other events that may initiate display mode switches include launching a full-screen DirectX application, hitting Alt+Tab to task switch away from a full-screen DirectX application, or hitting Ctrl+Alt+Del to lock the computer.
If a mode switch increases the amount of memory needed for the primary surface, the system may have to cannibalize memory allocations dedicated to CUDA applications, resulting in a crash of these applications.
Depending on how fragmented V-RAM is, how smart the display driver is about relocating and allocating resources, and whether GPU memory addressing is virtual or physical, this could be a problem for some CUDA apps.
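About the best an app can do today, as far as I can tell, is check the return code of every CUDA runtime call and bail out gracefully instead of crashing outright. Here’s a minimal sketch of that pattern — the `check` helper is my own invention, not part of the CUDA API, and the 64 MB buffer size is just an arbitrary example:

```cuda
// Defensive-allocation sketch: check every CUDA runtime call, since a
// display mode switch can cannibalize device memory out from under the app.
#include <cstdio>
#include <cuda_runtime.h>

// check(): hypothetical convenience helper, not part of the CUDA API.
static bool check(cudaError_t err, const char *what)
{
    if (err != cudaSuccess) {
        fprintf(stderr, "%s failed: %s\n", what, cudaGetErrorString(err));
        return false;
    }
    return true;
}

int main(void)
{
    float *devBuf = NULL;
    size_t bytes = 64 * 1024 * 1024;  // arbitrary 64 MB working buffer

    // An allocation that succeeded at startup...
    if (!check(cudaMalloc((void **)&devBuf, bytes), "cudaMalloc"))
        return 1;

    // ...may be gone after a mode switch, so later operations on it can
    // fail too. Checking each call lets the app report the error and exit
    // cleanly rather than crash.
    float host[256] = {0};
    if (!check(cudaMemcpy(devBuf, host, sizeof(host),
                          cudaMemcpyHostToDevice), "cudaMemcpy")) {
        cudaFree(devBuf);
        return 1;
    }

    cudaFree(devBuf);
    return 0;
}
```

It doesn’t save your data, of course — it just turns a hard crash into a clean failure you can at least report to the user.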