nVidia CUDA w/ PhysX support

    • #6765
      Anonymous
      Inactive

      Don’t know how long this has been out in the public domain, but I just noticed it today.

      On the bottom of the nVidia/PhysX page is a note:

      "*Note: NVIDIA will deploy PhysX on CUDA-enabled GPUs later this year. The exact models and availability will be announced in the near future."

      http://www.nvidia.com/object/nvidia_physx.html

      If I understand this right (with a little help from the all-knowing Wikipedia):

      CUDA ("Compute Unified Device Architecture"), is a GPGPU technology that allows a programmer to use the C programming language to code algorithms for execution on the graphics processing unit (GPU).

      By opening up the architecture, CUDA provides developers with both a low-level, deterministic API and a high-level API for repeatable access to the hardware, which is necessary to develop essential high-level programming tools such as compilers, debuggers, math libraries, and application platforms.

      So CUDA is adding functionality to its API to let PhysX run on the GPU, on the GeForce 8 or GeForce 9 families.
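
      Roughly, here’s what the "C on the GPU" part looks like in practice: a toy CUDA kernel that adds two arrays, one thread per element. The names and sizes are made up for illustration, and this isn’t anything to do with PhysX itself:

      #include <cuda_runtime.h>
      #include <stdio.h>

      // One thread per array element; each thread computes a single sum.
      __global__ void addArrays(const float* a, const float* b, float* out, int n)
      {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n)
              out[i] = a[i] + b[i];
      }

      int main()
      {
          const int n = 1 << 20;
          const size_t bytes = n * sizeof(float);

          // Host-side data.
          float* hostA = (float*)malloc(bytes);
          float* hostB = (float*)malloc(bytes);
          float* hostOut = (float*)malloc(bytes);
          for (int i = 0; i < n; ++i) { hostA[i] = 1.0f; hostB[i] = 2.0f; }

          // Device-side copies.
          float *devA, *devB, *devOut;
          cudaMalloc((void**)&devA, bytes);
          cudaMalloc((void**)&devB, bytes);
          cudaMalloc((void**)&devOut, bytes);
          cudaMemcpy(devA, hostA, bytes, cudaMemcpyHostToDevice);
          cudaMemcpy(devB, hostB, bytes, cudaMemcpyHostToDevice);

          // Launch enough blocks of 256 threads to cover all n elements.
          int threads = 256;
          int blocks = (n + threads - 1) / threads;
          addArrays<<<blocks, threads>>>(devA, devB, devOut, n);

          cudaMemcpy(hostOut, devOut, bytes, cudaMemcpyDeviceToHost);
          printf("out[0] = %f\n", hostOut[0]);   // expect 3.0

          cudaFree(devA); cudaFree(devB); cudaFree(devOut);
          free(hostA); free(hostB); free(hostOut);
          return 0;
      }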

      I wonder how Intel / ATI / Havok are going to respond to this one…

      B.

    • #41413
      Anonymous
      Inactive

      CUDA is definitely one of the coolest technologies to pop around in a long time. It’ll be really interesting to see the ingenious ways the power of the GPU can be harnessed to speed up games, and indeed computing at large.

      There’s one thing about it that I find slightly worrying, though. From the CUDA docs:

      3.5 Mode Switches

      GPUs dedicate some DRAM memory to the so-called primary surface, which is used to refresh the display device whose output is viewed by the user. When users initiate a mode switch of the display by changing the resolution or bit depth of the display (using NVIDIA control panel or the Display control panel on Windows), the amount of memory needed for the primary surface changes. For example, if the user changes the display resolution from 1280x1024x32-bit to 1600x1200x32-bit, the system must dedicate 7.68 MB to the primary surface rather than 5.24 MB. (Full-screen graphics applications running with anti-aliasing enabled may require much more display memory for the primary surface.) On Windows, other events that may initiate display mode switches include launching a full-screen DirectX application, hitting Alt+Tab to task switch away from a full-screen DirectX application, or hitting Ctrl+Alt+Del to lock the computer.

      If a mode switch increases the amount of memory needed for the primary surface, the system may have to cannibalize memory allocations dedicated to CUDA applications, resulting in a crash of these applications.

      Depending on how fragmented V-RAM is, how smart the display driver is with relocating and allocating resources, and whether GPU memory addressing is virtual or real – this could be a problem for some CUDA apps.
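
      If I were writing a CUDA app against this, I’d at least query how much V-RAM is actually free and treat every allocation as something that can fail, rather than assuming the memory will still be there after a mode switch. A rough sketch of what I mean (the 64 MB buffer and the 16 MB headroom figure are completely arbitrary, just to illustrate the idea):

      #include <cuda_runtime.h>
      #include <stdio.h>

      int main()
      {
          // Ask the runtime how much device memory is free right now.
          size_t freeBytes = 0, totalBytes = 0;
          cudaMemGetInfo(&freeBytes, &totalBytes);
          printf("free: %u MB of %u MB\n",
                 (unsigned)(freeBytes >> 20), (unsigned)(totalBytes >> 20));

          // Leave some headroom for the primary surface growing after a
          // mode switch (16 MB is an arbitrary illustrative margin).
          const size_t wanted = 64u * 1024 * 1024;
          const size_t headroom = 16u * 1024 * 1024;
          if (wanted + headroom > freeBytes) {
              fprintf(stderr, "not enough V-RAM headroom, falling back\n");
              return 1;
          }

          // Never assume an allocation succeeds.
          void* devBuffer = 0;
          cudaError_t err = cudaMalloc(&devBuffer, wanted);
          if (err != cudaSuccess) {
              fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
              return 1;
          }

          /* ... launch kernels that use devBuffer ... */

          cudaFree(devBuffer);
          return 0;
      }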

    • #41415
      Anonymous
      Inactive

      "Depending on how fragmented V-RAM is, how smart the display driver is with relocating and allocating resources, and whether GPU memory addressing is virtual or real – this could be a problem for some CUDA apps."

      This shouldn’t be a problem for professional game developers; we don’t allocate continuously (i.e. new/delete is banned in games), you use overloaded versions that allocate out of your own previously allocated heap. So as long as you don’t over-allocate when creating this heap you should be fine. Also, it’s highly unlikely you will be changing resolution mid-game??
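
      Something like this, applied to device memory: grab one block from CUDA up front and hand out pieces of it yourself. The names and the 128 MB figure are made up, it’s just a sketch of the idea:

      #include <cuda_runtime.h>
      #include <stdio.h>

      // A trivial bump allocator over a single device allocation made at
      // startup; individual "allocations" are just offsets into that block.
      struct GpuArena {
          unsigned char* base;   // start of the big cudaMalloc'd block
          size_t capacity;       // bytes reserved up front
          size_t used;           // bytes handed out so far
      };

      static bool arenaInit(GpuArena* arena, size_t capacity)
      {
          arena->capacity = capacity;
          arena->used = 0;
          return cudaMalloc((void**)&arena->base, capacity) == cudaSuccess;
      }

      // Hand out a 256-byte-aligned slice; no per-call cudaMalloc.
      static void* arenaAlloc(GpuArena* arena, size_t bytes)
      {
          size_t offset = (arena->used + 255) & ~(size_t)255;
          if (offset + bytes > arena->capacity)
              return 0;   // over-allocated the heap we reserved
          arena->used = offset + bytes;
          return arena->base + offset;
      }

      static void arenaDestroy(GpuArena* arena)
      {
          cudaFree(arena->base);
          arena->base = 0;
      }

      int main()
      {
          GpuArena arena;
          if (!arenaInit(&arena, 128u * 1024 * 1024)) {
              fprintf(stderr, "could not reserve the GPU heap at startup\n");
              return 1;
          }

          // Sub-allocate everything the game needs out of the one block.
          void* positions  = arenaAlloc(&arena, 65536 * sizeof(float));
          void* velocities = arenaAlloc(&arena, 65536 * sizeof(float));
          printf("used %u KB of the arena (%p, %p)\n",
                 (unsigned)(arena.used >> 10), positions, velocities);

          arenaDestroy(&arena);
          return 0;
      }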

    • #41430
      Anonymous
      Inactive

      "This shouldn’t be a problem for professional game developers; we don’t allocate continuously (i.e. new/delete is banned in games), you use overloaded versions that allocate out of your own previously allocated heap. So as long as you don’t over-allocate when creating this heap you should be fine."

      I do understand that most (if not all) commercial game engines implement their own memory management and create their own fixed-size heap at startup. The thing is, that’s regular system memory, and you’re pretty much guaranteed that Windows or whatever console kernel you’re running on will not ‘seize’ it off you after you’ve allocated it. Your OS kernel might move your memory to a page file if RAM is low or the app has been idle, but at least you’ll always be able to access it without crashing your application.

      The point I’m making is that this spec from NVIDIA seems to turn that normal convention on its head: you now no longer have any guarantees about the memory you allocate on the GPU. The display driver is free to do what it pleases and release whatever memory it wants, bringing down your application in the process. Also, the ‘allocate a big chunk of memory’ approach would more likely make the problem worse in this case, making it harder to shuffle blocks of memory about and rearrange them to meet the needs of the bigger primary surface.

      It probably won’t be a problem in most cases, but it’s still something to be wary of. Any potential source of instability in an application is never a welcome thing! We have enough of them already as it is! :P
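
      The only halfway-safe pattern I can think of is to treat system memory as the authoritative copy and the GPU copy as disposable. A rough sketch (made-up names, and it assumes the failure actually comes back as an error code instead of the outright crash the docs describe):

      #include <cuda_runtime.h>
      #include <stdio.h>
      #include <stdlib.h>

      // Keep the authoritative copy in system memory (which the OS won't
      // yank away) and treat the device copy as something that may have to
      // be rebuilt from scratch.
      struct MirroredBuffer {
          float* host;     // master copy in system memory
          float* device;   // may need re-creating after a mode switch
          size_t count;
      };

      static bool uploadToDevice(MirroredBuffer* buf)
      {
          size_t bytes = buf->count * sizeof(float);
          if (buf->device == 0 &&
              cudaMalloc((void**)&buf->device, bytes) != cudaSuccess)
              return false;
          return cudaMemcpy(buf->device, buf->host, bytes,
                            cudaMemcpyHostToDevice) == cudaSuccess;
      }

      static bool rebuildDeviceCopy(MirroredBuffer* buf)
      {
          // Drop whatever the driver left us and re-upload from the host copy.
          cudaFree(buf->device);
          buf->device = 0;
          return uploadToDevice(buf);
      }

      int main()
      {
          MirroredBuffer buf;
          buf.count = 65536;
          buf.host = (float*)calloc(buf.count, sizeof(float));
          buf.device = 0;

          if (!uploadToDevice(&buf) && !rebuildDeviceCopy(&buf))
              fprintf(stderr, "device copy unavailable, run on the CPU instead\n");

          cudaFree(buf.device);
          free(buf.host);
          return 0;
      }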

      "Also, it’s highly unlikely you will be changing resolution mid-game??"

      On consoles, no, it is never going to happen. On Windows, yes, it can always happen, even if your application doesn’t give the user the option to switch modes. Task switching is one way it can happen. Vista’s lock / login screen (or whatever it’s called) can also cause a mode switch, and other misbehaving applications can trigger it as well.
