Your Existing Code: Throw it Away?

Welcome to our forums. These forums were active from 2003-2014. We have now decided to close them down, but will leave them here as an archive.


This topic contains 9 replies, has 6 voices, and was last updated by feral 12 years, 2 months ago.

  • #4459

    gizmo
    Participant

    The issue of writing game code for multi-core processors has been touched upon here previously, but Gabe Newell's comments in this article REALLY surprised me…

    “Technologically, I think every game developer should be terrified of the next generation of processors. Your existing code, you can just throw it away. It’s not going to be helpful in creating next generation game titles.”
    Terrified? It sounds a bit extreme, but this is the guy who brought us Half-Life, so I guess he kinda knows what he's talking about…

    “The amount of time it takes to get a good multicore engine running, the Xbox 360 might not even be on the market any longer. That should scare the crap out of everybody.”
    Hrm…

    His comments regarding hardware manufacturers' claims about performance, especially “‘Oh, the PS3 is going to be twice as fast as an Xbox 360’ are totally meaningless,” surprised me also. I've never heard anyone in the industry be so frank…

    Link

  • #23794

    peter_b
    Participant

    Well, all I can say is I tried once to write a true multi-processor piece of code using C and MPI on the UCC supercomputer (contains a feck-load of processors), and writing a multi-core Fibonacci sequence generator was a frickin' nightmare. And that's an algorithm you'd bang together in about 20 lines of normal code, one you probably learned in your first two months of programming.

    I can imagine a state-of-the-art 3D engine is going to scale up into a seriously hard, challenging piece of coding. Although I have no doubt they will rise to the challenge; they have to.
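    For a flavour of why even Fibonacci gets awkward, here's a minimal sketch in C++ with std::async (not the C/MPI setup described above, so purely illustrative): the moment you parallelise the recursion you have to pick a serial cutoff, or thread-creation overhead swamps the arithmetic.

    ```cpp
    #include <future>

    // Naive task-parallel Fibonacci. Below the cutoff we recurse serially;
    // above it, one branch is farmed out to a separate task. Choosing that
    // cutoff is exactly the sort of tuning a serial version never needs.
    long fib(int n) {
        if (n < 2) return n;
        if (n < 20) return fib(n - 1) + fib(n - 2);   // serial cutoff
        std::future<long> left = std::async(std::launch::async, fib, n - 1);
        long right = fib(n - 2);
        return left.get() + right;
    }
    ```

    Twenty-odd lines, and it still ignores the real MPI-style problems of distributing work across machines with no shared memory.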

  • #23802

    feral
    Participant

    This isn't that surprising. Well, maybe the magnitude of his statements is, but not the direction.
    This multicore issue isn't limited to video games; the entire computer industry is probably heading for more processing units, as opposed to faster ones. (Again, not surprising: it's getting really hard to increase clock speed.)

    Multi-process programming is very different from single-core, and concurrency is hard to do right.

    A point of view might be that for coders in countries like Ireland this is a good thing…

  • #24179

    y2kprawn
    Participant

    I'm a big fan of following the CPU market in general, and I'm studying programming for games, so perhaps the following comment is a little thick, but sure, I'm gonna try it anyway and we shall see what comes out.
    Perhaps write a central part of your engine, like an OS for your game, which finds a core and sends whatever task you need to schedule next onto it. In essence, write all code in the usual fashion, and write one controlling piece of code to essentially "dole out" the CPUs' power to your game code, if you know what I mean. Then the code is pretty usual linear stuff, but the "OS" part is sending it wherever the task can be done best.
    My worry is that if you dedicate a core to any one problem, perhaps that capacity would be better spent mixed in with another core's work on the CPU; using a core for, well, let's say physics, makes it redundant for other tasks which may need it. So you may never utilise all the power of these wonderful CPUs.
    Dunno, it's late, I might be raving, but at least I'm doing it in the right place.
    Any thoughts ?
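    What's described above is basically a task scheduler, i.e. a thread pool. A bare-bones sketch in C++, with illustrative names (TaskQueue isn't from any real engine):

    ```cpp
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Worker threads pull jobs off a shared queue: the "controlling piece
    // of code" doling CPU power out to ordinary linear game code.
    class TaskQueue {
    public:
        explicit TaskQueue(unsigned workers) {
            for (unsigned i = 0; i < workers; ++i)
                threads_.emplace_back([this] { run(); });
        }
        ~TaskQueue() {  // drains remaining jobs, then joins the workers
            { std::lock_guard<std::mutex> lock(m_); done_ = true; }
            cv_.notify_all();
            for (auto& t : threads_) t.join();
        }
        void submit(std::function<void()> job) {
            { std::lock_guard<std::mutex> lock(m_); jobs_.push(std::move(job)); }
            cv_.notify_one();
        }
    private:
        void run() {
            for (;;) {
                std::function<void()> job;
                {
                    std::unique_lock<std::mutex> lock(m_);
                    cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                    if (done_ && jobs_.empty()) return;
                    job = std::move(jobs_.front());
                    jobs_.pop();
                }
                job();  // run outside the lock so workers stay parallel
            }
        }
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> jobs_;
        std::vector<std::thread> threads_;
        bool done_ = false;
    };
    ```

    The queue itself is the easy part; the hard part is deciding which jobs can safely run at the same time.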

  • #24180

    Steph
    Participant

    I may make a complete tit of myself with the following, but…

    I can't help but notice that perhaps SGI had it right first time. Considering the main reason for next-gen to go multi-CPU is to push more and better-looking pixels out of the box and onto the tube, I kinda see an analogy with the development of high-end image processing software (movie-res stuff) that's been engineered to take advantage of dual- (or more) CPU SGI boxes for an awful long time…

    However, isn't it paradoxical that multi-CPU SGI boxes have been on the wane over the past 2/3 years, losing favour with image software developers and the special effects industry to muscly single-CPU IBM workstations, when 'mainstream' consoles are now apparently progressing the other way?

    I’ve long postulated that the movie/games convergence debate (well, from the POV of the VG industry) was not properly focused:
    (i) it’s nothing to do with the content itself,
    (ii) it’s more to do with the content development model (including the business angle) and, especially,
    (iii) it’s all to do with the tools and the skillsets.

    If you’re a developer and you want to get ahead of the pack with one or more programmers experienced with engineering multi-core software, you could do worse than look at previous Discreet, Avid, Alias|W or (back in the day) 5D employees :wink:

  • #24248

    feral
    Participant

    “If you're a developer and you want to get ahead of the pack with one or more programmers experienced with engineering multi-core software, you could do worse than look at previous Discreet, Avid, Alias|W or (back in the day) 5D employees.”

    The multi processing skills may carry over, I really can’t speak for that.

    The techniques used in movies (non-real-time rendering to make it look as pretty/real as possible) versus in games (crazy hacks and bodges just to make it run fast enough) have traditionally been quite different.

    It’s always changing, but I’d say the skills required to write a real time renderer (game engine) versus non real time are still very different.

  • #24251

    feral
    Participant

    “Dunno, it's late, I might be raving, but at least I'm doing it in the right place.
    Any thoughts?”

    I think it’s probably more complicated than that.

    Imagine this sequence of instructions (shown as pseudocode) gets executed on your Xbox 360:

    player.health = 100;
    A) player.decreaseHealth(10);
    B) if(player.health >= 100){exitLevel(PlayerHasWon);}

    What happens if instruction A gets sent to Core 1, and instruction B gets sent to Core 2? (as doled out by your ‘OS thing’ (scheduler)).

    One of two things happens:
    1) Core 2 uses its locally cached copy of the health variable, in which case the player wins the level when they shouldn't. (This is called a dirty read.)

    2) Core 2 waits for Core 1 to finish, so Core 1 can execute A; then Core 1 updates the health variable in shared memory, and then Core 2 reads it, getting the correct value.

    1) is incorrect behaviour.
    2) is slow.

    So you try to solve the problem, maybe by having the scheduler do dependency analysis at run time. (Remember, the above are instructions, not code, so you can't analyse statically.) For larger real-world examples, this type of analysis doesn't work so well.

    So programmers instead try and, er, arrange their code so that these concurrency problems don't occur, and so that bigger chunks of code can be 'doled out' without causing problems. (Maybe you arrange things so that your graphics engine only reads variables, and is only allowed to read them when they are in some good state, as marked somewhere else, etc…)

    But arranging the code that way can again be hard to do in such a way that the code doesn't interfere with itself and behaves predictably, while maintaining efficiency.

    There is a whole range of different techniques used to tackle this, and other problems arising from concurrency/parallelism. (One simple technique for the above might be to mark A and B as an atomic operation via some programming construct.)

    That's not meant to be an accurate or real-world explanation, but hopefully it gives a flavour of the problem? Writing the 'OS thing', or scheduler, is non-trivial.
    Again, this concurrency stuff has been around in various forms for a long while (such as in the very chips you follow, at a low level… the long pipeline in any modern CPU is an example of concurrency, and compilers carefully arrange instructions to try to maximise throughput).

    It’s just new to games programming.
    But it is a very well known problem, with many different techniques aimed at solving it.
    At a very high level, what you are describing could be grids :)
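    To make the 'mark A and B as atomic' technique concrete, here's a sketch in C++ (Player and damageAndCheckWin are invented names, not real engine code): holding one lock across both instructions means no other core can observe, or act on, the state in between, so the dirty read can't happen.

    ```cpp
    #include <mutex>

    // The A/B example with both instructions fused into one critical
    // section. The mutex guarantees B reads the value A just wrote,
    // never a stale locally cached copy.
    struct Player {
        int health = 100;
        std::mutex m;
    };

    bool damageAndCheckWin(Player& p, int amount) {
        std::lock_guard<std::mutex> lock(p.m);
        p.health -= amount;        // A) player.decreaseHealth(amount);
        return p.health >= 100;    // B) the win check, now safely ordered after A
    }
    ```

    The cost is exactly the trade-off described above: while one core holds the lock, any other core wanting the player's state must wait.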

  • #24280

    Steph
    Participant

    “The techniques used in movies (non real time rendering to make it look as pretty/real as possible) versus in games (crazy hacks and bodges to just make it run fast enough) have traditionally been quite different.”

    Sorry – my bad(ly-phrased), so I'll clarify: I was on about programming techniques used to develop post-processing software that makes use of multi-processor (mostly RISC) specialist boxes.

    Such software is not used for the non-RT final render, but it is used to work in RT on, say, the colour/tone of a movie frame. In such instances, because of the frame size (20k by 20k pixels these days), the shrinking post-production timescales and, generally, the artist's workflow, the multi-CPU box and the post-processing software optimised for it must be able to render the displayed portion of the frame with the modified colourspace (since they don't yet make 20k by 20k monitors) in RT, or as close to RT as possible.

    Traditionally, most of the issues with the above have been with RAM and bandwidth (again because of frame/file size), but there is constant code optimisation taking place to take advantage of multiple CPUs, and the issue of parallelism/concurrency is old hat (well, relatively speaking, of course) in the particular programming environment to which I was referring.
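    That frame-grading workload is a textbook data-parallel job, quite unlike the shared-state game-logic case: each band of pixels is independent, so no locking is needed. A sketch in C++ (applyGain is an invented stand-in for a trivial colour adjustment):

    ```cpp
    #include <cstddef>
    #include <cstdint>
    #include <thread>
    #include <vector>

    // Apply a per-pixel gain to a large frame by splitting it into
    // horizontal bands, one per thread. The bands share no state, so
    // the threads never need to synchronise with each other.
    void applyGain(std::vector<uint8_t>& frame, float gain, unsigned nThreads) {
        std::vector<std::thread> workers;
        std::size_t band = frame.size() / nThreads;
        for (unsigned t = 0; t < nThreads; ++t) {
            std::size_t begin = t * band;
            std::size_t end = (t + 1 == nThreads) ? frame.size() : begin + band;
            workers.emplace_back([&frame, gain, begin, end] {
                for (std::size_t i = begin; i < end; ++i) {
                    float v = frame[i] * gain;
                    frame[i] = v > 255.0f ? 255 : static_cast<uint8_t>(v);  // clamp
                }
            });
        }
        for (auto& w : workers) w.join();
    }
    ```

    With independent bands the only real limits are, as noted above, RAM and bandwidth.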

    “It's always changing, but I'd say the skills required to write a real time renderer (game engine) versus non real time are still very different.”

    Well, I don’t know enough to comment so I’d only make more of a tit of myself, and I’ll take your word for it :D

  • #24643

    philippe_j
    Participant

    feral:
    I can't help but ask.
    All these problems you are describing sound very much like the problems you would get in a database environment, don't they? I forget the buzzwords, as this was five years ago, but I remember stuff about lock concurrency and how to handle it…
    Maybe those database programming skills could come in handy after all :roll:

    Philippe

  • #25816

    feral
    Participant

    philippe_j:
    I imagine a lot of the concurrency issues found in any specific domain are general issues with concurrent execution of code, and well studied – albeit in different forms – in areas such as databases.

    I was trying to make a point about concurrency in general, and to explain why it isn't just a simple matter to solve it with a fairly straightforward scheduler.

    I don't know exactly what sort of challenges people are facing on next-gen hardware, but I'm sure that traditional approaches to concurrency issues from other domains (databases, as you point out) are the first place people will look.

The forum ‘Programming’ is closed to new topics and replies.