JoeJ said:
Yeah, the problem with nostalgia. But I think that's a big warning. If nostalgia becomes big, and it currently is (remasters, retro games, even retro HW), that's simply an indicator that we are past some peak and the arrow points down. It means we need to come up with something truly new, I'm afraid. Analyzing old masterpiece games likely won't help us here.
Game dev has become too inundated with commercial interests, and software development has become too hard. Back in the 80s, it was normal to ship an in-house OS plus a game on a single diskette; everyone could write an OS and a game. But now you are forced to depend on pre-existing OS APIs derived from mainframes, and you have no idea what hardware your game will run on (which SIMD extensions are available, which GPU vendor and model, etc.). If you want to play it safe and target old OpenGL versions, you're wasting CPU time. You have this immense complexity in the increasingly brittle software stack that simply didn't exist in the 80s and early 90s.
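Just to illustrate the kind of defensive plumbing this forces on you today: here's a minimal sketch of runtime SIMD dispatch in C, using the GCC/Clang builtins __builtin_cpu_init / __builtin_cpu_supports (x86 only; the scale_* kernels are hypothetical stand-ins, not from anything above). The 80s equivalent was: you knew the chip, you just wrote the code.

```c
/* Minimal sketch of runtime SIMD dispatch on x86 with GCC/Clang.
   scale_avx2 would normally use AVX2 intrinsics and live in a
   translation unit compiled with -mavx2; it's stubbed out here. */
#include <stddef.h>
#include <stdio.h>

static void scale_scalar(float *v, size_t n, float s) {
    for (size_t i = 0; i < n; i++)
        v[i] *= s;                      /* portable fallback path */
}

static void scale_avx2(float *v, size_t n, float s) {
    for (size_t i = 0; i < n; i++)      /* stub: real code uses intrinsics */
        v[i] *= s;
}

typedef void (*scale_fn)(float *, size_t, float);

static scale_fn pick_scale(void) {
    __builtin_cpu_init();               /* populate CPU feature info */
    if (__builtin_cpu_supports("avx2")) /* query the host at run time */
        return scale_avx2;
    return scale_scalar;                /* safe default */
}

int main(void) {
    float v[4] = {1, 2, 3, 4};
    pick_scale()(v, 4, 2.0f);
    printf("%f\n", v[0]);               /* prints 2.000000 */
    return 0;
}
```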
You have to put in a lot of effort to even get a basic prototype going. Software studios need more and more specialised experts for everything (think of the whole Docker-in-a-VM debacle, just to be able to compile something), and the efficiency of programmers is going down as a result. Software is becoming extremely expensive to develop, especially if it has to perform well, which means almost nobody has the means to just try stuff out. And that's why everyone who does have the means is risk-averse. Any software that communicates with either hardware or other software is basically impossible to maintain. And as long as it stays that way, we will not see a resurgence in software overall.
Once we make development simple again, people will be able to try out new things with much less effort, a lower learning barrier, and in turn, less financial risk.
JoeJ said:
Shitty Atari 2600 games were great at spurring the player's imagination. You came up with your own story while looking at abstract boxes moving on the screen. Sadly we lost this while improving graphics and overall realism. We show it all, so there is no point left to imagine while playing. That's a huge loss. Scorn is maybe not the perfect game, but it manages to bring imagination back. You do not understand the world you see, so your imagination works hard to make sense of it. That's awesome.
Old games had really strong atmosphere but crappy mechanics, especially the early 3D games. Maybe those games were built the other way around, centred on a story or world, while modern games are almost exclusively centred on their mechanics. Old games like Gothic had an intent and entered a dialogue with the player (what you called spurring the imagination), while modern games seem to just drop the player into a world where all he really experiences is the mechanics. Maybe the mechanics have far eclipsed the rest of the modern game, and the balance is messed up.

Even though Gothic had a shitty camera and combat system, the enemies weren't even that great, and the economy was unbalanced, it is still a great world that somehow feels alive. Meanwhile, other games treat the game world and characters as mere background and put moment-to-moment action in the foreground. But moment-to-moment action is not memorable, and it is not emotionally relatable. It's like old games gave you a world or story to experience, while modern games give you a core gameplay loop and keep a world around only as a vehicle for that loop. And then there are the weird AAA games that are basically interactive movies, where player action is mere filler between cutscenes.
For example, RPGs are basically all derived from DnD, which was always about making a fantasy world accessible for adventuring. But modern RPGs, and especially MMORPGs, are now some weird kind of action / battle / power-up simulator that just happens to be set in a fantasy world. They try to make that world appealing, but they cannot hide the fact that they are no longer about the world and its adventures; they are about their core gameplay loop, which is slaughter, loot, power up, repeat. Virtually all quests and storylines are just a facade glossing over the uninspired gameplay loop.
JoeJ said:
I cannot imagine what it means to build your own CPU / GPU. But if FPGAs make this possible for a single guy, that's pretty mind-blowing.
Well, it's complicated, and mostly like writing a hand-optimised assembly program. You have a very limited amount of logic you can put on an FPGA compared to regular hardwired chips, though recent FPGA lines have increased the available logic capacity by an order of magnitude. I'm mostly waiting for them to become a bit cheaper, and then I'll have to write a program that can output tweakable versions of the same overall circuit, so it can target different FPGAs with different logic capacities. I often think about circuits when I'm in bed, so I think I'm familiar enough with the domain to produce something acceptable. You can get a very primitive CPU going without too much effort, but if you want pipelining, a moderately complex instruction set, and so on, you really have to put your nose to the grindstone.
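For a taste of what "a program that outputs tweakable versions of the same circuit" could look like, here is a minimal sketch in C that prints a Verilog adder at whatever datapath width the target part can afford. The adder and the idea of scaling by width are purely illustrative assumptions; a real flow would more likely use Verilog parameters or a proper HDL generator.

```c
/* Toy sketch of "one program, many FPGA sizes": emit the same
   circuit (a ripple-carry adder) at a configurable bit width.
   Illustrative only; real designs would use HDL parameters. */
#include <stdio.h>

static void emit_adder(FILE *out, int width) {
    fprintf(out,
        "module adder_%d (\n"
        "  input  [%d:0] a,\n"
        "  input  [%d:0] b,\n"
        "  output [%d:0] sum\n"        /* one extra bit for the carry out */
        ");\n"
        "  assign sum = a + b;  // synthesises to a carry chain\n"
        "endmodule\n\n",
        width, width - 1, width - 1, width);
}

int main(void) {
    emit_adder(stdout, 16);   /* small, cheap part: 16-bit datapath */
    emit_adder(stdout, 64);   /* bigger part: same circuit, 64 bits wide */
    return 0;
}
```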
JoeJ said:
I think flexibility is much more important than raw performance. But that's not ideology; it's because I want performance.
Well, it's a more or less known fact that current CPUs and current programming languages don't fit together properly. CPUs want to chew through large batches of tightly packed sequential data in branchless loops; current programming languages want indirection, subroutines, and objects. CPUs want structs of arrays, not arrays of structs. And so on.
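To make the mismatch concrete, here is the classic layout example in C (the Particle types are hypothetical): summing one field from an array of structs versus a struct of arrays. Same data, same loop; only the layout changes, and only the second version hands the CPU the dense, branchless stream it wants.

```c
/* AoS vs. SoA, the mismatch in miniature. The AoS loop drags unused
   y/z bytes through the cache; the SoA loop reads one dense stream
   that the prefetcher and the auto-vectoriser both love. */
#include <stddef.h>
#include <stdio.h>

#define N 100000

/* Array of structs: how most languages nudge you to model data. */
struct ParticleAoS { float x, y, z; };
static struct ParticleAoS aos[N];

/* Struct of arrays: how the CPU wants the data laid out. */
struct ParticlesSoA { float x[N], y[N], z[N]; };
static struct ParticlesSoA soa;

static float sum_x_aos(void) {
    float s = 0.0f;
    for (size_t i = 0; i < N; i++)
        s += aos[i].x;   /* 12-byte stride: most of each cache line wasted */
    return s;
}

static float sum_x_soa(void) {
    float s = 0.0f;
    for (size_t i = 0; i < N; i++)
        s += soa.x[i];   /* contiguous, branchless, SIMD-friendly */
    return s;
}

int main(void) {
    for (size_t i = 0; i < N; i++)
        aos[i].x = soa.x[i] = 1.0f;
    printf("%f %f\n", sum_x_aos(), sum_x_soa());  /* same result, very different memory traffic */
    return 0;
}
```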
The discovery phase of what computing is and how CPUs work is over. We have pipelining, caches, HW prefetching, branch prediction, out-of-order execution, all that stuff. And we have written a lot of software, too. We know all the major features we need to express computation. Now we should scrap all the junk and start over with a holistic design that incorporates the lessons we have learned, without the mistakes that keep propagating through languages and ISAs.

It's astonishing how much sophistication modern CPUs pour into their JIT translation to microcode (which is what the chip actually executes; the ISA is really just a programming language / API at this point), trying to parse semantic patterns out of x86 assembly so they can activate hardware acceleration for a given instruction sequence. We could relieve the CPU of that burden and directly tell it the semantic pattern of what we want to achieve, and all that circuitry could be spent on more ALUs or similar, improving the throughput of a dynamic, GPU-like SIMD mode.

And if all the logic circuitry and cache space of on-chip GPUs were directly available to the regular CPU as well, you could boost general computation performance by a lot and then do graphics in software, using that GPU-like SIMD mode instead of a dedicated, separate part of the chip that is dead weight whenever we're not doing graphics. The achievable performance should be roughly the same for graphics, but massively higher for general computation. And that's before even talking about actual multi-core.
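One small, concrete taste of the "hardware has to rediscover intent" problem (my example, not anything claimed above): a reduction written the natural way is a single serial dependency chain, and no amount of out-of-order machinery can overlap its adds. Restating the same computation with independent accumulators hands the hardware the parallelism it would otherwise have to guess at; for floats, compilers generally only do this rewrite themselves under -ffast-math, because it reorders FP operations.

```c
/* The same reduction, written two ways. The first is one serial
   dependency chain: each add waits on the previous result. The
   second spells out the independence with four accumulators, so
   the core's multiple FP units can actually be kept busy. */
#include <stddef.h>

float sum_serial(const float *v, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += v[i];                      /* latency-bound chain */
    return s;
}

float sum_parallel(const float *v, size_t n) {
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {        /* four independent chains */
        s0 += v[i + 0];
        s1 += v[i + 1];
        s2 += v[i + 2];
        s3 += v[i + 3];
    }
    for (; i < n; i++)                  /* remainder */
        s0 += v[i];
    return (s0 + s1) + (s2 + s3);
}
```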
JoeJ said:
Initially, HW acceleration made things easier. But nowadays it only makes things harder. We are past the fixed-function era. We need flexibility first, so we can make better software instead of relying on better HW.
Definitely. Graphics is just a form of massively parallel general compute; all we want to do is put pixels on the screen. GPUs would be obsolete if CPUs stopped being so strictly serial. And CPUs would be cheaper and more efficient if they didn't need complex circuitry just to interface with an ISA that hasn't actually reflected the hardware for many years now, with the mismatch only growing.
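Here's what "graphics is just parallel compute" looks like at its absolute smallest, a hypothetical gradient fill in C: a per-pixel loop with zero dependencies between iterations. Any machine with enough independent lanes can run it; nothing about it inherently needs a separate GPU.

```c
/* Embarrassingly parallel pixel shading: every iteration is
   independent, so the loop can be split across SIMD lanes or
   cores at will. No fixed-function hardware required. */
#include <stdint.h>
#include <stdio.h>

#define W 640
#define H 480
static uint32_t framebuffer[W * H];          /* 0xRRGGBB pixels */

void shade_gradient(void) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            uint32_t r = (uint32_t)(x * 255 / (W - 1));
            uint32_t g = (uint32_t)(y * 255 / (H - 1));
            framebuffer[y * W + x] = (r << 16) | (g << 8);
        }
}

int main(void) {
    shade_gradient();
    /* bottom-right pixel: full red + full green = ffff00 */
    printf("%06x\n", (unsigned)framebuffer[W * H - 1]);
    return 0;
}
```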
JoeJ said:
Maybe it's even the future. Idk, but maybe AMD's new ‘AI CPU’ features are FPGA based? It's Xilinx stuff, afaik. So maybe we could reprogram it, to turn the AI crap into something actually useful?
I doubt it, because FPGA circuitry is inefficient. I would expect them to hardwire certain network topologies for ANNs, with a hardwired activation function. They could probably fit at least 10 times as much circuitry in there, likely far more, by hardwiring instead of using FPGA fabric. And it could run at much higher clock frequencies.