
The Next Huge Leap in Computing Power?

Started by April 09, 2014 04:21 AM
43 comments, last by Prefect 10 years, 4 months ago

Hi,

I see that I was indeed not clear enough, so...


2) Virtual transistors - Why can't coding emulate the digital transistor better than it does?

> What does this mean? In any case, you're far better off emulating it in software, due to spatial constraints and acquisition complexity in hardware, not to mention drivers supporting it.

I mean a software transistor or memory transistor instead of a hardware transistor. Could this become a way of overcoming the inconsistencies of hardware across devices? Perhaps all those complications could be avoided with some kind of binary transistor: memory locations interacting directly with one another (not software!) to form another kind of transistor in place of hardware transistor circuitry.
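For what it's worth, the "software transistor" idea can at least be sketched literally. Below is a toy Python model (my own illustration, not anything from real hardware): a transistor is just a function that passes a signal when its gate is on, and a CMOS-style NAND is built from two of them, from which the other gates follow.

```python
def transistor(gate, signal):
    """A software 'transistor': passes the signal only while the gate is on."""
    return signal if gate else 0

def nand(a, b):
    # Two 'transistors' in series pull the output low, as in a CMOS NAND.
    pulled_low = transistor(a, transistor(b, 1))
    return 0 if pulled_low else 1

# NAND is universal, so every other gate can be built from it.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b), and_(a, b), or_(a, b))
```

Of course, each simulated gate here costs many real hardware transistors to execute, which is the basic objection to the idea.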


3) 3D light processors (3 dimensions instead of the 2 dimensional chip and using light in the processing in conjunction with electricity, such as micro-LEDs)

> Chips are already 3-dimensional, so I also don't understand what you mean here?

Chips are essentially 2-dimensional, whereas cubes are 3-dimensional (XYZ). Cooling of a cube could be handled with cooling tubes running right through it. Instead of 2-dimensional chip circuitry, transistors and other components could interact with transistors above or below them in the cube, as well as adjacent to them on the same level. In mathematical terms, this could multiply the number of possible interconnections enormously, far beyond simply cubing the transistor count.
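The connectivity gain from stacking can be made concrete with a toy calculation (a rough geometric illustration, not a model of any real chip): an interior cell in a 2D grid has 4 orthogonal neighbours versus 6 in 3D, and the number of cells reachable within a fixed radius grows as r² versus r³.

```python
def neighbors(dim):
    """Orthogonal neighbour count of an interior cell in a dim-D grid."""
    return 2 * dim

def cells_within(r, dim):
    """Cells in a dim-dimensional cube of side (2r + 1) around a cell."""
    return (2 * r + 1) ** dim

print(neighbors(2), neighbors(3))              # 4 6
print(cells_within(5, 2), cells_within(5, 3))  # 121 1331
```

So the win is polynomial in reachable neighbourhood size, not "exponential" - which is still substantial, and is one reason die stacking and 3D NAND exist.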

> Using LEDs to process light is probably the worst possible idea, because 1) it's not consistent over time due to degradation, 2) it's analog technology, which introduces all sorts of problems, such as data acquisition and control, 3) it'll take up far more space than a digital processing unit, 4) it won't be flexible, and 5) it'll be slow because LEDs require multiple microseconds before they "stabilise" their emission of light.

Point by point: 1) Degradation doesn't matter in ON/OFF operations, or if the LED is only the original light source. 2) The signal could be manipulated digitally, and sensors can detect light. 3) The system could make the extra space worth it by increasing the number of processing combinations exponentially. 4) Flexibility comes from that exponential increase in processing power. 5) Slowness doesn't matter if the LED is only the original light source.

I envision light transistors some day, made by the interaction of light. Changes in light can exist in regions far smaller than today's digital transistors. Somehow that must be exploited.


4) What is the probability that several emerging, different technologies will compete for many years, each promoted by competing companies?

> Ever heard of AMD, nVidia, Intel, etc.?

AMD, nVidia, Intel, etc. are fundamentally the same. I mean DIFFERENT technologies, not variations of the same one - as different as electric-motor-propelled cars are from internal-combustion-propelled ones. That is the kind of different I am talking about.


6) Will a performance leap enable more inefficient coding to be competitive?

That's an interesting question. Comparing code on embedded systems (where resources are scarce) to code for front-end applications yields some key differences. Embedded code is far more compact, filled with all sorts of hacks and weird optimisations, because the code only targets a single instruction set so you can get away with it. Front-end code is always more structured and favours clean and comprehensible code over speed.
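A small illustration of that trade-off (a hypothetical example of my own, not from any real codebase): two ways to count set bits in an integer, one written front-end style for clarity and one using the kind of bit trick common in embedded code.

```python
def popcount_readable(x):
    """'Front-end' style: examine each bit in turn. Clear, but O(bit width)."""
    count = 0
    while x:
        count += x & 1
        x >>= 1
    return count

def popcount_hack(x):
    """'Embedded' style (Kernighan's trick): x & (x - 1) clears the lowest
    set bit, so the loop runs only once per set bit."""
    count = 0
    while x:
        x &= x - 1
        count += 1
    return count

for n in (0, 1, 255, 0b1010101):
    assert popcount_readable(n) == popcount_hack(n)
print(popcount_hack(0b1010101))  # 4
```

The hack is faster and terser, but its correctness is far less obvious - exactly the maintainability cost the paragraph above describes.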

So with that in mind, increased performance certainly relaxes constraints, but I don't think it directly means that programmers become sloppy.

I believe that a huge leap in processing performance could enable sloppy, inefficient coding at all levels.

Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version Control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.

by Clinton, 3Ddreamer


2) Virtual transistors - Why can't coding emulate the digital transistor better than it does?

A transistor is just a switch, on or off, so I suppose that software already does this, converting binary into readable decimal number output.

What if parallel processing could use coded transistors much more efficiently, rather than relying on hardware transistors? Maybe this could be memory- or cache-based? When I think about the mathematical multiplying effects in possibility/probability math, I think that we are somehow underachieving in processing. We should tap into probability math more.
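There is an existing technique in this spirit: bit-sliced (bit-parallel) logic simulation, where each bit position of a machine word acts as one independent simulated gate, so a single bitwise operation evaluates many "software transistors" at once. A minimal Python sketch (my own illustration):

```python
def nand_vec(a, b, width=64):
    """Evaluate `width` independent NAND gates in one bitwise operation.

    Bit i of `a` and bit i of `b` are the inputs of gate i; bit i of the
    result is that gate's output."""
    mask = (1 << width) - 1   # keep only `width` simulated gates
    return ~(a & b) & mask

# Four NAND gates at once: inputs (1,1), (1,0), (0,1), (0,0) per bit lane.
print(bin(nand_vec(0b1100, 0b1010, width=4)))  # 0b111
```

This is roughly how fast logic simulators and some cryptographic implementations exploit word-level parallelism, though it only pays off when many identical gates need evaluating simultaneously.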


by Clinton, 3Ddreamer


Well, that is why Sketchup integrates well with CAD software. And there are many who would prefer to use Sketchup if they were not so set in their ways.

There are companies that use SketchUp as a CAD tool, but usually they're smaller companies. The aerospace or automotive companies use the higher-end packages like NX or CATIA or Creo because they have higher-level functionality with respect to parts and assemblies, and they have richer feature sets that implement more complicated geometric functions, like multi-section sweeps and so on. You can't really create easily-modifiable airfoil shapes without things like that.

I do agree that CAD is kinda broken, and there need to be better tools created.

The companies I've seen make their own plugins for SketchUp, since you have the whole Ruby thing. I use SketchUcation myself to find plugins.

As for computing power, it seems it's all about algorithms today, not so much the actual software.

They call me the Tutorial Doctor.

Quantum computing is unlikely to change the landscape significantly outside of cryptography.

There is a common misconception that quantum computers can do exponentially many computations in parallel and do something useful with the results of all of them. This is not true, or at least misleading. One can indeed think of quantum computers as performing exponentially many computations in parallel. The problem is that it is impossible to extract the result of an individual computation (or a small subset of them) in a flexible way. Instead, there is only a single overall result of the computation which arises from what is essentially a linear combination of the result states. As a consequence, the really huge speedups in quantum computers are only seen in algorithms for problems that have a lot of algebraic structure - in particular, factoring integers and solving the discrete logarithm problem, which are the building blocks for all asymmetric cryptography that is in actual use today.
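The measurement limitation can be illustrated with a toy state-vector simulation in plain Python (my own sketch, not a real quantum computing API): all 2**n amplitudes exist "in parallel", yet one measurement yields only a single basis state, sampled with probability |amplitude|².

```python
import random

n = 3                                     # 3 simulated qubits
dim = 2 ** n                              # 8 basis states held "in parallel"
amplitudes = [1 / (2 ** (n / 2))] * dim   # uniform superposition, like H on each qubit

# Born rule: measurement probabilities are squared amplitude magnitudes.
probs = [a * a for a in amplitudes]
assert abs(sum(probs) - 1.0) < 1e-9       # a valid (normalised) state

# A measurement collapses everything to ONE basis state; the other
# "parallel computations" are simply gone.
outcome = random.choices(range(dim), weights=probs)[0]
print(f"measured basis state: {outcome:0{n}b}")
```

Quantum algorithms like Shor's work by arranging interference so that the amplitudes concentrate on useful outcomes before measuring, which is exactly the algebraic structure most problems lack.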

In other words, quantum computers are purely bad news: If they can be made to work, they will seriously damage asymmetric crypto, but they won't actually help us in solving tough problems such as the traveling salesperson problem or register allocation in a compiler any faster.

If you're interested in some serious reading on the topic, you might want to start with the blog of Scott Aaronson, or his lecture "Quantum Computing since Democritus".

Widelands - laid back, free software strategy

