Hi,
I see that I was indeed not clear enough, so...
2) Virtual transistors - Why can't coding emulate the digital transistor better than it does?
You asked what this means: I mean a software transistor or a memory transistor instead of a hardware transistor. Could this become a way of overcoming the inconsistencies of hardware across devices? You noted that you're far better off emulating it in software, due to spatial constraints and acquisition complexity in hardware, not to mention driver support. Perhaps all those complications could be avoided with some kind of binary transistor or memory transistor - memory locations interacting directly with one another (not software!) - to form another kind of transistor instead of hardware transistor circuitry.
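To make the idea a bit more concrete, here is a minimal sketch of what emulating transistors in code might look like. It is purely illustrative - the switch model and all the names are my own simplification, not an existing library: a software "switch" that conducts only when its gate input is high, composed into a NAND gate, an XOR gate and a half-adder.

```python
# Minimal sketch of a "software transistor": an NMOS-style switch that
# conducts only when its gate input is high. Gates and a half-adder are
# then built purely from these switches. All names are illustrative.

def nmos(gate: int, source: int) -> int:
    """Pass the source value through only when the gate is 1; model the 'off' state as 0."""
    return source if gate else 0

def nand(a: int, b: int) -> int:
    # Two series switches pulling the output low, modelled logically:
    # the output is 0 only when both transistors conduct a high source.
    pulled_low = nmos(a, nmos(b, 1))
    return 0 if pulled_low else 1

def xor(a: int, b: int) -> int:
    # XOR from four NANDs, the classic construction.
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry), built entirely from the emulated transistors."""
    return xor(a, b), 0 if nand(a, b) else 1

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> sum={s} carry={c}")
```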
3) 3D light processors - 3 dimensions instead of the 2-dimensional chip, using light in the processing in conjunction with electricity, such as micro-LEDs.
You said chips are already 3-dimensional and that you don't understand what I mean here. Chips are 2-dimensional, whereas cubes are XYZ - 3-dimensional. Cooling of a cube could be handled with cooling tubes running right through it. Instead of 2-dimensional chip circuitry, transistors and other components could interact with transistors above or below them in the cube, as well as with those adjacent to them on the same level. In mathematical terms, this would increase processing capacity enormously - far beyond even the cube of the transistor count.
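As a rough back-of-the-envelope illustration (it only counts transistors and nearest-neighbour wiring, nothing more), here is a small sketch comparing a flat n x n layout with an n x n x n cube of the same linear dimension:

```python
# Back-of-the-envelope comparison (illustrative only): how many transistors
# and how many direct neighbour-to-neighbour links fit in a flat n x n
# layout versus an n x n x n cube.

def flat_chip(n: int) -> tuple[int, int]:
    transistors = n * n
    # Each interior cell touches 4 in-plane neighbours; count each link once.
    links = 2 * n * (n - 1)
    return transistors, links

def cube(n: int) -> tuple[int, int]:
    transistors = n ** 3
    # 6 neighbours per interior cell: 4 in-plane plus one above and one below.
    links = 3 * n * n * (n - 1)
    return transistors, links

if __name__ == "__main__":
    for n in (10, 100, 1000):
        ft, fl = flat_chip(n)
        ct, cl = cube(n)
        print(f"n={n}: flat {ft} transistors / {fl} links, "
              f"cube {ct} transistors / {cl} links")
```

Even this simple count shows the neighbour links growing roughly with n cubed rather than n squared, which is the kind of jump I am pointing at.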
You argued that using LEDs to process light is probably the worst possible idea, for five reasons. My responses:
1) "It's not consistent over time due to degradation" - degradation doesn't matter in ON/OFF operations, or if the LED is only the original light source.
2) "It's analog technology, which introduces all sorts of problems, such as data acquisition and control" - the light could be digitally manipulated, and sensors can detect it.
3) "It'll take up far more space than a digital processing unit" - the system could make that worth it by increasing the processing combinations exponentially.
4) "It won't be flexible" - flexibility comes from the exponential increase in processing power.
5) "It'll be slow because LEDs require multiple microseconds before they 'stabilise' their emission of light" - slowness doesn't matter if the LED is only the original light source.
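On points 1) and 2), one way an analog light level could be treated digitally - and why slow degradation need not flip an ON/OFF decision - is a simple two-threshold (Schmitt-trigger-style) comparison of the sensor reading. The thresholds and readings below are made-up values, just to illustrate the idea:

```python
# Sketch of how an analog light reading could be treated as a digital bit:
# a Schmitt-trigger-style comparison against two thresholds, so slow LED
# degradation or drift does not flip an ON/OFF decision. All values here
# are invented for illustration.

HIGH_THRESHOLD = 0.6   # readings above this count as logic 1
LOW_THRESHOLD = 0.4    # readings below this count as logic 0

def to_bit(reading: float, previous_bit: int) -> int:
    """Convert a normalised 0..1 light-sensor reading into a digital bit."""
    if reading >= HIGH_THRESHOLD:
        return 1
    if reading <= LOW_THRESHOLD:
        return 0
    return previous_bit  # in the dead band, keep the last stable value

if __name__ == "__main__":
    bit = 0
    # A degrading LED might emit 0.95, then 0.8, then 0.7 for "on" -- the
    # decoded bit stays 1 as long as the reading clears the high threshold.
    for reading in (0.05, 0.95, 0.8, 0.7, 0.5, 0.1):
        bit = to_bit(reading, bit)
        print(f"reading={reading:.2f} -> bit={bit}")
```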
I envision light transistors some day, made by the interaction of light. Changes in light can exist in regions far smaller than today's digital transistors. Somehow that must be exploited.
4) What is the probability that several different emerging technologies will compete for many years, each promoted by competing companies?
You asked if I'd ever heard of AMD, Nvidia, Intel, etc. Those are fundamentally the same. I mean DIFFERENT technologies, not variations of the same one - as different as electric-motor-propelled cars are from internal-combustion-propelled ones. That is the kind of difference I am talking about.
6) Will a performance leap enable more inefficient coding to be competitive?
You called this an interesting question and noted that comparing code on embedded systems (where resources are scarce) with code for front-end applications shows some key differences: embedded code is far more compact, filled with all sorts of hacks and weird optimisations, because it targets a single instruction set and can get away with it, whereas front-end code is more structured and favours clean, comprehensible code over speed. With that in mind, you said, increased performance certainly relaxes constraints, but you don't think it directly means that programmers become sloppy.
I believe that a huge leap in processing performance could enable sloppy, inefficient coding at all levels.
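To make the contrast concrete, here is a small, purely illustrative example (not taken from either of our posts): three ways to count the 1-bits in an integer, from the bit-twiddling style typical of embedded code to the kind of wasteful approach that a big jump in performance would happily absorb.

```python
# Illustrative contrast: three ways to count the 1-bits in an integer.

def popcount_embedded(x: int) -> int:
    """Kernighan's trick: one iteration per set bit, typical embedded-style hack."""
    count = 0
    while x:
        x &= x - 1   # clear the lowest set bit
        count += 1
    return count

def popcount_readable(x: int) -> int:
    """Straightforward loop over every bit: clearer, a little more work."""
    count = 0
    while x:
        count += x & 1
        x >>= 1
    return count

def popcount_sloppy(x: int) -> int:
    """Build a whole string just to count characters: wasteful in principle,
    but ample performance makes it perfectly acceptable."""
    return bin(x).count("1")

if __name__ == "__main__":
    n = 0b1011_0110
    assert popcount_embedded(n) == popcount_readable(n) == popcount_sloppy(n) == 5
    print("all three agree:", popcount_embedded(n))
```

All three give the same answer; the difference is only in how much work the machine is asked to do for it.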