Philosofikal wrote:
xxjesus1412fanx wrote:
why do you think that is? a transistor is a transistor. the instructions for the things you're mentioning (computational tasks) favour CPUs, and that's what people write for.
Let me know when you're running Windows on a bitcoin miner
I can't; an ASIC isn't capable of that and you know it. But imagine taking all those thousands of tiny, weak logical cores and having them execute the same SSE, MMX, etc. instructions that drive current mainstream CPUs. You know it's possible. In the same way supercomputers use distributed computing to split a workload across hundreds of CPUs, you could split a workload across hundreds of smaller low-voltage cores and get massively higher throughput. There are limitations if you tried to implement this on current architectures, of course: a single register would be responsible for 16 or more cores, and schedulers would need to be completely revised at the silicon level to accommodate a much larger instruction cache. But it can be done. The performance gain would be huge over just having 4-8 higher-voltage cores; you end up getting things done much slower that way, regardless of how much higher the per-core IPC is in comparison.
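The many-weak-cores vs. few-fast-cores trade-off argued above can be sketched with Amdahl's law. To be clear, this is just a back-of-the-envelope illustration: the core counts, per-core speeds, and parallel fractions below are made-up numbers for the sake of the sketch, not figures from this thread.

```python
# Hypothetical comparison: a few fast cores vs. many weak ones.
# Amdahl's law: speedup = 1 / (serial_fraction + parallel_fraction / n_cores),
# here scaled by each core's relative single-thread speed.

def throughput(n_cores, core_speed, parallel_fraction):
    """Amdahl's-law speedup over one baseline weak core, scaled by core speed."""
    serial = 1.0 - parallel_fraction
    return core_speed / (serial + parallel_fraction / n_cores)

for p in (0.95, 0.99):
    fast = throughput(8, 4.0, p)    # 8 "high voltage" cores, each 4x a weak core
    many = throughput(256, 1.0, p)  # 256 weak low-voltage cores, 1x each
    print(f"parallel fraction {p}: 8 fast -> {fast:.1f}x, 256 weak -> {many:.1f}x")
```

With these assumed numbers, 8 fast cores still win when only 95% of the work parallelizes, but the 256 weak cores pull far ahead at 99% — which is exactly why the "everyone has to be on board" point later in the post matters: the sea-of-cores design only pays off once software is overwhelmingly parallel.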
This is far from a reality, but can't you see there's a much brighter future here in terms of performance than the incremental upgrades this dated CPU design has seen over the past decade or so? Things have slowed to a crawl for the past 6-7 years for CPUs because of the unwillingness to make that leap, both on the side of the silicon engineers, who don't want to make every chip that preceded such a product completely useless by comparison, and because programmers are still writing just about everything to run on 1 or 2 threads, everything from word processors to web browsers. You can't just magic that away with some clunky low-level API implementation; everyone has to be on board from the get-go, so it's not going to happen in the consumer market until we reach some great tipping point.
RISC just keeps charging forward. If Intel/AMD don't adapt at some point in the next 20 years, it will no longer be worth having a PC, because you'll barely be getting more horsepower out of something plugged into a wall. The performance gap between RISC and x86-64 is already closing fast, and as more people decide to write code for these ARM chips it's going to become more of a problem.
Call me a dreamer, an airhead, whatever, but I firmly believe everyone working in the field should keep looking forward instead of letting things stagnate to protect some bottom-line profit margin. Oh no, we can't suddenly ship silicon that's dozens of times better than the thing that came right before it; everyone will feel betrayed in their purchase, and we won't be able to profit from old yields at all anymore!

and think of the programmers!

Well, it's a bitter pill to swallow, but the end result will be worth it if and when it does happen.