• Question: Is there a limit to computing power?

    Asked by anon-258116 on 30 Jun 2020.
    • Photo: James Smallcombe

      James Smallcombe answered on 30 Jun 2020:


      Maybe?
      There is a limit to how small traditional computing parts can become: designers of computer processors already have to account for quantum tunnelling of electrons. You can't make a wire smaller than an atom!
      We can always connect more computers together to get more power, which is basically what supercomputers do. But in a finite space (say a laptop or phone), the limit on computing power depends on how small you can make things.
      Quantum computing may help get around this fundamental limit, as a “quantum bit” can be in any of infinitely many superpositions of its two states, which might be a key to “limitless” computing, but I’m not an expert in that subject.
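
      As a minimal sketch of that “infinitely many superpositions” idea, here is the standard two-amplitude description of a qubit; the function and the particular angles are just illustrative, not anything specific to the answer above:

      ```python
      import numpy as np

      # A qubit state can be written as two complex amplitudes (alpha, beta)
      # with |alpha|^2 + |beta|^2 = 1. Every choice of the angles below gives
      # a valid, distinct superposition: a whole continuum of possible states.
      def qubit(theta, phi):
          """Amplitudes of cos(theta/2)|0> + e^(i*phi) sin(theta/2)|1>."""
          return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

      state = qubit(theta=np.pi / 3, phi=np.pi / 4)
      print(state)                                    # the two complex amplitudes
      print(abs(state[0]) ** 2 + abs(state[1]) ** 2)  # always 1.0 (normalised)
      ```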

    • Photo: Sam Carr

      Sam Carr answered on 1 Jul 2020:


      I can’t give an answer, but I can ponder the question (which is what all great questions deserve!)

      One issue is how to define computing power.

      If we limit ourselves to classical computers, and count power as something like ‘fundamental operations per second’, there will be a limit. One could come up with an upper bound for this limit, supposing that the computer was made from every single atom in the universe, and the speed was limited by how fast a signal could travel, at the speed of light, from one atom to the next. Obviously any practical computer will be far below this limit – to be honest, I’d rather you didn’t use my atoms to make your big computer, and I’d prefer it if you left me a little bit of atmosphere and food too. But this “thought experiment” does show that somewhere there is a limit. It also shows that this limit is essentially meaningless, as for any practical computer you ever build, you could always make it a little bit bigger.
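
      A rough back-of-envelope version of that thought experiment, where all the numbers are order-of-magnitude assumptions rather than measured values:

      ```python
      # Rough upper bound in the spirit of the thought experiment above.
      ATOMS_IN_UNIVERSE = 1e80   # commonly quoted order-of-magnitude estimate
      ATOM_SPACING_M = 1e-10     # roughly one atomic diameter
      SPEED_OF_LIGHT_M_S = 3e8

      # Fastest a signal could hop from one atom to its neighbour:
      hop_time_s = ATOM_SPACING_M / SPEED_OF_LIGHT_M_S        # ~3e-19 seconds

      # If every atom managed one operation per hop time:
      upper_bound_ops_per_s = ATOMS_IN_UNIVERSE / hop_time_s  # ~3e98 operations/second
      print(f"{upper_bound_ops_per_s:.0e} operations per second, at the very most")
      ```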

      There is another related question you could ask about the limit to computing power for something of a given physical size, which James has already commented on. Fundamentally, a single electronic component (e.g. a transistor) can’t be smaller than an atom, and in practical terms is probably a lot bigger than a single atom. So there is a limit here too.

      There is something called Moore’s law, which is an empirical observation that the number of components on an integrated circuit doubles roughly every two years. Moore observed this back in 1965, and around the turn of the century there was talk that components were getting so small that the pace of development would flatten off soon, but it hasn’t yet. This is partly due to improved fabrication techniques as well as better 3D stacking of components, which you can read about in the article on Moore’s law on Wikipedia.

      But there was another development that gets far less attention, which is to do with the design of the circuits. Making really small things is obviously a tricky business, and when components are really small, a little bit of dirt getting in could stop them working. This used to be a big problem, because if just a single component in a circuit doesn’t work, then typically the entire circuit won’t work (at least not for all functions). But somewhere around the late 90s, processor design became far more modular. What this means is that the circuit is broken down into little parts, and if one part doesn’t work, it can be turned off and the rest of the circuit will do the task (albeit a little slower than if every single part were working). This meant that chips could become much bigger than they used to be, because some ‘errors’ in manufacturing were allowed and wouldn’t affect the chip’s operational ability.
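
      The doubling rule itself is easy to play with. Here is a tiny sketch, where the 1965 starting figure is made up purely for illustration and only the “doubles roughly every two years” part comes from Moore’s observation:

      ```python
      # Moore's law as described above: component count doubles roughly every two years.
      def moores_law(count_at_start, start_year, year, doubling_period_years=2):
          """Projected component count, assuming steady exponential doubling."""
          return count_at_start * 2 ** ((year - start_year) / doubling_period_years)

      # Illustrative starting figure of 64 components in 1965:
      for year in (1965, 1985, 2005, 2025):
          print(year, f"{moores_law(64, 1965, year):,.0f} components")
      ```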

      But this brings us to another aspect of computing power – the algorithm. If the computer has to do fewer operations to achieve the same goal, then it seems like it is more powerful. As an example, suppose I asked you to work out 65 times 38. You know you could add up 38 copies of 65, so after 38 additions you give me the answer. But maybe you have studied long multiplication and know there is a much quicker way of getting to the answer, doing only 4 single-digit multiplications and a few additions rather than 38 additions. Then you can give me the answer much more quickly. How optimised a given algorithm can become in general is an open question in maths/computer science. In fact, even a basic question about the scaling of algorithms (known as the P vs NP problem) is not yet solved and is on the list of ‘Millennium Prize Problems’ (see Wikipedia) that you would get rich by solving.
      So if by computing power you don’t just mean the raw number of operations a computer can do, but some other metric related to how quickly the computer can do certain tasks, the answer is much less clear.
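
      Here is a little sketch of those two approaches to 65 × 38, repeated addition versus digit-by-digit long multiplication; the step counts are the point, the code itself is just for illustration:

      ```python
      # Two ways to compute 65 * 38, as in the example above.

      def multiply_by_repeated_addition(a, b):
          """Add a to a running total b times: b additions."""
          total, additions = 0, 0
          for _ in range(b):
              total += a
              additions += 1
          return total, additions

      def multiply_long_hand(a, b):
          """Digit-by-digit long multiplication: one single-digit product per digit pair."""
          total, single_digit_products = 0, 0
          for i, da in enumerate(reversed(str(a))):      # digits of a, least significant first
              for j, db in enumerate(reversed(str(b))):  # digits of b
                  total += int(da) * int(db) * 10 ** (i + j)
                  single_digit_products += 1
          return total, single_digit_products

      print(multiply_by_repeated_addition(65, 38))  # (2470, 38) -- 38 additions
      print(multiply_long_hand(65, 38))             # (2470, 4)  -- 4 single-digit products
      ```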

      And now the waters start to become really murky, because maybe you have a specific problem to solve and you could design not just the algorithm but also a specific computer to solve it. The computer may not be as ‘universal’ as most current computers are, but optimised for this one task. There is an example of this already commonplace in the commercial world – gaming. A universal processor can work out all the calculations needed to render the beautiful 3D scenes in modern games, but it is not very efficient at it. So modern computers also have a GPU (graphics processing unit), which is optimised for this task. And if you put more GPUs in a computer, then it can do games and movie rendering really well. In fact, a GPU can do many other things rather well — the calculations involved in quantum mechanics are a good example. But GPUs can’t do everything, which is why computers still have a central processor as well. For science modelling tasks, it isn’t worth the investment to develop specialised chips optimised for each individual task, but building computers with many GPUs is quite common in universities, as they are better optimised for this kind of work than a more universal machine.
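
      A rough sketch of that difference, using NumPy’s whole-array operations as a stand-in for the kind of data-parallel work a GPU is built for (a real GPU would be driven through a graphics or CUDA library, which isn’t shown here):

      ```python
      import time
      import numpy as np

      # The same job two ways: brightening a million "pixels".
      pixels = np.random.rand(1_000_000)

      # CPU-style: one general-purpose instruction stream, one pixel at a time.
      start = time.perf_counter()
      out_loop = [min(p * 1.2, 1.0) for p in pixels]
      loop_time = time.perf_counter() - start

      # GPU-style (approximated here with NumPy): the same operation applied to
      # the whole array at once, which is what graphics hardware is built to do
      # across thousands of pixels in parallel.
      start = time.perf_counter()
      out_vec = np.minimum(pixels * 1.2, 1.0)
      vec_time = time.perf_counter() - start

      print(f"per-pixel loop: {loop_time:.3f} s, whole-array op: {vec_time:.4f} s")
      ```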

      Going further down the rabbit hole, we then get to quantum computers. Nowadays when people say quantum computer, they usually mean a ‘universal quantum computer’ in the same way as a classical computer is universal, i.e. one that could run any algorithm. But the subject is still more or less in its infancy, and many things called quantum computers over the last 10 years have been far from universal. These can do some tasks faster than a classical computer (something known in the field as ‘quantum supremacy’), but not all tasks. The field of quantum algorithms is far more developed than the materials technology of building quantum computers, but there is still a lot to do. I would predict that in 50 years or so, a cellphone will likely have both a classical and a quantum processor — but the quantum processor will most likely be made of something we haven’t thought of yet. We need another one or two ‘materials’ revolutions in quantum computers before commercial technology development takes off — very much like first the semiconductor revolution and then more specifically the silicon revolution in classical circuits.

      Then finally, at the far end of the scale, what about a quantum computer optimised for a specific task? Suppose we wanted to simulate the evolution of the universe. What about actually using the universe? Is that a ‘computational task’? At some point, computational and experimental physics become one and the same thing.

      Well — that was fun to think about. What’s next?

    • Photo: Richard Fielder

      Richard Fielder answered on 2 Jul 2020:


      Yes. For questions about limits, it’s often useful to start off at the truly absurd scale. Computing requires some physical structure, and needs energy to do the work. There is only a finite amount of matter and energy in the observable universe, so there must be a limit to how much computing can be done even if you converted the entire universe into a single giant computer. With that established, it simply becomes a question of scaling things down until you reach the point where you think you have a practical answer.

      Pick a size you think would make a useful computer; that could be converting the whole Moon into a computer, a supercomputer taking up a whole building, or something you can fit in your pocket. There must still be a theoretical limit to how many atoms can fit in the space, and how much energy you can force through it to perform calculations. The question is then not whether there is a limit, but rather how close we are able to get to that theoretical limit with whatever technology we have available.
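
      As a very rough illustration of the “how many atoms can fit in the space” part, here is a pocket-sized example; the dimensions and atomic spacing are round-number assumptions:

      ```python
      # Rough count of how many atoms could even fit in a phone-sized volume.
      length_m, width_m, thickness_m = 0.15, 0.07, 0.008   # roughly phone-sized
      atom_spacing_m = 2e-10                               # ~0.2 nm between atoms

      volume_m3 = length_m * width_m * thickness_m         # 8.4e-5 cubic metres
      atoms = volume_m3 / atom_spacing_m ** 3              # ~1e25 atoms
      print(f"about {atoms:.0e} atoms at most")
      ```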

      As the previous answers have noted, quantum computers have the potential to give a huge jump in the amount of computing power you can fit in a given space. But they still need matter and energy to do their work, so even if they’re astronomically more powerful than the computers we have today, there will still be some limit on how powerful they can eventually be.
