From a CPU/compute standpoint:

The CPUs might not be given more work per second than they can do (a.k.a. load), but they do become less efficient when they have to switch between different tasks with different data sets. Keywords to research on this topic are context switches and cache hits.
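On Unix-like systems you can watch these counters for a running process yourself. A minimal sketch using Python's standard `resource` module (Unix-only; the `context_switches` helper name is mine):

```python
# Read this process's own context-switch counters (Unix-only).
# ru_nvcsw  = voluntary switches (e.g. the process blocked on I/O)
# ru_nivcsw = involuntary switches (the scheduler preempted it)
import resource

def context_switches():
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_nvcsw, usage.ru_nivcsw

voluntary, involuntary = context_switches()
print(voluntary, involuntary)
```

A busy multitasking box will show the involuntary counter climbing quickly: the scheduler keeps taking the CPU away to run something else.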

To use an analogy: think of the CPU as a secretary you give work to. They will work more efficiently on a single job for an hour than when trying to juggle 60 jobs each lasting a minute. Part of this is switching from one job (context) to another, which takes time.

Another part is the cache on the CPU. It keeps a local copy of the data it works with, because memory access is relatively slow. As soon as you switch tasks, you start working on a new set of data. That means fetching new data, and since cache space is limited, it also means throwing old data out. Once you switch back, the same thing happens again. And again...

Then on modern CPUs there is a thermal budget. A CPU can run at its regular maximum speed all the time; it will get hot doing so, but heat produced and heat dissipated stay in balance. If the CPU has less work, it can cool down, which effectively gives it a small heat buffer. This buffer is used by what Intel and AMD now call turbo: when the CPU is relatively cold and has a lot of work, it increases its clock speed and works faster. It cannot sustain that for long, but a short, intense task on a cold CPU (with spare thermal budget) will briefly run faster than on a CPU which has already spent its thermal budget.
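The idea can be reduced to a toy model (every number here is invented; real boost algorithms also track power, current, and per-core limits):

```python
# Toy thermal-budget model: the CPU runs at a boosted clock while its
# "temperature" stays under a limit, then falls back to base clock.
BASE_CLOCK = 3.0      # GHz, sustainable indefinitely
TURBO_CLOCK = 4.5     # GHz, only while thermal headroom remains
TEMP_LIMIT = 100.0    # arbitrary units
HEAT_PER_TICK = 10.0  # net heat added per tick at turbo speed

def clocks_over_time(ticks, start_temp=0.0):
    temp = start_temp
    clocks = []
    for _ in range(ticks):
        if temp + HEAT_PER_TICK <= TEMP_LIMIT:
            clocks.append(TURBO_CLOCK)  # headroom left: boost
            temp += HEAT_PER_TICK
        else:
            clocks.append(BASE_CLOCK)   # budget spent: base clock
    return clocks

cold = clocks_over_time(15)                   # cold start: turbo first
hot = clocks_over_time(15, start_temp=100.0)  # no headroom: base only
```

A short task finishing within the turbo window runs entirely at the boosted clock on the cold CPU, while the already-hot CPU never leaves base clock.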


From a memory standpoint: your application will use at least some memory. That is memory no longer available to other tasks (such as I/O buffers). This will slow the system down.


If your application maxes out I/O (e.g. disk access), then it does not matter that it barely loads the CPU. If every other program has to wait in the queue for disk access, your application can slow the whole system down without ever reaching 100% CPU load.
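A toy FIFO disk queue shows the effect (all numbers and the `drain` helper are invented; real disk schedulers reorder and interleave requests):

```python
from collections import deque

def drain(queue):
    """Serve a FIFO request queue, one request per time unit.
    Returns the first completion time seen for each owner."""
    finish = {}
    t = 0
    while queue:
        owner = queue.popleft()
        t += 1  # the disk spends one time unit per request
        finish.setdefault(owner, t)
    return finish

# An I/O-heavy app queued 50 reads before another program's single read.
q = deque(["app"] * 50 + ["other"])
done = drain(q)
# The other program's one read completes at t=51: it waited behind
# all 50 app requests, even though the CPU may have been idle.
```

The CPU load graph stays flat the whole time; the slowdown lives entirely in the disk queue.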


