Increasing performance

This page describes how you can increase the performance of your system. To measure performance, have a look at benchmarking.

Before you can begin increasing performance on a system, you must identify its performance bottlenecks; a benchmarking tool can help you find them. Run the benchmark, change one parameter, run it again, and see how performance differs. One popular example is Compiling a kernel. Once a bottleneck is found, the main question is: is it a software or a hardware bottleneck?
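As a rough sketch of that approach: time a full kernel build before and after a change and compare the wall-clock times (this assumes you have a configured kernel source tree at hand; the path and make target below are only examples):

cd /usr/src/linux
make clean
# time the build; repeat after changing one tuning parameter and compare the results
time make bzImage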


Warning: If you're not an experienced hacker, you probably shouldn't change things you don't understand or haven't been told about on reliable authority. This page is meant to help people find ways to increase system performance, but as with any tweaking, it's possible to end up breaking things. Consider making backups of relevant files before proceeding with any system tweak, and always have a rescue disk ready, just in case.

Hardware

Hard Drive

Hard drives are another common source of performance problems. Faster hard drives are good, but above all you want the hard drive to support and use DMA. Without DMA active, your hard drive performance can suffer.

See the command hdparm to query and test your hard drive, as well as to enable or disable DMA.
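A quick sketch of what that might look like (the device name /dev/hda is only an example, and the DMA flag applies to IDE drives):

# show the drive's identification and current settings
hdparm -i /dev/hda
# measure cached (-T) and buffered (-t) read speeds
hdparm -Tt /dev/hda
# enable DMA on an IDE drive; add this to a boot script to make it stick
hdparm -d1 /dev/hda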

Video Card

Video cards offer 3D acceleration, which is fairly important in computers these days. Even if you don't play games, chances are high that you will still make use of 3D sooner or later, and without acceleration it will be very slow. Even a cheap low-end video card can make all the difference. The higher the 3D frame rate you want, the more advanced a card you will probably need. On-board memory can also give a good speed increase; often the main difference between a cheap card and an expensive card is the amount of on-board memory, or the lack thereof.

See direct rendering for more information.

Other

Various other parts of the computer also play a role, such as the CPU and the motherboard. Sadly, replacing the motherboard is a very big project, essentially replacing the soul of the computer. Motherboard bottlenecks are hard to spot; their main influence is on how much throughput the system can handle, as well as on memory speed. Normally you don't need to replace a motherboard unless you're upgrading to an incompatible CPU or other hardware. The CPU, while usually replaceable, is closely tied to the motherboard; a CPU upgrade is normally only worthwhile when you move to an incompatible CPU anyway. Minor CPU upgrades (the kind that won't require a motherboard change) are normally not worth much in terms of performance, unless you run many programs at once or very demanding ones. For the average desktop with a reasonable processor, bottlenecks in other areas of the system will appear long before any CPU bottleneck.

Software

3D

3D, usually in the realm of games, is fairly important for a lot of desktop users. However, more than just a good video card affects 3D performance; the quality of the card's drivers is also an important factor. ATI's Linux drivers have a reputation for poor quality, while Nvidia's drivers are generally considered better on Linux. On the other hand, more ATI cards have reverse-engineered open source drivers, so they don't depend on ATI's support. Still, no matter what card you choose, there are other factors in X and Linux to look at: the driver's options, for example, can give a big speed increase, as can a well-configured DRI.
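A quick way to check whether hardware acceleration is actually in use (assuming the glxinfo and glxgears utilities from your X/Mesa packages are installed):

# "direct rendering: Yes" means 3D is hardware accelerated
glxinfo | grep "direct rendering"
# a very rough frame-rate indicator, not a real benchmark
glxgears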

See DRI for more information about 3D under Linux.

Applications

Various applications have different performance requirements, because some are designed to be faster and more memory-efficient than others. Applications are usually designed for a target desktop environment (see below), but most only require a few key libraries. For the most part, then, it's the application itself that is slow. The best solution is to find a replacement application; most common applications have good alternatives.
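To get a rough idea of which applications are the heavy memory users on your system, something like the following works with GNU ps (the sort option may differ on other ps implementations):

# list processes, largest resident memory first
ps aux --sort=-rss | head -n 15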

See applications for a list of applications.

Cleaning up disk space

It's not uncommon for various distros and desktops to eat away at disk space. This is typically due to distros, programs, and desktops not cleaning up after themselves, and over time this can eat up quite a bit of space. Common places to find unnecessary things to delete are listed below, with a short cleanup sketch after the list:

  • /tmp/ - contains temporary data, and often this is not cleaned out. While it's safe to remove everything in this directory, doing so will probably crash your desktop, as you will have just deleted files needed by running programs. Simply reboot and everything will be fine.
  • ~/.thumbnails/ - this directory contains a cache used by desktop environments to store thumbnails of the images you have browsed. It is not cleaned out automatically, and if you browse a lot of images it can grow rather large. It's safe to delete everything inside it, but the next time you browse your images it may take longer, as the thumbnails must be regenerated.
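A quick cleanup sketch for the locations above (everything removed here is regenerated on demand):

# see how much space each location is using
du -sh /tmp ~/.thumbnails
# clear the thumbnail cache
rm -rf ~/.thumbnails/*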

See disk space for more information about how to find things to delete.

CPU Scheduler

The CPU scheduler is the piece of code inside the OS that decides which program is allowed to run on the CPU, and for how long. Preemption is the act of interrupting an active process to switch to another. Preemption can lead to less throughput, but lower latency. In 2.6 kernels, you can choose what level of preemption (if any) to use, so as to find a good trade-off between work done and responsiveness of the system.
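The preemption model is chosen when the kernel is compiled. A hedged sketch of how to check what your running kernel was built with (the config file location is distro-dependent, and /proc/config.gz exists only if the kernel was built to provide it):

# check the preemption options compiled into the running kernel
grep -i preempt /boot/config-$(uname -r)
# or, if the kernel exposes its configuration:
zcat /proc/config.gz | grep -i preempt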

Desktop Environment

The desktop environment is composed of a set of programs, plus the libraries common to them. The programs cover pretty much every task a normal desktop user would want, and some are better than others at different things. There are only two complete, semi-compatible DEs as of this writing, KDE and GNOME. Both are usually slow due to the eye candy they give the user. GNOME is usually noted for its applications being more memory efficient, while KDE gets noted for its better configurability. In any case, those who want to reduce resource use as much as possible should look into creating their own desktop, mixing and matching programs.

See desktop environment for more information about DE's.

I/O Scheduler

The I/O scheduler is responsible for determining when a program can access a block device such as a hard drive. I/O schedulers try to batch operations together so as to minimize disk seeks. Linux 2.6 kernels support multiple I/O schedulers, though only one scheduler per device, of course. Choosing the right scheduler, and its options, can have a big effect on your system. On a desktop you probably want to use CFQ, as it allows lower-latency access to the disk, which helps prevent xruns when listening to music, or skipping in video, even when the system is under load.
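A sketch of how to inspect and change the scheduler at runtime through sysfs on a 2.6 kernel (hda is just an example device; the echo must be run as root):

# the scheduler shown in brackets is the one currently in use
cat /sys/block/hda/queue/scheduler
# switch this device to CFQ
echo cfq > /sys/block/hda/queue/scheduler
# a default for all devices can be set with the elevator= kernel boot parameter, e.g. elevator=cfq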

See IOSched for a list of schedulers, and more information.

Services

Some GNU/Linux distributions start unnecessary services at boot time, such as file servers, various daemons, web services, etc. Some of these services may eat into performance, and they almost always make the startup process slower. Check with your distribution to see which services you can safely disable.
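How to list and disable services depends on the distribution; as a sketch, on Red Hat-style systems that use chkconfig it might look like this (httpd is only an example service):

# list services that are switched on in some runlevel
chkconfig --list | grep ":on"
# stop a service now, and keep it from starting at boot
service httpd stop
chkconfig httpd off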

Various desktop services may also be started once you log in. Disabling the ones you don't need frees up those resources for other programs to use.

Swap

Memory problems are fairly common. The only real remedies are to add more memory, or to use software that requires less of it. Software will not complain about a lack of memory; the symptom of low memory is a very slow computer, and this can easily be mistaken for a CPU problem. If you know what you're looking for, however, it's easy to spot. Use commands like free to see how much memory and swap you have, and how much of it you are using. If memory usage is full, you might consider taking action. See memory for more information and commands.
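For example, a quick check with free (-m prints the figures in megabytes):

# on older procps versions, the "-/+ buffers/cache" line shows memory actually used by programs;
# a full memory line together with heavy swap usage is the warning sign
free -m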

Swap space, while it artificially increases the amount of memory, is a lot slower than normal memory. When your system has enough memory it simply doesn't need the added swap space, though the kernel can still make use of it. On low-memory systems swap space is essential for anything to function, but once you have enough memory, the usefulness of swap starts to drop. Fine-tuning the swap parameters can help increase performance on systems that only sometimes need to use swap. On systems that never need it, having swap can even hurt performance: when unnecessary swapping occurs, both throughput and responsiveness suffer.
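The main tunable on 2.6 kernels is vm.swappiness (0-100; lower values make the kernel more reluctant to swap). A hedged sketch, where the value 10 is only an example, not a recommendation:

# see the current setting
cat /proc/sys/vm/swappiness
# make the kernel less eager to swap, for this session only
sysctl -w vm.swappiness=10
# add "vm.swappiness = 10" to /etc/sysctl.conf to make the change permanent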

See swap for information about swap, and how to fine tune the swap parameters.

Tuning the boot process

At system startup various things are done that consume CPU time, and with a little tuning it is possible to make the system boot faster. For example, for pcmcia and hotplug to work, it is common practice for a startup script to call the kudzu program several times in a row.

Next is pausing in the boot process with the sleep command. If one counts the number of "sleep 1" calls inside the /etc/rc.d/init.d directory, one finally understands why that P4 3.0 GHz hotrod is still slower at booting up than my old Pentium P75 running Red Hat 4.2:

[root@tinker init.d]# grep sleep * | wc -l
31
[root@tinker init.d]#

So booting might take at least 30 seconds extra due to sleep commands alone. Also check /etc/init.d/functions, where the sleep command is used inside a loop! Why not use the usleep command instead? It takes microseconds as its time unit rather than whole seconds. One solution is to replace the sleep calls in the boot scripts with usleep and append three zeros to the number, turning each one-second pause into a one-millisecond one:

sleep 1   ==>   usleep 1000

(1 second ==> 1 millisecond)

The sleep delays in your boot scripts will then be a thousand times shorter; how much real boot time that saves depends on how many of those pauses were actually unnecessary.
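As a sketch of how that substitution could be applied in bulk (back up the scripts first, and keep in mind that some of those pauses exist because a service genuinely needs time to come up):

# back up the init scripts before touching them
cp -a /etc/rc.d/init.d /etc/rc.d/init.d.bak
# replace "sleep N" with "usleep N000" in every script (GNU sed)
sed -i 's/\bsleep \([0-9][0-9]*\)\b/usleep \1000/g' /etc/rc.d/init.d/*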

It's commonly advised to tweak your OS (notably Ubuntu) to enable concurrent booting, by changing the "CONCURRENCY" setting, to take advantage of dual-core processors. However, this should not be necessary, as it is supposed to be auto-detected on newer versions of Ubuntu (and other distros?). Parallel booting is another term for this. Slight improvements can also be seen on single-core/single-processor systems; however, if an error occurs during boot, concurrency can make it harder to see what caused it, or where it happened.
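As a rough sketch only: on Debian-derived sysvinit systems the setting has historically lived in /etc/init.d/rc (the location is an assumption and varies between releases; check your distribution's documentation):

# see the current setting
grep ^CONCURRENCY /etc/init.d/rc
# changing CONCURRENCY=none to CONCURRENCY=shell (or startpar) enables parallel boot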

See also