[LUGOS] cpu load
Mitja Zabukovec
mitja.zabukovec at rs-pi.com
Fri Jul 25 16:23:28 CEST 2003
On Friday, 25 July 2003 at 16:12, Iztok Umek wrote:
> Mitja Zabukovec <mitja.zabukovec at rs-pi.com> said:
> > hi,
> >
> > on a freshly installed machine that runs nothing but Samba,
> > the CPU load sits at around 1.00 the whole time.
> > top does not show any process that would be causing this load.
> > The only atypical thing about this machine is a HW IDE RAID controller,
> > a HighPoint HPT37x.
> > The driver comes from the manufacturer, and its installation went without any problems.
> >
> > Does anyone have an idea why the load is this high, and whether it might
> > be caused by the RAID controller?
>
> What does top tell you? I am mainly interested in the "header".
>
> What about iostat?
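
A side note on the header: the Linux load average counts not only runnable tasks but also tasks stuck in uninterruptible sleep (state "D" in top/ps), so a CPU that is 100% idle can still show a load of 1.00 if one task is permanently blocked inside the kernel. The same numbers top prints in its header can also be read directly, assuming a standard Linux /proc:

    cat /proc/loadavg
    # prints the 1/5/15-minute load averages, running/total tasks,
    # and the PID of the most recently created process
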
top (this is right after a reboot, so the 5- and 15-minute averages have not reached 1 yet):
11:43am up 6 min, 1 user, load average: 1.00, 0.76, 0.36
43 processes: 41 sleeping, 2 running, 0 zombie, 0 stopped
CPU states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle
Mem: 514516K av, 59420K used, 455096K free, 0K shrd, 11752K buff
Swap: 1052216K av, 0K used, 1052216K free 27328K cached
PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
1 root 15 0 240 240 204 S 0.0 0.0 0:03 init
2 root 15 0 0 0 0 SW 0.0 0.0 0:00 keventd
3 root 15 0 0 0 0 SW 0.0 0.0 0:00 kapmd
4 root 34 19 0 0 0 SWN 0.0 0.0 0:00 ksoftirqd_CPU0
5 root 25 0 0 0 0 SW 0.0 0.0 0:00 kswapd
6 root 25 0 0 0 0 SW 0.0 0.0 0:00 bdflush
7 root 15 0 0 0 0 SW 0.0 0.0 0:00 kupdated
8 root 25 0 0 0 0 SW 0.0 0.0 0:00 kinoded
10 root 25 0 0 0 0 SW 0.0 0.0 0:00 mdrecoveryd
13 root 25 0 0 0 0 DW 0.0 0.0 0:00 hpt_wt
16 root 15 0 0 0 0 SW 0.0 0.0 0:00 kreiserfsd
70 root 0 -20 0 0 0 SW< 0.0 0.0 0:00 lvm-mpd
111 root 15 0 0 0 0 SW 0.0 0.0 0:00 pagebuf_daemon
323 root 15 0 0 0 0 SW 0.0 0.0 0:00 eth0
364 root 15 0 600 600 492 S 0.0 0.1 0:00 syslogd
367 root 15 0 1028 1028 412 S 0.0 0.1 0:00 klogd
403 root 17 0 0 0 0 SW 0.0 0.0 0:00 khubd
745 bin 18 0 388 388 312 S 0.0 0.0 0:00 portmap
771 root 15 0 1636 1632 1468 S 0.0 0.3 0:00 nmbd-classic
772 root 17 0 1360 1360 1248 S 0.0 0.2 0:00 nmbd-classic
804 at 19 0 544 544 460 S 0.0 0.1 0:00 atd
815 root 15 0 1568 1568 1400 S 0.0 0.3 0:00 sshd
1024 root 15 0 872 872 728 S 0.0 0.1 0:00 xinetd
1123 root 15 0 700 700 588 S 0.0 0.1 0:00 nscd
1124 root 15 0 700 700 588 S 0.0 0.1 0:00 nscd
1125 root 15 0 700 700 588 S 0.0 0.1 0:00 nscd
1126 root 15 0 700 700 588 S 0.0 0.1 0:00 nscd
1127 root 15 0 700 700 588 S 0.0 0.1 0:00 nscd
1128 root 15 0 700 700 588 S 0.0 0.1 0:00 nscd
1129 root 15 0 700 700 588 S 0.0 0.1 0:00 nscd
1183 root 15 0 1412 1412 1104 S 0.0 0.2 0:00 master
1199 postfix 15 0 1380 1380 1084 S 0.0 0.2 0:00 pickup
1200 postfix 15 0 1712 1712 1344 S 0.0 0.3 0:00 qmgr
1223 root 15 0 596 596 508 S 0.0 0.1 0:00 cron
1242 root 16 0 492 492 424 S 0.0 0.0 0:00 mingetty
1243 root 16 0 492 492 424 S 0.0 0.0 0:00 mingetty
1244 root 16 0 492 492 424 S 0.0 0.0 0:00 mingetty
1245 root 16 0 492 492 424 S 0.0 0.0 0:00 mingetty
1246 root 16 0 492 492 424 S 0.0 0.0 0:00 mingetty
1270 root 16 0 492 492 424 S 0.0 0.0 0:00 mingetty
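
The one process above that is not in an ordinary sleep state is hpt_wt (PID 13) from the HighPoint driver: its STAT is DW, i.e. uninterruptible sleep. Tasks in uninterruptible sleep are counted into the load average even at 0.0% CPU, so a single such kernel thread would explain a constant load of 1.00 on an otherwise idle machine. A quick way to list such tasks (a rough one-liner; the format keywords may need adjusting for other ps versions):

    ps -eo pid,stat,comm | awk '$2 ~ /^D/'    # tasks in uninterruptible (D) sleep
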
vmstat 1:
   procs                      memory      swap          io     system         cpu
 r  b  w   swpd   free   buff  cache   si   so    bi    bo   in    cs  us  sy  id
 0  0  2      0 455192  11752  27344    0    0    79    22  121    61   3   3  94
 0  0  1      0 455192  11752  27344    0    0     0     0  106    10   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  107    10   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  104     8   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  103    10   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  105     6   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  107    10   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  105     6   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  107    10   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  103     8   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  106     8   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  106    12   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  103    12   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  104     8   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  103    10   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  103     6   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  103    10   0   0 100
 0  0  1      0 455192  11752  27344    0    0     0     0  106     8   0   1  99
 0  0  1      0 455192  11752  27344    0    0     0     0  106    10   0   0 100
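
iostat was also asked for above; assuming the sysstat package is installed, a basic run like the one below (interval and count chosen arbitrarily) would show whether the controller is actually moving any data while the load sits at 1.00:

    iostat 5 6    # 6 reports of CPU and per-device transfer statistics, 5 seconds apart
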
Regards,
Mitja