LFCE.2 Advanced Network and System Administration
LFCE: Advanced Network and System Administration
[toc]
Check the init daemon; starting and stopping services
server0:
1. the command used to switch runlevels on init-based systems — check what /sbin/init really is
$ ls -la /sbin/init
lrwxrwxrwx. 1 root root 22 Apr 9 17:52 /sbin/init -> ../lib/systemd/systemd
// centos:
// init is a symbolic link back to systemd.
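Since the heading is about switching runlevels, here is a minimal sketch of the systemd-era equivalents (target names as found on a stock CentOS install):
// switch the running system to another target ("runlevel") immediately
$ sudo systemctl isolate multi-user.target   // roughly the old runlevel 3
// the legacy commands still work because they are forwarded to systemd
$ sudo init 3
$ runlevel                                   // prints the previous and current runlevel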
2. the init process is the first process in user space.
check the process ID.
$ pstree -np
// systemd is PID 1.
systemd(1)─┬─systemd-journal(735)
├─systemd-udevd(764)
├─alsactl(986)
├─dbus-daemon(989)───{dbus-daemon}(1022)
├─systemd-machine(990)
├─chronyd(992)
├─firewalld(1036)─┬─{firewalld}(1907)
│ └─{firewalld}(4293)
├─systemd-logind(1095)
├─accounts-daemon(1096)─┬─{accounts-daemon}(1097)
│ └─{accounts-daemon}(1100)
├─NetworkManager(1110)─┬─{NetworkManager}(1119)
│ └─{NetworkManager}(1121)
├─sshd(1123)
├─cupsd(1125)
├─libvirtd(1127)─┬─{libvirtd}(1151)
│ ├─{libvirtd}(1384)
│ └─{libvirtd}(2051)
├─httpd(1128)─┬─httpd(1165)
3. manage the service with systemctl
$ systemctl stop httpd
$ systemctl start httpd
$ systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor prese>...)
Active: inactive (dead) since Sun 2020-04-19 14:21:47 EDT; 4s ago
// or
Active: reloading (reload) since Sun 2020-04-19 14:22:16 EDT; 2s ago
Docs: man:httpd.service(8)
Main PID: 8944 (httpd)
Status: "Reading configuration..."
Tasks: 1 (limit: 14155)
Memory: 4.3M
CGroup: /system.slice/httpd.service
└─8944 /usr/sbin/httpd -DFOREGROUND
Apr 19 14:22:06 server0 systemd[1]: Starting The Apache HTTP Server...
Apr 19 14:22:16 server0 httpd[8944]: AH00558: httpd: Could not reliably determi
Apr 19 14:22:16 server0 systemd[1]: Started The Apache HTTP Server.
check the Unit Files and configure the unit file precedence
server0:
1. start at system boot: enable/disable
$ systemctl status httpd
● httpd.service - The Apache HTTP Server
// (location, autostart, ...)
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor prese>)
Active: reloading (reload) since Sun 2020-04-19 14:22:16 EDT; 2s ago
Docs: man:httpd.service(8)
Main PID: 8944 (httpd)
$ systemctl disable httpd
Removed /etc/systemd/system/multi-user.target.wants/httpd.service.
$ systemctl enable httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
- list the 3 locations of unit files
$ cd /usr/lib/systemd/system
[server1@server0 system]$ ls
accounts-daemon.service plymouth-kexec.service
alsa-restore.service plymouth-poweroff.service
alsa-state.service plymouth-quit.service
anaconda-direct.service plymouth-quit-wait.service
anaconda-nm-config.service plymouth-read-write.service
anaconda-noshell.service plymouth-reboot.service
anaconda-pre.service plymouth-start.service
anaconda.service plymouth-switch-root.service
$ cd /var/run/systemd/system
$ ls -lat
total 4
drwxr-xr-x. 17 root root 420 Apr 19 16:19 ..
drwxr-xr-x. 2 root root 40 Apr 18 17:17 .
drwxr-xr-x. 2 root root 140 Apr 18 19:17 session-1.scope
drwxr-xr-x. 2 root root 70 Apr 18 19:17 session-1.scope.d
// check the time that each user logged in to the system
$ w
16:23:36 up 9:05, 1 user, load average: 0.02, 0.02, 0.00
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
server1 tty2 tty2 Sat17 23:05m 1:26 0.00s /usr/libexec/gs
// when enabling or disabling services, this is the place where the symlink is written.
$ cd /etc/systemd/system
$ ls
basic.target.wants nfs-blkmap.service.requires
bluetooth.target.wants nfs-idmapd.service.requires
dbus-org.bluez.service nfs-mountd.service.requires
dbus-org.fedoraproject.FirewallD1.service nfs-server.service.requires
dbus-org.freedesktop.Avahi.service printer.target.wants
dbus-org.freedesktop.ModemManager1.service remote-fs.target.wants
dbus-org.freedesktop.nm-dispatcher.service rpc-gssd.service.requires
dbus-org.freedesktop.resolve1.service rpc-statd-notify.service.requires
dbus-org.freedesktop.timedate1.service rpc-statd.service.requires
default.target sockets.target.wants
display-manager.service sysinit.target.wants
getty.target.wants syslog.service
graphical.target.wants systemd-timedated.service
multi-user.target.wants timers.target.wants
network-online.target.wants vmtoolsd.service.requires
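To see which of these locations actually supplies the unit file in effect, systemctl can print it directly; a small sketch, using httpd as the example unit:
$ systemctl cat httpd
// the first line of output is a comment giving the path of the winning unit file,
// followed by its contents and any drop-in fragments that override it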
- ask systemctl about the Service
// list all of the loaded units on this system — a quick status of the system
$ systemctl
// include inactive and not-found units as well
$ systemctl --all
UNIT LOAD ACTIVE SUB DESCRIPTION
● boot.automount not-found inactive dead boot.automount
proc-sys-fs-binfmt_misc.automount loaded active waiting Arbitrary Exec
dev-block-8:2.device loaded active plugged VBOX_HARDDISK 2
dev-cdrom.device loaded active plugged VBOX_CD-ROM VBox_GAs_6
dev-cl-root.device loaded active plugged /dev/cl/root
dev-cl-swap.device loaded active plugged /dev/cl/swap
// ask systemctl what it knows about particular unit types
// ask for socket
$ systemctl --type=socket
UNIT LOAD ACTIVE SUB DESCRIPTION
avahi-daemon.socket loaded active running Avahi mDNS/DNS-SD Stack A>
cups.socket loaded active running CUPS Scheduler
dbus.socket loaded active running D-Bus System Message Bus >
dm-event.socket loaded active listening Device-mapper event daemo>
iscsid.socket loaded active listening Open-iSCSI iscsid Socket
iscsiuio.socket loaded active listening Open-iSCSI iscsiuio Socket
lvm2-lvmpolld.socket loaded active listening LVM2 poll daemon socket
multipathd.socket loaded active listening multipathd control socket
rpcbind.socket loaded active running RPCbind Server Activation>
sssd-kcm.socket loaded active running SSSD Kerberos Cache Manag>
systemd-coredump.socket loaded active listening Process Core Dump Socket
systemd-initctl.socket loaded active listening initctl Compatibility Nam>
systemd-journald-dev-log.socket loaded active running Journal Socket (/dev/lo>)
systemd-journald.socket loaded active running Journal Socket
systemd-udevd-control.socket loaded active running udev Control Socket
systemd-udevd-kernel.socket loaded active running udev Kernel Socket
virtlockd.socket loaded active listening Virtual machine lock mana>
virtlogd.socket loaded active listening Virtual machine log manag>
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
18 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
configure the unit file precedence and Automatic Restart
server0:
1. check status, find the location
$ systemctl status httpd
● httpd.service - The Apache HTTP Server
// (location, autostart, ...)
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor prese>)
Active: reloading (reload) since Sun 2020-04-19 14:22:16 EDT; 2s ago
Docs: man:httpd.service(8)
Main PID: 8944 (httpd)
================================================================
2. copy the file to the higher-precedence location
$ sudo cp /usr/lib/systemd/system/httpd.service /etc/systemd/system
================================================================
3. notify systemd about the new file.
$ systemctl disable httpd
$ systemctl enable httpd
or
$ sudo systemctl daemon-reload
$ systemctl restart httpd
================================================================
4. check it: the loaded location has changed.
$ systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/etc/systemd/system/httpd.service; enabled; vendor preset: d>
Active: reloading (reload) since Sun 2020-04-19 16:46:08 EDT; 6s ago
================================================================
5. edit this copy without impacting the base file, which stays available for restore or reference.
note: automatic restart can make failures harder to troubleshoot
$ vi /etc/systemd/system/httpd.service
// edit the file
# [Service]
# Environment=OPTIONS=-DMY_DEFINE
[Unit]
Description=The Apache HTTP Server
Wants=httpd-init.service
// dependencies: httpd must start after all of these
// targets, which provide the services (requirements) httpd needs in order to start.
// a target is a system state — roughly the systemd equivalent of a runlevel
After=network.target remote-fs.target nss-lookup.target httpd-init.service
Documentation=man:httpd.service(8)
[Service]
// add the line
// restart only when there is a crash or an unclean process exit.
!!!!!
Restart=on-failure
// tells the service to notify systemd when finished initializing and updated state
Type=notify
// set environment variables for the service.
Environment=LANG=C
// ExecStart: the actual binary program and its required command line options.
ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND
// ExecReload: the command and options to gracefully reload the daemon
ExecReload=/usr/sbin/httpd $OPTIONS -k graceful
// ExecStop: calls kill with the MAINPID variable.
// This stops the main httpd process, which in turn shuts down all of its child processes.
// we don't just send SIGTERM to kill everything immediately; instead the kill signal is set to SIGWINCH (below), which tells httpd running in the foreground to stop gracefully. The final option in the [Service] section is PrivateTmp, set to true.
ExecStop=/bin/kill -WINCH ${MAINPID}
# Send SIGWINCH for graceful stop
KillSignal=SIGWINCH
KillMode=mixed
// tells systemd to provide a private temp directory for this process
// a special security feature
PrivateTmp=true
[Install]
// enabling httpd places a symlink in the multi-user target's wants directory;
// disabling it removes that symbolic link
WantedBy=multi-user.target
================================================================
6. reload so systemd picks up the edit
$ systemctl disable httpd
$ systemctl enable httpd
or
$ sudo systemctl daemon-reload
$ systemctl restart httpd
================================================================
7. kill it, then confirm the automatic restart (note the new PID)
$ systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/etc/systemd/system/httpd.service; enabled; vendor preset: d>)
Active: active (running) since Sun 2020-04-19 17:01:42 EDT; 23s ago)
Docs: man:httpd.service(8)
Main PID: 12437 (httpd)
$ sudo killall httpd
$ ps -aux | grep httpd
root 12692 0.1 0.4 280264 11024 ? Ss 17:02 0:00 /usr/sbin/httpd -DFOREGROUND
apache 12693 0.0 0.3 292480 8456 ? S 17:02 0:00 /usr/sbin/httpd -DFOREGROUND
apache 12694 0.0 0.6 1481388 13800 ? Sl 17:02 0:00 /usr/sbin/httpd -DFOREGROUND
apache 12695 0.0 0.5 1350252 11752 ? Sl 17:02 0:00 /usr/sbin/httpd -DFOREGROUND
apache 12696 0.0 0.5 1350252 11752 ? Sl 17:02 0:00 /usr/sbin/httpd -DFOREGROUND
server1 12909 0.0 0.0 12108 1056 pts/0 R+ 17:02 0:00 grep --color=auto httpd
$ sudo systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/etc/systemd/system/httpd.service; enabled; vendor preset: d>)
Active: active (running) since Sun 2020-04-19 17:02:25 EDT; 29s ago)
Docs: man:httpd.service(8)
Process: 12687 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=1/FA>)
Main PID: 12692 (httpd)
================================================================
see all of the unit settings,
including the defaults, that define this service unit.
$ systemctl show httpd
find out what a particular setting means
$ man systemd.unit
================================================================
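An alternative to copying the whole unit file into /etc/systemd/system is a drop-in override that changes only the keys you need; a sketch of the same Restart change done that way:
$ sudo systemctl edit httpd
// opens an editor on an override.conf drop-in for httpd.service; add only the changed keys:
// [Service]
// Restart=on-failure
$ sudo systemctl daemon-reload
$ systemctl restart httpd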
systemd: target
server0:
================================================================
1. list all targets, including inactive ones
$ systemctl list-units --type=target --all
UNIT LOAD ACTIVE SUB DESCRIPTION
basic.target loaded active active Basic System
cryptsetup.target loaded active active Local Encrypted Volumes
emergency.target loaded inactive dead Emergency Mode
// loaded: the system knows about the unit, but it is not active.
2. list only the active targets
$ systemctl list-units --type=target
$ systemctl list-units --type target
UNIT LOAD ACTIVE SUB DESCRIPTION
basic.target loaded active active Basic System
cryptsetup.target loaded active active Local Encrypted Volumes
getty.target loaded active active Login Prompts
graphical.target loaded active active Graphical Interface
local-fs-pre.target loaded active active Local File Systems (Pre)
local-fs.target loaded active active Local File Systems
multi-user.target loaded active active Multi-User System
network-online.target loaded active active Network is Online
network-pre.target loaded active active Network (Pre)
network.target loaded active active Network
nfs-client.target loaded active active NFS client services
nss-user-lookup.target loaded active active User and Group Name Lookups
paths.target loaded active active Paths
remote-fs-pre.target loaded active active Remote File Systems (Pre)
remote-fs.target loaded active active Remote File Systems
rpc_pipefs.target loaded active active rpc_pipefs.target
rpcbind.target loaded active active RPC Port Mapper
slices.target loaded active active Slices
sockets.target loaded active active Sockets
sound.target loaded active active Sound Card
sshd-keygen.target loaded active active sshd-keygen.target
swap.target loaded active active Swap
sysinit.target loaded active active System Initialization
timers.target loaded active active Timers
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
24 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
================================================================
3. targets
when a service is enabled,
systemd places a symbolic link in that target's directory under /etc/systemd/system.
$ cd /etc/systemd/system/
$ ls
basic.target.wants nfs-blkmap.service.requires
bluetooth.target.wants nfs-idmapd.service.requires
dbus-org.bluez.service nfs-mountd.service.requires
dbus-org.fedoraproject.FirewallD1.service nfs-server.service.requires
dbus-org.freedesktop.Avahi.service printer.target.wants
dbus-org.freedesktop.ModemManager1.service remote-fs.target.wants
dbus-org.freedesktop.nm-dispatcher.service rpc-gssd.service.requires
dbus-org.freedesktop.resolve1.service rpc-statd-notify.service.requires
dbus-org.freedesktop.timedate1.service rpc-statd.service.requires
default.target sockets.target.wants
display-manager.service sysinit.target.wants
getty.target.wants syslog.service
graphical.target.wants systemd-timedated.service
httpd.service timers.target.wants
multi-user.target.wants vmtoolsd.service.requires
network-online.target.wants
the multi-user target is a functioning, network-enabled system state.
inside its directory are
symlinks to all of the other unit files that make up this system state —
all the things that need to be running when the system is in the multi-user target.
$ ls multi-user.target.wants
atd.service ksm.service rsyslog.service
auditd.service ksmtuned.service smartd.service
avahi-daemon.service libstoragemgmt.service sshd.service
chronyd.service libvirtd.service sssd.service
crond.service mcelog.service tuned.service
cups.path mdmonitor.service vboxadd.service
dnf-makecache.timer ModemManager.service vboxadd-service.service
firewalld.service NetworkManager.service vdo.service
httpd.service nfs-client.target vmtoolsd.service
irqbalance.service remote-fs.target
kdump.service rpcbind.service
look at those symlinks:
the service/unit files are symbolically linked to their destination in /usr/lib/systemd/system.
$ ls -lah multi-user.target.wants
total 8.0K
drwxr-xr-x. 2 root root 4.0K Apr 19 16:19 .
drwxr-xr-x. 21 root root 4.0K Apr 19 17:01 ..
lrwxrwxrwx. 1 root root 35 Apr 13 21:14 atd.service -> /usr/lib/systemd/system/atd.service
lrwxrwxrwx. 1 root root 41 Apr 13 21:14 firewalld.service -> /usr/lib/systemd/system/firewalld.service
// after copying the unit file, daemon-reload alone
// does not update this symlink — it still points at the old location.
lrwxrwxrwx. 1 root root 37 Apr 19 16:19 httpd.service -> /usr/lib/systemd/system/httpd.service
..
// a reboot will update it automatically,
// or a disable followed by enable fixes it right away.
$ systemctl disable httpd
Removed /etc/systemd/system/multi-user.target.wants/httpd.service.
$ systemctl enable httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /etc/systemd/system/httpd.service.
// check
$ ls -lah multi-user.target.wants | grep httpd
lrwxrwxrwx. 1 root root 33 Apr 19 17:35 httpd.service -> /etc/systemd/system/httpd.service
================================================================
4. set the default target on the system — the state the computer boots into
$ systemctl set-default target_name
$ systemctl reboot
$ init 6
// check the default boot target
$ systemctl get-default
graphical.target
$ ls -lah /etc/systemd/system/default.target
lrwxrwxrwx. 1 root root 36 Apr 13 21:19 /etc/systemd/system/default.target -> /lib/systemd/system/graphical.target
$ systemctl set-default rescue.target
// removes the old default.target symlink and creates a new one pointing at rescue.target
$ systemctl set-default graphical.target
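As a reference, the old SysV runlevels map onto targets via alias units; a quick way to see the mapping on this kind of system:
$ ls -l /usr/lib/systemd/system/runlevel3.target /usr/lib/systemd/system/runlevel5.target
// expected: runlevel3.target -> multi-user.target, runlevel5.target -> graphical.target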
systemd: control group
server0:
check out control groups in action:
see how they are used to organize the currently running processes on the system.
- slice: grouping of scopes and services together.
- service: a collection of processes started by systemd.
- scope: a group of processes started externally to systemd.
$ systemd-cgls
Control group /:
-.slice
├─init.scope // PID 1, or systemd itself
│ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17
├─user.slice // where to find all logged-in users and their processes.
│ ├─user-42.slice // a specific user's slice, the processes for user UID 42
│ │ ├─session-c1.scope
│ │ │ ├─2464 /usr/bin/gnome-shell
│ │ └─user@42.service
│ │ ├─xdg-permission-store.service
│ │ │ └─2529 /usr/libexec/xdg-permission-store
│ └─user-1000.slice // current user
│ ├─user@1000.service
│ └─session-2.scope
...
└─system.slice
├─// all the processes owned by the system and started up by systemd.
├─rngd.service // the unit name
│ └─979 /sbin/rngd -f // the binary/actual executable for that service.
├─libstoragemgmt.service
│ └─1002 /usr/bin/lsmd -d
================================================================
new window:
open a ssh connection, start up another separate session
$ ssh server1@localhost
server1@localhosts password:
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Sun Apr 19 17:39:58 2020
// check out control groups in action again
$ systemd-cgls
Control group /:
-.slice
├─init.scope // PID 1, or systemd itself
│ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17
├─user.slice // where to find all logged-in users and their processes.
│ ├─user-42.slice // a specific user's slice, the processes for user UID 42
│ │ ├─session-c1.scope
│ │ └─user@42.service
│ └─user-1000.slice
│ ├─user@1000.service
│ ├─session-2.scope
│ └─session-4.scope
│ ├─4905 sshd: server1 [priv]
│ ├─4910 sshd: server1@pts/2
│ └─4915 -bash
...
└─system.slice
session-4.scope is the newly added session
- a scope that collects the externally started processes for this user, underneath the user's slice,
- logically grouping all of the user's processes under one slice.
================================================================
new window:
open a ssh connection, again start up another separate session
ssh as root user
$ ssh root@localhost
root@localhosts password:
Activate the web console with: systemctl enable --now cockpit.socket
Last login: Sun Apr 19 14:21:10 2020
// check out control groups in action again
// new user show up
$ systemd-cgls
Control group /:
-.slice
├─init.scope
│ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17
├─user.slice
│ ├─user-0.slice
│ │ ├─user@0.service
│ │ └─session-5.scope
│ ├─user-42.slice
│ └─user-1000.slice
...
└─system.slice
================================================================
$ systemd-cgtop
// shows all of the currently running control groups on the system,
// ordered by resource consumption
Control Group Tasks %CPU Memory Input/s Output/s
/ 772 2.0 1.8G - -
/system.slice 328 - 613.0M - -
/system.slice/ModemManager.service 3 - 7.7M - -
/system.slice/NetworkManager.service 3 - 8.5M - -
/system.slice/accounts-daemon.service 3 - 4.0M - -
/system.slice/alsa-state.service 1 - 388.0K - -
/system.slice/atd.service 1 - 480.0K - -
/system.slice/auditd.service 4 - 4.6M - -
/system.slice/avahi-daemon.service 2 - 1.6M - -
/system.slice/bolt.service 3 - 1.9M - -
/system.slice/boot.mount - - 200.0K - -
/system.slice/chronyd.service 1 - 1.6M - -
/system.slice/colord.service 3 - 3.8M - -
/system.slice/crond.service 1 - 1.6M - -
/system.slice/cups.service 1 - 3.3M - -
/system.slice/dbus.service 2 - 5.4M - -
/system.slice/dev-hugepages.mount - - 796.0K - -
/system.slic…ev-mapper-cl\x2dswap.swap - - 412.0K - -
/system.slice/dev-mqueue.mount - - 48.0K - -
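To drill into a single service's control group rather than the whole tree, both of these work (assuming httpd is running):
$ systemd-cgls /system.slice/httpd.service
// or look at the CGroup field shown by
$ systemctl status httpd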
Limit access to available system resources with policy.
to do that persistently, we edit the unit file for the service.
server0:
pick on httpd and limit the total memory it can allocate.
what is the parameter we are looking for?
- Google it
- search through the man pages
- ask systemd itself
$ systemctl show httpd
Type=notify
Restart=on-failure
NotifyAccess=main
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Sun 2020-04-19 03:35:25 EDT
WatchdogTimestampMonotonic=19477893
PermissionsStartOnly=no
RootDirectoryStartOnly=no
$ systemctl show httpd | grep -i memo
MemoryCurrent=32940032
MemoryAccounting=yes
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity // no limit so far
MemoryDenyWriteExecute=no
// edit it
$ sudo vi /etc/systemd/system/httpd.service
// add this line in the [Service] section
MemoryLimit=512M
// confirm the change
$ systemctl daemon-reload
$ systemctl restart httpd
// check
// show the byte value
$ systemctl show httpd | grep -i memo
MemoryCurrent=33988608
MemoryAccounting=yes
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=536870912
MemoryDenyWriteExecute=no
// query a single property (note the typo: the unit is httpd, not http)
$ systemctl show http -p MemoryLimit
MemoryLimit=infinity
$ systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/etc/systemd/system/httpd.service; enabled; vendor preset: d>)
Active: active (running) since Sun 2020-04-19 20:39:59 EDT; 3min 48s ago
Docs: man:httpd.service(8)
Process: 5577 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUC>
Main PID: 5590 (httpd)
Status: "Running, listening on: port 80"
Tasks: 213 (limit: 14155)
Memory: 32.4M (limit: 512.0M) // here, the limit
CGroup: /system.slice/httpd.service
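These cgroup resource settings can also be changed without hand-editing the unit file; systemctl set-property writes a drop-in and applies it to the running cgroup. A sketch (CPUQuota is just another example property):
$ sudo systemctl set-property httpd.service MemoryLimit=512M
// persists as a drop-in (location varies by systemd version) and applies immediately
$ sudo systemctl set-property --runtime httpd.service CPUQuota=50%
// --runtime: the change lasts only until the next reboot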
Logs with journalctl
server0:
journald:
- systemd's logging mechanism.
- by default, its data does not persist between reboots.
- below: configure the data to stick around between reboots.
$ sudo mkdir -p /var/log/journal
$ sudo systemctl restart systemd-journald
$ cd /var/log/journal/
$ ls
a1293fe73fc04f3f8762991ccbf5ce93
$ cd a1293fe73fc04f3f8762991ccbf5ce93/
$ ls
system.journal
// a binary file
$ sudo cat system.journal
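Creating /var/log/journal works because the default Storage=auto setting uses that directory when it exists; the other option is to set persistence explicitly in journald's own config file (a sketch):
$ sudo vi /etc/systemd/journald.conf
// [Journal]
// Storage=persistent
$ sudo systemctl restart systemd-journald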
$ sudo journalctl
// each entry: date and time, host, and the process that put the entry into the journal
-- Logs begin at Sun 2020-04-19 03:35:07 EDT, end at Sun 2020-04-19 20:53:47 ED>
Apr 19 03:35:07 server1 kernel: Linux version 4.18.0-147.8.1.el8_1.x86_64 (mock>)
Apr 19 03:35:07 server1 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4>
Apr 19 03:35:07 server1 kernel: x86/fpu: Supporting XSAVE feature 0x001: x87 f>
Apr 19 03:35:07 server1 kernel: x86/fpu: Supporting XSAVE feature 0x002: SSE r>
Apr 19 03:35:07 server1 kernel: x86/fpu: Supporting XSAVE feature 0x004: AVX r>
Apr 19 03:35:07 server1 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2>
Apr 19 03:35:07 server1 systemd[1]: systemd 239 running in system mode. (+PAM +>)
Apr 19 03:35:07 server1 systemd[1]: Detected virtualization oracle.
Apr 19 03:35:07 server1 systemd[1]: Detected architecture x86-64.
// systemd is the first user-space process outside of the kernel — PID 1
1. streaming output
// continually follow the bottom of the journal
$ sudo journalctl -f
-- Logs begin at Sun 2020-04-19 03:35:07 EDT. --
Apr 19 20:55:14 server0 org.gnome.Shell.desktop[2850]: Window manager warning: last_user_time (11828777) is greater than comparison timestamp (11828746). This most likely represents a buggy client sending inaccurate timestamps in messages such as ..
Apr 19 20:55:14 server0 org.gnome.Shell.desktop[2850]: Window manager warning: W1 appears to be one of the offending windows with a timestamp of 11828777. Working around...
Apr 19 20:55:15 server0 org.gnome.Shell.desktop[2850]: Window manager warning: last_user_time (11829370) is greater than comparison timestamp (11829340). This most likely represents a buggy client sending inaccurate timestamps in messages such as . Trying to work around...
Apr 19 20:55:15 server0 org.gnome.Shell.desktop[2850]: Window manager warning: W1 appears to be one of the offending windows with a timestamp of 11829370. Working around...
Apr 19 20:55:17 server0 org.gnome.Shell.desktop[2850]: Window manager warning: last_user_time (11832100) is greater than comparison timestamp (11832069). This most likely represents a buggy client sending inaccurate timestamps in messages such as
Apr 19 20:55:17 server0 org.gnome.Shell.desktop[2850]: Window manager warning: W1 appears to be one of the offending windows with a timestamp of 11832100. Working around...
Apr 19 20:58:01 server0 sudo[6187]: pam_unix(sudo:session): session closed for user root
Apr 19 20:58:05 server0 sudo[6233]: server1 : TTY=pts/0 ; PWD=/var/log/journal/a1293fe73fc04f3f8762991ccbf5ce93 ; USER=root ; COMMAND=/bin/journalctl -f
Apr 19 20:58:05 server0 sudo[6233]: pam_systemd(sudo:session): Cannot create session: Already running in a session or user slice
Apr 19 20:58:05 server0 sudo[6233]: pam_unix(sudo:session): session opened for user root by (uid=0)
pipe this into grep if we want to restrict the output to a particular process, or to whatever searchable parameter we're looking for.
2. this is all structured data; look at what's going on behind the scenes
$ sudo journalctl -o verbose
// the data and attributes of each log entry as stored inside journald;
// these attributes can be used to query the journal.
-- Logs begin at Sun 2020-04-19 03:35:07 EDT, end at Sun 2020-04-19 21:01:11 ED>
Sun 2020-04-19 03:35:07.344326 EDT [s=4cf6636fb7ac4a5b8df72f582f3ade7d;i=1;b=9d>]
SOURCE_MONOTONIC_TIMESTAMP=0
TRANSPORT=kernel
PRIORITY=5
SYSLOG_FACILITY=0
SYSLOG_IDENTIFIER=kernel
MESSAGE=Linux version 4.18.0-147.8.1.el8_1.x86_64 (mockbuild@kbuilder.bsys.>)
BOOT_ID=9d29ca0bf7a84f6abefe646cb1004555
MACHINE_ID=a1293fe73fc04f3f8762991ccbf5ce93
HOSTNAME=server1
Sun 2020-04-19 03:35:07.344351 EDT [s=4cf6636fb7ac4a5b8df72f582f3ade7d;i=2;b=9d>]
SOURCE_MONOTONIC_TIMESTAMP=0
TRANSPORT=kernel
SYSLOG_FACILITY=0
SYSLOG_IDENTIFIER=kernel
BOOT_ID=9d29ca0bf7a84f6abefe646cb1004555
MACHINE_ID=a1293fe73fc04f3f8762991ccbf5ce93
HOSTNAME=server1
PRIORITY=6
MESSAGE=Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-147.8.1.el8_1.>)
Sun 2020-04-19 03:35:07.344363 EDT [s=4cf6636fb7ac4a5b8df72f582f3ade7d;i=3;b=9d>]
SOURCE_MONOTONIC_TIMESTAMP=0
// can use these attributes to query the journal
$ sudo journalctl _UID=1000
-- Logs begin at Sun 2020-04-19 03:35:07 EDT, end at Sun 2020-04-19 21:02:45 ED>_
Apr 19 17:39:58 server0 systemd[2748]: Started Mark boot as successful after th>
Apr 19 17:39:58 server0 systemd[2748]: Reached target Timers.
Apr 19 17:39:58 server0 systemd[2748]: Starting D-Bus User Message Bus Socket.
Apr 19 17:39:58 server0 systemd[2748]: Listening on Sound System.
Apr 19 17:39:58 server0 systemd[2748]: Reached target Paths.
Apr 19 17:39:58 server0 systemd[2748]: Listening on Multimedia System.
Apr 19 17:39:58 server0 systemd[2748]: Listening on D-Bus User Message Bus Sock>
Apr 19 17:39:58 server0 systemd[2748]: Reached target Sockets.
Apr 19 17:39:58 server0 systemd[2748]: Reached target Basic System.
Apr 19 17:39:58 server0 systemd[2748]: Starting Sound Service...
$ sudo journalctl _SYSTEMD_UNIT=sshd.service
-- Logs begin at Sun 2020-04-19 03:35:07 EDT, end at Sun 2020-04-19 21:06:41 ED>_
Apr 19 03:35:24 server0 sshd[1122]: Server listening on 0.0.0.0 port 22.
Apr 19 03:35:24 server0 sshd[1122]: Server listening on :: port 22.
Apr 19 20:04:31 server0 sshd[4811]: Invalid user demo from ::1 port 54122
Apr 19 20:08:07 server0 sshd[4871]: Invalid user demo from ::1 port 54124
Apr 19 20:08:09 server0 sshd[4871]: pam_unix(sshd:auth): check pass; user unkno>
Apr 19 20:08:09 server0 sshd[4871]: pam_unix(sshd:auth): authentication failure>
Apr 19 20:08:11 server0 sshd[4871]: Failed password for invalid user demo from >
Apr 19 20:08:17 server0 sshd[4871]: pam_unix(sshd:auth): check pass; user unkno>
Apr 19 20:08:19 server0 sshd[4871]: Failed password for invalid user demo from >
Apr 19 20:08:26 server0 sshd[4871]: pam_unix(sshd:auth): check pass; user unkno>
Apr 19 20:08:28 server0 sshd[4871]: Failed password for invalid user demo from >
Apr 19 20:08:28 server0 sshd[4871]: Connection closed by invalid user demo ::1 >
Apr 19 20:08:28 server0 sshd[4871]: PAM 2 more authenticati
// search according to the time
$ sudo journalctl --since "1 hour ago"
-- Logs begin at Sun 2020-04-19 03:35:07 EDT, end at Sun 2020-04-19 21:08:19 ED>
Apr 19 20:08:19 server0 sshd[4871]: Failed password for invalid user demo from >
Apr 19 20:08:26 server0 sshd[4871]: pam_unix(sshd:auth): check pass; user unkno>
$ sudo journalctl --since 19:00:00
-- Logs begin at Sun 2020-04-19 03:35:07 EDT, end at Sun 2020-04-19 21:09:39 ED>
Apr 19 19:01:01 server0 CROND[4213]: (root) CMD (run-parts /etc/cron.hourly)
Apr 19 19:01:01 server0 run-parts[4216]: (/etc/cron.hourly) starting 0anacron
Apr 19 19:01:01 server0 run-parts[4222]: (/etc/cron.hourly) finished 0anacron
Apr 19 19:36:44 server0 cupsd[1125]: REQUEST localhost
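A few other commonly useful journalctl selectors (all standard flags), shown as a sketch:
$ sudo journalctl -b                         // entries from the current boot only
$ sudo journalctl -p err                     // only priority err and worse
$ sudo journalctl -u httpd --since today     // combine a unit filter with a time filter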
Set up a TFTP server and client with xinetd
server0:
- install the tftp-server
$ sudo yum install xinetd tftp-server tftp
$ cd /etc/xinetd.d/
$ ls
chargen-dgram daytime-stream echo-dgram time-dgram
chargen-stream discard-dgram echo-stream time-stream
daytime-dgram discard-stream tcpmux-server
this is the directory for the xinetd configuration:
a collection of configuration files for the various services xinetd can provide.
you can build your own; these are the ones that come with the package.
$ sudo vi /etc/xinetd.d/tftp
service tftp
{
socket_type = dgram
protocol = udp
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = -s /var/lib/tftpboot // location to put file
disable = no
per_source = 11
cps = 100 2
flags = IPv4
}
service tftp
{
socket_type = dgram
protocol = udp
wait = yes
user = root
port = 69
server = /usr/sbin/in.tftpd
server_args = -c -s /var/lib/tftpboot
disable = no
per_source = 11
cps = 100 2
flags = IPv4
}
the default TFTP root directory is set to /var/lib/tftpboot.
$ sudo mkdir /var/lib/tftpboot
$ sudo chmod -R 777 /var/lib/tftpboot
- Enable TFTP Service
CentOS 7 services (systemd) can be configured from files under /usr/lib/systemd/system/.
$ sudo vi /usr/lib/systemd/system/tftp.service
// edit tftp.service as follows
[Unit]
Description=Tftp Server
[Service]
ExecStart=/usr/sbin/in.tftpd -c -s /var/lib/tftpboot
StandardInput=socket
[Install]
WantedBy=multi-user.target
start services xinetd and tftp:
$ sudo systemctl daemon-reload
$ sudo systemctl start xinetd
$ sudo systemctl start tftp
$ sudo systemctl enable xinetd
$ sudo systemctl enable tftp
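A quick sanity check that something is actually listening on the TFTP port once the services are up (a sketch — output will vary):
$ sudo ss -ulnp | grep ':69'
// expect xinetd (or in.tftpd) to show up as the process bound to port 69/udp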
In CentOS 7, SELinux is not supposed to be simply disabled, so TFTP read and write must instead be allowed in SELinux. By default SELinux runs in enforcing mode, which blocks the TFTP access. To make the change easier to test, first modify /etc/selinux/config and change the mode to permissive:
$ sudo vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
# change: SELINUX=enforcing
SELINUX=permissive
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
reboot the system. After system boot up, check SELinux status:
$ reboot
$ sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: permissive
Mode from config file: permissive
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 28
check the tftp permissions in SELinux:
$ getsebool -a | grep tftp
tftp_anon_write --> off
tftp_home_dir --> off
enable it with setsebool command:
$ sudo setsebool -P tftp_anon_write 1
$ sudo setsebool -P tftp_home_dir 1
- create the file
$ sudo vi /var/lib/tftpboot/file.txt
// put some content in it, e.g. "test file"
// restart xinetd — either via the service command:
$ sudo service xinetd stop
$ sudo service xinetd start
// or directly with systemctl:
$ sudo systemctl restart xinetd
$ sudo journalctl -u xinetd
-- Logs begin at Sun 2020-04-19 03:35:07 EDT, end at Sun 2020-04-19 21:43:09 E>
Apr 19 21:42:37 server0 systemd[1]: Starting Xinetd A Powerful Replacement For>
Apr 19 21:42:37 server0 systemd[1]: xinetd.service: Cant open PID file /var/r>
Apr 19 21:42:37 server0 systemd[1]: Started Xinetd A Powerful Replacement For >
Apr 19 21:42:37 server0 xinetd[7936]: Reading included configuration file: /et>
Apr 19 21:42:37 server0 xinetd[7936]: Reading included configuration file: /et>
Apr 19 21:42:37 server0 systemd[1]: Started Xinetd A Powerful Replacement For >
Apr 19 21:42:37 server0 xinetd[7936]: Reading included configuration file: /et>
Apr 19 21:42:37 server0 xinetd[7936]: removing chargen
Apr 19 21:42:37 server0 xinetd[7936]: xinetd Version 2.3.15 started with loada>
Apr 19 21:42:37 server0 xinetd[7936]: Started working: 1 available service
// the one available service is TFTP
- Configure firewalld
allow TFTP port in firewalld
$ sudo firewall-cmd --permanent --add-port=69/udp
success
reload firewalld so the permanent rule takes effect
$ sudo firewall-cmd --reload
success
$ sudo systemctl status firewalld
$ sudo systemctl enable firewalld
$ sudo systemctl start firewalld
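firewalld also ships a predefined tftp service, so an equivalent to opening the raw port should be:
$ sudo firewall-cmd --permanent --add-service=tftp
$ sudo firewall-cmd --reload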
- create file
$ echo "hello" | sudo tee /var/lib/tftpboot/hello.txt
hello
$ echo "get hello.txt" | tftp 127.0.0.1
tftp> get hello.txt
tftp>
$ cat hello.txt
hello
server1:
$ sudo yum install xinetd tftp-server tftp
...
$ sudo tftp 192.168.1.1 -c get file.txt
check CPU
server0:
- check the CPU information
$ lscpu
// information about the system's processor topology and each individual chip in the environment.
Architecture: x86_64 //64-bit chips
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 1
On-line CPU(s) list: 0
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 70
Model name: Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz // processor
Stepping: 1
CPU MHz: 2194.918
BogoMIPS: 4389.83
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 6144K
L4 cache: 131072K
NUMA node0 CPU(s): 0
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm invpcid_single pti fsgsbase avx2 invpcid md_clear flush_l1d
$ cat /proc/cpuinfo
// same information
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 70
model name : Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
stepping : 1
cpu MHz : 2194.918
cache size : 6144 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq monitor ssse3 cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm invpcid_single pti fsgsbase avx2 invpcid md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit
bogomips : 4389.83
clflush size : 64
cache_alignment : 64
address sizes : 39 bits physical, 48 bits virtual
power management:
- monitor the CPU
// keep reading random numbers and throwing them away to /dev/null
$ cat /dev/random > /dev/null
// now check the CPU
$ top
top - 23:15:33 up 22 min, 1 user, load average: 1.12, 0.31, 0.24
Tasks: 226 total, 2 running, 224 sleeping, 0 stopped, 0 zombie
// total number of tasks
%Cpu(s): 60.6 us, 37.8 sy, 0.0 ni, 0.0 id, 0.0 wa, 1.6 hi, 0.0 si, 0.0 st
// CPU percentage:
// user processes, system processes(I/O, network interrupts, inside the kernel),
// nice processes: things that have had their priority adjusted.
// idle CPU: the time that the CPU's actually doing nothing and since we're running a pretty heavy load, that's going to be a very low value right now.
// WA for waits: any time the CPU is waiting for an external thing.
// the final three values: hi - hardware interrupts, si - software interrupts,
// st - time stolen from this VM by the hypervisor.
// these CPU metrics point to what's burning up your CPU.
MiB Mem : 2245.6 total, 152.3 free, 1226.6 used, 866.7 buff/cache
MiB Swap: 2048.0 total, 2048.0 free, 0.0 used. 843.4 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
998 root 20 0 160208 6464 5648 R 56.6 0.3 0:13.35 rngd
3994 server1 20 0 7420 832 768 S 24.2 0.0 0:04.87 cat
2859 server1 20 0 3038408 422480 107312 S 14.9 18.4 0:14.09 gnome-shell
sorted by CPU load.
at the top of the list is rngd, the random number generator daemon —
the process that feeds the kernel's random number generator behind /dev/random —
and together with the cat process it is pushing total CPU use into the high 90s.
// w (like uptime) outputs the load average, plus all of the logged-in users
$ w
23:53:57 up 1:01, 1 user, load average: 2.32, 2.75, 2.53
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
=====
what to do
=====
Find out what the process does and determine whether it's impacting the usability of the system.
- the difference between load average and CPU percentage
load average: the average length of the run queue across the CPUs on the system
$ top
top - 23:15:33 up 22 min, 1 user, load average: 2.17, 1.97, 1.42
# one-minute, five-minute, and 15-minute CPU load averages on the first line
# over the last 15 minutes there have been, on average, 1.42 processes in the run queue
# a queue floating around between one and two is OK here:
# things aren't waiting long for access to the CPU.
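The load average is only meaningful relative to the number of CPUs, so it helps to compare the two; a small sketch on this one-CPU VM:
$ nproc
1
// with 1 CPU, a 1-minute load average around 1 means the run queue is roughly full;
// values well above the CPU count mean tasks are waiting for CPU time
$ uptime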
- stop the Tasks
// 2nd tab: stop the task
control+z
// top now shows one stopped task
$ top
top - 23:37:03 up 44 min, 1 user, load average: 0.73, 2.09, 1.95
Tasks: 226 total, 5 running, 220 sleeping, 1 stopped, 0 zombie
// 2nd tab
// bring task back
$ fg
cat /dev/random > /dev/null
- drive I/O to the disk
write data to a file, controlling where the data comes from and how it is written out.
by writing a lot of very small (one-byte) I/Os to a file, the server spends more time on I/O overhead than on the actual disk transfer itself.
$ dd if=/dev/zero of=test1.img bs=1 count=100000 oflag=sync
// input file /dev/zero, read a bunch of zeros out.
// write them into a file named test1.img
// block size: one byte
// do this 100,000 times
// add the parameter oflag=sync:
each I/O will be synchronous — every transaction must be completely flushed to disk and acknowledged back to the operating system before the next I/O can occur.
$ top
// cpu% wait time go up.
top - 00:04:26 up 1:11, 1 user, load average: 0.29, 0.59, 1.40
Tasks: 226 total, 4 running, 222 sleeping, 0 stopped, 0 zombie
%Cpu(s): 5.8 us, 57.1 sy, 8.5 ni, 8.5 id, 5.4 wa, 11.6 hi, 3.1 si, 0.0 st
MiB Mem : 2245.6 total, 254.2 free, 1124.5 used, 866.9 buff/cache
MiB Swap: 2048.0 total, 2048.0 free, 0.0 used. 945.5 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
527 root 0 -20 0 0 0 I 45.8 0.0 0:04.88 kworker/0:1H-kbl+
2859 server1 20 0 2939476 315080 107312 R 5.3 13.7 0:24.38 gnome-shell
4589 server1 20 0 7324 808 744 R 4.7 0.0 0:00.35 dd
// R: running; D: uninterruptible sleep, i.e. waiting (usually on I/O)
a high-CPU situation with high I/O waits points the troubleshooting somewhere other than a raw CPU issue:
it could be disks that are underperforming or saturated,
or it could be networking — anything that's classified as I/O in your system.
Memory
server0:
- investigating system memory.
$ cat /proc/meminfo
$ cat /proc/meminfo | sort
MemTotal: 2299536 kB
MemFree: 563356 kB
MemAvailable: 997356 kB
Buffers: 1104 kB
Cached: 554136 kB
$ free
$ free -m
// output in megabytes
total used free shared buff/cache available
Mem: 2245 1090 567 11 587 991
Swap: 2047 25 2022
// buff/cache: the file system cache
// it makes read/write I/O more efficient, because the drives are the slowest things in the system,
// so the file system cache helps us interact with the drives better.
$ dd if=/dev/zero of=test1.img bs=1 count=100000 oflag=sync
100000+0 records in
100000+0 records out
100000 bytes (100 kB, 98 KiB) copied, 86.5205 s, 1.2 kB/s
with oflag=sync we side-step the file system cache and wait for the entire disk transaction;
throughput was about 1.2 kilobytes a second.
performing one-byte I/Os synchronously is clearly not a good disk I/O pattern for leveraging the capacity of this drive — performance is simply terrible.
$ dd if=/dev/zero of=test1.img bs=1 count=100000
100000+0 records in
100000+0 records out
100000 bytes (100 kB, 98 KiB) copied, 0.218376 s, 458 kB/s
removing the flag lets dd leverage the file system cache
(oflag=sync disables use of the file system cache and waits for each full disk transaction).
that is a several-hundred-fold improvement, because all of that I/O gets absorbed into the
file system cache's write buffers and is then destaged to disk in the background.
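A related variant worth knowing (a standard GNU dd flag): oflag=direct bypasses the page cache entirely via O_DIRECT, which isolates raw device behaviour without paying the per-write sync penalty. A sketch:
$ dd if=/dev/zero of=test1.img bs=4096 count=10000 oflag=direct
// O_DIRECT needs the block size aligned to the device sector size, so tiny sizes like bs=1 typically fail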
- isolate a memory hog on system
find out who and more specifically which process is burning up all the memory on our system.
1. create a c program.
$ mkdir demos/
$ mkdir demos/m3
$ sudo vi memtest.c
// create file.
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>   // for sleep()

int main(int argc, char **argv)
{
    int size;
    int sizeofint;
    int elements;
    int *mem;
    int i;
    int allocationsize;

    // allocation size in MB taken from the first argument
    // not safe, just for demonstration
    allocationsize = atoi(argv[1]);
    printf("Memory allocation size: %d\n", allocationsize);

    sizeofint = sizeof(int);
    size = allocationsize * 1024 * 1024;
    elements = size / sizeofint;

    printf("Waiting 5 seconds...\n");
    sleep(5);

    // allocate and touch every element so the pages really get used
    mem = calloc(elements, sizeofint);
    for (i = 0; i < elements; i++)
    {
        mem[i] = i;
        printf("elements: %d\n", mem[i]);
    }
    free(mem);
    return 0;
}
$ gcc -o memtest memtest.c
==================================================================
$ free -m
// output in megabytes
total used free shared buff/cache available
Mem: 2245 1090 567 11 587 991
Swap: 2047 25 2022
==================================================================
$ ./memtest 256
// ask for 256 megabytes of memory
// free memory decreases
$ free -m
total used free shared buff/cache available
Mem: 2245 1077 511 11 656 1003
Swap: 2047 25 2022
==================================================================
start a new terminal to monitor
$ top
// press f, choose %MEM as the sort field, press s to set it, then q to return
// now sorted by memory utilization
Tasks: 228 total, 4 running, 224 sleeping, 0 stopped, 0 zombie
%Cpu(s): 72.6 us, 24.1 sy, 0.0 ni, 0.0 id, 0.0 wa, 3.3 hi, 0.0 si, 0.0 st
MiB Mem : 2245.6 total, 493.4 free, 1095.8 used, 656.4 buff/cache
MiB Swap: 2048.0 total, 2022.8 free, 25.2 used. 985.4 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7315 server1 20 0 2891100 253188 106864 S 19.2 11.0 0:55.27 gnome-shell
2471 gdm 20 0 2586840 211888 98444 S 0.0 9.2 0:10.28 gnome-shell
7680 server1 20 0 1052452 78444 38560 S 0.0 3.4 0:00.97 gnome-software
8884 server1 20 0 266476 69336 1344 R 30.5 3.0 0:22.64 memtest
// memtest's memory usage climbs
VIRT: memory allocated in the virtual address space of memtest
RES: the part of this process's memory that is resident in physical memory.
So this process is running fine —
it's performing well, it's not paging, and it isn't impacting the system negatively —
but there isn't a lot of room left to breathe:
only a few hundred MiB of memory are free/available at this point.
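A non-interactive way to get the same "who is the memory hog" answer (standard procps ps sorting):
$ ps aux --sort=-%mem | head -5
// prints the header plus the processes with the largest %MEM first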
==================================================================
how much disk I/O are we generating?
take a sample every second, 100 times.
$ vmstat 1 100
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
// SI, SO: how many memory pages are being read/swapped in or written/swapped out.
// BI, block in, so how many blocks we're reading in at each interval
// BO, is how many blocks are being written out.
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 25776 569580 1120 671264 1 2 60 81 205 1114 4 3 93 0 0
0 0 25776 569400 1120 671308 0 0 0 0 381 614 9 5 86 0 0
0 0 25776 569400 1120 671308 0 0 0 0 396 737 6 3 91 0 0
3 0 25776 569400 1120 671308 0 0 0 0 437 678 12 3 85 0 0
0 0 25776 569400 1120 671308 0 0 0 0 438 753 10 4 86 0 0
0 0 25776 569400 1120 671308 0 0 0 0 245 410 4 2 94 0 0
==================================================================
$ sudo yum install dstat
$ dstat
You did not select any stats, using -cdngy by default.
----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read writ| recv send| in out | int csw
16 3 77 0 0| 0 0 | 0 0 | 0 0 | 484 456
16 5 76 0 0| 0 0 | 0 0 | 0 0 | 512 807
14 3 83 0 0| 0 364k| 0 0 | 0 0 | 411 556
6 1 91 0 0| 0 0 | 0 0 | 0 0 | 251 461
after running memtest:
$ dstat
----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read writ| recv send| in out | int csw
49 17 1 0 0| 0 76k| 0 0 | 0 0 |1090 3701
55 14 0 0 0| 0 96k| 0 0 | 0 0 |1119 3663
55 16 0 0 0| 0 86k| 0 0 | 0 0 |1169 3716
55 16 0 0 0| 0 84k| 0 0 | 0 0 |1140 3680
54 16 0 0 0| 0 84k| 0 0 | 0 0 |1124 3691
68 17 0 0 0| 0 67k| 0 0 | 0 0 |1294 2835
65 19 0 0 0| 0 49k| 0 0 | 0 0 |1208 2892
66 18 0 0 0| 0 175k| 0 0 | 0 0 |1438 3295
find high I/O processes, measure disk latency.
server0:
to isolate which process is driving the most I/O on the system
$ sudo yum install iotop
$ sudo iotop
generate some load
// block size=1
$ dd if=/dev/zero of=test1.img bs=1 oflag=sync
$ sudo iotop
Total: I/O across all processes, which can include requests satisfied by the file system cache.
Actual: the I/O actually issued to the block device.
- some cache hits may show up in Total, but Actual is what really hits the block device itself.
columns: process ID, user, disk read, disk write, swap in, I/O waits, command
measure disk I/O latency
// check the size
$ sudo blockdev --getbsz /dev/sda2
[sudo] password for server1:
4096
// block size=4096
$ dd if=/dev/zero of=test1.img bs=4096 count=10000 oflag=sync
10000+0 records in
10000+0 records out
40960000 bytes (41 MB, 39 MiB) copied, 6.4623 s, 6.3 MB/s
$ dd if=/dev/zero of=test1.img bs=1M count=10000 oflag=sync
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 49.6129 s, 211 MB/s
smaller I/Os have lower latency; larger I/Os have higher latency but can consume more of the bandwidth out to the block device.
when testing for latency, use a smaller I/O (block) size.
when testing for bandwidth, use a larger I/O size to tax the disk subsystem.
I/O size is really application dependent, and what we are doing here is only synthetic testing —
when testing disk subsystems, test with varying I/O sizes, especially the ones your applications actually use.
look at interface buffers and socket queues, find network hogs
server0:
check network traffic with IPTraf:
- isolate which network interface is busy
- figure out who the communicators are — the senders and the receivers — and which protocols are involved
- determine who is burning a lot of bandwidth on the system.
1. install the Tools
$ sudo yum install iptraf-ng
2. copy a file to server1,
generating an excessive load on the network.
$ scp test1.img root@192.168.1.100:/root
3. monitor the network
$ sudo iptraf-ng
check socket queues
- to see if there is kernel-level blocking or anything else slowing down data flows
queuing seen for a very short period of time is not a big deal;
if the queue drains quickly and goes away, you are in good shape.
$ ss -t4
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 252 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 36 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 72 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 131360 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 216 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 180 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 36 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 459760 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 459760 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 1609160 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 144 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 65680 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 192.168.1.1:43160 192.168.1.100:ssh
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 180 0 192.168.1.1:43160 192.168.1.100:ssh
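The repeated samples above can be taken by simply re-running ss; one convenient way (the standard watch utility) is:
$ watch -n 1 'ss -t4'
// refreshes the TCP/IPv4 socket table every second so you can see the queues drain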
generate reports on system status
server0:
sysstat
a great high-level tool for long-term performance data:
- collects and reports on historical system performance information over time,
- so it can answer the question "what happened on the system at some point in the past?"
sadc (/usr/lib64/sa/sadc): the system activity data collector, invoked by the sa1 wrapper script.
sa2 (/usr/lib64/sa/sa2): a shell script that wraps the sar command to write the daily report.
with the crontab configuration below, sysstat will churn along collecting long-term performance data in the background.
$ vi /etc/sysconfig/sysstat
# sysstat-11.7.3 configuration file.
# How long to keep log files (in days).
# If value is greater than 28, then use sadc's option -D to prevent older data files from being overwritten. See sadc(8) and sysstat(5) manual pages.
# !!!!!!!!!!!!!!!!!!!!
HISTORY=28
$ sudo vi /etc/cron.d/sysstat
# every 10 minutes, collect and store one record of binary data in the system activity data file (sa1 1 1 = one sample, written once).
*/10 * * * * root /usr/lib64/sa/sa1 1 1
# each day at 23:53, collect and store a daily activity report; the file is created under /var/log/sa.
53 23 * * * root /usr/lib64/sa/sa2 -A
data is collected each time the sa1 command is executed:
$ cd /var/log/sa
$ ll
total 4
-rw-r--r--. 1 root root 2996 Apr 23 21:18 sa23
the cron job creates this file fresh each day
view the sar text report (written by sa2)
$ less sar23
view the sa binary data file (with the sar command)
if this machine were up 24/7 like a server, the file would show a full run of data at each sample point for each day, and you could go back and forth through the data files accordingly.
1. CPU utilization percentages over time; good for narrowing down a CPU issue.
$ sar -u
Linux 4.18.0-147.8.1.el8_1.x86_64 (server0) 04/23/2020 _x86_64_ (1 CPU)
09:18:42 PM CPU %user %nice %system %iowait %steal %idle
09:34:58 PM all 0.87 0.01 0.54 0.02 0.00 98.56
09:36:06 PM all 1.61 0.00 0.79 0.01 0.00 97.58
09:36:08 PM all 2.37 0.00 0.95 0.47 0.00 96.21
09:36:09 PM all 6.12 0.00 3.06 0.00 0.00 90.82
2. load averages for one, five, and 15 minutes, again over time; useful for finding out when the system slowed down.
$ sar -q
Linux 4.18.0-147.8.1.el8_1.x86_64 (server0) 04/23/2020 _x86_64_ (1 CPU)
09:18:42 PM runq-sz plist-sz ldavg-1 ldavg-5 ldavg-15 blocked
09:34:58 PM 3 760 0.08 0.04 0.02 0
09:36:06 PM 3 757 0.02 0.03 0.02 0
09:36:08 PM 2 757 0.02 0.03 0.02 0
09:36:09 PM 4 757 0.02 0.02 0.02 0
09:36:23 PM 3 761 0.02 0.02 0.02 0
09:36:24 PM 4 761 0.18 0.06 0.03 0
3. pull data from a previous day's file
$ sar -q -f sa01
4. dump everything to the screen
$ sar -A
5. restrict to a specific time window (-s start, -e end)
$ sar -A -s 21:38:00
$ sar -A -s 21:38:00 -e 22:00:00
6. output into csv
$ sadf -d | head
# hostname;interval;timestamp;CPU;%user;%nice;%system;%iowait;%steal;%idle
server0;975;2020-04-24 01:34:58 UTC;-1;0.87;0.01;0.54;0.02;0.00;98.56
server0;68;2020-04-24 01:36:06 UTC;-1;1.61;0.00;0.79;0.01;0.00;97.58
server0;2;2020-04-24 01:36:08 UTC;-1;2.37;0.00;0.95;0.47;0.00;96.21
server0;1;2020-04-24 01:36:09 UTC;-1;6.12;0.00;3.06;0.00;0.00;90.82
server0;14;2020-04-24 01:36:23 UTC;-1;3.54;0.00;4.39;0.21;0.00;91.86
server0;1;2020-04-24 01:36:24 UTC;-1;9.30;0.00;83.72;1.16;0.00;5.81
server0;1;2020-04-24 01:36:25 UTC;-1;11.84;0.00;73.68;1.32;0.00;13.16
server0;113;2020-04-24 01:38:18 UTC;-1;2.52;0.00;45.73;1.38;0.00;50.37
server0;1;2020-04-24 01:38:18 UTC;-1;6.67;0.00;20.00;1.33;0.00;72.00
$ sadf -d > output.csv
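on this sysstat version (11.7), sadf can also emit other formats; a sketch, with the sar options passed after "--":
$ sadf -j /var/log/sa/sa23 -- -u | head    // JSON, CPU data
$ sadf -x /var/log/sa/sa23 -- -q > output.xml    // XML, load averages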
dstat
a tool for triaging a performance problem happening right now.
$ dstat
----total-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read writ| recv send| in out | int csw
4 1 92 0 0| 452k 0 | 0 0 | 0 0 | 226 308
1 1 98 0 0| 0 0 | 0 0 | 0 0 | 115 218
5 0 94 0 0| 0 0 | 0 0 | 0 0 | 194 228
4 0 94 0 0| 0 0 | 0 0 | 0 0 | 184 237 ^C
$ dstat --tcp
------tcp-sockets-------
lis act syn tim clo
4 1 0 0 0
4 1 0 0 0
4 1 0 0 0
$ dstat --all --tcp
$ dstat -t --all --tcp
----system---- ----total-usage---- -dsk/total- -net/total->
time |usr sys idl wai stl| read writ| recv send>
23-04 22:02:42| | | >
23-04 22:02:43| 25 26 45 2 0| 45M 831k| 148k 56M>
23-04 22:02:44| 19 22 55 1 0| 39M 508k| 116k 44M>
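dstat also has per-process "top" plugins that help pin down the offender quickly; a sketch (plugin availability can vary by dstat build):
$ dstat --top-cpu --top-mem
$ dstat --time --all --output dstat.csv    // display and also log samples to a CSV file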
manage the RPM packages
server0:
rpm
an RPM is a package file that contains the software plus everything needed to install it.
$ cd /usr/share/doc
$ ls
abattis-cantarell-fonts libXrender
accountsservice libXrender-devel
adcli libXres
adobe-mappings-cmap libxslt
adobe-mappings-pdf libXt
alsa-lib libXtst
alsa-plugins-pulseaudio libXv
alsa-utils libXvMC
put the CentOS installation DVD into the virtual CD drive of virtual machine
$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=1132468k,nr_inodes=283117,mode=755)
$ mount | grep /dev/sr0
/dev/sr0 on /run/media/server1/VBox_GAs_6.0.20 type iso9660 (ro,nosuid,nodev,relatime,nojoliet,check=s,map=n,blocksize=2048,uid=1000,gid=1000,dmode=500,fmode=400,uhelper=udisks2)
$ cd /run/media/server1/
dr-xr-xr-x. 5 server1 server1 2408 Apr 9 12:06 VBox_GAs_6.0.20
dr-xr-xr-x. 5 server1 server1 2408 Apr 9 12:06 CentOS7
$ cd /run/media/server1/CentOS7/packages
// all the rpm files/packages that come along with the CentOS installation DVD.
mc-4.8.7-8.el7.x86_64.rpm
// name-version-release-architecture
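the name-version-release-architecture fields can also be pulled out explicitly with an rpm query format; a small sketch run against the same package file:
$ rpm -qp --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' mc-4.8.7-8.el7.x86_64.rpm
mc-4.8.7-8.el7.x86_64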
- check the information
$ rpm -qi mc // query the RPM database for a particular installed package
$ rpm -qip mc-4.8.7-8.el7.x86_64.rpm // -p: read the metadata directly from a package file
Name : mc
Epoch : 1
Version : 4.8.7
Release : 8.el7 // enterprise Linux
Architecture: x86_64
Install Date: (not installed)
Group : System Environment/Shells
Size : 2159623
License : GPLv3+
Signature : RSA/SHA256, Sat 05 Jul 2014 09:49:28 AM EDT, Key ID 24c6a8a7f4a80eb5
Source RPM : (none)
Build Date : Mon 09 Jun 2014 06:18:17 PM EDT
Build Host : worker1.bsys.centos.org
Relocations : (not relocatable)
Packager : CentOS BuildSystem <https://bugs.centos.org>
Vendor : CentOS
URL : https://www.midnight-commander.org/
Summary : User-friendly text console file manager and visual shell
Description :
Midnight Commander is a visual shell much like a file manager, only
with many more features. It is a text mode application, but it also
includes mouse support. Midnight Commanders best features are its
ability to FTP, view tar and zip files, and to poke into RPMs for
specific files.
- list all files inside the package
$ rpm -ql mc-4.8.7-8.el7.src.rpm
$ rpm -qlp mc-4.8.7-8.el7.src.rpm
mc-4.8.7.tar.xz
mc-4.8.8.man_mcdiff.patch
mc-VFSsegfault.patch
mc-cpiosegfault.patch
mc-signed_overflow_fix.patch
mc-widgetsegfault.patch
mc.spec
- check the signature
$ rpm -K mc-4.8.7-8.el7.src.rpm
mc-4.8.7-8.el7.src.rpm: rsa sha1 (md5) pgp md5 OK
$ rpm -K mc-4.8.7-8.el7.src.rpm.demo // a modified copy of the package
mc-4.8.7-8.el7.src.rpm.demo: digests SIGNATURES NOT OK
4. install, update, remove
// -ivh: install, verbose, print hash marks for progress
$ rpm -ivh mc-4.8.7-8.el7.x86_64.rpm
// update
$ rpm -Uvh mc-4.8.7-8.el7.x86_64.rpm
// remove (by package name, not file name)
$ rpm -e mc
- confirm the install
$ rpm -qa mc
$ rpm -q mc
mc-4.8.7-8.el7.x86_64 // if installed
package mc is not installed // if not installed
// check all installed
$ rpm -qa
- extract the original files from the package
// take all the information from the package and write it locally on our file system.
$ ls
mc-4.8.7-8.el7.src.rpm
$ rpm2cpio mc-4.8.7-8.el7.src.rpm | cpio -id
4221 blocks
$ ls
mc-4.8.7-8.el7.src.rpm mc-signed_overflow_fix.patch
mc-4.8.7.tar.xz mc.spec
mc-4.8.8.man_mcdiff.patch mc-VFSsegfault.patch
mc-cpiosegfault.patch mc-widgetsegfault.patch
// check which package a file came from
$ rpm -qf /etc/fstab
setup-2.12.2-2.el8_1.1.noarch
- verify package attribute
// compare the change
$ rpm -Vv setup
......... c /etc/aliases
......... c /etc/bashrc
......... c /etc/csh.cshrc
......... c /etc/csh.login
......... c /etc/environment
......... c /etc/ethertypes
......... c /etc/exports
......... c /etc/filesystems
......... c /etc/fstab
......... c /etc/group
......... c /etc/gshadow
......... c /etc/host.conf
......... c /etc/hosts
......... c /etc/inputrc
......... c /etc/motd
......... c /etc/networks
......... c /etc/passwd
......... c /etc/printcap
......... c /etc/profile
......... /etc/profile.d
......... c /etc/profile.d/csh.local
......... /etc/profile.d/lang.csh
......... /etc/profile.d/lang.sh
......... c /etc/profile.d/sh.local
......... c /etc/protocols
......... c /etc/services
......... c /etc/shadow
......... c /etc/shells
......... c /etc/subgid
......... c /etc/subuid
......... /usr/share/doc/setup
......... d /usr/share/doc/setup/uidgid
......... /usr/share/licenses/setup
......... l /usr/share/licenses/setup/COPYING
.M....G.. g /var/log/lastlog
the verify columns are: S size, M mode, 5 MD5 digest, D device number, L link path, U user owner, G group owner, T modification time, P capabilities.
//
$ man rpm
/verify options
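for reference, a hypothetical example of how a locally modified config file would show up in the verify output (the package name and file here are illustrative only):
$ rpm -V openssh-server
S.5....T.  c /etc/ssh/sshd_config    // size, digest, and mtime differ from the packaged file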
build an RPM package
1. what an RPM is made of.
https://vault.centos.org/ -> centos 8.1.1911 -> BaseOS/ -> Source/ -> SPackages/ -> pick a source rpm -> save link as
$ cd demo/m4
procps-ng-3.3.15-1.el8.src.rpm
$ rpm -qlp procps-ng-3.3.15-1.el8.src.rpm
README.md
README.top
procps-ng-3.3.15.tar.xz // the program source code
procps-ng.spec // the specification on how to build it
$ sudo rpm2cpio procps-ng-3.3.15-1.el8.src.rpm | cpio -id
1833 blocks
$ ll
-rw-rw-r--. 1 server1 server1 935129 Apr 24 23:35 procps-ng-3.3.15-1.el8.src.rpm
-rw-rw-r--. 1 server1 server1 904416 Apr 24 23:38 procps-ng-3.3.15.tar.xz
-rw-r--r--. 1 server1 server1 13150 Apr 24 23:38 procps-ng.spec
$ tar -xf procps-ng-3.3.15.tar.xz
$ ll
drwxr-xr-x. 14 server1 server1 4096 May 19 2018 procps-ng-3.3.15
-rw-rw-r--. 1 server1 server1 935129 Apr 24 23:35 procps-ng-3.3.15-1.el8.src.rpm
-rw-rw-r--. 1 server1 server1 904416 Apr 24 23:38 procps-ng-3.3.15.tar.xz
-rw-r--r--. 1 server1 server1 13150 Apr 24 23:38 procps-ng.spec
$ cd procps-ng-3.3.15/
// the source code
$ ls
ABOUT-NLS COPYING.LIB misc ps tload.1
aclocal.m4 depcomp missing pwdx.1 tload.c
AUTHORS Documentation mkinstalldirs pwdx.c top
autogen.sh free.1 NEWS skill.1 uptime.1
ChangeLog free.c pgrep.1 skill.c uptime.c
compile include pgrep.c slabtop.1 vmstat.8
config.guess install-sh pidof.1 slabtop.c vmstat.c
config.h.in kill.1 pidof.c snice.1 w.1
config.rpath lib pkill.1 sysctl.8 watch.1
config.sub ltmain.sh pmap.1 sysctl.c watch.c
configure m4 pmap.c sysctl.conf w.c
configure.ac Makefile.am po sysctl.conf.5
contrib Makefile.in proc test-driver
COPYING man-po procio.c testsuite
$ more procps-ng.spec
// the spec file
# The testsuite is unsuitable for running on buildsystems
%global tests_enabled 0
Summary: System and process monitoring utilities
Name: procps-ng
Version: 3.3.15
Release: 1%{?dist}
License: GPL+ and GPLv2 and GPLv2+ and GPLv3+ and LGPLv2+
Group: Applications/System
URL: https://sourceforge.net/projects/procps-ng/
Source0: https://downloads.sourceforge.net/%{name}/%{name}-%{version}.ta
r.xz
# README files are missing in latest tarball
# wget https://gitlab.com/procps-ng/procps/raw/e0784ddaed30d095bb1d9a8a
d6b5a23d10a212c4/README.md
Source1: README.md
# wget https://gitlab.com/procps-ng/procps/raw/e0784ddaed30d095bb1d9a8a
d6b5a23d10a212c4/top/README.top
Source2: README.top
BuildRequires: ncurses-devel
BuildRequires: libtool
BuildRequires: autoconf
BuildRequires: automake
BuildRequires: gcc
BuildRequires: gettext-devel
BuildRequires: systemd-devel
2. build an RPM from source.
$ sudo yum group install "Development Tools"
// include all requirement needed to compile and build RPM from source code
$ ls
procps-ng-3.3.15-1.el8.src.rpm
$ rpmbuild --rebuild procps-ng-3.3.15-1.el8.src.rpm
// unpacks the source RPM into ~/rpmbuild
// then goes through and starts to compile the software
Installing procps-ng-3.3.15-1.el8.src.rpm
error: Failed build dependencies:
systemd-devel is needed by procps-ng-3.3.15-1.el8.x86_64
$ sudo yum install systemd-devel
$ cd
// it creates an rpmbuild directory in the home directory
$ ls
demo Documents index.html Pictures rpmbuild Videos
Desktop Downloads Music Public Templates
// prefer to build as a regular user, not root.
$ ls -R rpmbuild/
rpmbuild/:
BUILD BUILDROOT RPMS SOURCES SPECS SRPMS
rpmbuild/BUILD:
rpmbuild/BUILDROOT:
rpmbuild/RPMS:
noarch x86_64
rpmbuild/RPMS/noarch:
procps-ng-i18n-3.3.15-1.el8.noarch.rpm
// the binary RPMs constructed by the build
rpmbuild/RPMS/x86_64:
procps-ng-3.3.15-1.el8.x86_64.rpm
procps-ng-debuginfo-3.3.15-1.el8.x86_64.rpm
procps-ng-debugsource-3.3.15-1.el8.x86_64.rpm
procps-ng-devel-3.3.15-1.el8.x86_64.rpm
// source file
rpmbuild/SOURCES:
procps-ng-3.3.15-1.el8.x86_64.tar.gz
// spec file
rpmbuild/SPECS:
procps.spec
rpmbuild/SRPMS:
3. build an RPM package from code
spec file
// under the SPECS path, a template for the spec file
$ vi newfile.spec
$ more hello.spec
//...name...
// configuration sections
// special directives that tell RPM what to do at certain parts of the build phase.
%prep
%setup -q
# prep section
// runs the %setup -q macro,
// which creates the build directories and uncompresses the source tarball
%build
#make // either one works
gcc -o hello hello.c
# build section
// actually builds the software
%install
rm ...
mkdir ...
install ...
# install section
// directives for installing the built software into the build root
// (a temporary working space where the packaged file tree is assembled)
// first, delete the directory if it exists,
// then recreate it to give a blank space to work in,
// and finally set permissions on the particular binary being compiled and distributed.
%clean
rm ...
// clean the working place
%files
/usr/local/bin/hello
// files that need to be packaged inside the rpm
build the rpm
1. create the source code.
$ tar -zxvf hello-1.tar.gz
// 2 files inside
hello-1/
hello-1/hello.c // the c program
hello-1/Makefile // makefile to compile it
$ mkdir hello-1
$ sudo vi hello.c
#include <hellomake.h>
int main() {
// call a function in another file
myPrintHelloMake();
return(0);
}
$ sudo vi Makefile
hellomake: hellomake.c hellofunc.c
gcc -o hellomake hellomake.c hellofunc.c -I.
$ tar -czvf hello-1.tar.gz /home/server1/hello-1
// (source also at https://github.com/ocholuo/language/tree/master/0.project/webdemo/m4/hello-1)
# sudo vi hello.spec
// create
Name: hello
Version: 1
Release: 1%{?dist}
Summary: simple hello world program in a package
License: MIT
URL: www.pluralsight.com
Source0: /home/server1/rpmbuild/SOURCES/hello-1.tar.gz
%description
simple hello world program in a package
%prep
%setup -q
%build
#make
gcc -o hello hello.c
%install
rm -rf $RPM_BUILD_ROOT/usr/local/bin/
mkdir -p $RPM_BUILD_ROOT/usr/local/bin/
install -m 755 hello $RPM_BUILD_ROOT/usr/local/bin/hello
%clean
rm -rf $RPM_BUILD_ROOT
%files
/usr/local/bin/hello
=====================================================
$ ls
hello-1.tar.gz // the source file
hello.spec // the metadata, name, version, url, source, configuration
=====================================================
// put in the right place
$ mv hello-1.tar.gz ~/rpmbuild/SOURCES/
$ mv hello.spec ~/rpmbuild/SPECS/
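with the tarball and spec in place, the package can be built and installed; a sketch (the output file name assumes Name=hello, Version=1, Release=1 on an el8 build host):
$ rpmbuild -ba ~/rpmbuild/SPECS/hello.spec
$ ls ~/rpmbuild/RPMS/x86_64/
hello-1-1.el8.x86_64.rpm
$ sudo rpm -ivh ~/rpmbuild/RPMS/x86_64/hello-1-1.el8.x86_64.rpm
$ /usr/local/bin/hello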
install from the source code
$ cd ~/rpmbuild/SOURCES
$ wget https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz -P ~/rpmbuild/SOURCES
$ cd ~/rpmbuild/SPECS
$ rpmdev-newspec hello
$ ls
hello.spec
$ vi hello.spec
Name: hello
Version: 2.10
Release: 1%{?dist}
Summary: The "Hello World" program from GNU
License: GPLv3+
URL: https://ftp.gnu.org/gnu/%{name}
Source0: https://ftp.gnu.org/gnu/%{name}/%{name}-%{version}.tar.gz
BuildRequires: gettext
Requires(post): info
Requires(preun): info
%description
The "Hello World" program package
%prep
%autosetup
%build
%configure
make %{make_build}
%install
%make_install
%find_lang %{name}
rm -f %{buildroot}/%{_infodir}/dir
%post
/sbin/install-info %{_infodir}/%{name}.info %{_infodir}/dir || :
%preun
if [ $1 = 0 ] ; then
/sbin/install-info --delete %{_infodir}/%{name}.info %{_infodir}/dir || :
fi
%files -f %{name}.lang
%{_mandir}/man1/hello.1.*
%{_infodir}/hello.info.*
%{_bindir}/hello
%doc AUTHORS ChangeLog NEWS README THANKS TODO
%license COPYING
#%changelog
#* Tue May 28 2019 Aaron Kili
build the rpm
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
// built the rpm from source file
$ cd rpmbuild/SPECS/
$ rpmbuild -ba hello.spec
After the build process, the source RPMs and binary RPMs will be created in the ../SRPMS/ and ../RPMS/ directories respectively.
use the rpmlint program to check and ensure that the spec file and RPM files created conform to RPM design rules:
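// a sketch of running rpmlint (assuming it is installed; it is available as the rpmlint package)
$ sudo yum install rpmlint
$ rpmlint rpmbuild/SPECS/hello.spec
$ rpmlint rpmbuild/RPMS/x86_64/hello-2.10-1.el8.x86_64.rpm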
// have the source rpm now
$ ls -R rpmbuild/
// install
$ sudo rpm -ivh rpmbuild/RPMS/x86_64/hello-2.10-1.el8.x86_64.rpm
// query it
$ rpm -qi hello
yum
$ sudo yum install zsh
// download, install, verify
$ cd rpmbuild/RPMS/x86_64/
$ sudo yum install ./hello-2.10-1.el8.x86_64.rpm
// install a local package file, resolving dependencies from the repos
$ sudo yum update
$ sudo yum -y update
// a good first step after installation
$ sudo yum update tuned
yum group
$ yum group list
// list the group
Available Environment Groups:
Server
Minimal Install
Workstation
KDE Plasma Workspaces
Virtualization Host
Custom Operating System
Installed Environment Groups:
Server with GUI
Installed Groups:
Container Management
Development Tools
Headless Management
Available Groups:
.NET Core Development
RPM Development Tools
Graphical Administration Tools
Legacy UNIX Compatibility
Network Servers
Scientific Support
Security Tools
Smart Card Support
System Tools
Fedora Packager
Xfce
$ yum group info "basic web server"
// see the contents of the group
Group: Basic Web Server
Description: These tools allow you to run a Web server on the system.
Mandatory Packages:
httpd
Default Packages:
httpd-manual
mod_fcgid
mod_ssl
Optional Packages:
libmemcached
memcached
mod_auth_gssapi
mod_security
mod_security-mlogc
mod_security_crs
$ yum group install "basic web server"
$ yum group remove "basic web server"
$ yum list tuned
// info
Installed Packages
tuned.noarch 2.12.0-3.el8_1.1 @BaseOS
$ yum search tuned
// packages where the search term shows up
$ yum info tuned
// metadata
$ yum provides top
// ask which package provides a file or command
Last metadata expiration check: 0:02:00 ago on Sun 26 Apr 2020 02:54:57 PM EDT.
procps-ng-3.3.15-1.el8.i686 : System and process monitoring utilities
Repo : BaseOS
Matched from:
Filename : /usr/bin/top
procps-ng-3.3.15-1.el8.x86_64 : System and process monitoring utilities
Repo : @System
Matched from:
Filename : /usr/bin/top
procps-ng-3.3.15-1.el8.x86_64 : System and process monitoring utilities
Repo : BaseOS
Matched from:
Filename : /usr/bin/top
$ yum list installed
// list all Installed
$ yumdownloader procps-ng
// download specific packages without installing, e.g. to install on another machine
[SKIPPED] procps-ng-3.3.15-1.el8.i686.rpm: Already downloaded
[SKIPPED] procps-ng-3.3.15-1.el8.x86_64.rpm: Already downloaded
repository
stores software packages
- software publisher: Red Hat, CentOS
- third party: EPEL (Extra Packages for Enterprise Linux), RPMForge
- build your own.
repositories should be trusted and authenticated
/etc/yum.repos.d
: all repo configuration files
/var/repo/dvd
: mount point used later for the local DVD repository
1. repository configuration
// yum configuration file
$ more /etc/yum.conf
[main]
cachedir=/var/cache/yum/$basearch/$releasever // where YUM caches packages locally when performing installations.
keepcache=0 // after done installation, delete the cache
debuglevel=2
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=3
# PUT YOUR REPOS HERE OR IN separate files named file.repo in /etc/yum.repos.d
$ cd /etc/yum.repos.d
$ ls
CentOS-AppStream.repo CentOS-Extras.repo CentOS-Vault.repo
CentOS-Base.repo CentOS-fasttrack.repo epel-modular.repo
CentOS-centosplus.repo CentOS-HA.repo epel-playground.repo
CentOS-CR.repo CentOS-Media.repo epel.repo
CentOS-Debuginfo.repo CentOS-PowerTools.repo epel-testing-modular.repo
CentOS-Devel.repo CentOS-Sources.repo epel-testing.repo
$ more CentOS-Base.repo
// check one of it
# CentOS-Base.repo
# The mirror system uses the connecting IP address of the client and the update status of each mirror to pick mirrors that are updated to and geographically close to the client. You should use this for CentOS updates unless you are manually picking other mirrors.
# If the mirrorlist= does not work for you, as a fall back you can try the remarked out baseurl= line instead.
[BaseOS]
name=CentOS-$releasever - Base
mirrorlist=https://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo
=BaseOS&infra=$infra
// a fixed mirror URL to download packages from
#baseurl=https://mirror.centos.org/$contentdir/$releasever/BaseOS/$basearch/os/
gpgcheck=1
// gpgcheck=1: authenticate packages from this repository against the digital signature, using gpgkey.
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
//local copy of the key provided from the repo.
enabled=1
// enabled=0: stops this repository from being used by the YUM commands that administer packages on the system; useful if packages from a particular repository are no longer wanted or needed.
// see all enabled repo on system
$ yum repolist
Last metadata expiration check: 5:19:07 ago on Sun 26 Apr 2020 02:54:57 PM EDT.
repo id repo name status
AppStream CentOS-8 - AppStream 4,830
BaseOS CentOS-8 - Base 1,661
PowerTools CentOS-8 - PowerTools 1,456
*epel Extra Packages for Enterprise Linux 8 - x86_64 5,352
*epel-modular Extra Packages for Enterprise Linux Modular 8 - x86_64 0
extras CentOS-8 - Extras 15
$ yum -v repolist
// more version detail info
Repo-id : AppStream
Repo-name : CentOS-8 - AppStream
Repo-revision: 8.1.1911
Repo-distro-tags: [cpe:/o:centos:centos:8]: , 8, C, O, S, e, n, t
Repo-updated : Wed 22 Apr 2020 01:16:06 AM EDT
Repo-pkgs : 4,830
Repo-size : 5.6 G
Repo-mirrors : https://mirrorlist.centos.org/?release=8&arch=x86_64&repo=AppStream&infra=stock
Repo-baseurl : https://mirrors.xtom.com/centos/8.1.1911/AppStream/x86_64/os/ (9 more)
Repo-expire : 172,800 second(s) (last: Sun 26 Apr 2020 02:54:15 PM EDT)
Repo-filename: /etc/yum.repos.d/CentOS-AppStream.repo
// actual location
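instead of editing the .repo file, repositories can also be toggled from the command line; a sketch assuming the dnf-plugins-core config-manager plugin is available:
$ sudo yum config-manager --set-disabled epel
$ sudo yum config-manager --set-enabled epel
$ sudo yum --disablerepo="*" --enablerepo="BaseOS" list available    // one-off override for a single command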
2. create your own local repository based on the CentOS DVD
install from a saved network resource
download the DVD ISO, make a yum repo configuration, and use that to install RPMs.
ref:
https://www.itzgeek.com/how-tos/linux/centos-how-tos/create-local-yum-repository-on-centos-7-rhel-7-using-dvd.html
https://www.tecmint.com/setup-local-http-yum-repository-on-centos-7/
https://phoenixnap.com/kb/create-local-yum-repository-centos
============================================
$ yum install createrepo
- make a dir to mount the dvd
$ sudo mkdir -p /var/repo/dvd
// If the CD has been mounted automatically, then ignore this step. Otherwise, mount it manually.
$ sudo mount /dev/sr0 /var/repo/dvd
$ ls /var/repo/dvd // content of the centos dvd
// If the ISO is present on the file system, mount it to /media/CentOS using the mount command with -o loop option.
$ mount -o loop CentOS-DVD1.iso /var/repo/dvd
$ ls /var/repo/dvd
AppStream BaseOS EFI images isolinux media.repo TRANS.TBL
$ more media.repo
[InstallMedia]
name=CentOS Linux 8
mediaid=None
metadata_expire=-1
gpgcheck=0
cost=500
============================================
- create the repo configuration
$ cd /etc/yum.repos.d
$ cp -v /var/repo/dvd/media.repo /etc/yum.repos.d/local-centos8.repo
// assign file permissions as shown to prevent modification or alteration by other users.
# chmod 644 /etc/yum.repos.d/local-centos8.repo
# ls -l /etc/yum.repos.d/local-centos8.repo
$ sudo vi /etc/yum.repos.d/local-centos8.repo
// add
[Local-Centos8-baseOS]
name=Local-CentOS8-BaseOS
metadata_expire=-1
enabled=1
baseurl=file:///var/repo/dvd/BaseOS/
gpgcheck=0
[Local-Centos8-AppStream]
name=Local-CentOS8-AppStream
metadata_expire=-1
enabled=1
baseurl=file:///var/repo/dvd/AppStream/
gpgcheck=0
//After modifying the repository file with new entries, proceed and clear the DNF / YUM cache as shown.
$ yum clean all
============================================
- disable other repo
$ sudo vi xx.repo
//add
enabled=0
- check
$ yum repolist
repo id repo name status
AppStream CentOS-8 - AppStream 5,402
BaseOS CentOS-8 - Base 1,661
Local-Centos8-AppStream Local-CentOS8-AppStream 4,754
Local-Centos8-baseOS Local-CentOS8-BaseOS 1,659
$ sudo yum install vsftpd
$ sudo yum install ypserv
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
ypserv x86_64 4.0-6.20170331git5bfba76.el8 Local 171 k
Installing dependencies:
tokyocabinet x86_64 1.4.48-10.el8 Local 486 k
- clean the cache
$ yum clean all
3. create own HTTP based repository
Step 1: setup Web Server
$ yum list httpd
$ firewall-cmd --zone=public --permanent --add-service=http
$ firewall-cmd --zone=public --permanent --add-service=https
$ firewall-cmd --reload
$ systemctl start httpd
$ systemctl enable httpd
$ systemctl status httpd
// confirm that server is up and running
https://192.168.1.1
Step 2: Create Yum Local Repository
$ yum install createrepo
$ yum install yum-utils
// a better toolbox for managing repositories
- create directory: path of the http repo in file system
// create the necessary directories (yum repositories) that will store packages and any related information.
$ ls /var/www/html/
index.html
$ mkdir -p /var/www/html/custom
// synchronize CentOS YUM repositories to the local directories as shown.
// $ sudo reposync -m --repoid=BaseOS --newest-only --download-metadata -p /var/www/html/repos/
Step 3: Create a Directory to Store the Repositories
- create repo configuration file: custom repo.
$ vi /etc/yum.repos.d/custom.repo
// change
[http]
name=Local HTTP repository
baseurl=https://192.168.1.1/custom
// the custom directory under the HTTP server's default document root
enabled=1
gpgcheck=0
// for any system that wants to use this repo,
// just take this custom.repo file and put it in /etc/yum.repos.d on that server
Step 4: add own custom built package to repositories
- put package inside
$ cd /var/www/html/custom/
$ sudo yumdownloader ypserv
$ ls
ypserv-4.0-6.20170331git5bfba76.el8.x86_64.rpm
Step 5: Create the New Repository
// create the repo metadata database
// re-run this each time packages are added
[server1@server0 custom]$ sudo createrepo .
Directory walk started
Directory walk done - 1 packages
Temporary output repo path: ./.repodata/
Preparing sqlite DBs
Pool started (with 5 workers)
Pool finished
$ sudo yum clean all //clean cache
$ sudo yum makecache // update repo info
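a quick sanity check from a client that has the custom.repo file in /etc/yum.repos.d (repo id "http" from the file above):
$ sudo yum clean all
$ sudo yum --disablerepo="*" --enablerepo="http" list available
$ sudo yum --disablerepo="*" --enablerepo="http" install ypserv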
configure and manage NFS, share resource
exports
: shared resources
- in /etc/exports:
- /share1 server1.psdemo.local
- rw: read-write / ro: read only
- async/sync:
  - trade-off between performance and safety
  - sync: the server replies to a write request only after the I/O is flushed/committed to disk
  - async: reply before the write is committed (faster, less safe)
- wdelay: delay writes to disk if NFS expects another related request to arrive soon
- root_squash: when a client request comes in as root (UID 0), NFS maps the UID to the anonymous user; prevents remote root access.
- all_squash:
  - all users are mapped to the anonymous user; for public resources with low security requirements.
- sec=krb5, krb5i, krb5p
- authentication settings. default=sys
- krb5 = user authentication only
- krb5i = add integrity checking
- krb5p = add Encryption, most secure.
exportfs
updates and maintains the table of exports on the server.
table path: /var/lib/nfs/etab, which holds the runtime configuration of the exports
host:
- single machine (IP or name):
  server1.psdemo.local
- IP network (CIDR notation):
  192.168.2.0/25
- ranges/wildcards: * ? [a-z]
  *.psdemo.local
  server?.psdemo.local // ? matches any single character
  server[2-9].psdemo.local
runtime mounting: mount -t nfs -o rw server0.psdemo.local:/share1 /mnt/share1
persistent mounting: (keep mount after reboot) /etc/fstab server0.psdemo.local:/share1 /mnt/share1 nfs rw 0 0
dynamic/on demand mounting: autofs
global server-level/mount-point-level NFS options: /etc/nfsmount.conf
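a minimal sketch of the /etc/nfsmount.conf layout (section names from nfsmount.conf(5); the commented option values here are illustrative assumptions):
[ NFSMount_Global_Options ]
# Defaultvers=4
[ Server "server0.psdemo.local" ]
# Sec=sys
[ MountPoint "/mnt/share1" ]
# Hard=True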
runtime mount
server0:
step 1. install nfs server
$ sudo yum install nfs-utils
$ yum info nfs-utils
$ systemctl start nfs-server
$ systemctl enable nfs-server
$ systemctl status nfs-server
$ sudo systemctl enable rpcbind
$ sudo systemctl start rpcbind
$ sudo systemctl status rpcbind
$ sudo firewall-cmd --permanent --zone=public --add-service=nfs
$ sudo firewall-cmd --permanent --zone=public --add-service=rpc-bind
$ sudo firewall-cmd --permanent --add-service=mountd
$ sudo firewall-cmd --reload
step 2. setup DNS to manage naming of system
$ sudo vi /etc/hosts
// modify
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 server0.psdemo.local
192.168.1.100 server1.psdemo.local
192.168.2.100 server2.psdemo.local
step 3. define exports for servers
$ sudo mkdir /share1
$ sudo vi /etc/exports
/share1 server1.psdemo.local
// reload the file
$ sudo exportfs -arv
exportfs: No options for /share1 server1.psdemo.local: suggest server1.psdemo.local(sync) to avoid warning
exporting server1.psdemo.local:/share1
// check the runtime configuration of NFS
$ cat /var/lib/nfs/etab
/share1 server1.psdemo.local(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
// default options
$ sudo vi /etc/exports
/share1 server1.psdemo.local(rw)
$ sudo exportfs -arv // no warning
exporting server1.psdemo.local:/share1
$ cat /var/lib/nfs/etab
/share1 server1.psdemo.local(rw,sync...)
// careful: a space before the options changes the meaning
/share1 server1.psdemo.local (rw)
$ sudo exportfs -arv
exportfs: No options for /share1 server1.psdemo.local: suggest server1.psdemo.local(sync) to avoid warning
exportfs: No host name given with /share1 (rw), suggest *(rw) to avoid warning
exporting server1.psdemo.local:/share1
exporting *:/share1 // allow anyone to access share1
client:
step 1: setup client side
// set the dns setting for each client:
$ sudo vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 server0.psdemo.local
192.168.1.100 server1.psdemo.local
192.168.2.100 server2.psdemo.local
// install
$ sudo yum install nfs-utils
$ sudo systemctl status nfs-server
$ sudo systemctl status rpcbind
$ sudo firewall-cmd --permanent --zone=public --add-service=nfs
$ sudo firewall-cmd --permanent --zone=public --add-service=rpc-bind
$ sudo firewall-cmd --permanent --add-service=mountd
$ sudo firewall-cmd --reload
// mount the exports
$ sudo mount -t nfs server0.psdemo.local:/share1 /mnt
// check
$ mount | grep server0
server0.psdemo.local:/share1 on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.100,local_lock=none,addr=192.168.1.1)
================================================================
if you do not know what is exported,
ask the NFS server; this requires the rpcbind service to be enabled on the NFS server
server1:
$ showmount -e server0.psdemo.local // or 192.168.1.1
Export list for 192.168.1.1:
/share1 server1.psdemo.local
$ showmount -e 192.168.1.1
clnt_create: RPC: Unable to receive
# firewall: stopping firewalld on the server made the client succeed, so open the needed services instead:
server side:
# firewall-cmd --permanent --add-service=nfs
# firewall-cmd --permanent --add-service=rpcbind
# firewall-cmd --permanent --add-service=mountd // this one!!!
# firewall-cmd --reload
================================================================
persistent mount
a runtime mount is lost after a reboot; make it persistent instead
server1:
$ sudo vi /etc/fstab
// add
/dev/mapper/cl-root / xfs defaults 0 0
UUID=ca2f12da-b776-4a8a-9dd0-308da349067a /boot ext4 defaults 1 2
/dev/mapper/cl-swap swap swap defaults 0 0
server0.psdemo.local:/share1 /mnt nfs defaults,rw,_netdev 0 0
// _netdev: do not attempt to mount the file system until the network is online, since NFS goes over the network.
$ mount
$ showmount -e server0.psdemo.local
Export list for server0.psdemo.local:
/share1 server?.psdemo.local
$ sudo umount /mnt/
// update configuration file
$ sudo mount -a
// new mount show
$ mount | grep server0
server0.psdemo.local:/share1 on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.100,local_lock=none,addr=192.168.1.1,_netdev_)
dynamic mount: autofs
autofs is a daemon that dynamically mounts shares/exports/anything, even local file systems, on demand
server1:
$ sudo yum install autofs
$ systemctl enable autofs
$ sudo vi /etc/auto.misc
// add
# This is an automounter map and it has the following format key [ -mount-options-separated-by-comma ] location
# Details may be found in the autofs(5) manpage
cd -fstype=iso9660,ro,nosuid,nodev :/dev/cdrom
share1 -fstype=nfs,rw server0.psdemo.local:/share1
# the following entries are samples to pique your imagination
#linux -ro,soft,intr ftp.example.org:/pub/linux
#boot -fstype=ext2 :/dev/hda1
#floppy -fstype=auto :/dev/fd0
#floppy -fstype=ext2 :/dev/fd0
#e2floppy -fstype=ext2 :/dev/fd0
#jaz -fstype=ext2 :/dev/sdc1
#removable -fstype=ext2 :/dev/hdd
$ systemctl restart autofs
$ ls /misc // nothing
$ ls /misc/share1
$ ls /misc
share1 // share1 shows up, mounted on demand
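the /misc mount point is wired up by the stock /etc/auto.master; a sketch assuming the default CentOS file (exact contents may differ):
$ grep -v '^#' /etc/auto.master
/misc   /etc/auto.misc
/net    -hosts
+auto.master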
mount unsuccessful
server2:
$ sudo mount -t nfs server0.psdemo.local:/share1 /mnt/
mount.nfs: access denied by server while mounting server0.psdemo.local:/share1
server0:
// change
$ vi /etc/exports
/share1 server?.psdemo.local(rw) // not server1.psdemo.local(rw)
// update to reread the configuration file
$ sudo exportfs -arv
exporting server?.psdemo.local:/share1
server2:
$ sudo mount -t nfs server0.psdemo.local:/share1 /mnt/
$ mount | grep server0
server0.psdemo.local:/share1 on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.2.100,local_lock=none,addr=192.168.1.1)
NFS file permission
UID and GID overlap:
- the same numeric UID/GID can belong to different users on different machines; keeping them in sync via per-machine configuration files lacks scalability.
- better: a central authentication server
AUTH_SYS
:
- default for NFS.
- UID/GID model
AUTH_GSS
:
- based on Kerberos.
- authorizes both the user and the system.
- requirement: configuration that needs to be set up:
- Kerberos key distribution center (KDC) installed
- host and service principals added for client and server.
- keytabs created and added on client and server.
- the authentication mechanism on the NFS client and server changed to use it.
- sec=krb5, krb5i, krb5p
AUTH_SYS (default security mechanism)
no real authentication; the client simply passes the UID/GID, which must match on the server.
server1:
$ ssh server1@192.168.1.100
$ touch /mnt/file1.test
touch: cannot touch '/mnt/file1.test': Permission denied
$ ll /
lrwxrwxrwx. 1 root root 7 May 10 2019 bin -> usr/bin
drwxr-xr-x. 2 root root 6 May 10 2019 mnt
================================================
server0:
$ sudo chown server1:server1 /share1
================================================
server1:
$ ll /
drwxr-xr-x. 2 server1 server1 6 Apr 28 19:23 share1
$ touch /mnt/file1.test
$ cat /etc/passwd | grep server0
server0:x:1000:1000:server0:/home/server0:/bin/bash
$ cat /etc/passwd | grep server1
server1:x:1000:1000:server1:/home/server1:/bin/bash
AUTH_GSS (Kerberos on the NFS server)
name resolution: /etc/hosts
ensure NTP clock synchronization; Kerberos is time-sensitive.
setup the Kerberos server and client.
get the machine access
- setup client
server1:
$ sudo kadmin
addprinc -randkey host/server1.psdemo.local
addprinc -randkey nfs/server1.psdemo.local
ktadd host/server1.psdemo.local
ktadd nfs/server1.psdemo.local
- setup server
server0: server side
$ kadmin
// add service principal for NFS server
addprinc -randkey nfs/server0.psdemo.local
ktadd host/server0.psdemo.local
ktadd nfs/server0.psdemo.local
- configure the kerberos server to authenticate on NFS mounts
server0:
configure for NFS
$ vi /etc/exports
// add sec=krb5
/share1 server*.psdemo.local(rw,sec=krb5)
// reload the export configuration
$ exportfs -arv
exporting server1.psdemo.local:/share1
// check the runtime configuration
$ cat /var/lib/nfs/etab
/share1 server1.psdemo.local(rw,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=krb5,rw,secure,root_squash,no_all_squash)
- configure the client to use kerberos based configuration
server1: unmount and remount again.
1. runtime mount
$ sudo umount /mnt
$ mount -t nfs -o sec=krb5 server0.psdemo.local:/share1 /mnt/
$ mount | grep server0
server0.psdemo.local:/share1 on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=krb5,clientaddr=192.168.1.1,local_lock=none,addr=192.168.1.1)
2. add mount in /etc/fstab.
$ vi /etc/fstab
/dev/mapper/cl-root / xfs defaults 0 0
server0.psdemo.local:/share1 /mnt nfs defaults,rw,_netdev,sec=krb5 0 0
$ mount -a
$ mount | grep server0
server0.psdemo.local:/share1 on /mnt type nfs4 (rw,relatime,vers=4.2,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=krb5,clientaddr=192.168.1.1,local_lock=none,addr=192.168.1.1,_netdev_)
$ ls /mnt
ls: cannot access '/mnt': Permission denied
machine access is done; user access is not configured yet.
get the user access
server1:
$ sudo kadmin
// add user principal
addprinc demo
// need to get a ticket from kdc.
$ kinit
// verify kerberos ticket
$ klist
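// hypothetical klist output (realm name assumed to be PSDEMO.LOCAL):
Ticket cache: KEYRING:persistent:1000:1000
Default principal: demo@PSDEMO.LOCAL
Valid starting       Expires              Service principal
04/28/2020 19:30:00  04/29/2020 19:30:00  krbtgt/PSDEMO.LOCAL@PSDEMO.LOCAL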
SELinux, performance, and monitoring for NFS on CentOS 8
the default SELinux security contexts support NFS.
Multiple mount points
: clients can mount subdirectories of an exported root directory
Multiple-protocol sharing
: the same content can be shared over several protocols, e.g. web server contents
monitor: nfsstat, nfsiostat, mountstats