
Chapter 9. Unix Services

9.1. System Boot
9.1.1. The systemd init system
9.1.2. The System V init system
9.2. Remote Login
9.2.1. Secure Remote Login: SSH
9.2.2. Using Remote Graphical Desktops
9.3. Managing Rights
9.3.1. Owners and Permissions
9.3.2. ACLs - Access Control Lists
9.4. Administration Interfaces
9.4.1. Administrating on a Web Interface: webmin
9.4.2. Configuring Packages: debconf
9.5. syslog System Events
9.5.1. Principle and Mechanism
9.5.2. The Configuration File
9.6. The inetd Super-Server
9.7. Scheduling Tasks with cron and atd
9.7.1. Format of a crontab File
9.7.2. Using the at Command
9.8. Scheduling Asynchronous Tasks: anacron
9.9. Quotas
9.10. Backup
9.10.1. Backing Up with rsync
9.10.2. Restoring Machines without Backups
9.11. Hot Plugging: hotplug
9.11.1. Introduction
9.11.2. The Naming Problem
9.11.3. How udev Works
9.11.4. A concrete example
9.12. Power Management: Advanced Configuration and Power Interface (ACPI)
This chapter covers a number of basic services that are common to many Unix systems. All administrators should be familiar with them.

9.1. System Boot

When you boot the computer, the messages scrolling by on the console show the many automatic initializations and configurations being executed. Sometimes you may wish to slightly alter how this stage works, which means that you need to understand it well. That is the purpose of this section.
On systems with a BIOS, first, the BIOS takes control of the computer, initializes the controllers and hardware, detects the disks, and bridges everything together. Then it looks up the Master Boot Record (MBR) of the first disk in the boot order and loads the code stored there (first stage). This code then launches the second stage and finally executes the bootloader.
In contrast to the BIOS, UEFI is more sophisticated: it understands filesystems and can read partition tables. The interface searches the system storage for a partition labeled with a specific globally unique identifier (GUID) that marks it as the EFI System Partition (ESP), where the bootloaders, boot managers, UEFI shell, etc., are located, and launches the desired bootloader. If Secure Boot is enabled, the boot process verifies the authenticity of the EFI binaries there by their signatures (thus grub-efi-arch-signed is required in this case). The UEFI specification also defines support for booting in legacy BIOS mode, called the Compatibility Support Module (CSM). If CSM is enabled, it will attempt to boot from a drive's MBR. However, many newer systems no longer support CSM mode.
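If you want to check which of the two firmware paths a running Debian system actually booted through, the kernel exposes a /sys/firmware/efi directory only on UEFI boots; the following one-liner is a minimal illustration (output shown for a UEFI machine):
$ [ -d /sys/firmware/efi ] && echo "booted in UEFI mode" || echo "booted in legacy BIOS/CSM mode"
booted in UEFI mode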
In both cases, the actual bootloader then takes over, finds either a chained bootloader or the kernel on the disk, loads it, and executes it. The kernel is then initialized, searches for and mounts the partition containing the root filesystem, and finally executes the first program — init. Frequently, this “root partition” and this init are, in fact, located in a virtual filesystem that only exists in RAM (hence its name, “initramfs”, formerly called “initrd” for “initialization RAM disk”). This filesystem is loaded in memory by the bootloader, often from a file on a hard drive or from the network. It contains the bare minimum required by the kernel to load the “true” root filesystem: this may be driver modules for the hard drive, or other devices without which the system cannot boot, or, more frequently, initialization scripts and modules for assembling RAID arrays, opening encrypted partitions, activating LVM volumes, etc. Once the root partition is mounted, the initramfs hands over control to the real init, and the machine goes back to the standard boot process.
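On Debian, the initramfs is built by the initramfs-tools package. As a quick illustration of how to peek at it (the image path simply follows the running kernel version), lsinitramfs lists the files embedded in the image, and update-initramfs -u regenerates it after a configuration change:
# lsinitramfs /boot/initrd.img-$(uname -r) | head
# update-initramfs -u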

9.1.1. The systemd init system

The “real init” is currently provided by systemd, and this section documents that init system.

Figure 9.1. Boot sequence of a computer running Linux with systemd

Systemd executes several processes, in charge of setting up the system: keyboard, drivers, filesystems, network, services. It does this while keeping a global view of the system as a whole, and of the requirements of the components. Each component is described by a “unit file” (sometimes more); the general syntax is derived from the widely used “*.ini file” syntax, with key = value pairs grouped between [section] headers. Unit files are stored under /lib/systemd/system/ and /etc/systemd/system/; they come in several flavors, but we will focus on “services” and “targets” here.
A systemd “.service file” describes a process managed by systemd. It contains roughly the same information as old-style init scripts, but expressed in a declarative (and much more concise) way. Systemd handles the bulk of the repetitive tasks (starting and stopping the process, checking its status, logging, dropping privileges, and so on), and the service file only needs to fill in the specifics of the process. For instance, here is the service file for SSH:
[Unit]
Description=OpenBSD Secure Shell server
Documentation=man:sshd(8) man:sshd_config(5)
After=network.target auditd.service
ConditionPathExists=!/etc/ssh/sshd_not_to_be_run

[Service]
EnvironmentFile=-/etc/default/ssh
ExecStartPre=/usr/sbin/sshd -t
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
ExecReload=/usr/sbin/sshd -t
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartPreventExitStatus=255
Type=notify
RuntimeDirectory=sshd
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
Alias=sshd.service
The [Unit] section contains generic information about the service, like its description and manual page resources, as well as relations (dependency and ordering) to other services. The [Service] part contains the declarations related to the service execution (starting, stopping, killing, restarting), along with the directories and configuration file(s) used. The last section, [Install], again carries generic information: it lists the targets in which the service should be installed and, in this case, an alias that can be used instead of the service name. As you can see, there is very little code in there, only declarations. Systemd takes care of displaying progress reports, keeping track of the processes, and even restarting them when needed. The syntax of these files is fully described in several manual pages (e.g. systemd.service(5), systemd.unit(5), systemd.exec(5), etc.).
A systemd “.target file” describes a state of the system where a set of services are known to be operational. It can be thought of as an equivalent of the old-style runlevel. One of the pre-defined targets is local-fs.target; when it is reached, the rest of the system can assume that all local filesystems are mounted and accessible. Other targets include network-online.target and sound.target (for a full list of special targets see systemd.special(7)). The dependencies of a target can be listed either within the target file (in the Requires= line), or using a symbolic link to a service file in the /lib/systemd/system/targetname.target.wants/ directory. For instance, /etc/systemd/system/printer.target.wants/ contains a link to /lib/systemd/system/cups.service; systemd will therefore ensure CUPS is running in order to reach printer.target.
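As a small sketch (the maintenance.target name is hypothetical, not something shipped by Debian), a custom target is just another unit file, and a service can be attached to it either with a Requires=/Wants= line or by creating the corresponding .wants/ symbolic link by hand:
[Unit]
Description=Custom maintenance state (hypothetical example)
Requires=multi-user.target
After=multi-user.target
AllowIsolate=yes
Saved as /etc/systemd/system/maintenance.target, it can then be populated with the symbolic-link mechanism described above:
# mkdir -p /etc/systemd/system/maintenance.target.wants
# ln -s /lib/systemd/system/cups.service /etc/systemd/system/maintenance.target.wants/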
Since unit files are declarative rather than scripts or programs, they cannot be run directly, and they are only interpreted by systemd; several utilities therefore allow the administrator to interact with systemd and control the state of the system and of each component.
The first such utility is systemctl. When run without any arguments, it lists all the units known to systemd (except those that have been disabled), as well as their status. systemctl status gives a better view of the services, as well as the related processes. If given the name of a service (as in systemctl status ntp.service), it returns even more details, as well as the last few log lines related to the service (more on that later).
Starting a service by hand is a simple matter of running systemctl start servicename.service. As one can guess, stopping the service is done with systemctl stop servicename.service; other subcommands include reload and restart.
To control whether a service is enabled (i.e. whether it will be started automatically at boot), use systemctl enable servicename.service (or disable); systemctl is-enabled allows checking the current setting.
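A typical session might therefore look like this (using ssh.service as an example; is-enabled simply prints the current state):
# systemctl restart ssh.service
# systemctl stop ssh.service
# systemctl start ssh.service
# systemctl enable ssh.service
# systemctl is-enabled ssh.service
enabled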
An interesting feature of systemd is that it includes a logging component named journald. It comes as a complement to more traditional logging systems such as syslogd, but it adds interesting features such as a formal link between a service and the messages it generates, and the ability to capture error messages generated by a service's initialization sequence. The messages can be displayed later on, with a little help from the journalctl command. Without any arguments, it simply spews all log messages that occurred since system boot; it will rarely be used in such a manner. Most of the time, it will be used with a service identifier:
# journalctl -u ssh.service
-- Logs begin at Tue 2015-03-31 10:08:49 CEST, end at Tue 2015-03-31 17:06:02 CEST. --
Mar 31 10:08:55 mirtuel sshd[430]: Server listening on 0.0.0.0 port 22.
Mar 31 10:08:55 mirtuel sshd[430]: Server listening on :: port 22.
Mar 31 10:09:00 mirtuel sshd[430]: Received SIGHUP; restarting.
Mar 31 10:09:00 mirtuel sshd[430]: Server listening on 0.0.0.0 port 22.
Mar 31 10:09:00 mirtuel sshd[430]: Server listening on :: port 22.
Mar 31 10:09:32 mirtuel sshd[1151]: Accepted password for roland from 192.168.1.129 port 53394 ssh2
Mar 31 10:09:32 mirtuel sshd[1151]: pam_unix(sshd:session): session opened for user roland by (uid=0)
Another useful command-line flag is -f, which instructs journalctl to keep displaying new messages as they are emitted (much in the manner of tail -f file).
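For instance, to watch the SSH server's messages as they arrive:
# journalctl -u ssh.service -f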
If a service doesn't seem to be working as expected, the first step to solve the problem is to check that the service is actually running with systemctl status; if it is not, and the messages given by the first command are not enough to diagnose the problem, check the logs gathered by journald about that service. For instance, assume the SSH server doesn't work:
# systemctl status ssh.service
● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled)
   Active: failed (Result: start-limit) since Tue 2015-03-31 17:30:36 CEST; 1s ago
  Process: 1023 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
  Process: 1188 ExecStart=/usr/sbin/sshd -D $SSHD_OPTS (code=exited, status=255)
 Main PID: 1188 (code=exited, status=255)

Mar 31 17:30:36 mirtuel systemd[1]: ssh.service: main process exited, code=exited, status=255/n/a
Mar 31 17:30:36 mirtuel systemd[1]: Unit ssh.service entered failed state.
Mar 31 17:30:36 mirtuel systemd[1]: ssh.service start request repeated too quickly, refusing to start.
Mar 31 17:30:36 mirtuel systemd[1]: Failed to start OpenBSD Secure Shell server.
Mar 31 17:30:36 mirtuel systemd[1]: Unit ssh.service entered failed state.
# journalctl -u ssh.service
-- Logs begin at Tue 2015-03-31 17:29:27 CEST, end at Tue 2015-03-31 17:30:36 CEST. --
Mar 31 17:29:27 mirtuel sshd[424]: Server listening on 0.0.0.0 port 22.
Mar 31 17:29:27 mirtuel sshd[424]: Server listening on :: port 22.
Mar 31 17:29:29 mirtuel sshd[424]: Received SIGHUP; restarting.
Mar 31 17:29:29 mirtuel sshd[424]: Server listening on 0.0.0.0 port 22.
Mar 31 17:29:29 mirtuel sshd[424]: Server listening on :: port 22.
Mar 31 17:30:10 mirtuel sshd[1147]: Accepted password for roland from 192.168.1.129 port 38742 ssh2
Mar 31 17:30:10 mirtuel sshd[1147]: pam_unix(sshd:session): session opened for user roland by (uid=0)
Mar 31 17:30:35 mirtuel sshd[1180]: /etc/ssh/sshd_config line 28: unsupported option "yess".
Mar 31 17:30:35 mirtuel systemd[1]: ssh.service: main process exited, code=exited, status=255/n/a
Mar 31 17:30:35 mirtuel systemd[1]: Unit ssh.service entered failed state.
Mar 31 17:30:35 mirtuel sshd[1182]: /etc/ssh/sshd_config line 28: unsupported option "yess".
Mar 31 17:30:35 mirtuel systemd[1]: ssh.service: main process exited, code=exited, status=255/n/a
Mar 31 17:30:35 mirtuel systemd[1]: Unit ssh.service entered failed state.
Mar 31 17:30:35 mirtuel sshd[1184]: /etc/ssh/sshd_config line 28: unsupported option "yess".
Mar 31 17:30:35 mirtuel systemd[1]: ssh.service: main process exited, code=exited, status=255/n/a
Mar 31 17:30:35 mirtuel systemd[1]: Unit ssh.service entered failed state.
Mar 31 17:30:36 mirtuel sshd[1186]: /etc/ssh/sshd_config line 28: unsupported option "yess".
Mar 31 17:30:36 mirtuel systemd[1]: ssh.service: main process exited, code=exited, status=255/n/a
Mar 31 17:30:36 mirtuel systemd[1]: Unit ssh.service entered failed state.
Mar 31 17:30:36 mirtuel sshd[1188]: /etc/ssh/sshd_config line 28: unsupported option "yess".
Mar 31 17:30:36 mirtuel systemd[1]: ssh.service: main process exited, code=exited, status=255/n/a
Mar 31 17:30:36 mirtuel systemd[1]: Unit ssh.service entered failed state.
Mar 31 17:30:36 mirtuel systemd[1]: ssh.service start request repeated too quickly, refusing to start.
Mar 31 17:30:36 mirtuel systemd[1]: Failed to start OpenBSD Secure Shell server.
Mar 31 17:30:36 mirtuel systemd[1]: Unit ssh.service entered failed state.
# vi /etc/ssh/sshd_config
# systemctl start ssh.service
# systemctl status ssh.service
● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled)
   Active: active (running) since Tue 2015-03-31 17:31:09 CEST; 2s ago
  Process: 1023 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
 Main PID: 1222 (sshd)
   CGroup: /system.slice/ssh.service
           └─1222 /usr/sbin/sshd -D
# 
After checking the status of the service (failed), we went on to check the logs; they indicate an error in the configuration file. After editing the configuration file and fixing the error, we restart the service, then verify that it is indeed running.

9.1.2. The System V init system

The System V init system (which we'll call init for brevity) executes several processes, following instructions from the /etc/inittab file. The first program that is executed (which corresponds to the sysinit step) is /etc/init.d/rcS, a script that executes all of the programs in the /etc/rcS.d/ directory.
Among these, you will find successively programs in charge of:
  • configuring the console's keyboard;
  • loading drivers: most of the kernel modules are loaded by the kernel itself as the hardware is detected; extra drivers are then loaded automatically when the corresponding modules are listed in /etc/modules;
  • checking the integrity of filesystems;
  • mounting local partitions;
  • configuring the network;
  • mounting network filesystems (NFS).
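For reference, the /etc/inittab file mentioned above mostly maps runlevels to the /etc/init.d/rc script and lists the consoles to be spawned; a classic Debian sysvinit configuration contains lines roughly like the following (an illustrative excerpt, details vary):
id:2:initdefault:
si::sysinit:/etc/init.d/rcS
l0:0:wait:/etc/init.d/rc 0
l2:2:wait:/etc/init.d/rc 2
l6:6:wait:/etc/init.d/rc 6
1:2345:respawn:/sbin/getty 38400 tty1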
After this stage, init takes over and starts the programs enabled in the default runlevel (which is usually runlevel 2). It executes /etc/init.d/rc 2, a script that starts all services listed in /etc/rc2.d/ whose names start with the letter “S”. The two-digit number that follows was historically used to define the order in which services had to be started, but nowadays the default boot system uses insserv, which schedules everything automatically based on the scripts' dependencies. Each boot script thus declares the conditions that must be met to start or stop the service (for example, if it must start before or after another service); init then launches them in an order that meets these conditions. The static numbering of scripts is therefore no longer taken into consideration (but they must always have a name beginning with “S” followed by two digits, and then the actual name of the script used for the dependencies). Generally, base services (such as logging with rsyslog, or port assignment with portmap) are started first, followed by standard services and the graphical interface (gdm3).
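The dependencies that insserv relies on are declared in an LSB comment block at the top of each init script; a hypothetical service could declare them like this:
### BEGIN INIT INFO
# Provides:          myservice
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Example daemon (hypothetical)
### END INIT INFO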
This dependency-based boot system makes it possible to automate re-numbering, which could be rather tedious if it had to be done manually, and it limits the risks of human error, since scheduling is conducted according to the parameters that are indicated. Another benefit is that services can be started in parallel when they are independent from one another, which can accelerate the boot process.
init distinguishes several runlevels, and it can switch from one to another with the telinit new-level command. Immediately, init executes /etc/init.d/rc again with the new runlevel. This script will then start the missing services and stop those that are no longer desired. To do this, it refers to the content of the /etc/rcX.d directory (where X represents the new runlevel). Scripts starting with “S” (as in “Start”) are services to be started; those starting with “K” (as in “Kill”) are the services to be stopped. The script does not start a service that was already active in the previous runlevel.
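Listing one of these directories makes the naming convention visible; the exact names and numbers vary from one machine to the next, so the following is only illustrative:
$ ls /etc/rc2.d
README  S01rsyslog  S01ssh  S02cron  S03gdm3  ...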
By default, System V init in Debian uses four different runlevels:
  • Level 0 is only used temporarily, while the computer is powering down. As such, it contains only “K” scripts.
  • Level 1, also known as single-user mode, corresponds to the system in degraded mode; it includes only basic services, and is intended for maintenance operations where interactions with ordinary users are not desired.
  • Level 2 is the level for normal operation, which includes networking services, a graphical interface, user logins, etc.
  • Level 6 is similar to level 0, except that it is used during the shutdown phase that precedes a reboot.
Other levels exist, especially 3 to 5. By default they are configured to operate the same way as level 2, but the administrator can modify them (by adding or deleting scripts in the corresponding /etc/rcX.d directories) to adapt them to particular needs.

Figure 9.2. Boot sequence of a computer running Linux with System V init

All the scripts contained in the various /etc/rcX.d directories are really only symbolic links — created upon package installation by the update-rc.d program — pointing to the actual scripts which are stored in /etc/init.d/. The administrator can fine-tune the services available in each runlevel by re-running update-rc.d with adjusted parameters. The update-rc.d(1) manual page describes the syntax in detail. Please note that removing all symbolic links (with the remove parameter) is not a good method to disable a service. Instead you should simply configure it not to start in the desired runlevel (while preserving the corresponding calls to stop it in the event that the service runs in the previous runlevel). Since update-rc.d has a somewhat convoluted interface, you may prefer using rcconf (from the rcconf package) which provides a more user-friendly interface.
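In practice, a few update-rc.d invocations cover most needs; with a hypothetical service called myservice, for example:
# update-rc.d myservice defaults
# update-rc.d myservice disable
# update-rc.d myservice enable 2
# update-rc.d -f myservice remove
The defaults call (normally run by the package's maintainer scripts) creates the symbolic links according to the script's dependency header, disable and enable toggle them per runlevel, and remove deletes them all, which, as explained above, is not the right way to disable a service.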
Finally, init starts the control programs for the various virtual consoles (getty). These display a prompt, waiting for a username, then execute login user to initiate a session.