Though systemd will be compared to its predecessor SysV init for a long time, it has much more to offer in terms of system management. It is a new way for Linux to interact with underlying objects such as hardware, sockets, application processes and more.
Understand How SystemD works
systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts.
Features
Unlike its predecessor, systemd starts processes in parallel
socket and D-Bus activation
Traditionally services are configured to start at boot, but systemd is more event driven: a service can be configured to start when something connects to a specific port or when a device gets connected. This is called socket and D-Bus activation (see the sketch after this list).
Offers on-demand starting of daemons, also keeps track of processes using Linux cgroups
Supports snapshot and restoring of the system state
Maintains mount and automount points
Implements an elaborate transactional dependency-based service control logic.
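As a minimal sketch of socket activation (the unit names foo.socket and foo.service, the port and the binary path are made-up examples, and the daemon itself must support socket activation), systemd can listen on a port and start the service only when the first client connects:
# /etc/systemd/system/foo.socket - systemd listens on the port on the service's behalf
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# /etc/systemd/system/foo.service - started on the first connection to port 8080
[Service]
ExecStart=/usr/local/bin/foo-daemon
Enabling foo.socket instead of foo.service (systemctl enable --now foo.socket) is what makes the start on demand.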
Concept of Units in SystemD
systemd manages units, which are representations of system resources and services; a minimal example unit file follows the list of unit types below.
Type of Units in SystemD
Service: Manages a service; the unit file includes instructions to start, stop and restart the service.
Socket: A network socket associated with a service.
Device: A device unit file is used to manage a device (start, stop, auto, etc.).
Mount: Manages a mount point via systemd.
Automount: An automount unit mounts a file system at boot or on access. This feature might replace traditional fstab files in the near future.
Swap: Mounts swap space on the system.
Target: Targets are much like the runlevels used previously, managing groups of services that start and stop together at different stages of system operation.
Path: A path for path-based activation. For example, you can start services based on the state of a certain path, such as whether it exists or not.
Timer: A timer unit is used, much like crontab, to schedule other units.
Snapshot: A snapshot of the current systemd state, usually used to roll back after making temporary changes to systemd.
Slice: Restriction of resources through Linux control group nodes (cgroups).
Scope: Information from systemd bus interfaces, usually used to manage external system processes.
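As an illustration of the unit file format, here is a minimal, hypothetical service unit (the name foo.service and the binary path are assumptions, not part of any distribution):
# /etc/systemd/system/foo.service
[Unit]
Description=Example foo daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/foo-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
After creating or changing a unit file, run systemctl daemon-reload so systemd picks up the new definition.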
The systemctl command
systemctl is the primary tool to manage systemd. It is used for starting and stopping services as well as enabling and disabling them at boot; these tasks were previously performed with the service and chkconfig commands.
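For example, the older SysV-style commands map to systemctl as follows (foo is a placeholder service name):
service foo start        # becomes: systemctl start foo
service foo status       # becomes: systemctl status foo
chkconfig foo on         # becomes: systemctl enable foo
chkconfig foo off        # becomes: systemctl disable foo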
Basic Service Management Syntax
Start the service
systemctl start foo
Stop the service
systemctl stop foo
Restart the service
systemctl restart foo
Status of the service
systemctl status foo
Enable the service to start at boot time
systemctl enable foo
Disable the service
systemctl disable foo
Check if the service is enabled
systemctl is-enabled foo
Mask the service
systemctl mask foo
Reload updated unit files
systemctl daemon-reload
Show failed services
systemctl --failed
Reset any failed service
systemctl reset-failed
Show properties of the unit
systemctl show <service>
Edit the service unit (drop-in override)
systemctl edit <service>
Edit the full service unit
systemctl edit --full <service>
Run on a remote host
systemctl -H <host_name> status network
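As a brief sketch of the difference between the two edit commands: systemctl edit creates a drop-in override, while systemctl edit --full copies the whole unit into /etc/systemd/system for editing (foo and the directive shown are hypothetical):
systemctl edit foo
# opens an editor on /etc/systemd/system/foo.service.d/override.conf;
# only the directives added there override the packaged unit, for example:
[Service]
Restart=always
Restart the service afterwards (systemctl restart foo) to apply the override.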
Changing System State
Reboot host
systemctl reboot
Poweroff host
systemctl poweroff
Switch to Emergency mode
systemctl emergency
Switch back to the default target (multi-user)
systemctl default
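Related target-management commands (standard systemctl verbs, shown here as an illustrative sketch):
systemctl get-default                      # show the target the system boots into
systemctl set-default multi-user.target    # make the text-mode target the default
systemctl isolate graphical.target         # switch the running system to another target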
Viewing Log Messages
Show all log messages
journalctl
Show only kernel log messages
journalctl -k
Show log for specific service
journalctl -u network.service
Follow messages as they appear
journalctl -f
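journalctl filters can be combined; for example (network.service is just an example unit):
journalctl -u network.service --since "1 hour ago"   # one unit, last hour only
journalctl -b -p err                                 # errors and worse from the current boot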
Besides services, most systemd commands can work with these unit types: paths,
slices, snapshots, sockets, swaps, targets, and timers
Once the hardware check POST (Power On Self Test) is completed and boot devices are identified, the last step for the BIOS/UEFI firmware is to read the MBR (Master Boot Record) of the first boot device. The MBR is a 512-byte area at the start of the storage device that stores boot loader information; it can be considered an index location that refers to other sectors for loading the operating system. In most Linux distributions GRUB V2 is used as the boot loader as of this writeup.
GRUB V2 stands for "Grand Unified Bootloader, version 2"; it is the program that identifies and loads the system kernel. At this point we should be clear why we say GRUB2 / GRUB V2 rather than simply GRUB: GRUB V2 is a rewrite of the legacy GRUB boot loader with many new features and a modular design. It is designed for multi-OS boot, running multiple Linux, Unix and proprietary operating systems such as MS Windows. It can even identify multiple kernels for the same Linux distribution and allow booting an older version if required.
The default configuration file is /boot/grub/grub.cfg on Ubuntu and /boot/grub2/grub.cfg on RHEL 7.
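The file is generated rather than edited by hand; a typical workflow is to edit /etc/default/grub and then regenerate the configuration (commands differ per distribution):
# Ubuntu / Debian
update-grub                                  # regenerates /boot/grub/grub.cfg
# RHEL / CentOS 7
grub2-mkconfig -o /boot/grub2/grub.cfg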
GRUB in itself is a complete topic, along with its configuration and management options, and is discussed in the GRUB section.
Once the kernel is selected, the kernel along with the initramfs is loaded into main memory and the root file system gets mounted. The first process in legacy SysV was the init process, which would initiate the OS processes, but this has changed with Canonical's Upstart and, more recently, systemd. Both were designed to overcome the shortcomings of the SysV init system; they have comparatively similar features but differ in design and architecture. As of now systemd leads: major distributions such as Red Hat, Fedora, CentOS, Debian and, last but not least, Ubuntu have given up Upstart in favor of systemd, one reason being that maintaining both systems was causing confusion for the software developer community.
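To check which init system a given machine is actually running, inspect PID 1 directly:
ps -p 1 -o comm=        # prints "systemd" on systemd-based distributions
ls -l /sbin/init        # on systemd distributions this is usually a symlink to /lib/systemd/systemd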
To keep things simple, I have divided the workings of these three systems into the separate links below.
The boot process is one of the major parts of troubleshooting an operating system; it is the most critical time, when administrators are tested to bring a server up and running as soon as possible. Understanding how the operating system boots and what the possible issues are helps administrators manage and configure an operating system that can not only boot faster but can also be recovered and repaired in the fastest possible time.
The very first part of the boot process depends on the hardware architecture; a few of the commonly used ones are:
Intel x86-based i386
AMD64 & Intel 64 amd64
multiplatform for LPAE generic-lpae
IBM POWER Systems ppc64el
IBM z/Architecture s390x
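You can check which architecture a running system reports with uname:
uname -m        # e.g. x86_64, ppc64le, s390x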
BIOS-based x86 Architecture
x86 systems are BIOS based and load the first-stage boot loader from the MBR of the assigned storage device, which in turn loads boot loader stages 1.5 and 2; the default boot loader for Linux is GRUB.
UEFI-based x86 systems mount an EFI System Partition that contains a version of the GRUB boot loader. The EFI boot manager loads and runs GRUB as an EFI application.
Power Systems servers mount a PPC PReP partition that contains the Yaboot boot loader. The System Management Services (SMS) boot manager loads and runs yaboot.
IBM System z runs the z/IPL boot loader from a DASD or FCP-connected device that you specify when you IPL the partition that contains the Linux operating system.
Note: BIOS and UEFI are both available in VMware products as well as Oracle VirtualBox for the latest configurations.
BIOS-based x86 Systems Details
BIOS (Basic Input/Output System) is a firmware interface in IBM-compatible PCs and has lately also been adopted by virtualization software vendors like VMware and VirtualBox to be available in virtual machines.
It is embedded on a chip on the motherboard of physical hardware and helps to scan and test all the devices in the system and select the device to boot from. The boot options list in the BIOS provides the list of bootable devices and the sequence in which to test them for an available operating system.
Usually, it checks any optical drives or USB storage devices present for bootable media, then, failing that, looks to the system’s hard drives. The BIOS then loads into memory whatever program is residing in the first sector of this device, called the Master Boot Record (MBR).
The MBR is only 512 bytes in size and contains machine code instructions for booting the machine, called a boot loader, along with the partition table. Once the BIOS finds and loads the boot loader program into memory, it gives control of the boot process to it.
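As a read-only illustration (assuming the first disk is /dev/sda; adjust the device name and run as root), you can dump the 512-byte MBR and look for the boot signature at its end:
dd if=/dev/sda bs=512 count=1 2>/dev/null | xxd | tail -4
# the last two bytes of a valid MBR are 55 aa (the boot signature)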
UEFI-based x86 Systems
UEFI is designed similarly to BIOS with some great additions; unlike the BIOS it has its own architecture, independent of the CPU, and its own device drivers. UEFI can mount partitions and read certain file systems. Although it has unique features, its main tasks are searching for a bootable file system and passing control to the operating system kernel. A UEFI system identifies the partition by the GUID (globally unique identifier) that marks it as the EFI System Partition. This partition contains applications compiled for the EFI architecture, which might include boot loaders for operating systems and utility software.
A UEFI system includes an EFI boot manager that can boot the system from a default configuration or allow the user to choose from a list of detected operating systems. Once a boot entry is selected, UEFI reads it into memory and hands control to the boot process.
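On a running Linux system you can check whether it booted via UEFI and list the firmware boot entries (efibootmgr is a standard tool, though it may need to be installed and requires root):
ls /sys/firmware/efi >/dev/null 2>&1 && echo "UEFI boot" || echo "BIOS boot"
efibootmgr -v        # list EFI boot entries and the boot order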
The RHEL 8 Cockpit web console is a web-based management tool that allows you to complete many common RHEL tasks from a web browser; it follows the browser-based management style of cloud platforms. Like any web application, it is accessible from remote machines by default.
Enabling Cockpit Web Console
By default Cockpit gets installed on all RHEL 8 installations with the exception of minimal installs; however, it is not enabled by default. Use the command below to enable the web interface.
systemctl enable --now cockpit.socket
Notice that Cockpit is a self-contained application and does not require a separate web server to be installed in order to run.
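After enabling the socket you can confirm it is listening and, if firewalld is running and the service is not already allowed, open the predefined cockpit service (port 9090):
systemctl status cockpit.socket
firewall-cmd --permanent --add-service=cockpit
firewall-cmd --reload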
If you want to run the Cockpit dashboard locally from the desktop, you can use the command below to install the graphical interface.
yum install virt-viewer
The next step is to open a web browser (either from a remote host or from the RHEL 8 system console) and go to the RHEL 8 system's hostname or IP address, followed by :9090 to specify port 9090, for example: https://localhost.localdomain:9090
Log in to the Web Console with the root account, or with another RHEL account.