With Linux, you can just turn it on, wait a few moments, and start doing powerful things!
But if you are going to be responsible for critical systems, you need to know how they are supposed to work so you can recognize and fix problems. At its very simplest, we’re looking at this sequence:
1. The firmware finds and starts a boot loader.
2. The boot loader finds, loads, and starts the kernel.
3. The kernel detects the hardware, mounts the root file system, and starts the init program.
4. The init program runs boot scripts to find the other file systems and start network and local service processes.
The closer we look at any of this, the deeper it goes. There are choices for firmware, media type, boot loader, kernel construction, and
init program to control the user-space operation. I will go through the booting process and fill in some of the details in this series of blog posts. For more detail, see the Learning Tree courses I teach on Linux server administration and optimization and troubleshooting.
The traditional Linux platform is derived from the early 1980s IBM PC and its very limited BIOS firmware. The Unified Extensible Firmware Interface, or UEFI, came out of the Intel-HP Itanium server development of the mid 1990s. Version 2.1 was released in early 2007. UEFI firmware supports remote diagnostics and configuration, other networking in the pre-boot environment, booting from large disks, and cryptographic verification of the OS to be booted, among other things.
Microsoft released Windows 8 in October 2012 and required that all systems sold at retail with Windows 8 preinstalled include UEFI and its support for Secure Boot. That really accelerated the presence of UEFI on non-server platforms.
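You can check which firmware interface your own system booted through. When the kernel is started via UEFI, it exposes firmware variables and tables under /sys/firmware/efi; on a legacy BIOS boot that directory never appears. A quick check:

```shell
# The kernel creates /sys/firmware/efi only when it was
# booted through UEFI firmware.
if [ -d /sys/firmware/efi ]; then
    echo "Booted via UEFI"
else
    echo "Booted via legacy BIOS (or UEFI in compatibility mode)"
fi
```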
Next week I will get into UEFI in more detail. A lot of people really haven’t heard much about it, and among those who have, there is a lot of misunderstanding and wrong information out there, leading to fear that UEFI is some anti-Linux project. It isn’t; check back next week for the details. To move through the rest of this overview:
The IBM PC design shows its early 1980s vintage in other ways. The IBM MBR partition table scheme is very limited. Yes, it can only handle four primary partitions per disk (one of which can be an extended partition holding logical partitions), but that isn’t usually much of a concern. You optimize system performance by limiting activity per physical disk.
A much bigger problem these days is its inability to handle partitions larger than 2 TB, the ceiling imposed by its 32-bit sector addresses and traditional 512-byte sectors. I just looked at a “big box” store’s on-line catalog: they still sell 500 GB internal disks for desktop computers, but 1 TB is the main entry point and their shelves are mostly filled with 2-6 TB drives.
So, the IBM MBR partition table is going away in favor of the GPT or GUID Partition Table.
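The 2 TB limit follows directly from the MBR format: it records a partition’s starting sector and length as 32-bit values, and disks traditionally present 512-byte logical sectors. A quick sanity check of the arithmetic:

```shell
# Maximum bytes addressable by a 32-bit sector count
# with 512-byte sectors: 2^32 * 512 = 2^41 bytes.
max_bytes=$(( (1 << 32) * 512 ))
echo "$(( max_bytes / (1024 * 1024 * 1024 * 1024) )) TiB"   # prints "2 TiB"
```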
The days of LILO are long gone except in some embedded systems. GRUB has been the standard boot loader for Intel/AMD platforms for some time now.
But there is a huge change from “legacy GRUB” to GRUB 2. With added capability comes much more complex configuration.
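The shape of that change shows up in day-to-day administration. With GRUB 2 you normally do not edit the boot menu file directly; you set options in /etc/default/grub and then regenerate the real configuration with grub-mkconfig (or grub2-mkconfig, or update-grub, depending on the distribution). A sketch of that defaults file, with illustrative values:

```
# /etc/default/grub -- settings read by the grub-mkconfig tool,
# which generates the real boot menu (grub.cfg).
# Edit here and regenerate; do not hand-edit grub.cfg itself.
GRUB_TIMEOUT=5
GRUB_DEFAULT=0
GRUB_CMDLINE_LINUX="quiet"
```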
There isn’t a yes/no choice here; it’s a matter of degree. The boot loader will find, load, and start the monolithic core of the kernel. Once that kernel has found the file system holding
/lib/modules, it can load further modules as needed. But there is a design decision associated with configuring a kernel build — how much to put into the monolithic kernel versus building loadable modules.
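That design decision is recorded in the kernel build configuration (the .config file): “y” compiles a feature into the monolithic kernel, “m” builds it as a loadable module. An illustrative fragment (the exact option set depends on your kernel version and needs):

```
# Built into the monolithic kernel -- needed before
# /lib/modules is even reachable, e.g. the root file system driver:
CONFIG_EXT4_FS=y
# Built as loadable modules -- loaded on demand after boot:
CONFIG_USB_STORAGE=m
CONFIG_BTRFS_FS=m
```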
A more recent development has been the move from a script to a binary to handle the very early tasks after the kernel has started but before it has mounted the root file system.
Here is where things get really different and really complicated…
The short version is that the kernel detects the hardware and mounts at least the basic file systems, and then it starts a program named
init to finish starting the needed services in an appropriate order to bring the system to its target run state. This
init program keeps running in order to handle any requested future state changes (such as shutting down cleanly), and possibly to monitor and automatically re-start crashed services.
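Whichever implementation is in use, init is the first user-space process and therefore always runs as process ID 1. On any Linux system you can ask the kernel what is running as PID 1:

```shell
# /proc/1/comm holds the command name of process ID 1, the init
# process: typically "init" on SysV-style systems, "systemd" under systemd.
cat /proc/1/comm
```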
In a little more detail,
init has gone through a long evolution and gained more capability and complexity along the way. The latest development is something called
systemd, which is an enormous change that also affects logging and other subsystems.
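To give a feel for the difference: instead of running an ordered sequence of shell scripts, systemd starts services from declarative unit files with explicit dependencies, and its journal takes over much of what classic syslog did. A minimal, hypothetical unit file sketch (the service name and path are made up for illustration):

```ini
# /etc/systemd/system/example.service -- hypothetical unit
[Unit]
Description=Example service
After=network.target

[Service]
ExecStart=/usr/local/bin/example-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

You would enable and start such a unit with `systemctl enable --now example.service`, and read its logs with `journalctl -u example.service`.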
The appearance of
systemd brings this overview to its end. Next week we’ll dive into the firmware details.