Thursday, August 28, 2008

What Is System Administration?

By Eric Goebelbecker

System Administration is planning, installing, and maintaining computer systems. If that seems like a very general answer, it's because "What is System Administration?" is a broad question.

In this chapter we'll describe what is expected from a System Administrator, and how she might approach her responsibilities.

Help Wanted
Administer Sun and IBM UNIX systems and control Internet access. Assist with some database administration (Oracle). Administer Sun server and network: system configuration, user id/creation/maintenance, security administration, system backup, and ensuring all applications are running on the system properly. Administer Internet access and services: internal/external security, user identification creation and maintenance, and Web server maintenance.

This is a typical "Help Wanted" advertisement for a UNIX System Administration position. From this description you might guess that a System Administrator has to install and configure the operating system, add new users, back up the system(s), keep the systems secure, and make sure they stay running. Setting up Internet access and keeping it running is part of the job too.

Unfortunately, you'd only be considering about half of the job. Being an expert on installing, running, and maintaining all of the major UNIX variants isn't enough to be a truly good system administrator. There is a significant nontechnical component to being a system administrator, especially in terms of planning, organizational, and people skills.

As computers become more and more pervasive in business, system administration becomes a mission-critical position in more and more organizations. The administrator has to understand the systems that he is responsible for, the people who use them, and the nature of the business that they are used for. A key skill in administration is planning, because at the rate that systems are being created, overhauled, and expanded, trying to improvise and design a network "on the fly" just doesn't work.

Companies are moving more and more processes not just to computers, but to point-of-sale systems, such as Web commerce and sophisticated in-store systems, such as electronic catalogs and cash registers that are directly connected to both inventory and credit systems. Companies that may have moved their inventory control to computers five years ago are scrambling to get order entry computerized and on the Web, while companies that haven't automated their inventory and ordering systems yet are scrambling to do so in order to remain competitive. E-mail is now regarded as just as important as faxes and telephones, while every part of customer service that can be manned by a computer and fax retrieval system already is. These companies are almost completely dependent on their computers, and their system administrators need to understand a lot more than how to partition a disk drive, add memory, and install Adobe Photoshop.

This chapter introduces some of the basic technical and organizational concepts that a system administrator needs to know in order to perform his job well. It also covers a few key system administration tools that are already found with most UNIX variants or are included on the UNIX Unleashed CD that accompanied this book.

The chapter is divided into the following sections:

  • Technical Concepts for System Administrators--in this section we introduce some of UNIX's important characteristics and how they differ from operating systems like Windows and Macintosh.

  • UNIX is Heterogeneous--this section describes UNIX's diverse and sometimes self-contradictory user interfaces and how those interfaces ended up that way.

  • System Administration Tasks--this part of the chapter is where basic administration tasks and responsibilities are introduced.

  • Administration Resources--both the operating system itself and the Internet provide a wide variety of information for administrators. This section provides a few pointers to these resources.

  • Tools of the Trade--UNIX provides users and administrators with an amazing set of tools. We briefly cover a few of them in the end of this chapter.

TIP: No one, regardless of how long they've "been in the business," can remember all of the configuration options, command line switches, oddities, and outright bugs in the UNIX tool chest. Experienced users soon learn the value of UNIX's online documentation, the manual pages. Before you use any tool, read the man page(s) for it; many of them include useful examples. If you install a utility from the CD, be sure you read the installation instructions that accompany it and install the man page too. Later in this chapter we'll cover some advanced features found in the manual page system.

Technical Concepts for New System Administrators

UNIX differs from Windows and Macintosh at a fundamental level. UNIX was originally intended for multiple users running multiple simultaneous programs. (At least it was by the time it was distributed outside of Bell Labs.) The phenomenon of a user having a low cost personal workstation, running Linux or any other variant of UNIX is actually a recent development in terms of the operating system's history. The fact that UNIX is designed for use by more than a single user is reflected throughout the operating system's file systems, security features, and programming model.

Networking is not an afterthought or a recent development for UNIX, the way it seems to be for Windows and Macintosh. Support for sharing files, moving from workstation to workstation, and running applications on remote computers that are controlled and viewed on a local workstation is not only intuitive and natural on UNIX; it was also more powerful and stable on past versions of UNIX than it is on the latest versions of Windows and Windows NT.

Multiple Users and Multiple Accounts

The DOS/Windows and Macintosh environments are "computer centric" in terms of security and customization. The ownership of files and processes (programs that are currently running) is more or less governed by the computer where they are located as opposed to the notion of a user id or session. If a person can physically get to a computer, he has access to all of the files and programs on it. If a truly malicious person wants access to the contents of a PC, even a password-protected screen saver can be overcome by restarting the system, because the files and programs belong to the PC, not a user, and the computer will boot ready for use. (Even the login prompt on Windows 95 can be bypassed with the Cancel button, bringing the user interface up with no network resources connected.)

Add-ons are available for DOS and Macintosh systems that enable users to identify themselves and save files in protected areas. But these are applications, not a part of the operating system, and come with their own set of problems, limitations, and rules. The fact is, as anyone who has ever had to share a computer with a family member or coworker will agree, personal computer operating systems are single user and geared toward supporting one user and one set of configuration options.

UNIX systems are multi-user. Users must log in to the system. Each user has his own area for saving files, the home directory, and each file has properties, its mode, that determine who can and cannot access it. All running programs are associated with a user, and, similar to the way files have access control, programs can only be started or stopped by certain users. Unless a user can log in as the super-user, she cannot access another user's files unless the owner gives her permission through one of several direct or indirect means. Only the super-user can reboot the computer (without the power switch) or stop another user's processes. Even if a system is rebooted, all of the security features remain in effect, so restarting is not a valid means of subverting security.
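For instance, a user can see and control a file's mode and owner with ordinary commands (the file name here is hypothetical, chosen just for illustration):

```shell
# Create a file and restrict it so that only its owner can read or
# write it; no other ordinary user on the system can open it.
touch project_notes
chmod 600 project_notes

# In the long listing, the first column is the mode (-rw-------)
# and the third column is the owning user.
ls -l project_notes
```

A mode of 600 grants read and write to the owner and nothing to the group or to other users; only the super-user can bypass it.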

Network Centricity

Networking has become an integral part of UNIX. The ability to share files, support network logins, share network configuration information, and run applications across a network is included in all of the major UNIX distributions. More importantly, it is a natural extension of the base operating system, not an application designed by an individual vendor, with that vendor's own idea of networking and administration, that has to be purchased separately.

When configured to allow it, anything that can be done at the console (main keyboard and monitor) of a UNIX system can be done at another system via a network connection. (We'll refer to these other systems as remote nodes or remote systems.) Actually many server systems, such as Web servers and file servers, have consoles with very limited capabilities (such as a text-only terminal), and are deliberately designed with the idea of doing as much administration as possible from remote nodes outside the data center.

Two of the mechanisms that provide these capabilities are remote (or pseudo) terminals and remote shells. Remote terminals emulate an actual terminal session, just as if the user were logged into a terminal connected to the system via a serial line, where the network is the line and the application (usually telnet) is the terminal. (Remote terminals are usually referred to as telnet sessions, since telnet is by far the most common application.) This is similar, in theory at least, to dialing into a system and using an application like Procomm.

Remote shells (or remote logins) are sessions on remote computers that are centered around the execution of a shell (or shell command) instead of a terminal on a remote system. A remote shell executes a command on a remote host, while a remote login runs a shell; both usually appear to be running on the local system. Remote shells are frequently used by administrators in shell scripts, allowing them to execute commands on several systems and consolidate the results in one place, such as collecting disk utilization statistics or automating backups.
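A sketch of that kind of consolidation might look like this (the host names are hypothetical, and each host must permit the administrator a noninteractive rsh login):

```shell
# Gather disk utilization from several remote hosts into one
# local report, one section per host.
for host in bilbo frodo thorin
do
    echo "==== $host ===="
    rsh $host df -k
done > disk_report
```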

The differences between a remote shell and a telnet session are very important. Telnet sessions use terminal emulation and are run as separate applications. So the results cannot be shared with other applications unless a mechanism such as cut and paste is used, while remote shells allow the results of commands to be interspersed with local commands.

So a directory on one workstation could be backed up to a tape drive on another via a simple shell command:

tar cvf - tgt_dir | rsh -n bilbo dd of=/dev/rmt/0

The tar command creates an archive of tgt_dir and sends it to standard output. This stream of data is redirected to the rsh command. rsh connects to the host bilbo, executes dd, and passes the output of tar to it. The dd command just happens to know how to save the archive to the tape drive on /dev/rmt/0. (This may seem complicated. By the end of this chapter, it will make perfect sense.) So two programs on two different computers are linked as one, without any special configuration requirements or extra software.

With remote logins and shells being so powerful, why would anyone use telnet? One reason is that telnet is much more robust than remote logins over slow links, such as Internet and WAN connections between different sites. The other is security. We'll cover some of the risks posed by remote shells later.

X-windows is another example of UNIX's networking prowess. The X environment is a graphical user interface, much like the Windows or Macintosh environments. A key difference is that it is segmented into a client (the application) and a server (the display). The server is a process that manages the keyboard, mouse, and screen, and accepts connections from applications over a socket (a network connection). Because a network connection is the method of communication between the two programs, the application can be running on a different workstation than the display that is controlling it. (When applications are running on the same workstation as the server, the network connection is frequently bypassed in favor of faster methods. These enhancements are proprietary extensions and differ widely from vendor to vendor.)

The remote functionality of X-windows is a very powerful advantage, especially when considered in terms of large database, Web, or file servers that tend to be installed in data centers or equipment rooms, but have graphical management and configuration applications. Another possibility is installing slower, less expensive workstations on desktops while still being able to run more demanding applications on shared, more powerful computers.

File sharing is another extension that has become a natural and integral part of UNIX computing. All of the major UNIX variants support NFS (Network File System) and can share files seamlessly between themselves and other UNIX versions.

Because NFS file systems are treated like any other type of disk, the same way SCSI, IDE, and floppy disks are implemented, network drives fully support user accounts, file permissions, and all of the mechanisms UNIX already uses for adding, removing, and managing other types of files and file systems, even when shared between different UNIX variants.

This means that two UNIX systems, regardless of what version of UNIX or on what type of computer they happen to be, can share files, maybe even from a common file server running on a third system type. Only the user IDs have to be coordinated in order for the files to be securely shared. No additional software or special modifications are required.

Any UNIX system can export or share file systems. This is one of the ways that the difference between servers and clients is blurred with UNIX systems; any system can provide or consume file system resources. When file systems are shared, the workstation can specify which systems can and cannot use it, and also whether or not users may write to it.

Clients specify where the file system will be mounted in their directory tree. For example, a system can mount the /export/applications directory from another system to the /mnt/apps directory.
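In commands, that mount might look like this (the server name is illustrative, the exact NFS mount syntax varies by UNIX variant, and the commands must be run as root):

```shell
# On the client: create the local mount point, then attach the
# server's exported directory to it over NFS.
mkdir -p /mnt/apps
mount gandalf:/export/applications /mnt/apps          # common syntax
# mount -F nfs gandalf:/export/applications /mnt/apps # Solaris
# mount -t nfs gandalf:/export/applications /mnt/apps # Linux
```

After the mount, files under /export/applications on the server appear under /mnt/apps on the client, subject to the usual UNIX permissions.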

UNIX Networking: Sharing Files and Information

Consider the following practical applications to illustrate how UNIX networking works, and how some common tools and planning can make administration very easy.

A system that is frequently used to administer NFS is the automounter. This program (or set of programs) allows administrators to specify file systems to be mounted (attached) and unmounted (detached) as needed, based on when clients refer to them. This process happens completely in the background, without any intervention after the automount program(s) are configured properly.

Linux systems use a publicly available application called amd instead of automount. Although the configuration files differ slightly, the functionality amd provides Linux is the same. amd is provided with all of the major Linux distributions. As always, see the manual page for specifics.

automount is configured with maps (a fancy term used for referring to configuration files), where each map specifies a top level directory. So, if the automount system is to create five directories, the master map, which is contained in the file /etc/auto_master might look like this:

/share    auto_share
/home     auto_home
/gnu      auto_gnu
/apps     auto_apps
/opt      auto_opt

This file provides automount with the names of the directories it will create and, for each one, the map file that describes that directory's contents.

For example, let's look at an organization that decided that all user directories belong under the directory /home. (Therefore, the user dan would find his files in /home/dan.) The first thirty home directories were located on the file server gandalf in the /export/home directory.

When the next user comes along, the administrator realizes that gandalf does not have enough drive space for a new user, and, as a matter of fact, moving a couple of users over to the other file server, balrog, which has been used for applications until now, would probably be a good idea.

automount simplifies this distribution by allowing us to create a virtual /home directory on every workstation that runs the automount daemon. An excerpt from the home directory map, /etc/auto_home, as named in /etc/auto_master above, would look like this:

eric    gandalf:/export/home/eric
dan     gandalf:/export/home/dan
mike    balrog:/export/home/michael

In this map, eric, dan, and mike are referred to as directory keys. The directories are called values. When a program refers to /home/eric for the first time, the system recognizes the key eric and arranges for /export/home/eric on gandalf to be mounted at /home/eric. Because automount does a simple key/value replacement, Eric's and Dan's directories can be on gandalf while Mike's is on balrog. Also note that the directory names on balrog and in the automounter virtual directory do not have to agree. After a period of inactivity, automount unmounts the directory.

automount alleviates the tedious task of mounting all of the directories that a user or group of users will need in advance. Permanently mounting all of those file systems leads to unnecessary system overhead and requires excessive human intervention. This system also makes it easier for users to travel from workstation to workstation, since they can always expect to find important application and data files in the same place.

Sun Microsystems defined autofs as a new file system type in Solaris 2.x, and created a multi-threaded automounter system. This makes automounted file systems perform extraordinarily well under Solaris, since the kernel is now aware that automounted file systems exist, and the use of threads prevents bottlenecks in the automount program.


Hewlett Packard's HP-UX and IBM's AIX provide a version of automount. Since automount uses NFS, it is compatible across different UNIX variants.

Newer versions of automount (and amd) also enable us to specify more than one file server for a directory, in order to provide a backup for when one file server is unavailable or overloaded. This system, with a little planning and thought, can simplify a distributed network and even make it a little more reliable.

The maps, however, still must be kept up to date and distributed to each workstation. This could be done using ftp, or even nfs, but on a large network this can become tedious. There are two very elegant solutions that are examples of how experienced system administrators tend to solve these problems.

The first is rdist, a tool for maintaining identical copies of files among remote hosts. It can accept filenames from the command line or use a configuration file (usually referred to as a distfile) that lists which files to copy to which hosts.

In order to distribute a set of automounter maps we might use a distfile like this simple example:

HOSTS = ( bilbo frodo thorin snowball )
FILES = ( /etc/auto_master /etc/auto_home /etc/auto_apps )

${FILES} -> ${HOSTS}
        install;

The distfile is a very powerful mechanism. In this example, we take advantage of the ability to create variables. By specifying the hosts to be updated and the files to send as lists, we can easily add to them. After defining the FILES and HOSTS variables, a dependency line (${FILES} -> ${HOSTS}) indicates that the listed files should be kept up to date on the listed hosts. install (the one-line command after the dependency line) is one of many rdist commands; it indicates that rdist should keep the remote files up to date with the local copies. See the man page for more information on the numerous directives and distfile options.

To use this we could add the distfile to the /etc directory on the host where the master set of automounter maps are created. Whenever we make a change, we would execute:

rdist -f /etc/auto_distfile

rdist can be used for much more complicated tasks, such as synchronizing user directories between redundant file servers or distributing new versions of software packages where nfs is not being used.

The disadvantage of rdist is that it has the same security requirement as any other remote shell command: the user executing the rdist command must be able to attach to the remote host and execute commands without a password. (Refer to the following "Network Security Issues" section.)

If the security issues associated with rdist are unacceptable, how could we conveniently distribute these maps?

Sun Microsystems' Network Information Service (NIS) (also frequently called yp, or Yellow Pages) may be an acceptable alternative. NIS provides common configuration information, such as IP addresses, service port numbers, and automount maps, to hosts on a network from a server or set of servers.

The NIS server(s) have master copies of the files that are made available. Instead of copying the files to the clients, they are distributed as needed across the network. So the clients retrieve the information from the server instead of reading files.

NIS is used for hosts, services, passwd, automount, and a few other configuration files. Configuring a network for NIS is beyond the scope of this chapter, and is not the same for each UNIX variant, since many vendors made their own "improvements" on Sun's system. Thoroughly consult the documentation for the platform(s) that you are using before you try to implement it, and make sure you understand any changes that the vendors have made. NIS does have some advantages and drawbacks that we can cover.

The main advantage to NIS is convenience. Changes can be made to a map and made available to clients almost instantly, usually by executing a single command after the file is edited. Being able to keep all of this information in one place (or a few places if secondary servers are used) is obviously convenient, especially since synchronizing user IDs, IP addresses, and service ports is crucial to keeping a network working well.
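On a traditional NIS master that single command is usually make, run in the NIS source directory (the path below is the conventional Sun location; check your variant's documentation before relying on it):

```shell
# After editing a source file (for example, auto_home) on the NIS
# master, rebuild the maps and push them to the slave servers.
cd /var/yp
make
```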

There are, however, some significant disadvantages.

A workstation that uses NIS to resolve IP addresses cannot use DNS for Internet addresses without some important modifications. Sun's NIS+ solves this problem, but it is not yet widely supported by other versions of UNIX and is considered to be a proprietary and very hard to use system. (Which is why it is not covered here.)

If the passwd file is distributed by NIS, the encrypted password field can be read by anyone who connects to NIS. Attackers can use this to subvert security, because the encrypted password field is susceptible to a brute force attack with publicly available software.

NIS has no mechanism for controlling who connects to the server and reads the maps. This adds to the security issues, and is one of the reasons you won't find any NIS servers on the Internet. (NIS+ also addresses this issue with a complex authentication system.)

If a client cannot contact a server to handle a request, the request tends to wait until a server is found instead of trying to find a way to continue.

Obviously, NIS comes with its own set of issues, and is a system that requires considerable examination before being selected as an administration tool.

Network Security Issues

We've already mentioned a few security issues regarding UNIX and networking. These issues are very serious for administrators, and frequently have a huge impact on how a network is configured.

Earlier we mentioned that telnet is still used extensively in favor of remote shells. Remote shells allow noninteractive logins, such as the examples above using tar, dd, and rdist. While this is a very convenient feature, it's also a very dangerous one when not carefully administered.

The automatic login feature is implemented with a pair of text files. One of them, /etc/hosts.equiv, controls users on the system level. The other, .rhosts, controls access for individual users. Each file lists, by name, the systems that a user can log in from without supplying a password, provided the user ID exists on the target host. All of the major UNIX variants treat these files the same way.

hosts.equiv provides this access for an entire workstation, except for the root account. Obviously this file should be used very carefully, if at all. The .rhosts file provides access for individual users and is located in the user's home directory. It is consulted instead of hosts.equiv. If the root account has one of these files, then the root account from the listed hosts may enter the workstation without any authentication at all, since the .rhosts file effectively supersedes the hosts.equiv file.
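The file format itself is simple: each line names a trusted host, optionally followed by a user name on that host. A hypothetical ~eric/.rhosts might read:

```
bilbo
frodo eric
```

The first line lets the user eric on bilbo log in as eric here without a password; the second explicitly grants the same trust to the account eric on frodo.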

So the convenience of remote logins and commands comes with a high price. If a set of workstations were configured to allow root to travel back and forth without authentication, then a malicious or, maybe even worse, ill-informed user only needs to compromise one of them in order to wreak havoc on them all.

Some possible precautions are:

  • Use root as little as possible. A simple enough general rule is that root should never be used for remote operations.

  • Avoid using rlogin where telnet will do.

  • If you must use rlogin, try to get by without using .rhosts or hosts.equiv.

  • If you need to use noninteractive logins for operations such as backups or information collection, create a special account for it that only has access to the files and devices necessary for the job.

  • Remote logins are just about out of the question on any systems that are directly connected to the Internet. (Please note that a home system that is dialing into the Internet through an ISP is not truly directly connected.)

A little bit of explanation regarding the first rule is in order. The root account should be used as little as possible in day-to-day operations. Many UNIX neophytes feel that root access is necessary to accomplish anything worthwhile, when that's not true at all. There is no reason for common operations, such as performing backups, scanning logs, or running network services to be done as root. (Other than the services that use ports numbered less than 1024. For historical reasons UNIX only allows processes run as root to monitor these ports.) The more situations where root is used, the more likely it is that something unexpected and quite possibly disastrous will occur.

Even beyond remote logins, UNIX offers a lot of network services. File sharing, e-mail, X-windows, and information services, such as DNS and NIS, comprise only part of the flexibility and extensibility offered by networking. However, many of these services represent risks that are not always necessary and are sometimes unacceptable.

The right way to handle these services is to evaluate which ones are needed and enable only them. Many network services are managed by inetd. This daemon process listens for requests for network services and executes the right program in order to service them.

For example, the ftp service is administered by inetd. When a request for the ftp service (service port number 21) is received, inetd consults its configuration information and executes ftpd, whose input and output streams are connected to the requester.

Network services are identified by ports. Common services such as ftp and telnet have well-known ports, numbers that all potential clients need to know. inetd binds and listens to these ports based on its configuration data, contained in the file inetd.conf.
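The name-to-number mapping itself lives in /etc/services (or the corresponding NIS map); a few representative entries look like this:

```
ftp     21/tcp
telnet  23/tcp
login   513/tcp
shell   514/tcp    cmd
```

For each service named in its configuration, inetd looks up the port number here and listens on it.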

A typical configuration file looks like this:

# Configuration file for inetd(1M).  See inetd.conf(4).
# To re-configure the running inetd process, edit this file, then
# send the inetd process a SIGHUP.
# Syntax for socket-based Internet services:
ftp stream tcp nowait root /usr/sbin/in.ftpd in.ftpd
telnet stream tcp nowait root /usr/sbin/in.telnetd in.telnetd
# Tnamed serves the obsolete IEN-116 name server protocol.
name dgram udp wait root /usr/sbin/in.tnamed in.tnamed
# Shell, login, exec, comsat and talk are BSD protocols.
shell stream tcp nowait root /usr/sbin/in.rshd in.rshd
login stream tcp nowait root /usr/sbin/in.rlogind in.rlogind
exec stream tcp nowait root /usr/sbin/in.rexecd in.rexecd
comsat dgram udp wait root /usr/sbin/in.comsat in.comsat
talk dgram udp wait root /usr/sbin/in.talkd in.talkd

As the comment states, information for inetd is available on two different manual pages.

Each configuration entry states the name of the service port, which is resolved by using the /etc/services file (or NIS map), some more information about the connection, the user name that the program should be run as, and, finally, the program to run. The two most important aspects of this file, from a security standpoint, are which user the services are run as and which services are run at all. (Details on network connection types are covered in Chapter 20, "Networking.")

Each service name corresponds to a number. Ports that are numbered less than 1024, which ftp, telnet, and login all use, can only be attached to as root, so inetd itself does have to be run as root, but it gives us the option of running the individual programs as other users. The reason for this is simple: If a program that is running as root is somehow compromised, the attacker will have root privileges. For example, if a network service that is running as root has a "back door" facility that allows users to modify files, an attacker could theoretically use the program to read, copy or modify any file on the host under attack.

Some of the most serious and effective Internet security attacks exploited undocumented features and bugs in network services that were running as root. Therefore the best protection against the next attack is to avoid the service completely, or at least provide attackers with as little power as possible when they do find a weakness to take advantage of.

Most UNIX variants come from the vendor running unneeded and, in some cases, undesirable services, such as rexecd, which is used for the remote execution of programs, frequently with no authentication. As you can see above, this service comes from the vendor configured to run as root. Many organizations configure key systems to deny all network services except the one service that they are built to provide, such as Internet Web and ftp servers.
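Disabling such a service is straightforward: comment out its line in inetd.conf and signal inetd to reread the file. A rough sketch follows (the PID file location varies by variant, and some systems require finding the PID with ps instead):

```shell
# 1. In /etc/inetd.conf, comment out the unwanted service:
#
#    #exec stream tcp nowait root /usr/sbin/in.rexecd in.rexecd
#
# 2. Tell the running inetd to re-read its configuration:
kill -HUP `cat /var/run/inetd.pid`
```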

Well-written network software also takes these issues into consideration. For example, the Apache Web server is usually configured to listen to the http port, which is number 80 and therefore can only be bound to by root. Instead of handling client requests as root and posing a significant security risk, Apache accepts network connections as root but handles actual requests as nobody, a user with virtually no rights except to read Web pages. It does this by running multiple copies of itself as nobody and using interprocess communication to dispatch user requests to these unprivileged processes.

Sharing files on the network poses another set of security issues. Files should be shared carefully, with close attention to not only who can write to them, but who can read them, because e-mail and other forms of electronic communication have become more and more common and can contain important business information.

See Chapters 20, "Networking," and 21, "System Accounting," for more detailed information and instructions on how to properly secure your systems.

UNIX Is Heterogeneous

UNIX is frequently criticized for a lack of consistency between versions, vendors, and even applications. UNIX is not the product of any single corporation or group, and this does have a significant impact on its personality. Linux is probably the ultimate expression of UNIX's collective identity. After Linus Torvalds created the Linux kernel and announced it to the Internet, people from all over the world began to contribute to what has come to be called the Linux operating system. While there is a core group of a few developers who were key in its development, they do not all work for the same company or even live in the same country. Obviously, Linux reflects a few different views on how computers should work. UNIX does too.

Administration Tools

UNIX vendors all offer their own GUI administrative tools that are generally useful, provided that you do not have to do something that the vendor did not anticipate. These tools vary widely in how they work and how they are implemented.

IBM's UNIX operating system, AIX, comes with a sophisticated tool called SMIT. Administrators can use SMIT to configure the system, add and remove users, and upgrade software, among other things. SMIT is widely considered the best and most mature of these administration tools because it can be customized and run in either X-windows or a terminal session. It also lets the user view the command line equivalent of each task before it is performed. The downside (in at least some system administrators' opinions) is that using SMIT is just about mandatory for some basic administration tasks.

Hewlett Packard's HP/UX has a similar tool called SAM, which provides much of the functionality offered by SMIT but is not quite as powerful or sophisticated. Its use is not required to administer the system, however.

Sun's Solaris does not come with a tool comparable to SMIT or SAM. However, Sun's individual tools for upgrading and installing software and for administering NIS+ are functional and intuitive. Unfortunately, the tool supplied with Solaris for administering users, printers, and NIS/NIS+ requires X-windows. It is not, however, required to administer the system at all.

Linux distributions vary widely when it comes to administrative tools. RedHat offers a powerful desktop environment for adding software, administering users, configuring printers, and other everyday administrative tasks. Unlike the commercial tools, it's based on scripts, not binary code, and therefore can be examined and customized by administrators. The Slackware distribution comes with management tools for upgrading and adding software also.

In addition to the different administrative tools and environments provided by the different UNIX vendors, each vendor has felt obligated to provide its own improvements to UNIX over the years. Fortunately, the threat of Windows NT and its homogeneous look and feel has made the UNIX vendors sensitive to these differences, and the tide has turned toward standardization.

UNIX has historically been divided into two major variants, AT&T's UNIX System V and The University of California's BSD UNIX. Most of the major vendors are now moving toward a System V system, but many BSD extensions will always remain.

Regardless of what the vendors do (and claim to do), UNIX's heterogeneous nature is a fact of life and probably one of its most important strengths, since that nature is responsible for giving us some of the Internet's most important tools, such as perl, e-mail, the Web, and Usenet News. It also gives us a lot of choices in how to administer our systems. Few problems in UNIX have only one answer.

Graphical Interfaces

Macintosh and Microsoft Windows benefit from a user interface that is designed and implemented by a single vendor. For the most part, a set of applications running on one of these computers shares not only a common look and feel but the same keyboard shortcuts, menus, and mouse movements. X-windows does not have this advantage.

X-windows is a collection of applications, not an integral part of the operating system. Because it is structured this way, it differs greatly from the windowing systems on a Macintosh or Microsoft Windows. As a result, the relationship between X applications and the operating system tends to be a bit more loose than on those platforms.

One of the things that gives an X desktop its "personality" is the window manager. This is the application that provides each window with a border and allows windows to be moved and overlapped. However, it is just an application, not part of the X-window system. This separation enables users to select the manager they want, just like a shell. Individual users on the same system can also use different window managers. The differences between window managers are substantial, with a significant impact on the look of the desktop, the way the mouse acts, and, sometimes, which applications can be run.

The OpenLook window manager, which is distributed by Sun and also accompanies many Linux distributions, has many proprietary extensions and is very lightweight and fast. It bears little resemblance, however, to Macintosh or Microsoft Windows in look or feel. (For one thing, it makes heavy use of the right mouse button, which Microsoft Windows only recently started to do and which does not even exist on a Macintosh.) The Sun OpenWindows package comes with OpenLook and a set of tools for reading mail, managing files, and a few other common tasks.

Motif, which has many similarities with Microsoft Windows, has become more popular in the past few years, with most of the major vendors having agreed to standardize on the Common Desktop Environment (CDE), which is based largely on Motif. The CDE also has additional features, such as a graphical file manager, a toolbar, and support for virtual screens. The CDE also comes with a set of user applications.

Window managers also come with programming libraries for creating menus and other programming tasks. As a result, it is possible to create an application that runs poorly under some window managers or requires libraries that a UNIX version does not have. This is a common problem for Motif applications on Linux, because the libraries are not free. For this reason, many applications are available in a non-Motif version or with the libraries statically linked, which means they are built into the application. (This is generally considered undesirable because it makes the application larger and requires more memory and more disk space.)

Windows 3.x has the win.ini and .ini file scheme; Windows 95 and Windows NT have the system registry. Both serve as central repositories for application configuration information. X-windows has its own standard interface for configuration information, called X resources. X resources are more flexible in many ways than the Windows mechanisms.

X resources support wildcards with parameters, which allows administrators (and users) to standardize behavior between diverse applications. Most X-windows application developers recognize this ability and tend to use standard naming schemes for configuration parameters, which makes the use of wildcards even more convenient.

Users can customize applications to reflect their personal preferences without affecting others by maintaining personal resource files, unlike single-user systems that maintain one set of parameters for the entire system. However, administrators can still set default parameters, so users who do not wish to customize an application still get a working baseline.
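As an illustration, a personal resource file can use a wildcard to set a value for every application and then override it for one program. (The resource names and fonts below are hypothetical examples, not values any particular site uses.)

```
! Hypothetical ~/.Xresources entries
! A wildcard sets the default font for every X application...
*font:       -misc-fixed-medium-r-normal--13-*-*-*-*-*-*-*
! ...and a more specific resource overrides it for xterm alone
xterm*font:  -misc-fixed-medium-r-normal--15-*-*-*-*-*-*-*
```

Loading the file with xrdb merges these values into the X server, where applications read them at startup.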

Command Line Interfaces

The command line is where UNIX's long history and diversity really show. One would think that every utility was written by a different person for a different purpose, and one might be right.

There is little cohesion between command line tools as far as the interface goes. The directory list command ls covers just about the whole alphabet when it comes to command line options. The find command, however, uses abbreviations for options instead of letters and almost looks like a new language. Most commands require a "-" to delineate different arguments; others do not. Several commands use subtly different dialects of regular expressions (wildcards).
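A quick illustration of the two dialects (the paths and patterns here are arbitrary):

```shell
# ls: single-letter options introduced by a dash
ls -l /tmp

# find: word-like options that form a small language of their own --
# "name matches *.log and modified more than 7 days ago"
find /tmp -name '*.log' -mtime +7 -print
```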

There are also frequently two versions of the same command: one from the System V world and one from the BSD world. My favorite example of this is the mail command. The mail command is a simple, text-based, virtually feature-free mail reading and sending tool. There are two versions of this command though, so there must have been something worth disagreeing about when it came time to create the most basic of mail readers.

Both mail commands can be used to send e-mail from the command line and are invaluable for unattended scripting jobs since they can alert administrators of problems via e-mail. The BSD version allows you to specify a subject on the command line, like this:

mail -s "Backup Results" eric < backup.txt

where the -s is an option for the mail subject. The System V version of mail (which is the default for Solaris) quietly ignores the -s option. This has led to my actually troubleshooting why my mail had no subject on at least three different occasions.

Another endearing difference between the two major variants is the ps command. Both commands provide the same information about running programs. The command line arguments are completely different.
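Both invocations below list every process on the system, assuming a ps that understands the relevant dialect:

```shell
# System V dialect (Solaris, HP/UX, AIX): -e for every process, -f for a full listing
ps -ef

# BSD dialect (SunOS 4.1.x): a for all users, u for user-oriented format,
# x for processes with no controlling terminal
ps aux
```

Linux's ps accepts both forms, which is a small mercy when moving between systems.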

The best way to avoid problems with these pointless and gratuitous differences is to take full advantage of the wealth of information contained in the man pages. (In the "Administration Resources" portion of this chapter we cover a few ways to get more information from them.) And keep these tips in mind:

  • UNIX has been around for more than 25 years. What are the chances that you have a problem that hasn't been solved by someone yet? Before you spend a lot of time and effort solving a problem, spend some time reading man pages about the problem and the tools you are using.

  • Many of the tools on a UNIX system have been around for those 25+ years. Software that lasts that long must have something going for it.

  • Experiment with the tools and try to solve the same problem using different methods every once in awhile. Sometimes you can pick up new tricks that will save you hours of time in the future.

System Administration Tasks

System administration can generally be divided into two broad categories: supporting users and supporting systems.

Supporting Users

Users are your customers. Without them the network would be a single computer, probably running a frivolous application like Doom, generating no business and creating no new sales. (Sounds awful, doesn't it?) We support users by creating their logins and providing them with the information they need to use their computers to get something done, without forcing them to become computer experts or addicts. (Like us.)

Creating User Accounts The most fundamental thing that an administrator can do for a user is create her account. UNIX accounts are contained in the /etc/passwd file with the actual encrypted password being contained in either the passwd file or the /etc/shadow file if the system implements shadow passwords.

When a user is created, the account obviously has to be added to the passwd file. (The specifics behind user accounts are covered in Chapter 17, "User Administration.") This is a very easy task that can be performed in less than a minute with any text editor. However, if the UNIX version you are using has a tool for adding users, it may not be a bad idea to just take advantage of it.
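For reference, each line of /etc/passwd holds seven colon-separated fields: login name, password, user ID, group ID, comment, home directory, and shell. The account shown here is made up:

```
eric:x:1001:100:Eric Goebelbecker:/home/eric:/bin/sh
```

On systems with shadow passwords, an x in the second field indicates that the encrypted password actually lives in /etc/shadow.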

These tools will frequently:

  • Add the user to the passwd file (and the shadow file if it is used). Since all users require a unique user ID, letting the computer assign the number can be very convenient when you are administering a large number of users.

  • Create a home directory for the user with the proper file permissions and ownership.

  • Copy generic skeleton files to the account, which give the user a basic environment to work with. These files can be created by the administrator, so the paths to shared applications and necessary environment variables can be distributed to users as they are created.

  • Register the new account with network systems, such as NIS and NFS, if the home directory needs to be accessed on other hosts.

So these tools can prevent common errors and save a bit of time. Most UNIX variants also provide a command line tool named useradd that will perform all or most of these steps. If using the GUI tool is uncomfortable or unworkable, useradd can be incorporated into a shell or perl script.
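As a sketch of that idea, a wrapper could drive useradd and the site's NIS push from one place. This version only prints the commands it would run; the skeleton directory and the NIS make step are assumptions that vary from site to site:

```shell
# Print (rather than execute) the steps for creating a new login.
create_account() {
    user=$1
    skel=/etc/skel                      # assumed skeleton directory

    # -m makes the home directory, -k seeds it from the skeleton files
    echo "useradd -m -k $skel $user"

    # Push the updated passwd map out to NIS clients (site-specific)
    echo "cd /var/yp && make passwd"
}

create_account eric
```

Dropping the echo commands would turn the dry run into the real thing, which is best done only after testing on a scratch system.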

Providing Support Providing users with documentation and support is another key system administration task and potential headache. The best system administrator is both invisible and never missed. She also realizes that expecting users to get as excited about learning to use a computer as she is would be asking too much.

A famous proverb talks about teaching a man to fish instead of buying him a McFish sandwich. (Or something like that.) The idea behind this expression can be applied to users with amazing results. However, few users will go out of their way to learn how to use their systems. How can an administrator teach his users how to help themselves without fighting a never ending battle?

All user populations are different, and there is no universal solution to user training, but here are a couple of ideas that may help.

Try to provide and gently enforce a formal method for requesting changes to systems and getting help. If users get accustomed to instant answers to questions, they will become angry when you can't drop what you're doing to help them. Try to set up an e-mail or Web-based help system. It may seem bureaucratic to force users to use these systems, but it helps prevent your day from becoming "interrupt-driven" and also provides you with a log of requests. (Have you ever forgotten to take care of a request because you were too overloaded? Give yourself an automatic to-do list.)

Provide as much documentation as you can through easy-to-use interfaces, such as Web browsers. The time you spend developing this information in user friendly formats will be paid back in phone calls and conversations that never happen, and may help you learn some more about your systems as you do it.

If you have Internet access now, I'm sure your users spend hours glued to their browsers; take advantage of that. You may also find that a lot of the information your users need is already out there, so you may be able to link your users to some of the information they need, without generating it yourself. (We'll go over some of these resources later in the "Administration Resources" portion of this chapter.)

Supporting Systems

The other half of the job is supporting your systems. Systems have to be built, backed up, upgraded, and, of course, fixed.

Adding Nodes A frequent system administration task is adding new nodes to the network. It's also one of the parts of the job that can truly benefit from some planning and insight.

Not all systems are created equal, and not all of them are used for the same thing. Spending some time understanding what your network is really used for and then applying that to systems is key in network planning. Workstations should have well defined roles and should be configured in accordance with those roles.

When systems are designed and evaluated, some of the questions an administrator can ask are:

  • Will users be able to access all or some of the systems? Do users need to access more than one system? Are there systems that users should never access?

  • What network file systems will each workstation need to access? Are there enough that automount would help?

  • What network services, such as telnet, remote logins, sharing file systems, and e-mail, do workstations need to provide? Can each service be justified?

  • What networks will workstations need to access? Are there networks that should be inaccessible from others?

These questions should help us develop a profile for each workstation. Following a profile makes workstations easier to build, maintain, and troubleshoot, not to mention making them more reliable since they tend to be less complex.

Backups Files get corrupted, lost, accidentally overwritten, or deleted. Our only protection against these situations is backups, since UNIX does not have an undelete command.

UNIX provides several backup tools, and deciding which tool(s) to use can be difficult.

Earlier in the chapter we had an example of backing up a directory, using tar and dd, to a remote tape drive. In that example, dd was simply used as a way to copy a stream of data to a tape, while tar was the command actually performing the backup.

tar (tape archive) is a commonly used backup tool.

tar -c -f /dev/rmt/0 /home/eric

The above command would back up the contents of the /home/eric directory to the first tape drive installed on a Solaris system. tar automatically traverses the directory, so all files in /home/eric and its subdirectories are archived.

The device name for tape drives differs from variant to variant. BSD derivatives tend to refer to them as /dev/rst0, /dev/rst1, and so on, where 0 is the first drive and 1 is the second. (Linux, HP/UX, and SunOS 4.1.x use this nomenclature.) System V derivatives usually use /dev/rmt0, /dev/rmt1, and so on, the same way. (AIX uses this model.) However, Solaris 2.x, which is System V based, adds a directory to the path, so it is /dev/rmt/0.

The -f option tells tar which tape drive to use, while the -c option tells it to create a new archive instead of modifying an existing one. One of tar's idiosyncrasies is that when a path is given to it as the backup specification, that path is recorded in the archive, so when it is time to restore /home/eric, that is where it will be restored to. A more flexible way to back up the directory is this:

cd /home/eric
tar -cf /dev/rmt/0 .

tar recognizes . as meaning "back up the current directory." When the archive is extracted, it will be placed in the current directory, regardless of where that is.
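The effect is easy to see with an ordinary archive file standing in for the tape drive (the /tmp paths below are arbitrary):

```shell
# Build a sample directory tree and archive it relative to "."
mkdir -p /tmp/tardemo/src /tmp/tardemo/dest
echo hello > /tmp/tardemo/src/notes.txt

cd /tmp/tardemo/src
tar -cf /tmp/tardemo/backup.tar .

# Extract somewhere else entirely -- the files land in the
# current directory, not back in /tmp/tardemo/src
cd /tmp/tardemo/dest
tar -xf /tmp/tardemo/backup.tar
```

After the extraction, notes.txt sits under /tmp/tardemo/dest, which is exactly the flexibility an absolute path in the archive would have denied us.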

cpio is another standard UNIX tool for backups. Its interface is a little more difficult than tar's, but it has several advantages.

cpio is usually used with ls or find to create archives.

find . -print | cpio -o > /dev/rst0

The find command prints the full path of every file under its current directory to standard output. cpio reads these filenames and writes an archive of them to its own standard output, which is redirected to the tape. This command is an excellent example of the UNIX way of combining commands to create a new tool.

find is a tool for finding files. They can be located by name, size, creation date, modification date, and a whole universe of other criteria too extensive to cover here. (As always, see the man page!) This example takes a tool for locating files and makes it the user interface to our backup system.

cpio is the file copying and archiving "Swiss Army Knife." In addition to streaming files in a format suitable for tape, it can:

  • Back up special files, such as device files like /dev/rst0.

  • Place data on tapes more efficiently than tar or dd.

  • Skip over bad areas on tapes or floppies when restoring, when tar would simply die. With cpio, you can at least restore part of a damaged archive.

  • Perform backups to floppies, including spreading a single archive over more than one disk. tar can put only a single archive on one disk.

  • Swap bytes during the archive or extraction in order to aid in transferring files from one architecture to another.

This example also illustrates how we can redirect standard output to a device name and expect the device driver to place it on the device.

Our example could also be stretched into a full-featured backup system. Since find allows us to pick files based on creation and modification dates, we could perform a full backup to tape once a week by telling find to name all files, and perform incremental backups to floppy every other day. The UNIX command line is a very powerful tool, especially since programs have been designed for it steadily during the past three decades.

There are other backup tools, most notably dump or ufsdump as it is named on Solaris 2.x. These tools are oriented toward backing up entire disk partitions, instead of files. They are covered in Chapter 28, "Backing Up and Restoring Your System."

Larger sites with more comprehensive backup requirements may need a commercial package. Legato's NetWorker provides an advanced backup and restore system that supports the automated and unattended backup of UNIX systems, Windows and NT workstations, Macintosh computers, and even database servers. Scheduling backups, selecting files for backup, and confirming the integrity of archives are all done automatically. Restoring files and file systems is also very simple, thanks to the GUI.

Sun bundles NetWorker with the Solaris 2.x server edition.

Just as important as picking a backup tool is designing a backup strategy.

Backing up the entire system daily is rarely acceptable, since it would take too long and a full dump is too ponderous to dig through when only one file is needed.

Here is another area where it pays to know your users and understand their business. It's also important to design the network with backup and recovery in mind.

  • Do the users have important files that they modify daily?

  • Are the users clustered into one or two directories on one system, or will systems have to be backed up across the network?

  • If only a single file has to be restored, will it have to be done quickly? Will our current backup system allow us to find one file?

  • How often and for how long are users not using the system? (It is better to back up directories when they are not in use.)

Depending on the requirements, a commercial backup tool may be a good investment. The commercial tools, such as NetWorker, do excel in allowing you to locate and restore one specific file quickly, which may be necessary in your situation.

System Load and Performance The task of monitoring system load and system performance falls on the shoulders of the system administrator. This is yet another area where planning and anticipation are superior to waiting for something to happen and reacting to it.

Daily monitoring of usage statistics is a good idea, especially on mission-critical systems and systems that users interact with regularly, such as Web servers and X displays. These statistics can be gathered with automated scripts.

Some of the things to monitor are disk usage, CPU utilization, swap, and memory. The tools used for getting this information, such as du and df for disk information and top and vmstat for the rest, are covered in the next few chapters in Part III.
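A minimal nightly collection script might look like the following. The log location is an assumption, and on some variants you would add vmstat or sar samples to the list:

```shell
# Append a dated snapshot of basic usage figures to a log file.
LOG=/tmp/sysstats.log

{
    echo "==== $(date) ===="
    uptime          # load averages for the last 1, 5, and 15 minutes
    df -P           # filesystem usage in the portable output format
} >> "$LOG"
```

Run from cron, a log like this turns "the system feels slow lately" into a trend you can actually graph.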

Administration Resources

A system administrator needs every shred of information and help he can get, and, as much as UNIX vendors would hate to admit it, the documentation that accompanies their systems is sometimes lacking. Fortunately, it's a big world out there.

The Manual Pages

The famous rejoinder RTFM (Read The Fine Manual) refers to the manual pages installed on (hopefully) every UNIX system. The man pages, as they are frequently called, contain documentation and instructions on just about every UNIX command, C function call, and data file on your system.

The man command searches for documentation based on a command or topic name. So the command

man ls

provides us with documentation on the ls command, which happens to be in section one.

As simple as they may appear, the man pages actually have a sophisticated structure. The pages are divided into sections, with some of the sections being further divided into subsections.

The section layout resembles this:

  • User commands--commands like ls, tar, and cpio.

  • System calls--C programming functions that are considered system calls, like opening and closing files.

  • C library functions--C programming functions that are not considered system calls, like printing text.

  • File formats--descriptions of file layouts, such as hosts.equiv and inetd.conf.

  • Headers, Tables and Macros--miscellaneous documentation, such as character sets and header files, not already covered.

  • Games and Demos--games and demo software. (Even Doom for UNIX has a man page!)

This is a generalized table of contents. BSD and System V started out with slightly different section schemes, and vendors tend to add their own sections and make their own "improvements." (How could it be any other way?)

To view information about a section, we can view its intro page. To see information about section one, we would execute the following command:

man -s 1 intro

The -s option selects which section of the man pages to search with the System V version of the man command. BSD versions accept the section number as the first argument with no switch, while the Linux version will select the section from an environment variable or from a -s option.

All of the versions accept the -a option, which will force the man command to search all of the sections and display all pages that match. Since there are different pages with the same name, understanding the different sections and what belongs in each of them is helpful. However, there are additional tools, and it is not necessary to memorize the section layout for each system.
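The same lookup in each dialect looks like this; each line is guarded with || true because only one form typically works on any given system:

```shell
# System V (Solaris) form: section chosen with -s
man -s 1 intro 2>/dev/null || true

# BSD and Linux form: section given as a bare first argument
man 1 intro 2>/dev/null || true

# All dialects: -a shows matching pages from every section
man -a intro 2>/dev/null || true
```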

Man pages can be indexed and preformatted with a tool called catman. The preformatting makes the pages display faster and is less important than the indexing, which allows us to use more powerful information retrieval tools, namely whatis and apropos.

apropos displays the section number, name, and short description of any page that matches the specified keyword. whatis gives us the man page name and section number only for an exact name match.

For example, let's try the apropos command for the keyword nfs:

apropos nfs

A portion of the response would be this:

automount (8) Automatically mount NFS file systems
exportfs (8) Export and unexport directories to NFS clients
exports, xtab (5) Directories to export to NFS clients
nfs, NFS (4P) Network file system
nfsd, biod (8) NFS daemons
(The output was edited, because the actual request returned 12 responses.)

Now let's try whatis:

whatis nfs

nfs, NFS (4P) Network file system

With whatis we only see the name of the actual nfs manual page and its section number.

In addition to the manual pages, Sun supplies AnswerBook with Solaris. It's a GUI-based online help system that supports full text search and just about completely removes the need for hard copy documentation.

IBM supplies Infoviewer, which is also a GUI-based online help system. In fact, the default AIX installation program installs Infoviewer instead of the manual pages, which can be quite an inconvenience for users who do not use X-windows.

Internet Information Resources

The Internet provides administrators with a wealth of resources too.

1. Usenet News--While not as useful as it once was (due to overcrowding and a plummeting signal-to-noise ratio), Usenet news offers discussion groups about all of the UNIX variants and various aspects of administering them. Some examples are comp.sys.sun.admin, comp.os.linux.setup, and comp.unix.admin. Because of the high traffic on Usenet news, some sites do not carry it. If you cannot get access, try using a search service to find what you need. Sometimes searching for articles that interest you, instead of browsing, is a faster way to get what you need anyway. Dejanews offers both searching and browsing.

2. FAQ Lists--Frequently Asked Questions lists hold a wealth of information. Most of the computer-related Usenet groups have their own FAQ lists, which are posted periodically in the groups and archived on ftp servers. Many of the FAQs are also available in html format.

3. The Web--Documentation is available from the UNIX vendors and from people and groups involved with Linux. Many users and developers also post a wealth of information just to be helpful. Use the search services to get your research going. UNIX was one of the first operating systems to colonize the Internet, and, as a result, there is a huge number of UNIX users on it.

There is a large amount of documentation and help available free on the Internet, and there is a lot more to be learned from seeking out an answer than simply paying a consultant to tell you or leaning on someone else to do what has to be done for you.

Tools of the Trade

A successful administrator takes full advantage of the tools provided with a UNIX system. To the uninitiated, UNIX seems difficult and unwieldy, but once you get the idea you'll never want to use another system.

The Shell

Earlier, we demonstrated how cpio uses the output of the find command to learn what files to archive. This was a demonstration of shell pipes, which redirect the output of one command to another. We also used this to back up files to a tape drive that is located on another host.

We also demonstrated mailing a file to a user from the command line using redirection, which opens a file and passes its contents to a command as if they had been typed at the keyboard.

UNIX shells also support sophisticated programming constructs, such as loops, and provide comparison operators, such as equality, greater than, and less than.

Shell programming is essential to system administration, especially as networks grow larger and, as a result, more time consuming to maintain. File backups, adding users and nodes, collecting usage statistics, and a whole host of other administrative tasks are candidates for unattended scripts.
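As a small example of the kind of unattended script this enables, the following reports any filesystem more than 90 percent full (the threshold is arbitrary):

```shell
# Warn about filesystems over a usage threshold.
# df -P produces the portable output format; field 5 is "Use%"
# and field 6 is the mount point.
df -P | awk 'NR > 1 && $5 + 0 > 90 { print $6 " is " $5 " full" }'
```

Piped into the mail command from cron, a one-liner like this becomes an early warning system that costs nothing to run.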

Perl and Other Automation Tools

Perl has become more and more popular over the past few years, and not without good reason. Many tasks that would have required the use of C in the past can now be done by a novice programmer in Perl. System administrators can benefit greatly from a little working knowledge of this language, since it can be used for tasks such as:

  • Analyzing log files and alerting the administrator of trouble via e-mail or pager

  • Automatically converting systems statistics into Web pages.

  • Automating the process of creating user accounts, adding and distributing automounter maps, backing up systems, and creating html content.

  • Creating and communicating over network connections.

These are only a few examples of what this language can do. Chapter 5, "Perl," in Volume 2 has in-depth coverage of Perl.

Some other tools worth noting are TCL/TK, which most of the RedHat administrative tools are written in, and awk, which is covered extensively in Chapter 4, "Awk," of Volume 2.

Intranet Tools

As mentioned earlier, consider using intranet tools like Web servers and e-mail to communicate with your users.

There are many Web discussion and guest book applications, most written in perl, that could be modified to let customers enter requests for support. An internal home page can announce scheduled outages and system enhancements. The Web also allows you to link clients to vendor-provided support resources and documentation.


What is system administration? This chapter kind of meandered along and tried to give you an idea of what administrators do and what they need to know.

A literal attempt at answering the question may say that administrators are responsible for:

  • Understanding how the systems they are responsible for interact over the office or organizational LAN/WAN.

  • Supporting users by creating their accounts, protecting their data, and making their sometimes bewildering systems easier to use.

  • Supporting systems by keeping them secure, up to date, and well tuned.

  • Planning the efforts behind support and growth.

  • Anticipating and resolving problems.

But it may be better to generalize and say that system administration is the process and effort behind supporting a system or systems so that a company or group can attain its goals.

The rest of this section delves much more deeply into the details of system support. Chapters 15 and 16 describe how to install a UNIX system and start it up and shut it down.

In Chapter 17 we cover supporting users in much more depth than we did here. Chapter 18 covers file systems and disk configuration, while Chapter 19 covers kernel configuration.

Chapter 20 is where the complete coverage of UNIX networking is located. In Chapter 21 we cover system accounting, which is key for administrators who support large groups of users or who need a comprehensive system for charging users for access.

For information on system performance and how to improve it, see Chapter 22. In addition, Chapter 23 provides information on how to add and maintain new system components, such as tape drives and modems.

Managing mail, Usenet news, and other network services is covered in Chapters 24 through 27. Our last chapter in this section, Chapter 28, covers system backup and restore.

What Is a Shell?

By William A. Farra

Nearly every human-usable invention has an interface point with which you interact. Whether you are in the front seat of a horse and buggy, in the cockpit of a plane, or at the keyboard of a piano, this position is where you manipulate and manage the various aspects of the invention to achieve a desired outcome. The human interface point for UNIX is the shell, which is a program layer that provides you with an environment in which to enter commands and parameters to produce a given result. As with any invention, the more knowledge and experience you have with it, the greater the accomplishment you make with it.

To meet varying needs, UNIX provides several different shells. Chapters 9 through 13 discuss the Bourne, Bourne Again, Korn, and C shells. Each offers its own features and ways of interacting with UNIX. Topics discussed in this chapter are the following:

  • How shells work with you and UNIX
  • The features of a shell
  • Manipulating the shell environment

How the Kernel and the Shell Interact

When a UNIX system is brought online, the program unix (the Kernel) is loaded into the computer's main memory, where it remains until the computer is shut down. During the bootup process, the program init runs as a background task and remains running until shutdown. This program scans the file /etc/inittab, which lists what ports have terminals and their characteristics. When an active, open terminal is found, init calls the program getty, which issues a login: prompt to the terminal's monitor. With these processes in place and running, the user is ready to start interacting with the system.
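The inittab format varies among UNIX flavors, but a representative line (this entry is illustrative, not taken from any particular system) ties a terminal line to a getty process:

```
# id : run levels : action : process
co:2345:respawn:/sbin/getty 38400 console
```

The respawn action tells init to restart getty whenever it exits, which is why a fresh login: prompt reappears after each logout.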

UNIX Calls the Shell at Login

Figure 8.1 shows the process flow from the kernel through the login process. At this point the user is in an active shell, ready to give commands to the system.

Figure 8.1.
How a shell is started from login.

During login, when you type your user name, getty issues a password: prompt to the monitor. After you type your password, getty calls login, which scans for a matching entry in the file /etc/passwd. If a match is made, login takes you to your home directory and then passes control to a session startup program; both the home directory and the startup program are specified by the entry in /etc/passwd. Although the startup program might be a specific application, such as a menu program, normally it is a shell program such as /bin/sh, the Bourne shell.
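A hypothetical /etc/passwd entry (the user name, IDs, and paths here are invented for illustration) shows where those last two fields come from:

```
farra:x:1001:100:William A. Farra:/home/farra:/bin/sh
```

The colon-separated fields are: user name, password field, user ID, group ID, comment, home directory, and session startup program.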

From here, the shell program reads the files /etc/profile and .profile, which set up the system-wide and user-specific environment criteria. At this point, the shell issues a command prompt such as $.

When the shell is terminated, the kernel returns control to the init program, which restarts the login process. Termination can happen in one of two ways: with the exit command or when the kernel issues a kill command to the shell process. At termination, the kernel recovers all resources used by the user and the shell program.

The Shell and Child Processes

In the UNIX system, there are many layers of programs, from the kernel up through a given application program or command. The relationship of these layers is represented in Figure 8.2.

Figure 8.2.
UNIX system layers.

After you finish logging on, the shell program layer is in direct contact with the kernel, as shown in Figure 8.2. As you type a command such as $ ls, the shell locates the actual program file, /bin/ls, and passes it to the kernel to execute. The kernel creates a new child process area, loads the program, and executes the instructions in /bin/ls. After program completion, the kernel recovers the process area and returns control to the parent shell program. To see an example of this, type the following command:

$ ps

This lists the processes you are currently running. You will see the shell program and the ps program. Now type the following:

$sleep 10 &

This command starts a sleep child process in the background, which you can see listed with the ps command. Whenever you enter a command, the shell creates a child process that executes independently of the parent process, or shell. This leaves the parent free to continue other work.
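The parent/child relationship can be sketched in a few lines (the sleep duration is arbitrary):

```shell
sleep 2 &             # start a child process in the background; the shell does not wait
echo "child PID: $!"  # $! expands to the PID of the most recent background process
ps                    # both the shell and the sleep child appear in the list
wait                  # pause here until all background children have exited
```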

Auto-Execution of the Shell

Some UNIX resources, such as cron, can execute a shell program without human interaction. When using this feature, the user needs to specify which shell to run in the first line of the shell program, like this:

#! /bin/sh

This specifies the Bourne shell.

You should also redirect any output, because no terminal is associated with auto-execution. This is described in the "File Handling: Input/Output Redirection and Pipes" section later in this chapter.
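A minimal sketch of a script meant for unattended execution (the script name and log path are invented for illustration):

```shell
#!/bin/sh
# nightly.sh -- hypothetical unattended job: append a timestamp to a log.
# cron supplies no terminal, so stdout and stderr are both redirected.
date >> /tmp/nightly.log 2>&1
```

A matching (hypothetical) crontab entry such as `0 2 * * * /home/farra/nightly.sh` would run it at 2 a.m. daily.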

The Functions and Features of a Shell

It doesn't matter which of the standard shells you choose, because they all have the same purpose: to provide a user interface to UNIX. To provide this interface, all the shells offer the same basic characteristics:

  • Command-line interpretation
  • Reserved words
  • Shell meta-characters (wild cards)
  • Access to and handling of program commands
  • File handling: input/output redirection and pipes
  • Maintenance of variables
  • Environment control
  • Shell programming

Command-Line Interpretation

When you log in, starting a special version of a shell called an interactive shell, you see a shell prompt, usually in the form of a dollar sign ($), a percent sign (%), or a pound sign (#). When you type a line of input at a shell prompt, the shell tries to interpret it. Input to a shell prompt is sometimes called a command line. The basic format of a command line is

command arguments

command is an executable UNIX command, program, utility, or shell program. arguments are passed to the executable. Most UNIX utility programs expect arguments to take the following form:

options filenames

For example, in the command line

$ ls -l file1 file2

there are three arguments to ls; the first is an option, and the last two are filenames.

One of the things the shell does for the kernel is to eliminate unnecessary information. For a computer, one type of unnecessary information is whitespace; therefore, it is important to know what the shell does when it sees whitespace. Whitespace consists of space characters, horizontal tabs, and newline characters. Consider this example:

$ echo part A     part B     part C
part A part B part C

Here, the shell has interpreted the command line as the echo command with six arguments and has removed the whitespace between the arguments. For example, if you were printing headings for a report and wanted to keep the whitespace, you would have to enclose the data in quotation marks, as in

$ echo 'part A     part B     part C'
part A     part B     part C

The single quotation mark prevents the shell from looking inside the quotes. Now the shell interprets this line as the echo command with a single argument, which happens to be a string of characters including whitespace.

Reserved Words

All shell versions have words with special meanings. In shell programming, words such as do, done, for, and while provide loop control, and if, then, else, and fi provide conditional control. Each shell version also has different reserved words pertaining to its specific features.

Shell Meta-Characters (Wild Cards)

All shell versions have meta-characters, which allow the user to match filenames by pattern. The following are wild cards:

Wild Card Description
* Matches any string of characters, including none
? Matches any single character
[] Matches any one of a range or list of characters

Wild cards can be useful when processing a number of specific files. The following are some examples:

$ls t*

This lists all files starting with t.

$ls test?5.dat

This lists all files that start with test, followed by any single character, and end with 5.dat.

$ls [a-c]*

This lists all files starting with a through c.

$ls [emt]*

This lists all files starting with e, m, or t.
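These patterns can be tried safely in a scratch directory (the directory and file names here are invented):

```shell
mkdir -p /tmp/globdemo && cd /tmp/globdemo
touch alpha.txt beta.txt test15.dat test25.dat
ls t*            # test15.dat test25.dat
ls test?5.dat    # test15.dat test25.dat
ls [a-c]*        # alpha.txt beta.txt
```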

Program Commands

When a command is typed, the shell reads the environment variable $PATH, which contains a list of directories containing program files. The shell searches this list of directories to find the program file for the command and then passes the file's full pathname to the kernel.
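You can inspect the search path, and ask the shell where it finds a given command, like so:

```shell
echo "$PATH"   # colon-separated list of directories, searched in order
type ls        # reports what the shell would run for ls, e.g. /bin/ls
```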

File Handling: Input/Output Redirection and Pipes

In previous chapters, you learned about standard input and output. Unless otherwise specified with arguments, most UNIX commands take input from the terminal keyboard and send output to the terminal monitor. To redirect output to a file, use the > symbol. For example,

$ls > myfiles

lists the files in your current directory and places them in a file called myfiles. Likewise, you can redirect input with the < symbol. For example,

$wc -l < myfiles

feeds the command wc with input from the file myfiles. Although you could obtain the same output by having the filename as an argument, the need for input redirection becomes more apparent in shell programming.

To string the output from one command to the input of the next command, you can use the | (pipe) symbol. For example,

$ls -s | sort -nr | pg

This lists the files in the current directory with their size in blocks, pipes that output to sort, which sorts it in descending numeric order, and pipes the result to the paging command pg for final display on the terminal's monitor. The pipe is one of the most useful tools when building command constructs.
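A small end-to-end sketch combining redirection and a pipe (the file name is invented, and head stands in for pg, which is absent on many modern systems):

```shell
ls > /tmp/myfiles        # capture the directory listing in a file
wc -l < /tmp/myfiles     # count the entries via input redirection
ls -s | sort -nr | head  # sizes in blocks, sorted in descending numeric order
```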

Command Substitution

Command substitution is similar to redirection, except that it is used to provide arguments to a command from the output of another. The command to substitute is enclosed in backquotes. For example,

$grep `wc -l < myfiles` *

takes the number of lines in the file myfiles from the wc command and places the number as an argument to the grep command to search all files in the current directory for that number.
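A runnable sketch of the same idea (the file name is invented): the shell runs the backquoted command first and splices its output into the command line:

```shell
echo hello > /tmp/subdemo     # create a one-line file
lines=`wc -l < /tmp/subdemo`  # wc runs first; its output becomes the value
echo "the file has $lines line(s)"
```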

Maintenance of Variables

The shell is capable of maintaining variables. Variables are places you can store data for later use. You assign a value to a variable with an equal (=) sign:

$ LOOKUP=/usr/mydir

Here, the shell establishes LOOKUP as a variable and assigns it the value /usr/mydir. Later, you can use the value stored in LOOKUP in a command line by prefacing the variable name with a dollar sign ($). Consider these examples:

$ echo $LOOKUP
$ echo LOOKUP

To make a variable available to child processes, you can use the export command--for example:

$ LOOKUP=/usr/mydir
$export LOOKUP
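A quick sketch of why export matters (the directory name is invented): a child shell can see the variable only after it is exported.

```shell
LOOKUP=/usr/mydir
sh -c 'echo "before export: [$LOOKUP]"'  # prints empty brackets; the child cannot see it
export LOOKUP
sh -c 'echo "after export: [$LOOKUP]"'   # prints [/usr/mydir]
```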

NOTE: Assigning values to variables in the C shell differs from doing so in the Bourne and Korn shells. To assign a variable in the C shell, use the set command:

% set LOOKUP = /usr/mydir

Notice that spaces precede and follow the equal sign.

Like filename substitution, variable name substitution happens before the program is called. The second echo example above omits the dollar sign ($), so the shell simply passes the string LOOKUP to echo as an argument. In variable name substitution, the value of the variable replaces the variable name.

For example, in

$ ls $LOOKUP/filename

the ls program is called with the single argument /usr/mydir/filename.

Shell Startup--Environment Control

When a user begins a session with UNIX and the shell is executed, the shell creates a specified environment for the user. The following sections describe these processes.

Shell Environment Variables

When the login program invokes your shell, it sets up your environment variables, which are read from the shell initialization files /etc/profile and .profile. These files normally set the type of terminal in the variable $TERM and the default path that is searched for executable files in the variable $PATH. Try these examples:

$ echo $TERM
$ echo $PATH

You can easily change the variables the same way you assign values to any shell variable.
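For example, to change the terminal type for the rest of a session (vt100 is just a common value):

```shell
TERM=vt100     # assign a new value, just as with any shell variable
export TERM    # make it visible to programs the shell starts
echo "$TERM"
```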

NOTE: C shell assigns values to environment variables using the setenv command:

% setenv TERM vt100

Shell Startup Files

The file .profile is the local startup file for the Bourne shell. The Korn shell uses .kshrc, and the C shell uses .cshrc. You can edit these files to manipulate your startup environment. You can add additional variables as the need arises. You also can add shell programming to have conditional environment settings, if necessary.

Shell Startup Options

When invoking the shell either from /etc/passwd or from the command line, you can set several options as arguments to the shell program. For example, the Bourne shell has a -x option that displays commands and their arguments before they are executed, which is useful for debugging a shell program. These options are described in detail in the following chapters.
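You can see the -x trace without writing a script (the command is arbitrary):

```shell
sh -x -c 'echo hello'   # prints the trace line "+ echo hello" before the output
```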

Shell Programming

You've seen that the shell interprets command lines, maintains variables, and executes programs. The shell is also a programming language: you can store a set of shell commands in a file, known as a shell script or shell program. By combining commands and variable assignments with flow control and decision making, you have a powerful programming tool. Using the shell as a programming language, you can automate recurring tasks, write reports, and build and manipulate your own data files. The remaining chapters in Part II discuss shell programming in more detail.
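A minimal sketch of a shell program using a loop and a conditional (the script name is invented; the file names it is given are up to the caller):

```shell
#!/bin/sh
# classify.sh -- report whether each argument names a directory or a file
for name in "$@"
do
    if [ -d "$name" ]
    then
        echo "$name: directory"
    else
        echo "$name: file"
    fi
done
```

Running `sh classify.sh /tmp /etc/passwd` would print one line per argument.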

Summary

The shell provides an interface between the user and the heart of UNIX--the kernel. The shell interprets command lines as input, makes filename and variable substitutions, redirects input and output, locates the executable file, and initiates and interfaces programs. The shell creates child processes and manages their execution. The shell maintains each user's environment variables. And the shell is itself a powerful programming language.

While this chapter gives an overview of the UNIX shell, Chapters 9 through 13 describe the various shells in detail, covering their features, their language specifics, and the fundamentals of shell programming and execution. Continued reading is highly recommended.

Unix commands

cd matlab          change to the subdirectory named matlab
cd                 change to the home directory
cp file1 file2     make a copy of file1 called file2
cp dir2/file1 .    copy file1 from directory dir2 into the current directory
echo $param        show the definition of param
env                show values of environment variables
history            show previous commands by number
lpr -Pworkman54 f1 print f1 on printer workman54 (notice the capital P)
lpq -Pworkman54    look at the printer queue for workman54
ls *dwg            list names of all files ending in the letters dwg
man cp             learn more about the cp command (manual)
mkdir ansys        make a new directory called ansys (change to the desired parent directory first with the cd command)
more file1         look at file1 page by page (usually aliased to just m)
mv file1 file2     move or rename file1 to file2
pwd                show current directory name
rm *bak            remove all files ending in the letters bak
abbr=orig          define an abbreviation (usually shorter)
tail -15 file1     look at the last 15 lines of file1

You can rerun previous commands and make changes with emacs-style editing and history commands:
^P redisplay previous command (then use commands to change it)
^Rls repeat most recent command that begins with ls
!5 repeat command 5 (use history or h to find the command number)
!^ first option of previous command
!$ last option of previous command
!* all options of previous command
!:2 second option of previous command

Before erasing files, check that the parameters are correct. For example, first give:
ls fname* # list corresponding files
rm !$ # erase those files

To copy files from my class directory, define a short symbol a, then check th

`rm': Remove files or directories

`rm' removes each given FILE. By default, it does not remove
directories. Synopsis:

rm [OPTION]... [FILE]...

If a file is unwritable, standard input is a terminal, and the `-f'
or `--force' option is not given, or the `-i' or `--interactive' option
_is_ given, `rm' prompts the user for whether to remove the file. If
the response does not begin with `y' or `Y', the file is skipped.

The program accepts the following options. Also see the common options.

`-d'
`--directory'
Remove directories with `unlink' instead of `rmdir', and don't
require a directory to be empty before trying to unlink it. Only
works if you have appropriate privileges. Because unlinking a
directory causes any files in the deleted directory to become
unreferenced, it is wise to `fsck' the filesystem after doing this.

`-f'
`--force'
Ignore nonexistent files and never prompt the user. Ignore any
previous `--interactive' (`-i') option.

`-i'
`--interactive'
Prompt whether to remove each file. If the response does not begin
with `y' or `Y', the file is skipped. Ignore any previous
`--force' (`-f') option.

`-r'
`-R'
`--recursive'
Remove the contents of directories recursively.

`-v'
`--verbose'
Print the name of each file before removing it.

One common question is how to remove files whose names begin with a
`-'. GNU `rm', like every program that uses the `getopt' function to
parse its arguments, lets you use the `--' option to indicate that all
following arguments are non-options. To remove a file called `-f' in
the current directory, you could type either:

rm -- -f

or

rm ./-f

The Unix `rm' program's use of a single `-' for this purpose
predates the development of the getopt standard syntax.


FTP Options

`--retr-symlinks'
Retrieve symbolic links on FTP sites as if they were plain files,
i.e. don't just create links locally.

`-g on/off'
Turn FTP globbing on or off. Globbing means you may use the
shell-like special characters ("wildcards"), like `*', `?', `['
and `]' to retrieve more than one file from the same directory at
once, like:


By default, globbing will be turned on if the URL contains a
globbing character. This option may be used to turn globbing on
or off permanently.

You may have to quote the URL to protect it from being expanded by
your shell. Globbing makes Wget look for a directory listing,
which is system-specific. This is why it currently works only
with Unix FTP servers (and the ones emulating Unix `ls' output).

`--passive-ftp'
Use the "passive" FTP retrieval scheme, in which the client
initiates the data connection. This is sometimes required for FTP
to work behind firewalls.

