The Console

When using a "ready to go" distribution, a text console will almost never pop up. So why care about historic textual user interfaces?

If you want to look behind the scenes and discover why and how the system behaves, which is probably why you have chosen Gentoo, then you will come into contact with text mode. It is hard to imagine understanding Linux without some knowledge of working in text mode. This does not mean you have to work without a graphical desktop. In fact you can open several consoles to easily observe how Linux handles applications running in parallel. Unfortunately, learning the text mode tools means dealing with historic inheritance. Additionally, many system routines and programs in Linux are bash scripts. Without understanding bash to some level of detail, it is not possible to understand those scripts. Bash seems so simple and straightforward that reading about it appears unnecessary, yet one minute later you can be completely lost when you find lines such as:

find /usr/bin -type l ! -xtype f ! -xtype d -ok rm -f {} \;

There is no way to understand this syntax without spending a lot of time reading the manuals.

If your X desktop has crashed, try pressing Ctrl Alt F1. With some luck just X has crashed and you will see a login prompt. Log in and try to kill the bad processes, or do a shutdown -h now.

There are actually different things involved when you see text on your screen: bash, getty and the console.

  1. The command interpreter bash (bash is a shell implementation; /bin/sh is usually a link pointing to /bin/bash) runs in a console. So you use bash and might not even be aware of it! The file /etc/passwd defines per user which command interpreter is used. You could use a command interpreter other than bash, if you want to be a real Linux freak.

  2. However, before you can type in a command you have to log in; this is done using a getty. There are different gettys:

     - The original getty (not present on my computer)

     - The alternative Linux agetty

     - The mingetty that even allows automatic login of a user (used for home theater PCs)

     Gettys are text based, but graphical X display managers such as xdm, kdm and gdm do the same job. The file /etc/inittab defines which getty is used.

  3. In a graphical desktop environment such as KDE there are numerous console programs where bash commands can be entered.

Finally, everything you type and everything you see is connected to a teletypewriter (tty). Which device file is involved can be seen by simply typing the command tty.

If you run under X you get a pseudo terminal and see something like /dev/pts/1

In the console you get something like /dev/tty1

Console login

As described above gettys are responsible for the login. See also the booting and runlevels section within this document.

Type who to see who has logged in.

Toggling through gettys:

In text mode

Alt F1 to F6

in X :

Ctrl Alt F1 to F6

Ctrl Alt F7 brings back to X

To log out of a text console press Ctrl D


To see on what terminal you work: w, tty, who

Open a console and check with tty what device file you have. Open another console and type echo Hello > /dev/pts/0 and Hello goes from one console to the other.

When opening a console in a desktop environment a new /dev/pts/<n> file is created, which is also reported when doing tty in that console. The pts dev files are pseudo terminal slaves that are linked to /dev/ptmx, as man pts describes.

Screen Manager

When connecting via ssh to a remote computer, closing the ssh connection also closes the terminal and stops whatever is in progress (for example, a Gentoo update using emerge on a remote embedded device stops). Within the ssh console the program screen allows working in a local terminal (screen session): create one with screen -S <name>, detach from it with Ctrl+a followed by d, list the sessions and their IDs with screen -ls, and later re-attach with screen -r or screen -r <session ID>.

Ctrl+a ? gives help

Working with bash

See the bash manual

Test script

The following script is used in the following sections to test process IDs and environmental variables. Let's call it testscript:

Example 3.1. Test script

#!/bin/bash
echo ===============================
echo This is the process with PID $$
echo ===============================
# Create variables with unique names that contain the PID. Start with
# Z so they are the last in the alphabet.
LocalScriptVarName=ZLocalScriptVar$$
GlobalScriptVarName=ZGlobalScriptVar$$
eval $LocalScriptVarName=$$
declare -x $GlobalScriptVarName=$$
# Create variables with common names and values.
ZLocalVar=$$
declare -x ZGlobalVar=$$
# print the variables
eval "value=\$$LocalScriptVarName"
echo Local Script Variable:  Name: $LocalScriptVarName  Value: $value
eval "value=\$$GlobalScriptVarName"
echo Global Script Variable: Name: $GlobalScriptVarName Value: $value
echo Local Variable:         Name: ZLocalVar             Value: $ZLocalVar
echo Global Variable:        Name: ZGlobalVar            Value: $ZGlobalVar
echo =====================================================
echo "Process PID $$ knows the global and local variables"
echo =====================================================
# get rid of some local variables
unset value
unset LocalScriptVarName
echo ===========================================
echo "Process PID $$ knows the global variables"
echo ===========================================
export -p
echo =====================================
echo Process PID $$ waits to be terminated
echo =====================================
# hang here to observe that this process is alive.
# kdialog creates a new process
msgtext="Click ok to terminate PID $$"
kdialog --msgbox "$msgtext"


kdialog pops up a graphical window under KDE, and the script will not terminate until the button gets clicked.

Command line typing support

Bash offers some features that may be unknown to you:

Tab completion helps to type in path names. Type the first characters of the path and simply press the Tab key; bash completes the rest, or it beeps when multiple choices exist. If it beeps, press Tab twice and you see all choices.

cat /etc/fs then press the Tab key

cd /usr/src/ tab tab


Tab completion also works on commands, since commands are files and tab completion considers the contents of the PATH environmental variable: echo $PATH

Ctrl R starts a reverse search through the command history: type a few characters and the most recent matching command pops up; press Ctrl R again to jump to older matches.

history pops up a long list of the commands entered. !<number> starts a previously typed command, and !<number>:p just pops it up to be modified.

!$ gives the last argument of the last command

Starting and controlling processes

The programs in the directories listed by echo $PATH can be started by typing their name and pressing Enter. All others, including those in the working directory, must be called with the path included. For programs in the working directory type ./<program name>


PATH is a regular environmental variable, and therefore different instances of it exist. The PATH variable of some daemon process might be completely different from the PATH variable that you see in the console.

To start a program and put it into the background:

<command> &

or, to not get the text output:

<command> > /dev/null &

After this the PID (Process ID) gets printed.

ps shows and confirms the program running in the background and shows its PID.

To put an already running program into the background, press Ctrl Z to suspend it, then type bg to put it into the background.

It will still be in memory and can be brought back to the front using the command fg. However, what was printed on the screen is lost. Without a parameter, fg brings back the last process put into the background. To get another process to the front, use fg with a parameter. However, the parameter is not the PID; it is the job number instead.

jobs shows the job numbers; the job numbers start from 1 within each session. To see both job number and PID type jobs -l.

The jobs can be killed using the PID: kill -9 <PID>

To stop (or abort, quit, exit) a running process do Ctrl C.

If it does not react, it might have crashed, so try to suspend it via Ctrl Z and then use kill.
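The whole sequence can be tried with sleep standing in for any long running program (the PID values will of course differ):

```shell
# Start a long running command in the background; $! holds its PID.
sleep 60 &
bgpid=$!
echo "started background PID $bgpid"
# jobs -l lists the job number together with the PID.
jobs -l
# Terminate it by PID and collect the exit status.
kill -9 $bgpid
wait $bgpid
echo "exit status after kill -9: $?"   # 137 = 128 + signal number 9
```

The exit status 128 plus the signal number is how bash reports a process killed by a signal.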

There is also the dot command . <filename>. The dot reads the file <filename> and executes the commands found in it. The file <filename> does not need executable permission. Another difference is that such a script can do a cd that affects the calling shell.

Desktop environments have GUIs to see what is going on and how the processes come and go.

Using the previously defined test script, type a couple of times: ./testscript &

And observe it in your desktop environment's GUI. Check the PIDs. Test the commands jobs -l, fg, bg.

Conditional processing

<command1> ; <command2>

run command1 first, then command2, using two PIDs

(<command1> ; <command2>)

as above but with the same PID

<command1> & <command2>

run command1 in the background and command2 at the same time

<command1> && <command2>

run command1 first; only when it succeeds, run command2

<command1> || <command2>

run command1 first; only when it fails, run command2
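The behaviour of && and || is easy to check with the commands true and false, which do nothing but succeed or fail:

```shell
true && echo "printed because the first command succeeded"
false || echo "printed because the first command failed"
false && echo "never printed"
true || echo "never printed either"
```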

Running in another directory

Programs run in the directory where they got started. To run a program in another directory: (cd /opt/slic3r-bin-1.3.0/ && ./Slic3r)

Special characters

. the current working directory

.. the directory above

/ the topmost root directory, or the separator between directories

~ the home directory of the user

> file creates the file and moves the output into this file instead of to the screen

>> file appends the output to the file instead of to the screen

2> file error messages are redirected to the file

&> file error messages and standard output are redirected to the file

2>&1 error messages are redirected into standard output

< file input comes from the file instead of from the keyboard

1 Standard output /dev/stdout

2 Standard error output /dev/stderr

0 Standard input /dev/stdin

& Standard output and standard error output together

1> is equal to >


./cmd 1>out.txt 2>err.txt

./cmd 1>>out.txt 2>>err.txt

ls -l > <filename>

less < <filename>
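As a sketch of how these combine (demofile, out.txt, err.txt and all.txt are arbitrary example names):

```shell
cd "$(mktemp -d)"        # scratch directory so nothing gets overwritten
echo hello > demofile
# ls produces normal output for the existing name and an error message
# for the missing one; send the two streams to separate files.
ls demofile /no/such/file > out.txt 2> err.txt
cat out.txt              # the normal output: demofile
cat err.txt              # the error message from ls
# Send both streams into the same file; 2>&1 must come after > all.txt.
ls demofile /no/such/file > all.txt 2>&1
```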

Using the vertical line character |, the following example shows how to send the output to a file and to the screen at the same time:

ls | tee out.txt

The | character represents an unnamed pipe.

Wild cards

Wild cards let you select a group of files and directories. Using wild cards is a key feature in bash scripts and therefore it is absolutely necessary to understand how they behave. There are two dedicated wild card characters, * and ?.

  1. The character * means any combination of characters. There is the exception that a dot . is not matched in the first position, so hidden files and directories are excluded. Also the directory separator / is excluded.

  2. The character ? means a single character (including . but excluding /).

For example, ls prints out all the files in a directory. To show just the pdf files in the working directory type ls *.pdf

However for a newbie, or for somebody just working with GUIs, there will be many surprises:

  1. The command ls * gives such a surprise. Not just the files are printed out, but also the sub-directories with their content, down to one level.

  2. The command ls .* is the next surprise: the hidden files are shown, but also much more, even files of the directory above.

What is going on? What logic is behind this?

To be sarcastic, bash can not handle wild cards (directly). Before bash executes the command line, it replaces the wild cards in a preceding step.

The outcome of that first step can easily be understood when you type echo * or echo .*. This shows how bash resolves the wild cards. It is on this expanded line that bash executes the ls command. So not just one argument (directory or file) is passed to ls; a whole list of directories and files is passed to the ls command. As an alternative, type sh -x to enable a debugging feature. After that, each command is first printed with the wild cards replaced and then executed. This way you can observe the two steps.


See how it handles the working directory . and the directory above .. so you understand the strange directory up and down in the ls example. The entries . and .. are present in every directory and are therefore handled as if they were regular files.

Additionally bash lets you use any regular character as wild card by putting it into brackets [a]. The command echo [a]* prints out all files and directories starting with an a. Or echo [ab]* prints out all files and directories starting with an a or a b. Or echo [a-d]* can be used as a range. Or echo [!a]* to print all files and directories except the ones that start with an a. The command echo [^a]* is equivalent.

Conclusion: to list the hidden files (and unfortunately also the hidden directories) in the working directory type ls .[!.]*. It is confusing, isn't it?
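A scratch directory (created with mktemp -d so nothing of yours is touched) makes the expansion visible:

```shell
cd "$(mktemp -d)"
touch visible.txt .hidden
echo *          # visible.txt: the hidden file is not matched
echo .[!.]*     # .hidden: hidden names are matched, but . and .. are skipped
echo [u-w]*     # visible.txt: range pattern for the first character
```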

Strings in Bash

Strings are put between double quotes: "<string>".


Characters such as $ will be interpreted within strings. To tell bash that such characters have to be taken as regular characters, they have to be put between single quotes '<character>' or the \ character has to be put before them.

Watch out: there is a difference between the ` (backtick), the ' (single quote) and the " (double quote) character.

Parameter substitution

Parameter substitution is used to manipulate strings.

It can have the following syntax:

${<name of variable><one or two characters><string>}

${var:-string} The variable is returned, except when it is empty; then the string is returned.

${var:=string} The variable is returned, except when it is empty; then the string is returned and also written into the empty variable.

${var:+string} The string is returned, except when the variable is empty; then nothing is returned.

${var:?string} If the variable is empty, the script terminates and the string appears on the screen.

${#var} The number of characters in the variable is returned.

${var##string} Removes the largest part at the beginning of the variable that matches the string and returns the rest; the entire variable is returned when there is no match.

${var#string} Removes the smallest part at the beginning of the variable that matches the string and returns the rest; the entire variable is returned when there is no match.

${var%%string} Removes the largest part at the end of the variable that matches the string and returns the rest; the entire variable is returned when there is no match.

${var%string} Removes the smallest part at the end of the variable that matches the string and returns the rest; the entire variable is returned when there is no match.

${!var} Takes the string inside the variable, looks for a variable that has this name and, when found, returns its contents.
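A few of these forms applied to a path held in a variable (file and answer are arbitrary example names):

```shell
file=/usr/local/bin/script.sh
echo ${#file}       # 24: number of characters
echo ${file##*/}    # script.sh: largest match of */ removed from the front
echo ${file#*/}     # usr/local/bin/script.sh: smallest match of */ removed
echo ${file%/*}     # /usr/local/bin: smallest match of /* removed from the end
unset answer
echo ${answer:-42}  # 42: the variable is empty, so the string is returned
echo ${answer:=42}  # 42: as above, but also written into the variable
echo $answer        # 42
```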

Brace expansion

Brace expansion can be used to create a list of strings:

echo string{x,y}{1,2,3}


Put the term in [ ] with a $ in front and bash knows that it has to calculate (the newer syntax is $(( )) ):

echo $[1+2]
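Both expansions side by side; $(( )) is the syntax to prefer in new scripts:

```shell
echo string{x,y}{1,2,3}   # stringx1 stringx2 stringx3 stringy1 stringy2 stringy3
echo $[1+2]               # 3: old syntax
echo $((1+2))             # 3: new syntax
echo $((7*6))             # 42
```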

Command substitution

Command substitution has the following two syntax possibilities:

$(<command>)

`<command>`

Watch out: there is a difference between the ` (backtick) and the ' (single quote) character. Therefore prefer the first syntax.

a=$(pwd)

puts not the string pwd into the variable a, but the name of the working directory.

And now what it does: the command <command> is executed and produces text that is interpreted in the command line containing this term.

It can fail when it is the only term on the bash line. The way out is using eval <some text to be interpreted as command>.

Often command substitution is used in a command:

<command1> $(<command2>)

<command1> does not use <command2> itself as parameter. The $( ) construct causes <command2> to be executed first, and then the result is passed as parameter(s) to <command1>.
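Two typical uses, one storing the result in a variable and one passing it on as parameters (command -v prints the path of a command):

```shell
a=$(pwd)                 # a holds the working directory, not the string pwd
echo "we are in $a"
# The output of 'command -v ls' becomes the parameter of ls -l.
ls -l $(command -v ls)
```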

Bash scripts

Bash commands can be put into a text file and form a bash script. This way they can be started from anywhere. However, first some things have to be done:

  1. The file containing the bash script must get executable permission using chmod:

    chmod a+x <scriptname>

  2. The first line must contain some magic characters telling which interpreter has to be used to run the script. It is the path to bash:

    #!/bin/bash

    For a python script it would be a line such as

    #!/usr/bin/python

    Often the interpreter is called via /bin/env or /usr/bin/env to get a known environment, as in

    #!/usr/bin/env python

    Gentoo makes heavy use of runscript instead of bash for the files in /etc/init.d.

    Everything after a # character will be interpreted by bash as a comment.

    Make use of a syntax sensitive editor when you write bash scripts.

  3. And finally the file has to be found. Either call it with the full path /<path to script>/<mybashscript>, as ./<mybashscript>, or copy it to where echo $PATH points. Even better, keep it where it is, add a version number to the filename and create a symbolic link without the version number in a directory that is in the path.

Bash processes

As every process, the running bash also has a PID (Process ID). The PID of the running bash is stored in the variable $ and can be seen by typing:

echo $$

This prints the PID of the shell; ps then shows the same PID for the bash process:

5650 pts/1 00:00:00 bash

5683 pts/1 00:00:00 ps

Or even better, use a system monitor of your desktop environment.

And if you open another window you will start a new bash process with another PID. The PIDs are unique and can also be used in file names to create unique files: ls > tmp.$$ avoids conflicts from accidental accesses by two processes running the same bash script.

Bash variables

Since exported bash variables are environmental variables and get read by the operating system and various programs not written in bash (such as C and Python programs), it is fundamental to understand how they behave.

Reading and writing Bash variables

Input can be done by putting the following line in a script:

read a

read stores all typed characters into the variable a and returns when Enter is pressed.

echo $a

prints the content of a. The $ sign is important to identify a as a variable and not just the letter a.
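read normally waits for the keyboard; for a quick test it can be fed from a here-string (a bash feature):

```shell
read a <<< "hello world"
echo $a            # hello world
# read splits on whitespace when several variables are given
read first rest <<< "one two three"
echo $first        # one
echo $rest         # two three
```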

Local bash variables

The following example shows well how bash variables behave:

  1. Bash variables created as var=hello are just valid for that process (PID) and can not be read by other programs.

    DO NOT PUT BLANKS AROUND THE = ! And use an editor with syntax highlighting.

  2. To show the contents of such variables echo $var

  3. Now type bash; this creates a child shell (which can be well observed in the process table).

  4. Then type echo $var and nothing will be printed, since the child runs as another process and var did not get inherited (it was not exported).

  5. Type exit to quit the child shell and you are back in the original shell.

  6. Type echo $var and hello pops up again.

  7. Finally delete the variable by typing unset var.

  8. Now type echo $var again, to verify that it has been deleted.
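The same experiment in non-interactive form, with bash -c playing the role of the child shell typed in step 3:

```shell
var=hello
echo "parent sees: $var"            # hello
bash -c 'echo "child sees: $var"'   # child sees nothing: var was not exported
export var
bash -c 'echo "child sees: $var"'   # hello: exported variables are inherited
unset var
```

The single quotes prevent the parent shell from expanding $var itself, so the child really does the lookup.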

Exported bash variables

The previously created variables are just available to the currently running process and are therefore called local variables.

To see the variables you have, type:

printenv or export -p or /bin/env

or set to get all including the local variable.

When a command, program or bash script is started (also when you type bash to create a child bash session), a new PID is created and the previously assigned local variables are no longer available. To prevent losing them, variables can be created as follows:

declare -x <variable name>=<value>

or exported when previously created:

export <variable name> or export <variable name>=<value>


Exporting means the child process gets a copy of the variable. If this copy gets overwritten or deleted, just the copy gets modified and not the variable of the parent process.


Obvious but very confusing is when a local variable gets created with the same name as an exported variable, or when an exported variable gets overwritten: then multiple variables with the same name but different values exist on the computer.

If a bash script wants to receive and be able to modify the variables of the calling shell, then it has to be called with a dot in front of it (. <name of script>). In this case no new PID is created; the script has the same PID as the calling process.
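That the child only gets a copy can be seen in one line (color is an arbitrary variable name):

```shell
export color=red
bash -c 'color=blue; echo "child has: $color"'   # blue: only the copy changed
echo "parent still has: $color"                  # red
unset color
```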

Calling a script

To understand Linux it is fundamental to understand what happens when you call a script; this is the same as calling any other executable.

Open a console and observe it. Create a local and a global variable:


mylocalvar=local

declare -x myglobalvar=global

Change the directory to where your test script is and type ./testscript. Notice that a new process with a different PID has been created; PPID holds the ID of the process that called it, which allows building the process tree. Only the exported (global) variables are inherited. When clicking the close button, the script terminates its process and returns back to the console. The test script has defined some variables, however none of them is available in the shell that called it:

echo $ZGlobalScriptVar<PID>

echo $ZGlobalVar

echo $ZLocalVar

echo $ZLocalScriptVar<PID>

But the previously defined variables are still there:

echo $mylocalvar

echo $myglobalvar


exec <command> starts the command within the same environment and PID as the calling process. The calling process is thereby terminated.

Open a console and observe it. Create a local and a global variable:


mylocalvar=local

declare -x myglobalvar=global

Change the directory to where your test script is and type exec ./testscript. Notice that the PID stays the same and just the global variable is there. When clicking the close button, the console closes.


source <command> or its short form . <command> starts the bash command within the same environment and PID as the calling process.

Open a console and observe it. Create a local and a global variable:


mylocalvar=local

declare -x myglobalvar=global

Change the directory to where your test script is and type:

source ./testscript. Notice that the PID stays the same and both the local and the global variable are there. When clicking the close button, it returns back to the console. The test script has defined some variables that are available now:

echo $ZGlobalScriptVar<PID>

echo $ZGlobalVar

echo $ZLocalVar

echo $ZLocalScriptVar<PID>

And also the previously defined variables are there:

echo $mylocalvar

echo $myglobalvar

Environmental variables

Environmental variables are exported bash variables that a process can read, but they are usually not created by the running applications; they are part of the system environment. Even though they are usually created using bash, they can be read, written and created by other programming languages such as C, Python and whatever else.

Examples of environmental variables are:

echo $PATH

echo $HOME


echo $PWD

Some other variables give more info about the own running process, application and program:

$* or $@ all the command line parameters passed to the script ("$@" keeps them as separate words)

$1 or (new syntax ${1}) the first command line parameter (${10} is the 10th parameter)

$# number of command line parameters passed

$0 name of the script

$? return value of last command called

$$ PID of the script (can be used as unique number)

$! PID of the last background process
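A throw-away script makes these visible; args.sh is a made-up name written to a scratch directory:

```shell
cd "$(mktemp -d)"
cat > args.sh <<'EOF'
#!/bin/bash
echo "script name: $0"
echo "first parameter: $1"
echo "number of parameters: $#"
EOF
chmod a+x args.sh
./args.sh one two three
echo "exit status of the script: $?"   # 0
```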

However, there are also bash variables accessible to different processes and used to control the system, such as the localization variables; generally speaking, these are the environmental variables.

Expanding Environmental variables

Environmental variables might be overwritten or not be passed on to a child process, so their previous value is lost. But many times they contain something like a list of data items where an additional item has to be added. The following, typed as a command or put in a bash script, appends a value to such a list:

export PATH=$PATH:<additional directory>


Setting environmental variables via /etc/env.d

The variables are in RAM and will be lost after a reboot. To avoid that, Gentoo stores the contents of the environmental variables in the /etc/env.d directory. The program env-update (which is also run automatically during emerge) takes all those files from /etc/env.d and creates /etc/profile.env.

Finally, /etc/profile is a script called when bash starts. /etc/profile reads /etc/profile.env and puts the variables into the RAM of the current process.

To prevent a new process with a new environment from being created, /etc/profile is started with the source keyword: source /etc/profile, or its short form . /etc/profile. Without the keyword source or the dot, the newly created process would get the updated environmental variables, but everything would be lost when /etc/profile terminates.

There is also a hierarchy among the processes, which can be viewed when, for example, pstree is typed in. Exported variables (declare -x or export) are inherited downwards the tree; therefore not all processes will see that a new /etc/profile.env has been created. Additionally, env-update uses ldconfig to create the files under /etc/ for the dynamic linker run-time bindings. env-update is a python script (/usr/lib/portage/bin/env-update); it can be called manually and is also called when emerge executes.

Setting bash variables per user

The script ~/.bashrc is called when a shell is opened, and therefore environmental variables can be initialized or changed there.

Run not installed programs

Running something that is not installed, such as the w3m html browser from a usb stick, would fail since the environmental variables do not contain the paths to the binary and the libraries, so do:

HOME=/mnt/home/user LD_LIBRARY_PATH=/mnt/usr/lib /mnt/usr/bin/w3m

The [ program

[ is actually a regular program, but obviously with a strange looking name. Type

[

and you will get

bash: [: missing `]'


whereis [

and it is found 3 times on this PC:

[: /usr/bin/[ /usr/X11R6/bin/[ /usr/bin/X11/[

Check in /usr/bin and you'll find it. It does almost the same as the program test. [ has no man page, but man test (or info test) shows more about it and its syntax. The programs test and [ evaluate an expression (number, string, file) and exit with:

0 if the expression is true,

1 if the expression is false,

2 if an error occurred.

If you want to see what it exits with:

[ a = b ]

echo $?

1

[ a = a ]

echo $?

0


test a = a is equivalent to [ a = a ], and suddenly the [ program does not look strange anymore. The difference between [ and test is that [ wants the ] character as last parameter, to look cool. Take care about the spaces, since [ a = b ] will be interpreted as any other regular program: <program name = "["> <parameter 1 = "a"> <parameter 2 = "="> <parameter 3 = "b"> <parameter 4 = "]">. This explains why


[a=b]

fails with

bash: [a=b]: command not found

Using the program [ in bash scripts makes it look as if it were part of the bash syntax (especially in if/else conditional branches and loops), but this is not the case: it is a standalone program and can be used in all kinds of situations.
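Typical use in a script is inside an if branch; the tested names are arbitrary examples:

```shell
if [ -f /etc/passwd ]; then
    echo "/etc/passwd is a regular file"
fi

count=3
if [ "$count" -gt 2 ]; then
    echo "count is greater than 2"
fi

# The same program can be used outside of any if:
[ -d /tmp ] && echo "/tmp is a directory"
```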

Bracket overview

{} define a block

${} reference of a variable

[] index and arrays

$[] calculate contents => new syntax $(())

() group commands in a subshell

(()) arithmetic expression (as in if and loops)

$() String in brackets will be interpreted as command

Debug of bash scripts

Bash syntax can be so easy, but also so confusing. There are different options that might help.

Be more verbose

There are different methods to see what is going on while a script runs.

You can force bash to print out every line that will be executed:

-v shows what bash gets

-x shows what bash makes of it

You can call

sh -vx to activate them

sh +vx to switch them off

Within a bash script you can activate it with the following script command:

set -vx

Instead of printing every line to the screen you can move it to a file:

sh -vx "nameofscript" 2>&1 | tee log.txt

Bash debugger

There is the bash debugger bashdb. Just emerge bashdb and start your script:

bashdb <bash script>

Type s to go step by step through your script.

Be aware: when you have enabled bashdb, some other bash scripts (such as those used by emerge) that previously ran without interruption might be stopped, showing the bashdb prompt. Just type s until they continue.

It has a man page, and some documentation is in /usr/share/doc

or type

bash /usr/bin/bashdb <scriptname>


bash --debugger <script> <script-arguments>

If this fails, the link from bash to bashdb might be missing, so check where bash expects the debugger:

strings /bin/bash | grep


and if nothing is there, check where emerge bashdb has put this file and create a link (you might have to create the missing directory first):

ln -s /usr/share/bashdb/ /usr/local/share/bashdb/

Or, to have a GUI, emerge ddd and call DDD as a front end:

ddd --bash <script>

This should start the debugger.

Figure 3.1. Bash debugger



There is a bashlogger USE flag to log all commands typed in. It should ONLY be used in restricted environments to learn what is going on (or on honeypots to log attacks).

Bash configuration

Default settings are in /etc/inputrc, which is overridden by ~/.inputrc when present.

See man readline to understand the contents of those two files.

The more bash oriented configuration is in /etc/profile; again, ~/.profile overrides it when present. /etc/profile is a script that looks into /etc/bash/bashrc and /etc/profile.d.

At login, Linux gets passed the default shell; in the case of bash, the bash configuration scripts are run at login.

Users have ~/.bashrc, where commands can be added that are run when a shell opens. ~/.bash_history remembers the commands typed in (the up/down arrows make use of it), ~/.bash_logout is run when a shell closes, and ~/.bash_profile is sourced by bash for login shells.

Working with Windows

To use a Windows computer to communicate with a Linux computer using Telnet or SSH, a program such as putty.exe is required. Telnet is simple but not considered safe enough, so usually SSH (Secure Shell) is used to log into the Linux computer and get a working console.

Required is:

  1. Name of the Linux computer where you have an account

  2. User name

  3. Password

Copying files between the Windows computer and the Linux computer needs an additional program on the Windows computer. The easiest way is using a program such as winscp and using SCP (Secure Copy) for that. Winscp resolves many more issues, since it is a file manager and allows deleting, creating and editing files and synchronizing directories between the two computers. The two programs putty.exe and WinSCP.exe run directly without the hassle of installing them under Windows, so no administrator rights are required on the Windows PC.
