Add New Notes
Commit 5ef7c20052 by geekard, 2012-08-08 14:26:04 +08:00
2374 changed files with 276187 additions and 0 deletions

Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-22T10:28:39+08:00
====== Bash and the process tree ======
Created Thursday 22 December 2011
http://wiki.bash-hackers.org/scripting/processtree
The processes in UNIX® are - unlike in other systems you may have seen - __organized in a tree__. Every process has a parent process that started it, or is responsible for it. Also, every process has its own __context memory__ (not the memory where the process stores its data; rather, memory where data is stored that doesn't directly belong to the process, but is needed to run it): __the environment__.
To make it really clear I want to repeat it: __Every process has its own environment space__.
The environment stores, besides other things, data that's useful for us: **the environment variables**. These are strings in the common NAME=VALUE form, but they are not related to shell variables. A variable named LANG, for example, is used by every program that looks it up in its environment to determine the current locale.
Attention: A variable that is set, like with MYVAR=Hello, is **not automatically** part of the environment. You need to put it into the environment with the __export utility__:
**export MYVAR**
Common system variables like PATH or HOME usually already are part of the environment (as set by login scripts or programs).
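A short sketch of the difference (MYVAR is just an illustrative name):

```shell
#!/usr/bin/env bash
# A plain shell variable is NOT passed to child processes.
MYVAR="Hello"
bash -c 'echo "child sees: $MYVAR"'   # the child sees an empty value

# After export, the variable is part of the environment and is inherited.
export MYVAR
bash -c 'echo "child sees: $MYVAR"'   # prints: child sees: Hello
```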
===== Executing programs =====
All the diagrams of the process tree use names like "xterm" or "bash", but that's just for you to understand what's going on, it doesn't mean it really runs processes with these names.
Let's take a short look what happens when you "execute a program" from the Bash prompt, a program like "ls":
$ ls
Bash will now perform two steps:
* It will make a **copy** of itself
* The copy will **replace** itself with the "ls" program
The copy of Bash will __inherit the environment__ from the "main Bash" process: All environment variables will also be copied to the new process. This step is called **forking**.
For a short moment, you have a process tree that might look like this...
xterm ----- bash ----- bash(copy)
...and after the "second Bash" (the copy) replaced itself by the ls-program (it execs it), it might look like
xterm ----- bash ----- ls
If everything was okay, the two steps resulted in one program being run. The copy of the environment from the first step (forking) results in the environment for the final running program (ls in this case).
What is so important about it? Well, in our example, __whatever the program ls will do inside its own environment, it can't have any effect to the environment of its parent process__ (bash here). The environment was copied when ls was executed. That's a one-way! Nothing will "copy it back" when ls terminates!
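This one-way copying is easy to demonstrate (GREETING is a made-up variable name):

```shell
#!/usr/bin/env bash
export GREETING="hello"

# The child gets a COPY of the environment; changing it there
# has no effect on the parent shell.
bash -c 'GREETING="changed"; export GREETING'

echo "$GREETING"   # still prints: hello
```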
===== Bash playing with pipes =====
Pipes are a very powerful tool. **You can connect the output and input streams of two separate programs, and thus create a new utility - or better: a new functionality**. Well, we're not here to explain piping; we just want to see how it looks in the process tree. Again, we execute some commands - ls and grep:
$ ls | grep myfile
It results in a tree like this:
                   +-- ls
xterm ----- bash --|
                   +-- grep
Just to be boring again: ls can't influence the environment of grep, grep can't influence the environment of ls, neither grep nor ls can influence the environment of bash.
===== How is that related to shell programming?!? =====
Well, imagine some Bash-code that reads data from a pipe. Let's take the internal command read, which reads data from stdin and puts it into a variable. We run it in a loop here - we count input lines...:
counter=0
cat /etc/passwd | while read; do __((counter++))__; done
echo "Lines: $counter"
#**Note:** the counter variable is incremented in a subshell, and the shell that ran the while loop never gets that change back; inside $(( )) and (( )), a variable that is __not defined__ defaults to 0 (or the null string)
What? __It's 0__? Yes! The number of lines might not be 0, but the variable $counter still is 0. Why? Remember the diagram from above? I'll rewrite it a bit:
                   +-- cat /etc/passwd
xterm ----- bash --|
                   +-- bash (while read; do ((counter++)); done)
See the relation? The forked Bash will count the lines like a charm. It will also set the variable counter like you wanted it. But if everything ends, **this extra process will be terminated - your variable is gone** - R.I.P. You see a 0 because in the main shell it always was 0 and never something else!
Aha! And now, how to count those lines? Easy: __Avoid the subshell__. How you do it in detail doesn't matter, the important thing is that the shell that sets the counter must be the "main shell". For example, do it like this:
counter=0
while read; do ((counter++)); done __</etc/passwd__
echo "Lines: $counter"
#The commands in the while block can be redirected __as a single unit__ (the same holds for __commands connected by pipes__: they, too, can be redirected as one unit)
It's nearly self-explaining. The while-loop runs in the current shell, the counter is increased in the current shell, everything vital happens in the current shell, also the read-command sets the variable REPLY (the default if nothing is given), though we don't use it here. This small script should work.
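Two more ways to get the count into the main shell; the second is bash-specific (the lastpipe option, bash >= 4.2, effective when job control is off, i.e. in scripts):

```shell
#!/usr/bin/env bash
# Variant 1: let wc do the counting and capture its output.
counter=$(wc -l < /etc/passwd)
echo "Lines: $counter"

# Variant 2 (bash >= 4.2): with lastpipe, the last command of a
# pipeline runs in the main shell, so the variable survives.
shopt -s lastpipe
counter=0
cat /etc/passwd | while read -r; do ((counter++)); done
echo "Lines: $counter"
```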
===== Actions that create a subshell =====
Bash creates subshells or subprocesses on various actions it performs:
=== • Executing commands ===
As shown above, Bash will create subprocesses every time it executes commands. That's nothing new.
But imagine your command actually is a script that sets variables you want to use in your main script. __This won't work__.
For exactly this purpose, there's the **source** command (also: the dot **.** command). It doesn't really actually execute the script like it would execute any other program - it's more like **including the other script's source code into the current shell**:
**source** ./myvariables.sh
# equivalent to:
. ./myvariables.sh
__shell在执行命令或脚本文件前会fork一个sub shell来运行它然后该sub shell再对脚本中的每一个命令fork shell来运行。__
........
|---bash(exec commandi)
xterm----->bash----->bash(脚本对应的main shell)----->bash(exec commandj)
|---bash(exec commandk)
.......
===== Pipes =====
The last big section was about pipes, so no example here...
===== Explicit subshell =====
If you __group commands by enclosing them in parentheses__, these commands are run inside a subshell:
(echo PASSWD follows; cat /etc/passwd; echo GROUP follows; cat /etc/group) >output.txt
By default, each command on a command line is executed by its own child process, e.g.:
# date; w; uptime
Here the three commands are run by three separate child processes. With
#(date; w; uptime)
the whole group is run by **one subshell**.
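A minimal sketch of the difference between ( ) (subshell) and { } (current shell):

```shell
#!/usr/bin/env bash
cd /tmp

( cd / )    # parentheses: a subshell -- its cd is invisible outside
pwd         # still /tmp

{ cd /; }   # braces: current shell -- the cd sticks
pwd         # now /
```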
=== Command substitution ===
With command substitution you **re-use the output of another command as text in your commandline**, for example to set a variable. This other command is run in a subshell:
number_of_users=$(cat /etc/passwd | wc -l)
Note that, in this example, you create a second subshell by using a pipe in the command substitution (just as sidenote):
                                            +-- cat /etc/passwd
xterm ----- bash ----- bash (cmd. subst.) --|
                                            +-- wc -l
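The subshell of a command substitution behaves the same way: assignments made inside it are lost when it exits. A small sketch:

```shell
#!/usr/bin/env bash
inner="original"

# The assignment inside $( ... ) happens only in the subshell.
result=$(inner="changed"; echo "computed")

echo "$result"   # computed
echo "$inner"    # original -- the subshell's assignment is gone
```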
FIXME to be continued

Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-25T22:23:39+08:00
====== Bash Shortcuts ======
Created Sunday 25 December 2011
Original post: http://linuxtoy.org/archives/bash-shortcuts.html
Editing commands
Ctrl + a: move to the beginning of the line
Ctrl + e: move to the end of the line
Ctrl + f: move forward one character
Ctrl + b: move backward one character
Alt + f: move forward one word
Alt + b: move backward one word
Ctrl + xx: toggle between the beginning of the line and the cursor position
Ctrl + u: delete from the cursor to the beginning of the line
Ctrl + k: delete from the cursor to the end of the line
Ctrl + w: delete from the cursor to the beginning of the word
Alt + d: delete from the cursor to the end of the word
Ctrl + d: delete the character under the cursor
Ctrl + h: delete the character before the cursor
__Ctrl + y: paste (yank) the last deleted text at the cursor__
Alt + c: capitalize the word starting at the cursor
Alt + u: uppercase the word starting at the cursor
Alt + l: lowercase the word starting at the cursor
Ctrl + t: swap the character under the cursor with the previous one
Alt + t: swap the word at the cursor with the previous one
Alt + Backspace: like Ctrl + w, but with slightly different word delimiters [thanks to rezilla for the correction]
__Re-running commands__
Ctrl + r: reverse-search the command history
Ctrl + g: leave history-search mode
Ctrl + p: previous command in history
Ctrl + n: next command in history
Alt + .: insert the last argument of the previous command
Control commands
Ctrl + l: clear the screen
__Ctrl + o: execute the current line and fetch the next command from history__
**Ctrl + s**: stop screen output
**Ctrl + q**: resume screen output
Ctrl + c: terminate the command
Ctrl + z: suspend the command
Bang (!) commands
!!: run the previous command
!blah: run the most recent command **starting with blah**, e.g. !ls
!blah:p: print that command without running it
!$: the __last argument__ of the previous command, same as Alt + .
!$:p: print what !$ would insert
__!*: all arguments of the previous command__
!*:p: print what !* would insert
^blah: remove blah from the previous command
__^blah^foo: replace the first blah in the previous command with foo__
^blah^foo^: same as above (use !!:gs/blah/foo/ to replace **all** occurrences of blah with foo)

Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-24T17:22:11+08:00
====== IFS: Word Splitting and Positional Parameters ======
Created Saturday 24 December 2011
http://wiki.bash-hackers.org/syntax/expansion/wordsplit
===== The default value of IFS =====
[geekard@geekard ~]$ set | grep IFS #The default IFS is space, TAB and newline; note the definition uses ANSI-C quoting (see the Quoting section of the bash reference manual).
IFS=__$' \t\n'__
These characters are also assumed when IFS is __unset__. When IFS is empty (nullstring), **no word splitting** is performed at all.
* When IFS is unset, bash splits words using the default value.
* When IFS is empty (e.g. IFS=), bash performs no word splitting on the expansion result, so the result is **one word** (even if it contains whitespace).
* If IFS is not the default and contains whitespace characters, the whitespace around each split-off word is removed; otherwise it is kept.
[geekard@geekard ~]$ a="a b:c d e"
[geekard@geekard ~]$ b=" a b:c d e "
[geekard@geekard ~]$ IFS=":"
[geekard@geekard ~]$ echo $b #word splitting happens, and the whitespace around the words is NOT removed
a b c d e
[geekard@geekard ~]$ echo $b |od -a
0000000 sp sp a sp b sp sp sp c sp sp d sp e sp sp
0000020 nl
0000021
[geekard@geekard ~]$
[geekard@geekard ~]$ IFS=": " #when IFS contains a space, the whitespace around each split-off word is removed
[geekard@geekard ~]$ echo $b |od -a #the whitespace around the words __is removed__
0000000 a sp b sp c sp d sp e nl
0000012
[geekard@geekard ~]$ b=" a b:c d e "
[geekard@geekard ~]$ IFS=":"
[geekard@geekard ~]$ echo "$b" #no word splitting, so __the expansion result is one word__ (containing spaces), as the leading spaces in echo's output show
a b:c d e
[geekard@geekard ~]$ cp "$b" #again shows that parameter expansion itself does not create new words; only subsequent word splitting can
cp: missing destination file operand after ` a b:c d'
Try `cp --help' for more information.
[geekard@geekard ~]$ cp $b #word splitting: $b becomes the two words " a b" and "c d e "; since IFS contains no whitespace, the surrounding spaces are kept, as cp's error output confirms
cp: cannot stat `__ a b__': No such file or directory #note the space before a
[geekard@geekard ~]$
Newlines in the result of a __command substitution__ end up as spaces (the trailing newline is stripped; the remaining ones act as IFS separators when the result is unquoted), although the expansion itself initially produces a single word.
If a filename in a directory contains spaces, the program that **prints the filename** emits it as __one word__, but other programs see several words. The __printed filenames are separated__ by spaces or newlines. Some programs, however, can __change the separator character__ to NUL or similar (e.g. find -print0 with xargs -0); as long as the next program in the pipeline recognizes that special separator, it can split the names correctly and thus handle such filenames.
===== Word splitting (happens only on unquoted expansion results) =====
__Only brace expansion, word splitting, and filename expansion can change the number of words of the expansion; __other expansions expand a single word to a single word. The only exceptions to this are the expansions of __"$@"__ (see Special Parameters) and "__${name[@]}__" (see Arrays).
Apart from brace expansion, word splitting and filename expansion, which can change the number of words, every other expansion expands a single word to a single word (the expanded word may contain spaces, but they are part of that one word's content; only subsequent word splitting can break the result into several words).
After performing the expansions below (three kinds), bash splits the __text generated by the expansions__ into words, using the characters in the IFS variable as separators. Word splitting applies __only to unquoted__ expansion results.
Word splitting occurs once any of the following expansions are done (and only then!)
* Parameter expansion
* Command substitution
* Arithmetic expansion
Bash will scan the results of these expansions for special** IFS characters** that mark word boundaries. This is only done on
__results that are not double-quoted__!
When a null-string (e.g., something that before expanded to »nothing«) is found, it is removed, unless it is quoted ('' or ""). A null string in the split result usually comes from consecutive IFS characters in the expansion: with IFS=":", the result a b:c::d splits with an empty field, which is removed; in a b:c:"":d the quoted empty string is kept.
Without any expansion beforehand, Bash won't perform word splitting! In this case, the initial token parsing is solely responsible. IFS-based word splitting __only applies to unquoted expansion results__; everywhere else, bash splits the command line into words using whitespace during token parsing.
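A small sketch of this distinction (printf makes the argument boundaries visible):

```shell
#!/usr/bin/env bash
IFS=":"

# Literal text on the command line: split by whitespace during token
# parsing; IFS plays no role, so a:b stays ONE argument.
printf '<%s>' a:b; echo     # <a:b>

# Unquoted expansion result: split on the current IFS.
v="a:b"
printf '<%s>' $v; echo      # <a><b>
```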
===== Examples =====
[geekard@geekard ~]$ sp="a b"
[geekard@geekard ~]$ echo $sp #equivalent to: echo a b; echo gets 2 arguments
a b
[geekard@geekard ~]$ echo "$sp" #equivalent to: echo "a b"; echo gets 1 argument
a b
[geekard@geekard ~]$ IFS=":"
[geekard@geekard ~]$ echo $sp #equivalent to: echo a b; 2 arguments; note the space between a and b __comes from the string itself__
a b
[geekard@geekard ~]$ echo "$sp" #equivalent to: echo "a b"
a b
[geekard@geekard ~]$ sp="a:b" #sp __actually stores a:b (bash removes the quotes)__
[geekard@geekard ~]$ echo $sp #bash first expands $sp to a:b; being unquoted, the result is then split on IFS, giving: a b
a b
[geekard@geekard ~]$ echo "$sp" #bash expands $sp to a:b; __being quoted, the result is not split on IFS__
a:b
[geekard@geekard ~]$
[geekard@geekard ~]$ sp="a 'b c' d" #sp actually stores a 'b c' d: the outer quotes are removed, the inner ones are kept
[geekard@geekard ~]$ echo $sp
a 'b c' d
[geekard@geekard ~]$
[geekard@geekard ~]$ set a "b c" d #the positional parameters are stored as: $1=a; $2=b c; $3=d
[geekard@geekard ~]$ echo $1,$2,$3
a,b c,d
[geekard@geekard ~]$ echo $* # $* expands to a, space, b space c, space, d; the spaces after a and before d are the whitespace the shell uses to separate words, unrelated to IFS. __Being unquoted__, the result is then __split again on IFS__. (Setting IFS to ":" below verifies this step.)
a b c d
[geekard@geekard ~]$ echo $@ # same as above
a b c d
[geekard@geekard ~]$ echo "$*" #$* expands as above, but being quoted the shell __does not split it on IFS__; the result is effectively __one string__ holding all positional parameters (verified below)
a b c d
[geekard@geekard ~]$ echo "$@" #bash treats "$@" specially: "$@" = "$1" "$2" "$3"; each parameter then expands as usual: "a" "b c" "d"; being quoted, the results are not split on IFS (verified below)
a b c d
[geekard@geekard ~]$
===== Verification =====
[geekard@geekard ~]$ set a b:c d
[geekard@geekard ~]$ IFS=":"
[geekard@geekard ~]$ echo **"$2"** #bash __does not__ further split a quoted parameter expansion on IFS
b:c
[geekard@geekard ~]$ echo $1,**$2**,$3 #after an unquoted parameter expansion (quoted __$* and $@ are special cases__), bash splits the result on IFS; the resulting words are separated by whitespace (usually spaces) and are unquoted
a,__b c__,d
[geekard@geekard ~]$ echo $* #same principle: bash splits the expansion a b:c d further on IFS, and the resulting words are separated by spaces
a b c d
[geekard@geekard ~]$ for var in $*; do echo $var;done #confirms the result is separate, unquoted words: the loop runs 4 times
a
b
c
d
[geekard@geekard ~]$ echo $@ #same principle and explanation as above
a b c d
[geekard@geekard ~]$ for var in $@; do echo $var;done #same explanation as above
a
b
c
d
[geekard@geekard ~]$ echo "$*" #quoted "$*" is treated specially: each positional parameter expands normally, and the results are then joined into __one new word, using the first character of IFS__ as the separator
a:b:c:d
[geekard@geekard ~]$ for var in "$*"; do echo $var;done #the loop runs once, so __"$*" is one word__; when echo $var runs, $var is expanded and split on IFS, and the resulting separate words are printed space-separated, hence no colons in the output
a b c d
[geekard@geekard ~]$ echo "$@" #quoted "$@" is also special: __each positional parameter is expanded as if individually quoted__, so bash does not split the results further on IFS. "$@" yields: "a" "b:c" "d"
a b:c d
[geekard@geekard ~]$
[geekard@geekard ~]$ for var in "$@"; do echo $var;done #proves "$@" really is three words; on the second iteration the unquoted expansion of $var, b:c, is split on IFS into b c
a
b c
d
[geekard@geekard ~]$ for var in "$@"; do echo "$var" ;done #with quotes the expansion is not split on IFS, so b:c is printed, further confirming the above
a
b:c
d
[geekard@geekard ~]$
===== How Bash parses a command line =====
1. **Token parsing**: split the line into tokens separated by a fixed set of metacharacters: SPACE, TAB, NEWLINE, __;__, (, ), <, >, __|__, __&__. Token types include words, keywords, I/O redirection operators and semicolons.
2. Check the first token of each command for an unquoted, unescaped __keyword__. If it is an opening keyword (if and the other control-structure openers, function, {, or (), the command is a __compound command__; the shell handles it internally, reads the next command, and repeats the process. If the keyword is not a __compound-command opener__ (e.g. then or another keyword that belongs in the middle of a structure), a syntax error is raised.
3. Check the first word of each command against the __alias list__. On a match, substitute the alias definition and go back to step 1; otherwise continue with step 4. This scheme allows recursive aliases, and also aliases for keywords, e.g. alias procedure=function.
4. Perform **brace expansion**, e.g. a{b,c} becomes ab ac.
5. Replace a leading ~ with $HOME, and __~user__ with user's home directory.
6. **Parameter (variable) expansion**: expand any expression starting with $; note that the **braced form has several variants**:
${foo:-bar} ${foo:=bar} ${foo:?bar} ${foo:+bar}
7. **Command substitution**: expand expressions of the form $(string); this is __nested command-line processing__.
8. **Arithmetic expansion**: evaluate expressions of the form $((string)).
9. __Split the results of parameter, command and arithmetic substitution into words again__, this time using the characters in __$IFS__ as separators instead of the **metacharacter set** of step 1.
10. **Pathname (wildcard) expansion**: expand patterns containing *, ?, [ ].
11. Look the command up following the precedence order (aliases were already handled in step 3):
**function name ---> alias ---> builtin ---> external command**
12. Set up I/O redirections and the rest, then execute the command.
Summary: bash/ksh run a command through: command parsing, variable evaluation, command substitution (`` and $( )), redirection, wildcard expansion, path lookup, and execution.
On quoting:
1. Single quotes __skip the first 10 steps__; you cannot put a single quote inside single quotes.
2. Double quotes skip steps 1-5 and 9-10; only steps 6-8 are performed.
That is, double quotes disable pipe characters, aliases, tilde substitution, wildcard expansion, and splitting into words at delimiters.
Single quotes inside double quotes have no special effect, but double quotes allow parameter expansion, command substitution and arithmetic evaluation. A double quote can appear inside double quotes when escaped with "\"; you also __must escape $, `, and \__.
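A short sketch of these quoting rules:

```shell
#!/usr/bin/env bash
name=world

# Single quotes: everything is literal, no expansion at all.
echo 'single: $name'                    # single: $name

# Double quotes: parameter/command/arithmetic expansion still happen;
# escape $, ` and \ to get them literally.
echo "double: $name, literal: \$name"   # double: world, literal: $name
```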

Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-25T16:58:27+08:00
====== Process Substitution ======
Created Sunday 25 December 2011
http://www.linuxjournal.com/content/shell-process-redirection
In addition to the fairly common forms of input/output redirection the shell recognizes something called process substitution. Although not documented as a form of input/output redirection, its syntax and its effects are similar.
The syntax for process substitution is:
<(list)
or
>(list)
where each list is a command or a pipeline of commands. The effect of process substitution is to __make each list act like a file__. This is done by **giving the list a name** in the file system and then **substituting that name in the command line**. The list is given a name either by connecting the list to a named **pipe** or by using a file in **/dev/fd** (if supported by the O/S). By doing this, the command simply sees a file name and is unaware that it's reading from or writing to a command pipeline.
To substitute a command pipeline for an **input file** the syntax is:
command ... <(list) ...
To substitute a command pipeline for an **output file** the syntax is:
command ... >(list) ...
At first process substitution may seem rather pointless, for example you might imagine something simple like:
uniq <(sort a)
to sort a file and then find the unique lines in it, but this is more commonly (and more conveniently) written as:
sort a | uniq
The power of process substitution comes when you have __multiple command pipelines that you want to connect to a single command__.
For example, given the two files:
# cat a
e
d
c
b
a
# cat b
g
f
e
d
c
b
To view the lines **unique to each of these** two unsorted files you might do something like this:
# sort a | uniq >tmp1
# sort b | uniq >tmp2
# comm -3 tmp1 tmp2
a
f
g
# rm tmp1 tmp2
With process substitution we can do all this with one line:
# __comm -3 <(sort a | uniq) <(sort b | uniq)__
a
f
g
Depending on your shell settings you may get an error message similar to:
syntax error near unexpected token `('
when you try to use process substitution, particularly if you try to use it within a shell script. Process substitution is not a POSIX compliant feature and so it may have to be enabled via:
set +o posix
Be careful not to try something like:
if [[ $use_process_substitution -eq 1 ]]; then
set +o posix
comm -3 <(sort a | uniq) <(sort b | uniq)
fi
The command set +o posix enables not only the execution of process substitution but also the recognition of its syntax. In the example above, the shell parses the whole if block - including the process substitution - before the "set" command is executed, and therefore still rejects the process substitution syntax as illegal.
Of course, not all shells support process substitution; these examples work with bash.
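The >(list) form works symmetrically for output. A small sketch using tee (upper.txt is just an illustrative file name):

```shell
#!/usr/bin/env bash
# >(list) substitutes a writable "file" connected to list's stdin.
# tee duplicates its input: one copy goes (uppercased by tr) to
# upper.txt, the other to stdout (discarded here).
echo "hello" | tee >(tr a-z A-Z > upper.txt) > /dev/null
sleep 1            # tr runs asynchronously; give it a moment to finish
cat upper.txt      # HELLO
rm -f upper.txt
```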

Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-24T14:39:42+08:00
====== Scripting with style ======
Created Saturday 24 December 2011
http://wiki.bash-hackers.org/scripting/style
These are some __coding guidelines__ that helped me to read and understand my own code over the years. They will also help to produce code that is a bit __more robust__ than "if something breaks, I know how to fix it".
This is not a bible, of course. But I have seen so much ugly and terrible code (not only in shell) during all the years, that I'm 100% convinced there needs to be some code optics and style. No matter which one you use, use it through all your code (at least don't change it within the same shellscript file). Don't change your code optics with your mood.
Some good code optics help you to read your own code after a while. And of course it helps others to read the code.
===== Indentation guidelines =====
**Indentation is nothing that technically influences a script, it's only for us humans.**
I'm used to using an indentation of __two space characters__ (though many may prefer 4 spaces, see below in the discussion section):
* it's easy and fast to type
* it's not a **hard-tab** that's displayed differently in different environments
* it's **wide enough** to give an optical break and **small enough** not to waste too much space on the line
Speaking of hard-tabs: Avoid them if possible. They only make trouble. I can imagine one case where they're useful: indenting here-documents.
===== Breaking up lines =====
Whenever you need to break lines of long code, you should follow one of these two ways:
* Indentation using command width:
activate some_very_long_option \
         some_other_option
* Indentation using two spaces:
activate some_very_long_option \
  some_other_option
Personally, with some exceptions, I prefer __the first form__ because the visual impression of "this belongs together" is better.
===== Breaking compound commands =====
Compound commands form the structures that make a shell script different from a stupid enumeration of commands. Usually they contain **a kind of "head" and a "body" that contains command lists**. This kind of compound command is relatively easy to indent.
I'm used to (not all points apply to all compound commands, just pick the basic idea):
* put the introducing keyword and the initial command list or parameters on one line ("head")
* put the "body-introducing" keyword on the same line
* the command list of the "body" on separate lines, indented by two spaces
* put the closing keyword on a separated line, indented like the initial introducing keyword
What?! Well, here again:
HEAD_KEYWORD parameters; BODY_BEGIN
BODY_COMMANDS
BODY_END
=== if/then/elif/else ===
This construct is a bit special, because it has keywords (elif, else) "in the middle". The nice-looking way is to indent them at the level of the if:
if ...__; then__
...
elif ...; then
...
else
...
fi
=== for ===
for f in /etc/*__; do__
...
done
=== while/until ===
while [[ $answer != [YyNn] ]]__; do__
...
done
=== The case construct ===
The case construct might need a bit more discussion here, since the structure is a bit more complex.
In general it's the same: Every new "layer" gets a new indention level:
case $input in
  hello)
    echo "You said hello"
    ;;
  bye)
    echo "You said bye"
    if foo; then
      bar
    fi
    ;;
  *)
    echo "You said something weird..."
    ;;
esac
Some notes:
* if not 100% needed, the optional left parenthesis on the pattern is not written
* the patterns (hello)) and the corresponding __action terminator (;;) __are__ indented at the same level__
* the action command lists are indented one more level (and continue to have their own indention, if needed)
* though optional, the very last action terminator is given
===== Syntax and coding guidelines =====
===== Cryptic constructs =====
Cryptic constructs, we all know them, we all love them. If they are not 100% needed, __avoid them__, since nobody except you may be able to decipher them.
It's - just like in C - a trade-off between smartness, efficiency and readability.
If you need to use a cryptic construct, place a small comment that actually tells what your monster is for.
===== Variable names =====
Since all __reserved variables are UPPERCASE__, the safest way is to __only use lowercase variable names__. This is true for reading user input, loop counting variables, etc., ... (in the example: file)
* prefer lowercase variables
* if you use UPPERCASE names, do not use reserved variable names (see SUS for an incomplete list)
* if you use UPPERCASE names, at best prepend the name with a __unique prefix__ (MY_ in the example below)
#!/bin/bash
# the prefix 'MY_'
MY_LOG_DIRECTORY=/var/adm/
for file in __"$MY_LOG_DIRECTORY"/*__; do
  echo "Found Logfile: $file"
done
===== Variable initialization =====
As in C, it's always a good idea to initialize your variables, though, the shell will initialize fresh variables itself (better: **Unset variables will generally behave like variables containing a **__nullstring__).
It's no problem to pass a variable you use as environment to the script. If you blindly assume that all variables you use are empty for the first time, somebody can inject a variable content by just passing it in the environment.
The solution is simple and effective: Initialize them
my_input=""
my_array=()
my_number=0
If you do that for every variable you use, then you also have a kind of documentation for them.
Note: __the value of a variable assigned locally overrides the value inherited from the environment__.
===== Parameter expansion =====
Unless you are really sure what you're doing, __quote every parameter expansion__.
There are some cases where this isn't needed from a technical point of view, e.g.
* inside [[ ... ]]
* the parameter (WORD) in **case $WORD in **....
* variable assignment: VAR=$WORD
But quoting these is never a mistake. If you get used to quote every parameter expansion, you're safe.
If you need to parse a parameter as a list of words, you can't quote, of course, like
list="one two three"
# you MUST NOT quote $list here
for word in $list; do
...
done
===== Function names =====
Function names should be all lowercase and have a good name. The function names should be human readable ones. A function named f1 may be easy and quick to write down, but for debugging and especially for other people, it will tell nothing. Good names help to document the code without using extra comments.
A more or less funny one: If not intended to do so, do not name your functions like common commands, typically new users tend to name their scripts or functions test, which collides with the UNIX test command!
Unless absolutely necessary, only use alphanumeric characters and the underscore for function names. /bin/ls is a valid function name in Bash, but it only makes limited sense.
===== Command substitution =====
As noted in the article about command substitution you should use the $( ... ) form.
Though, if portability is a concern, you might have to use the backquoted form ` ... `.
In any case, if other expansions and word splitting are not wanted, you should quote the command substitution!
===== Eval =====
Well, like Greg says: "If eval is the answer, surely you are asking the wrong question."
Avoid it, unless absolutely necessary:
* eval can be your neckshot
* there are most likely other ways to achieve what you want
* if possible, re-think the way your script works, if it seems you can't avoid eval with your current way
* if you really really have to use it, then take care, and know what you do (if you know what you do, then eval is not evil at all)
===== Basic structure =====
The basic structure of a script simply reads:
#!SHEBANG
CONFIGURATION_VARIABLES
FUNCTION_DEFINITIONS
MAIN_CODE
=== The shebang ===
If possible (I know it's not always possible!), use a shebang.
Be careful with /bin/sh: The argument that "on Linux /bin/sh is Bash" is a lie (and technically irrelevant).
The shebang serves two purposes for me:
* it specifies the interpreter when the script file is called directly: If you code for Bash, specify bash!
* it documents the desired interpreter (so: use bash when you write a Bash script, use sh when you write a general Bourne/POSIX script, ...)
=== Configuration variables ===
I call variables that are meant to be changed by the user "configuration variables" here.
Make them easy to find (directly at the top of the script), give them useful names and maybe a short comment. As noted above, use UPPERCASE for them only when you are sure what you're doing. lowercase will be the safest.
=== Function definitions ===
Unless the code has a reason not to, all needed function definitions should appear before the main script code runs. This gives a far better overview and ensures that all function names are known before they are used.
Since a function isn't executed until it is called, you usually don't even have to ensure a specific order.
The portable form of the function definition should be used, without the function keyword (here using the grouping compound command):
getargs() {
...
}
Speaking about the command grouping in function definitions using { ...; }: If you don't have a good reason to use another compound command directly, you should always use this one.
===== Behaviour and robustness =====
=== Fail early ===
Fail early, this sounds bad, but usually is good. Failing early means to error out as early as possible when checks indicate some error or unmet condition. Failing early means to error out before your script begins its work in a potentially broken state.
=== Availability of commands ===
If you use commands that might not be installed on the system, check for their availability and tell the user what's missing.
Example:
my_needed_commands="sed awk lsof who"
missing_counter=0
for needed_command in $my_needed_commands; do
  if ! hash "$needed_command" >/dev/null 2>&1; then
    printf "Command not found in PATH: %s\n" "$needed_command" >&2
    ((missing_counter++))
  fi
done
if ((missing_counter > 0)); then
  printf "Minimum %d commands are missing in PATH, aborting\n" "$missing_counter" >&2
  exit 1
fi
=== Exit meaningfully ===
The exit code is your only way to directly communicate with the calling process without any special things to do.
If your script exits, provide a meaningful exit code. That minimally means:
* exit 0 (zero) if everything is okay
* exit 1 - in general non-zero - if there was an error
This, and only this, will enable the calling component to check the operation status of your script.
You know: "One of the main causes of the fall of the Roman Empire was that, lacking zero, they had no way to indicate successful termination of their C programs." Robert Firth
===== Misc =====
=== Output and optics ===
* if the script is interactive, if it works for you and if you think this is a nice feature, you can try to save the terminal content and restore it after execution
* output clean and understandable messages to the screen
* if applicable, you can use colors or specific prefixes to tag error and warning messages
  * this makes it easier for the user to identify those messages
* write normal output to STDOUT and error, warning and diagnostic messages to STDERR
  * this makes filtering possible
  * this keeps the script from poisoning the real output data with diagnostic messages
* if the script gives syntax help (-? or -h or --help arguments), it should go to STDOUT, since it's expected output in that moment
* if applicable, write a logfile that contains all the details
  * it doesn't clutter the screen then
  * the messages are saved for later and don't get lost (diagnostics)
=== Input ===
* never blindly assume anything. If you want the user to input a number, check the input for being a number, check for leading zeros, etc. As we all know, users are users and not programmers. They will do what they want, not what the program wants. If you have specific format or content needs, always check the input!
===== Other coding style guidelines =====
http://www.opensolaris.org/os/project/shell/shellstyle/

Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-25T13:52:34+08:00
====== The coproc keyword ======
Created Sunday 25 December 2011
http://wiki.bash-hackers.org/syntax/keywords/coproc
===== Synopsis =====
coproc [NAME] command [redirections]
===== Description =====
Bash 4.0 introduced coprocesses, a feature certainly familiar to __ksh__ users.
coproc starts __a command__ in the background, __setting up pipes__ so that you can interact with it. Optionally, the co-process can have a name NAME.
If NAME is given, the following command must be a __compound command__. If no NAME is given, the command can be a simple command or a compound command.
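A minimal session, using cat as the co-process (names are illustrative):

```shell
#!/usr/bin/env bash
# Minimal co-process: cat simply echoes back every line we feed it.
coproc CATP { cat; }

echo "hello coproc" >&"${CATP[1]}"   # write to the co-process's stdin
read -r reply <&"${CATP[0]}"         # read one line from its stdout
echo "$reply"                        # hello coproc

fd=${CATP[1]}
exec {fd}>&-                         # close its stdin so cat can exit
wait "$CATP_PID"
```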
===== Redirections (they apply to the command inside the coproc) =====
The redirections are normal redirections that are set __after the pipe has been set up__, some examples:
# redirecting stderr in the pipe
$ coproc { ls thisfiledoesntexist; read ;} 2>&1 #fd 1 here is the coproc's stdout; the parent reads it via COPROC[0]
__#Stderr of { ls thisfiledoesntexist; read ;}, which runs in a subshell, is redirected to that subshell's stdout (not the current shell's stdout). Since the redirection is applied after the pipe has been set up, and the coproc's stdin/stdout are connected to the invoking shell, both the error messages and the normal output become readable through the pipe.__
[2] 23084
$ read -u ${__COPROC[0]__};printf "%s\n" "$REPLY"
ls: cannot access thisfiledoesntexist: No such file or directory #the coproc's error message arrives through its stdout fd
#COPROC is an array of file descriptors in the __current shell__, connected to the coproc's stdin/stdout: **COPROC[0] is connected to the coproc's stdout, COPROC[1] to its stdin**.
#let the output of the coprocess go to stdout
$ __{ coproc mycoproc { awk '{print "foo" $0;fflush()}' ;} >&3 ;} 3>&1__
The { ... } group command runs **in the current shell**, so 3>&1 connects fd 3 used inside the group to fd 1 of the __current shell__.
The >&3 inside the group is applied after the pipe between the coproc and the current shell has been set up, so redirecting the coproc's stdout to fd 3 effectively connects it to the current shell's stdout.
[2] 23092
$ echo bar >&${mycoproc[1]}
$ foobar
Here we need to **save the previous file descriptor of stdout**, because by the time we want to redirect the fds of the coprocess stdout has been redirected to the pipe.
Note that the following does NOT work:
[geekard@geekard ~]$ { coproc mycoproc { awk '{print "foo" $0; fflush()}'; } 3>&1 >&3; } #this only shuffles the coprocess's own fds, __it does not involve the outer shell__
[geekard@geekard ~]$ echo ddd >&${mycoproc[1]}
[geekard@geekard ~]$ jobs
[1]+ Running coproc mycoproc { awk '{print "foo" $0; fflush()}'; } 3>&1 1>&3 &
[geekard@geekard ~]$ __lsof -p 31330__
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 31330 geekard cwd DIR 8,7 4096 2490369 /home/geekard
bash 31330 geekard rtd DIR 8,2 4096 2 /
bash 31330 geekard txt REG 8,2 705652 15 /bin/bash
bash 31330 geekard mem REG 8,2 47044 411422 /lib/libnss_files-2.15.so
bash 31330 geekard mem REG 8,2 7015792 303590 /usr/lib/locale/locale-archive
bash 31330 geekard mem REG 8,2 1949865 411409 /lib/libc-2.15.so
bash 31330 geekard mem REG 8,2 13952 391721 /lib/libdl-2.15.so
bash 31330 geekard mem REG 8,2 351704 264729 /usr/lib/libncursesw.so.5.9
bash 31330 geekard mem REG 8,2 281388 391738 /lib/libreadline.so.6.2
bash 31330 geekard mem REG 8,2 151125 411426 /lib/ld-2.15.so
bash 31330 geekard ** 0r ** FIFO 0,8 0t0 199595 pipe
bash 31330 geekard **1w ** FIFO 0,8 0t0 199594 pipe
bash 31330 geekard 2u CHR 136,6 0t0 9 /dev/pts/6
bash 31330 geekard ** 3w ** FIFO 0,8 0t0 199594 pipe
bash 31330 geekard **10w** FIFO 0,8 0t0 199594 pipe
bash 31330 geekard 255u CHR 136,6 0t0 9 /dev/pts/6
[geekard@geekard ~]$ **echo** </proc/31330/fd/3 __#wrong__: echo only prints its command-line arguments, it __does not read from its input redirection__.
[geekard@geekard ~]$** cat **!$ #cat is **line-buffered**, reading one line at a time until it sees EOF.
cat /proc/31330/fd/3
__fooddd__
^C
[geekard@geekard ~]$ echo "ddd2" >&${mycoproc[1]}
[geekard@geekard ~]$ __read line__ <&${mycoproc[0]} #read returns after reading a single line.
[geekard@geekard ~]$ echo $line
fooddd2
[geekard@geekard ~]$
This actually redirects **the coprocess's file descriptor 3** to mycoproc[0]; the coprocess's stdout itself is unchanged.
===== Pitfalls =====
Avoid the __command | while read__ subshell
The traditional KSH workaround to avoid the subshell when doing command | while read is to use a coprocess; unfortunately, bash's behaviour differs from KSH's.
In KSH you would do:
ls |& #start a coprocess
while read -p file;do echo "$file";done #read its output
In bash:
#DOESN'T WORK
$ coproc ls
[1] 23232
$ while read __-u__ ${COPROC[0]} line;do echo "$line";done
bash: read: line: invalid file descriptor specification
[1]+ Done coproc COPROC ls
By the time we start reading from the output of the coprocess, the file descriptor has been closed.
===== Buffering =====
In the first example, we used fflush() in the awk command. This was done on purpose: as always when you use __pipes, the I/O operations are buffered__. Let's see what happens with sed:
$ coproc sed s/^/foo/
[1] 22981
$ echo bar >&${COPROC[1]}
$ read __-t 3__ -u ${COPROC[0]}; (( **$? >127** )) && echo "nothing read"
nothing read
Even though this example is the same as the first awk example, the read returns nothing, simply because the output is still waiting in a buffer.
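A common workaround, sketched here on the assumption that GNU coreutils' stdbuf is available, is to force the coprocess's stdout into line-buffered mode so every line is flushed as soon as it is written:

```shell
#!/bin/bash
# stdbuf -oL makes sed line-buffer its stdout even though it
# writes into a pipe, so the read below succeeds immediately.
coproc stdbuf -oL sed s/^/foo/
echo bar >&"${COPROC[1]}"
read -t 3 -u "${COPROC[0]}" && printf '%s\n' "$REPLY"   # foobar
kill "$COPROC_PID" 2>/dev/null
```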
===== background processes =====
The file descriptors of the coprocess are __available to the shell where you run coproc__, but they are __not inherited__. Here is a not-so-meaningful illustration: suppose we want something that continually reads the output of our coprocess and echoes the result:
#NOT WORKING
$ coproc awk '{print "foo" $0;fflush()}'
[2] 23100
$ while read -u ${COPROC[0]};do echo "$REPLY";done __&__
[3] 23104
$ ./bash: line 243: read: 61: invalid file descriptor: Bad file descriptor
It fails because __the descriptor is not available in the subshell created by &__. The invoking shell's coprocess file descriptors are not inherited by subsequently created subshells, whereas other file descriptors opened in the invoking shell are normally inherited.
A possible workaround:
#WARNING: for illustration purpose ONLY
# this is not the way to make the coprocess print its output
# to stdout, see the redirections above.
$ coproc awk '{print "foo" $0;fflush()}'
[2] 23109
$__ exec 3<&${COPROC[0]}__
$ while read -u 3;do echo "$REPLY";done &
[3] 23110
$ echo bar >&${COPROC[1]}
$ foobar
Here the** fd 3 is inherited**.
===== Only one coprocess at a time =====
The title says it all, complain to the bug-bash mailing list if you want more.
===== Examples =====
=== Anonymous Coprocess ===
First let's see an example without NAME:
$ coproc awk '{print "foo" $0;fflush()}'
[1] 22978
The command starts in the background and coproc returns immediately. Two new file descriptors are now available via the __COPROC array__, and we can send data to our command:
$ echo bar >&${COPROC[1]}
And then read its output:
$ read -u ${COPROC[0]};printf "%s\n" "$REPLY"
foobar
When we don't need our command anymore, we can kill it via its pid:
$ kill __$COPROC_PID__
$
[1]+ Terminated coproc COPROC awk '{print "foo" $0;fflush()}'
=== Named Coprocess ===
Using a named coprocess is as simple, we just need a compound command like when defining a function:
$ coproc mycoproc __{ awk '{print "foo" $0;fflush()}' ;}__
[1] 23058
$ echo bar >&${mycoproc[1]}
$ read -u ${mycoproc[0]};printf "%s\n" "$REPLY"
foobar
$ kill $mycoproc_PID
$
[1]+ Terminated coproc mycoproc { awk '{print "foo" $0;fflush()}'; }
=== Redirecting the output of a script to a file and to the screen ===
#!/bin/bash
# we start tee in the background
# redirecting its output to the stdout of the script
{ coproc tee { tee logfile ;} __>&3 __;} __3>&1__ #the invoking shell opens fd 3 for writing (duplicated from its fd 1); fd __3 is then inherited by the child process__.
# we redirect stdout and stderr of the script to our coprocess
__exec >&${tee[1]} 2>&1__
When duplicating a file descriptor, the descriptor being duplicated must already be open for reading or writing; so the inner fd 3 actually inherits the descriptor opened by the outer shell.
The operator
[n]>&word
is used similarly to duplicate output file descriptors. If n is not specified, the standard output (file descriptor 1) is used. If the digits in word do not specify __a file descriptor open for output__, a redirection error occurs.
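A quick sketch of both sides of that rule: duplicating from a descriptor that was never opened is a redirection error, while duplicating from an open one works:

```shell
#!/bin/bash
# fd 5 has not been opened, so >&5 fails with a redirection error:
if ! { echo hi >&5; } 2>/dev/null; then
    echo "fd 5 is not open for output"
fi
exec 5>&1          # now fd 5 duplicates stdout...
echo hi >&5        # ...and the same write succeeds
exec 5>&-
```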
===== Portability considerations =====
* the coproc keyword is not specified by POSIX(R)
* other shells might have different ways to solve the coprocess problem
* the coproc keyword appeared in Bash version 4.0-alpha
===== See also =====
Anthony Thyssen's Coprocess Hints - excellent summary of everything around the topic
http://www.ict.griffith.edu.au/anthony/info/shell/co-processes.hints
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T19:49:25+08:00
====== beep (terminal bell) ======
Created Sunday 26 February 2012
You can use the command:
# echo -ne '\a'
But the output of this command must not be redirected, otherwise no bell will sound.
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-23T17:16:52+08:00
====== dialog summary ======
Created Friday 23 December 2011
http://molinux.blog.51cto.com/2536040/466001
Contents:
1. Command description
2. Command synopsis
3. Box types
4. Common options
5. Command usage
6. Command examples
Command description:
The dialog tool on Linux creates dialog boxes in text mode and can be used together with shell scripts.
Each dialog box delivers its output in two forms:
1. All output goes to stderr and is not displayed on the screen.
2. The exit status code: 0 for "OK", 1 for "NO".
Command synopsis:
dialog --clear
dialog --create-rc file
dialog --print-maxsize
dialog common-options box-options
Box types:
calendar  a calendar
checklist  displays a list of entries, each of which can be selected individually (checkboxes)
form  a form: build labeled text fields and ask that they be filled in
fselect  provides a path for browsing and selecting a file
gauge  displays a meter showing the percentage completed, i.e. a progress bar
infobox  displays a message and returns immediately without waiting for a response, without clearing the screen (info box)
inputbox  lets the user enter text (input box)
inputmenu  provides a menu that the user can edit (editable menu box)
menu  displays a list for the user to choose from (menu box)
msgbox (message)  displays a message and asks the user to press an OK button (message box)
password  (password box) displays an input box that hides the text
pause  displays a meter showing the status of a specified pause period
radiolist  provides a group of menu entries of which only one can be selected (radio list)
tailbox  displays text from a file in a scrolling window, as with the tail command
tailboxbg  like tailbox, but operates in background mode
textbox  displays the contents of a file in a text box with a scrollbar (text box)
timebox  provides a window for selecting hour, minute and second
yesno (yes/no)  provides a simple message box with yes and no buttons
Common options
These options set things like the dialog box's background color and title.
Frequently used options:
[--title <title>]  specifies the title string displayed at the top of the dialog box
[--colors]  interpret embedded "\Z" special text sequences in the dialog; the sequences are formed by the characters 0-7, b, B, u, U, etc.; restore normal settings with "\Zn"
[--no-shadow]  suppress the shadow at the bottom of each dialog box
[--shadow]  make the shadow effect appear
[--insecure]  when entering a password into an input widget, echoing it in plain text is insecure; display an asterisk for each character instead
[--no-cancel]  do not show the "cancel" item in input boxes, menus and checklists
[--clear]  clear the screen: erase the box after it has been displayed. This option can only be used alone, not combined with other options.
[--ok-label <str>]  override the label of the "OK" button with another string
[--cancel-label <str>]  same as above, for the Cancel button
[--backtitle <backtitle>]  display the given backtitle string at the top of the background
[--begin <y> <x>]  specify the coordinates of the dialog box's upper-left corner on the screen
[--timeout <secs>]  time out (returning an error code) if the user gives no response within the given number of seconds
[--defaultno]  make the default answer of a yes/no box "no"
[--sleep <secs>]
[--stderr]  send output to standard error
[--stdout]  send output to standard output
[--default-item <str>]  set the default entry in a checklist, form or menu; normally the first entry in the box is the default
The remaining options can be looked up in the man page:
[--aspect <ratio>] [--backtitle <backtitle>] [--begin <y> <x>] [--cr-wrap] [--item-help] [--no-collapse]
[--default-item <str>] [--defaultno] [--extra-button] [--extra-label <str>] [--help-button] [--no-kill]
[--help-label <str>] [--help-status] [--ignore] [--input-fd <fd>] [--keep-window] [--max-input <n>]
[--output-fd <fd>] [--print-maxsize] [--print-size] [--print-version] [--separate-output] [--size-err]
[--separate-widget <str>] [--single-quoted] [--sleep <secs>] [--tab-correct] [--tab-len <n>]
[--timeout <secs>] [--trim] [--visit-items] [--version]
Notes:
--cr-wrap
Interpret line breaks in the dialog text as newlines. Otherwise, dialog only wraps the text to fit the box: even if you deliberately break a line, dialog readjusts all the text inside the box to match its width. Without cr-wrap, the text layout follows the original formatting of the script.
--colors
Interpret the "\Z" sequence attributes embedded in the dialog. They tell dialog to set colors or video attributes: 0 through 7 are the ANSI codes used in curses: black, red, green, yellow, blue, magenta, cyan and white. Bold is set by b and reset by B; reverse is set by r and reset by R; underline is set by u and reset by U. The changes are cumulative; for example, "\Zb\Z1" displays the text in bold red. Restore normal settings with "\Zn".
--input-fd fd
Read keyboard input from the given file descriptor. Most dialog scripts read from standard input, but the gauge widget reads from a pipe (which is usually standard input). Some configurations do not work properly when dialog tries to reopen the terminal; use this option if your script must work in that type of environment.
--output-fd fd
Direct output to the given file descriptor. Most dialog scripts write to standard error, but error messages may also be written there, depending on your script.
--insecure
Echo asterisks (*) when entering a password, which makes the passwd widget friendlier but less secure.
--keep-window
Do not clear or repaint the screen on exit. Useful for keeping window contents when several widgets run in the same program. Note that curses clears the screen when it starts a new process.
--max-input size
Limit input strings to the given size. The default is 2048 if not specified.
--separate-output
For the checklist widget, output the result one line at a time with no quoting. This makes it easy for other programs to parse the output.
--separator string
--separate-widget string
Specify a string that separates the output of the different widgets in a dialog. This can be used to simplify parsing the results of a dialog with several widgets. If this option is not given, the default separator is a tab character.
--sleep secs
Sleep (delay) for the given number of seconds after processing a dialog box.
Command usage (box options):
--calendar <text> <height> <width> <day> <month> <year>
--checklist <text> <height> <width> <list height> <tag1> <item1> <status1>...
--form <text> <height> <width> <form height> <label1> <l_y1> <l_x1> <item1> <i_y1> <i_x1> <flen1> <ilen1>...
--fselect <filepath> <height> <width> //file selection
--gauge <text> <height> <width> [<percent>]
--infobox <text> <height> <width>
--inputbox <text> <height> <width> [<init>]
--inputmenu <text> <height> <width> <menu height> <tag1> <item1>...
--menu <text> <height> <width> <menu height> <tag1> <item1>...
--msgbox <text> <height> <width>
--passwordbox <text> <height> <width> [<init>]
--pause <text> <height> <width> <seconds>
--radiolist <text> <height> <width> <list height> <tag1> <item1><status1>...
--tailbox <file> <height> <width>
--tailboxbg <file> <height> <width>
--textbox <file> <height> <width>
--timebox <text> <height> <width> <hour> <minute> <second>
--yesno <text> <height> <width>
Usage notes:
One or more dialog boxes can be placed in a single script:
- Use and-widget to force dialog to proceed to the next dialog until the ESC key cancels.
- Simply add a flag to chain the next dialog box. When a dialog returns a nonzero value, such as Cancel or No (see DIAGNOSTICS), dialog stops.
Some widgets, such as the checklist, write text to dialog's output.
Normally that is standard error, but there are options to change it: --output-fd, --stderr and --stdout.
No text is written when Cancel (or ESC) is pressed; in that case dialog exits immediately.
Option notes:
All options begin with "--".
A lone "--" acts as an escape, meaning that the next token on the command line is not treated as an option.
dialog --title -- --NotAnOption
The --file option tells dialog to read arguments from a file.
dialog --file parameterfile
Command examples:
No screenshots here; /usr/share/doc/dialog/sample/ contains usage examples for each box type. Study the sample scripts to become familiar with the features.
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-21T20:45:23+08:00
====== Using getopts ======
Created Wednesday 21 December 2011
Using getopts to parse bash command-line positional parameters (or function arguments) is the professional way to do it. getopts is an improved version of getopt and is a bash builtin.
It can only parse short command-line options (-a); it cannot recognize GNU-style long options (--myoption) or XF86-style long options (-myoption).
===== Terminology: =====
* command line arguments / positional parameters: everything on the command line after the first string (the command name); they are split into separate positional parameters at bash whitespace and are numbered 1, 2, ... by position.
* option: a single character introduced by a dash (-a) among the command-line arguments
* option argument (an option that has an additional argument): the string immediately following the option character, with optional whitespace between them.
* remaining arguments (without any option related): the arguments at the end of the command line that are not tied to any option (there can be several strings).
Options are usually single letters or digits; options without arguments may be called flags or switches, and several of them can be combined after a single dash.
//mybackup -x -f /etc/mybackup.conf -r ./foo.txt ./bar.txt//
//mybackup -xrf /etc/mybackup.conf ./foo.txt ./bar.txt//
===== How it works: =====
getopts reads the options among the command-line arguments over several calls; each call consumes the next positional parameter (and a possible argument). After reading an option and its argument it increments the variable OPTIND (option index) so that it points at the next positional parameter to be parsed. When it encounters a __non-option argument or --__ (a double hyphen marking the end of the command-line options; what follows are the **remaining arguments**), it stops parsing and returns FALSE, so a while loop can be used to iterate:
//while getopts ...; do//
// ...//
//done//
===== Variables used (assigned automatically by getopts; usable in the while loop) =====
**OPTIND (the option's position index)** Holds the index to the next argument to be processed. This is how getopts "remembers" its own status between invocations. Also useful to __shift the positional parameters after processing with getopts__. OPTIND is __initially set to 1__, and needs to be__ re-set__ to 1 if you want to parse anything again with getopts
**OPTARG (the option's argument; empty for options without one)** This variable is set to __any argument for an option__ found by getopts. It also contains the option flag of an unknown option.
**OPTERR** (Values 0 or 1) Indicates if Bash should display error messages generated by the getopts builtin. The value is initialized to 1 on every shell startup - so be sure to always set it to 0 if you don't want to see annoying messages!
getopts also uses these variables for **error reporting** (they're set to value-combinations which aren't possible in normal operation).
===== Specify what you want =====
The base-syntax for getopts is:
**getopts OPTSTRING VARNAME [ARGS...]**
where:
**OPTSTRING (the string of all option characters and their argument markers)** tells getopts **which** options to expect and **which** of them expect arguments (see below)
**VARNAME (the loop variable; its value is the parsed option)** tells getopts which** shell-variable** to use for option reporting
**ARGS (if omitted, the command-line parameters are parsed)** tells getopts to parse these optional words__ instead of __the positional parameters
Example:
#cat **go_test.sh**
//#!/bin/bash//
//while getopts ":ad:" opt; do//
// case $opt in//
// a)//
// echo "-a was triggered!" >&2//
// ;;//
// d)//
// echo "-d was triggered! OPTARG is $OPTARG" >&2//
// ;;//
// \?)//
// echo "Invalid option: -$OPTARG" >&2//
// ;;//
// esac//
//done//
$ ./go_test.sh #without any arguments
$
$ ./go_test.sh /etc/passwd #with an argument that is **not tied to any option**
$
Neither run produces any output, because getopts saw no valid or invalid options.
$ ./go_test.sh -b # the b option is not in the getopts option string, so getopts reports an error.
At the same time ? is assigned to the loop variable $opt and the invalid option character to $OPTARG, so our code can __catch and handle__ the error.
Invalid option: -b
$
$ ./go_test.sh -a
-a was triggered!
$
$ ./go_test.sh -a -x -b -c #options can be given together, and their order need not match the order in the getopts string.
-a was triggered!
Invalid option: -x
Invalid option: -b
Invalid option: -c
$
$ ./go_test.sh -a -a -a -a #the same option can be given several times, each with a different position (OPTIND)
-a was triggered!
-a was triggered!
-a was triggered!
-a was triggered!
$
__$./go_test.sh -d -a__
__-d was triggered! OPTARG is -a__
__$__
$./go_test.sh -d -a df -a fjdk
__-d was triggered! OPTARG is -a df__
__-a was triggered!__
$
The last examples lead us to some points you may consider:
* __invalid options don't stop the processing__: If you want to stop the script, you have to do it yourself (exit in the right place)
* __multiple identical options are possible__: If you want to disallow these, you have to check manually (e.g. by setting a variable or so)
[geekard@geekard bin]$ ./getopts_test.sh -a #a valid option (an option without an argument)
-a
-a was triggered! OPTARG is , OPTIND is 2.
[geekard@geekard bin]$ ./getopts_test.sh -d #missing argument: the -d option requires one
-d
-d require args! OPTIND is 2
[geekard@geekard bin]$ ./getopts_test.sh -a -d df #two valid options
-a -d df
-a was triggered! OPTARG is , OPTIND is 2.
-d was triggered! OPTARG is df, OPTIND is__ 4__.
[geekard@geekard bin]$ ./getopts_test.sh -f #invalid option; OPTIND points at the next argument position
-f
Invalid option: -f, OPTIND is __2__.
[geekard@geekard bin]$ ./getopts_test.sh -a -d df -f -a #the same option can be given several times; an invalid option does __not stop__ parsing
-a -d df -f -a
-a was triggered! OPTARG is , OPTIND is 2.
-d was triggered! OPTARG is df, OPTIND is 4.
Invalid option: -f, OPTIND is 5.
-a was triggered! OPTARG is , OPTIND is 6.
[geekard@geekard bin]$ ./getopts_test.sh -a -d -a -a
-a -d -a -a
-a was triggered! OPTARG is , OPTIND is 2.
-d was triggered! OPTARG is -a, OPTIND is 4.
-a was triggered! OPTARG is , OPTIND is 5.
[geekard@geekard bin]$
[geekard@geekard bin]$ ./getopts_test.sh -a -d df __df__ -a #__the second df is an argument not tied to any option, so getopts stops parsing the rest of the command line when it reaches it__
-a -d df df -a
-a was triggered! OPTARG is , OPTIND is 2.
-d was triggered! OPTARG is df, OPTIND is__ 4__.
[geekard@geekard bin]$ ./getopts_test.sh -a -d df__ -- __-a #-- marks the end of the command-line options (what follows are the __remaining arguments__); getopts stops parsing when it sees it.
-a -d df -- -a
-a was triggered! OPTARG is , OPTIND is 2.
-d was triggered! OPTARG is df, OPTIND is__ 4__.
[geekard@geekard bin]$
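The getopts_test.sh used in the transcripts above is not shown; here is a plausible reconstruction (the exact message wording is inferred from the output above, so treat it as an assumption):

```shell
#!/bin/bash
# Hypothetical reconstruction of getopts_test.sh: print the raw
# arguments, then parse -a (no argument) and -d (with argument)
# in silent error-reporting mode.
echo "$@"
while getopts ":ad:" opt; do
    case $opt in
        a)  echo "-a was triggered! OPTARG is $OPTARG, OPTIND is $OPTIND." ;;
        d)  echo "-d was triggered! OPTARG is $OPTARG, OPTIND is $OPTIND." ;;
        :)  echo "-$OPTARG require args! OPTIND is $OPTIND" ;;
        \?) echo "Invalid option: -$OPTARG, OPTIND is $OPTIND." ;;
    esac
done
```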
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-21T20:10:39+08:00
====== Using getopts in BASH ======
Created Wednesday 21 December 2011
http://blog.chinaunix.net/space.php?uid=7216005&do=blog&id=2062590
getopts optstring varname [arg ...]
optstring  the **option string**, matched character by character
varname  the variable set on each successful match
arg  the argument list; if omitted, the command-line parameters are used
$OPTIND  special variable, the option index, incremented as parsing proceeds
$OPTARG  special variable, the option argument; its value depends on the situation
Rule 1: when optstring begins with ":", getopts distinguishes the invalid-option error from the missing-option-argument error.
On an invalid option, varname is set to "?" and $OPTARG is the offending option;
on a missing option argument, varname is set to ":" and $OPTARG is the offending option.
If optstring does not begin with ":", both errors set varname to "?" (in this verbose mode getopts itself prints an error message and $OPTARG is unset).
Rule 2: when a **letter** in optstring is followed by ":", that option takes an argument, which is placed in $OPTARG;
if the argument is missing and optstring begins with ":", varname's value is ":" and $OPTARG is that option;
otherwise varname's value is "?" (see rule 1).
A simple sample:
#!/bin/bash
SKIPBLANKS=
TMPDIR=/tmp
CASE=lower
**while getopts :bt:u arg **#this is a special use of the while statement
do
case **$arg** in
b) SKIPBLANKS=TRUE
echo "If skip blanks? $SKIPBLANKS"
;;
t) if [ -d "$OPTARG" ]
then
TMPDIR=$OPTARG
echo "Temp dir is $TMPDIR."
else
echo "$0: $OPTARG is not a directory." __>&2__
__exit 1__
fi
__;;__
u) CASE=upper
echo "Case sensitivity is $CASE."
;;
:) echo "$0: Must supply an argument to -$OPTARG." >&2
exit 1
;;
\?) echo "Invalid option -$OPTARG ignored." >&2
;;
esac
done
===============================================================
#!/bin/bash
# Example: args parse
__usage()__ {
local __prog__="__`basename $1`__"
echo "Usage: $prog -n name1 [name2...] [-c count] [-D DestDir]"
echo " $prog -h for help."
exit 1
}
__showhelp()__ {
echo "Usage: `basename $1`: **-n name1 [name2...]** [-c count] [-D OutputDir]"
echo " -n target name (__\"__None\" for no tag)" #nested quotes of the same kind do not protect each other.
echo " -c count for each name (\"None\"=1)"
echo " -D output directory"
echo " -h show this help"
exit 1
}
name=
count=
outputdir=
file="${!#}"
filename="__`basename $file`__" #variable expansion happens before command substitution
run=false # once for "None"
while getopts "n:c:D:h" arg
do
case $arg in
n) **name**=$OPTARG;;
c) count=$OPTARG;;
D) outputdir=$OPTARG;;
h) __ showhelp $0__;;
?) __usage $0__;;
esac
done
#[ ! -f $file ] && usage $0
[ -z "$name" ] && usage $0
[ -z "$count" ] && count=1
[ -z "$outputdir" ] && outputdir="**`dirname $file`**"
for n in $name
do
for((c=0; c<count; c++))
do
if [ "None" == "$n" ];then
if [ "false" == "$run" ];then
run=true
c=""
else
break
fi
fi
suffix="${n}${c}"
echo $filename | sed "s/.iso$/-${suffix}.iso/"
done
done
exit 0
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-21T13:41:56+08:00
====== Small getopts tutorial ======
Created Wednesday 21 December 2011
http://wiki.bash-hackers.org/howto/getopts_tutorial
When you want to parse commandline arguments in __a professional way__, //getopts// is the tool of choice. Unlike its older brother //getopt// (note the missing s!), it's a shell **builtin** command. The advantage is
* you don't need to hand your **positional parameters** through to an external program
* getopts can easily set **shell variables** you can use for parsing (impossible for an external process!)
* you don't have to argue with several getopt implementations which had buggy concepts in the past (whitespaces, ...)
* getopts is defined in POSIX®
Note that getopts is __not able to parse GNU-style long options__ (--myoption) or XF86-style long options (-myoption)!
===== Description =====
===== Terminology =====
It's useful to know what we're talking about here, so let's see... Consider the following commandline:
**mybackup -x -f /etc/mybackup.conf -r ./foo.txt ./bar.txt**
All these are positional parameters, but you can divide them into some logical groups:
* -x is** an option, a flag, a switch**: one character, introduced by a dash (-)
* -f is also an option, but this option has **an additional argument** (argument to the option -f): /etc/mybackup.conf. This __argument is usually separated from its option__ (by a whitespace or any other splitting character) but that's __not a must__, -f/etc/mybackup.conf is valid.
* -r depends on the configuration. In this example, -r **doesn't take arguments**, so it's a standalone option, like -x
* ./foo.txt and ./bar.txt are __remaining arguments without any option related__. These often are **used as mass-arguments** (like for example the filenames you specify for cp(1)) or for arguments that don't need an option to be recognized because of the intended behaviour of the program (like the filename argument you give your text-editor to open and display - why would one need an extra switch for that?). POSIX® calls them **operands**.
To give you an idea about why getopts is useful: The above commandline could also read like...
**mybackup -xrf /etc/mybackup.conf ./foo.txt ./bar.txt**
...which is very hard to parse with your own code. getopts recognizes all the common option formats.
The option flags can be upper- and lowercase characters, and of course digits. It may recognize other characters, but that's not recommended (usability and maybe problems with special characters).
===== How it works =====
In general you need to call getopts **several times**. Each time it will use **"the next" positional parameter** (and a possible argument), if parsable, and provide it to you. getopts will **not change** the positional parameter set — if you want to__ shift__ it, you have to do it manually after processing:
__shift $((OPTIND-1))__
//# now do something with $@//
Since **getopts **will set an__ exit status of FALSE __when there's nothing left to parse, it's easy to use it in a while-loop:
//while getopts ...; do//
// ...//
//done//
getopts will parse **options and their possible arguments**. It will __stop parsing on the first non-option argument__ (a string that doesn't begin with a hyphen (-) that isn't an argument for any option infront of it). It will also stop parsing when it sees the__ -- (double-hyphen), which means end of options__.
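The loop plus shift idiom from above can be sketched as follows (the option names -v and -o are made up for illustration); after the loop and the shift, "$@" holds only the operands:

```shell
#!/bin/bash
# Parse -v and -o FILE, then shift the parsed options away so
# that "$@" contains only the remaining operands.
verbose=0 outfile=
while getopts ":vo:" opt; do
    case $opt in
        v) verbose=1 ;;
        o) outfile=$OPTARG ;;
        *) echo "usage: $0 [-v] [-o file] args..." >&2; exit 1 ;;
    esac
done
shift $((OPTIND-1))
echo "verbose=$verbose outfile=$outfile operands=$*"
```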
===== Used variables =====
variable description
**OPTIND** Holds the index to the next argument to be processed. This is how getopts "remembers" its own status between invocations. Also useful to __shift the positional parameters after processing with getopts__. OPTIND is __initially set to 1__, and needs to be__ re-set__ to 1 if you want to parse anything again with getopts
**OPTARG** This variable is set to __any argument for an option__ found by getopts. It also contains the option flag of an unknown option.
**OPTERR** (Values 0 or 1) Indicates if Bash should display error messages generated by the getopts builtin. The value is initialized to 1 on every shell startup - so be sure to always set it to 0 if you don't want to see annoying messages!
getopts also uses these variables for error reporting (they're set to value-combinations which aren't possible in normal operation).
===== Specify what you want =====
The base-syntax for getopts is:
**getopts OPTSTRING VARNAME [ARGS...]**
where:
**OPTSTRING** tells getopts **which** options to expect and **whether **to expect arguments (see below)
**VARNAME** tells getopts which** shell-variable** to use for option reporting
**ARGS** tells getopts to parse these optional words__ instead of __the positional parameters
===== The option-string =====
The option-string tells getopts which options to expect and which of them **must** have an argument. The syntax is very simple — every option character is simply named as is, this example-string would tell getopts to look for -f, -A and -x:
**getopts fAx VARNAME**
When you want getopts to expect** an argument for an option**, just place a __: (colon)__ after the proper option flag. If you want -A to expect an argument (i.e. to become -A SOMETHING) just do:
getopts fA:x VARNAME
If the __very first __character of the option-string is a : (colon), which normally would be __nonsense__ because there's no option letter preceding it, getopts switches to the mode__ "silent error reporting"__. In productive scripts, this is usually what you want (handle errors yourself and don't get disturbed by annoying messages).
===== Custom arguments to parse =====
The getopts utility parses the positional parameters of the current** shell or function** by default (which means it parses "$@").
You can give **your own set of arguments** to the utility to parse. Whenever additional arguments are given after the VARNAME parameter, getopts doesn't try to parse the positional parameters, but these given words.
This way, you are able to parse any option set you like, here for example from an array:
//while getopts :f:h opt //**"${MY_OWN_SET[@]}"**//; do//
// ...//
//done//
A call to getopts __without__ these additional arguments is equivalent to explicitly calling it with "$@":
getopts ... "$@"
===== Error Reporting =====
Regarding error-reporting, there are two modes getopts can run in:
* verbose mode
* silent mode
For **productive scripts** I recommend to use the silent mode, since everything looks more professional, when you don't see annoying standard messages. Also it's easier to handle, since the failure cases are indicated in an easier way.
=== Verbose Mode ===
* invalid option VARNAME is set to ? (question-mark) and OPTARG is unset
* required argument not found VARNAME is set to ? (question-mark), OPTARG is unset and **an error message is printed**
=== Silent Mode ===
* invalid option VARNAME is set to ? (question-mark) and OPTARG is set to the (invalid) option character
* required argument not found VARNAME is set to : (colon) and **OPTARG** contains the option-character in question
===== A first example =====
Enough said - action!
Let's play with a very simple case: Only one option (-a) expected, without any arguments. Also we **disable the verbose error handling** by preceding the whole option string with a colon (:):
//#!/bin/bash//
//while getopts ":a" opt; do//
// case $opt in//
// a)//
// echo "-a was triggered!" >&2//
// ;;//
__\?__//)//
// echo "Invalid option: -$OPTARG" >&2//
// ;;//
// esac//
//done//
I put that into a file named **go_test.sh**, which is the name you'll see below in the examples.
Let's do some tests:
Calling it without any arguments
$ ./go_test.sh
$
Nothing happened? Right. getopts **didn't see any valid or invalid **__options__ (letters preceded by a dash), so it wasn't triggered.
Calling it with non-option arguments
$ ./go_test.sh /etc/passwd
$
Again — nothing happened. The very same case: getopts didn't see any valid or invalid options (letters preceded by a dash), so it __wasn't triggered__.
The arguments given to your script are of course** accessible** as $1 - ${N}.
Calling it with option-arguments
Now let's trigger getopts: Provide options.
First, an invalid one:
$ ./go_test.sh -b
Invalid option: -b
$
As expected, getopts didn't accept this option and acted like told above: It **placed ? into $opt** and the **invalid option character (b) into $OPTARG**. With our case statement, we were able to detect this.
Now, a valid one (-a):
$ ./go_test.sh -a
-a was triggered!
$
You see, the detection works perfectly. The a was put into the variable $opt for our case statement.
Of course it's possible to mix valid and invalid options when calling:
$ ./go_test.sh -a -x -b -c
-a was triggered!
Invalid option: -x
Invalid option: -b
Invalid option: -c
$
Finally, it's of course possible, to give our option multiple times:
$ ./go_test.sh -a -a -a -a
-a was triggered!
-a was triggered!
-a was triggered!
-a was triggered!
$
The last examples lead us to some points you may consider:
* __invalid options don't stop the processing__: If you want to stop the script, you have to do it yourself (exit in the right place)
* __multiple identical options are possible__: If you want to disallow these, you have to check manually (e.g. by setting a variable or so)
===== An option with argument =====
Let's extend our example from above. Just a little bit:
* -a now takes an argument
* on an error, the parsing exits with exit 1
#!/bin/bash
while getopts ":a:" opt; do
case $opt in
a)
echo "-a was triggered, Parameter: $OPTARG" >&2
;;
\?)
echo "Invalid option: -$OPTARG" >&2
exit 1
;;
__:__)
echo "Option -**$OPTARG** requires an argument." __>&2__
exit 1
;;
esac
done
Let's do the very same tests we did in the last example:
Calling it without any arguments
$ ./go_test.sh
$
As above, nothing happened. It wasn't triggered.
Calling it with non-option arguments
$ ./go_test.sh /etc/passwd
$
The very same case: It wasn't triggered.
Calling it with option-arguments
Invalid option:
$ ./go_test.sh -b
Invalid option: -b
$
As expected, as above, getopts didn't accept this option and acted like programmed.
Valid option, but without the mandatory argument:
$ ./go_test.sh -a
Option -a requires an argument.
$
The option was okay, but there is an argument missing.
Let's provide the argument:
$ ./go_test.sh -a /etc/passwd
-a was triggered, Parameter: /etc/passwd
$
See also
Internal: Handling positional parameters
Internal: The case statement
Internal: The while-loop
===============================================================================================
I am sorry if I am just missing something here but, what is with the > ampersand 2 in the echo commands?
Jan Schampera, 2010/07/29 11:55
__It's good practice to print error and diagnostic messages to the standard error output (STDERR)__. foo > ampersand 2 does this.
What if there are multiple options and some require arguments while some do not? I can't seem to get it to work properly...
Ex)
#!/bin/bash
while getopts "__:__a:b:cde:f:g:" opt; do
case $opt in
a)
echo "-a was triggered, Parameter: $OPTARG" >&2
;;
b)
echo "-b was triggered, Parameter: $OPTARG" >&2
;;
c)
echo "-c was triggered, Parameter: $OPTARG" >&2
;;
d)
echo "-d was triggered, Parameter: $OPTARG" >&2
;;
e)
echo "-e was triggered, Parameter: $OPTARG" >&2
;;
f)
echo "-w was triggered, Parameter: $OPTARG" >&2
;;
g)
echo "-g was triggered, Parameter: $OPTARG" >&2
;;
\?)
echo "Invalid option: -$OPTARG" >&2
exit 1
;;
:)
echo "Option -$OPTARG requires an argument." >&2
exit 1
;;
esac
done
Here's my problem:
**./hack.bash -a -b **
**-a was triggered, **__Parameter: -b__
Shouldn't it display that -a is missing an argument instead of t**aking the next option as the parameter**. What am I doing wrong here?
Jan Schampera, 2010/12/05 07:29
You're doing nothing wrong. It is like that, __when getopts searches an argument, it takes the next one__.
This is how most programs I know behave (tar, the text utils, ...).
Mark, 2011/01/29 20:42
How do I get it so that with no arguments passed, it returns text saying "no arguments passed, nothing triggered"?
Jan Schampera, 2011/01/29 20:50
I'd do it by checking $# before the while/getopts loop, if applicable:
if (($# == 0)); then
...
fi
If you really need to check if getopts found something to process you could make up a variable for that check:
options_found=0
while getopts ":xyz" opt; do
options_found=1
...
done
if ((!options_found)); then
echo "no options found"
fi
Reid, 2011/08/12 00:07
Another method of checking whether it found anything at all is to run a separate if statement right before the while getopts call.
if ( ! getopts "abc:deh" opt); then
echo "Usage: __`basename $0`__ options (-ab) (-c value) (-d) (-e) -h for help";
exit $E_OPTERROR;
fi
while getopts "abc:deh" opt; do
case $opt in
a) do something;;
b) do another;;
c) var=$OPTARG;;
...
esac
done
Mark, 2011/01/29 21:09
Sweet - that work, thanks!
How do you get it to return multiple arguments on one line? eg. hello -ab returns "option a option b"?
Jan Schampera, 2011/01/29 22:16
This isn't related to getopts. Just use variables or echo without newlines or such things, as you would do in such a case without getopts, too.
Andrea, 2011/05/02 16:22
Hi. how can I control the double invocation of the same option? I don't want this situation: ./script -a xxx -a xxx!
Jan Schampera, 2011/05/02 17:03
See the question above. Set a variable that handles this, a kind of flag that is set when the option is invoked, and checked if the option already was invoked. A kind of "shield".
A_WAS_SET=0
...
case
...
a)
if [[ $A_WAS_SET = 0 ]]; then
A_WAS_SET=1
# do something that handles -a
else
echo "Option -a already was used."
exit 1
fi
;;
esac
...
Andrea, 2011/05/03 15:57
Thanks! It works!
Joe Wulf, 2011/06/22 22:33
Joshua's example (from above @ 2010/12/05 01:06 ) asked about parsing multiple options, where some DO have required arguments, and some have OPTIONAL arguments. I've a script I'm enhancing. It takes a '-e' argument to EXECUTE ( and '-i' for installation, '-r' for removal, etc...). The -e is stable by itself. My enhancement would be allowing an optional '-e <modifier>' so that the functionality would be appropriately conditionally modified. How do I define the getopts line to state that '-e' is a valid parsable option, and that it MIGHT have an argument??
Jan Schampera, 2011/06/23 08:42
Hi,
try this trick: when you discover that the OPTARG of -c is something beginning with a hyphen, reset OPTIND and re-run getopts (continue the while loop).
The code is relatively small, but I hope you get the idea.
Oh, of course, this isn't perfect and needs some more robustness. It's just an example.
#!/bin/bash
while getopts :abc: opt; do
case $opt in
a)
echo "option a"
;;
b)
echo "option b"
;;
c)
echo "option c"
__ if [[ $OPTARG = -* ]]; then__
__ ((OPTIND--))__
__ continue__
__ fi__
echo "(c) argument $OPTARG"
;;
\?)
echo "WTF!"
exit 1
;;
esac
done
Reid, 2011/08/11 23:29
Another method to have an "optional" argument would be to have both a__ lower and uppercase__ version of the option, with one requiring the argument and one not requiring it.
Jay, 2011/07/27 20:10
What if you have a flag with an OPTIONAL argument; say the call can be either with -a username or just -a. Defined with just "a:" it complains there is no argument. I want it to use the argument if there is one, else use a default defined elsewhere.
Arvid Requate, 2011/10/07 12:04
The builtin getopts can be used to parse long options by putting a dash character followed by a colon into the optstring ("getopts 'h-:'" or "getopts '-:'"); here is an example of how it can be done:
http://stackoverflow.com/questions/402377/using-getopts-in-bash-shell-script-to-get-long-and-short-command-line-options/7680682#7680682
Very nice trick!
Another way I could imagine (and I'll try some test code some day) is preprocessing the positional parameters and convert long options to short options before using getopts.
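That preprocessing idea might be sketched as follows (the long options --help/--verbose and the function name demo are assumptions for illustration only):

```shell
#!/bin/bash
# Sketch: rewrite long options into equivalent short ones,
# then hand the rewritten list to getopts as usual.
demo() {
    local -a args=()
    local arg opt OPTIND=1
    for arg in "$@"; do
        case $arg in
            --help)    args+=(-h) ;;           # long -> short
            --verbose) args+=(-v) ;;
            *)         args+=("$arg") ;;       # pass everything else through
        esac
    done
    set -- "${args[@]}"                        # replace the positional parameters
    while getopts ":hv" opt; do
        case $opt in
            h) echo "help requested" ;;
            v) echo "verbose mode" ;;
        esac
    done
}

demo --help --verbose   # prints "help requested" then "verbose mode"
```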

View File

@@ -0,0 +1,189 @@
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-24T12:26:39+08:00
====== writing-robust-shell-scripts ======
Created Saturday 24 December 2011
http://www.davidpashley.com/articles/writing-robust-shell-scripts.html
Many people hack together shell scripts quickly to **do simple tasks**, but these soon take on a life of their own. Unfortunately shell scripts are **full of subtle effects** which result in scripts failing in unusual ways. It's possible to write scripts which minimise these problems. In this article, I explain several techniques for writing robust bash scripts.
===== Use set -u =====
How often have you written a script that broke because a variable wasn't set? I know I have, many times.
chroot=$1
...
rm -rf $chroot/usr/share/doc
If you ran the script above and accidentally forgot to give a parameter, you would have just deleted all of your system documentation rather than making a smaller chroot. So what can you do about it? Fortunately bash provides you with **set -u**, which will exit your script if you try to use __an uninitialised variable__. You can also use the slightly more readable** set -o nounset**.
david% bash /tmp/shrink-chroot.sh
$chroot=
david% bash -u /tmp/shrink-chroot.sh
/tmp/shrink-chroot.sh: line 3: $1: **unbound variable**
david%
===== Use set -e =====
Every script you write should include** set -e at the top**. This tells bash that it should **exit the script if any statement returns a non-true return value**. The benefit of using -e is that it__ prevents errors snowballing into serious issues __when they could have been caught earlier. Again, for readability you may want to use **set -o errexit**.
Using -e gives you error checking for free. If you forget to check something, bash will do it for you. Unfortunately it means you __can't check $?__, as bash will **never get to** the checking code if it isn't zero. There are other constructs you could use:
command
if [ "$?" -ne 0 ]; then echo "command failed"; exit 1; fi
could be replaced with
**command || { echo "command failed"; exit 1; } # the commands chained with && and || form a single unit; its exit status is that of the last command **__executed__**.**
or
if ! command; then echo "command failed"; exit 1; fi # __the whole control structure is a single unit__, so you can redirect input/output for it as a whole.
What if you have a command that returns non-zero or you are not interested in its return value? You can use **command || true**, or if you have a longer section of code, you can turn off the error checking, but I recommend you use this sparingly.
set +e
command1
command2
set -e
On a slightly related note, by default bash takes __the error status of the last item__ in a pipeline, which may not be what you want. For example, **false | true** will be considered to have succeeded. If you would like this to fail, then you can use **set -o pipefail** to make it fail.
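A quick check of this behaviour (a minimal sketch, not from the original article):

```shell
#!/bin/bash
# Without pipefail, a pipeline's status is that of its last command:
false | true
echo "default: $?"            # prints "default: 0"

# With pipefail, any failing element makes the whole pipeline fail:
set -o pipefail
if ! false | true; then
    echo "pipefail: the pipeline failed"
fi
```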
===== Program defensively - expect the unexpected =====
Your script **should take the unexpected into account**, like files missing or directories not being created. There are several things you can do to prevent errors in these situations. For example, when you create a directory, if the parent directory doesn't exist, mkdir will return an error. If you add a __-p__ option then mkdir will create all the parent directories before creating the requested directory. Another example is rm: if you ask rm to delete a non-existent file, it will complain and your script will terminate. (You are using -e, right?) You can fix this by using __-f__, which will silently continue if the file doesn't exist.
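Both behaviours can be sketched as follows (the paths are throwaway examples created with mktemp):

```shell
#!/bin/bash
set -e                      # the article's recommendation is in effect

dir=$(mktemp -d)            # throwaway playground directory

# mkdir -p creates all missing parents and succeeds even if the
# directory already exists:
mkdir -p "$dir/a/b/c"
mkdir -p "$dir/a/b/c"       # the second call is still not an error

# rm -f silently succeeds even when the file does not exist:
rm -f "$dir/a/b/c/no-such-file"

rm -rf "$dir"               # clean up
```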
===== Be prepared for spaces in filenames =====
__Someone will always use spaces in filenames or command line arguments and you should keep this in mind__ when writing shell scripts.
In particular you** should use quotes around variables**.
if [ $filename = "foo" ];
will fail if $filename contains a space. This can be fixed by using:
if [ **"$filename" **= "foo" ];
When using the $@ variable, you should __always quote it__, or any arguments containing a space will be expanded into separate words.
david% foo() { for i in $@; do echo $i; done }; foo bar "baz quux"
bar
baz
quux
david% foo() { for i in** "$@"**; do echo $i; done }; foo bar "baz quux"
bar
baz quux
I cannot think of a single place where you shouldn't use "$@" over $@, so when in doubt, use quotes.
If you use **find and xargs** together, you should use __-print0 to separate filenames (which may include paths)__ with a **null character** rather than __newlines__. You then need to use -0 with xargs.
david% touch "foo bar"
david% **find | xargs ls**
ls: ./foo: No such file or directory
ls: bar: No such file or directory
david% find __-print0__ | xargs -0 ls
./foo bar
===== Setting traps (after a trap fires, the script keeps running unless it explicitly exits) =====
Often you write scripts which fail and__ leave the filesystem in an inconsistent state__; things like lock files, temporary files or you've updated one file and there is an error updating the next file. It would be nice if you could fix these problems, either by deleting the lock files or by __rolling back to a known good state__ when your script suffers a problem. Fortunately bash provides a way to run a command or function when it receives a unix signal using the trap command.
**trap command signal [signal ...] **
There are many signals you can trap (you can get a list of them by running__ kill -l__), but for __cleaning up after problems__ there are only 3 we are interested in: INT, TERM and EXIT. You can also** reset traps back** to their default by using __-__ as the command.
Signal Description
**INT** Interrupt - This signal is sent when someone kills the script by pressing ctrl-c.
**TERM ** Terminate - this signal is sent when someone sends the TERM signal using the kill command.
**EXIT ** Exit - this is a **pseudo-signal** and is triggered when your **script exits**, either through reaching the** end of the script**, an** exit **command or by a command failing when using set -e.
Usually, when you write something using a lock file you would use something like:
if [ ! -e $lockfile ]; then
touch $lockfile
	critical-section # if the script exits during this step, the lockfile is left behind on the system
rm $lockfile
else
echo "critical-section is already running"
fi
What happens if someone kills your script while critical-section is running? The lockfile will be **left there** and your script won't run again until it's been deleted. The fix is to use:
if [ ! -e $lockfile ]; then
**	trap "rm -f $lockfile; exit" INT TERM EXIT # install the handler before the signal can be caught**
~~	# note: the command above has a bug - the trap can retrigger itself in a loop.~~
touch $lockfile
critical-section
rm $lockfile
** trap - INT TERM EXIT # restore the signals to their default handling (which normally terminates the script abnormally)**
else
echo "critical-section is already running"
fi
Now when you **kill the script** it will delete the lock file too. Notice that we __explicitly exit__ from the script at the end of trap command, otherwise the script will **resume from the point **that the signal was received.
===== Race conditions =====
It's worth pointing out that there is a slight race condition in the above lock example between the time we test for the lockfile and the time we create it. A possible solution to this is to use__ IO redirection and bash's noclobber mode, which won't redirect to an existing file__. We can use something similar to:
if ( __set -o noclobber__; echo "$$" > "$lockfile") 2> /dev/null;
then
trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
critical-section
rm -f "$lockfile"
trap - INT TERM EXIT
else
echo "Failed to acquire lockfile: $lockfile."
echo "Held by $(cat $lockfile)"
fi
A slightly more complicated problem is where you need to update a bunch of files and need the script to __fail gracefully__ if there is a problem in the middle of the update. You want to be certain that something either happened correctly or that it appears as though it **didn't happen at all**. Say you had a script to add users.
add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R
There could be problems if you ran out of diskspace or someone killed the process. In this case you'd want the user to not exist and all their files to be removed.
__rollback() __{
del_from_passwd $user
if [ -e /home/$user ]; then
rm -rf /home/$user
fi
**	exit # an error occurred, so stop the script here**
}
**trap rollback INT TERM EXIT**
add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R
**trap - INT TERM EXIT**
We needed to remove the trap at the end or the rollback function would have been called as we exited, undoing all the script's hard work.
===== Be atomic =====
Sometimes you need to update a bunch of files in a directory at once, say you need to rewrite urls form one host to another on your website. You might write:
for file in $(find /var/www -type f -name "*.html"); do
perl -pi -e 's/www.example.net/www.example.com/' $file
done
Now if there is a problem with the script you could have half the site referring to www.example.com and the rest referring to www.example.net. You could fix this using a backup and a trap, but you also have the problem that the site will be inconsistent during the upgrade too.
The solution to this is to __make the changes an (almost) atomic operation__. To do this make a copy of the data, **make the changes in the copy**, move the original out of the way and then move the copy back into place. You need to make sure that both the old and the new directories are moved to locations that are__ on the same partition__ so you can take advantage of the property of most unix filesystems that moving directories is very fast, as they only have to update the inode for that directory.
cp -a /var/www /var/www-**tmp**
for file in $(find /var/www-tmp -type f -name "*.html"); do
perl -pi -e 's/www.example.net/www.example.com/' $file
done
__mv /var/www /var/www-old__
mv /var/www-tmp /var/www
Even though the cp and mv steps above can be interrupted, __in the worst case the system always has a recoverable copy of the files__.
This means that if there is a problem with the update, the live system is not affected. Also the time where it is affected is reduced to the time between the two mvs, which should be very minimal, as the filesystem just has to change two entries in the inodes rather than copying all the data around.
The disadvantage of this technique is that you need to use twice as much disk space and that any process that keeps files open for a long time will still have the old files open and not the new ones, so you would have to restart those processes if this is the case. In our example this isn't a problem as__ apache opens the files every request__. You can check for files with files open by using lsof. An advantage is that you now have a backup before you made your changes in case you need to revert.

View File

@@ -0,0 +1,49 @@
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-21T20:04:54+08:00
====== Generating temporary (random) file names ======
Created Wednesday 21 December 2011
http://www.cyberciti.biz/tips/shell-scripting-bash-how-to-create-temporary-random-file-name.html
Various methods exist to create a **random temporary** file name. This is useful if your application or shell script needs temporary, unique file names.
===== Method #1: Use of $RANDOM bash shell variable =====
1) At shell prompt type command:
# echo __$RANDOM__
You will get a random value every time. This variable can be used to create unique file names.
===== Method # 2 Use of $$ variable =====
This is the old and classic method. The __$$__ **shell variable** returns the PID of the currently running process; this can be used to create a unique temporary file, as demonstrated in the following script:
vi random2.bash
#!/bin/bash
#
TFILE="/tmp/__$(basename $0)__.$$.tmp"
ls > $TFILE
echo "See directory listing in $TFILE"
Save the script and execute as follows:
$ chmod +x random2.bash
$ ./random2.bash
===== Method # 3 Use of mktemp or tempfile utility =====
As the name suggests, both make a unique temporary filename. Just type mktemp at the shell prompt to create one:
$ __mktemp__
Output:
/tmp/tmp.IAnO5O
OR
$ __tempfile__
Output:
/tmp/IAnO5O
Make a unique __temporary directory__ instead of a file by passing the -d option to mktemp (note: tempfile's -d option merely selects the directory in which the file is created):
$ mktemp -d
Both mktemp and tempfile give shell scripts a way to use temporary files **in a safe manner**, so using them is highly recommended.
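Putting that recommendation into practice, a minimal sketch (variable names are illustrative) combining mktemp with a cleanup trap:

```shell
#!/bin/bash
# Create a unique temp file; abort if mktemp fails.
TFILE=$(mktemp) || exit 1

# Remove it automatically when the script exits, however it exits.
trap 'rm -f "$TFILE"' EXIT

ls > "$TFILE"
echo "See directory listing in $TFILE"
```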

View File

@@ -0,0 +1,132 @@
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-22T10:34:01+08:00
====== Code snippets ======
Created Thursday 22 December 2011
************ Parsing command-line arguments
#!/bin/bash
echo "$@"
echo ""
while getopts ":ad:" opt; do
case $opt in
a)
echo "-${opt} was triggered! OPTARG is $OPTARG, OPTIND is $OPTIND." __>&2__
;;
d)
echo "-${opt} was triggered! OPTARG is $OPTARG, OPTIND is $OPTIND." >&2
;;
\?)
echo "Invalid option: -$OPTARG, OPTIND is $OPTIND." >&2
;;
:)
echo "-${OPTARG} requires an argument! OPTIND is $OPTIND"
;;
esac
done
__shift__ "$(($OPTIND - 1))" # or $((OPTIND - 1)); but not ~~((OPTIND - 1))~~
*********** Processing each line of a file separately
counter=0
while read; do ((counter++)); done __</etc/passwd__
echo "Lines: $counter"
This will NOT work:
counter=0
cat /etc/passwd | while read; do __((counter++))__; done
echo "Lines: $counter"
because the while loop runs in a subshell of the pipeline, so the counter it increments is lost:
+-- cat /etc/passwd
xterm ----- bash --|
+-- bash (while read; do ((counter++)); done)
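If the pipeline form is really wanted, one workaround (assuming bash 4.2 or later; not part of the original note) is the lastpipe option, which runs the last element of a pipeline in the current shell instead of a subshell:

```shell
#!/bin/bash
# lastpipe only takes effect when job control is off, which is the
# default for non-interactive scripts (bash >= 4.2 assumed).
shopt -s lastpipe
counter=0
printf 'a\nb\nc\n' | while read -r _; do counter=$((counter+1)); done
echo "Lines: $counter"      # prints "Lines: 3" under bash >= 4.2
```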
********* Alert: watch for an event and raise an alarm when it happens
#!/bin/bash
until condition; do # condition is any executable command
sleep 10;
done
# now ring the bell and do something
echo -e '\a\a'
echo "********Alert****************"
# do something
exit 0
********** Handling the race condition
if ( __set -o noclobber__; echo "$$" > "$lockfile") 2> /dev/null; # if lockfile already exists, the **command containing the redirection** fails
then
trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
critical-section
rm -f "$lockfile"
trap __-__ INT TERM EXIT
else
echo "Failed to acquire lockfile: $lockfile."
echo "Held by $(cat $lockfile)"
fi
rather than:
if [ ! -e $lockfile ]; then
**	trap "rm -f $lockfile; exit" INT TERM EXIT # install the handler before the signal can be caught**
	# note: the command above has a bug - the trap can retrigger itself in a loop.
touch $lockfile
critical-section
rm $lockfile
** trap - INT TERM EXIT # restore the signals to their default handling (which normally terminates the script abnormally)**
else
echo "critical-section is already running"
fi
****************Counted loops
# Three expression for loop:
for__ (( i = 0; i < 20; i++ ))__
do
echo $i
done
# While loop:
i=0
while __[[ $i -lt 20 ]]__
do
echo $i
let i++
done
# For loop using seq:
for i in__ $(seq 0 19)__
do
echo $i
done
A counted for loop using bash sequences requires the least amount of typing:
for i in {0..19}
do
echo $i
done
But beyond counted for loops, brace expansion is the only way to create a loop with non-numeric "indexes":
for i in {a..z}
do
echo $i
done
********* Process substitution
# cat a
e
d
c
b
a
# cat b
g
f
e
d
c
b
# __comm -3 <(sort a | uniq) <(sort b | uniq)__
a
f
g

View File

@@ -0,0 +1,761 @@
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-24T15:03:48+08:00
====== Positional parameters ======
Created Saturday 24 December 2011
http://www.ibm.com/developerworks/cn/linux/l-bash-parameters.html
The bash shell is available on many Linux® and UNIX® systems today, and is a common __default shell__ on Linux. In this article you will learn how to __handle parameters and options__ in bash scripts, and how to use the shell's __parameter expansions__ to check or modify parameters. The focus is on bash, and the examples were all run on a Linux system with bash as the shell, but many of these expansions are also available in other shells such as ksh, ash or dash, which you can use on other UNIX systems or even in environments such as Cygwin. An earlier article, Linux tip: Bash test and comparison functions, introduced the building blocks used here; some of the material is excerpted from the developerWorks tutorial LPI exam 102 prep, Topic 109: Shells, scripts, programming and compiling, which covers many basic scripting techniques.
===== Passed parameters =====
One of the beauties of functions and shell scripts is that a single **function or script** can behave differently depending on the parameters passed to it. In this section you will learn how to identify and use the parameters that are passed.
Inside a function or script, you can refer to the parameters using the bash __special variables__ listed in Table 1. You prefix these variables with a $ symbol and reference them as you would any other shell variable.
Table 1. Shell parameters for functions
Parameter	Purpose
0, 1, 2, ...	__Positional parameters__, starting from __parameter 0__. Parameter 0 refers to the __name of the program__ that started bash, or the name of the __shell script__ if the function is running within a script. See the bash man page for details, such as when bash is started with the -c parameter. A string enclosed in single or double quotes is passed as a single parameter, and __the quotes are stripped off__. With double quotes, shell variables such as $HOME are expanded before the function is called. You need single or double quotes to pass parameters containing embedded blanks or other characters that are special to the shell.
*	Positional parameters, starting from __parameter 1__. If the expansion occurs **within double quotes**, it expands to a single word, with the parameters separated by the first character of the IFS special variable, or with no intervening space if IFS is null. The default IFS value is a blank, a tab, and a newline. If IFS is unset, a blank is used as the separator (for the default IFS only).
@	Positional parameters, starting from __parameter 1__. If the expansion occurs **within double quotes**, each parameter becomes a single word, so that "$@" is equivalent to "$1" "$2", and so on. __This is the form to use if your parameters may contain embedded blanks.__
#	The number of parameters, not counting parameter 0.
Note: if you have more than 9 parameters, you cannot use $10 to refer to the tenth one. First you must process or save the first parameter ($1), then use the __shift command__ to drop parameter 1 and move all remaining parameters down one, so that $10 becomes $9, and so on. The value of __$#__ is updated to reflect the remaining number of parameters. In practice, the most common need is to iterate over the parameters of a function or script, or over a list created by command substitution, using a for statement, so this restriction is seldom a problem.
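One caveat worth noting (an addition to the original text, verified for bash): the restriction applies only to the unbraced form; ${10} with braces does reach the tenth parameter directly:

```shell
#!/bin/bash
demo() {
    echo "$10"      # parsed as "$1" followed by a literal 0
    echo "${10}"    # braces make it the real tenth parameter
}
demo a b c d e f g h i j    # prints "a0" then "j"
```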
Now you can define a simple function whose only job is to report how many parameters it has and to display them, as shown in Listing 1.
Listing 1. Function parameters
[ian@pinguino ~]$ testfunc () { echo "__$# __parameters"; echo__ "$@"__; }
[ian@pinguino ~]$ testfunc
0 parameters
[ian@pinguino ~]$ testfunc a b c
3 parameters
a b c
[ian@pinguino ~]$ testfunc a "b c"
2 parameters
a b c
Shell __scripts handle parameters the same way functions do__; in fact, you will often find scripts assembled largely from small functions. Listing 2 shows a shell script, testfunc.sh, that performs the same simple task; run it with one of the inputs above. Remember to mark the script executable with chmod +x.
Listing 2. Shell script parameters
[ian@pinguino ~]$ cat testfunc.sh
#!/bin/bash
echo "$# parameters"
echo "$@";
[ian@pinguino ~]$ ./testfunc.sh a "b c"
2 parameters
a b c
In Table 1 you saw that the shell may refer to the list of passed parameters as $* or $@, and that __whether or not these special positional variables are quoted affects how they are interpreted__. For the function above, the choice among $*, "$*", $@ and "$@" makes little difference in the output, but it certainly would for a more complex function, and the difference becomes obvious when you want to analyze parameters or pass some of them on to other functions or scripts. Listing 3 shows a function that prints the number of parameters and then prints them according to the four alternatives; Listing 4 shows the function in use. The default __IFS__ variable has a blank as its first character, so Listing 4 adds a vertical bar as the first character of IFS to show more clearly where that character is used in the "$*" expansion.
Listing 3. A function exploring the parameter-handling differences
[ian@pinguino ~]$ type testfunc2
testfunc2 is a function
testfunc2 ()
{
echo "$# parameters";
echo Using '$*';
for p in $*;
do
echo "[$p]";
done;
echo Using __'"$*"'__;
for p in "$*";
do
echo "[$p]";
done;
echo Using '$@';
for p in $@;
do
echo "[$p]";
done;
echo Using '"$@"';
for p in "$@";
do
echo "[$p]";
done
}
Listing 4. Using testfunc2 to print parameter information
[ian@pinguino ~]$__ IFS="|${IFS}"__ testfunc2 abc "a bc" "1 2
> 3"
3 parameters
Using $*
[abc]
[a]
[bc]
[1]
[2]
[3]
Using "$*"
__[abc|a bc|1 2__
__3]__
Using $@
[abc]
[a]
[bc]
[1]
[2]
[3]
Using "$@"
[abc]
[a bc]
[1 2
3]
Study the differences carefully, especially the quoted forms and the parameters that contain white space such as blanks and newlines. Within each [] pair, note that the "$*" expansion is actually a single word.
===== Options and getopts =====
Traditional UNIX and Linux commands treat some passed parameters as options. Historically these were single-character switches, distinguished from other arguments by a leading hyphen or minus sign. For convenience, several options can be combined, as in the ls -lrt command, which gives a long (-l option) directory listing sorted in reverse (-r option) order by modification time (-t option).
You can use the same technique in your shell scripts, and the getopts builtin simplifies the task. To see how it works, consider the example script testopt.sh shown in Listing 5.
Listing 5. The testopt.sh script
#!/bin/bash
echo "OPTIND starts at $OPTIND"
while getopts ":pq:" optname
do
case "$optname" in
"p")
echo "Option $optname is specified"
;;
"q")
echo "Option $optname has value $OPTARG"
;;
"?")
echo "Unknown option $OPTARG"
;;
":")
echo "No argument value for option $OPTARG"
;;
*)
# Should not occur
echo "Unknown error while processing options"
;;
esac
echo "OPTIND is now $OPTIND"
done
The getopts command uses two predetermined variables. The OPTIND variable starts at 1; afterwards it contains the index of the next parameter to be processed. The getopts command returns true if an option is found, so a common option-processing paradigm uses a while loop with a case statement, as in this example. The first argument to getopts is a list of option letters to be recognized, here p and q. A colon (:) after an option letter means the option requires a value; for instance, a -f option might be used to indicate a file name, as it is in the tar command. The leading colon in this example tells getopts to be silent and suppress the normal error messages, since this script provides its own error handling.
The second argument, optname in this example, is the name of a variable that will receive the name of the option found. If an option is expected to have a value and the value is present, it is placed in the OPTARG variable. In silent mode, two error cases can arise:
If an unrecognized option is found, optname will contain a ? and OPTARG will contain the unknown option.
If an option that requires a value is found without its value, optname will contain a : and OPTARG will contain the name of the option whose argument is missing.
If getopts is not silent, these errors produce a diagnostic error message and OPTARG is not set; a script may still use the ? or : values of optname to detect (and possibly handle) the errors.
Listing 6 shows two examples of running this simple script.
Listing 6. Running the testopt.sh script
[ian@pinguino ~]$ ./testopt.sh -p -q
OPTIND starts at 1
Option p is specified
OPTIND is now 2
No argument value for option q
OPTIND is now 3
[ian@pinguino ~]$ ./testopt.sh -p -q -r -s tuv
OPTIND starts at 1
Option p is specified
OPTIND is now 2
Option q has value -r
OPTIND is now 4
Unknown option s
OPTIND is now 5
If you need to, you can pass getopts your own set of parameters to evaluate. If you have already called getopts with one set of parameters in a script and want to call it with another set, you must reset OPTIND to 1 yourself. See the bash man or info pages for more details.
===== Parameter expansion =====
You have seen how to pass parameters to a function or script and how to identify options; now it is time to process them. It would be nice to know which parameters are left after the options have been processed, and you may need to validate parameter values or assign defaults to missing parameters. This section introduces some of bash's parameter expansions. You still have the full power of Linux or UNIX commands such as sed and awk for more complex work, but you should also know how to use the shell's own expansions.
Let's build a script from the option-parsing and argument-parsing functions above. The testargs.sh script is shown in Listing 7.
Listing 7. The testargs.sh script
#!/bin/bash
showopts () {
while getopts ":pq:" optname
do
case "$optname" in
"p")
echo "Option $optname is specified"
;;
"q")
echo "Option $optname has value $OPTARG"
;;
"?")
echo "Unknown option $OPTARG"
;;
":")
echo "No argument value for option $OPTARG"
;;
*)
# Should not occur
echo "Unknown error while processing options"
;;
esac
done
return $OPTIND
}
showargs () {
for p in "$@"
do
echo "[$p]"
done
}
optinfo=$(showopts "$@")
argstart=$?
arginfo=$(showargs "${@:$argstart}")
echo "Arguments are:"
echo "$arginfo"
echo "Options are:"
echo "$optinfo"
Try running the script a few times to see how it behaves, and then examine it in detail. Listing 8 shows some sample output.
Listing 8. Running the testargs.sh script
[ian@pinguino ~]$ ./testargs.sh -p -q qoptval abc "def ghi"
Arguments are:
[abc]
[def ghi]
Options are:
Option p is specified
Option q has value qoptval
[ian@pinguino ~]$ ./testargs.sh -q qoptval -p -r abc "def ghi"
Arguments are:
[abc]
[def ghi]
Options are:
Option q has value qoptval
Option p is specified
Unknown option r
[ian@pinguino ~]$ ./testargs.sh "def ghi"
Arguments are:
[def ghi]
Options are:
Notice how the arguments are separated from the options. The showopts function analyzes the options as before, but uses a return statement to pass the value of the OPTIND variable back to the caller, which assigns it to the variable argstart. That value is then used to select the subset of the original parameters that were not processed as options, using the parameter expansion
${@:$argstart}.
Remember to quote this expression to keep arguments with embedded blanks together, as shown in the latter part of Listing 2.
If you are new to scripts and functions, note the following:
The return statement sets the exit value of the showopts function, which the caller accesses as $?. You can also combine a function's return value with commands such as test or while to control branches and loops.
Bash functions may include the optional word "function", for example:
function showopts ()
This is not part of the POSIX standard and is not supported by shells such as dash, so if you use it, do not set your shebang line to
#!/bin/sh
because that gives you the system's default shell, which may not behave the way you want.
Function output, such as that produced by the echo statements in the two functions here, is not printed directly; it is available to the caller. If the output is not assigned to a variable or otherwise used as part of the calling statement, the shell will attempt to execute it rather than display it.
===== Subsets and substrings =====
The general form of this expansion is ${PARAMETER:OFFSET:LENGTH}, where the LENGTH parameter is optional. So if you want to select only a particular subset of the script's parameters, use the full version to say how many parameters to select: for example, ${@:4:3} refers to the 3 parameters starting at parameter 4, that is, parameters 4, 5 and 6. You can also use this expansion to select a single parameter beyond those directly accessible as $1 through $9; ${@:15:1} is a direct way of accessing parameter 15.
The expansion works with a single parameter as well as with the whole parameter set represented by $* or $@. In that case the parameter is treated as a string, with the numbers giving an offset and a length into it. For example, if the variable x has the value "some value", then
${x:3:5}
has the value "e val", as shown in Listing 9.
Listing 9. Substrings of shell parameter values
[ian@pinguino ~]$ x="some value"
[ian@pinguino ~]$ echo "${x:3:5}"
e val
===== Length =====
You already know that $# gives the number of parameters, and that the ${PARAMETER:OFFSET:LENGTH} expansion applies to single parameters as well as to $* and $@, so it should be no surprise that a similar construct, ${#PARAMETER}, gives the length of a single parameter. The simple testlength function in Listing 10 illustrates this; try it yourself.
Listing 10. Parameter length
[ian@pinguino ~]$ testlength () { for p in "$@"; do echo ${#p};done }
[ian@pinguino ~]$ testlength 1 abc "def ghi"
1
3
7
===== Pattern matching =====
Parameter expansion also includes pattern matching, using the same wildcards as filename expansion, or globbing. Note: this is not the regular expression matching that grep uses.
Table 2. Shell expansion pattern matching	Expansion / Purpose
${PARAMETER#WORD}	The shell expands WORD as in filename expansion and removes the shortest matching pattern, if any, from the beginning of the expanded value of PARAMETER. Use @ or * to remove the pattern from each parameter in the list.
${PARAMETER##WORD}	As above, but removes the longest matching pattern from the beginning rather than the shortest.
${PARAMETER%WORD}	The shell expands WORD as in filename expansion and removes the shortest matching pattern, if any, from the end of the expanded value of PARAMETER. Use @ or * to remove the pattern from each parameter in the list.
${PARAMETER%%WORD}	As above, but removes the longest matching pattern from the end rather than the shortest.
${PARAMETER/PATTERN/STRING}	The shell expands PATTERN as in filename expansion and replaces the longest matching pattern, if any, in the expanded value of PARAMETER. To anchor the pattern at the beginning of the expanded value, prefix PATTERN with #; to anchor it at the end, prefix it with %. If STRING is null, the trailing / may be omitted and the match is deleted. Use @ or * to perform the substitution on each parameter in the list.
${PARAMETER//PATTERN/STRING}	As above, but performs the substitution on all matches rather than just the first.
Listing 11 shows some basic uses of pattern-matching expansion.
Listing 11. Pattern matching examples
[ian@pinguino ~]$ x="a1 b1 c2 d2"
[ian@pinguino ~]$ echo ${x#*1}
b1 c2 d2
[ian@pinguino ~]$ echo ${x##*1}
c2 d2
[ian@pinguino ~]$ echo ${x%1*}
a1 b
[ian@pinguino ~]$ echo ${x%%1*}
a
[ian@pinguino ~]$ echo ${x/1/3}
a3 b1 c2 d2
[ian@pinguino ~]$ echo ${x//1/3}
a3 b3 c2 d2
[ian@pinguino ~]$ echo ${x//?1/z3}
z3 z3 c2 d2
===== Putting it together =====
Before covering the remaining topics, let's look at a practical example of parameter handling. I build the developerWorks author package (see Resources for information; the scripts run on Linux systems using bash). We store the various files needed in subdirectories of a developerworks/library library. The latest released version of the library is 5.7, so the schema files are in developerworks/library/schema/5.7, the XSL files in developerworks/library/xsl/5.7, and the sample templates in developerworks/library/schema/5.7/templates. Clearly, a single parameter giving the version (here 5.7) is all the script needs to build the paths to all these files, so the -v parameter the script accepts must have a value. The parameter is validated later by building the path and checking that it exists with [ -d "$pathname" ].
This works well for production builds, but during development the files are stored in different directories:
developerworks/library/schema/5.8/archive/test-5.8/merge-0430
developerworks/library/xsl/5.8/archive/test-5.8/merge-0430 和
developerworks/library/schema/5.8/archive/test-5.8/merge-0430/templates-0430
The current version in these directories is 5.8, and 0430 is the date of the latest test version.
To handle this, I added a -p parameter carrying a piece of supplementary path information, archive/test-5.8/merge-0430. Now I, or someone else, might forget the leading or the trailing slash, and some Windows users might use backslashes instead of forward slashes, so I decided to handle these cases in the script. You will also notice that the path to the templates directory contains the date twice, so the date 0430 has to be stripped off at run time.
Listing 12 shows the code that handles the two parameters and cleans up the partial path according to these requirements. The value of the -v option is stored in the ssversion variable, the cleaned-up -p value in pathsuffix, and the date (along with its leading hyphen) in datesuffix. The comments explain what each step does. Even in a piece of script this small you can find several parameter expansions, including length, substring, pattern matching and pattern substitution.
Listing 12. Parsing the parameters for the developerWorks author package build
while getopts ":v:p:" optval "$@"
do
case $optval in
"v")
ssversion="$OPTARG"
;;
"p")
# Convert any backslashes to forward slashes
pathsuffix="${OPTARG//\\//}"
# Ensure this is a leading / and no trailing one
[ ${pathsuffix:0:1} != "/" ] && pathsuffix="/$pathsuffix"
pathsuffix=${pathsuffix%/}
# Strip off the last hyphen and what follows
dateprefix=${pathsuffix%-*}
# Use the length of what remains to get the hyphen and what follows
[ "$dateprefix" != "$pathsuffix" ] && datesuffix="${pathsuffix:${#dateprefix}}"
;;
*)
errormsg="Unknown parameter or option error with option - $OPTARG"
;;
esac
done
As with most things in Linux, and probably in programming generally, this is not the only solution to the problem, but it does show a more realistic use of the expansions you have learned.
===== Default values =====
In the previous section you saw how to assign option values to variables such as ssversion or pathsuffix. There, an empty value is detected later, and an empty path suffix is acceptable for a production build. But what if you need to assign a default value to a parameter that was not specified? The shell expansions shown in Table 3 help you do that.
Table 3. Shell expansions for default values	Expansion / Purpose
${PARAMETER:-WORD}	If PARAMETER is unset or null, the shell expands WORD and substitutes the result. The value of PARAMETER is not changed.
${PARAMETER:=WORD}	If PARAMETER is unset or null, the shell expands WORD and assigns the result to PARAMETER; this value is then substituted. Positional parameters and special parameters may not be assigned this way.
${PARAMETER:?WORD}	If PARAMETER is unset or null, the shell expands WORD and writes the result to standard error; if there is no WORD, a default message is written. A non-interactive shell then exits.
${PARAMETER:+WORD}	If PARAMETER is unset or null, nothing is substituted; otherwise the shell expands WORD and substitutes the result.
Listing 13 demonstrates these expansions and the differences between them.
Listing 13. Substituting for null or unset variables
[ian@pinguino ~]$ unset x;y="abc def"; echo "/${x:-'XYZ'}/${y:-'XYZ'}/$x/$y/"
/'XYZ'/abc def//abc def/
[ian@pinguino ~]$ unset x;y="abc def"; echo "/${x:='XYZ'}/${y:='XYZ'}/$x/$y/"
/'XYZ'/abc def/'XYZ'/abc def/
[ian@pinguino ~]$ ( unset x;y="abc def"; echo "/${x:?'XYZ'}/${y:?'XYZ'}/$x/$y/" )\
> >so.txt 2>se.txt
[ian@pinguino ~]$ cat so.txt
[ian@pinguino ~]$ cat se.txt
-bash: x: XYZ
[ian@pinguino ~]$ unset x;y="abc def"; echo "/${x:+'XYZ'}/${y:+'XYZ'}/$x/$y/"
//'XYZ'//abc def/
===== Passing parameters on =====
There are some subtleties to parameter passing that can trip you up if you are not careful. You have seen the importance of quoting and the effect of quotes on $* and $@; now consider the following example. Suppose you want a script or function that operates on all the files or directories in the current working directory. To illustrate, consider the ll-1.sh and ll-2.sh scripts shown in Listing 14.
Listing 14. Two example scripts
#!/bin/bash
# ll-1.sh
for f in "$@"
do
ll-2.sh "$f"
done
#!/bin/bash
ls -l "$@"
The ll-1.sh script simply passes each of its parameters in turn to ll-2.sh, and ll-2.sh produces a long directory listing of the parameter passed to it. My test directory contains two empty files, "file1" and "file 2". Listing 15 shows the output.
Listing 15. Running the scripts - 1
[ian@pinguino test]$ ll-1.sh *
-rw-rw-r-- 1 ian ian 0 May 16 15:15 file1
-rw-rw-r-- 1 ian ian 0 May 16 15:15 file 2
So far, so good. But if you forget the * argument, the script does nothing at all; unlike the ls command, it does not default to the contents of the current working directory. A simple fix is to have ll-1.sh check for this condition when it is given no input, and use the output of the ls command to generate the input for ll-2.sh. Listing 16 shows one possible solution.
Listing 16. Revised ll-1.sh
#!/bin/bash
# ll-1.sh - revision 1
for f in "$@"
do
ll-2.sh "$f"
done
[ $# -eq 0 ] && for f in "$(ls)"
do
ll-2.sh "$f"
done
Note that we carefully quoted the result of the ls command to make sure that "file 2" would be handled correctly. Listing 17 shows the results of running the new ll-1.sh with and without the *.
Listing 17. Running the scripts - 2
[ian@pinguino test]$ ll-1.sh *
-rw-rw-r-- 1 ian ian 0 May 16 15:15 file1
-rw-rw-r-- 1 ian ian 0 May 16 15:15 file 2
[ian@pinguino test]$ ll-1.sh
ls: file1
file 2: No such file or directory
Strange, isn't it? Passing parameters, particularly when they are the output of a command, can be tricky to handle. The error message hints that the file names were separated by a newline character, which gives us the clue. There are many ways to deal with this, but one simple way is to use the read builtin, as shown in Listing 18. Try it yourself.
Listing 18. Revised ll-1.sh - revision 2
#!/bin/bash
# ll-1.sh - revision 2
for f in "$@"
do
ll-2.sh "$f"
done
[ $# -eq 0 ] && ls | while read f
do
ll-2.sh "$f"
done
The point of this example is that attention to detail, and testing with some unusual inputs, makes scripts more reliable. Happy scripting!
===== Conclusion =====
If you want to learn more about bash scripting on Linux, read the tutorial "LPI exam 102 prep, Topic 109: Shells, scripts, programming and compiling", from which some of this content is excerpted. To learn about some other commands you can use to analyze text such as parameter values, see the tutorial "LPI exam 101 prep: GNU and UNIX commands"; you can find both, and more, in the Resources below.
Resources
"LPI exam 102 prep, Topic 109: Shells, scripts, programming and compiling" (developerWorks, January 2007), part of the larger LPI exam prep tutorial series, which covers Linux fundamentals and material needed for system administrator certification.
Other developerWorks articles on using Bash:
Bash by example, Part 1: Fundamental programming in the Bourne again shell (bash)
Bash by example, Part 2: More bash programming fundamentals
Bash by example, Part 3: Exploring the ebuild system
System Administration Toolkit: Get the most out of bash
Working in the Bash shell
"Shell Command Language" defines the shell command language as specified by The Open Group and IEEE.
About the author
Ian Shields
Ian Shields works on a multitude of Linux projects for the developerWorks Linux zone. He is a Senior Programmer at IBM in Research Triangle Park, NC. He joined IBM in Canberra, Australia, as a Systems Engineer in 1973, and has since worked on communications systems and pervasive computing in Montreal, Canada, and RTP, NC. He has several patents and has published several papers. His undergraduate degree is in pure mathematics and philosophy from the Australian National University. He has an M.S. and a Ph.D. in computer science from North Carolina State University.

View File

@@ -0,0 +1,207 @@
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-24T20:46:18+08:00
====== 2 ======
Created Saturday 24 December 2011
http://rtornados.bokee.com/3557989.html
Special parameters:
* **$?** : the exit status of the most recently executed foreground pipeline.
	$ echo $?
	0
* **$-** : the option flags of the current shell (e.g. himBH in a typical interactive bash).
	$ echo $-
* **$$** : the shell's own process ID.
	# ps ax | grep bash
	1000 -1 23:42:06 /usr/bin/bash
*************************
Shell variables: bash has many built-in variables, such as PATH, HOME, ENV and so on. These built-in variables are described one by one in a separate section.
Functions:
	[ function ] name () { list; }
A function's arguments are its positional parameters.
Another way bash makes your work easier is command aliases. An alias is usually an abbreviation for another command, used to cut down on typing. For example, if you often have to type the following command, you may want to define an alias for it:
cd /usr/X11/lib/X11/fvwm/sample-configs
To create an alias named goconfig for this long command, type the following at the bash prompt:
alias goconfig='cd /usr/X11/lib/X11/fvwm/sample-configs'
Now, until you exit bash, typing goconfig has the same effect as the long command. To remove the alias, use:
unalias goconfig
Here are some aliases many users find handy; you can put them in your .profile to work more efficiently:
alias ll='ls -l'
alias log='logout'
alias ls='ls -F'
Note: when defining an alias there must be no spaces around the equals sign, otherwise the shell cannot tell what you mean. Quotes are needed only when the command contains spaces or special characters.
Input redirection
Input redirection changes the source of a command's input. Some commands get all the information they need on the command line: with rm, for instance, you tell rm on the command line which files to delete. Other commands need more extensive input, and that input may be a file. The wc command, for example, counts the characters, words, and lines in the input given to it. If you just type wc, it waits for you to tell it what to count, and bash appears to hang: everything you type shows up on the screen, but nothing else happens. That is because wc is collecting its input from you. If you press Ctrl-D, the results of wc are written to the screen. If instead you give a file name as an argument, as in the example below, wc reports the character, word, and line counts of that file:
wc day.c
802 12423 342134 day.c
To make the file wc's input via redirection instead:
wc < day.c
802 12423 342134
Here the input is redirected from day.c.
Input redirection is not used very often, because most commands accept an input file name as an argument. Still, when a command does not take a file name argument but the input you need lives in an existing file, input redirection solves the problem.
Output redirection
Output redirection is more common than input redirection. It sends a command's output to a file instead of the screen, and it is useful in many situations: if a command produces more output than fits on the screen, you can redirect it to a file and open the file later in a text editor; you can use it to save a command's output; and you can use it to pass one command's output to another command via a file. (A simpler way to feed one command's output to another is the pipe symbol.) The output redirection symbol is '>'.
Note: a good way to remember the input/output redirection symbols is to think of < as a funnel whose small end points at the command that needs input (the command receiving input is on the left of <), and of > as a funnel whose large end points away from the command producing output.
Redirect the output of ls to a file: ls > log.txt
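As a runnable recap of the redirection operators described above (the file names here are only examples, not from the original notes):

```shell
echo "first line"  > log.txt          # '>' truncates log.txt, then writes stdout to it
echo "second line" >> log.txt         # '>>' appends instead of truncating
wc -l < log.txt                       # '<' feeds log.txt to wc's standard input
ls /no/such/dir 2> err.txt || true    # '2>' redirects standard error separately
rm -f log.txt err.txt                 # clean up the example files
```

Note that '>' clobbers an existing file; in bash, set -o noclobber makes it refuse to overwrite.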
Pipes: a pipe connects a series of commands.
The first command's output is passed through the pipe to the second command as its input, the second command's output becomes the third command's input, and so on; only the output of the last command in the pipeline appears on the screen (or goes into a file, if the command line uses output redirection). A pipeline is built with the pipe symbol |. The following is an example pipeline:
cat day.c | grep "a" | wc -l
Result: 4. How it works: cat lists the file's contents, feeding day.c into grep; the lines grep finds containing the character a are fed into wc, which counts them.
Job control:
Job control lets you control the behavior of currently running processes. In particular, you can suspend a running process and resume it later. bash keeps track of all the processes it starts, and you can suspend a running process or resume a suspended one at any point in its lifetime.
Pressing Ctrl-Z suspends a running process. The bg command resumes a suspended process in the background; conversely, fg resumes it in the foreground. These commands are often used when a user meant to run something in the background but accidentally started it in the foreground. When a command runs in the foreground, it prevents the user from interacting with the shell until the command finishes. That is usually not a problem, because most commands finish quickly. If a command will take a long time, we usually run it in the background so we can keep entering other commands in the foreground.
For example, suppose you enter this command:
find / -name "test" > find.out
It searches the whole file system for files named test and saves the results in a file called find.out. Run in the foreground, your shell will be unusable for seconds or even minutes, depending on the size of the file system. If you don't want that, you can then type:
Ctrl-Z
bg
The find command is first suspended and then continues running in the background, and you get your bash prompt back immediately.
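In a script, the interactive Ctrl-Z-then-bg dance is usually replaced by starting the job in the background directly; a minimal sketch (the sleep stands in for a long command such as find):

```shell
sleep 2 &              # '&' plays the role of Ctrl-Z followed by bg
bgpid=$!               # $! holds the PID of the most recent background job
echo "shell is free while job $bgpid runs"
wait "$bgpid"          # block until the background job finishes
echo "background job finished with status $?"
```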
Some of the most useful bash built-in commands:
alias: define a bash alias.
bg: resume a suspended process in the background.
cd: change the current working directory.
exit: terminate the shell.
export: make a variable's value visible to all child processes of the current shell.
fc: edit commands in the history list.
fg: resume a suspended process in the foreground.
help: display help for bash built-in commands.
kill: terminate a process.

View File

@@ -0,0 +1,134 @@
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-24T17:22:11+08:00
====== IFS and positional parameters ======
Created Saturday 24 December 2011
http://wiki.bash-hackers.org/syntax/expansion/wordsplit
===== The default value of IFS =====
[geekard@geekard ~]$ set | grep IFS   # The default IFS is space, TAB, and newline; note that it is written in ANSI-C quoting, see the quoting section of the bash reference manual.
IFS=__$' \t\n'__
These characters are also assumed when IFS is __unset__. When IFS is empty (nullstring), **no word splitting** is performed at all.
* When IFS is unset, bash splits words using the default IFS value.
* When IFS is empty (e.g. IFS=), bash performs no word splitting on expansion results at all.
===== Word splitting (only on unquoted expansion results) =====
After performing the expansions below (three kinds) on a command line, bash splits __the text those expansions produced__ into words, using the characters in the IFS variable (note: **unquoted whitespace always separates words**). Word splitting applies __only to unquoted__ expansion results.
Word splitting occurs once any of the following expansions are done (and only then!)
* Parameter expansion
* Command substitution
* Arithmetic expansion
Bash will scan the results of these expansions for special** IFS characters** that mark word boundaries. This is only done on
__results that are not double-quoted__!
When a null-string (e.g., something that before expanded to »nothing«) is found, it is removed, unless it is quoted ('' or ""). When splitting produces an empty string (this generally happens when the expansion result contains consecutive IFS characters, e.g. IFS=":" and the result is a b:c::d), the empty string is removed, unless it is quoted (a b:c:"":d, in which case the empty word survives splitting).
Without any expansion beforehand, Bash won't perform word splitting! In this case, the initial token parsing is solely responsible. IFS-based word splitting __only happens after an unquoted expansion__; in all other cases bash splits the command line into words using whitespace.
===== Examples =====
[geekard@geekard ~]$ sp="a b"
[geekard@geekard ~]$ echo $sp #等效为echo a b; echo参数个数为2
a b
[geekard@geekard ~]$ echo "$sp" #等效为echo "a b"; echo参数个数为1
a b
[geekard@geekard ~]$ IFS=":"
[geekard@geekard ~]$ echo $sp #等效为echo a b; 参数个数为2注意a与b间的空格是__sp字符串自带的__。
a b
[geekard@geekard ~]$ echo "$sp" #等效为echo "a b"
a b
[geekard@geekard ~]$ sp="a:b" #sp__实际保存的值为a:b(bash会自动去掉引号)__
[geekard@geekard ~]$ echo $sp #bash先将$sp进行参数替换为a:b, 由于没有引号故接着用IFS对替换后的结果再次分词(word)故结果为a b;
a b
[geekard@geekard ~]$ echo "$sp" #bash想将$sp进行参数替换为a:b, __由于有引号故不再用IFS对替换后的结果分词__。
a:b
[geekard@geekard ~]$
[geekard@geekard ~]$ sp="a 'b c' d" #sp实际保存的值为a 'b c' d外层的引号自动去掉但内层不能去掉。
[geekard@geekard ~]$ echo $sp
a 'b c' d
[geekard@geekard ~]$
[geekard@geekard ~]$ set a "b c" d #位置参数的实际保存形式为: $1=a; $2=b c; $3=d
[geekard@geekard ~]$ echo $1,$2,$3
a,b c,d
[geekard@geekard ~]$ echo $* # $*值为a空格+b空格c+空格da后d前的空格是shell区分单词的空白符与IFS无关。__由于没有使用引号__故shell使用IFS对这个结果进行__进一步分词__。注意后面会将bc中空格换为冒号可以验证这步。
a b c d
[geekard@geekard ~]$ echo $@ # 同上
a b c d
[geekard@geekard ~]$ echo "$*" #$*被同上替换但由于外围有引号故shell__不再用IFS对其分词__。所以结果等效为所有位置参数组成的__一个字符串__。同理后面将验证。
a b c d
[geekard@geekard ~]$ echo "$@" #bash对"$@"形式做了特殊处理:"$@"="$!""$2""$3",然后按正常流程处理即各参数配替换:"$@"="a""b c""d"然后再对各替换结果按IFS进行分词。后面将验证。
a b c d
[geekard@geekard ~]$
===== Verification =====
[geekard@geekard ~]$ set a b:c d
[geekard@geekard ~]$ IFS=":"
[geekard@geekard ~]$ echo **"$2"** #bash__不会对__含有引号的参数扩展结果用IFS进一步分词。
b:c
[geekard@geekard ~]$ echo $1,**$2**,$3 #在对没有引号的执行参数扩展后(但是__带引号的$*和$@是特殊情况__)bash会对扩展结果使用IFS进行分词分开的词会以空白符(一般是空格)分开同时外围不带引号。
a,__b c__,d
[geekard@geekard ~]$ echo $* #原理同上bash会对扩展结果a b:c d使用IFS进一步分词然后用空格分割得到的各个词。
a b c d
[geekard@geekard ~]$ for var in $*; do echo $var;done #进一步验证结果是空格分割的各个词且外围不带引号循环执行了4次
a
b
c
d
[geekard@geekard ~]$ echo $@ #原理及解释同上
a b c d
[geekard@geekard ~]$ for var in $@; do echo $var;done #解释同上
a
b
c
d
[geekard@geekard ~]$ echo "$*" #带引号的$*会被特殊处理对各个位置参数进行正常的参数扩展然后用IFS对各扩展结果进行分词最对得到的各个词__用IFS的第一个字符连接在一起组成一个新的词__。
a:b:c:d
[geekard@geekard ~]$ for var in "$*"; do echo $var;done #循环执行了一次故__"$*"代表一个词__在执行echo $var时bash会对$var进行参数扩展然后对扩展结果用IFS分词最后再将各个词用空格连接起来(注意,连接后各个词是独立的,而不是一个新词)。所以输出中没有冒号。
a b c d
[geekard@geekard ~]$ echo "$@" #带引号的$@也会被特殊处理__先将各位置参数外加引号__然后对各个位置参数进行正常的参数扩展。由于有引号bash不会对各扩展结果进一步用IFS分词。"$@"的结果为:"a" ''b:c'' "d"
a b:c d
[geekard@geekard ~]$
[geekard@geekard ~]$ for var in "$@"; do echo $var;done #证明"$@"结果的确为三个词,在执行第二次循环时,$var的替换结果"b:c"会进一步用IFS进行分词结果为b c。
a
b c
d
[geekard@geekard ~]$ for var in "$@"; do echo "$var" ;done #加了引号后替换结果不会被用IFS进一步分词显示结果为b:c进一步验证了上面的说法。
a
b:c
d
[geekard@geekard ~]$
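The transcript above can be condensed into a small runnable script; the helper function count is ad hoc (introduced here for illustration, not part of bash):

```shell
set -- a "b:c" d     # positional parameters: $1=a  $2=b:c  $3=d
IFS=":"

count() { echo $#; } # prints how many arguments it received

count $*             # unquoted: "b:c" is re-split on ':' into b and c -> 4 words
count "$@"           # one word per parameter, no re-splitting -> 3 words
count "$*"           # all parameters joined into one word -> 1 word
echo "$*"            # joined with the first IFS character: a:b:c:d
```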
===== How bash parses a command line =====
1. Token parsing: split the command line into tokens separated by a fixed set of metacharacters: SPACE, TAB, NEWLINE, __;__ , (, ), <, >, __|__, __&__.
Token types include words, keywords, I/O redirection operators, and semicolons.
2. Examine the first token of each command to see whether it is a __keyword__ without quotes or backslashes. If it is an opening keyword (if or another control-structure opener, function, { or (), the command is actually a __compound command__. The shell handles compound commands internally, reads the next command, and repeats the process. If the keyword is not a __compound-command opener__ (e.g. then, or another keyword that appears in the middle of a control structure), a syntax error is signalled.
3. Check the first word of each command against the __alias list__. On a match, substitute the alias definition and go back to step 1; otherwise continue with step 4. This scheme allows recursive aliases and also keyword aliases, e.g. alias procedure=function.
4. Perform **brace expansion**; for example, a{b,c} becomes ab ac.
5. If ~ appears at the start of a word, replace it with $HOME. Replace __~user__ with user's home directory.
6. **Parameter (variable) substitution**: perform it on every expression that starts with $. Note that braced parameter expansion takes **many forms**:
${foo:-bar} ${foo:=bar} ${foo:?bar} ${foo:+bar}
7. **Command substitution**: on expressions of the form $(string); this involves __nested command-line processing__.
8. **Arithmetic substitution**: evaluate arithmetic expressions of the form $((string)).
9. Split the results of parameter, command, and arithmetic __substitution into words again__, this time using the characters in __$IFS__ as delimiters rather than the **metacharacter set** of step 1.
10. **Wildcard expansion**: perform pathname expansion on *, ?, and [...] patterns.
11. Look the command up according to the command precedence table (skipping aliases).
12. After setting up I/O redirection and the other operations, execute the command.
**function name ----> alias -----> built-in command -----> external command**
Summary: bash (ksh) runs a command through: parse the command - evaluate variables - command substitution (`` and $( )) - redirection - wildcard expansion - path lookup - execute.
About quoting:
1. Single quotes __skip the first 10 steps__; you cannot put a single quote inside single quotes.
2. Double quotes skip steps 1-5 and 9-10; that is, only steps 6-8 are performed.
In other words, double quotes suppress pipe characters, aliases, ~ substitution, wildcard expansion, and splitting into words at delimiters.
Single quotes inside double quotes have no effect, but double quotes allow parameter substitution, command substitution, and arithmetic evaluation. A double quote can appear inside double quotes if escaped with "\"; you __must also escape $, `, \__.

View File

@@ -0,0 +1,94 @@
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-23T12:50:22+08:00
====== Parameter expansion (substitution) ======
Created Friday 23 December 2011
===== Forms of parameter expansion =====
* ${#parameter}: expands to the __character length__ of the parameter's value.
* ${parameter:+words}: if parameter is set and non-empty, the result is the __expansion of words__; otherwise it is empty (parameter itself is unchanged).
* ${parameter:-words}: if parameter is unset or empty, the result is the __expansion of words__ (parameter unchanged); otherwise it is the value of parameter.
* ${parameter:=words}: if parameter is unset or empty, the result is the __expansion of words__ and parameter is also assigned that value; otherwise it is the value of parameter.
* ${parameter:?words}: if parameter is unset or empty, print parameter: __words (after expansion)__ and **abort** the current command and any following commands; otherwise the result is the value of parameter.
* ${parameter:offset}
* ${parameter:offset:length}: substring expansion; the substring of parameter starting at offset (counted from 0), of length length.
* ${parameter%pattern}: delete the __shortest__ match of pattern from the **right end** of the value; the result is the value **after the deletion**.
* ${parameter%%pattern}: delete the __longest__ match of pattern from the **right end** of the value.
* ${parameter#pattern}: delete the __shortest__ match of pattern from the **left end** of the value.
* ${parameter##pattern}: delete the __longest__ match of pattern from the **left end** of the value.
* ${parameter/pattern/string}: replace the first match of pattern in the value with string. If pattern begins with #, it must match at the beginning; if it begins with %, it must match at the end (the pattern must match the removed text __in full__). If string is empty, the trailing / may be omitted and the match is deleted.
* ${PARAMETER__//__PATTERN/STRING}: perform the replacement on **all** matches, not just the first.
**Note**: words and pattern above may use any **pattern-matching** syntax bash supports; bash __expands words first__ and then substitutes the result for ${.....}. The pattern must match the removed text __in full__; the four deletion forms are very handy for **processing file names and path names**.
If the colon is omitted in the forms above, the condition tested is only whether parameter is __unset__, not whether it is empty.
[geekard@geekard ~]$ unset i; export bar=test_bar #将i变量从环境中删去这样i为未初始化变量
[geekard@geekard ~]$ echo ${i:-bar}, i=$i #如果i变量未初始化或初始化为空则参数扩展为__字符串bar____i不变__。
**bar,** i=
[geekard@geekard ~]$ echo ${i:=bar}, i=$i #如果i变量未初始化或初始化为空则参数扩展为__字符串bar__i变量__设置为字符串bar__。
**bar**, i=**bar**
[geekard@geekard ~]$ unset i; echo ${i:?bar}, i=$i; date #如果变量i未初始化或初始化为空则显示**i: =bar**同时__停止当前命令和后续命令的执行__。
bash:** i: bar**
[geekard@geekard ~]$ echo ${i:+bar}, i=$i #如果变量i存在且不为空则参数扩展为字符串__bar__的值否则扩展为__空值__。
, i=
[geekard@geekard ~]$ export i=i_test; echo ${i:+bar}, i=$i
bar, i=i_test
[geekard@geekard ~]$
[geekard@geekard ~]$ **path=/home/geekard/test/line** echo ${path:-No var path} #${foo}中的变量名foo一定要为当前执行__环境中的变量(不一定是环境变量)__。
No var path
[geekard@geekard ~]$ i=test_i
[geekard@geekard ~]$ echo ${i:+__ha aha * / __}
ha aha **bin codes Desktop documents download dumy.c musics notes pictures ppc softwares tmp video vms www** /
[geekard@geekard ~]$ echo ${i:+__"ha aha * /" __}
ha aha * /
[geekard@geekard ~]$
[geekard@geekard ~]$ path=/home/geekard/test/test_dir/test_file
[geekard@geekard ~]$ echo ${path%**/test**} #$path字符串中自右向左并__没有完整匹配__模式/test的字符串所以__没有字符串被删除__。
/home/geekard/test/test_dir/test_file
[geekard@geekard ~]$ echo ${path%/test/} #同上
/home/geekard/test/test_dir/test_file
[geekard@geekard ~]$ echo ${path%__test*__} #匹配最短的字符串为test_file
/home/geekard/test/test_dir/
[geekard@geekard ~]$ echo ${path%__*test*__} #同上,注意:最左边的*匹配__0个__字符(因为是最少)。
/home/geekard/test/test_dir/
[geekard@geekard ~]$ echo ${path%__/*dir*__}
/home/geekard/test
[geekard@geekard ~]$ echo ${path%?dir*} # __只要是bash的模式匹配字符和规则都适用__。
/home/geekard/test/test
[geekard@geekard ~]$ echo ${path%**[0-9a-zA-Z]dir***} #没有匹配该规则的字符串
/home/geekard/test/test_dir/test_file
[geekard@geekard ~]$ echo ${path%[0-9a-zA-Z_____]dir*}
/home/geekard/test/test
[geekard@geekard ~]$ echo ${path%[__^__0-9a-zA-Z]dir*}
/home/geekard/test/test
[geekard@geekard ~]$ **path=/home/geekard/*/test_file**
[geekard@geekard ~]$ echo ${path%**/test***}
/home/geekard/bin/ /home/geekard/codes/ /home/geekard/Desktop/ /home/geekard/documents/ /home/geekard/download/ /home/geekard/musics/ /home/geekard/notes/ /home/geekard/pictures/ /home/geekard/ppc/ /home/geekard/softwares/ /home/geekard/tmp/ /home/geekard/video/ /home/geekard/vms/ /home/geekard/www/
[geekard@geekard ~]$ echo $file
the test file is test_file
[geekard@geekard ~]$ echo __${#file}__
26
[geekard@geekard ~]$ echo ${file/test/} #替换字符为空的话,最后一个/可以省略
the file is test_file
[geekard@geekard ~]$ echo ${file/test/__/__}
the / file is test_file
[geekard@geekard ~]$ echo ${file/test/**1test/**} #默认替换从左到右找到的__第一个__符合模式的字符串
the 1test/ file is test_file
[geekard@geekard ~]$ echo ${file/test/**1test**}
the 1test file is test_file
[geekard@geekard ~]$ echo ${file__//__test/demo} #替换__所有__符合模式的字符串
the demo file is demo_file
[geekard@geekard ~]$ echo ${file/__#__test/demo} #模式前加#时模式串必须__从左到右完整匹配__。
the test file is test_file
[geekard@geekard ~]$ echo ${file/__#*test__/demo} #完整匹配,匹配**尽可能多**的字符。
demo_file
[geekard@geekard ~]$ echo ${file/%*test/demo}
the test file is test_file
[geekard@geekard ~]$ echo ${file/%*test*/demo}
demo
[geekard@geekard ~]$ echo ${file/%test*/demo}
the demo
[geekard@geekard ~]$

View File

@@ -0,0 +1,315 @@
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-22T16:55:04+08:00
====== Command-line parsing order ======
Created Thursday 22 December 2011
http://blog.chinaunix.net/space.php?uid=8746761&do=blog&id=2015319
1. Token parsing: split the command line into tokens separated by a fixed set of metacharacters: SPACE, TAB, NEWLINE, __;__ , (, ), <, >, __|__, __&__.
Token types include words, keywords, I/O redirection operators, and semicolons.
2. Examine the first token of each command to see whether it is a __keyword__ without quotes or backslashes. If it is an opening keyword (if or another control-structure opener, function, { or (), the command is actually a __compound command__. The shell handles compound commands internally, reads the next command, and repeats the process. If the keyword is not a __compound-command opener__ (e.g. then, or another keyword that appears in the middle of a control structure), a syntax error is signalled.
3. Check the first word of each command against the __alias list__. On a match, substitute the alias definition and go back to step 1; otherwise continue with step 4. This scheme allows recursive aliases and also keyword aliases, e.g. alias procedure=function.
4. Perform **brace expansion**; for example, a{b,c} becomes ab ac.
5. If ~ appears at the start of a word, replace it with $HOME. Replace __~user__ with user's home directory.
6. **Parameter (variable) substitution**: perform it on every expression that starts with $. Note that braced parameter expansion takes **many forms**:
${foo:-bar} ${foo:=bar} ${foo:?bar} ${foo:+bar}
7. **Command substitution**: on expressions of the form $(string); this involves __nested command-line processing__.
8. **Arithmetic substitution**: evaluate arithmetic expressions of the form $((string)).
9. Split the results of parameter, command, and arithmetic __substitution into words again__, this time using the characters in __$IFS__ as delimiters rather than the **metacharacter set** of step 1.
10. **Wildcard expansion**: perform pathname expansion on *, ?, and [...] patterns.
11. Look the command up according to the command precedence table (skipping aliases).
12. After setting up I/O redirection and the other operations, execute the command.
**function name ----> alias -----> built-in command -----> external command**
Summary: bash (ksh) runs a command through: parse the command - evaluate variables - command substitution (`` and $( )) - redirection - wildcard expansion - path lookup - execute.
About quoting:
1. Single quotes __skip the first 10 steps__; you cannot put a single quote inside single quotes.
2. Double quotes skip steps 1-5 and 9-10; that is, only steps 6-8 are performed. **bash does not perform IFS word splitting on quoted expansion results.**
In other words, double quotes suppress pipe characters, aliases, ~ substitution, wildcard expansion, and splitting into words at delimiters.
Single quotes inside double quotes have no effect, but double quotes allow parameter substitution, command substitution, and arithmetic evaluation. A double quote can appear inside double quotes if escaped with "\"; you __must also escape $, `, \__.
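The two quoting levels can be checked directly; a minimal demonstration (not from the original article):

```shell
var='a:b'
IFS=":"
printf '%s\n' $var      # unquoted: expanded, then IFS-split into two words
printf '%s\n' "$var"    # double quotes: expanded, but never re-split
printf '%s\n' '$var'    # single quotes: no expansion at all, literal $var
```

This prints a and b on separate lines, then a:b, then the literal string $var.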
===== 1. Reading the command line =====
The shell's first step in interpreting a command is to read the command line, **parsing out spaces and tabs**. The command line can come from the terminal or from an ordinary text file such as a shell script. Reading ends when the shell encounters a semicolon ";", a background operator "&", a logical AND "&&", a logical OR "||", or __a newline character__. If it reads a __structured statement__ such as if/then, for, or while, the shell reads the entire structure to completion.
After reading a command or structured statement, the shell performs __syntactic analysis__ on it, breaking the statement into **a series of words and keywords (**__IFS is not used__** at this point)**. By default the shell assumes each word is a run of consecutive characters delimited by spaces or tabs, or a quoted string that may contain whitespace. The shell parses from the first character of the line to the end, __identifying the words__ in turn, including single-character tokens made of special characters such as "<", ">", "|", and "^". This parsing proceeds the same way __regardless of the IFS variable__; IFS is handled in a separate, later step. Alias substitution, brace expansion, and ~ expansion are also handled in this step.
===== 2. Echoing the input command =====
The second step is to check whether the __command-echo flag__ is set. If it is, the shell echoes the command or structure it has read to the **standard error output** of the user's terminal.
A child process can inherit its parent's environment and open files, but __not the command-echo flag__. Even if a given shell's echo flag is set, that shell cannot echo the commands processed by its child processes.
===== 3. Variable substitution =====
The third step is variable substitution, including positional-parameter substitution and special substitutions. A variable-substitution expression is "$" followed by a variable or parameter name, such as $var. The name may also be wrapped in braces "{}", as in ${var}. When referencing a variable, it is usually wise to put the substitution in double quotes, as in fname="${var}", except when the variable contains several words and the shell needs to process each word separately.
===== 4. Command substitution =====
The fourth step is command substitution. A command-substitution expression is a command statement wrapped in backquotes "`" or in $(...); the value of the expression is the output of that command.
===== Search precedence =====
1. The categories of Linux commands:
There are five: alias, keyword, function, built-in, and $PATH.
Basic order: the shell searches in the order alias -> keyword -> function -> built-in -> $PATH, on a first-match-wins basis. That is, if a command named mycmd exists in both alias and function form, the alias mycmd is the one that runs (though this is not absolute; a special case is discussed below).
Related commands: set +-h, hash, type, command, enable, builtin
1) The hash command:
First, the hash command (this is where the "not absolute" remark above comes in). hash keeps a cache of the paths of commands the user has typed in the current shell session, mainly to speed up command lookup. An example:
In the shell I type 7 commands: ls, find, pwd, ls, echo "Hello, world", mail, and if (note that ls runs twice). history then shows:
1 ls
2 find
3 pwd
4 ls
5 echo "Hello, world"
6 mail
7 if
Now running the hash command displays:
[ancharn@fc8 ~]$ hash
hits command
1 /bin/mail
2 /bin/ls
1 /usr/bin/find
Notice anything? The left column of the hash table is how many times each command has been used in the current shell session; the right column is the command's path. But the cache is missing three commands: if, pwd, and echo. Why? The important conclusion: (1) hash does not record functions or built-in commands (nor, in fact, aliases). Why not? Because they have no path: they do not live in any directory, they are loaded with the shell and live in memory, so there is nothing to gain by caching them for lookup speed.
But wait, you might say: wasn't ls recorded by hash? Right, good observation. ls is normally an alias in bash, so here is a second conclusion: (2) an alias whose definition includes a path is not recorded in hash; only aliases that do not specify a path are recorded. An example:
Here is the ls alias in my current shell (bash) environment:
[ancharn@fc8 //]$ alias ls
alias ls='ls --color=auto'
(Note: the "ls --color=auto" on the right does not specify a path such as /bin/ls.)
So, as you saw, I typed the ls command twice above (an alias for ls --color=auto), and hash recorded it. Now an example with the write command:
[ancharn@fc8 //]$ alias write
-bash: alias: write: not found
[ancharn@fc8 //]$ write
usage: write user [tty]
[ancharn@fc8 //]$ hash
hits command
1 /usr/bin/write
1 /bin/mail
2 /bin/ls
1 /usr/bin/find
The write command has no alias, so running write actually executes the binary /usr/bin/write found via the PATH variable; hash records write's path with one hit. Next I define a write alias that is write itself, but with the explicit path /usr/bin/write:
[ancharn@fc8 //]$ alias write=/usr/bin/write
[ancharn@fc8 //]$ alias write
alias write=/usr/bin/write
[ancharn@fc8 //]$ write
usage: write user [tty]
[ancharn@fc8 //]$ hash
hits command
1 /usr/bin/write
1 /bin/mail
2 /bin/ls
1 /usr/bin/find
See: write's hit count in the hash table is still 1. Note that once we define the write alias (with an explicit path), PATH is not searched any more. Why? Simple: the alias already names write's exact path!
Next, unalias write and redefine the alias without a path:
[ancharn@fc8 //]$ unalias write
[ancharn@fc8 //]$ alias write
-bash: alias: write: not found
[ancharn@fc8 //]$ alias write=write
[ancharn@fc8 //]$ alias write
alias write=write
[ancharn@fc8 //]$ write
usage: write user [tty]
[ancharn@fc8 //]$ hash
hits command
2 /usr/bin/write
1 /bin/mail
2 /bin/ls
1 /usr/bin/find
This time the write alias does not specify a path, so running write after defining the alias adds a hit to the hash table. Note that after defining a path-less write alias (compare with the example above), PATH is searched, which is why the hash hit count grew. Remember the conclusion: an alias that specifies a path is not recorded in hash; only path-less aliases are.
Also, since hash is a built-in command, use help hash for its documentation. Commonly used: hash -r empties the hash table, and hash -d name deletes the entry for one command. For example:
[ancharn@fc8 //]$ hash
hits command
3 /usr/bin/write
1 /bin/mail
2 /bin/ls
1 /usr/bin/find
Delete a specific entry:
[ancharn@fc8 //]$ hash -d ls
[ancharn@fc8 //]$ hash
hits command
3 /usr/bin/write
1 /bin/mail
1 /usr/bin/find
Empty the hash table:
[ancharn@fc8 //]$ hash -r
[ancharn@fc8 //]$ hash
hash: hash table empty
2) set +-h:
Everyone knows the set command; here we only care about set +-h. help set shows "-h Remember the location of commands as they are looked up.", i.e. remember command paths to make lookup faster. After typing set +h, hash reports:
[ancharn@fc8 //]$ set +h
[ancharn@fc8 //]$ hash
-bash: hash: hashing disabled
That is, "set +h" disables hashing and "set -h" enables it.
3) type:
This command reports which categories a command belongs to. For example:
[ancharn@fc8 //]$ type -a pwd
pwd is a shell builtin
pwd is /bin/pwd
pwd is both a built-in and present in the PATH variable.
[ancharn@fc8 //]$ type pwd
pwd is a shell builtin
Plain type commandname tells you which category would actually execute when the command is run.
4) command:
Its purpose: if a command such as gcc is both a function and a command in the PATH variable, then running gcc directly executes the function (by the search order), not the PATH command, while command gcc skips the function:
[ancharn@fc8 //]$ function gcc { echo "just a test for gcc"; }
[ancharn@fc8 //]$ gcc
just a test for gcc
[ancharn@fc8 //]$ command gcc
gcc: no input files
5) enable:
Run with no arguments, enable lists all built-in commands of the current shell; enable -n commandname disables that built-in in the current shell:
[ancharn@fc8 ~]$ type -a pwd
pwd is a shell builtin
pwd is /bin/pwd
[ancharn@fc8 ~]$ enable -n pwd
[ancharn@fc8 ~]$ type -a pwd
pwd is /bin/pwd
[ancharn@fc8 ~]$ enable pwd
[ancharn@fc8 ~]$ type -a pwd
pwd is a shell builtin
pwd is /bin/pwd
6) builtin
Runs a built-in command. For example:
[ancharn@fc8 ~]$ cd /var
[ancharn@fc8 var]$ function pwd { echo "just a test for pwd"; }
[ancharn@fc8 var]$ type -a pwd
pwd is a function
pwd ()
{
echo "just a test for pwd"
}
pwd is a shell builtin
pwd is /bin/pwd
(Note: pwd is now a function, a built-in, and a command in the PATH variable.)
[ancharn@fc8 var]$ pwd
just a test for pwd
[ancharn@fc8 var]$ builtin pwd // (Note: this directly executes the pwd built-in)
/var
Summary: we know the shell's search order is alias -> keyword -> function -> built-in -> $PATH, with two caveats: (1) hash does not record functions, built-ins, or aliases; (2) an alias that specifies a path is not recorded in hash, only path-less aliases are. And (3) do not forget that all of this a) depends on the particular shell, and b) is valid only in the current shell session. Remember that!!!
Now, a question to think about. Look at the following session:
[ancharn@fc8 var]$ function gcc { echo "just a test for gcc"; }
[ancharn@fc8 var]$ alias gcc=gcc
[ancharn@fc8 var]$ gcc
just a test for gcc
[ancharn@fc8 var]$ /usr/bin/gcc
gcc: no input files
[ancharn@fc8 var]$ alias gcc=/usr/bin/gcc
[ancharn@fc8 var]$ gcc
gcc: no input files
[ancharn@fc8 var]$
Why, after defining the gcc function, does running gcc behave differently depending on whether the gcc alias specifies the full /usr/bin/gcc path? By the order alias -> keyword -> function -> built-in -> $PATH, shouldn't the alias gcc run in both cases? Think about it!
Don't worry, the answer is given below. But do think about it first!
4. Examples of the categories:
* alias:
Aliases are usually defined in ~/.bashrc and /etc/bashrc; ~/.bashrc is for the user's own environment, while /etc/bashrc defines global aliases (for all users, provided their shell is bash). The relationship between these two files and how they are loaded is described below.
* Shell keyword:
Commands such as if, while, until, case, for.
* Function:
Example: define a function named pwd whose job is simply to print "my function pwd":
function pwd { echo "my function pwd"; }
Once defined, inspect it with set or type -a pwd; remove it with unset pwd.
* Shell built-in command:
The enable command lists all built-ins of the current shell; alternatively, the top of the man page of any built-in (e.g. man cd) lists all of them.
* PATH variable:
Defined in the files /etc/profile, /etc/profile.d/*.sh (POSIX), and ~/.bash_profile (bash).
The load order is: /etc/profile first (which invokes /etc/profile.d/*.sh), then ~/.bash_profile, which in turn sources ~/.bashrc, and ~/.bashrc finally sources /etc/bashrc.
1) To see the exact load order, add a line at the top and bottom of each of the four files, for example:
[ancharn@fc8 ~]$ cat ~/.bashrc
echo "start of ~/.bashrc"
if [ -f /etc/bashrc ] ; then
. /etc/bashrc
fi
alias ll='ls -l'
alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -i'
......
echo "end of ~/.bashrc"
Add the same to the other files, and when you log in you will see output such as:
start of /etc/profile
end of /etc/profile
start of ~/.bash_profile
start of ~/.bashrc
start of /etc/bashrc
end of /etc/bashrc
end of ~/.bashrc
end of ~/.bash_profile
From this output you can clearly see each file's load order and how they call one another (look at the start/end pairs).
2) The PATH variable and hash
Here is an example:
[ancharn@fc8 ~]$ echo $PATH
/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/ancharn/bin
First I put a script named test.sh in the /home/ancharn/bin directory:
[ancharn@fc8 bin]$ cat /home/ancharn/bin/test.sh
#!/bin/sh
# just test for PATH and hash
echo "This is my 1st shell script in /home/ancharn/bin directory."
# end
[ancharn@fc8 bin]$
Then run the test.sh script:
[ancharn@fc8 /]$ echo $PATH
/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/ancharn/bin
[ancharn@fc8 /]$ test.sh
This is my 1st shell script in /home/ancharn/bin directory.
[ancharn@fc8 /]$ hash
hits command
1 /home/ancharn/bin/test.sh
Next, create a file with the same name test.sh in /usr/bin:
[ancharn@fc8 /]$ cat /usr/bin/test.sh
#!/bin/sh
# just test for PATH and hash
echo "This is my 2nd shell script in /usr/bin directory."
# end
Run the test.sh script again:
[ancharn@fc8 /]$ test.sh
This is my 1st shell script in /home/ancharn/bin directory.
[ancharn@fc8 /]$ hash
hits command
2 /home/ancharn/bin/test.sh
What does this show? By the order of PATH (/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/ancharn/bin), /usr/bin would be searched before /home/ancharn/bin, but only when the hash table has no entry for the command. The /usr/bin/test.sh script was not executed because, before running test.sh, the shell consulted the hash cache and went on to run /home/ancharn/bin/test.sh; that is why the hits count grew by one while /usr/bin/test.sh never ran.
Now empty the hash table and run test.sh again:
[ancharn@fc8 /]$ hash -r
[ancharn@fc8 /]$ hash
hash: hash table empty
[ancharn@fc8 /]$ test.sh
This is my 2nd shell script in /usr/bin directory.
[ancharn@fc8 /]$ hash
hits command
1 /usr/bin/test.sh
Now it behaves as expected. So keep the PATH/hash relationship in mind.
Note: the differences between su, su -, bash --login, and bash --norc come down to whether a login shell is run; try echo $PATH after su versus su - and compare.
Now, the answer to the question above. The key point: with an alias like alias gcc=gcc, the search order alias -> keyword -> function -> built-in -> $PATH is unchanged, but a path-less alias such as gcc=gcc, once found, still has to resolve the gcc it points to, and it does so by continuing down the list: keyword -> function -> ... . With alias gcc=/usr/bin/gcc, on the other hand, the alias resolves directly to that specific file and all the later stages (keyword -> function -> built-in -> $PATH) are skipped. Keep this in mind.
Finally, when you experiment, verify in two groups, since no command can be both a keyword and a built-in:
1) pick a keyword such as while; define a while alias and function, and put a shell script named while in a directory on PATH;
2) pick a built-in such as pwd to verify.

View File

@@ -0,0 +1,7 @@
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T16:01:15+08:00
====== Named pipes ======
Created Sunday 26 February 2012

View File

@@ -0,0 +1,115 @@
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T14:16:30+08:00
====== Introduction to Named Pipes ======
Created Sunday 26 February 2012
http://www.linuxjournal.com/article/2156
Sep 01, 1997 By Andy Vaught in SysAdmin
A very useful Linux feature is __named pipes which enable different processes to communicate__.
One of the__ fundamental features__ that makes Linux and other Unices useful is the “pipe”.
__Pipes allow separate processes to communicate without having been designed explicitly to work together.__ This allows tools quite narrow in their function to be combined in complex ways.
A simple example of using a pipe is the command:
ls | grep x
When bash examines the command line, it finds the vertical bar character | that separates the two commands. Bash and other shells run both commands, connecting the output of the first to the input of the second. The ls program produces a list of files in the current directory, while the grep program reads the output of ls and prints only those lines containing the letter x.
The above, familiar to most Unix users, is an example of an __“unnamed pipe”__. The pipe exists only__ inside the kernel __and cannot be accessed by processes that created it, in this case, the bash shell. For those who don't already know, a parent process is the first process started by a program that in turn creates separate child processes that execute the program.
The other sort of pipe is a **“named” pipe**, which is sometimes called a __FIFO__. FIFO stands for “First In, First Out” and refers to the property that the order of bytes going in is the same coming out. The “name” of a named pipe is actually__ a file name within the file system__. Pipes are shown by ls as any other file with a couple of differences:
% ls -l fifo1
__p__rw-r--r-- 1 andy users 0 Jan 22 23:11 fifo1|
The p in the leftmost column indicates that fifo1 is a pipe. The rest of the permission bits control who can read or write to the pipe** just like a regular file.** On systems with a modern ls, the | character at the end of the file name is another clue, and on Linux systems with the color option enabled, fifo| is printed in red by default.
On older Linux systems, named pipes are created by the **mknod** program, usually located in the /etc directory. On more modern systems, __mkfifo__ is a standard utility. The mkfifo program takes one or more file names as arguments for this task and creates pipes with those names. For example, to create a named pipe with the name pipe1 give the command:
mkfifo pipe1
The simplest way to show how named pipes work is with an example. Suppose we've created pipe1 as shown above. In one virtual console, type:
ls -l > pipe1
and in another type:
cat < pipe1
Voila! The output of the command run on the** first console shows up on the second console**. Note that __the order__ in which you run the commands doesn't matter.
__The kernel automatically synchronizes the processes on the read and write ends of the pipe.__
If you haven't used virtual consoles before, see the article “Keyboards, Consoles and VT Cruising” by John M. Fist in the November 1996 Linux Journal.
If you watch closely, you'll notice that the first command you run appears to __hang__. This happens because the other end of the pipe is **not yet connected**, and so the kernel suspends the first process until the second process opens the pipe. In Unix jargon, the process is said to__ be “blocked”,__ since it is waiting for something to happen.
One very useful application of named pipes is to __allow totally unrelated programs to communicate with each other__. For example, a program that services requests of some sort (print files, access a database) could open the pipe for reading. Then, another process could make a request by opening the pipe and writing a command. That is, the “server” can perform a task on behalf of the “client”. Blocking can also happen if the client isn't writing, or the server isn't reading.
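A minimal sketch of such a server/client pair in one script (the FIFO path and the request text are invented for illustration):

```shell
fifo=/tmp/req.$$             # hypothetical pipe name
mkfifo "$fifo"

# "Server": blocks opening the FIFO until a client connects, then
# handles each line it receives until the writer closes the pipe (EOF).
while read -r request; do
    echo "served: $request"
done < "$fifo" &

echo "hello" > "$fifo"       # "client": write one request, then close
wait                         # the server loop exits at EOF
rm -f "$fifo"
```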
Pipe Madness
Create two named pipes, pipe1 and pipe2. Run the commands:
echo -n x | cat - pipe1 > pipe2 &
cat <pipe2 > pipe1
On screen, it will not appear that anything is happening, but if you run __top __(a command similar to__ ps__ for showing process status), you'll see that both cat programs are running like crazy copying the letter x back and forth in an endless loop.
After you press __ctrl-C __to get out of the loop, you may receive the message __“broken pipe”__. This error occurs **when a process writes to a pipe whose reading end has been closed.** Since the reader is gone, the data has __no place to go__. Normally, the writer finishes writing its data and closes the pipe. At that point, the reader sees the EOF (end of file) and the command completes.
Whether or not the “broken pipe” message is issued depends on events at the exact instant the ctrl-C is pressed. If the second cat has just read the x, pressing ctrl-C stops the second cat, pipe1 is closed and the first cat stops quietly, i.e., without a message. On the other hand, if the second cat is waiting for the first to write the x, ctrl-C causes pipe2 to close before the first cat can write to it, and the error message is issued. This sort of random behavior is known as a __“race condition”__.
===== Process Substitution =====
**Process substitution is an application of anonymous temporary pipes. Wherever a command expects the name of an input file you can write <(command producing output), and wherever it expects the name of an output file you can write >(command consuming input).**
Bash uses named pipes in a really neat way. Recall that__ when you enclose a command in parenthesis, the command is actually run in a “subshell”; __that is, the shell clones itself and the clone interprets the command(s) within the parenthesis. Since the outer shell is running only a single “command”, the output of a complete set of commands can be redirected as a unit. For example, the command:
__(ls -l; ls -l) >ls.out __
writes two copies of the current directory listing to the file ls.out.
**Process substitution **occurs when you put a < or > in front of the left parenthesis. For instance, typing the command:
cat <(ls -l)
results in the command ls -l executing in a subshell as usual, but__ redirects the output to a temporary named pipe__, which bash creates, names and later deletes. Therefore, __cat has a valid file name to read from,__ and we see the output of ls -l, taking one more step than usual to do so. Similarly, giving __>(commands)__ results in Bash naming a temporary pipe, which the commands inside the parenthesis read for input.
If you want to see whether two directories contain the same file names, run the single command:
**cmp <(ls /dir1) <(ls /dir2)**
The compare program cmp will see the names of two files which it will read and compare.
**Process substitution ** also makes the __tee command__ (used to view and save the output of a command) much more useful in that you can cause **a single stream of input to be read by multiple readers** without resorting to temporary files—bash does all the work for you. The command:
ls | tee >(grep foo | wc >foo.count) \
>(grep bar | wc >bar.count) \
| grep baz | wc >baz.count
counts the number of occurrences of foo, bar and baz in the output of ls and writes this information to three separate files. Process substitutions can even be nested:
cat <(cat <(cat <(ls -l)))
works as a very roundabout way to list the current directory.
As you can see, while __the unnamed pipes allow simple commands to be strung together, named pipes, with a little help from bash, allow whole trees of pipes to be created.__ The possibilities are limited only by your imagination.
Andy Vaught is currently a PhD candidate in computational physics at Arizona State University and has been running Linux since 1.1. He enjoys flying with the Civil Air Patrol as well as skiing. He can be reached at andy@maxwell.la.asu.edu.
----------------------------
Linux code for a client/server program using named pipes to share some data between clients through a server
----------------------------
I was wondering however if there was a way to** keep a process attached to a pipe permanently**.
The answer is yes. I solved my problem with:
__tail -f <name_of_pipe>__ | <process_to_handle_output> &
Note that a plain cat on the named pipe is not enough here: as soon as the other side closes its write fd of the pipe, cat sees EOF and returns, whereas tail -f keeps the pipe open.
----------------------------
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T16:00:42+08:00
====== Using Named Pipes (FIFOs) with Bash ======
Created Sunday 26 February 2012
http://www.linuxjournal.com/content/using-named-pipes-fifos-bash
Mar 27, 2009 By Mitch Frazier in HOW-TOs
It's hard to write a bash script of __much import__ without using a pipe or two. Named pipes, on the other hand, are much rarer.
Like un-named/anonymous pipes, named pipes provide __a form of IPC__ (Inter-Process Communication). With anonymous pipes, there's one reader and one writer, but that's not required with named pipes—__any number of__ readers and writers may use the pipe.
Named pipes are __visible in the filesystem __and can be read and written just as other files are:
$ ls -la /tmp/testpipe
prw-r--r-- 1 mitch users 0 2009-03-25 12:06 /tmp/testpipe|
Why might you want to use a named pipe in a shell script? One situation might be when you've got **a backup script **that runs via __cron__, and after it's finished, you want to shut down your system. If you do the shutdown from the backup script, **cron never sees the backup script finish**, so it never sends out the e-mail containing the output from the backup job. You could **do the shutdown via another cron job after the backup is "supposed" to finish**, but then you run the risk of shutting down __too early __every now and then, or you have to make the __delay much larger__ than it needs to be most of the time.
Using a named pipe, you can start the backup and the shutdown cron jobs__ at the same time__ and have the shutdown__ just wait __till the backup writes to the named pipe. When the shutdown job** reads something **from the pipe, it then pauses for a few minutes so the cron e-mail can go out, and then it shuts down the system.
这其实就是__利用PIPE的读写阻塞特性达到同步两个进程的目的__。
Of course, the previous example probably could be done fairly reliably by simply creating a regular file to signal when the backup has completed. A more complex example might be if you have a backup that **wakes up every hour** or so and reads a named pipe __to see if it should run__. You then could **write something** to the pipe each time you've made a lot of changes to the files you want to back up. You might even write the names of the files that you want backed up to the pipe so the backup doesn't have to check everything.
Named pipes are created via mkfifo __or__ mknod:
$ mkfifo /tmp/testpipe
$ mknod /tmp/testpipe p
The following shell script reads from a pipe. It first creates the pipe if it doesn't exist, then it reads in a loop till it sees "quit":
#!/bin/bash
pipe=/tmp/testpipe
trap "rm -f $pipe" EXIT # Caveat: the command run by the trap should end with exit, otherwise the script continues past it. Also, the EXIT trap fires whenever the script terminates, whatever the cause (e.g. INT, TERM).
if [[ ! -p $pipe ]]; then
mkfifo $pipe
fi
while true
do
if **read line <$pipe**; then
if [[ "$line" == 'quit' ]]; then
break
fi
echo $line
fi
done
echo "Reader exiting"
The following shell script writes to the pipe created by the read script. First, it checks to make sure the pipe exists, then it writes to the pipe. If an argument is given to the script, it writes it to the pipe; otherwise, it writes "Hello from PID".
#!/bin/bash
pipe=/tmp/testpipe
if [[ ! -p $pipe ]]; then
echo "Reader not running"
exit 1
fi
if [[ "$1" ]]; then
echo "$1" >$pipe
else
echo "Hello from $$" >$pipe
fi
Running the scripts produces:
$ sh rpipe.sh &
[3] 23842
$ sh wpipe.sh
Hello from 23846
$ sh wpipe.sh
Hello from 23847
$ sh wpipe.sh
Hello from 23848
$ sh wpipe.sh quit
Reader exiting
Note: initially I had the read command directly in the while loop of the read script, but the read command would usually return a non-zero status after two or three reads, causing the loop to terminate.
while read line <$pipe
do
if [[ "$line" == 'quit' ]]; then
break
fi
echo $line
done
Notes:
1. Whether <$pipe sits inside the while loop or after it matters: in the former case every iteration reopens $pipe, while in the latter the file is opened only once for the whole loop.
2. __bash line-buffers pipes__.
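The effect of where the redirection sits can be seen without any FIFO blocking by using a regular file. A minimal sketch (/tmp/demo.txt is a hypothetical scratch path):

```shell
#!/bin/bash
# Redirecting inside the loop reopens the file each time, so `read`
# always returns the FIRST line; redirecting the whole loop opens the
# file once and reads it sequentially.
printf 'a\nb\nc\n' > /tmp/demo.txt
read line < /tmp/demo.txt; first1=$line   # opens, reads "a", closes
read line < /tmp/demo.txt; first2=$line   # reopens: "a" again
{ read l1; read l2; } < /tmp/demo.txt     # one open: "a", then "b"
rm -f /tmp/demo.txt
echo "$first1 $first2 $l1 $l2"            # -> a a a b
```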
-----------------------------
You should __explicitly exit from the trap command__, otherwise the script will continue running past it. You should also catch a few other signals.
So: trap "rm -f $pipe" EXIT
Becomes: trap "rm -f $pipe; exit" INT TERM EXIT
Excellent article. Thank you!
----------
Not really necessary in this case
__The EXIT trap gets executed when the shell exits regardless of why it exits so trapping INT and TERM aren't really necessary in this case.__
However, your point about "exit" is good: trapping a signal removes the default action that occurs when a signal occurs, so if the default action is to exit the program and you want that to happen in addition to executing your code, you need to include an exit statement in your code.
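A minimal sketch of that behavior, run in a child bash so the trap does not linger in the current shell: the EXIT trap fires on a normal end-of-script without any signal being trapped explicitly.

```shell
#!/bin/bash
# The EXIT trap runs when the child shell finishes its script.
out=$(bash -c 'trap "echo cleanup" EXIT; echo work')
echo "$out"    # -> work, then cleanup
```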
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T14:37:25+08:00
====== Experiments ======
Created Sunday 26 February 2012
===== Setup: =====
# cd /tmp
# mkfifo ls
===== Experiment 1: =====
A terminal:
[geekard@geekard tmp]$ for (( i = 0; i < 4; i++ )); do echo "hello, $i" >ls; done
# Blocks until the read in terminal B opens the FIFO for reading.
B terminal:
[geekard@geekard tmp]$ while __read line <ls__ ;do echo $line; done
hello, 0 # The process in terminal A returns immediately.
# Blocked.
^Cbash: ls: Interrupted system call # Pressing C-c shows that the read system call was interrupted.
read line <ls sits inside the while loop, so __every iteration opens the file ls for reading once__. Likewise, the echo "hello, $i" in A __opens it for writing once per iteration__.
B shows only one line because the processes in A and B are in a __race condition__. Ideally A and B would __take turns__, producing normal output. In experiment 1, however, once B's process **opens ls for reading the first time**, A's process __immediately performs all four writes (because B keeps ls open for reading, every iteration of A's loop can open ls for writing) and then exits__; so after B reads one line, its __next__ attempt to open ls for reading blocks.
===== Experiment 2: =====
A terminal:
[geekard@geekard tmp]$ for (( i = 0; i < 4; i++ )); do echo "hello, $i" >ls; done
# Blocks until the cat in terminal B opens the FIFO for reading.
[geekard@geekard tmp]$
B terminal:
[geekard@geekard tmp]$ cat ls
hello, 0
hello, 1
hello, 2
hello, 3
[geekard@geekard tmp]$
B's process __opens the file ls for reading once__, so the __write-open__ in every iteration of A's loop succeeds.
===== Experiment 3: =====
A terminal:
[geekard@geekard tmp]$ for (( i = 0; i < 4; i++ )); do echo "hello, $i" >ls; done
# Blocks until the cat in terminal B opens the FIFO for reading.
[geekard@geekard tmp]$
B terminal:
[geekard@geekard tmp]$ while read line
>do
> echo $line
> done __<ls # The redirection sits outside the while block.__
hello, 0
hello, 1
hello, 2
hello, 3
[geekard@geekard tmp]$
The redirection in B sits outside the while block, so __ls is opened for reading only once__.
===== Experiment 4: =====
A terminal:
[geekard@geekard tmp]$ for (( i = 0; i < 4; i++ ))
> do
> echo "hello, $i"
>done __>ls # >ls sits outside the loop body__
# Blocks until the read loop in terminal B opens the FIFO for reading.
[geekard@geekard tmp]$
B terminal:
[geekard@geekard tmp]$ while read line __<ls__ ;do echo $line; done
hello, 0 # The process in terminal A returns immediately.
# Blocked.
^Cbash: ls: Interrupted system call # Pressing C-c shows that the read system call was interrupted.
In A, ls is opened for writing once; as soon as B opens ls for reading, A __writes all four lines into the pipe and exits__. Because the <ls in B sits inside the while loop, the __next read-open__ of ls blocks (A's process has exited, and its write-open of ls has been closed).
===== Experiment 5: =====
A terminal:
[geekard@geekard tmp]$ for (( i = 0; i < 4; i++ )); do echo "hello, $i" ; done >ls
# Blocks until the read loop in terminal B opens the FIFO for reading.
[geekard@geekard tmp]$
B terminal:
[geekard@geekard tmp]$ while read line ;do echo $line; done__ <ls__
hello, 0
hello, 1
hello, 2
hello, 3
[geekard@geekard tmp]$
===== Experiment 6: =====
A terminal:
[geekard@geekard tmp]$ for (( i = 0; i < 4; i++ )); do echo "hello, $i"; sleep 1; done >ls
[geekard@geekard tmp]$
B terminal:
[geekard@geekard tmp]$ while read line ;do echo $line; done <ls
hello, 0 # __One line appears per second__
hello, 1
hello, 2
hello, 3
[geekard@geekard tmp]$
===== Experiment 7: =====
A terminal:
[geekard@geekard tmp]$ for (( i = 0; i < 4; i++ )); do echo __-n __"hello, $i"; sleep 1; done >ls
# Blocked until the process in terminal B exits after running for 4 seconds.
[geekard@geekard tmp]$
B terminal:
[geekard@geekard tmp]$ while **read line** ;do echo $line; done <ls # Exits after 4 seconds with no output (no newline was ever written).
[geekard@geekard tmp]$
===== Experiment 8: =====
A terminal:
[geekard@geekard tmp]$ for (( i = 0; i < 4; i++ )); do echo -n "hello, $i"; if __(( i == 3 ))__ ;then echo -n __-e__ "\n"; fi; sleep 1; done >ls
# Blocked until the process in terminal B exits after running for 4 seconds.
[geekard@geekard tmp]$
B terminal:
[geekard@geekard tmp]$ while read line ;do echo $line; done <ls
hello, 0hello, 1hello, 2hello, 3 # __The output appears after 4 seconds__.
[geekard@geekard tmp]$
These three comparison experiments show that __bash buffers pipe files line by line__.
Note: the (( i == 3 )) in experiment 8 cannot be written as [[ i == 3 ]], because [[ ... ]] performs no arithmetic, relational, or logical evaluation on numbers (its relational operators on strings do work, so [[ $i == "3" ]] is possible).
===== Experiment 9: =====
A terminal:
[geekard@geekard tmp]$ for (( i = 0; i < 4; i++ )); do echo "hello,$i"; done >ls
# Blocked until the process in terminal B runs and exits.
[geekard@geekard tmp]$
B terminal:
[geekard@geekard tmp]$ for (( i = 0; i < 4; i++ )); do cat __<ls__; done
hello,0
hello,1
hello,2
hello,3
# Blocked, even though the process in A has exited.
B blocks because cat reads a file __fully buffered__ by default, so cat <ls keeps reading the opened ls until the process in A finishes writing to it. But on B's second iteration no process has ls open for writing, so cat's read-open of ls blocks.
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-21T19:39:02+08:00
====== Using Quotes ======
Created Wednesday 21 December 2011
Bash has three kinds of quotes: single quotes, double quotes, and backquotes. Single and double quotes can enclose strings that contain whitespace, while backquotes perform command substitution (when unescaped they must be used in pairs).
* All bash data types are strings, and bash is insensitive to **unescaped** quotes: in the last stage of command-line parsing these __quotes are removed__. To keep a quote character, you must escape it.
[geekard@geekard ~]$ echo 'df'df"df"
dfdfdf
[geekard@geekard ~]$
[geekard@geekard ~]$ echo 2 + 3
2 + 3
[geekard@geekard ~]$ echo df"df"df
dfdfdf
[geekard@geekard ~]$
* Quotes can nest: the outer pair must match and __protects the inner quotes (which are output literally)__; the inner quotes need not match.
[geekard@geekard ~]$ echo 'df"df"dkf'
df"df"dkf
[geekard@geekard ~]$ echo "dlf'df'df"
dlf'df'df
[geekard@geekard ~]$ echo "dfj'df"
dfj'df
[geekard@geekard ~]$
* Single quotes deactivate every shell special character inside them; double quotes do not deactivate "$", "!", "`", or "\" (the escape character).
[geekard@geekard ~]$ echo 'PWD: $PWD'
PWD: $PWD
[geekard@geekard ~]$ echo "PWD:__ $PWD__"
PWD: /home/geekard
[geekard@geekard ~]$
[geekard@geekard ~]$ echo __"PWD:`pwd`"__
PWD:/home/geekard
[geekard@geekard ~]$
[geekard@geekard ~]$ echo __"dfsd\"fdf\"fdf"__
dfsd"fdf"fdf
[geekard@geekard ~]$ echo__ "dfsd"fdf"fdf"__
dfsdfdffdf
[geekard@geekard ~]$
[geekard@geekard ~]$ echo __"df\n\d\c" __
df\n\d\c
[geekard@geekard ~]$
#Backslash acts as an escape only when the following character has a special meaning (for double quotes the special characters are __$ ! " ` \ and newline__): that character is then output literally and the backslash itself is **not printed**; otherwise **the backslash and the following character are both printed as-is**.
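A scriptable restatement of the rule, as a minimal sketch:

```shell
#!/bin/bash
# Inside double quotes, backslash escapes only the characters that are
# special there; before any other character it is kept literally.
s1="a\$b"    # \$ is special: the backslash is consumed, yielding a$b
s2="a\db"    # \d is not special: the backslash is kept, yielding a\db
echo "$s1 $s2"   # -> a$b a\db
```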
[geekard@geekard ~]$ date
Wed Dec 21 19:48:05 CST 2011
[geekard@geekard ~]$** echo 'date: !!'**
date: !!
[geekard@geekard ~]$** echo "date: **__!!__**" # Double quotes do not deactivate "!", so !! stands for the previous command**
echo "date: //echo 'date: !!'//"
date: echo 'date: !!'
[geekard@geekard ~]$
[geekard@geekard ~]$ echo** '!fdf'**
!fdf
[geekard@geekard ~]$ echo **"'!fdf'"**
bash: !fdf: event not found
[geekard@geekard ~]$ echo "dfs!"
echo "dfs" # The command as it reads after bash performs history expansion
dfs
[geekard@geekard ~]$ echo __"dfs! "__ # Unless __the ! is followed by a space__, a ! inside double quotes is treated as the history-expansion character.
dfs!
[geekard@geekard ~]$ echo "dfs! s"
dfs! s
[geekard@geekard ~]$
* If a string contains spaces, newlines, or TABs, it must be enclosed in quotes
[geekard@geekard ~]$ echo "dfd\df"
dfd\df
[geekard@geekard ~]$ echo 'df # A string may contain a literal newline; a C string literal may not (it must use an escape sequence).
> df'
df
df
[geekard@geekard ~]$ echo "df
> df"
df
df
[geekard@geekard ~]$ echo 'df\ # No character has special meaning inside single quotes
> df'
df\
df
[geekard@geekard ~]$ echo "df\ #__The escape character "\" works inside double quotes__
> df"
dfdf
[geekard@geekard ~]$
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-23T17:47:59+08:00
====== Arrays ======
Created Friday 23 December 2011
http://molinux.blog.51cto.com/2536040/469296
Array declaration:
[root@localhost ~]# ARRAY=(a b c d)
#A pair of parentheses denotes an array; the elements are separated by spaces.
[root@localhost ~]# echo $ARRAY
a
#An array name behaves somewhat like a pointer to the first element,
#so echo prints the value of ARRAY[0]
Array assignment:
[root@localhost ~]# A[0]=9
[root@localhost ~]# A[10]=1
[root@localhost ~]# echo ${A[0]}
9
#Elements can be assigned one at a time, as above
Reading arrays:
[root@localhost ~]# echo $ARRAY[1] //example of the wrong form
a[1]
[root@localhost ~]# echo $ARRAY[2] //example of the wrong form
a[2]
#Written as above, echo first expands $ARRAY and then appends [n] as a literal string; the element's value is never fetched.
[root@localhost ~]# echo ${ARRAY[0]}
a
[root@localhost ~]# echo ${ARRAY[1]}
b
[root@localhost ~]# echo ${ARRAY[3]}
d
#Note: subscripts start at 0, and ${} must enclose the array element when reading.
[root@localhost ~]#ARRAY=(a b c d)
[root@localhost ~]# echo ${ARRAY[*]}
a b c d
[root@localhost ~]# echo ${#ARRAY[*]}
4
[root@localhost ~]#
[root@localhost ~]# A[0]=9
[root@localhost ~]# A[10]=1
[root@localhost ~]# echo ${A[*]}
9 1
[root@localhost ~]# echo ${#A[*]}
2
[root@localhost ~]# A[3]=5
[root@localhost ~]# echo ${A[*]}
9 5 1
[root@localhost ~]# echo ${#A[*]}
3
# As shown above, ${array[subscript]} with subscript * or @ yields the whole array,
#and ${#array[*]} returns the number of elements that are set (non-empty).
Array deletion:
[root@localhost ~]# unset A
[root@localhost ~]# echo ${A[*]}
[root@localhost ~]# echo ${#A[*]}
0
Special array usage:
----Slicing:
[root@localhost ~]# echo ${ARRAY[*]}
a b c d e
[root@localhost ~]# echo ${ARRAY[*]:0:3}
a b c
[root@localhost ~]# echo ${ARRAY[*]:2:4}
c d e
# As above, ${ARRAY[*]:offset:length} slices out and displays a range of the array.
[root@localhost ~]# next=(${ARRAY[*]:2:4})
[root@localhost ~]# echo ${next[*]}
c d e
#As above, the sliced values are assigned to the new array next
----Substitution:
[root@localhost ~]# echo ${ARRAY[*]}
a b c d e
[root@localhost ~]# echo ${ARRAY[*]/a/A}
A b c d e
[root@localhost ~]# echo ${ARRAY[*]/b/B}
a B c d e
[root@localhost ~]# echo ${ARRAY[*]/b/100}
a 100 c d e
[root@localhost ~]#
[root@localhost ~]# echo ${y[*]}
1 2 3 4 5
[root@localhost ~]# echo ${y[*]/2/200}
1 200 3 4 5
#As above, values in the array can be substituted.
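Note that ${ARRAY[*]/a/A} only produces a substituted expansion; the array itself is unchanged unless the result is assigned back. A minimal sketch:

```shell
#!/bin/bash
ARRAY=(a b c d e)
copy="${ARRAY[*]/a/A}"        # substituted expansion: "A b c d e"
orig0=${ARRAY[0]}             # the stored element is still "a"
ARRAY=("${ARRAY[@]/a/A}")     # reassign to make the change stick
echo "$copy $orig0 ${ARRAY[0]}"   # -> A b c d e a A
```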
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-22T14:06:03+08:00
====== Conditional Tests and Command Substitution ======
Created Thursday 22 December 2011
===== Conditional test commands: =====
* test and [ are **equivalent** conditional test commands; both can be executed standalone but __have no output value, only an exit status: 0 means the test is true, non-zero means false.__
* [[ and ]] are **keywords** rather than commands, an **enhanced version** of [ ]; inside them you can use &&, ||, <, > and other logical and relational operators, but __no arithmetic operators__; the syntax resembles a programming language's. ~~Hence exit status 0 means the test is false and 1 means true, exactly the opposite of the status of test and [;~~ they are generally used only in conditional tests.
* __((...)) is special: it first__ **evaluates an arithmetic, relational, or logical expression**__; if the result is non-empty and non-zero, the exit status is true (0), otherwise false (1). Note: there is no output value, only an exit status.__
* __$((...)): the shell evaluates the __arithmetic, relational, or logical expression__ inside and replaces $((...)) with the computed value (not a true/false test status, so it is normally not used for testing); the shell performs this substitution while parsing the command line and then continues.__
==== Notes: ====
* The first three are __dedicated to conditional testing__ (as commands they produce no output), while the last is the shell's __substitution syntax__.
* The first two can __only__ test numbers, strings, and files, while the third can __only__ test arithmetic, relational, and logical expressions.
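The four forms side by side, as a minimal sketch:

```shell
#!/bin/bash
[ 3 -gt 2 ];  s1=$?    # test command: exit status 0 = true
[[ 3 > 2 ]];  s2=$?    # keyword: string comparison here, status 0 = true
(( 3 > 2 ));  s3=$?    # arithmetic: result 1 (non-zero) -> status 0
v=$(( 3 > 2 ))         # substitution: yields the VALUE 1, not a status
echo "$s1 $s2 $s3 $v"  # -> 0 0 0 1
```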
[geekard@geekard ~]$ **((0))**; echo $?
1
[geekard@geekard ~]$ ((1)); echo $?
0
[geekard@geekard ~]$ unset haha; **(($haha))**; echo $? # An unset (nonexistent) variable yields false
1
[geekard@geekard ~]$ unset haha; haha= ; **(($haha))**; echo $? # A variable initialized but empty (NULL) yields false
1
[geekard@geekard ~]$** ((3<2))**; echo $? # yields false
1
[geekard@geekard ~]$ ((3>2)); echo $? # yields true
0
[geekard@geekard ~]$ **((0&1))**; echo $?; ((1&2)); echo $?
1
1
[geekard@geekard ~]$
[geekard@geekard ~]$ __((i=2+3))__; echo i=$i, $? # Evaluates the expression and assigns the result 5 to i;__ 5 is non-zero__, so the double parentheses return true.
i=5, 0
[geekard@geekard ~]$
[geekard@geekard ~]$ unset i; __$((i=2+3))__; echo i=$i,$? #__ The shell__ evaluates the expression in $((...)) and then __runs the computed result as a command__.
bash: 5: command not found
i=5,127
[geekard@geekard ~]$
[geekard@geekard ~]$ unset i; ((i=__i+3__)); echo i=$i, $? # If a variable in the parentheses is unset or initialized to NULL, 0 (or NULL) is substituted during evaluation; the leading $ is also optional.
i=3, 0
[geekard@geekard ~]$
[geekard@geekard ~]$ unset i; ((i=__$i+3__)); echo i=$i, $? # __The $ for variable expansion is optional inside double parentheses__
i=3, 0
[geekard@geekard ~]$ unset i; ((i > 3)); echo i=$i, $? # i does not exist, so 0 is substituted
i=, 1
[geekard@geekard ~]$
[geekard@geekard ~]$ unset i; ((__$i__=$i+3)); echo i=$i, $? # __The left side of the = must be a variable name__
bash: ((: =+3: syntax error: operand expected (error token is "=+3")
i=, 1
[geekard@geekard ~]$
[geekard@geekard ~]$ echo $((2+3)); echo __$((2-3))__; echo $?; echo $((i+3))
5
-1
0
3
[geekard@geekard ~]$
[geekard@geekard ~]$ $((2+3)); echo $?; echo $((2+3)); echo $((2-2)); echo $?
bash: 5: command not found
127
5
0
0
[geekard@geekard ~]$ ((2+3)); echo $?; __echo ((2+3))__** # The shell does not automatically evaluate and substitute ((..))**
bash: syntax error near unexpected token `('
[geekard@geekard ~]$
[geekard@geekard ~]$ echo `((2+3))` # Double parentheses produce no output, only an exit status, so echo prints an empty line
[geekard@geekard ~]$
None of the characters between [[ and ]] undergo filename expansion or word splitting, but variable substitution and command substitution do occur.
===== test and [ basics: =====
1) The test command or [ ] checks whether a condition holds (with [ ], spaces are required __before and after__ the expression); it can test __numbers, strings, and files__, as follows:
(1) Numeric tests
-eq  equal
-ne  not equal
-gt  greater than
-ge  greater than or equal
-lt  less than
-le  less than or equal
(2) String tests
=   equal
!=  not equal
-z string   true if the string length is zero
-n string   true if the string length is non-zero
(3) File tests
-e file   true if the file exists
-r file   true if the file exists and is readable
-w file   true if the file exists and is writable
-x file   true if the file exists and is executable
-s file   true if the file exists and its size is greater than zero
-d file   true if the file exists and is a directory
-f file   true if the file exists and is a regular file
-c file   true if the file exists and is a character special file
-b file   true if the file exists and is a block special file
(4) Combined comparisons
-a  logical AND
-o  logical OR
Besides this syntax, [[ ... ]] also supports **relational and logical operations** (arithmetic is not supported), ~~and its exit-status meaning is exactly the opposite of test and [~~ (the confusion comes from __C's rules__, which [[ borrows: 0 and empty mean false, non-zero and non-empty mean true).
[geekard@geekard ~]$ [[ 2 > 3 ]] # Besides ['s syntax, [[ also supports __relational and logical__ operators; it has no output value, only an exit status.
[geekard@geekard ~]$
[geekard@geekard ~]$ echo 2 + 3 # [[ __does not support arithmetic or bitwise operations__.
2 + 3
[geekard@geekard ~]$ [[ 2 > 3 ]] # Since [[ is a keyword rather than a command, __the operators inside need not be surrounded by spaces__:
[geekard@geekard ~]$ [[ 2>3 ]]
[geekard@geekard ~]$
[geekard@geekard ~]$ [[ 2 + 3 ]] # Arithmetic is not supported.
bash: conditional binary operator expected
bash: syntax error near `+'
[geekard@geekard ~]$ i=3
[geekard@geekard ~]$ __[ i > 2 ]__; echo $?
0
[geekard@geekard ~]$ __[[ i > 2 ]]__; echo $?
0
[geekard@geekard ~]$ __(( i > 2 ))__; echo $?
0
[geekard@geekard ~]$__ (( 2 - 2 ))__; echo $? # The expression evaluates to 0, so the status is 1 (false)
**1**
[geekard@geekard ~]$
[geekard@geekard ~]$ __$(( 2 + 3 ))__; echo $? # [, ((...)), and [[...]] can be executed standalone, but the shell substitutes $((...)) on the command line and then executes the result.
bash: 5: command not found
127
[geekard@geekard ~]$
((..)) only supports evaluating **arithmetic, relational, and logical expressions**, returning true or false according to the result; __it does not support string or file tests__.
===== Tips: =====
Double parentheses have another handy use:
for ((i=1;i<=num;i++))
The shell does not allow if [ $a != 1 __||__ $b = 2 ] (inside [ ], use **-a, -o** for logic); you must write
if [ $a != 1 ] || [ $b = 2 ]
Double brackets solve this:
if __[[ $a != 1 || $b = 2 ]]__
if [ "$a" -lt "$b" ] can also be rewritten with double parentheses:
(( "$a" < "$b" ))
The non-test use of ((...)) is __variable assignment__: the computed result becomes the value of the variable on the left of the equals sign.
[geekard@geekard ~]$ __((k=2+3))__; echo k=$k, $?
k=5, 0
[geekard@geekard ~]$
[geekard@geekard ~]$ echo $((2+3))
5
[geekard@geekard ~]$ echo $((i=2+3))
5
[geekard@geekard ~]$
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-23T11:03:17+08:00
====== Regular Expressions ======
Created Friday 23 December 2011
[geekard@geekard ~]$ echo 192.168.1.1 |sed -n -e 's/^**\([0-9]\{1,3\}\.\)**\{3\}\(.*\)/\1/p' #\1 refers only to the __last__ of the three repeated matches.
1.
[geekard@geekard ~]$ echo 192.168.1.1 |sed -n -e 's/^\([0-9]\{1,3\}\.\)\{3\}\(.*\)/\2/p'
1
[geekard@geekard ~]$ echo 192.168.1.1 |sed -n -e 's/^\([0-9]\{1,3\}\.\)\{3\}\(.*\)/**\3**/p' # Error: the three repetitions act as one group, so there are only two backreference numbers.
sed: -e expression #1, char 37: invalid reference \3 on `s' command's RHS
[geekard@geekard ~]$
#Although the pattern \([0-9]\{1,3\}\.\) matches three times, \([0-9]\{1,3\}\.\)\{3\} as __a whole__ gets only __one backreference number__, and that reference holds the __result of the last of the three matches__, not all three.
[geekard@geekard ~]$ echo __192.168.1.__1 |sed -n -e 's/^**\(\(**[0-9]\{1,3\}\.\)\{3\}\)**\(**.*\)/\1/p' # Add an outer pair of parentheses around the three repetitions to reference their __combined result__.
192.168.1.
[geekard@geekard ~]$ echo 192.168.__1.__1 |sed -n -e 's/^\(\([0-9]\{1,3\}\.\)\{3\}\)\(.*\)/**\2**/p' # The digit is __the left-to-right number of the opening parenthesis__ in the pattern.
1.
[geekard@geekard ~]$ echo 192.168.1.__1__ |sed -n -e 's/^\(\([0-9]\{1,3\}\.\)\{3\}\)\(.*\)/\3/p'
1
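The same grouping rules hold for extended regular expressions; a sketch with GNU sed's -E option (an assumption: GNU sed is available) reads more cleanly:

```shell
#!/bin/bash
ip=192.168.1.1
g1=$(echo "$ip" | sed -nE 's/^(([0-9]{1,3}\.){3})(.*)/\1/p')  # outer group
g2=$(echo "$ip" | sed -nE 's/^(([0-9]{1,3}\.){3})(.*)/\2/p')  # last repetition
g3=$(echo "$ip" | sed -nE 's/^(([0-9]{1,3}\.){3})(.*)/\3/p')  # remainder
echo "$g1 $g2 $g3"   # -> 192.168.1. 1. 1
```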
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T20:09:07+08:00
====== Terminal Colors ======
Created Sunday 26 February 2012
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T22:11:24+08:00
====== ANSI Escape Sequences- Colours and Cursor Movement ======
Created Sunday 26 February 2012
http://www.linuxselfhelp.com/howtos/Bash-Prompt/Bash-Prompt-HOWTO-6.html
===== 6.1 Colours =====
As mentioned before, non-printing escape sequences have to be enclosed in \[\033[ and \]. For colour escape sequences, they should also be followed by a __lowercase m__.
If you try out the following prompts in an xterm and find that you aren't seeing the colours named, check out your ~/.Xdefaults file (and possibly its brethren) for lines like "**XTerm*Foreground: BlanchedAlmond**". This can be commented out by placing an exclamation mark __("!") __in front of it. Of course, this will also be dependent on what terminal emulator you're using. This is the likeliest place that your term foreground colours would be overridden.
To include blue text in the prompt:
PS1="\[**\033[34m**\][\$(date +%H%M)][\u@\h:\w]$ "
The problem with this prompt is that the blue colour that starts with the 34 colour code is __never switched back__ to the regular colour, so any text you type after the prompt is still in the colour of the prompt. This is also a dark shade of blue, so combining it with the bold code might help:
PS1="\[\033[1;34m\][\$(date +%H%M)][\u@\h:\w]$\[**\033[0m**\] "
The prompt is now in light blue, and it ends by switching the colour back to nothing (whatever foreground colour you had previously).
Here are the rest of the colour equivalences:
Black 0;30 Dark Gray 1;30
Blue 0;34 Light Blue 1;34
Green 0;32 Light Green 1;32
Cyan 0;36 Light Cyan 1;36
Red 0;31 Light Red 1;31
Purple 0;35 Light Purple 1;35
Brown 0;33 Yellow 1;33
Light Gray 0;37 White 1;37
Daniel Dui (ddui@iee.org) points out that to be strictly accurate, we must mention that the list above is for colours at the console. In an xterm, the code 1;31 isn't "Light Red," but "Bold Red." This is true of all the colours.
You can also set background colours by using 44 for Blue background, 41 for a Red background, etc. There are no bold background colours. Combinations can be used, like Light Red text on a Blue background: \[\033[44;1;31m\], although setting the colours separately seems to work better (ie. \[\033[44m\]\[\033[1;31m\]). Other codes available include 4: Underscore, 5: Blink, 7: Inverse, and 8: Concealed.
Aside: Many people (myself included) object strongly to the "blink" attribute. Fortunately, it doesn't work in any terminal emulators that I'm aware of - but it will still work on the console. And, if you were wondering (as I did) "What use is a 'Concealed' attribute?!" - I saw it used in an example shell script (not a prompt) to allow someone to type in a password without it being echoed to the screen.
Based on a prompt called "elite2" in the Bashprompt package (which I have modified to work better on a standard console, rather than with the special xterm fonts required to view the original properly), this is a prompt I've used a lot:
function elite
{
local GRAY="\[\033[1;30m\]"
local LIGHT_GRAY="\[\033[0;37m\]"
local CYAN="\[\033[0;36m\]"
local LIGHT_CYAN="\[\033[1;36m\]"
case $TERM in
xterm*)
local TITLEBAR='\[\033]0;\u@\h:\w\007\]'
;;
*)
local TITLEBAR=""
;;
esac
local GRAD1=$(tty|cut -d/ -f3)
PS1="$TITLEBAR\
$GRAY-$CYAN-$LIGHT_CYAN(\
$CYAN\u$GRAY@$CYAN\h\
$LIGHT_CYAN)$CYAN-$LIGHT_CYAN(\
$CYAN\#$GRAY/$CYAN$GRAD1\
$LIGHT_CYAN)$CYAN-$LIGHT_CYAN(\
$CYAN\$(date +%H%M)$GRAY/$CYAN\$(date +%d-%b-%y)\
$LIGHT_CYAN)$CYAN-$GRAY-\
$LIGHT_GRAY\n\
$GRAY-$CYAN-$LIGHT_CYAN(\
$CYAN\$$GRAY:$CYAN\w\
$LIGHT_CYAN)$CYAN-$GRAY-$LIGHT_GRAY "
PS2="$LIGHT_CYAN-$CYAN-$GRAY-$LIGHT_GRAY "
}
I define the colours as temporary shell variables in the name of readability. It's easier to work with. The "GRAD1" variable is a check to determine what terminal you're on. Like the test to determine if you're working in an Xterm, it only needs to be done once. The prompt you see looks like this, except in colour:
--(giles@nikola)-(75/ttyp7)-(1908/12-Oct-98)--
--($:~/tmp)--
To help myself remember what colours are available, I wrote the following script which echoes all the colours to screen:
#!/bin/bash
#
# This file echoes a bunch of colour codes to the terminal to demonstrate
# what's available. Each line is one colour on black and gray
# backgrounds, with the code in the middle. Verified to work on white,
# black, and green BGs (2 Dec 98).
#
echo " On Light Gray: On Black:"
echo -e "\033[47m\033[1;37m White \033[0m\
1;37m \
\033[40m\033[1;37m White \033[0m"
echo -e "\033[47m\033[37m Light Gray \033[0m\
37m \
\033[40m\033[37m Light Gray \033[0m"
echo -e "\033[47m\033[1;30m Gray \033[0m\
1;30m \
\033[40m\033[1;30m Gray \033[0m"
echo -e "\033[47m\033[30m Black \033[0m\
30m \
\033[40m\033[30m Black \033[0m"
echo -e "\033[47m\033[31m Red \033[0m\
31m \
\033[40m\033[31m Red \033[0m"
echo -e "\033[47m\033[1;31m Light Red \033[0m\
1;31m \
\033[40m\033[1;31m Light Red \033[0m"
echo -e "\033[47m\033[32m Green \033[0m\
32m \
\033[40m\033[32m Green \033[0m"
echo -e "\033[47m\033[1;32m Light Green \033[0m\
1;32m \
\033[40m\033[1;32m Light Green \033[0m"
echo -e "\033[47m\033[33m Brown \033[0m\
33m \
\033[40m\033[33m Brown \033[0m"
echo -e "\033[47m\033[1;33m Yellow \033[0m\
1;33m \
\033[40m\033[1;33m Yellow \033[0m"
echo -e "\033[47m\033[34m Blue \033[0m\
34m \
\033[40m\033[34m Blue \033[0m"
echo -e "\033[47m\033[1;34m Light Blue \033[0m\
1;34m \
\033[40m\033[1;34m Light Blue \033[0m"
echo -e "\033[47m\033[35m Purple \033[0m\
35m \
\033[40m\033[35m Purple \033[0m"
echo -e "\033[47m\033[1;35m Pink \033[0m\
1;35m \
\033[40m\033[1;35m Pink \033[0m"
echo -e "\033[47m\033[36m Cyan \033[0m\
36m \
\033[40m\033[36m Cyan \033[0m"
echo -e "\033[47m\033[1;36m Light Cyan \033[0m\
1;36m \
\033[40m\033[1;36m Light Cyan \033[0m"
===== 6.2 Cursor Movement =====
ANSI escape sequences allow you to move the cursor around the screen at will. This is more useful for full screen user interfaces generated by shell scripts, but can also be used in prompts. The movement escape sequences are as follows:
- Position the Cursor:
\033[<L>;<C>H
Or
\033[<L>;<C>f
puts the cursor at line L and column C.
- Move the cursor up N lines:
\033[<N>A
- Move the cursor down N lines:
\033[<N>B
- Move the cursor forward N columns:
\033[<N>C
- Move the cursor backward N columns:
\033[<N>D
- Clear the screen, move to (0,0):
\033[2J
- Erase to end of line:
\033[K
- Save cursor position:
\033[s
- Restore cursor position:
\033[u
The latter two codes are NOT honoured by many terminal emulators. The only ones that I'm aware of that do are xterm and nxterm - even though the majority of terminal emulators are based on xterm code. As far as I can tell, rxvt, kvt, xiterm, and Eterm do not support them. They are supported on the console.
Try putting in the following line of code at the prompt (it's a little clearer what it does if the prompt is several lines down the terminal when you put this in): echo -en "\033[7A\033[1;35m BASH \033[7B\033[6D" This should move the cursor seven lines up screen, print the word " BASH ", and then return to where it started to produce a normal prompt. This isn't a prompt: it's just a demonstration of moving the cursor on screen, using colour to emphasize what has been done.
Save this in a file called "clock":
#!/bin/bash
function prompt_command {
let prompt_x=$COLUMNS-5
}
PROMPT_COMMAND=prompt_command
function clock {
local BLUE="\[\033[0;34m\]"
local RED="\[\033[0;31m\]"
local LIGHT_RED="\[\033[1;31m\]"
local WHITE="\[\033[1;37m\]"
local NO_COLOUR="\[\033[0m\]"
case $TERM in
xterm*)
TITLEBAR='\[\033]0;\u@\h:\w\007\]'
;;
*)
TITLEBAR=""
;;
esac
PS1="${TITLEBAR}\
\[\033[s\033[1;\$(echo -n \${prompt_x})H\]\
$BLUE[$LIGHT_RED\$(date +%H%M)$BLUE]\[\033[u\033[1A\]
$BLUE[$LIGHT_RED\u@\h:\w$BLUE]\
$WHITE\$$NO_COLOUR "
PS2='> '
PS4='+ '
}
This prompt is fairly plain, except that it keeps a 24 hour clock in the upper right corner of the terminal (even if the terminal is resized). This will NOT work on the terminal emulators that I mentioned that don't accept the save and restore cursor position codes. If you try to run this prompt in any of those terminal emulators, the clock will appear correctly, but the prompt will be trapped on the second line of the terminal.
See also The Elegant Useless Clock Prompt for a more extensive use of these codes.
===== 6.3 Moving the Cursor With tput =====
As with so many things in Unix, there is more than one way to achieve the same ends. A utility called "tput" can also be used to move the cursor around the screen, or get back information about the status of the terminal. "tput" for cursor positioning is less flexible than ANSI escape sequences: you can only move the cursor to an absolute position, you can't move it relative to its current position. I don't use "tput," so I'm not going to explain it in detail. Type "man tput" and you'll know as much as I do.
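For completeness, a small sketch of the tput calls that correspond to the sequences above (assumes a terminfo entry for xterm is installed; the output is captured rather than sent to a live terminal):

```shell
#!/bin/bash
# tput emits the terminal's own escape sequences, looked up in terminfo.
export TERM=xterm
move=$(tput cup 5 10)   # absolute move to row 5, column 10 (0-based)
red=$(tput setaf 1)     # red foreground
reset=$(tput sgr0)      # reset all attributes
printf '%s%shello%s\n' "$move" "$red" "$reset" | cat -v   # show the codes visibly
```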
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T20:09:12+08:00
====== ANSI Escape sequences ======
Created Sunday 26 February 2012
http://ascii-table.com/ansi-escape-sequences.php
===== ANSI Escape sequences (ANSI Escape codes) =====
ANSI Escape sequences | VT100 / VT52 ANSI escape sequences | VT100 User Guide
These sequences __define functions that change display graphics, control cursor movement, and reassign keys__.
__ANSI escape sequence __is a sequence of ASCII characters, the first two of which are the ASCII __"Escape"__ character 27 (1Bh) and the left-bracket character __" [ "__ (5Bh). The character or characters following the escape and left-bracket characters specify an __alphanumeric__ code that __controls a keyboard or display function__.
ANSI escape sequences distinguish between uppercase and lowercase letters. Information is also available on VT100 / VT52 ANSI escape sequences.
Esc[//Line;Column//**H**
Esc[//Line;Column//**f ** __Cursor Position__:
Moves the cursor to the specified position (coordinates).
If you do not specify a position, the cursor moves to the home position at the **upper-left **corner of the screen (line 0, column 0). This escape sequence works the same way as the following Cursor Position escape sequence.
**You can write escape sequences into a file with commands like:**
**# echo -ne '\e[23;34H' >tmp**
**# cat tmp**
**Or from C:**
**printf("\033[0;37m%s \033[0m","K_Linux_Man");**
**printf("\033[0;34;1m%s \033[0m","K_Linux_Man");**
**printf("\033[0;32;1m%s \033[0m","K_Linux_Man");**
**printf("\033[0;37m%s \033[0m","K_Linux_Man");**
Esc[//Value//**A ** __Cursor Up:__
Moves the cursor up by the specified number of lines without changing columns. If the cursor is already on the top line, ANSI.SYS ignores this sequence.
Esc[//Value//**B ** __Cursor Down:__
Moves the cursor down by the specified number of lines without changing columns. If the cursor is already on the bottom line, ANSI.SYS ignores this sequence.
Esc[//Value//**C ** __Cursor Forward:__
Moves the cursor forward by the specified number of columns without changing lines. If the cursor is already in the **rightmost column**, ANSI.SYS ignores this sequence.
Esc[//Value//**D** __Cursor Backward:__
Moves the cursor back by the specified number of columns without changing lines. If the cursor is already in the leftmost column, ANSI.SYS ignores this sequence.
Esc[**s ** __Save Cursor Position:__
Saves the current cursor position. You can move the cursor to the saved cursor position by using the Restore Cursor Position sequence.
Esc[**u ** __Restore Cursor Position:__
Returns the cursor to the position stored by the Save Cursor Position sequence.
Esc[**2J ** __Erase Display:__
Clears the screen and moves the cursor to the **home position** (line 0, column 0).
Esc[**K** __Erase Line:__
Clears all characters from the cursor position to the __end of __the line (including the character at the cursor position).
Esc[//Value;...;Value//**m **__Set Graphics Mode:__
Calls the **graphics functions** specified by the following values. These specified functions__ remain active__ until the next occurrence of this escape sequence. Graphics mode changes the colors and attributes of text (such as bold and underline) displayed on the screen.
==== Text attributes ====
__0__ All attributes off
1 Bold on
4 Underscore (on monochrome display adapter only)
5 Blink on
7 Reverse video on
8 Concealed on
==== Foreground colors ====
30 Black
31 Red
32 Green
33 Yellow
34 Blue
35 Magenta
36 Cyan
37 White
==== Background colors ====
40 Black
41 Red
42 Green
43 Yellow
44 Blue
45 Magenta
46 Cyan
47 White
Parameters 30 through 47 meet the ISO 6429 standard.
Esc[=//Value//**h ** __Set Mode:__
Changes the **screen width or type** to the mode specified by one of the following values:
Screen resolution
0 40 x 25 monochrome (text)
1 40 x 25 color (text)
2 80 x 25 monochrome (text)
3 80 x 25 color (text)
4 320 x 200 4-color (graphics)
5 320 x 200 monochrome (graphics)
6 640 x 200 monochrome (graphics)
7 **Enables line wrapping**
13 320 x 200 color (graphics)
14 640 x 200 color (16-color graphics)
15 640 x 350 monochrome (2-color graphics)
16 640 x 350 color (16-color graphics)
17 640 x 480 monochrome (2-color graphics)
18 640 x 480 color (16-color graphics)
19 320 x 200 color (256-color graphics)
Esc[=//Value//**l ** __Reset Mode:__
Resets the mode by using the same values that Set Mode uses, **except for 7**, which disables line wrapping
(the last character in this escape sequence is a lowercase L).
Esc[//Code;String;...//**p** __Set Keyboard Strings:__
**Redefines a keyboard key** to a specified string.
The parameters for this escape sequence are defined as follows:
__Code__ is one or more of the values listed in the following table. These values represent keyboard keys and key combinations. When using these values in a command, you must type the semicolons shown in this table in addition to the semicolons required by the escape sequence. The codes in parentheses are **not available** on some keyboards. ANSI.SYS will not interpret the codes in parentheses for those keyboards unless you specify the /X switch in the DEVICE command for ANSI.SYS.
__String__ is either the** ASCII code **for a single character or **a string** contained in __quotation marks__. For example, both 65 and "A" can be used to represent an uppercase A.
IMPORTANT: Some of the values in the following table are not valid for all computers. Check your computer's documentation for values that are different.
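As a sketch, the Set Keyboard Strings sequence that would redefine F1 (code 0;59 in the table below) to type "dir" followed by ENTER (code 13) can be emitted like this. It only takes effect under DOS with ANSI.SYS loaded; on other terminals the bytes are ignored or misinterpreted:

```shell
# ESC [ 0;59;"dir";13 p  -- redefine F1 to "dir" + ENTER under ANSI.SYS.
printf '\033[0;59;"dir";13p'
```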
As the table below shows, many of the keyboard's function keys are in fact implemented via escape sequences.
Key Code **SHIFT**+code **CTRL**+code **ALT**+code
F1 0;59 0;84 0;94 0;104
F2 0;60 0;85 0;95 0;105
F3 0;61 0;86 0;96 0;106
F4 0;62 0;87 0;97 0;107
F5 0;63 0;88 0;98 0;108
F6 0;64 0;89 0;99 0;109
F7 0;65 0;90 0;100 0;110
F8 0;66 0;91 0;101 0;111
F9 0;67 0;92 0;102 0;112
F10 0;68 0;93 0;103 0;113
F11 0;133 0;135 0;137 0;139
F12 0;134 0;136 0;138 0;140
HOME (num keypad) 0;71 55 0;119 --
UP ARROW (num keypad) 0;72 56 (0;141) --
PAGE UP (num keypad) 0;73 57 0;132 --
LEFT ARROW (num keypad) 0;75 52 0;115 --
RIGHT ARROW (num keypad) 0;77 54 0;116 --
END (num keypad) 0;79 49 0;117 --
DOWN ARROW (num keypad) 0;80 50 (0;145) --
PAGE DOWN (num keypad) 0;81 51 0;118 --
INSERT (num keypad) 0;82 48 (0;146) --
DELETE (num keypad) 0;83 46 (0;147) --
HOME (224;71) (224;71) (224;119) (224;151)
UP ARROW (224;72) (224;72) (224;141) (224;152)
PAGE UP (224;73) (224;73) (224;132) (224;153)
LEFT ARROW (224;75) (224;75) (224;115) (224;155)
RIGHT ARROW (224;77) (224;77) (224;116) (224;157)
END (224;79) (224;79) (224;117) (224;159)
DOWN ARROW (224;80) (224;80) (224;145) (224;154)
PAGE DOWN (224;81) (224;81) (224;118) (224;161)
INSERT (224;82) (224;82) (224;146) (224;162)
DELETE (224;83) (224;83) (224;147) (224;163)
PRINT SCREEN -- -- 0;114 --
PAUSE/BREAK -- -- 0;0 --
BACKSPACE 8 8 127 (0)
__ENTER __ 13 -- 10 (0;28)
__TAB __ 9 0;15 (0;148) (0;165)
__NULL__ **0;3** -- -- --
A 97 65 1 0;30 # "A" here is the key labeled A on the keyboard: unshifted it is lowercase "a" (ASCII 97); SHIFT+A gives uppercase "A" (ASCII 65).
B 98 66 2 0;48
C 99 67 3 0;46
D 100 68 4 0;32
E 101 69 5 0;18
F 102 70 6 0;33
G 103 71 7 0;34
H 104 72 8 0;35
I 105 73 9 0;23
J 106 74 10 0;36
K 107 75 11 0;37
L 108 76 12 0;38
M 109 77 13 0;50
N 110 78 14 0;49
O 111 79 15 0;24
P 112 80 16 0;25
Q 113 81 17 0;16
R 114 82 18 0;19
S 115 83 19 0;31
T 116 84 20 0;20
U 117 85 21 0;22
V 118 86 22 0;47
W 119 87 23 0;17
X 120 88 24 0;45
Y 121 89 25 0;21
Z 122 90 26 0;44
1 49 __33__ -- 0;120 # SHIFT+1 yields "!", whose ASCII value is 33.
2 50 64 0 0;121
3 51 35 -- 0;122
4 52 36 -- 0;123
5 53 37 -- 0;124
6 54 94 30 0;125
7 55 38 -- 0;126
8 56 42 -- 0;127
9 57 40 -- 0;128
0 48 41 -- 0;129
- 45 95 31 0;130
= 61 43 -- 0;131
[ 91 123 27 0;26
] 93 125 29 0;27
\ 92 124 28 0;43
; 59 58 -- 0;39
' 39 34 -- 0;40
, 44 60 -- 0;51
. 46 62 -- 0;52
/ 47 63 -- 0;53
` 96 126 -- (0;41)
ENTER (keypad) 13 -- 10 (0;166)
/ (keypad) 47 47 (0;142) (0;74)
* (keypad) 42 (0;144) (0;78) --
- (keypad) 45 45 (0;149) (0;164)
+ (keypad) 43 43 (0;150) (0;55)
5 (keypad) (0;76) 53 (0;143) --
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T20:52:37+08:00
====== ANSI escape code ======
Created Sunday 26 February 2012
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T20:53:10+08:00
====== Escape sequence ======
Created Sunday 26 February 2012
http://en.wikipedia.org/wiki/Escape_sequence
An escape sequence is a series of characters used to **change the state of computers and their attached peripheral devices**. These are also known as __control sequences__, reflecting their **use in device control (mainly terminal control)**. Some control sequences are special characters that always have the same meaning. Escape sequences use an **escape character** to change the meaning of the characters which follow it, meaning that the characters can be __interpreted as a command__ to be executed rather than as data.
Escape sequences are commonly used when a computer and a peripheral have __only a single channel through__ which to send information back and forth. If the device in question is __"dumb" __and can only do one thing with the information being sent to it (for instance, print it) then there is no need for an escape sequence. However most devices have more than one capability, and thus __need some way to distinguish information that is to be treated as data from information __that is to be treated as commands.
==== What escape sequences are for ====
They __supply, and distinguish__, device-control information that is **mixed into** the normal output data.
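Because commands travel in-band with the data, dumping the byte stream shows the ESC introducer (octal 033) embedded among ordinary characters:

```shell
# od -c renders each byte; 033 is the ESC that begins a control sequence.
printf 'data\033[1mmore' | od -c
```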
An escape character is usually assigned to the __Esc key__ on a computer keyboard, and can be sent in other ways than as part of an escape sequence. For example, the Esc key may be used as an input character in editors such as EMACS, or for backing up one level in a menu in some applications. The Hewlett Packard HP 2640 terminals had a key for a "display functions" mode which would display graphics for all control characters, including Esc, to aid in debugging applications.
If the **Esc key** and other keys that __send escape sequences are both supposed to be meaningful to an application__, an ambiguity arises, if a terminal or terminal emulator is in use.
In particular, when the application receives the ASCII escape character, it is not clear__ whether __that character is the result of the user pressing the **Esc key **or whether it is the** initial character **of an escape sequence (e.g., resulting from an arrow key press). The traditional method of resolving the ambiguity is to observe whether or not __another character quickly follows __the escape character. If not, it is assumed not to be part of an escape sequence. This heuristic can fail under some circumstances, but in practice it works reasonably well, especially with faster modern communication speeds.
Escape sequences date back at least to the 1874 Baudot code.
===== Contents =====
1 Modem control
2 Comparison with control characters
3 ASCII video data terminals
4 Use in DOS and Windows
5 See also
===== Modem control =====
The **Hayes command set**, for instance, defines a single escape sequence, __+++__. (So that a +++ occurring inside the data is not mistaken for the escape sequence, the sender pauses communication for one second before and after it.) When the modem encounters this in a stream of data, it switches from its __normal mode__ of operation, which simply sends any characters to the phone line, to a __command mode__ in which the following data is assumed to be part of the command language. You can switch back to the online mode by sending the O command.
modal ['məudəl] adj. modal; relating to mode or form
The Hayes command set is **modal**, switching from command mode to online mode. This is not appropriate in the case where the commands and data will switch back and forth rapidly. An example of a non-modal escape sequence control language is the __VT100__, which used __a series of commands prefixed by the Control Sequence Introducer, escape-[__.
===== Comparison with control characters =====
A control character is__ a character __that, in isolation, has some control function, such as **carriage return** (CR).
__Escape sequences__, by contrast, consist of an escape character or sequence which __changes the interpretation __of following characters.
The earlier VT52 terminal used simple__ digraph__ commands like escape-A: in isolation, "A" simply meant the letter "A", but as part of the escape sequence "escape-A", it had a different meaning. The VT52 also supported parameters: it was not a straightforward control language encoded as substitution.
===== Escape character vs control character =====
Generally, an escape character is not a particular case of (device) control characters, nor vice versa. If we define__ control characters as non-graphic__, or as having a special meaning for an output device (e.g. printer or text terminal) then any escape character for this device is a control one.
But escape characters __used in programming (see below) are graphic__, hence are not control characters. Conversely most (but not all) of the ASCII "control characters" have some control function in isolation, therefore are not escape characters.
===== ASCII video data terminals =====
The VT100 terminal implemented the more sophisticated** ANSI standard** (now ECMA-48) for functions such as__ controlling cursor movement, character set, and display enhancements. __The Hewlett Packard HP 2640 series had perhaps the most elaborate escape sequences for** block and character** modes, __programming keys __and their soft labels, graphics vectors, and even saving data to tape or disk files.
===== Use in DOS and Windows =====
A utility, ANSI.SYS, can be used to enable the interpreting of the ANSI (ECMA-48) terminal escape sequences in a DOS command window in DOS or 16-bit Windows. The rise of GUI applications, which directly write to display cards, has greatly reduced the usage of escape sequences on Microsoft platforms, but they can still be used to create interactive random-access character-based screen interfaces with the character-based library routines such as printf without resorting to a GUI program.
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T19:21:38+08:00
====== Notes on Linux terminal text colors ======
Created Sunday 26 February 2012
http://jarit.iteye.com/blog/1070117
Colors on a text terminal can be generated with "ANSI __escape sequences__". An example:
  echo -e "\033[44;37;5m ME \033[0m COOL"
The command above sets the background to blue and the foreground to white with blink, prints "ME", then resets the screen to its defaults and prints "COOL". "-e" is an option of echo that enables the parser for special characters. "\033" is the octal ASCII code of ESC, the non-printing character that introduces the escape sequence. "m" means __set the attributes and end the escape sequence__; the characters that actually take effect in this example are "44;37;5" and "0".
Changing "44;37;5" yields different color combinations; __the order of the numeric codes does not matter__. The available codes are:
Code  Color/action
0  reset all attributes to their defaults
1  bold
2  half-bright (simulated with color on a color display)
4  underline (simulated with color on a color display)
5  blink
7  reverse video
22  normal intensity
24  underline off
25  blink off
27  reverse video off
30  black foreground
31  red foreground
32  green foreground
33  brown foreground
34  blue foreground
35  magenta foreground
36  cyan foreground
37  white foreground
38  underline on the default foreground color
39  underline off on the default foreground color
40  black background
41  red background
42  green background
43  brown background
44  blue background
45  magenta background
46  cyan background
47  white background
49  default background color
Other interesting codes:
__\033[2J __  clear the screen
\033[0q   turn off all keyboard LEDs
\033[1q   turn on the Scroll Lock LED
\033[2q   turn on the Num Lock LED
\033[3q   turn on the Caps Lock LED
\033[15;40H  move the cursor to row 15, column 40
__\007 __   sound the bell (beep)
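A sketch combining the codes above: clear the screen, move the cursor to row 15 column 40, print a marker, and beep (assumes a VT100/ANSI terminal):

```shell
# \033[2J clears the screen, \033[15;40H moves the cursor (CUP separates
# row and column with ';'), \007 is the BEL character.
printf '\033[2J\033[15;40H*\007'
```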
Changing the prompt font and background colors on RedHat:
Commands:
PS1="[\e[32;1m\u@\h \W]\\$"
export PS1="[\e[32;1m\u@\h \W]\\$" (see the documentation on environment variables for the difference between the two)
Explanation:
\e[32;1m is the escape sequence that __controls the font and background colors__: 30-37 select the font (foreground) color, 40-47 the background color.
The numbers in 32;1m may be swapped, e.g. \e[1;32m. Under X the leading number can also be varied (roughly 0-10; not all values are useful): 0, or omitting it (\e[0;32m or \e[;32m), gives the normal shade; 1 gives high intensity; 4 underlines; and so on. If the result looks wrong and you cannot restore it, drop the number before m (e.g. \e[32;m), or simply log out and back in.
\u, \h, \W and the like are prompt escape characters, explained in detail below:
\d : the date, in "weekday month date" format, e.g. "Mon Aug 1"
\H : the full hostname; e.g. if my machine is named fc4.linux, this gives fc4.linux
\h : only the first component of the hostname; in the example above, fc4 (the .linux part is dropped)
\t : the time, 24-hour format, HH:MM:SS
\T : the time, 12-hour format
\A : the time, 24-hour format, HH:MM
\u : the current user's account name
\v : the bash version
\w : the full working-directory path, with the home directory shown as ~
\W : the basename of the working directory (only the last path component)
\# : the sequence number of the command
\$ : the prompt character: # for root, $ for ordinary users
\n : a newline
The prompt is not limited to a single color; __several colors can be used__:
PS1="[\e[32;1m\u@\e[35;1m\h \e[31;1m\W]\\$"
Both commands above stop working once you log out. To make the setting permanent:
vi /etc/profile
Below the "export PATH ..." line, add: export PS1="[\e[32;1m\u@\h \W]\\$"
Log out and back in. If it does not take effect, try source /etc/profile, or just reboot the machine.
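One caveat not covered above: in bash, non-printing escape sequences in the prompt should be wrapped in \[ and \], so readline does not count them toward the prompt width (otherwise long command lines wrap in the wrong column). A sketch:

```shell
# \[ ... \] marks zero-width sequences for readline.
PS1='[\[\e[32;1m\]\u@\h \W\[\e[0m\]]\$ '
```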
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T21:42:09+08:00
====== Terminal Function Key Escape Codes ======
Created Sunday 26 February 2012
http://aperiodic.net/phil/archives/Geekery/term-function-keys.html
Tue, 13 Dec 2005 Terminal Function Key Escape Codes
I recently had the pleasure of trying to figure out a friend's terminal woes. His **function keys **weren't behaving properly. It turns out that his__ terminal was sending escape codes__ that differed from the __terminfo__ definition his terminal was using. I set out to find what the correct solution was. These are my results. (Note that I use Debian GNU/Linux. Some of this may be Debian-specific.)
Most **terminal emulators **these days emulate some superset of a __DEC VT100.__ (Not all of them emulate a VT100 exactly, though. cf. vttest.) The **VT100 didn't have function keys** in the sense that it didn't have keys labeled F1, F2, F3, etc. It did, however, have four keys over the numeric keypad labeled PF1 through PF4. These keys are generally regarded as analogous to F1 through F4 on a modern keyboard. They generated escape codes __^[OP through ^[OS__. The appropriate __$TERM__ type for a VT100 is vt100.
One of the best well known terminal emulators around, __xterm__, actually emulates (or strives to emulate) a __DEC VT220__. I'll get to xterm in a bit, but the VT220 was a more advanced terminal than the VT100. Among other things, it had** twenty function keys, labeled F1 through F20**. The first five were strictly for __local terminal functions__; the host never saw any escape codes from them. The remainder (F6 through F20) sent escape codes **^[[17~ to ^[[21~, ^[[23~ to ^[[26~, ^[[28~, ^[[29~, and ^[[31~ to ^[[34~.** The appropriate $TERM type for a VT220 is __vt220__. (Note that, on my system, the vt220 $TERM type actually defines VT100 escape codes for F1 through F4. There's no definition for F5.)
This brings me to xterm. xterm has a long history, and the function key definitions have __changed over time__. The original xterm from the X Consortium (even before they became the Open Group) used escape codes based on the VT220, but extended to cover the range from **F1 to F48**. F1 through F12 generated, respectively, codes ^[[11~ to ^[[15~, ^[[17~ to ^[[21~, ^[[23~, and ^[[24~.__ Shift-F1__ through Shift-F12 were used for F13 through F24, and generated codes from ^[[11;2~ to ^[[24;2~. Similarly __Ctrl-F1__ through Ctrl-F12 were used for F25 through F36 and generated codes ^[[11;5~ to ^[[24;5~, and__ Ctrl-Shift-F1__ through Ctrl-Shift-F12 were used for F37 through F48 and generated codes ^[[11;6~ to ^[[24;6~. None of the base xterm $TERM types on my system correspond to this series of escape codes, though you can still get xterm to exhibit the old behavior by setting the **OldXtermFKeys **resource to 'true'.
The current XFree86 xterm __mixes VT100 and VT220__. Since the original VT220 didn't have F1 through F5, the XFree86 xterm uses the escape sequences from the VT100's PF1 through PF4 for F1 through F4 while retaining the VT220-based escape sequences from the X Consortium xterm for F5 through F12. So the differences from the earlier xterms are: F1 through F4 generate escape codes ^[OP through ^[OS, F13 to F16 generate ^[O2P to ^[O2S, F25 to F28 generate ^[O5P to ^[O5S, and F37 to F40 generate ^[O6P to ^[O6S. On my system, the $TERM types that have the appropriate function key definitions are **xterm, xterm-debian, xterm-mono, and xterm-xfree86.**
The GNOME project's terminal emulator is __gnome-terminal__**.** It generates the exact same escape codes as the XFree86 xterm and will work with the same $TERM settings. Note, however, that some function keys are bound to gnome-terminal actions and __will not be passed through to applications__ running in the terminal. (For example, F1 calls up the GNOME help browser to view the gnome-terminal documentation.)
__multi-gnome-terminal __is based on gnome-terminal, but it implements **multiple tabbed terminal sessions** in a single window. It also does the function keys a little differently, though it's a bit more like the original VT220. F1 through F12 behave exactly the same as the XFree86 xterm. Shift-F1 through Shift-F10 function as F11 through F20 and generate escape codes from ^[[23~ to ^[[34~, just like the VT220. Note that this means there are two ways to get F11 and F12. (Actually, there are three, since Shift-F11 and Shift-F12 are also equivalent to F11 and F12.) On my system, the $TERM types with the appropriate function key definitions are xterm-color, xterm-r6, and xterm-vt220. xterm can be made to behave like this by setting the **SunKeyboard** resource to 'true'. Note that, like gnome-terminal, multi-gnome-terminal binds some function keys for its own use and may not pass them through to the programs in the terminal.
__rxvt __is a very popular xterm replacement. It uses the same escape sequences as the X11R6 xterm for F1 through F12. Shift-F1 through Shift-F12 work similarly to multi-gnome-terminal; they add ten to the number on the key (so there are again two ways to get F11 and F12). rxvt generates the same escape sequences as multi-gnome-terminal for F11 through F20, and uses ^[[23$ and ^[[24$ for F21 and F22, respectively. The sequence continues with Ctrl-F1 through Ctrl-F12 generating ^[[11^ through ^[[24^ for F23 through F34 (no overlap with previous sequences), Ctrl-Shift-F1 through Ctrl-Shift-F10 generating ^[[23^ through ^[[34^ for F33 through F42 (two-key overlap), and Ctrl-Shift-F11 and Ctrl-Shift-F12 generating ^[[23@ and ^[[24@ for F43 and F44. The base $TERM type for rxvt is __rxvt__, though it ships with several types for different circumstances, including __rxvt-basic and rxvt-m__. It also comes with __rxvt-unicode__, but on my system that definition only lists function keys up to F20.
__GNU screen__ is also a terminal emulator, though it expects to run within another terminal environment (as opposed to displaying text in a graphical environment like xterm or displaying text on physical hardware like an actual terminal). As such, it translates many escape sequences from its containing terminal environment to the VT100-like environment it provides. It will recognize and translate the sequences for F1 through F12. For those, it will generate the same escape codes as the XFree86 xterm. It does not recognize F13 and above; those escape codes will **pass through unchanged to the programs **running within screen. (Note that the screen 'bindkey' command has a -k option that uses__ termcap __capabilities to represent keys. It understands k1 through FA, which correspond to F1 through F20.) The $TERM type for screen is __screen__.
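The ^[ notation in the table below stands for the ESC byte (octal 033). A quick way to inspect such sequences from the shell is to pipe them through cat -v, which renders ESC as ^[:

```shell
# VT100-style F1 and VT220-style F6 as byte strings.
printf '\033OP'   | cat -v   # F1: shows ^[OP
printf '\033[17~' | cat -v   # F6: shows ^[[17~
```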
key VT100 VT220 X11R6 xterm XFree86 xterm rxvt MGT screen
F1 ^[OP ^[[11~ ^[OP ^[[11~ ^[OP ^[OP
F2 ^[OQ ^[[12~ ^[OQ ^[[12~ ^[OQ ^[OQ
F3 ^[OR ^[[13~ ^[OR ^[[13~ ^[OR ^[OR
F4 ^[OS ^[[14~ ^[OS ^[[14~ ^[OS ^[OS
F5 ^[[15~ ^[[15~ ^[[15~ ^[[15~ ^[[15~
F6 ^[[17~ ^[[17~ ^[[17~ ^[[17~ ^[[17~ ^[[17~
F7 ^[[18~ ^[[18~ ^[[18~ ^[[18~ ^[[18~ ^[[18~
F8 ^[[19~ ^[[19~ ^[[19~ ^[[19~ ^[[19~ ^[[19~
F9 ^[[20~ ^[[20~ ^[[20~ ^[[20~ ^[[20~ ^[[20~
F10 ^[[21~ ^[[21~ ^[[21~ ^[[21~ ^[[21~ ^[[21~
F11 ^[[23~ ^[[23~ ^[[23~ ^[[23~ ^[[23~ ^[[23~
F12 ^[[24~ ^[[24~ ^[[24~ ^[[24~ ^[[24~ ^[[24~
F13 ^[[25~ ^[[11;2~ ^[O2P ^[[25~ ^[[25~
F14 ^[[26~ ^[[12;2~ ^[O2Q ^[[26~ ^[[26~
F15 ^[[28~ ^[[13;2~ ^[O2R ^[[28~ ^[[28~
F16 ^[[29~ ^[[14;2~ ^[O2S ^[[29~ ^[[29~
F17 ^[[31~ ^[[15;2~ ^[[15;2~ ^[[31~ ^[[31~
F18 ^[[32~ ^[[17;2~ ^[[17;2~ ^[[32~ ^[[32~
F19 ^[[33~ ^[[18;2~ ^[[18;2~ ^[[33~ ^[[33~
F20 ^[[34~ ^[[19;2~ ^[[19;2~ ^[[34~ ^[[34~
F21 ^[[20;2~ ^[[20;2~ ^[[23$
F22 ^[[21;2~ ^[[21;2~ ^[[24$
F23 ^[[23;2~ ^[[23;2~ ^[[11^
F24 ^[[24;2~ ^[[24;2~ ^[[12^
F25 ^[[11;5~ ^[O5P ^[[13^
F26 ^[[12;5~ ^[O5Q ^[[14^
F27 ^[[13;5~ ^[O5R ^[[15^
F28 ^[[14;5~ ^[O5S ^[[17^
F29 ^[[15;5~ ^[[15;5~ ^[[18^
F30 ^[[17;5~ ^[[17;5~ ^[[19^
F31 ^[[18;5~ ^[[18;5~ ^[[20^
F32 ^[[19;5~ ^[[19;5~ ^[[21^
F33 ^[[20;5~ ^[[20;5~ ^[[23^
F34 ^[[21;5~ ^[[21;5~ ^[[24^
F35 ^[[23;5~ ^[[23;5~ ^[[25^
F36 ^[[24;5~ ^[[24;5~ ^[[26^
F37 ^[[11;6~ ^[O6P ^[[28^
F38 ^[[12;6~ ^[O6Q ^[[29^
F39 ^[[13;6~ ^[O6R ^[[31^
F40 ^[[14;6~ ^[O6S ^[[32^
F41 ^[[15;6~ ^[[15;6~ ^[[33^
F42 ^[[17;6~ ^[[17;6~ ^[[34^
F43 ^[[18;6~ ^[[18;6~ ^[[23@
F44 ^[[19;6~ ^[[19;6~ ^[[24@
F45 ^[[20;6~ ^[[20;6~
F46 ^[[21;6~ ^[[21;6~
F47 ^[[23;6~ ^[[23;6~
F48 ^[[24;6~ ^[[24;6~
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2012-02-26T21:13:48+08:00
====== Escape character experiments ======
Created Sunday 26 February 2012
Escape characters and control characters are different concepts. The former change the meaning of the character or string that follows them (for example, turning it into a command that moves the cursor, edits the command line, or changes foreground/background colors); the latter have their special effect __by themselves__.
[geekard@geekard ~]$** cat   # by default, reads a line from standard input and writes it back.**
123   # this line is the __terminal echo__.
123
!@#   # as above
!@#
asd
asd
ASD   # Shift-a, echoed
ASD
áóä   # Alt-a, echoed
áóä
^A[geekard@geekard ~]$ cat   # Ctrl-S is echoed and pauses terminal input; **Ctrl-D is echoed and cat exits**.
^[OP^[OQ^[OR^[OS   **# echo after pressing F1, F2, F3, F4.**
PQRS   **# program output.**
^[[A^[[D^[[C^[[B^[[H^[[5~^[[6~^[[F   **# echo of Up, Left, Right, Down, Home, PgUp, PgDn, End; after Enter the cursor moves to the top of the screen.**
[geekard@geekard ~]$ cat color.c
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
int i ;
printf("hello, world!\n");
printf("__\033OP__\n","K_Linux_Man"); # The next four lines print the F1-F4 escape strings (the second argument is ignored: the format string has no %s).
printf("\033OQ\n","K_Linux_Man");
printf("\033OR\n","K_Linux_Man");
printf("\033OS\n","K_Linux_Man");
exit(0);
}
[geekard@geekard ~]$ gcc color.c && ./a.out
hello, world!
P # each escape string is displayed as a single character.
Q
R
S
[geekard@geekard ~]$
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-24T13:47:19+08:00
====== Script execution and termination ======
Created Saturday 24 December 2011
[geekard@geekard ~]$ **cat test.sh **
tmpfile=tmpfile
touch tmpfile
trap "rm -f $tmpfile; exit" INT TERM EXIT
while true; do __sleep 1__; done
[geekard@geekard ~]$
[geekard@geekard ~]$ **./test.sh &**
[1] __6285__
[geekard@geekard ~]$ ps __ -t pts/0__ -opid,ppid,args f
PID PPID COMMAND
2412 2411 bash
__6285__ 2412 \_ bash
** 7414 ** 6285 | \_ sleep 1
7416 2412 \_ ps -t pts/0 -opid,ppid,args f
[geekard@geekard ~]$ kill -TERM 7414
bash: kill: (7414) - **No such process**
[geekard@geekard ~]$ ps -t pts/0 -opid,ppid,args f
PID PPID COMMAND
2412 2411 bash
6285 2412 \_ bash
** 7662** 6285 | \_ sleep 1
7663 2412 \_ ps -t pts/0 -opid,ppid,args f
[geekard@geekard ~]$ kill -TERM 7662
bash: kill: (7662) - No such process
Now modify test.sh, extending the sleep interval to 10 s:
[geekard@geekard ~]$ **cat test.sh **
tmpfile=tmpfile
touch tmpfile
trap "rm -f $tmpfile; exit" INT TERM EXIT
while true; do __sleep 10__; done
[geekard@geekard ~]$ ./test.sh &
[1] __7909__
[geekard@geekard ~]$ ps -t pts/0 -opid,ppid,args f
PID PPID COMMAND
2412 2411 bash
__ 7909 __ 2412 \_ bash
** 7944** 7909 | \_ sleep 10
7951 2412 \_ ps -t pts/0 -opid,ppid,args f
[geekard@geekard ~]$ kill -TERM 7944
**Terminated**
[geekard@geekard ~]$ ls
bin Desktop download musics pictures softwares test.sh __ tmpfile__ vms
codes documents dumy.c notes ppc test file tmp video www
[geekard@geekard ~]$ ps -t pts/0 -opid,ppid,args f
PID PPID COMMAND
2412 2411 bash
7909 2412 \_ bash
** 7978 7909 | \_ sleep 10**
** 8006 2412 \_ ps -t pts/0 -opid,ppid,args f**
**[geekard@geekard ~]$ kill -TERM 7978**
**bash: kill: (7978) -** No such process
[geekard@geekard ~]$ ps -t pts/0 -opid,ppid,args f
PID PPID COMMAND
2412 2411 bash
7909 2412 \_ bash
__ 8047 __ 7909 | \_ sleep 10
8048 2412 \_ ps -t pts/0 -opid,ppid,args f
[geekard@geekard ~]$ kill -TERM 8047
Terminated
[geekard@geekard ~]$ ls
bin Desktop download musics pictures softwares test.sh __tmpfile__ vms
codes documents dumy.c notes ppc test file tmp video www
[geekard@geekard ~]$ kill -TERM__ 7909__
[geekard@geekard ~]$ ls
bin Desktop download musics pictures softwares test.sh __tmpfile__ vms
codes documents dumy.c notes ppc test file tmp video www
[geekard@geekard ~]$ ls
bin Desktop download musics pictures softwares test.sh __tmpfile__ vms
codes documents dumy.c notes ppc test file tmp video www
[geekard@geekard ~]$ ps -t pts/0 -opid,ppid,args f
PID PPID COMMAND
2412 2411 bash
8159 2412 \_ ps -t pts/0 -opid,ppid,args f
**[1]+ Done ./test.sh**
[geekard@geekard ~]$
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-24T13:12:33+08:00
====== Script debugging ======
Created Saturday 24 December 2011
Debugging shell scripts is usually quite easy, but there are no dedicated tools for it. I generally use these methods:
1. Add extra echo statements to display the contents of variables (useful when the error output is uninformative).
2. Set shell options at appropriate points in the script:
set -e : stop the script as soon as any command exits with a false (non-zero) status
set -n : check syntax only; do not execute commands
set -v : echo commands before executing them
set -x : echo commands, after command-line expansion, before executing them
set -u : report an error when an undefined variable is used
Replacing the "-" above with "+" turns the option off. The options can also be given when launching the script, e.g.:
$ /bin/sh -n <script>
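As a sketch of method 2, set -x / set +x can bracket just the region being debugged; the trace lines (prefixed with +) go to stderr:

```shell
#!/bin/sh
# Trace only the middle section; '+' lines on stderr show each expanded command.
set -x
name=world
echo "hello $name"
set +x
echo "tracing is off again"
```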
Content-Type: text/x-zim-wiki
Wiki-Format: zim 0.4
Creation-Date: 2011-12-22T10:43:30+08:00
====== Expression evaluation ======
Created Thursday 22 December 2011
[geekard@geekard ~]$ unset kk; echo __((kk++))__
bash: syntax error near unexpected token `('
[geekard@geekard ~]$ echo__ $((++kk))__; echo $kk
1
1
[geekard@geekard ~]$ unset kk; echo __$((kk++))__; echo $kk
0
1
[geekard@geekard ~]$ unset k;i=$((__k__+1));echo $i
1
[geekard@geekard ~]$ unset k;i=$((__$k__+1));echo $i
1
[geekard@geekard ~]$ unset i;i=$((__i__+1));echo $i
1
[geekard@geekard ~]$ unset i;i=$((__$i__+1));echo $i
1
[geekard@geekard ~]$
[geekard@geekard ~]$ echo $((kk++))
0
[geekard@geekard ~]$ echo $kk
1
[geekard@geekard ~]$ unset kk
[geekard@geekard ~]$ echo $((++kk))
1
[geekard@geekard ~]$
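The transcripts above boil down to two rules, sketched here for bash arithmetic: an unset variable evaluates to 0 inside $(( )) (with or without the $), post-increment yields the old value, and pre-increment yields the new one:

```shell
# bash arithmetic expansion: unset variables count as 0.
unset n
echo $((n++))   # prints 0 (old value); n is now 1
echo $((++n))   # prints 2 (incremented first)
echo $((u + 1)) # unset u is treated as 0, so this prints 1
```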