I am glad that you are here! I was working on bioinformatics a few years ago and was amazed by those single-word bash commands, which were much faster than my dull scripts, so I kept saving time by learning command-line shortcuts and scripting. In recent years I have been working on cloud computing, and I keep recording useful commands here. Not all of them are one-liners, but I put effort into making them brief and swift. I mainly use Ubuntu, Amazon Linux, RedHat, Linux Mint, Mac and CentOS, so sorry if some commands don't work on your system.
This blog focuses on simple bash commands for parsing data and Linux system maintenance that I acquired from work and the LPIC exam. I apologize that there are no detailed citations for all the commands; most of them probably came from dear Google and Stack Overflow.
English and bash are not my first languages, so please correct me anytime, thank you. If you know other cool commands, please teach me!
Here's a more stylish version of Bash-Oneliner~
- Terminal Tricks
- Variable
- Math
- Grep
- Sed
- Awk
- Xargs
- Find
- Condition and Loop
- Time
- Download
- Random
- Xwindow
- System
- Hardware
- Networking
- Data Wrangling
- Others
Ctrl + a : move to the beginning of line.
Ctrl + d : if you've typed something, Ctrl + d deletes the character under the cursor; otherwise, it exits the current shell.
Ctrl + e : move to the end of line.
Ctrl + k : delete all text from the cursor to the end of line.
Ctrl + l : equivalent to clear.
Ctrl + n : same as Down arrow.
Ctrl + p : same as Up arrow.
Ctrl + q : to resume output to terminal after Ctrl + s.
Ctrl + r : begins a backward search through command history.(keep pressing Ctrl + r to move backward)
Ctrl + s : to stop output to terminal.
Ctrl + t : transpose the character before the cursor with the one under the cursor; press Esc + t to transpose the two words before the cursor.
Ctrl + u : cut the line before the cursor; press Ctrl + y to paste it.
Ctrl + w : cut the word before the cursor; press Ctrl + y to paste it.
Ctrl + x + backspace : delete all text from the beginning of line to the cursor.
Ctrl + x + Ctrl + e : launch editor defined by $EDITOR to input your command. Useful for multi-line commands.
Ctrl + z : stop current running process and keep it in background. You can use `fg` to continue the process in the foreground, or `bg` to continue the process in the background.
Ctrl + _ : undo typing.
Esc + u
# converts text from cursor to the end of the word to uppercase.
Esc + l
# converts text from cursor to the end of the word to lowercase.
Esc + c
# converts letter under the cursor to uppercase, rest of the word to lowercase.
!53
# run command number 53 in history
!!
# run the previous command
# run the previous command using sudo
sudo !!
Run last command and change some parameter using caret substitution (e.g. last command: echo 'aaa' -> rerun as: echo 'bbb')
#last command: echo 'aaa'
^aaa^bbb
#echo 'bbb'
#bbb
#Notice that only the first 'aaa' will be replaced. If you want to replace all occurrences of 'aaa', use ':&' to repeat the substitution:
^aaa^bbb^:&
#or
!!:gs/aaa/bbb/
!cat
# or
!c
# run the most recent command starting with 'cat' (e.g. 'cat filename') again
# '*' serves as a "wild card" for filename expansion.
/etc/pa*wd #/etc/passwd
# '?' serves as a single-character "wild card" for filename expansion.
/b?n/?at #/bin/cat
# '[]' serves to match the character from a range.
ls -l [a-z]* #list all files with alphabet in its filename.
# '{}' can be used to match filenames with more than one pattern
ls *.{sh,py} #list all .sh and .py files
$0 :name of shell or shell script.
$1, $2, $3, ... :positional parameters.
$# :number of positional parameters.
$? :most recent foreground pipeline exit status.
$- :current options set for the shell.
$$ :pid of the current shell (not subshell).
$! :is the PID of the most recent background command.
$_ :last argument of the previously executed command, or the path of the bash script.
$DESKTOP_SESSION current display manager
$EDITOR preferred text editor.
$LANG current language.
$PATH list of directories to search for executable files (i.e. ready-to-run programs)
$PWD current directory
$SHELL current shell
$USER current username
$HOSTNAME current hostname
set -o vi
# change bash shell to vi mode
# then hit the Esc key to change to vi edit mode (when `set -o vi` is set)
k
# in vi edit mode - previous command
j
# in vi edit mode - next command
0
# in vi edit mode - beginning of the command
R
# in vi edit mode - replace current characters of command
2w
# in vi edit mode - next to 2nd word
b
# in vi edit mode - previous word
i
# in vi edit mode - go to insert mode
v
# in vi edit mode - edit current command in vi
man 3 readline
# man page for complete readline mapping
# foo=bar
echo $foo
# bar
echo "$foo"
# bar
# single quotes cause variables to not be expanded
echo '$foo'
# $foo
# single quotes within double quotes will not cancel expansion and will be part of the output
echo "'$foo'"
# 'bar'
# doubled single quotes act as if there are no quotes at all
echo ''$foo''
# bar
var="some string"
echo ${#var}
# 11
var=string
echo "${var:0:1}"
#s
# or
echo ${var%%"${var#?}"}
var="some string"
echo ${var:2}
#me string
var="0050"
echo ${var[@]#0}
#050
${var/a/,}  # replace the first 'a' with ','
${var//a/,} # replace all occurrences of 'a' with ','
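A quick sketch of the two replacement forms, using a hypothetical variable:

```shell
var="banana"
echo "${var/a/,}"   # b,nana  (first match replaced)
echo "${var//a/,}"  # b,n,n,  (all matches replaced)
```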
#with grep
test="stringA stringB stringC"
grep ${test// /\\\|} file.txt
# turning the space into 'or' (\|) in grep
var=HelloWorld
echo ${var,,}
# helloworld
cmd="bar=foo"
eval "$cmd"
echo "$bar" # foo
echo $(( 10 + 5 )) #15
x=1
echo $(( x++ )) #1 , notice that it is still 1, since it's post-increment
echo $(( x++ )) #2
echo $(( ++x )) #4 , notice that it is not 3 since it's pre-increment
echo $(( x-- )) #4
echo $(( x-- )) #3
echo $(( --x )) #1
x=2
y=3
echo $(( x ** y )) #8
factor 50
# 50: 2 5 5
seq 10|paste -sd+|bc
awk '{s+=$1} END {print s}' filename
cat file| awk -F '\t' 'BEGIN {SUM=0}{SUM+=$3-$2}END{print SUM}'
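As a sanity check of the summing one-liners above, summing 1..10 (sample numbers generated with `seq`) gives 55 either way:

```shell
# generate 1..10, then sum them two ways
seq 10 | paste -sd+ | bc              # 55
seq 10 | awk '{s+=$1} END {print s}'  # 55
```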
expr 10 + 20 #30
expr 10 \* 20 #200
expr 30 \> 20 #1 (true)
# note: expr requires spaces around its operators
# Number of decimal digit/ significant figure
echo "scale=2;2/3" | bc
#.66
# Exponent operator
echo "10^2" | bc
#100
# Using variables
echo "var=5;--var"| bc
#4
grep = grep -G # Basic Regular Expression (BRE)
fgrep = grep -F # fixed text, ignoring meta-characters
egrep = grep -E # Extended Regular Expression (ERE)
rgrep = grep -r # recursive
grep -P # Perl Compatible Regular Expressions (PCRE)
grep -c "^$"
grep -o '[0-9]*'
#or
grep -oP '\d*'
grep '[0-9]\{3\}'
# or
grep -E '[0-9]{3}'
# or
grep -P '\d{3}'
grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
# or
grep -Po '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}'
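For instance, extracting an IP address from a line of text (a made-up sample line):

```shell
# pull the IPv4-looking token out of a sample string
echo "server at 10.0.0.138 is up" | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
# 10.0.0.138
```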
grep -w 'target'
#or using RE
grep '\btarget\b'
# return also 3 lines after match
grep -A 3 'bbo'
# return also 3 lines before match
grep -B 3 'bbo'
# return also 3 lines before and after match
grep -C 3 'bbo'
grep -o 'S.*'
grep -o -P '(?<=w1).*(?=w2)'
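A quick sketch of the look-around pattern above, extracting the text between two markers w1 and w2 (hypothetical input; note that the surrounding spaces are captured too):

```shell
echo 'w1 some text w2' | grep -oP '(?<=w1).*(?=w2)'
# prints ' some text ' (with the spaces)
```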
grep -v bbo filename
grep -v '^#' file.txt
grep "$myvar" filename
#remember to quote the variable!
grep -m 1 bbo filename
grep -c bbo filename
grep -o bbo filename |wc -l
grep -i "bbo" filename
grep --color bbo filename
grep -R bbo /path/to/directory
# or
grep -r bbo /path/to/directory
grep -rh bbo /path/to/directory
grep -rl bbo /path/to/directory
grep 'A\|B\|C\|D'
grep 'A.*B'
grep 'A.B'
grep 'colou\?r'
grep -f fileA fileB
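For example, with a hypothetical pattern file and data file, `grep -f` prints the lines of the second file that match any pattern from the first:

```shell
# hypothetical sample files under /tmp
printf 'apple\nbanana\n' > /tmp/patterns.txt
printf 'banana split\ncherry pie\n' > /tmp/menu.txt
grep -f /tmp/patterns.txt /tmp/menu.txt
# banana split
```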
grep $'\t'
echo "$long_str"|grep -q "$short_str"
if [ $? -eq 0 ]; then echo 'found'; fi
#grep -q exits with status 0 if a match is found
#remember to add spaces inside the []!
grep -oP '\(\K[^\)]+'
grep -o -w "\w\{10\}\-R\w\{1\}"
# \w word character [0-9a-zA-Z_] \W not word character
grep -d skip 'bbo' /path/to/files/*
sed 1d filename
sed 1,100d filename
sed "/bbo/d" filename
# case insensitive:
sed "/bbo/Id" filename
sed -E '/^.{5}[^2]/d'
#aaaa2aaa (you can stay)
#aaaa1aaa (delete!)
sed -i "/bbo/d" filename
# e.g. add >$i to the first line (to make a bioinformatics FASTA file)
sed "1i >$i"
# notice the double quotes! in other examples, you can use a single quote, but here, no way!
# '1i' means insert to first line
# Use backslash for end-of-line $ pattern, and double quotes for expressing the variable
sed -e "\$s/\$/\n+--$3-----+/"
sed '/^\s*$/d'
# or
sed '/^$/d'
sed '$d'
sed -i '$ s/.$//' filename
sed -i '1s/^/[/' file
sed -e '1isomething' -e '3isomething'
sed '$s/$/]/' filename
sed '$a\'
sed -e 's/^/bbo/' filename
sed -e 's/$/\}\]/' filename
sed 's/.\{4\}/&\n/g'
Add a line after the line that matches the pattern (e.g. add a new line with "world" after the line with "hello")
sed '/hello*/a world' filename
# hello
# world
sed -s '$a,' *.json > all.json
sed 's/A/B/g' filename
sed "s/aaa=.*/aaa=\/my\/new\/path/g"
sed -n '/^@S/p'
sed '/bbo/d' filename
sed -n 500,5000p filename
sed -n '0~3p' filename
# 0: start; 3: step (a GNU sed extension: print every 3rd line)
sed -n '1~2p'
sed -n '1p;0~3p'
sed -e 's/^[ \t]*//'
# Notice a whitespace before '\t'!!
sed 's/ *//'
# notice a whitespace before '*'!!
sed 's/,$//g'
sed "s/$/\t$i/"
# $i is the variable you want to add
# To add the filename to every last column of the file
for i in $(ls); do sed -i "s/$/\t$i/" $i; done
for i in T000086_1.02.n T000086_1.02.p; do sed "s/$/\t${i/*./}/" $i; done >T000086_1.02.np
sed ':a;N;$!ba;s/\n//g'
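For example, joining three lines into one with the GNU sed loop above:

```shell
printf 'a\nb\nc\n' | sed ':a;N;$!ba;s/\n//g'
# abc
```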
sed -n -e '123p'
sed -n '10,33p' <filename
sed 's=/=\\/=g'
sed 's/A-.*-e//g' filename
sed '$ s/.$//'
sed -r -e 's/^.{3}/&#/' file
awk -F $'\t'
awk -v OFS='\t'
a=bbo;b=obb;
awk -v a="$a" -v b="$b" '$1==a && $10==b' filename
awk '{print NR,length($0);}' filename
awk '{print NF}'
awk '{print $2, $1}'
awk '$1~/,/ {print}'
awk '{split($2, a,",");for (i in a) print $1"\t"a[i]}' filename
awk -v N=7 '{print}/bbo/&& --N<=0 {exit}'
ls|xargs -n1 -I file awk '{s=$0};END{print FILENAME,s}' file
awk 'BEGIN{OFS="\t"}$3="chr"$3'
awk '!/bbo/' file
awk 'NF{NF-=1};1' file
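For example, dropping the last column of a simple space-separated line:

```shell
echo "a b c" | awk 'NF{NF-=1};1'
# a b
```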
# For example there are two files:
# fileA:
# a
# b
# c
# fileB:
# d
# e
awk '{print FILENAME, NR,FNR,$0}' fileA fileB
# fileA 1 1 a
# fileA 2 2 b
# fileA 3 3 c
# fileB 4 1 d
# fileB 5 2 e
# For example there are two files:
# fileA:
# 1 0
# 2 1
# 3 1
# 4 0
# fileB:
# 1 0
# 2 1
# 3 0
# 4 1
awk -v OFS='\t' 'NR==FNR{a[$1]=$2;next} NF {print $1,((a[$1]==$2)? $2:"0")}' fileA fileB
# 1 0
# 2 1
# 3 0
# 4 0
awk '{while (match($0, /[0-9]+\.[0-9]+/)){
    printf "%s%.2f", substr($0,0,RSTART-1), substr($0,RSTART,RLENGTH)
    $0=substr($0, RSTART+RLENGTH)
  }
  print
}'
awk '{printf("%s\t%s\n",NR,$0)}'
# For example, separate the following content:
# David cat,dog
# into
# David cat
# David dog
awk '{split($2,a,",");for(i in a)print $1"\t"a[i]}' file
# Detail here: http://stackoverflow.com/questions/33408762/bash-turning-single-comma-separated-column-into-multi-line-string
awk '{s+=$1}END{print s/NR}'
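For example, averaging the numbers 1..4 (sample input from `seq`):

```shell
seq 4 | awk '{s+=$1}END{print s/NR}'
# 2.5
```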
awk '$1 ~ /^Linux/'
awk ' {split( $0, a, "\t" ); asort( a ); for( i = 1; i <= length(a); i++ ) printf( "%s\t", a[i] ); printf( "\n" ); }'
awk '{$6 = $4 - prev5; prev5 = $5; print;}'
xargs -d '\t'
# set tab as the delimiter (GNU xargs)
ls|xargs -L1 -p head
echo 1 2 3 4 5 6| xargs -n 3
# 1 2 3
# 4 5 6
echo a b c |xargs -p -n 3
xargs -t abcd
# (type Ctrl + D to end input)
# /bin/echo abcd
# abcd
find . -name "*.html"|xargs rm
# when using a backtick
rm `find . -name "*.html"`
find . -name "*.c" -print0|xargs -0 rm -rf
xargs --show-limits
# Output from my Ubuntu:
# Your environment variables take up 3653 bytes
# POSIX upper limit on argument length (this system): 2091451
# POSIX smallest allowable upper limit on argument length (all systems): 4096
# Maximum length of command we could actually use: 2087798
# Size of command buffer we are actually using: 131072
# Maximum parallelism (--max-procs must be no greater): 2147483647
find . -name "*.bak" -print0|xargs -0 -I {} mv {} ~/old
# or
find . -name "*.bak" -print0|xargs -0 -I file mv file ~/old
ls |head -100|xargs -I {} mv {} d1
time echo {1..5} |xargs -n 1 -P 5 sleep
# a lot faster than:
time echo {1..5} |xargs -n1 sleep
find /dir/to/A -type f -name "*.py" -print0| xargs -0 -r -I file cp -v -p file --target-directory=/path/to/B
# -v: verbose
# -p: preserve attributes (e.g. owner)
ls |xargs -n1 -I file sed -i '/^Pos/d' file
ls |sed 's/.txt//g'|xargs -n1 -I file sed -i -e '1 i\>file\' file.txt
ls |xargs -n1 wc -l
ls -l| xargs
echo mso{1..8}|xargs -n1 bash -c 'echo -n "$1:"; ls -la "$1"| grep -w 74 |wc -l' --
# "--" signals the end of options and disables further option processing
ls|xargs wc -l
cat grep_list |xargs -I{} grep {} filename
grep -rl '192.168.1.111' /etc | xargs sed -i 's/192.168.1.111/192.168.2.111/g'
find .
find . -type f
find . -type d
find . -name '*.php' -exec sed -i 's/www/w/g' {} \;
# if there are no subdirectory
replace "www" "w" -- *
# a space before *
find mso*/ -name M* -printf "%f\n"
find / -type f -size +4G
find . -name "*.mso" -size -74c -delete
# M for MB, etc
find . -type f -empty
# to further delete all the empty files
find . -type f -empty -delete
find . -type f | wc -l
# if/else condition for string matching
if [[ "$c" == "read" ]]; then outputdir="seq"; else outputdir="write" ; fi
# Test if myfile contains the string 'test':
if grep -q hello myfile; then echo -e "file contains the string!" ; fi
# Test if mydir is a directory, change to it and do other stuff:
if cd mydir; then
echo 'some content' >myfile
else
echo >&2 "Fatal error. This script requires mydir."
fi
# if variable is null
if [ -z "$myvariable" ]; then echo -e "variable is null!" ; fi
# -z: true if the length of "STRING" is zero.
# Using the test command (same as []) to test if the length of the variable is nonzero
test -n "$myvariable" && echo myvariable is "$myvariable" || echo myvariable is not set
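A small sketch of the `test -n` pattern above, run once with the variable set and once after unsetting it (the variable name is made up):

```shell
myvariable="hello"
test -n "$myvariable" && echo "set to: $myvariable" || echo "not set"
# set to: hello
unset myvariable
test -n "$myvariable" && echo "set to: $myvariable" || echo "not set"
# not set
```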
# Test if file exist
if [ -e 'filename' ]
then
echo -e "file exists!"
fi
# Test if file exist but also including symbolic links:
if [ -e myfile ] || [ -L myfile ]
then
echo -e "file exists!"
fi
# Test if the value of x is greater or equal than 5
if [ "$x" -ge 5 ]; then echo -e "greater or equal than 5!" ; fi
# Test if the value of x is greater or equal than 5, in bash/ksh/zsh:
if ((x >= 5)); then echo -e "greater or equal than 5!" ; fi
# Use (( )) for arithmetic operation
if ((j==u+2)); then echo -e "j==u+2!!" ; fi
# Use [[ ]] for comparison
if [[ $age -gt 21 ]]; then echo -e "forever 21!!" ; fi
# Echo the file name under the current directory
for i in $(ls); do echo file $i; done
#or
for i in *; do echo file $i; done
# Make directories listed in a file (e.g. myfile)
for dir in $(<myfile); do mkdir $dir; done
# Press any key to continue each loop
for i in $(cat tpc_stats_0925.log |grep failed|grep -o '\query\w\{1,2\}'); do cat ${i}.log; read -rsp $'Press any key to continue...\n' -n1 key; done
# Print a file line by line when a key is pressed,
oifs="$IFS"; IFS=$'\n'; for line in $(cat myfile); do ...; done
while read -r line; do ...; done <myfile
#If only one word a line, simply
for line in $(cat myfile); do echo $line; read -n1; done
#Loop through an array
for i in "${arrayName[@]}"; do echo $i; done
# Column subtraction of a file (e.g. a 3 columns file)
while read a b c; do echo $(($c-$b)); done < <(head filename)
#there is a space between the two '<'s
# Sum up column subtraction
i=0; while read a b c; do ((i+=$c-$b)); echo $i; done < <(head filename)
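A worked sketch of the column subtraction loop above, using a hypothetical 3-column sample file:

```shell
# create a small sample file: columns are id, start, end
printf '1 2 5\n2 3 9\n' > /tmp/cols.txt
# print end - start for each line
while read a b c; do echo $((c - b)); done < /tmp/cols.txt
# 3
# 6
```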
# Keep checking a running process (e.g. perl) and start another new process (e.g. python) immediately after it. (BETTER use the wait command! Ctrl+F 'wait')
while [[ $(pidof perl) ]]; do echo f; sleep 10; done && python timetorunpython.py
read type;
case $type in
'0')
echo 'how'
;;
'1')
echo 'are'
;;
'2')
echo 'you'
;;
esac
time echo hi
sleep 10
date +%F
# 2020-07-19
# or
date +'%d-%b-%Y-%H:%M:%S'
# 10-Apr-2020-21:54:40
# Returns the current time with nanoseconds.
date +"%T.%N"
# 11:42:18.664217000
# Get the seconds since epoch (Jan 1 1970) for a given date (e.g Mar 16 2021)
date -d "Mar 16 2021" +%s
# 1615852800
# or
date -d "Tue Mar 16 00:00:00 UTC 2021" +%s
# 1615852800
# Convert the number of seconds since epoch back to date
date --date @1615852800
# Tue Mar 16 00:00:00 UTC 2021
# print current date first (for the following example)
date +"%F %H:%M:%S"
# 2023-03-11 16:17:09
# print the time that is 1 day ago
date -d"1 day ago" +"%F %H:%M:%S"
# 2023-03-10 16:17:09
# print the time that is 7 days ago
date -d"7 days ago" +"%F %H:%M:%S"
# 2023-03-04 16:17:09
# print the time that is a week ago
date -d"1 week ago" +"%F %H:%M:%S"
# 2023-03-04 16:17:09
# add 1 day to date
date -d"-1 day ago" +"%F %H:%M:%S"
# 2023-03-12 16:17:09
sleep $(( (RANDOM % 5) + 1 ))
TMOUT=10
#once you set this variable, the logout timer starts running!
#This will run the command 'sleep 10' for only 1 second.
timeout 1 sleep 10
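When `timeout` has to kill the command, it exits with status 124, which you can check:

```shell
timeout 1 sleep 10
echo $?
# 124
```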
at now + 1min #time-units can be minutes, hours, days, or weeks
warning: commands will be executed using /bin/sh
at> echo hihigithub >~/itworks
at> <EOT> # press Ctrl + D to exit
job 1 at Wed Apr 18 11:16:00 2018
curl https://raw.githubusercontent.com/onceupon/Bash-Oneliner/master/README.md | pandoc -f markdown -t man | man -l -
# or w3m (a text based web browser and pager)
curl https://raw.githubusercontent.com/onceupon/Bash-Oneliner/master/README.md | pandoc | w3m -T text/html
# or using emacs (in emac text editor)
emacs --eval '(org-mode)' --insert <(curl https://raw.githubusercontent.com/onceupon/Bash-Oneliner/master/README.md | pandoc -t org)
# or using emacs (on terminal, exit using Ctrl + x then Ctrl + c)
emacs -nw --eval '(org-mode)' --insert <(curl https://raw.githubusercontent.com/onceupon/Bash-Oneliner/master/README.md | pandoc -t org)
wget -r -l1 -H -t1 -nd -N -np -A mp3 -e robots=off http://example.com
# -r: recursive and download all links on page
# -l1: only one level link
# -H: span host, visit other hosts
# -t1: numbers of retries
# -nd: don't make new directories, download to here
# -N: turn on timestamp
# -np: no parent
# -A: type (separate by ,)
# -e robots=off: ignore the robots.txt file, which would otherwise stop wget from crawling the site; sorry example.com
Upload a file to the web and download it again (https://transfer.sh/)
# Upload a file (e.g. filename.txt):
curl --upload-file ./filename.txt https://transfer.sh/filename.txt
# the above command will return a URL, e.g: https://transfer.sh/tG8rM/filename.txt
# Next you can download it by:
curl https://transfer.sh/tG8rM/filename.txt -o filename.txt
data=file.txt
url=http://www.example.com/$data
if [ ! -s $data ];then
echo "downloading test data..."
wget $url
fi
wget -O filename "http://example.com"
wget -P /path/to/directory "http://example.com"
curl -L google.com
sudo apt install pwgen
pwgen 13 5
#sahcahS9dah4a xieXaiJaey7xa UuMeo0ma7eic9 Ahpah9see3zai acerae7Huigh7
shuf -n 100 filename
for i in a b c d e; do echo $i; done | shuf
Echo a series of random numbers within a range (e.g. shuffle the numbers 0-100, then pick 15 of them randomly)
shuf -i 0-100 -n 15
echo $RANDOM
echo $((RANDOM % 10))
echo $(((RANDOM %10)+1))
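The same modulo trick generalizes to an arbitrary range [min, max] (the bounds here are made up):

```shell
min=20; max=30
echo $(( RANDOM % (max - min + 1) + min ))
# prints a random integer between 20 and 30 inclusive
```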
X11 GUI applications! Here are some GUI tools for you if you get bored by the text-only environment.
ssh -X user_name@ip_address
# or setting through xhost
# --> Install the following for Centos:
# xorg-x11-xauth
# xorg-x11-fonts-*
# xorg-x11-utils
xclock
xeyes
xcowsay
1. ssh -X user_name@ip_address
2. apt-get install eog
3. eog picture.png
1. ssh -X user_name@ip_address
2. sudo apt install mpv
3. mpv myvideo.mp4
1. ssh -X user_name@ip_address
2. apt-get install gedit
3. gedit filename.txt
1. ssh -X user_name@ip_address
2. apt-get install evince
3. evince filename.pdf
1. ssh -X user_name@ip_address
2. apt-get install libxss1 libappindicator1 libindicator7
3. wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
4. sudo apt-get install -f
5. dpkg -i google-chrome*.deb
6. google-chrome
# List yum history (e.g install, update)
sudo yum history
# Example output:
# Loaded plugins: extras_suggestions, langpacks, priorities, update-motd
# ID | Login user | Date and time | Action(s) | Altered
# -------------------------------------------------------------------------------
# 11 | ... <myuser> | 2020-04-10 10:57 | Install | 1 P<
# 10 | ... <myuser> | 2020-03-27 05:21 | Install | 1 >P
# 9 | ... <myuser> | 2020-03-05 11:57 | I, U | 56 *<
# ...
# Show more details of a yum history (e.g. history #11)
sudo yum history info 11
# Undo a yum history (e.g. history #11, this will uninstall some packages)
sudo yum history undo 11
# To audit a directory recursively for changes (e.g. myproject)
auditctl -w /path/to/myproject/ -p wa
# If you delete a file name "VIPfile", the deletion is recorded in /var/log/audit/audit.log
sudo grep VIPfile /var/log/audit/audit.log
#type=PATH msg=audit(1581417313.678:113): item=1 name="VIPfile" inode=300115 dev=ca:01 mode=0100664 ouid=1000 ogid=1000 rdev=00:00 nametype=DELETE cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
sestatus
# SELinux status: enabled
# SELinuxfs mount: /sys/fs/selinux
# SELinux root directory: /etc/selinux
# Loaded policy name: targeted
# Current mode: enforcing
# Mode from config file: enforcing
# Policy MLS status: enabled
# Policy deny_unknown status: allowed
# Max kernel policy version: 31
ssh-keygen -y -f ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub
ssh-copy-id <user_name>@<server_IP>
# then you need to enter the password
# and next time you won't need to enter password when ssh to that user
Copy default public key to remote user using the required private key (e.g. use your mykey.pem key to copy your id_rsa.pub to the remote user)
# previously, you needed mykey.pem to ssh to the remote user.
ssh-copy-id -i ~/.ssh/id_rsa.pub -o "IdentityFile ~/Downloads/mykey.pem" <user_name>@<server_IP>
# now you don't need to use key to ssh to that user.
# To bring your key with you when ssh to serverA, then ssh to serverB from serverA using the key.
ssh-agent
ssh-add /path/to/mykey.pem
ssh -A <username>@<IP_of_serverA>
# Next you can ssh to serverB
ssh <username>@<IP_of_serverB>
# add the following to ~/.ssh/config
Host myserver
User myuser
IdentityFile ~/path/to/mykey.pem
# Next, you could run "ssh myserver" instead of "ssh -i ~/path/to/mykey.pem myuser@myserver"
journalctl -u <service_name> -f
# A zombie is already dead, so you cannot kill it. You can eliminate the zombie by killing its parent.
# First, find PID of the zombie
ps aux| grep 'Z'
# Next find the PID of zombie's parent
pstree -p -s <zombie_PID>
# Then you can kill its parent and you will notice the zombie is gone.
sudo kill -9 <parent_PID>
free -c 10 -mhs 1
# print 10 times, at 1 second interval
# refresh every second
iostat -x -t 1
iftop -i enp175s0f0
uptime
if [ "$EUID" -ne 0 ]; then
echo "Please run this as root"
exit 1
fi
chsh -s /bin/sh bonnie
# /etc/shells: valid login shells
chroot /home/newroot /bin/bash
# To exit chroot
exit
stat filename.txt
ps aux
pstree
cat /proc/sys/kernel/pid_max
dmesg
ip addr show
# or
ifconfig
runlevel
# or
who -r
init 5
#or
telinit 5
chkconfig --list
# update-rc.d equivalent to chkconfig in ubuntu
cat /etc/*-release
man hier
# e.g. check the status of cron service
systemctl status cron.service
# e.g. stop cron service
systemctl stop cron.service
jobs -l
# nice value is adjustable from -20 (most favorable) to +19
# the nicer the application, the lower the priority
# default niceness is 0; a plain `nice command` runs it at niceness 10
nice -10 ./test.sh # run with niceness 10 (use 'nice -n -10' for niceness -10)
export PATH=$PATH:~/path/you/want
chmod +x filename
# you can now ./filename to execute it
uname -a
# Check system hardware-platform (x86-64)
uname -i
links www.google.com
useradd username
passwd username
1. vi ~/.bash_profile
2. export PS1='\u@\h:\w\$'
# $PS1 is a variable that defines the makeup and style of the command prompt
# You could use emojis and add timestamp to every prompt using the following value:
# export PS1="\t@🦁:\w\$ "
3. source ~/.bash_profile
1. vi ~/.bash_profile
2. alias pd="pwd" # no more need to type that 'w'!
3. source ~/.bash_profile
alias -p
unalias ls
# print all shell options
shopt
# to unset (or stop) alias
shopt -u expand_aliases
# to set (or start) alias
shopt -s expand_aliases
echo $PATH
# list of directories separated by a colon
env
unset MYVAR
lsblk
partprobe
ln -s /path/to/program /home/usr/bin
# must be the whole path to the program
hexdump -C filename.class
rsh node_name
netstat -tulpn
readlink filename
type python
# python is /usr/bin/python
# There are 5 different types, check using the 'type -t' flag
# 1. alias (shell alias)
# 2. function (shell function, type will also print the function body)
# 3. builtin (shell builtin)
# 4. file (disk file)
# 5. keyword (shell reserved word)
# You can also use `which`
which python
# /usr/bin/python
declare -F
du -hs .
# or
du -sb
cp -rp /path/to/directory
pushd .
# then pop
popd
#or use dirs to display the list of currently remembered directories.
dirs -l
df -h
# or
du -h
#or
du -sk /var/log/* |sort -rn |head -10
df -i
# Filesystem Inodes IUsed IFree IUse% Mounted on
# devtmpfs 492652 304 492348 1% /dev
# tmpfs 497233 2 497231 1% /dev/shm
# tmpfs 497233 439 496794 1% /run
# tmpfs 497233 16 497217 1% /sys/fs/cgroup
# /dev/nvme0n1p1 5037976 370882 4667094 8% /
# tmpfs 497233 1 497232 1% /run/user/1000
df -TH
runlevel
init 3
#or
telinit 3
1. edit /etc/init/rc-sysinit.conf
2. env DEFAULT_RUNLEVEL=2
su
su somebody
repquota -auvs
getent database_name
# (e.g. the 'passwd' database)
getent passwd
# list all user accounts (all local and LDAP)
# (e.g. fetch the list of group accounts)
getent group
# store in database 'group'
chown user_name filename
chown -R user_name /path/to/directory/
# chown user:group filename
# e.g. Mount /dev/sdb to /home/test
mount /dev/sdb /home/test
# e.g. Unmount /home/test
umount /home/test
mount
# or
df
cat /etc/passwd
getent passwd | awk -F: '{print $1}'
compgen -u
compgen -g
groups username
id username
# variable for UID
echo $UID
if [ $(id -u) -ne 0 ];then
echo "You are not root!"
exit;
fi
# 'id -u' outputs 0 if the user is root
more /proc/cpuinfo
# or
lscpu
setquota username 120586240 125829120 0 0 /home
quota -v username
ldconfig -p
ldd /bin/ls
lastlog
last reboot
joe /etc/environment
# edit this file
ulimit -u
nproc --all
1. top
2. press '1'
jobs -l
service --status-all
shutdown -r +5 "Server will restart in 5 minutes. Please save your work."
shutdown -c
wall -n hihi
pkill -U user_name
kill -9 $(ps aux | grep 'program_name' | awk '{print $2}')
# You might have to install the following:
apt-get install libglib2.0-bin;
# or
yum install dconf dconf-editor;
yum install dbus dbus-x11;
# Check list
gsettings list-recursively
# Change some settings
gsettings set org.gnome.gedit.preferences.editor highlight-current-line true
gsettings set org.gnome.gedit.preferences.editor scheme 'cobalt'
gsettings set org.gnome.gedit.preferences.editor use-default-font false
gsettings set org.gnome.gedit.preferences.editor editor-font 'Cantarell Regular 12'
Add a user to a group (e.g. add user 'nice' to the group 'docker', so that they can run docker without sudo)
sudo gpasswd -a nice docker
1. pip install --user package_name
2. You might need to export ~/.local/bin/ to PATH: export PATH=$PATH:~/.local/bin/
1. uname -a #check current kernel, which should NOT be removed
2. sudo apt-get purge linux-image-X.X.X-X-generic #replace old version
sudo hostname your-new-name
# if not working, do also:
hostnamectl set-hostname your-new-hostname
# then check with:
hostnamectl
# Or check /etc/hostname
# If still not working..., edit:
/etc/sysconfig/network
/etc/sysconfig/network-scripts/ifcfg-ensxxx
#add HOSTNAME="your-new-hostname"
apt list --installed
# or on Red Hat:
yum list installed
apt list --upgradeable
# or
sudo yum check-update
sudo yum update --exclude=php*
lsof /mnt/dir
killall pulseaudio
# then press Alt-F2 and type in pulseaudio
lsscsi
http://onceuponmine.blogspot.tw/2017/08/set-up-your-own-dns-server.html
http://onceuponmine.blogspot.tw/2017/07/create-your-first-simple-daemon.html
http://onceuponmine.blogspot.tw/2017/10/setting-up-msmtprc-and-use-your-gmail.html
Using telnet to test open ports, test if you can connect to a port (e.g 53) of a server (e.g 192.168.2.106)
telnet 192.168.2.106 53
ifconfig eth0 mtu 9000
pidof python
# or
ps aux|grep python
ps -p <PID>
#or
cat /proc/<PID>/status
cat /proc/<PID>/stack
cat /proc/<PID>/stat
# Start ntp:
ntpd
# Check ntp:
ntpq -p
sudo apt-get autoremove
sudo apt-get clean
sudo rm -rf ~/.cache/thumbnails/*
# Remove old kernel:
sudo dpkg --list 'linux-image*'
sudo apt-get remove linux-image-OLDER_VERSION
pvscan
lvextend -L +130G /dev/rhel/root -r
# Adding -r will grow filesystem after resizing the volume.
sudo dd if=~/path/to/isofile.iso of=/dev/sdc oflag=direct bs=1048576
# write to the whole device (e.g. /dev/sdc), not a partition like /dev/sdc1
sudo dpkg -l | grep <package_name>
sudo dpkg --purge <package_name>
ssh -f -L 9000:targetservername:8088 [email protected] -N
#-f: run in background; -L: Listen; -N: do nothing
#the 9000 of your computer is now connected to the 8088 port of the targetservername through 192.168.14.72
#so that you can see the content of targetservername:8088 by entering localhost:9000 from your browser.
#pidof
pidof sublime_text
#pgrep, you don't have to type the whole program name
pgrep sublim
#pgrep, echo 1 if process found, echo 0 if no such process
pgrep -q sublime_text && echo 1 || echo 0
#top, takes longer time
top|grep sublime_text
aio-stress - AIO benchmark.
bandwidth - memory bandwidth benchmark.
bonnie++ - hard drive and file system performance benchmark.
dbench - generate I/O workloads to either a filesystem or to a networked CIFS or NFS server.
dnsperf - benchmark for authoritative and recursing DNS servers.
filebench - model based file system workload generator.
fio - I/O benchmark.
fs_mark - synchronous/async file creation benchmark.
httperf - measure web server performance.
interbench - linux interactivity benchmark.
ioblazer - multi-platform storage stack micro-benchmark.
iozone - filesystem benchmark.
iperf3 - measure TCP/UDP/SCTP performance.
kcbench - kernel compile benchmark, compiles a kernel and measures the time it takes.
lmbench - Suite of simple, portable benchmarks.
netperf - measure network performance, test unidirectional throughput, and end-to-end latency.
netpipe - network protocol independent performance evaluator.
nfsometer - NFS performance framework.
nuttcp - measure network performance.
phoronix-test-suite - comprehensive automated testing and benchmarking platform.
seeker - portable disk seek benchmark.
siege - http load tester and benchmark.
sockperf - network benchmarking utility over socket API.
spew - measures I/O performance and/or generates I/O load.
stress - workload generator for POSIX systems.
sysbench - scriptable database and system performance benchmark.
tiobench - threaded IO benchmark.
unixbench - the original BYTE UNIX benchmark suite, provide a basic indicator of the performance of a Unix-like system.
wrk - HTTP benchmark.
# installation
# It collects data every 10 minutes and generates a report daily. The crontab file (/etc/cron.d/sysstat) is responsible for collecting and generating reports.
yum install sysstat
systemctl start sysstat
systemctl enable sysstat
# show CPU utilization 5 times every 2 seconds.
sar 2 5
# show memory utilization 5 times every 2 seconds.
sar -r 2 5
# show paging statistics 5 times every 2 seconds.
sar -B 2 5
# To generate all network statistic:
sar -n ALL
# reading SAR log file using -f
sar -f /var/log/sa/sa31|tail
journalctl --file ./log/journal/a90c18f62af546ccba02fa3734f00a04/system.journal --since "2020-02-11 00:00:00"
lastb
who
w
users
tail -f --pid=<PID> filename.txt
# replace <PID> with the process ID of the program.
systemctl list-unit-files|grep enabled
lshw -json >report.json
# Other options are: [ -html ] [ -short ] [ -xml ] [ -json ] [ -businfo ] [ -sanitize ] ,etc
sudo dmidecode -t memory
dmidecode -t 4
# Type Information
# 0 BIOS
# 1 System
# 2 Base Board
# 3 Chassis
# 4 Processor
# 5 Memory Controller
# 6 Memory Module
# 7 Cache
# 8 Port Connector
# 9 System Slots
# 11 OEM Strings
# 13 BIOS Language
# 15 System Event Log
# 16 Physical Memory Array
# 17 Memory Device
# 18 32-bit Memory Error
# 19 Memory Array Mapped Address
# 20 Memory Device Mapped Address
# 21 Built-in Pointing Device
# 22 Portable Battery
# 23 System Reset
# 24 Hardware Security
# 25 System Power Controls
# 26 Voltage Probe
# 27 Cooling Device
# 28 Temperature Probe
# 29 Electrical Current Probe
# 30 Out-of-band Remote Access
# 31 Boot Integrity Services
# 32 System Boot
# 34 Management Device
# 35 Management Device Component
# 36 Management Device Threshold Data
# 37 Memory Channel
# 38 IPMI Device
# 39 Power Supply
lsscsi|grep SEAGATE|wc -l
# or
sg_map -i -x|grep SEAGATE|wc -l
lsblk -f /dev/sdb
# or
sudo blkid /dev/sdb
uuidgen
lsblk -io KNAME,TYPE,MODEL,VENDOR,SIZE,ROTA
# where ROTA means rotational device / spinning hard disk (1 if true, 0 if false)
lspci
# List information about NIC
lspci | egrep -i --color 'network|ethernet'
lsusb
# Show the status of modules in the Linux Kernel
lsmod
# Add and remove modules from the Linux Kernel
modprobe
# or
# Remove a module
rmmod
# Insert a module
insmod
# Remotely finding out power status of the server
ipmitool -U <bmc_username> -P <bmc_password> -I lanplus -H <bmc_ip_address> power status
# Remotely switching on server
ipmitool -U <bmc_username> -P <bmc_password> -I lanplus -H <bmc_ip_address> power on
# Turn on panel identify light (default 15s)
ipmitool chassis identify 255
# Find out server sensor temperatures
ipmitool sensor | grep -i Temp
# Reset BMC
ipmitool bmc reset cold
# Print BMC network
ipmitool lan print 1
# Setting BMC network
ipmitool -I bmc lan set 1 ipaddr 192.168.0.55
ipmitool -I bmc lan set 1 netmask 255.255.255.0
ipmitool -I bmc lan set 1 defgw ipaddr 192.168.0.1
dig +short www.example.com
# or
host www.example.com
dig -t txt www.example.com
# or
host -t txt www.example.com
Send a ping with the TTL limited to 10 (TTL: Time-To-Live, the maximum number of hops a packet can travel across the Internet before it gets discarded.)
ping 8.8.8.8 -t 10
traceroute google.com
nc -vw5 google.com 80
# Connection to google.com 80 port [tcp/http] succeeded!
nc -vw5 google.com 22
# nc: connect to google.com port 22 (tcp) timed out: Operation now in progress
# nc: connect to google.com port 22 (tcp) failed: Network is unreachable
# From server A:
$ sudo nc -l 80
# then you can connect to the 80 port from another server (e.g. server B):
# e.g. telnet <server A IP address> 80
# then type something in server B
# and you will see the result in server A!
#notice that some companies might not like you using nmap
nmap -sT -O localhost
# check port 0-65535
nmap -p0-65535 localhost
# -Pn skips the host-discovery ping; a host that blocks ping would otherwise be reported as down and not scanned.
$ nmap google.com -Pn
# Example output:
# Starting Nmap 7.01 ( https://nmap.org ) at 2020-07-18 22:59 CST
# Nmap scan report for google.com (172.217.24.14)
# Host is up (0.013s latency).
# Other addresses for google.com (not scanned): 2404:6800:4008:802::200e
# rDNS record for 172.217.24.14: tsa01s07-in-f14.1e100.net
# Not shown: 998 filtered ports
# PORT STATE SERVICE
# 80/tcp open http
# 443/tcp open https
#
# Nmap done: 1 IP address (1 host up) scanned in 3.99 seconds
$ nmap -A -T4 scanme.nmap.org
# -A to enable OS and version detection, script scanning, and traceroute; -T4 for faster execution
whois google.com
openssl s_client -showcerts -connect www.example.com:443
ip a
ip r
Display the ARP cache (the ARP cache holds the MAC addresses of devices on the same network that you have communicated with)
ip n
ip address add 192.168.140.3/24 dev eno16777736
sudo vi /etc/sysconfig/network-scripts/ifcfg-enoxxx
# then edit the fields: BOOTPROTO, DEVICE, IPADDR, NETMASK, GATEWAY, DNS1, etc.
sudo nmcli c reload
sudo systemctl restart network.service
hostnamectl
hostnamectl set-hostname "mynode"
curl -I http://example.com/
# HTTP/1.1 200 OK
# Server: nginx
# Date: Thu, 02 Jan 2020 07:01:07 GMT
# Content-Type: text/html
# Content-Length: 1119
# Connection: keep-alive
# Vary: Accept-Encoding
# Last-Modified: Mon, 09 Sep 2019 10:37:49 GMT
# ETag: "xxxxxx"
# Accept-Ranges: bytes
# Vary: Accept-Encoding
curl -s -o /dev/null -w "%{http_code}" https://www.google.com
curl -s -o /dev/null -w "%{redirect_url}" https://bit.ly/34EFwWC
# server side:
$ sudo iperf -s -p 80
# client side:
iperf -c <server IP address> --parallel 2 -i 1 -t 2 -p 80
sudo iptables -A INPUT -p tcp --dport 80 -j DROP
# only block connection from an IP address
sudo iptables -A INPUT -s <IP> -p tcp --dport 80 -j DROP
# If file is not specified, the file /usr/share/dict/words is used.
look phy|head -n 10
# phycic
# Phyciodes
# phycite
# Phycitidae
# phycitol
# phyco-
# phycochrom
# phycochromaceae
# phycochromaceous
# phycochrome
printf 'hello world\n%.0s' {1..5}
username=`echo -n "bashoneliner"`
tee <fileA fileB fileC fileD >/dev/null
tr -dc '[:print:]' < filename
tr --delete '\n' <input.txt >output.txt
tr '\n' ' ' <filename
tr a-z A-Z
echo 'something' |tr a-z a
# aaaaaaaaa
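A few more quick tr examples you can verify directly (the input strings are just placeholders):

```shell
# translate lowercase to uppercase
echo 'hello world' | tr a-z A-Z        # HELLO WORLD
# delete all digits
echo 'abc123def' | tr -d '0-9'         # abcdef
# squeeze runs of spaces down to one
echo 'a    b' | tr -s ' '              # a b
```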
diff fileA fileB
# a: added; d:delete; c:changed
# or
sdiff fileA fileB
# side-to-side merge of file differences
diff fileA fileB --strip-trailing-cr
# having two sorted and uniqed files (for example after running `$ sort -uo fileA fileA` and same for fileB):
# ------
# fileA:
# ------
# joey
# kitten
# piglet
# puppy
# ------
# fileB:
# ------
# calf
# chick
# joey
# puppy
#
# Find lines in both files
comm -12 fileA fileB
# joey
# puppy
#
# Find lines in fileB that are NOT in fileA
comm -13 fileA fileB
# calf
# chick
#
# Find lines in fileA that are NOT in fileB
comm -23 fileA fileB
# kitten
# piglet
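The whole comm example above can be reproduced in a temp directory (remember that comm expects sorted input):

```shell
tmp=$(mktemp -d)
printf 'joey\nkitten\npiglet\npuppy\n' > "$tmp/fileA"   # already sorted
printf 'calf\nchick\njoey\npuppy\n'    > "$tmp/fileB"   # already sorted
comm -12 "$tmp/fileA" "$tmp/fileB"   # joey, puppy
comm -23 "$tmp/fileA" "$tmp/fileB"   # kitten, piglet
rm -r "$tmp"
```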
nl fileA
#or
nl -nrz fileA
# add leading zeros
#or
nl -w1 -s ' '
# making it simple, blank separate
Join two files field by field with tab (by default, join matches on the first column of both files, and the separator is blank/whitespace)
# fileA and fileB must be sorted on the join field.
join -t $'\t' fileA fileB
# note: $'\t' produces a real tab character; a plain '\t' would be treated as two characters
# Join using specified field (e.g. column 3 of fileA and column 5 of fileB)
join -1 3 -2 5 fileA fileB
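A tiny runnable sketch of the tab-separated join, with made-up files:

```shell
tmp=$(mktemp -d)
printf '1\talice\n2\tbob\n' > "$tmp/fileA"
printf '1\t30\n2\t25\n'     > "$tmp/fileB"
# join on column 1; $'\t' is a real tab character
join -t $'\t' "$tmp/fileA" "$tmp/fileB"
# 1  alice  30   (tab-separated)
# 2  bob    25
rm -r "$tmp"
```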
paste fileA fileB fileC
# default tab separate
# e.g.
# AAAA
# BBBB
# CCCC
# DDDD
cat filename|paste - -
# AAAABBBB
# CCCCDDDD
cat filename|paste - - - -
# AAAABBBBCCCCDDDD
cat file.fastq | paste - - - - | sed 's/^@/>/g'| cut -f1-2 | tr '\t' '\n' >file.fa
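The fastq-to-fasta pipeline above can be tried on a minimal record (the read name and sequence here are invented): paste collapses the 4-line record onto one line, sed swaps the fastq '@' for the fasta '>', cut keeps only name and sequence, and tr restores the line break.

```shell
printf '@read1\nACGT\n+\nIIII\n' \
  | paste - - - - \
  | sed 's/^@/>/g' \
  | cut -f1-2 \
  | tr '\t' '\n'
# >read1
# ACGT
```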
echo 12345| rev
seq 10
i=`wc -l filename|cut -d ' ' -f1`; cat filename| echo "scale=2;(`paste -sd+`)/"$i|bc
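The one-liner works because the command substitution inside the quotes inherits the pipe's stdin; a more readable two-step version of the same mean computation (the numbers are sample data):

```shell
nums=$(mktemp)
printf '1\n2\n3\n4\n' > "$nums"
sum=$(paste -sd+ "$nums" | bc)   # builds "1+2+3+4" and evaluates it -> 10
n=$(wc -l < "$nums")
echo "scale=2; $sum/$n" | bc     # 2.50
rm "$nums"
```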
echo {1,2}{1,2}
# 11 12 21 22
set='{A,T,C,G}'
group=5
for ((i=0; i<group; i++)); do
repetition=$set$repetition; done
bash -c "echo $repetition"
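The loop above just builds the literal string '{A,T,C,G}' repeated group times, then lets a fresh bash expand it. The same idea for 2-mers, using eval instead of bash -c (the variable name is arbitrary):

```shell
alphabet='{A,T,C,G}'
eval "echo $alphabet$alphabet"   # brace expansion happens after eval re-parses the string
# AA AT AC AG TA TT TC TG CA CT CC CG GA GT GC GG
```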
foo=$(<test1)
echo ${#foo}
echo -e ' \t '
# Split by line (e.g. 1000 lines/smallfile)
split -d -l 1000 largefile.txt
# Split by byte without breaking lines across files
split -C 10 largefile.txt
#1. Create a big file
dd if=/dev/zero of=bigfile bs=1 count=1000000
#2. Split the big file to 100000 10-bytes files
split -b 10 -a 10 bigfile
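Since split names the chunks in lexicographic order, `cat part_*` restores the original file; a round-trip check (filenames are placeholders):

```shell
tmp=$(mktemp -d) && cd "$tmp"
seq 1 100 > big.txt
split -d -l 30 big.txt part_      # creates part_00 .. part_03
cat part_* > rebuilt.txt          # glob order = original order
cmp big.txt rebuilt.txt && echo 'round trip ok'
```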
rename 's/ABC//' *.gz
basename filename.gz .gz
zcat filename.gz> $(basename filename.gz .gz).unpacked
rename s/$/.txt/ *
# You can use rename -n s/$/.txt/ * to check the result first, it will only print sth like this:
# rename(a, a.txt)
# rename(b, b.txt)
# rename(c, c.txt)
tr -s '\t' < filename
echo -e 'text here \c'
head -c 50 file
cat file|rev | cut -d/ -f1 | rev
((var++))
# or
var=$((var+1))
cat filename|rev|cut -f1|rev
cat >myfile
let me add sth here
# exit with ctrl+d
# or using tee
tee myfile
let me add sth else here
# exit with ctrl+d
cat >>myfile
let me add sth here
# exit with ctrl+d
# or using tee
tee -a myfile
let me add sth else here
# exit with ctrl+d
>filename
echo 'hihi' >>filename
#install the useful jq package
#sudo apt-get install jq
#e.g. to get all the values of the 'url' key, simply pipe the json to the following jq command (you can use .[]. to select inner json, i.e. jq '.[].url')
cat file.json | jq '.url'
D2B=({0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1})
echo -e ${D2B[5]}
#00000101
echo -e ${D2B[255]}
#11111111
echo "00110010101110001101" | fold -w4
# 0011
# 0010
# 1011
# 1000
# 1101
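The D2B lookup table above is built entirely by brace expansion, so index N simply holds N in 8-bit binary; combining it with fold groups the bits into nibbles:

```shell
D2B=({0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1}{0..1})
echo "${D2B[200]}"              # 11001000  (128+64+8 = 200)
echo "${D2B[200]}" | fold -w4
# 1100
# 1000
```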
sort -k3,3 -s
cat file.txt|rev|column -t|rev
echo 'hihihihi' | tee outputfile.txt
# use '-a' with tee to append to file.
cat -v filename
expand filename
unexpand filename
od filename
tac filename
while read a b; do yes $b |head -n $a ; done <test.txt
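Given a file whose lines are "<count> <word>", the loop above prints each word count times; a self-contained run with invented data:

```shell
tmp=$(mktemp)
printf '2 hi\n3 ho\n' > "$tmp"
while read -r a b; do yes "$b" | head -n "$a"; done < "$tmp"
# hi hi ho ho ho  (one word per line)
rm "$tmp"
```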
identify myimage.png
#myimage.png PNG 1049x747 1049x747+0+0 8-bit sRGB 1.006MB 0.000u 0:00.000
Bash auto-complete (e.g. show options "now tomorrow never" when you press 'tab' after typing "dothis")
complete -W "now tomorrow never" dothis
# ~$ dothis
# never now tomorrow
# press 'tab' again to auto-complete after typing 'n' or 't'
# print the current month, today will be highlighted.
cal
# October 2019
# Su Mo Tu We Th Fr Sa
# 1 2 3 4 5
# 6 7 8 9 10 11 12
# 13 14 15 16 17 18 19
# 20 21 22 23 24 25 26
# 27 28 29 30 31
# only display November
cal -m 11
openssl md5 -binary /path/to/file| base64
# NWbeOpeQbtuY0ATWuUeumw==
export LC_ALL=C
# to revert:
unset LC_ALL
echo test|base64
#dGVzdAo=
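base64 is an encoding, not encryption, so it round-trips losslessly with -d. Note that echo appends a newline, which is why 'test' above encodes to dGVzdAo=; printf avoids the trailing newline:

```shell
enc=$(printf 'hello' | base64)   # aGVsbG8=
printf '%s' "$enc" | base64 -d   # hello
```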
dirname `pwd`
zmore filename
# or
zless filename
some_commands &>log &
# or
some_commands 2>log &
# or
some_commands 2>&1| tee logfile
# or
some_commands |& tee logfile
# or
some_commands 2>&1 >>outfile
#0: standard input; 1: standard output; 2: standard error
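A quick way to see the descriptors in action; demo is a throwaway function defined here, not a real command:

```shell
demo() { echo 'to stdout'; echo 'to stderr' >&2; }
demo 2>/dev/null         # keeps only stdout
demo 1>/dev/null         # keeps only stderr
demo > all.log 2>&1      # both streams into all.log
```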
# run sequentially
(sleep 2; sleep 3) &
# run parallelly
sleep 2 & sleep 3 &
# e.g. Run myscript.sh even when log out.
nohup bash myscript.sh
echo 'heres the content'| mail -a /path/to/attach_file.txt -s 'mail.subject' [email protected]
# use -a flag to set send from (-a "From: [email protected]")
xls2csv filename
speaker-test -t sine -f 1000 -l1
(speaker-test -t sine -f 1000) & pid=$!;sleep 0.1s;kill -9 $pid
history -w
vi ~/.bash_history
history -r
#or
history -d [line_number]
# list the 5 previous commands (similar to `history |tail -n 5` but won't print the history command itself)
fc -l -5
Ctrl+U
# or
Ctrl+C
# or
Alt+Shift+#
# to save a command to history without executing it
# addmetohistory
# just add a "#" in front~~
head !$
clear
# or simply Ctrl+l
rsync -av filename filename.bak
rsync -av directory directory.bak
rsync -av --ignore-existing directory/ directory.bak
# skip files that are newer on the receiver (i prefer this one!)
rsync -av --update directory directory.bak
rsync -av directory user@ip_address:/path/to/directory.bak
cd $(mktemp -d)
# for example, this will create a temporary directory "/tmp/tmp.TivmPLUXFT"
mkdir -p project/{lib/ext,bin,src,doc/{html,info,pdf},demo/stat}
# -p: make parent directory
# this will create:
# project/
# project/bin/
# project/demo/
# project/demo/stat/
# project/doc/
# project/doc/html/
# project/doc/info/
# project/doc/pdf/
# project/lib/
# project/lib/ext/
# project/src/
#
# project/
# ├── bin
# ├── demo
# │ └── stat
# ├── doc
# │ ├── html
# │ ├── info
# │ └── pdf
# ├── lib
# │ └── ext
# └── src
cd tmp/ && tar xvf ~/a.tar
cd tmp/a/b/c ||mkdir -p tmp/a/b/c
cd tmp/a/b/c \
> || \
>mkdir -p tmp/a/b/c
file /tmp/
# tmp/: directory
#!/bin/bash
file=${1#*.}
# remove string before a "."
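The related prefix/suffix expansions, shown with a made-up filename:

```shell
path='archive.tar.gz'
echo "${path#*.}"    # tar.gz       (remove shortest prefix match)
echo "${path##*.}"   # gz           (remove longest prefix match)
echo "${path%.*}"    # archive.tar  (remove shortest suffix match)
echo "${path%%.*}"   # archive      (remove longest suffix match)
```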
python -m SimpleHTTPServer
# or when using python3:
python3 -m http.server
read input
echo $input
declare -a array=()
# or
declare array=()
# or associative array
declare -A array=()
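A short sketch of indexed vs associative arrays in use (keys and values are invented):

```shell
declare -a list=(apple banana)
list+=(cherry)                        # append an element
echo "${list[1]}"                     # banana
echo "${#list[@]}"                    # 3

declare -A age=([alice]=30 [bob]=25)  # associative: string keys
echo "${age[bob]}"                    # 25
echo "${!age[@]}"                     # keys (order not guaranteed)
```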
scp -r directoryname user@ip:/path/to/send
# Don't try this at home!
# It is a function that calls itself twice every call until you run out of system resources.
# A '# ' is added in front for safety reasons; remove it only when you seriously intend to test it.
# :(){:|:&};:
!$
echo $?
unxz filename.tar.xz
# then
tar -xf filename.tar
tar xvfj file.tar.bz2
unxz file.tar.xz
tar xopf file.tar
tar xvf filename.tar.gz -C /path/to/directory
# First cd to the directory, then run:
zip -r -D ../myzipfile .
# you will see the myzipfile.zip in the parent directory (cd ..)
# 'y':
yes
# or 'n':
yes n
# or 'anything':
yes anything
# pipe yes to other command
yes | rm -r large_directory
fallocate -l 10G 10Gigfile
dd if=/dev/zero of=/dev/shm/200m bs=1024k count=200
# or
dd if=/dev/zero of=/dev/shm/200m bs=1M count=200
# Standard output:
# 200+0 records in
# 200+0 records out
# 209715200 bytes (210 MB) copied, 0.0955679 s, 2.2 GB/s
watch -n 1 wc -l filename
# These options can make your code safer but, depending on how your pipeline is written, it might be too aggressive
# or it might not catch the errors that you are interested in
# for reference see https://gist.github.com/mohanpedala/1e2ff5661761d3abd0385e8223e16425
# and https://mywiki.wooledge.org/BashPitfalls#set_-euo_pipefail
set -o errexit # exit immediately if a pipeline returns a non-zero status
set -o errtrace # trap ERR from shell functions, command substitutions, and commands from subshell
set -o nounset # treat unset variables as an error
set -o pipefail # pipe will exit with last non-zero status, if applicable
set -Eue -o pipefail # shorthand for above (pipefail has no short option)
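The pipefail difference is easy to demonstrate in a child shell: without it, a failing command earlier in the pipeline is masked by a succeeding last command, so errexit never fires.

```shell
bash -c 'set -e; false | true; echo reached'
# reached   (pipeline status is that of "true", so errexit ignores the failure)
bash -c 'set -eo pipefail; false | true; echo reached'
# (no output, exit status 1: pipefail propagates the failure of "false")
```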
set -x; echo `expr 10 + 20 `
# or
set -o xtrace; echo `expr 10 + 20 `
# to turn it off..
set +x
fortune
htop
read -rsp $'Press any key to continue...\n' -n1 key
# download:
# https://github.com/harelba/q
# example:
q -d "," "select c3,c4,c5 from /path/to/file.txt where c3='foo' and c5='boo'"
# Create session and attach:
screen
# Create a screen and name it 'test'
screen -S test
# Create detached session foo:
screen -S foo -d -m
# Detach from session:
screen: ^a^d
# List sessions:
screen -ls
# Attach last session:
screen -r
# Attach to session foo:
screen -r foo
# Kill session foo:
screen -r foo -X quit
# Scroll:
# Hit your screen prefix combination (C-a / control+A), then hit Escape.
# Move up/down with the arrow keys (↑ and ↓).
# Redirect output of an already running process in Screen:
# (C-a / control+A), then hit 'H'
# Store screen output for Screen:
# Ctrl+A, Shift+H
# You will then find a screen.log file under current directory.
# Create session and attach:
tmux
# Attach to session foo:
tmux attach -t foo
# Detach from session:
^bd
# List sessions:
tmux ls
# Attach last session:
tmux attach
# Kill session foo:
tmux kill-session -t foo
# Create detached session foo:
tmux new -s foo -d
# Send command to all panes in tmux:
Ctrl-B
:setw synchronize-panes
# Some tmux pane control commands:
Ctrl-B
# Panes (splits), Press Ctrl+B, then input the following symbol:
# % horizontal split
# " vertical split
# o swap panes
# q show pane numbers
# x kill pane
# space - toggle between layouts
# Distribute Vertically (rows):
select-layout even-vertical
# or
Ctrl+b, Alt+2
# Distribute horizontally (columns):
select-layout even-horizontal
# or
Ctrl+b, Alt+1
# Scroll
Ctrl-b then [ then you can use your normal navigation keys to scroll around.
Press q to quit scroll mode.
sshpass -p mypassword ssh [email protected] "df -h"
wait %1
# or
wait $PID
wait ${!}
#wait ${!} to wait till the last background process ($! is the PID of the last background process)
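A self-contained illustration of waiting on a specific job vs all background jobs; wait also returns the exit status of the job it waited for:

```shell
sleep 0.2 & pid1=$!     # first background job, remember its PID
sleep 0.1 &             # second background job
wait "$pid1"            # block until the first job finishes
wait                    # block until every remaining background job finishes
echo 'all jobs done'
```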
sudo apt-get install poppler-utils
pdftotext example.pdf example.txt
ls -d */
ls -1
# or list all, do not ignore entries starting with .
ls -1a
script output.txt
# start using terminal
# to logout the screen session (stop saving the contents), type exit.
tree
# go to the directory you want to list, and type tree (sudo apt-get install tree)
# output:
# home/
# └── project
# ├── 1
# ├── 2
# ├── 3
# ├── 4
# └── 5
#
# set level directories deep (e.g. level 1)
tree -L 1
# home/
# └── project
# 1. install virtualenv.
sudo apt-get install virtualenv
# 2. Create a directory (name it .venv or whatever name you want) for your new shiny isolated environment.
virtualenv .venv
# 3. source virtual bin
source .venv/bin/activate
# 4. you can check if you are now inside the sandbox.
type pip
# 5. Now you can install your pip packages; here requirements.txt is simply a txt file listing the packages you want (e.g. tornado==4.5.3).
pip install -r requirements.txt
# 6. Exit virtual environment
deactivate
More coming!!