
Bash Snippets

In my opinion, the greatest single source of bash info is The Grymoire.
Also, though I'm calling this bash, it's more a collection of *nixy shortcuts and one-liners. I've been using zsh for a couple of years now.


Use tee to make changes to a read-only file as root from inside vim.

:w !sudo tee %

Where w=write, !=exec, %=current file

xrandr -s 0

-s for set, 0 special for 'reset'
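
If you need more than a reset, -q (query) lists the modes your display supports, and -s also accepts an explicit size; the resolution below is just an example:

xrandr -q
xrandr -s 1280x1024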

This copies all matched files somewhere.

Here, we recursively copy all files ending in .jpeg under the current directory into a destination folder (an example path below):

find ./ -name '*.jpeg' -exec cp {} /home/user/jpegs \;

./ - the location to search, '*.jpeg' - the name pattern (quoted so the shell doesn't expand it before find sees it), -exec cp {} - execute cp on each file found, then the destination folder, \; - find's argument terminator (an escaped semicolon)
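
If your cp is the GNU version, -t names the target directory up front, and ending -exec with + batches many files into each cp call instead of forking once per file (destination path is again just an example):

find ./ -name '*.jpeg' -exec cp -t /home/user/jpegs {} +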

Tar has lots of switches...

Consider compressed.tar.bz2:

tar xvjf xxxxx.tar.bz2

x - extract, v - verbose, j - deal w/ bz2, f - read from file
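
To peek inside without extracting, swap x for t (list); the filename is a placeholder:

tar tvjf xxxxx.tar.bz2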

awk -F':' '{print $1}' /etc/passwd

Like cat /etc/passwd, but cut down to show only the username; the syntax is much like that of cut.

-F':' - the field delimiter is a colon, '{print $1}' - the command to run, which here prints the 1st delimited field (the username)
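
The same pattern pulls out any field; for example, field 7 of /etc/passwd is the login shell, so this variation prints each user alongside their shell:

awk -F':' '{print $1, $7}' /etc/passwd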

Sometimes I forget what I bound where, and need to do a little back tracing.

Here, we send KILL to the process using 8080 for tcp:

fuser -k 8080/tcp

Without -k, fuser will just report on the process.
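
To see who owns the port before killing anything, -v gives a verbose report (user, pid, and command) instead:

fuser -v 8080/tcp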

Less confusion with dd! Watch dd's progress with pipe viewer (pv).

Here we run it under sudo, so that both sides of the pipe are executed as root:

sudo -- sh -c 'pv /path/to/disk.iso | dd of=/dev/disk2'

-- signifies the end of parameters to sudo, and -c tells sh to execute the passed command.

* If we were instead to run sudo pv /path/to/disk.iso | dd of=/dev/disk2, pv would be run as root, but dd would not, causing the command to fail if the calling user doesn't have write privileges to the destination.
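
If pv isn't installed, reasonably recent GNU dd can report progress on its own with status=progress (paths and device below are placeholders - double-check the of= target!):

sudo dd if=/path/to/disk.iso of=/dev/disk2 bs=4M status=progress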

$PATH is a colon-separated list of paths. Here we replace the colons with newlines:

echo $PATH | tr \: \\n

Here we pipe our PATH variable to tr. The backslashes keep the shell from touching the arguments: tr receives : and \n, and interprets \n as a newline character.

Use openssl's md5 on 512 bytes of urandom, then cut the output to get just the hash.

echo $(dd if=/dev/urandom bs=512 count=1 2>/dev/null) | openssl dgst -md5 | cut -d" " -f2

Makes a nice password!
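
If the goal is just a throwaway password, openssl can also emit random bytes directly; 16 here is an arbitrary byte count:

openssl rand -base64 16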


Sometimes apt cannot connect to the repositories over IPv6.  Force IPv4 with the following option:

-o Acquire::ForceIPv4=true

As in apt update -o Acquire::ForceIPv4=true

Look through files for instances of a pattern.  I find I use this often in a build environment to see what's been used where.

Here, we look for all instances of 'pattern' in the current directory (and subdirectories), printing the matching line and its line number:

grep -rn . -e 'pattern'

-r for recursive, -n to print the line number, . is the directory to search in, and -e specifies the next argument as the pattern (which can be a basic regex)
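
GNU grep can also narrow the search by filename with --include, which is handy in big source trees (the glob is just an example):

grep -rn --include='*.php' . -e 'pattern'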

Use grep to extract IPs using a regular expression.

Here, we extract from the auth log by reading it and piping to grep:

cat /var/log/auth.log | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}"

-E for extended regex, -o for show just the matched part
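
To see which addresses show up most often, the usual sort | uniq -c trick can be tacked on:

grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}" /var/log/auth.log | sort | uniq -c | sort -nr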

rsync rocks.  Here, we'll use it to 'synchronize' a remote folder with a local one, while excluding a couple files.  rsync can even pick up where it left off if interrupted, and it won't overwrite data unnecessarily!

Here's the general form of local -> remote:

rsync -avh --exclude 'path/to/exclude' --exclude 'path/to/exclude2' /path/to/source user@server:/path/to/remote/dest

-a for 'archive', which is rsync shorthand for "recursion" and "preserving almost everything", -v for verbose, -h for human-readable numbers.  --exclude excludes files from the transfer and is relative to the source path!

There are a million ways to use rsync, and lots of flags.  A couple of others I've found useful include -P for showing progress, -z to enable compression, --delete which provides a powerful destructive updating ability, and --dry-run to safely test the command.
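
For example, a cautious mirror-style sync might look like the following (paths are placeholders); the trailing slash on the source copies its contents rather than the directory itself, and dropping --dry-run performs the transfer for real:

rsync -avhPz --dry-run --delete /path/to/source/ user@server:/path/to/remote/dest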

Sometimes you want to copy something over the network, but don't need all the features of something like rsync.

Here, we use scp to copy a file from a remote host to our local machine:

scp <username>@<host>:/path/to/file /path/to/dest

scp uses ssh, and so allows the user to specify port and other options (almost) like ssh does.  To set port for instance, add -P port_num.
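
For instance, copying a whole directory from a host running ssh on a non-standard port might look like this (port and paths are examples):

scp -P 2222 -r <username>@<host>:/path/to/dir /path/to/dest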

-rwxrwxrwx isn't the easiest thing to read, let's be honest. Verify your assumptions by viewing permissions in octal with stat:

stat -c "%a %n" /path/to/dir

-c for format, %a for access rights in octal, %n for file name
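
The format string takes other fields too; %U and %G add the owning user and group, and a glob checks everything directly inside a directory (path is a placeholder):

stat -c "%a %U %G %n" /path/to/dir/*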

Nice programs often come with heavily marked up sample config files.  Sweet!  But what if I'm often changing the only uncommented directive on line 300+?  Extract live settings with grep:

grep -ve \# /etc/program/config.sample > /etc/program/config

Here -v inverts the match, and -e introduces the expression that follows.  We also need to escape (or quote) whatever the comment character/string is so the shell doesn't interpret it - hence the \.

config is now the same as config.sample with every line containing a # removed.
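
If the sample config has inline comments or you want blank lines gone too, a slightly stricter variant drops only lines whose first non-blank character is # and then squeezes out the empties (a sketch - adjust the comment character for your program):

grep -v '^[[:space:]]*#' /etc/program/config.sample | grep -v '^[[:space:]]*$' > /etc/program/config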

Sometimes a process w/ a window hangs.  Imagine a recursive shell command with a typo.

Get the offending process's pid via click, and kill in one command:

 kill -9 $(xprop _NET_WM_PID | cut -d' ' -f3)

Here -9 tells kill to send SIGKILL, which can't be caught/ignored by the target.

xprop reports X server window properties.  The _NET_WM_PID property contains the pid, and we cut the 3rd field separated by a space to get just the pid.

Finally, the $(...) bashism substitutes the retrieved pid into the kill command, resulting in kill -9 "pid of what you clicked"
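
If it happens to be installed (it usually ships with the standard X utilities), xkill offers a similar click-to-kill workflow, though it severs the client's X connection rather than sending a signal:

xkill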

Imagine you're curious about a program's resource usage as you modify its parameters... Repeatably watch a process using top and pgrep.

Easy "Enter, q, up" repeat...

top -p  $(pgrep i3status)

Here, $(pgrep i3status) substitutes the pid of i3status into the argument of top's -p (for pid) flag.

* Also, at least in GNU top, when running top, you can press o and then enter COMMAND=i3status for the same effect
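
If pgrep matches more than one process, top's -p wants a comma-separated list; pgrep's -d sets the output delimiter to handle that:

top -p $(pgrep -d, i3status)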

A quick and dirty LOC approximation based on file extension.

Replace *.php in the example with whatever will match the files you want to line count.

t=0;n=0;find dir-to-search/ -name "*.php" -exec wc -l {} \; > loc.TEMP;  while read l; do n=$(echo -n $l | cut -d" " -f1); t=$((t+n)); done < loc.TEMP; echo "Total Lines: $t"; rm loc.TEMP

This is pretty straightforward.  First, we make sure our vars are set to zero.

It makes use of find's -exec to run wc, and then sloppily extracts the number of lines with cut.  Then it performs some bash math with the $(()) construct, echoes the result, and deletes the temporary file.

Finally, it makes use of that great bash while read line; do <stuff>; done < file paradigm!
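
A shorter rough equivalent, if you don't mind skipping the per-file breakdown and the temp file, is to concatenate everything and count once:

find dir-to-search/ -name "*.php" -exec cat {} + | wc -l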

Let's say that sometimes you snapshot your VM, which causes your time to be incorrect.  Many things - say updating your software via apt - might not work.

Update with ntpdate!

ntpdate -s time.nist.gov

ntpdate sets the clock by default (run it as root); -s diverts its output to syslog rather than the terminal, which is handy in scripts.  NIST kindly provides time.nist.gov to get a good date and time from!
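
On systemd-based machines where ntpdate isn't installed, timedatectl can switch on ongoing NTP synchronization instead:

sudo timedatectl set-ntp true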

Quick reference for the right syntax to mount a Samba share on Linux via the command line.  This will mount a guest-accessible share, or can be used to pass a username/password combination!

Requires cifs support for mount.  On Debian-derived distributions (Ubuntu, Mint, etc.) this is available in the package cifs-utils.

mount -t cifs -o user=guest //server.ip.addr/ShareName local_mount_point

This will prompt for a password.  If the share is guest accessible, provide no password by simply hitting enter.

Here -t specifies the mount type and -o passes options.
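
With credentials, the same mount looks like the following (username and password here are placeholders); for anything non-throwaway, the credentials= option pointing at a root-readable file keeps the password off the command line:

mount -t cifs -o username=alice,password=secret //server.ip.addr/ShareName local_mount_point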

Back up a MySQL database entirely, or just a selected series of tables, using mysqldump.

mysqldump --add-drop-table -u username -ppassword databasename tbl_one tbl_two > outputfile.sql

This creates a file which, when run back through MySQL, will recreate the dumped table(s) complete with the data contained in each.

--add-drop-table adds a DROP TABLE IF EXISTS tblName line before each table creation in the output file.  This recreates the table from scratch each time, and is useful if using mysqldump as part of a backup or development <-> production routine.

-u and -p pass the MySQL server login creds.  The password, if supplied, must follow -p with NO space between the switch and the password; if -p is present but no password follows, mysqldump will prompt for one.

Multiple tables may be specified separated by spaces after the database name.  If no tables are specified, the entire database is dumped.
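
So a whole-database backup, prompting for the password, is just (names are placeholders):

mysqldump --add-drop-table -u username -p databasename > full_backup.sql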

Quickly run an SQL script from the command line without actually using the SQL interpreter.

Useful for automation of backups for instance.

mysql -u username -ppass db_name < /path/to/script.sql

Simply redirect the SQL script into mysql with <.

-u and -p pass the MySQL server login creds.  The password, if supplied, must follow -p with NO space between the switch and the password; if -p is present but no password follows, mysql will prompt for one.
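
For a one-off statement rather than a whole script, -e runs a query passed on the command line (database and query are examples):

mysql -u username -p db_name -e "SHOW TABLES;"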

Nicely create a gzipped tarball archive (*.tar.gz) that follows symlinks.

tar czhf output_archive.tar.gz /folder/to/backup

Here, c indicates that we want to create an archive, z says to use gzip as well, h dereferences symlinks (archiving the files they point to rather than the links themselves), and f indicates that we're going to specify the output destination next.

Verify your assumptions about file permissions by testing them!

www-data comes to mind.  Will Apache be able to read that folder?

sudo -u <user> test -r </location/to/test>

sudo's -u allows you to specify a specific user (instead of the usual root).  Then run test -r which determines if the passed file exists and can be read.

NB. If your terminal isn't set up to display return codes then you can run the following (example with www-data):

sudo -u www-data test -r /var/data; echo $?

$? holds whatever the last command returned.  Generally, 0 => Success,  Not 0 => Failure.
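
The same trick covers write and execute/search permission with test's -w and -x:

sudo -u www-data test -w /var/data; echo $?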

It is sometimes useful to map a directory somewhere else on your disk.  Sometimes symlinks won't do.

One scenario might be a development environment.  Set up a single host in Apache, and then bind your current development directory to that host's location (i.e. /var/www/html).

sudo mount -o bind /path/to/be/mounted /place/to/mount

Mount's -o signifies options to follow.  Under the hood, bind remounts one portion of the filesystem's tree structure somewhere else.

You could also use mount options like ro or rw to protect certain actions in certain locations within your FS.
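
So the development scenario above might look like this (paths are examples), with a plain umount undoing it:

sudo mount -o bind /home/user/projects/mysite /var/www/html
sudo umount /var/www/html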

Although cat is super common, it's less known that without a file provided, it reads from standard in.  So it can be used as a kind of input reflector.

Used with od, which 'dumps files in octal and other formats', we have ourselves a quick and dirty encoder!

Simply pipe cat into od.

cat | od -x

Enter whatever character or string you want to see in some other form.  cat reads until it encounters an end-of-file (EOF), so hit Enter, and then send an EOF with ^D (Ctrl+D).

Some useful od options are -x for hex and -b for octal.
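
You can skip the interactive step entirely by piping printf straight into od (the string is just an example):

printf 'hello' | od -x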

I've often found myself wanting to combine PDF documents, and always have to resort to Google to figure out how to do it without installing some unnecessarily large software.  The following Ghostscript command will do the trick - gs is very likely already installed on your system:

gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -sOutputFile=merged.pdf part1.pdf part2.pdf

Credit to Roel Van de Paar's Stack Overflow answer.

It can also be helpful to quickly split a single PDF by selecting and saving a specific page range from a larger document.  Ghostscript works here too.

gs -dBATCH -dNOPAUSE -q -dPDFSETTINGS=/prepress -sOutputFile=extracted.pdf -dFirstPage=1 -dLastPage=2 -sDEVICE=pdfwrite <input_file.pdf>

We also pass -dPDFSETTINGS=/prepress, which applies Ghostscript's prepress output preset; if a smaller file matters more than print quality, the /ebook or /screen presets trade quality for size.

The first page in the document is 1, not 0.

It can sometimes be useful to have grep return the lines that come directly before or after a match.  This can be easily achieved in the GNU version of grep.

In addition to being recursive and showing the line number, the following shows 2 lines before each match, and 4 lines after.

grep -rn . -A 4 -B 2 -e pattern

Here -A and -B represent lines after and lines before.  The same number can be used for both with the -C switch.
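
For a symmetric window of context around each match, that looks like:

grep -rn . -C 3 -e pattern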