Category Archives: Command Line

Power Management from the Command Line

Not so long ago, invoking commands like suspend and hibernate from the command line required root privileges or the desktop environment's built-in tools. Now suspend, hibernate, shutdown, and restart can all be invoked over D-Bus as a regular user. I created a script called pwrman to ease the task (it requires UPower to be installed).
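The pwrman script itself isn't shown in this archive; here is a minimal sketch, assuming the UPower and ConsoleKit D-Bus interfaces that were current at the time (on a modern systemd machine, systemctl suspend and friends do the same job):

```shell
#!/bin/bash
# pwrman - sketch of a power-management helper (script name from the post).
# Uses the UPower and ConsoleKit D-Bus methods of the era; these were later
# replaced by systemd-logind on most distributions.
pwrman() {
  case "$1" in
    suspend)
      dbus-send --system --print-reply --dest=org.freedesktop.UPower \
        /org/freedesktop/UPower org.freedesktop.UPower.Suspend ;;
    hibernate)
      dbus-send --system --print-reply --dest=org.freedesktop.UPower \
        /org/freedesktop/UPower org.freedesktop.UPower.Hibernate ;;
    shutdown)
      dbus-send --system --print-reply --dest=org.freedesktop.ConsoleKit \
        /org/freedesktop/ConsoleKit/Manager \
        org.freedesktop.ConsoleKit.Manager.Stop ;;
    restart)
      dbus-send --system --print-reply --dest=org.freedesktop.ConsoleKit \
        /org/freedesktop/ConsoleKit/Manager \
        org.freedesktop.ConsoleKit.Manager.Restart ;;
    *)
      echo "usage: pwrman suspend|hibernate|shutdown|restart"
      return 1 ;;
  esac
}
```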

(I got this idea from someone on the Arch Linux forums. I forgot who you are, so sorry, and thank you.)

bashrc

The ~/.bashrc file is the bash shell's settings file. It can also be used to specify other bash-related items like abbreviating commands and creating shortcuts. Here is my ~/.bashrc, all bells, no whistles.

The ABC’s of creating MP3s



Being content with GUI ripping software is something that never happened for me on Linux. I had expected my music player software to handle the task, but I can't remember any that did (and I'm discovering that not remembering, for me, is the same as working poorly). As for stand-alone rippers, I haven't found any that were notable. Because I'm a big fan of software being efficient and to the point (do one thing and do it well), I was at a bit of a loss when I began wondering how I was going to rip my CDs to MP3s. A good number of tasks I had regularly done through the GUI, I've discovered, are better done through the command line, and though I haven't tested every MP3-related application, this looks to be true for them as well. Here's a complete-ish guide to ripping, organizing, repairing, and volume-normalizing an audio collection, done mostly through the CLI.

Rip

RipIT is a program that can do just about anything a GUI ripper can. Its default options are good enough for most cases (running ripit is all that is needed), but having greater control can save time in the end. A wrapper script can be created to help with this:

The ripcd script below defines:

  • The ripping preset (extreme here, because storage space isn’t an issue).
  • The directory creation template. RipIT goes online and gets the album tag information, which can be used to organize directories by tag (here the common "$artist/$album" is used).
  • Looping (prompts for a new CD when ripping is done).
  • Ripping priority, so RipIT plays nice with other programs.
  • Querying the MusicBrainz music database instead of the default, as it is usually more accurate (editor approval required).
  • The Audio sub-directory to rip to (my Audio directory is divided as such: Audiobooks, Music, Others, Podcasts).
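The ripcd wrapper itself isn't shown in this archive; here is a sketch covering the options listed above. The RipIT flag names are from memory and may differ between versions, so check ripit --help before using them:

```shell
#!/bin/bash
# ripcd - sketch of the wrapper described above. Flag names are assumptions
# based on RipIT's manual; verify them against your installed version.
args=(
  --quality extreme                  # lame preset: storage space isn't an issue
  --dirtemplate '"$artist/$album"'   # build directories from the album tags
  --loop 1                           # prompt for a new CD when a rip finishes
  --nicerip 19                       # rip at low priority to play nice
  --mb                               # query MusicBrainz instead of the default
  --outputdir "$HOME/Audio/Music"    # the Audio sub-directory to rip to
)
# Printed here so the sketch is safe to run without a CD in the drive;
# replace the echo with `ripit "${args[@]}"` for the real thing.
echo ripit "${args[@]}"
```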

Normalize

Normalizing audio means adjusting the volume of audio files to a standard level. This is often a good idea, as average volume levels usually differ from album to album. A great program called mp3gain can do this easily. I created a script that first normalizes by type (either the Music collection or the Audiobook collection, since there are usually differing recording standards for each), then normalizes albums relative to the others in that category. Here’s the script:
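The script itself isn't included in this archive; a minimal sketch of the idea uses mp3gain's album-gain (-a), clipping-protection (-k), and dB-offset (-d) flags (the per-category offsets here, and the normalize_dir helper name, are my assumptions):

```shell
#!/bin/bash
# normvol - sketch of the normalizing script described above. Album gain (-a)
# keeps tracks' relative levels within an album; -d shifts the target away
# from mp3gain's 89 dB default, giving each category its own level.
normalize_dir() {
  local dir=$1 offset=$2
  find "$dir" -type d | while read -r album; do
    # skip directories with no MP3s in them
    compgen -G "$album/*.mp3" >/dev/null || continue
    mp3gain -a -k -d "$offset" "$album"/*.mp3
  done
}

# Only run against a real collection; the offsets are illustrative.
if command -v mp3gain >/dev/null 2>&1 && [ -d "$HOME/Audio" ]; then
  normalize_dir "$HOME/Audio/Music" 0        # music at the default level
  normalize_dir "$HOME/Audio/Audiobooks" -5  # audiobooks a bit quieter
fi
```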

Repair

RipIT uses Lame to encode the audio files, and it does a very good job of it, though occasionally I’ve found it to make a mistake. For these MP3s, for previous rips, and for MP3s that were downloaded in the past, it is a good idea to check that they are in good shape. An excellent tool called MP3 Diags can test MP3s and fix common problems. I’ve discovered that repairing MP3s helps them play nicely across different players. MP3 Diags also includes a very nice (though basic) tag editor.

A more Desirable rm

Update: Added the mv options -t and --backup=t to prevent collisions between same-named files (thanks, Matt!). A bashrc version and a bash script are both available.

Warning: I am not currently using this script. It works well in most instances, but I discovered it has problems when compiling. During compiling, some files get force-removed (the rm -f, -fr, or -rf options) in ways that mv will not reproduce. When this happens, files don’t get removed and compile errors can occur. I am still trying to figure out how to handle this.

I’ve deleted files before that I wished I could get back, and resorted to file-recovery utilities to try to save them, but I had never done this before:

rm -rf ~/ .Arachnophilia

I was doing a bit of spring (fall?) cleaning, and as you may have guessed: the space between ~/ and .Arachnophilia was not intended. An accidental slip of the thumb caused it, and yes, it made my computer delete my whole home directory! I remembered a post I had read at cyberciti, so I quickly ran sudo init 1 to drop into single-user mode. That post was about recovering a text file, though, and single-user mode didn’t accept any input anyhow, so I rebooted into my beloved Parted Magic.

R! and R? (Request!! and Recover?)

Parted Magic now has Clonezilla once again, and luckily I had backed up several days earlier. I wrote down the files I had been working on since then to try to recover them. The Arch Wiki has a good page on this. I tried extundelete first, and while it’s probably the best and most thorough way to recover all files, it recovers them with names like this:

ls . | head -n 2
010392
010394

Since the files I’ve been working on were just text files and scripts, Photorec was a better application for this. Photorec is able to analyze and determine certain file types including text files and script files.

Afterward, I was left with directories and files that looked like recup_dir/f1029394.txt: 14,000-some text files, if I remember right. grep turned out to be an awesome, and quick, way to find the correct ones. I just had to remember some text within the file and search for it like this:

grep -rn --include=*.txt "the text here" .

Using grep alone instead of combining it with a program like find is much quicker. The -r does a recursive search, -n prints the line number, and --include specifies the file type (text files, in this case). Piping grep works well for more refined searches, like matching two patterns on a line or excluding a string:

grep -rn --include=*.txt "the text here" . | grep -n "this too"
grep -rn --include=*.txt "the text here" . | grep -v "string not with this"

I found the files I needed, thankfully, and was able to get them back.

~/Never\ Again\?/

First thing I did after restoring from my backup was to see if I could prevent this again. The answer was pretty simple: move the files to the trash instead. By adding:

alias rm="mv -t ~/.local/share/Trash/files --backup=t --verbose"

to the ~/.bashrc, files will get moved to the trash instead of deleted. The ~/.local/share/Trash/files location is defined by the freedesktop.org trash specification, so it interoperates with both KDE and Gnome. With the -t and --backup=t options, mv will rename duplicate file names to ones with appended numbers (e.g. abc.~2~).

Here’s a more detailed version as a bash script that includes feedback:
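The bash-script version mentioned above isn't reproduced in this archive; a minimal sketch with feedback might look like this (the TRASH_DIR override is my addition for testing; it falls back to the freedesktop location):

```shell
#!/bin/bash
# trash - sketch of the "more detailed version" of the rm replacement.
# --backup=t gives numbered backups (abc, abc.~1~, ...) so same-named
# files never collide in the trash.
trash() {
  local dest=${TRASH_DIR:-$HOME/.local/share/Trash/files}
  mkdir -p "$dest"
  local f
  for f in "$@"; do
    if [ -e "$f" ] || [ -L "$f" ]; then
      mv -t "$dest" --backup=t -- "$f" && echo "trashed: $f"
    else
      echo "no such file: $f" >&2
    fi
  done
}
```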

Setting Up a Scripting Environment

When first learning Linux, I didn’t realize that a lot of it lies beneath the surface. Linux still holds on to its developmental roots, and a good deal of its power can be found directly on the command line. Windows doesn’t have this type of functionality, and though Mac OS X has some of it, few people know about it. When I need to do something powerful or automated in Linux (whether switching mouse buttons or launching multiple programs at once), I can often turn to the command line and write a bash script for it. The command line can be very powerful: there are few things that can be done only from a window, and many that can be done only from the command line.

Setting up a scripting environment means creating a place to store the scripts, easily getting to them, and executing them like a regular command.

Directory Setup

The first thing I do is set up a directory to place the scripts in. This directory usually belongs in the home folder and is preferably hidden, as it’s not necessary to see it all the time. This may sound inconvenient at first, but since commands will be run from the terminal, it quickly becomes second nature. I like to name the directory ~/.scripts; others follow Linux filesystem convention and use ~/.local/bin (dot files are hidden files and are not shown unless explicitly requested):

mkdir ~/.scripts

The tilde character (~) signifies the home directory and is used as a shortcut because it is quicker than typing /home/user. To quickly switch to the scripts directory, I create a shortcut in the bash configuration file, ~/.bashrc. Shortcuts can be defined there using aliases. Adding the shortcut:

alias cds="cd ~/.scripts"

cds tells me: change to the directory of scripts. After I save it, I re-source the bash configuration file to reload the new settings:

source ~/.bashrc

Now typing the shortcut cds will change to the script directory.

Run Scripts Just Like Regular Commands

I create new scripts here, or put ones I find here. Creating a script is outside the scope of this post, but once scripts are here they will need to be made executable:

chmod +x script-name

To run a script like a regular command, the bash shell needs to know about the new executable path (~/.scripts). Any time a command is run, bash looks for programs or scripts in the directories listed in the PATH variable. The currently known paths can be shown with:

echo $PATH

To add the script directory to the known paths, it needs to be defined in the ~/.bashrc file. The file may already have some paths defined in an export PATH... line. If it does, the script directory can be added to that line. If it doesn’t, I add both the script directory and the current paths ($PATH) to be sure the new path doesn’t override the old:

export PATH="$HOME/.scripts:$PATH"

Paths are separated by a colon (:), and as many can be added as needed. Saving and sourcing ~/.bashrc will make the new directory recognized by the bash shell. (Note that $HOME is used rather than ~ because the tilde is not expanded inside quotes, or in PATH entries.)

Related

  • If you’d like to learn more about copying scripts (or text) from a window and pasting them to a file from the command line, see Command Line to Clipboard.

Command Line Calculator

I can usually type faster in a terminal than I can mouse-click a GUI calculator, so I created this script to do calculations quickly from the terminal. There are a lot of command line calculators out there, so use the one you are comfortable with, but I like using bc because of the syntax. For example, you can type:

calc "6/(3*10)"

or something more complex:

calc "8^2*(10/2+(13.2032565*2030)/.349548)" 100

The 100 is optional; it specifies how many decimal places to carry the result out to (the default is 4).

HTML Entities from the Command Line

While doing HTML work I tend to use text editors. For this I use Arachnophilia, a Java HTML editor with easy, editable, customizable tags (review here).

Arachnophilia has support for converting characters to HTML entities, but it isn’t easy to get to (HTML > More Functions > Char to Entity). Various web sites can do it too, but if you’re willing to use the terminal, it can be done quickly there as well, thanks to a script by Darren. The script requires Perl’s HTML::Entities module (for help installing Perl modules, look at this page). You’ll probably need to edit the script to point to the proper Perl location:

whereis perl

More than likely it’s /usr/bin/perl. After fixing that, run the script. It will put you in a sub-shell where you can copy and paste characters to be encoded:

You can also convert a whole file. This will print to standard output (terminal text):

htmlentities filename

Or convert a file by doing:

htmlentities  < file > convertedfile
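Darren's Perl script isn't reproduced in this archive. As a rough stand-in for the file-conversion case, a sed filter can handle the four basic ASCII entities (the real script's HTML::Entities module also covers accented and other special characters, which this sketch does not):

```shell
#!/bin/bash
# htmlentities_lite - a minimal stand-in for the htmlentities script.
# Encodes only &, <, > and "; the & substitution must come first so the
# ampersands introduced by the other substitutions aren't re-encoded.
htmlentities_lite() {
  sed 's/&/\&amp;/g; s/</\&lt;/g; s/>/\&gt;/g; s/"/\&quot;/g' "$@"
}
```

Used as a filter it works the same way: htmlentities_lite < file > convertedfile.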

Installing Perl Modules Manually

If you do Perl programming, or if a program you have needs a Perl module, you could download and compile it manually, but the easier way is to use CPAN (the Comprehensive Perl Archive Network).

CPAN

The first thing you should do is see if your distribution has the module in its package repository, so that it can be easily added and removed. If it isn’t there, be sure you have Perl installed and start the CPAN shell:

perl -MCPAN -e shell

Then upgrade the local CPAN module database:

install Bundle::CPAN
reload cpan

Then to install (for example HTML::Template):

install HTML::Template
exit

Once the database is downloaded you don’t need to use the shell anymore to add a module:

perl -MCPAN -e 'install HTML::Template'

Vi(m) Reference Card

I use the Vi Reference Card all the time, but I seem to have lost my copy, and since my printer is broken I decided to make an HTML version of it:

Vi(m) Reference Card

Backup Configurations with tar Helpers

Update: This article has been supplanted by The Beauty of rsync and Backup Script.

When I have to do a reinstall, sometimes I have to install from scratch – a clean install is just sometimes necessary. My configurations are priceless to me, and after a reinstall I restore them from a backup copy. Here’s how I quickly add my configurations to an include file and back them up on a regular basis with a basic script.

Basic tar Command

A good number of GUI programs can do this (you can read about the ones I’ve looked at here), but they seem to make the process more complicated, so I went back to tar and created a script, which makes adding and subtracting files easy. For those new to tar, the basic archive command is:

tar -c --xz -f <backupname>.tar.xz /folder/file /folder ...

Include/Exclude Files

By putting the archive command in a bash script it can be used later. Files and folders can be appended to the command in the script but for multiple files consider using include and exclude files:

#!/bin/bash

tar --files-from=include.txt --exclude-from=exclude.txt -c --xz -f backupname.tar.xz

Include/exclude files contain, one path per line, what (or what not) to back up. The include file should list full paths and cannot use wildcard patterns, but the exclude file can:

/home/*/.local/share/Trash/files
/home/*/.Trash

Add Paths to Include/Exclude Quickly

To add paths to the include/exclude files quickly, readlink can be used. Using the script below, adding to the include and exclude files looks like this:

$ bckcfg i /etc/rc.conf 
 Added "/etc/rc.conf" to bckcfg include file.
$ bckcfg e ~/.thumbnails/
 * "bckcfg" doesn't check if the path is correct because the exclude file can
 contain regexps.  Be sure the path is correct (e.g. '/mnt/win/*')
 Add "/home/todd/.thumbnails/" to bckcfg exclude file? (y/n): y
 Added "/home/todd/.thumbnails/" to bckcfg exclude file.

Notes on backups can be created by:

$ bckcfg n Added font config
$ cat bckcfg-not.txt | tail -n 2
20xx-08-14-02:52:23 PM: Added dhcp cacher config
20xx-08-14-02:52:33 PM: Added font config

The Backup Script

To back up, do:

bckcfg c

The script names the backup by several definable variables and removes old backups if desired.
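The bckcfg script itself isn't included in this archive; a reduced sketch of the i (include), e (exclude), n (note), and c (create) subcommands, with assumed file locations under ~/.backup, might look like:

```shell
#!/bin/bash
# bckcfg - sketch of the backup helper described above. File locations are
# assumptions. readlink -e requires the include path to exist; readlink -m
# does not, since exclude entries may contain wildcards.
BCK_DIR=${BCK_DIR:-$HOME/.backup}
INC=$BCK_DIR/include.txt
EXC=$BCK_DIR/exclude.txt
NOTES=$BCK_DIR/bckcfg-not.txt

bckcfg() {
  mkdir -p "$BCK_DIR"
  local cmd=$1
  if [ $# -gt 0 ]; then shift; fi
  case $cmd in
    i) readlink -e "$1" >>"$INC" &&
         echo " Added \"$1\" to bckcfg include file." ;;
    e) readlink -m "$1" >>"$EXC" &&
         echo " Added \"$1\" to bckcfg exclude file." ;;
    n) echo "$(date '+%F-%r'): $*" >>"$NOTES" ;;       # timestamped note
    c) tar --files-from="$INC" --exclude-from="$EXC" -c --xz \
         -f "$BCK_DIR/backup-$(date +%F).tar.xz" ;;
    *) echo "usage: bckcfg i|e|n|c [path|note]"; return 1 ;;
  esac
}
```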

Consider putting it in a cron job to get regular backups.

o/

Restore Settings on a Broken Firefox

Update: 09-29-11 – Using a script to automate the process; see end of post.

When people have an issue with Firefox, I’ve seen many resort to deleting their old profile (or folder) and creating a new one. This works, but it also gets rid of any passwords, history, bookmarks… therein. Having used Firefox quite a bit, creating a new profile from time to time is a good idea anyhow, as cruft, bad extensions, … can slow down browsing.

Manually

First, change to the Firefox configuration directory:

cd ~/.mozilla/firefox/

Backup the old profile and profile list:

mv xxxxxxxx.default{,.bck}
mv profiles.ini{,.bck}

Create a new profile:

firefox -CreateProfile <profilename>

This command will return the name of the new folder. Copy the basic settings to the new profile:

cd *.default.bck
cp places.sqlite key3.db cookies.sqlite mimeTypes.rdf formhistory.sqlite signons.sqlite permissions.sqlite webappsstore.sqlite persdict.dat content-prefs.sqlite ../*.<profilename>

This will transfer the bookmarks, browsing history, form entries, passwords, personal dictionary changes, and page zooms. There might be a couple of other things you want to add (possibly your Firefox preferences); take a look at Transferring data to a new profile for more information.
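The automation script the update above mentions isn't included in this archive; here is a sketch of the same steps. The function name, the new profile name "restored", and the assumption of a single xxxxxxxx.default profile are mine; the copied file list is the one from the post:

```shell
#!/bin/bash
# ffnewprofile - sketch automating the manual profile-restore steps above.
ffnewprofile() {
  local ffdir=$HOME/.mozilla/firefox
  if [ ! -d "$ffdir" ]; then
    echo "no Firefox profile directory found"
    return 1
  fi
  (
    cd "$ffdir" || exit 1
    old=$(echo *.default)                # assumes a single xxxxxxxx.default
    mv "$old" "$old.bck"                 # back up the old profile
    mv profiles.ini profiles.ini.bck     # and the profile list
    firefox -CreateProfile restored      # make a fresh profile
    new=$(echo *.restored)
    # copy the basic settings into the new profile
    for f in places.sqlite key3.db cookies.sqlite mimeTypes.rdf \
             formhistory.sqlite signons.sqlite permissions.sqlite \
             webappsstore.sqlite persdict.dat content-prefs.sqlite; do
      if [ -e "$old.bck/$f" ]; then cp "$old.bck/$f" "$new/"; fi
    done
  )
}
```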
