decompress—a wrapper script to decompress various archive types


The Arch Linux BBS has a thread where people put up their scripts so that others can peruse them. A long time ago someone came up with the idea of a script that would detect various archive formats and decompress them. That post is unfortunately gone now, but I kept the idea and have expanded on it a bit: I’ve added a couple of archive types, file detection, program detection, and archive-list support. I’ve given it a good overall test, so I feel comfortable with it.

Options can be in any order:

$ decompress archive-r.zip --help
decompress [*-l] ... — wrapper script to decompress various archive types
  -l, --list  - list archive contents

If an archive doesn’t exist, that is reported:

$ decompress archive-r.zip
archive non-existent: archive-r.zip

If a required program isn’t found, that is reported:

$ decompress archive-q.zip
program required: unzip
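
Both checks are plain shell tests; a minimal sketch of the idea (not the script verbatim):

[ -e "$archive" ] || { echo "archive non-existent: $archive"; exit 1; }       # file detection
command -v unzip > /dev/null || { echo "program required: unzip"; exit 1; }   # program detection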

Listing support is available:

$ decompress -l archive-q.zip
 archive-q.zip
       32  2016-04-11 10:39   file-q1
       32  2016-04-11 10:39   file-q2

Listing and decompressing can be done for multiple archives:

$ ls
archive-a.tar.bz2  archive-f.tgz       archive-k.txz  archive-p.xz
archive-b.tb2      archive-g.tar.lz    archive-l.7z   archive-q.zip
archive-c.tbz      archive-h.tar.lzma  archive-m.bz2
archive-d.tbz2     archive-i.tlz       archive-n.gz
archive-e.tar.gz   archive-j.tar.xz    archive-o.lz
$ decompress archive-*
archive-a.tar.bz2...
archive-b.tb2...
archive-c.tbz...
archive-d.tbz2...
archive-e.tar.gz...
archive-f.tgz...
archive-g.tar.lz...
archive-h.tar.lzma...
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors
archive-i.tlz...
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors
archive-j.tar.xz...
archive-k.txz...
archive-l.7z...
archive-m.bz2...
archive-n.gz...
archive-o.lz...
archive-p.xz...
archive-q.zip...

.exe and .rar files are untested because I was lazy. If an archive produces an error, its error message will be displayed.
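
At its core, a wrapper like this is an extension-to-program dispatch. A minimal sketch of how that mapping might look (the real script is more thorough):

case "$archive" in
    *.tar.*|*.tb2|*.tbz|*.tbz2|*.tgz|*.tlz|*.txz)
            tar --extract --file "$archive" ;;   # tar auto-detects the compressor
    *.bz2)  bunzip2 "$archive" ;;
    *.gz)   gunzip "$archive" ;;
    *.lz)   lzip --decompress "$archive" ;;
    *.xz)   unxz "$archive" ;;
    *.7z)   7z x "$archive" ;;
    *.zip)  unzip "$archive" ;;
    *)      echo "unknown archive type: $archive" ;;
esac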

decompress can be found in my general-scripts repository.

compress—a tar wrapper script to simplify archiving files


I have become accustomed to using long options over the years, as they are easier to remember. I do, however, use tar in numerous ways, and I needed a quick way to remember how to archive files; I wrote this script to make it really basic:

$ cd ~
$ compress .local/bin/ Development/general-scripts/
archive name [archive.tar.gz]: /dev/sda4/sc
scripture.css  scripts.tar.xz
archive name [archive.tar.gz]: /dev/sda4/scripts.tar.xz
archive exists, overwrite? (y/n): y
archive created: scripts.tar.xz

The compression type used depends on which extension is typed; tar has a nice option for this called --auto-compress. So, in the above example, typing ...tar.xz will use the xz (LZMA) compression algorithm. Just press Enter at the archive-name prompt and the default archive.tar.gz will be used. The script also supports tab completion when typing the archive name, to help navigate folders and files.
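
The two pieces that make this work are bash’s read -e (readline editing, which gives the tab completion) and tar’s --auto-compress. A minimal sketch, assuming the files to archive are the script’s arguments:

# prompt with readline tab completion, defaulting to archive.tar.gz
read -e -p "archive name [archive.tar.gz]: " name
name=${name:-archive.tar.gz}
# let tar pick the compressor from the extension
tar --create --auto-compress --file "$name" "$@"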

compress can be found in my general-scripts repository.

lnk—forward thinking file linking

When I first used ln I tried using it before reading the documentation. I had assumed that linking was a basic enough operation that the syntax ln [source-target] [linkname] was all I needed. I learned, though, that the common usage of ln is otherwise. Since I create enough links, and because I felt the syntax should be basic, I wrote a script to get this behavior.

Besides a basic syntax that was logical to me, there are a few other reasons why I created the script. To understand them, it helps to know the basics of linking.

Principalia linkathica

The default/non-optioned use of ln creates a hard link. A “hard link” is essentially just another name for an existing file. Because the hard link and its source (“target” in the documentation’s wording) share the same file system inode, they are almost indistinguishable (the inode contains all the information about a file).

Hard links are rarely used, however; for several reasons, their alternative, the symbolic link, is used instead. While ln’s default behavior does create a hard link, that default is likely an inherited artifact: hard links came before symbolic links, and the program’s syntax had to be maintained to run as users expected.

A symbolic link is more versatile than a hard link. It is sometimes referred to as a “symlink” or a “soft link” and it has some advantages. It can be:

  • readily used on directories
  • used across file system boundaries
  • created if the source/target doesn’t exist
  • color formatted with the ls command (and often is by default)

Further explanation of what a symbolic link is (as explained in the ln Info page, lightly paraphrased):

A symbolic link is a special type of file that refers to a different file by name. Most operations that are passed to the link file (opening, reading, writing…) are deferred by the kernel to operate on its target. Some operations (e.g. removing) work on the link file itself. The owner and group of a symlink have no effect on the file access of the target — they only have an effect on the removing of the symlink itself. On the GNU system, the file mode bits of a symlink have no significance and cannot be changed.

A symlink can be defined with either absolute or relative paths, the latter being commonly used on removable media.

Examples:

cd $HOME
touch file.txt
ln                 file.txt  file_hrdlink.txt                        # hard link
ln -s  /home/$USER/file.txt  /home/$USER/file_symlink-absolute.txt   # absolute symlink
ln -s     ../$USER/file.txt  file_symlink-relative.txt               # relative symlink
ln -s     ../$USER/FILE.txt  file_symlink-relativebroken.txt         # broken: target doesn't exist

“Dance with the one that brung ya”

A basic syntax to link by was what I wanted and why I created the script; additionally, a couple more benefits could be added:

  • symbolic links used by default as they are more flexible
  • absolute paths used for consistency and because they are usually more intuitive to resolve
  • existence tests used on the source target and destination directory

Usage:

lnk [source-target] [directory-or-linkname] — a generic linker
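
In essence, the script resolves the source to an absolute path, tests for existence, and hands off to ln --symbolic. A minimal sketch of that logic (the real script also tests the destination directory):

src=$(readlink --canonicalize "$1")   # absolute path, for consistency
[ -e "$src" ] || { echo "source non-existent: $1"; exit 1; }   # existence test
ln --symbolic "$src" "$2"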

Examples:
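
(Illustrative invocations, following the usage above:)

lnk /home/$USER/file.txt Documents/            # link into a directory
lnk /home/$USER/file.txt Documents/notes.txt   # link with an explicit name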

lnk can be found in my general-scripts repository.

Command line dictionary


As a person who likes to write, it has always been helpful for me to have a dictionary nearby. As a regular command line user, a dictionary I could access from there was something I really wanted. I hadn’t predicted this would be much of a task; however, I found it an uphill battle.

What I felt a command line dictionary should offer:

1) a basic description that is accurate
2) the capability to be accessed offline
3) a formatting that is easy to read

Availability

In my original attempt I didn’t find one. I looked at a number of programs, but most were inadequate in one way or another. I was baffled, and I almost gave up looking. I did eventually find one, but before that the two most promising programs were dictd and sdcv.

Dictd

Dictd is a protocol/software framework for a networked dictionary; it contains both a server and a client. The idea is to have a server that numerous clients can connect to. This would be useful for local-network use or for something like an online dictionary group. However, development seems to have gone quiet, and I had trouble installing several of its dictionaries… I could never get it to work.

The basic setup steps that are required to make it function are:

  1. install package and a dictionary for it
  2. start the dictd daemon (it requires very little overhead) and check that the dictionaries are available (dict -D lists them)
  3. look up a word definition using a particular dictionary (e.g. dict --database gcide word)
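
In shell terms the steps reduce to something like this (a sketch; the service and database names vary by setup):

systemctl start dictd             # start the daemon
dict -D                           # list the databases the server offers
dict --database gcide laconic     # define a word using one database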

sdcv

I used this program (StarDict console version) for years. It provided basic, easy-to-use, unambiguous definitions. These days, however, the parent program StarDict is no longer in development. Additionally, there were formatting problems that broke the reading flow and made definitions difficult to read.

Forest through the trees

I may not always get what I want; at other times, if I’m paying attention, I’ll find what I need. I discovered a program that, while not a full-blown dictionary, does pretty well. It technically might not even be a dictionary. From the man page:

wn - command line interface to the WordNet lexical database... it outputs synsets and relations to be displayed as formatted text.

In more human-speak: it details relationships between words. Comparing it to a thesaurus would be more direct; however, it can work as a dictionary, as it does provide definitions and contextual examples. The definitions may be basic, but they are to the point. The only feature I occasionally use that it does not provide is word pronunciations.

$ wn lexical -over
...
The adj lexical has 2 senses (first 1 from tagged texts)
...
1. (2) lexical -- (of or relating to words; "lexical decision task")
2. lexical -- (of or relating to dictionaries) 

Creating good enough alone

The output of wn can be difficult to read: it jumbles a lot of information together and only roughly organizes it. (In the above example I’ve filtered out a couple of lines.) To make reading it smooth and natural, I’ve created a couple of scripts to format the output. One script is called dict and the other is called thes.

dict
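
As a hypothetical sketch of the kind of wrapper dict is (query wn for the overview, then reflow the output for readability; the actual script is in the repository):

#!/bin/bash
# dict (sketch): look up a word with wn and wrap the output at 72 columns
wn "$1" -over | fold --spaces --width=72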

I’ve put them in a repository for any who are interested.

Two fine DAE scripts


Anybody who knows my command line habits, or me in general, knows that my memory could be better. It can be good when I need it to be; however, if I don’t have to remember a thing, I write it down. This is why I built my DAE scripts. DAE, which I pronounce as day, is an acronym for Digital Audio Extraction, also known as ripping audio CDs. The scripts wrap a command that has only a few options, yet I have no way to remember options I may not use again for a while. Hence these basic scripts: they are straightforward and just cover the essentials.

(These scripts are only basic wrappers, most of the work is done by the RipIt developer(s)… I thank them very much for their effort.)

How they look

There are two scripts. They are demonstrated here in use, as it is the best way to describe them.

daeme (pronounced like lame) is for MP3s:

[screenshot of a daeme session]

daefe is for MP4s:

[screenshot of a daefe session]

After these steps RipIt does a CDDB query from the Internet (if available) and allows tag editing if desired.
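
Since the demonstrations are screenshots, here is a hypothetical sketch of the shape of such a wrapper; the encoder numbers are RipIt’s --coder values, and the exact options are assumptions:

#!/bin/bash
# daeme (sketch): rip the disc to MP3 with RipIt (--coder 0 selects Lame)
ripit --coder 0
# daefe would be the same idea with the MP4/AAC encoder (--coder 3, Faac)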

Settings

I skipped adding a few settings in the script and instead allowed them to be specified in the RipIt configuration file, as their values will likely remain the same:

faacopt=-s
dirtemplate="${artist} — ${album}"
playlist=0
eject=1

Audiobooks

The daefe script can also be used for audiobooks. The procedure encodes an entire CD to one file and writes a track/chapter index to another file. The chapter index file can then be merged into the audiobook for an integrated audiobook; read ArchWiki:Audiobook for more details.
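
One way to do that merge step is with GPAC’s MP4Box (an assumption on my part, not necessarily the wiki’s exact method; the file names are made up):

# merge the chapter index into the audiobook file
MP4Box -chap chapters.txt audiobook.m4a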

Download

Both scripts have error checking and I consider them reliable. I have put them in a repository for anyone interested.

gurl—a general downloader

I like to keep things basic. Because a command-line download program is already part of the base package installation, it is all I need. Once I learned curl I liked it quite a bit. As always, I need help remembering the options, so I wrote a general wrapper script, and it seems to be all I need. It features redirect following, a progress bar, and resume support. It looks like this:

# gurl http://.../archlinux-2014.09.03-dual.iso
###############################                                           43.4%
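
Those three features map directly to curl options; a minimal sketch of what such a wrapper might reduce to:

# follow redirects, draw a progress bar, resume if possible, keep the remote file name
curl --location --progress-bar --continue-at - --remote-name "$1"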

gurl can be found in my general-scripts repository.

bckfile—backup a file with sequential numbering

I have discovered over the years that protecting a file and its content, and developing in a controlled, deliberate way, is usually something good to keep in mind. I have learned that if I feel a document or project is important, then backing up the data before doing the edit is the methodology I need to follow.

When I decide to back up a file, the first thing I do is see if there is a _vault directory. In any location where I had to back up previously, I created this directory. After the first time I did this I realized I was going to have to number these file backups. I reasoned that filename_[0-9][0-9] would be a sufficient format; if there is an extension, it becomes filename_[0-9][0-9].ext.

As I could see that file backups were something I would do regularly, I decided to create a script to automate the task. It tests whether the file already exists in the destination directory. For the first backup, the script appends 00 to the filename (before any extension); after that, it appends the next sequential number.

The usage is basic: I define the source file and optionally the destination directory. The current directory is assumed if only the source file is specified.

An example:

$ bckfile file.txt _vault
‘file.txt’ -> ‘_vault/file_01.txt’
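
The numbering logic is just a loop over the existing names; a minimal sketch, with the name split out by hand:

# find the next free number for file.txt in _vault, then copy
base=file ext=txt dir=_vault
n=0
while [ -e "$dir/${base}_$(printf '%02d' "$n").$ext" ]; do
    n=$((n + 1))
done
cp --verbose "$base.$ext" "$dir/${base}_$(printf '%02d' "$n").$ext"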

The script does have one limitation: the filename can only contain one period, and it must be for the extension. This is necessary, as determining the intention otherwise would take a lot of work 😊.

bckfile can be found in my general-scripts repository.