Mounting a Windows NTFS Partition as a Regular User (Ubuntu)

To mount a Windows NTFS partition in Linux as a regular user (e.g. mount /dev/sda1 /mnt/Windows), the driver needs to be rebuilt with internal FUSE (Filesystem in USErspace) support, and the correct permissions need to be set.

Download and Compile

First setting a couple variables eases the process:

blddir=~/Downloads/build      # A good place to do compiling
pkgname=ntfs-3g               # The package/driver name

Here the package version variables are defined to match the actual extracted package source names (why 1: gets prepended and 2ubuntu3 gets appended, I'm unsure of):

pkgname_ver=$(dpkg -l | grep '^[ih]i' | awk '{print $2"_"$3}' | grep $pkgname | sed 's/1://')
PKGNAME_VER=$(echo $pkgname_ver | sed 's/\(.*\)-.*/\1/')

Note: Theoretically this should not be needed if you use udisks2. Unfortunately, it looks like no one has figured out how to do this with udisksctl yet.

Install the build tools and then the packages needed to build ntfs-3g:

sudo apt-get install build-essential fakeroot dpkg-dev lynx devscripts
sudo apt-get build-dep $pkgname

Create the build directories and change into them:

[ ! -d "$blddir" ] && mkdir -p "$blddir"
cd "$blddir"
[ ! -d "$pkgname" ] && mkdir "$pkgname"
cd "$pkgname"

Download the source code (which gets extracted after downloading):

apt-get source "$pkgname"

The source code is oddly owned by root; to make it editable, change its permissions (replace username with your user name):

sudo chown -R username:username .

Enter the source code directory (required to build):


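The extracted directory is named after the package and upstream version, so a glob gets there (an assumption; adjust if the directory name differs):

cd "$pkgname"-*/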
Change the FUSE option to internal, note the change in the changelog (dch), then compile:

sed -i 's/--with-fuse=external/--with-fuse=internal/g' debian/rules
dch -i "Changed fuse option to internal in configuration rules"
dpkg-buildpackage -rfakeroot -b

Replace the current NTFS-3G driver with the one just compiled with internal FUSE support:

sudo gdebi ntfs-3g_2011.4.12AR.4-2ubuntu3_i386.deb  # adjust to the version/architecture you built

And hold (freeze) the package so it doesn’t get updated with a new version on a system update:

echo ntfs-3g hold | sudo dpkg --set-selections

The driver will need to be setuid-root (there are security risks in doing this, so read up on the setuid implications first):

sudo chown root $(which ntfs-3g)
sudo chmod 4755 $(which ntfs-3g)

Finally, give the user the ability to mount volumes:

sudo gpasswd -a username disk

Reboot to have the new driver loaded and the user put into the disk group.


The fstab will need to have the right options to be able to mount as a regular user. In my next post, I’ll show what my fstab looks like.
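As a rough preview, a user-mountable NTFS entry looks something like this (a hypothetical example, not necessarily the exact line from that post):

/dev/sda1 /mnt/Windows ntfs-3g noauto,user,uid=1000,gid=1000 0 0

The user option is what lets a regular user mount and unmount the volume.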

Bug Fix

I had a problem with gcc-4.6_4.6.1 on my install: it would error out at the beginning of a compile. The workaround for me was to install an earlier version of GCC and then specify it when compiling:

sudo apt-get install gcc-4.4
CC=gcc-4.4 dpkg-buildpackage -rfakeroot -b


Storing login/password Websites in a File

I find it is a good idea to update my Internet passwords from time to time. Previously, to do so, I opened Firefox's Preferences window and then went to the Saved Passwords window. From there, I'd toggle between Firefox and the Saved Passwords window, go to the sites listed, and change the passwords.

After doing this, I decided it would be quicker to just keep them in a text file. Once I had updated a password on a website, I'd comment out its line in the file so I'd know it was done.

For text editing, I commonly use Vim and it works great for this.

The nice thing about working in the terminal, too, is that once the text file is open, the webpages can be opened by Ctrl+clicking on them.

I created three scripts to help the process: one to edit the list, one to generate the passwords, and one to copy a password to the clipboard.

 sitepass-ls   - list of programs/sites using common pw
  a | add   - add entry to list
  e | edit  - edit list
  s | sort  - sort list alphabetically
  u | uncom - uncomment list for new password
 sitepass-gn  - generate password for common use and other use.
  c | common - generate common password
  o | other  - generate other  password
 sitepass-cb  - copy common, other, and previous passwords to clipb.
  c  | common  - copy common
  o  | other   - copy other
  cp | comprv  - copy previous common
  op | othprv  - copy previous other
  x  | clear   - clear contents of clipboard

Here are the scripts:
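As an illustration only (a minimal sketch with hypothetical file paths, not the originals), the clipboard helper might look something like this, assuming xclip is installed:

#!/bin/bash
# sitepass-cb (illustrative sketch): copy a stored password to the clipboard.
# The ~/.sitepass/* files are assumptions for the sake of the example.
case "$1" in
    c|common)  xclip -selection clipboard < ~/.sitepass/common ;;
    o|other)   xclip -selection clipboard < ~/.sitepass/other ;;
    cp|comprv) xclip -selection clipboard < ~/.sitepass/common.prev ;;
    op|othprv) xclip -selection clipboard < ~/.sitepass/other.prev ;;
    x|clear)   echo -n | xclip -selection clipboard ;;  # overwrite the clipboard
    *)         echo "usage: sitepass-cb {c|o|cp|op|x}" >&2; exit 1 ;;
esac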




Swap File for Suspend

Warning: I have found this method to be unreliable; therefore, I have reverted to using a swap partition.

I decided not to clutter my partitioning scheme any more with a swap partition, so from now on I'm using a swap file instead. This shows how to create and use a swap file during installation.

Create the Swap File

Boot the install disk and load Linux (for Ubuntu, use 'Try Ubuntu' to get to a functioning environment). Partition now if required (GParted recommended), as it is generally easier than using the installer's partitioner. When partitioning is done, open the terminal so the swap file can be created.

You’ll need the kernel-defined root partition name (if you don’t already know it):

sudo fdisk -l | grep ^/dev

To simplify tasks define the root partition as a variable. For example, if your root partition is sda2:
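root_part=sda2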


Create the mount point and mount the partition:

sudo mkdir /mnt/$root_part && sudo mount /dev/$root_part /mnt/$root_part

Create the swap file (it is created before doing the install so that it sits at the beginning of the partition):

sudo fallocate -l 1G /mnt/$root_part/swapfile  # G = Gigabyte, M = Megabyte
sudo chmod 600 /mnt/$root_part/swapfile
sudo mkswap /mnt/$root_part/swapfile

Unmount, then install your system:

sudo umount /mnt/$root_part

Install your System

Install as normal. In the installer, assign the partition(s) to the desired mount points (for example, sda2 to / (root), sda3 to /home, …).

List the Swap File

After the install has completed, the swap file information will need to be listed in the static filesystem configuration file (fstab).

To do this, the partition will likely need to be mounted again:

sudo mount /dev/$root_part /mnt/$root_part

Add the swap file to the root partition's fstab using the editor of your choice (for example: gksudo gedit /mnt/$root_part/etc/fstab), adding:

/swapfile none swap defaults 0 0

Define the Kernel Options

After the install has completed, the swap file location will need to be defined as a kernel option to the bootloader.

Change apparent root (to be able to update the bootloader later):

for i in /dev /dev/pts /proc /sys; do sudo mount -B $i /mnt/$root_part$i; done
sudo chroot /mnt/$root_part /bin/bash

Get the root partition UUID (Universally Unique IDentifier):
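blkid /dev/sda2  # the shell variable doesn't survive the chroot; substitute your root partition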


Get the physical location of the swap file's first block on the partition by running the following command (the value needed is in the first row of the 'physical' column):

filefrag -v /swapfile
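The output will look something like this (illustrative numbers; here the resume_offset would be 34816):

Filesystem type is: ef53
File size of /swapfile is 1073741824 (262144 blocks, blocksize 4096)
 ext logical physical expected length flags
   0       0    34816           32768
 ...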

The bootloader will need kernel options defining the UUID of the partition holding the swap file and the swap file's first-block physical location (resume_offset), in this form:

resume=UUID=the-root-partition-UUID resume_offset=the-swap-file-physical-address

These will need to be added to the configuration file. For the original GRUB (GRUB Legacy), edit /boot/grub/menu.lst and add to the kernel line the above kernel options. For GRUB2, edit /etc/default/grub and add the kernel options to the GRUB_CMDLINE_LINUX_DEFAULT="..." line, then:
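update-grub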


Also, the initial RAM filesystem (basically a device/software loader for items that need to be initialized during kernel boot) may need this information as well. For Ubuntu, define the kernel options by doing:

echo "resume=UUID=the-root-partition-UUID resume_offset=the-swap-file-physical-address" | sudo tee /etc/initramfs-tools/conf.d/resume
sudo update-initramfs -u

Exit the chroot, unmount, and reboot into the new system:

exit
for i in /sys /proc /dev/pts /dev; do sudo umount /mnt/$root_part$i; done
sudo umount /mnt/$root_part

Test now whether hibernation works. If it doesn't, you can try adding and switching to the 'userspace' suspend framework instead.

Userspace Suspend/Hibernation

uswsusp is a rewrite of the kernel suspend framework as a 'userspace' tool. It has better support for suspending to a swap file, so using it here may well be necessary.

Reboot into the new operating system and install uswsusp.
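sudo apt-get install uswsusp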

Ubuntu pre-configures uswsusp (it defines the root partition, gets the swap file size, runs sudo swap-offset /swapfile, places these values in the configuration file /etc/uswsusp.conf, then creates a new initramfs), so all that needs to be done is install it. Other distributions may need manual configuration. Once installed and configured, reboot again and test.


Converting Ext4 to JFS

Because I have an older laptop and disk I/O can really bottleneck on the motherboard, I decided to move from the ext4 filesystem to JFS. I had been using ext4 because it was fairly fast and definitely reliable; however, with kernel 2.6.30 new data integrity features were added that slow it fairly noticeably on an eight-year-old computer. Moving to JFS has made a fair difference in the speed of the system, its caveat being that it journals only metadata (not metadata and data like ext3/4).

Backup, Convert, Restore

The JFS filesystem utilities will be needed (for Debian/Ubuntu):

sudo apt-get install jfsutils

Reboot to a rescue CD and back up the partition(s) onto another drive. For this example two partitions are used: one for root, one for home. Mount root, home, and then the backup drive (substitute the actual device names):

mkdir /mnt/{root,home,backup}
mount /dev/<root-partition> /mnt/root
mount /dev/<home-partition> /mnt/home
mount /dev/<backup-drive> /mnt/backup

Create the backup directories:

mkdir -p /mnt/backup/backup-rsync/{root,home}

Backup both partitions:

rsync -axS /mnt/root/ /mnt/backup/backup-rsync/root
rsync -axS /mnt/home/ /mnt/backup/backup-rsync/home

Check integrity of backup, then create a JFS filesystem on both partitions:

mkfs.jfs /dev/<root-partition>
mkfs.jfs /dev/<home-partition>

Restore the backup contents back to the root and home partitions; first method:

rsync -axS /mnt/backup/backup-rsync/root/ /mnt/root
rsync -axS /mnt/backup/backup-rsync/home/ /mnt/home

Or use this method to be sure files are defragmented (JFS is somewhat prone to fragmentation; heavy use may require occasional defragmenting):

(cd /mnt/backup/backup-rsync/root/ && tar -cS -b8 --one -f - .) | (cd /mnt/root && tar -xS -b8 -p -f -)
(cd /mnt/backup/backup-rsync/home/ && tar -cS -b8 --one -f - .) | (cd /mnt/home && tar -xS -b8 -p -f -)

Updating the System

The system needs to know about the filesystem changes. Change apparent root from the rescue CD to the installed Linux system by:

cd /mnt/root
mount -t proc proc proc/
mount -t sysfs sys sys/
mount -o bind /dev dev/
chroot . /bin/bash

Update the chrooted system current mounts file:

grep -v rootfs /proc/mounts > /etc/mtab

The fstab file (the static filesystem configuration) needs to be updated. The information that will need adjusting: the UUID (possibly), the filesystem type, and the options. The UUIDs (unique disk identifiers) may have changed; they can be appended to the fstab file (so that they can be easily moved into place) like this:

blkid -s UUID -o value /dev/<root-partition> >> /etc/fstab
blkid -s UUID -o value /dev/<home-partition> >> /etc/fstab

Edit /etc/fstab to set the UUID, type, and options:

# /dev/sda2
UUID=5d9753dd-f45f-425a-85e2-25746897fdfa / jfs   noatime,errors=remount-ro 0 1
# /dev/sda4
UUID=d3f9eafd-1117-4c75-a309-b21dece655d1 /home jfs noatime                 0 2

noatime lessens disk writes by not updating a timestamp every time a file is accessed (atime isn't seen as very useful anymore, since it was developed primarily with servers and statistics in mind).

JFS supposedly works very well with the deadline I/O scheduler; the GRUB configuration needs to specify its use. This example is for GRUB 2, though it is similar with the original GRUB; edit /etc/default/grub and append:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=deadline"

The other Grub configurations need to be updated with the new information:
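update-grub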


Then the GRUB bootloader will have to be re-installed to the MBR (I think this is because the portion of GRUB on the MBR carries directions on how to find its configuration on a specific filesystem).

grub-install /dev/<disk>  # the whole disk (e.g. /dev/sda), not a partition

Exit the chroot and unmount the temporary filesystems:

exit
umount {proc,sys,dev}


Ubuntu Oneiric: Initial Musings

Update: Because of hardware problems, the information about Oneiric's speed is off; please ignore those mentions. Correction: Also, Unity is a collaboration of Gnome 3.0 and the Ubuntu Launcher, with the Launcher generally replacing Gnome 3's Activities start menu.


The first thing I noticed as Oneiric booted up was how pretty it was: from the unassuming theme to the colorful launcher, the Oneiric looks are sweet. The second thing I noticed, however, was how slow it was. Upon logging in, the desktop took about 60 seconds to become usable; the application menu took 10 seconds to open, and the file browser another seven. My first impression: a bit scared (no worries, read on).

Note: A quick background to explain my experience: I have an eight-year-old laptop that I'd like to be able to hold onto. I know a good number of other Linux users with older computers because, basically, I think we feel these computers are good enough for what we need to do. Up to this point, I've used the original Gnome (Gnome classic, < 3.0) fine on this computer (many Firefox tabs, Gimp, and Inkscape concurrently) and it ran adequately enough. We are at a time, though, where it is certain that Gnome is changing: Gnome re-engineered the desktop with Gnome 3.0 (a more "modern", though more resource-intensive, desktop). Shuttleworth (Ubuntu's high commander) was like many, though, and couldn't understand its ergonomics, and announced a split from Gnome 3 with the Ubuntu-designed Unity desktop (basically a Gnome 2.x desktop with some tweaks and a new application bar). Unity too, though, is more resource-intensive than Gnome Classic, and judging by other posts I've seen, I am not the only one questioning whether I need new hardware.


Ubuntu certainly is putting good thinking into creating an efficient desktop. The colorful icons on the launcher distinguish differing programs very well, and when they are clicked they provide nice feedback so you know the program is loading. I think going the route of icons only was a nice touch (as I generally know what I have open in a program). The theme too is well thought out and works well for applications that run full screen. Unity saves screen real estate by combining the title bar, the Gnome panel, and the program menus (File, Edit, …) into one. Since I don't usually need the program menus visible, this works well for me.

The scrollbar is re-engineered too: it is just hinted (a small, four-pixel-wide color bar) and pops up on rollover. I've found this useful since it is something I don't always use. Other new niceties are an improved system font with great readability, and tabs that have been made much smaller than the typically roomy Gnome originals.


The launcher appearing too basic originally worried me, but I began to like it for just that reason. It is nice that the colorful icons stand out, but I wonder if a bit later on they won't stand out too much. If they matched Oneiric's notification icons (monotone icons) they might be less distracting (the bright colors attract my eyes easily). I like how the launcher simply shows how many windows belong to an application with arrows to the left of the icon, and which application is focused with an arrow on the right.

The launcher does have an Achilles' heel, though, in its auto-hide functionality. This feature probably has its reasoning based in Unity's netbook origins, where screen real estate was the foremost consideration. On a normal desktop, though, auto-hide takes away the direct route I am typically used to. For one, applications often open up under the launcher, causing it to auto-hide. This meant I would have to go from a visual representation of my open applications to a mnemonic one. I found myself putting my pointer on the left edge and waiting for the launcher to re-appear a good many times. Later on, I just moved applications away from the launcher, but since most applications launch there, this got tedious too. This behavior added a lot of work for me, and there is no direct option to fix it.

The application menu on the launcher is very thorough, its most useful feature in my opinion being the search box, where you can search applications and documents (the cursor even starts there). It is slow to load on this old computer (10 s cold start, 3 s warm), but I find it so useful I can take the wait.

Red Zone Issues

A few things happened that caused me a good amount of concern. First, after loading up the desktop I installed Gparted to format a USB flash drive (the new Ubuntu Software Center is very nice, though very slow), only to have Gparted crash on me mid-format. I've never seen Gparted crash before and this really threw me (note: running the last several days, though, no other application has crashed on me except Firefox once [though I haven't tried Gparted again]). Other bugs were: resuming from suspend failing two times (out of about twenty), and having the mouse freeze up once. The big adjustment I've had to make is due to a bug (I think) in how I normally perform my tasks: I've had to learn to look for a blinking cursor. There is something about Oneiric where I've clicked text boxes a good number of times and typed, only to have the first keypress missed. I believe this is because the first keypress actually selects the text box. I'm not sure why this behavior occurs (never seen or heard of it before) but I hope it gets fixed soon.

Ups and Downs

Up: Desktop now volume-less, leaving it available just for my work files.
Down: Flash installed by default… groan.
Down: Firefox not PGO yet.
Down: Mail Notification requires Thunderbird to be open.
Down: File manager started from launcher opens behind Firefox.


I did manage to get most of my problems fixed over the last few days. The speed can be improved a good deal, making it about on par with Natty; the dock can become just about as usable as the Gnome panel Application Switcher; and the missing keypresses… well.

Tomorrow I'm going to write Ubuntu Oneiric: Tuning the Desktop, on the adjustments I made that improved my desktop experience.

MPD Locally

Recently I updated the Ubuntu wiki to add information on using MPD locally, and cleaned up the Arch wiki page of the same name some. MPD on the Arch wiki is a good source of information, but it needs help: organization, some tech things, grammatically… but it's holding together for now. Because I am mainly using Ubuntu now, it is wiser for me to run MPD locally (clean installs are still recommended over updates (just engineered that way)), and having a home partition simplifies things greatly. Anyways, I've added .desktop file information and a fix for PulseAudio too.

REO Speedwagon to Ario MPD-wagon

One of the reasons that the MPD page on Ubuntu's wiki is so sparse, I believe, is because Ubuntu uses Banshee. Banshee is a nice MP3 player with about every feature I'd want, and it has a really nice layout. For my tastes, though, I'd like my MP3 player to be more responsive and lighter (MP3s aren't incredibly resource-intensive files to play), and that's why I like MPD. Banshee on my eight-year-old computer takes about thirty seconds to load and has a slight (very slight, but noticeable to me) delay when changing tracks.

Take a look at this:

This is Ario, an MPD client I hadn't heard of before. The flow is beautiful, very logical to me. Works great, gonna be using it for a bit.

Missed Touchpad Button Clicks

I got this laptop as a gift/hand-me-down from someone else. Since the first thing I did was install Linux, I hadn't thought much about the fact that the buttons hadn't been treated too well: left-click was very stubborn, often missing on some very obvious pushes. The action/response of the button resembled a sticky button. Because right-click was better, I created a script that would switch/toggle left and right click. I toggled it twice to test it (so that it reverted back to the original) and found that left-click was working normally. Not sure why this fixed the problem, and I have yet to see the problem again, but I'm glad it's good again. I created a script to quickly do this, then added a .desktop file to have it load on login. The script:
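A minimal sketch of such a toggle (an illustration assuming xmodmap and a standard three-button map, not the original script):

#!/bin/bash
# Toggle the left and right pointer buttons (illustrative reconstruction).
# Reads the current code mapped to physical button 1 and swaps accordingly.
if [ "$(xmodmap -pp | awk '$1 == 1 {print $2; exit}')" = "1" ]; then
    xmodmap -e "pointer = 3 2 1"   # swap buttons 1 and 3
else
    xmodmap -e "pointer = 1 2 3"   # restore the default order
fi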

Then I created a desktop file, touchpad-button-fix.desktop, in ~/.config/autostart to start it on login:
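Something along these lines (illustrative; the Exec path is an assumption):

[Desktop Entry]
Type=Application
Name=Touchpad Button Fix
Exec=/home/username/bin/touchpad-button-fix
Comment=Swap the stubborn left/right touchpad buttons at login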

Additionally, the touchpad button may revert to its original behavior after resuming from sleep. To run the script upon resume, it will need to be defined for pm-utils. Put this in /etc/pm/sleep.d/90_touchpad-button-fix:
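A sketch of such a hook (pm-utils hooks run as root outside the X session, so the DISPLAY and user name here are assumptions to adapt):

#!/bin/sh
# /etc/pm/sleep.d/90_touchpad-button-fix (illustrative)
case "$1" in
    resume|thaw)
        # re-run the toggle inside the user's X session
        su username -c 'DISPLAY=:0 /home/username/bin/touchpad-button-fix'
        ;;
esac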

Then make them executable:

chmod +x ~/.config/autostart/touchpad-button-fix.desktop
sudo chmod +x /etc/pm/sleep.d/90_touchpad-button-fix

Older Computer: Streaming Media Servers

Recently, after I bought my PS3, I got a notion about media servers. The PlayStation 3 is pretty neat: being just a little computer (with a big graphics card), it is able to play audio and videos and display pictures. The PS3 has categories for its differing abilities: Music, Video, Game, Network… On the Music, Video, and Picture categories I noticed there is an option to find 'Media Servers'. This got me intrigued: I have a basic wired/wireless home network that connects my PS3, laptop, and printer (this also is pretty neat; a miniature Cisco router does this seamlessly), and I wondered if the media I had on my laptop could be shared with my PS3. With it, I'd be better able to view/listen to my media on my TV, but would it run decently on an eight-year-old laptop?… Yes.


MediaTomb

This is what was recommended first when I asked about media servers, I think because it is the most commonly used media server on Linux. MediaTomb was easy to install and configure (all three media servers I tried are just basic daemons with easy-to-edit configurations), and it didn't bog down my system when it ran normally. MediaTomb provides nice thumbnail support, and after editing the configuration and restarting the daemon, it showed up immediately on my PS3. After running MediaTomb for a while, however, I gave up on it because at times it would get heavy: MediaTomb appears to rescan the library from time to time and then do some parsing of files. Doing this would run up the fan on my laptop, which is generally reserved for heavier tasks like working with ffmpeg.


uShare

Not sure I want to mention much here, as it probably isn't worth the time. A bit after installing uShare (a day), I discovered it wasn't being developed anymore; uShare has not been maintained since 2007. It ran nicely for one day, but after adding a video that wasn't supported (or maybe just a fresh start of the PS3), the PS3 gave "A DLNA Protocol Error (501)" that I could never fix. I tried waiting for the library to fully scan on my laptop before turning on the PS3, and removed any questionable media files (unsupported codecs and DRM have been reported to cause problems), with no luck. When it did run, it ran well and light. uShare does not support thumbnails, and it does not monitor (or rescan) directories while running (the daemon needs to be restarted if you add new music, for instance).


Never got this to work, but I heard it is fast and cool.


Update: Since writing this article, I've been using Rygel, which is an OK media server. At least it is doing the trick for now.

A more Desirable rm

Update: Added mv options -t and --backup=t (thanks to Matt) to prevent file collisions from same-named files. Thanks, Matt! A bashrc version and a bash script are both available.

Warning: Currently I am not using this script. It works well for most instances, but I discovered it has problems when compiling: during a compile some files get force-removed (the rm -f, -fr, or -rf options) in a way that mv will not reproduce. When this happens, files don't get removed and compile errors can occur. I am still trying to figure out how to handle this.

I've deleted files before that I wished I could have back, and resorted to file-recovery utilities to try and save them, but I had never done this before:

rm -rf ~/ .Arachnophilia

I was doing a bit of spring (fall?) cleaning, and as you may have guessed: the space between ~/ and .Arachnophilia was not intended. An accidental slip of the thumb caused this and yes, it caused my computer to delete my whole home directory! I remembered a post I had read at cyberciti and quickly did sudo init 1 to drop into single-user mode. The post was for recovering a text file, though, and single-user mode didn't accept any input anyhow, so I rebooted into my beloved Parted Magic.

R! and R? (Request!! and Recover?)

Parted Magic now has Clonezilla once again, and luckily I had backed up several days ago. I wrote down the files I had been working on since then to try to recover them. The Arch Wiki has a good page on this. I tried extundelete first, and while it's probably the best and most thorough way to recover all files, it recovers the files like this:

ls . | head -n 2

Since the files I'd been working on were just text files and scripts, Photorec was a better application for this: Photorec is able to analyze files and determine certain types, including text files and script files.

Afterward, I was left with directories and files that looked like recup_dir/f1029394.txt: 14000-some text files, if I remember right. To find which ones were the correct ones, grep is awesome, and so quick too. I just had to remember some text within a file and search for it like this:

grep -rn --include=*.txt "the text here" .

Using grep alone instead of combining it with a program like find is much quicker. The -r does a recursive search, -n prints the line number, and --include specifies the file type (in this case, text files). Piping grep works well for more refined searches, like matching two patterns on a line or excluding a string:

grep -rn --include=*.txt "the text here" . | grep -n "this too"
grep -rn --include=*.txt "the text here" . | grep -v "string not with this"

I found the files I needed, thankfully, and was able to get them back.

~/Never\ Again\?/

The first thing I did after restoring from my backup was to see if I could prevent this from happening again. The answer was pretty simple: move files to the trash instead. By adding:

alias rm="mv -t ~/.local/share/Trash/files --backup=t --verbose"

to ~/.bashrc, files will get moved to the trash instead of deleted. The ~/.local/share/Trash/files location is defined by the FreeDesktop.org trash specification, so it interoperates with both KDE and Gnome. With the -t and --backup=t options, mv will rename files with duplicate names to ones with appended numbers (e.g. abc.~2~).

Here's a more detailed version, in a bash script that includes feedback:
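A minimal sketch of the idea (an illustration, not the original script):

#!/bin/bash
# trash (illustrative): move files to the XDG trash directory with feedback.
trash_dir=~/.local/share/Trash/files
mkdir -p "$trash_dir"
for f in "$@"; do
    mv -t "$trash_dir" --backup=t --verbose -- "$f"
done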

Audiobooks and Linux… Arghh

Update: Since this, I've learned that Barnes and Noble carries audiobooks in the MP3 format. To download them, though, the Windows application OverDrive Media Console is required. The good news is that you can download the file and open it through the application, so you don't have to order it through Windows. Thankfully it's a good program and does its job well.

Recently, I decided to get an audiobook to listen to on my MP3 player. I had heard in a TV commercial that audiobooks could be downloaded and played on my computer or MP3 player. I went to Audible, found the book I wanted, and downloaded it. When it started downloading, I noticed the extension was .aa. I hadn't noticed, but below the Download link was a mention of how to import the file into iTunes. The .aa extension is a specially created extension, short for Audible Audio, and it only works on the several types of portable music players that support it: iPods do, and newer Creative Zens and SanDisks do too. Having already spent $30, though, I was determined to get this to play on my slightly older MP3 player. Unfortunately, the only way to do this (without spending $20 to $30 on software that removes the DRM illegally) is a time-consuming and somewhat laborious process.

Burn, Burn, Burn… Rip, Rip, Rip

I was a bit thrown off by the MP3 mention:

Saying MP3 player (to me) seems a bit too generic, and sadly it had me boot up my dusty Windows install :) to be able to start the process. I did a bit of research, and booting to Windows is necessary – there is no way to convert .aa files in Linux as of yet. I copied my .aa files to the Windows partition, rebooted to Windows, and then installed iTunes. To begin: in iTunes I had to create a new playlist ('File > New Playlist') and drag an .aa file to it. I had multiple .aa files, so I had to do them separately, one at a time. After that, I did 'File > Burn Playlist to Disc', selected 'Gap Between Songs' as 'none', and hit 'Burn'. This is where the long part of the process takes place: for my three-part audiobook (three .aa files), the ~15-hour audiobook spanned 13 discs. I did this for each .aa with a new playlist. I tested a CD after it was done and it played fine, but oddly I noticed that iTunes had decided to break the audiobook up into 8-minute tracks – which I wish it didn't do. When finished, I decided to rip in Linux. Reboot.

In Linux, Ripit is a good command-line tool for ripping CDs. In Ubuntu, install it with:

sudo apt-get install ripit lame

Lame is required for encoding. Make a new directory for the audiobook; in my case:

mkdir -p ~/Audiobooks/Stephen\ King/Full\ Dark\,\ No\ Stars\ \(Unabridged\)
cd ~/Audiobooks/Stephen\ King/Full\ Dark\,\ No\ Stars\ \(Unabridged\)

And begin ripping:

ripit --playlist 0 --bitrate 64 --quality 0 --loop 1 --outputdir ~/Audiobooks/Stephen\ King/Full\ Dark\,\ No\ Stars\ \(Unabridged\)

I did a bit of research on this, and most audiobooks are encoded in 32 kbps mono, but since some have sound effects, 64 kbps is the way to go. Using 64 kbps also helps because a bit of quality will be lost in the conversion. This unfortunately makes the MP3s slightly larger (the original files were about 75 MB each; afterward they were around 150), but it is really the best choice. The other options here do the following: --playlist 0 (don't create a playlist), --quality 0 (encode slower for slightly better MP3 quality), --loop 1 (eject the disc after ripping and prompt for a new one), and --outputdir (specify where to put the ripped folders). Ripit automatically queries the FreeDB database for MP3 tagging (there likely won't be any valid entries) and there is no way to override it, so for each CD I had to tell it: not to use a DB entry (0, none of the above); to label with the "Default Album…"; and for genre I just hit enter (none). Ripit ripped the CDs into folders named 'Unknown Artist – Unknown Album' and was smart enough not to overwrite folders of the same name, sequencing them instead. When it was done, I had a list like this:

ls -1
Unknown Artist - Unknown Album
Unknown Artist - Unknown Album 1
Unknown Artist - Unknown Album 10
Unknown Artist - Unknown Album 11
Unknown Artist - Unknown Album 12
Unknown Artist - Unknown Album 2
Unknown Artist - Unknown Album 3
Unknown Artist - Unknown Album 4
Unknown Artist - Unknown Album 5
Unknown Artist - Unknown Album 6
Unknown Artist - Unknown Album 7
Unknown Artist - Unknown Album 8
Unknown Artist - Unknown Album 9

Put it Together, Polish it Up

To put the audiobook back together, I'd have to join the numerous MP3s. I was a bit unsure which way to go; the best source I could find was this question at Stack Overflow. I decided to use mp3cat because it is said there (later on in a post by joelhardi) that mp3wrap "inserts its own custom data format in amongst the MP3 frames (the "wrap" part), which causes issues with playback, particularly on iTunes and iPods." mp3cat pulls the tag (ID3) information out (which leaves only the binary part of the file) and then joins the MP3s together. To install:

tar xvf mp3cat-0.4.tar.gz
cd mp3cat-0.4/
make
sudo cp mp3cat /usr/local/bin

For my audiobook, part 1 spanned the first folder (0) to folder 4, part 2 from 5 to 8, and part 3 from 9 to 12. To concatenate the files back together, I'd have to define the span of folders whose MP3s to join. However, because the folders will not sort in the correct order (e.g. folder 10 comes after folder 1, as sorted by the shell), I had to zero-pad them (e.g. …Album 000, …Album 001, …):

mv Unknown\ Artist\ -\ Unknown\ Album/ Unknown\ Artist\ -\ Unknown\ Album\ 0
rename 's/\d+/sprintf("%03d",$&)/e' *
ls -1
Unknown Artist - Unknown Album 000
Unknown Artist - Unknown Album 001
Unknown Artist - Unknown Album 002

The rename command here grabs any number and turns it into its three-digit equivalent. Since all my folders had numbering at the end of the name, this solution worked well in this case. I created a script to concatenate the MP3s with, for ease of use if I ever have to do this again (a sketch of the idea is below). I had help from some people at the Arch Linux forums to finish this, particularly rockin turtle. Thanks guys!
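A minimal reconstruction of the idea (illustrative, not the original; it assumes mp3cat's stdin/stdout mode, invoked as mp3cat - -):

#!/bin/bash
# mp3cat-multiplefolders (illustrative sketch): join all MP3s found in a span
# of zero-padded "Unknown Artist - Unknown Album NNN" folders into one file.
# Usage: mp3cat-multiplefolders <first> <last> <output-name>
first=$1; last=$2; out=$3
for i in $(seq -f "%03g" "$first" "$last"); do
    # tracks are assumed to sort correctly within each folder
    cat "Unknown Artist - Unknown Album $i"/*.mp3
done | mp3cat - - > "$out.mp3"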

To use it for example:

mp3cat-multiplefolders 0 4 "01 Part1"

This did exactly what I wanted except for one thing: I learned that the track length information is also part of the binary file. This caused some MP3 players to report the length incorrectly, thinking that the MP3 was an eight-minute track instead of the five or so hours each actually was. To fix this, the only program I could find was MP3 Diags, a great tool for repairing damaged MP3s. Unfortunately, it is a GUI-only utility and I was hoping for a command-line one. To install on Ubuntu: sudo apt-get install mp3diags. I just clicked on 4 to fix everything (which was just the track length issue).

Tag Line and Submit

I searched online for tag information for the audiobook in both the FreeDB and MusicBrainz databases but had no luck. At this point I knew I'd have to create the audiobook's ID3 tags manually. Originally I used EasyTAG, but it created a couple of issues that showed up afterward when I checked the files with MP3 Diags, particularly when adding cover art (problems with frames and such). So it looks like it's best to use MP3 Diags' tag editor, as it had all the basic parts I needed. The tag editor is basic but worked nicely; the only tricky thing I found was adding cover art (which had to be copied to the clipboard and pasted in). The only way I could find to put an image on the clipboard was to find the image in Firefox and copy it from there. The only question I have now is whether MP3 Diags correctly assigned it as cover art, as it has no ability to specify this; however, when playing the files in Banshee, the cover art shows correctly.

Since this audiobook wasn't listed in an ID3 database, I thought I'd put it up in case anyone else was crazy enough to do this :). After looking at FreeDB some more, I read in the forums that they didn't think it was a place for audiobooks, and that FreeDB was geared toward audio CDs. MusicBrainz, however, had audiobook listings, so I decided to put it there. Originally I had tried to use MusicBrainz's own application (Picard) to tag the MP3s, but to tag with Picard the source needs to be a CD (the DiscID creator feature requires a CD to be inserted). However, Picard has a plugin available called "Add cluster as release" that works. I installed it following the instructions, put the audiobook into a cluster, right-clicked the 'Album' > Plugin > Add cluster as release, created an account (required to submit), and filled out the form as best as I could:

Finally, I found the audiobook on Amazon and used 'Relate to URL' to add the cover art. When done, I was told that MusicBrainz has a process that puts new entries into a queue for peer review.

Thoughts and Conclusion

All in all, the process went smoothly, so I'm happily listening to my audiobook now. I'm happy with the quality of the MP3s except for one thing: at the points in a track where the segments were joined, I can hear a slight dip in volume. It's barely noticeable, so I'm not too worried. I also would have liked to be able to fix the track length error from the command line, and to enter ID3 information from there as well; if anyone knows of anything, I'd appreciate hearing from you. And finally, if anyone knows whether it is possible to get audiobooks in MP3 format, that would help me quite a bit.

Calibration of a Samsung Series 3 350 32″ TV

This isn't exactly Linux related, but I thought I'd share my experiences with calibrating my TV. A great source on doing this is the LCD TV Buying Guide website: great not only on why TVs need to be calibrated, but also because it has settings for most popular models. Why do TVs need to be calibrated? Basically, most companies rig their TV settings to look their best on the show floor, but this leads to things like picture tinting, contrast exaggeration, and other effects that don't reproduce the picture realistically. Even if your TV model isn't on the website, they say a similar model will often provide the settings you need. I found this to be true with one model of TV I had, but not another. And even if they have your model, it doesn't hurt to be a bit skeptical: I found for my model I had to push a few settings a tad further. I'll print the settings now and then tell you how you can calibrate your TV too, if you're able to connect your computer to it. This TV has the model number LN32C350D1DXZA but is known as the model LN32C350.

Picture Mode          Standard
Backlight             10
Contrast              97
Brightness            55
Sharpness             36
Color                 34
Tint                  G50/R50
Advanced Settings
  Black Tone          Off
  Dynamic Contrast    Off
  Gamma               0
  Color Space         Auto
  Flesh Tone          0
  Edge Enhance        Off
  White Balance
    Red Offset        18
    Green Offset      15
    Blue Offset       24
    Red Gain          2
    Green Gain        21
    Blue Gain         10
Picture Options
  Color Tone          Cool
  Size                16:9
  Digital Noise Filter  Off
  HDMI Black Level    Normal
  Film Mode           Off

The contrast is stretched a bit to give a more in-depth look; 92 is more accurate, though.

Another good way to calibrate your TV is to use Lagom's excellent computer monitor calibration tests. I always use these when I get a new computer monitor, and they're well worth the time if all else fails. Another option is to use the THX video calibration that you will find on Lucasfilm movies (usually in the options menu).

Thanks to katzmaier at the CNET forums for the basis of the white balance settings. Well, that's about it. If you haven't messed around with monitor calibration before, I think you'll find doing so to be a pleasant experience.

A reasonable looking replication of how the TV looks after calibration

