Category Archives: Linux

A Journey of Fidelity (As Luck Has It)

A few weeks ago, my monitor kept blanking out on me. I’d been having a couple of troubles with X crashes, so I attributed the blanking to that. I had been wanting to change my partitioning scheme, so I decided to just re-install and use the older version for a while (thinking the problem came after an update). So I installed Windows, then started Parted Magic’s GParted to resize the NTFS partition, only to get "22 Unaccounted Clusters" errors. I found out that ‘ntfsresize‘ (which GParted uses) does a filesystem integrity check before resizing. Thinking that Windows must not have unmounted the disk cleanly on shutdown, I rebooted and forced a filesystem check… no errors. When I went back and tried again, I got the same problem. I learned that there are different versions of the NTFS filesystem, so I reinstalled Windows again and let the Windows installer format the partition instead of GParted, which I had done previously. When I went to resize again, I got the same problem.

Here I eventually came to the conclusion that very possibly my hard drive was failing on me. This threw me off because my drive was only a year and a half old and because it was a Western Digital. Nonetheless, I had to check. So I ran a S.M.A.R.T. conveyance test and then an extended test. Both tests passed. I knew (…ughh!) that I’d have to run a ‘badblocks‘ test. I ran a non-destructive test (… long wait here) and discovered I had 44 bad sectors on my hard drive. I checked the Western Digital website (which has a very nice warranty check/RMA program) and thankfully my drive was still under warranty. I got a replacement drive (in only two days!!), did tests first this time, and installed Windows again. When I went to resize… uggggh, I got the same problem again. At this point, I got out an older version of Parted Magic (6.3) and everything worked… perfectly.

Through all this, the fun part was my monitor, which kept blanking out on me (it’s just getting old), so I was only able to read the screen for about a minute at a time.

I’ve got a new monitor now too and am doing well again. It turns out that ‘ntfsresize‘ had a bug in it. I’m not sure what version of ‘ntfsresize‘ had the bug, but it’s also on the Ubuntu 11.10 install disk. I upgraded to Parted Magic 11-11-11 and was able to resize my NTFS partition.

What I learned on this journey is to never buy a new drive and not test it, que sera sera. Because of this, I wrote the badblocks page on the Arch Wiki for reference.
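For reference, the test sequence I used went along these lines (smartmontools’ smartctl for the S.M.A.R.T. tests, and badblocks in non-destructive read-write mode; sdX here stands in for the actual drive):

```shell
sudo smartctl -t conveyance /dev/sdX   # short conveyance self-test
sudo smartctl -t long /dev/sdX         # extended self-test
sudo smartctl -l selftest /dev/sdX     # view the results when finished

# Non-destructive read-write test; very slow, but it is what turned up
# my 44 bad sectors (-s shows progress, -v is verbose).
sudo badblocks -nsv /dev/sdX
```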

A Beautiful fstab

I know what partitions I have and like to know what is mounted and where. To do this, I keep a tidy static filesystem file (/etc/fstab).

I use labels instead of UUIDs just because they look nicer, but also because this allows me to resize partitions if need be. It’s hard to go wrong with UUIDs, but since I know I likely won’t be putting a USB drive named ‘Windows’ or ‘Ubuntu’ in the USB port, I’m pretty safe. You may have noticed too that I choose not to let HAL/D-Bus (is it D-Bus that does mounting now?) handle my Windows and Storage partitions. I do this for several reasons. One is that when I copy files, I almost always find it much quicker from the command line (i.e. cp file1 file2 ... /mnt/Storage/backups/) than navigating through multiple directories in the file browser. The second reason is security: sensitive data I don’t always want available. The third is to protect the Windows partition: if a crash were to happen, I find it a good inconvenience to have to boot Windows to be able to fix the NTFS volume.

Here it is:

# /etc/fstab: static file system information
# <file system>          <dir>        <type>  <options>           <dump/pass>
# Temporary file systems:
tmpfs                    /tmp         tmpfs   nodev,nosuid                0 0

# Internal hard disk (sda[2,3,5,6,7]): 
LABEL=SYSTEM\040RESERVED /mnt/SR      ntfs-3g noatime,noauto,user         0 0
LABEL=ACER               /mnt/Windows ntfs-3g noatime,noauto,user         0 0
LABEL=Arch               /            ext4    errors=remount-ro,noatime   0 1
LABEL=Home               /home        ext4    noatime                     0 2
LABEL=Swap               swap         none    defaults                    0 0

# External hard disk (sdb1)
LABEL=Backup             /mnt/Backup  ext4    noatime,noauto,user         0 3

noatime has been applied to save disk writes by not recording a timestamp every time a file is accessed, and the user option allows me to mount without superuser privileges. For the Windows partition to be mountable as a regular user, the NTFS-3G driver will need to be compiled with internal FUSE support.

Mounting a Windows NTFS Partition as a Regular User (Ubuntu)

To be able to mount a Windows NTFS partition in Linux as a regular user (e.g. mount /dev/sda1 /mnt/Windows), the driver must be rebuilt with internal FUSE (Filesystem in USErspace) support and the correct permissions must be set.

Download and Compile

First, setting a couple of variables eases the process:

blddir=~/Downloads/build      # A good place to do compiling
pkgname=ntfs-3g               # The package/driver name

Here, package version variables are defined to match the actual extracted package source naming (why 1: gets prepended and 2ubuntu3 appended, I’m unsure):

pkgname_ver=$(dpkg -l | grep '^[ih]i' | awk '{print $2"_"$3}' | grep $pkgname | sed 's/1://')
PKGNAME_VER=$(echo $pkgname_ver | sed 's/\(.*\)-.*/\1/')
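As a hypothetical illustration of what those two lines produce (assuming dpkg reports ntfs-3g at version 1:2011.4.12AR.4-2ubuntu3):

```shell
# Simulate the dpkg field handling on a sample package/version pair.
pkgname_ver=$(echo "ntfs-3g 1:2011.4.12AR.4-2ubuntu3" | awk '{print $1"_"$2}' | sed 's/1://')
echo "$pkgname_ver"    # ntfs-3g_2011.4.12AR.4-2ubuntu3

# Strip the Debian revision to get the extracted source directory name.
PKGNAME_VER=$(echo "$pkgname_ver" | sed 's/\(.*\)-.*/\1/')
echo "$PKGNAME_VER"    # ntfs-3g_2011.4.12AR.4
```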

Note: Theoretically this should not be needed if you use udisks2. Unfortunately, it looks like no one has figured out how to use udisksctl for this yet.

Install the build tools and then the packages needed to build ntfs-3g:

sudo apt-get install build-essential fakeroot dpkg-dev lynx devscripts
sudo apt-get build-dep $pkgname

Create the build directories and change into them:

[ ! -d "$blddir" ] && mkdir -p "$blddir"
cd "$blddir"
[ ! -d "$pkgname" ] && mkdir "$pkgname"
cd "$pkgname"

Download the source code (which gets extracted after downloading):

apt-get source "$pkgname"

The source code is oddly owned by root; to make it editable, change its ownership:

sudo chown -R username:username .

Enter the source code directory (required to build):

cd "$PKGNAME_VER"

Change the FUSE option to internal, note the change in the changelog, then compile:

sed -i 's/--with-fuse=external/--with-fuse=internal/g' debian/rules
dch -i "Changed fuse option to internal in configuration rules"
dpkg-buildpackage -rfakeroot -b

Replace the current NTFS-3G driver with the one just compiled with internal FUSE support:

sudo gdebi ntfs-3g_2011.4.12AR.4-2ubuntu3_i386.deb

And hold (freeze) the package so it doesn’t get updated with a new version on a system update:

echo ntfs-3g hold | sudo dpkg --set-selections

The driver will need to be setuid-root (there are security risks in doing this):

sudo chown root $(which ntfs-3g)
sudo chmod 4755 $(which ntfs-3g)

Finally, give the user the ability to be able to mount volumes:

sudo gpasswd -a username disk

Reboot to have the new driver loaded and the user to be put in the disk group.


The fstab will need to have the right options to be able to mount as a regular user. In my next post, I’ll show what my fstab looks like.

Bug Fix

I had a problem with gcc-4.6_4.6.1 on my install. It would error out at the beginning of a compile. The workaround for me was to use an earlier version of GCC and then define it when compiling:

sudo apt-get install gcc-4.4
CC=gcc-4.4 dpkg-buildpackage -rfakeroot -b


Storing Website Logins/Passwords in a File

I find that it is a good idea to update my Internet passwords from time to time. Previously, to do so, I opened Firefox’s Preferences window and then went to the Saved Passwords window. From there, I’d toggle between Firefox and the Saved Passwords window, go to the sites that were listed, and change the password.

After doing this, I decided it would be quicker if I just had them in a text file. Once I had updated the password on a website, I’d comment out its line in the text file so I’d know I had done so.

For text editing, I commonly use Vim and it works great for this.

The nice thing about working in the terminal too is that once the text file is opened, the web pages can be opened by Ctrl+clicking on them.

I created three scripts to help the process: one to edit the list, one to generate the password, and one to copy the password to the clipboard.

 sitepass-ls   - list of programs/sites using common pw
  a | add   - add entry to list
  e | edit  - edit list
  s | sort  - sort list alphabetically
  u | uncom - uncomment list for new password
 sitepass-gn  - generate password for common use and other use.
  c | common - generate common password
  o | other  - generate other  password
 sitepass-cb  - copy common, other, and previous passwords to clipb.
  c  | common  - copy common
  o  | other   - copy other
  cp | comprv  - copy previous common
  op | othprv  - copy previous other
  x  | clear   - clear contents of clipboard

Here are the scripts:
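The scripts themselves aren’t reproduced here, but as a minimal sketch of the sitepass-gn idea, password generation might look like this (hypothetical implementation; the function name and length are assumptions):

```shell
#!/bin/sh
# Generate a random alphanumeric password from /dev/urandom.
# Takes an optional length argument; defaults to 12 characters.
genpass() {
    tr -dc 'A-Za-z0-9' < /dev/urandom | head -c "${1:-12}"
}

genpass     # e.g. q3Zr8KpLw2Dn
echo
```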




Swap File for Suspend

Warning: I have found this method to be unreliable; therefore, I have reverted back to using a swap partition.

I decided not to clutter my partitioning scheme anymore with a swap partition, so from now on I’m using a swap file instead. This shows how to create and use a swap file during installation.

Create the Swap File

Boot the install disk and load Linux (for Ubuntu, use ‘Try Ubuntu’ to get to a functioning environment). Partition now if required (GParted recommended), as it is generally easier than using the installer’s partitioner. When partitioning is done, open the terminal so the swap file can be created.

You’ll need the kernel-defined root partition name (if you don’t already know it):

sudo fdisk -l | grep ^/dev

To simplify tasks, define the root partition as a variable. For example, if your root partition is sda2:

root_part=sda2

Create the mount point and mount the partition:

sudo mkdir /mnt/$root_part && sudo mount /dev/$root_part /mnt/$root_part

Create the swap file (this is created before doing the install so it’s at the beginning of the partition) by doing:

sudo fallocate -l 1G /mnt/$root_part/swapfile  # G = Gigabyte, M = Megabyte
sudo chmod 600 /mnt/$root_part/swapfile
sudo mkswap /mnt/$root_part/swapfile

Unmount, then install your system:

sudo umount /mnt/$root_part

Install your System

Install as normal. With the installer, assign the partition(s) to the desired mount points (for example, sda2 to / (root), sda3 to /home, …).

List the Swap File

After the install has completed, the swap file information will need to be listed in the static filesystem configuration file (fstab).

To do this, the partition will likely need to be mounted again:

sudo mount /dev/$root_part /mnt/$root_part

Add the swap file to the root partition’s fstab using the editor of choice (for example, gksudo gedit /mnt/$root_part/etc/fstab), adding:

/swapfile none swap defaults 0 0

Define the Kernel Options

After the install has completed, the swap file location will need to be defined as a kernel option to the bootloader.

Change apparent root (to be able to update the bootloader later):

for i in /dev /dev/pts /proc /sys; do sudo mount -B $i /mnt/$root_part$i; done
sudo chroot /mnt/$root_part /bin/bash

Get the root partition UUID (Universally Unique IDentifier):

blkid /dev/$root_part

Get the physical location of the swap file’s first block by running the following command (the value needed is given on the first row of the ‘physical’ column):

filefrag -v /swapfile
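The value can also be pulled out with a little text processing (a sketch; the awk field positions assume filefrag’s usual output format, and the sample line here is made up):

```shell
# A sample line from `filefrag -v /swapfile` output; the fourth field of
# the row starting with "0:" holds the first physical block.
sample=' 0:        0..   32767:      38912..     71679:  32768:'
offset=$(echo "$sample" | awk '$1 == "0:" {print $4}' | sed 's/\.\.//')
echo "$offset"   # 38912

# On the real file:
#   filefrag -v /swapfile | awk '$1 == "0:" {print $4}' | sed 's/\.\.//'
```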

The bootloader will need the kernel options defining the swap file partition UUID and first block physical location of the swap file (resume_offset) in this form:

resume=UUID=the-root-partition-UUID resume_offset=the-swap-file-physical-address

These will need to be added to the bootloader configuration. For the original GRUB (GRUB Legacy), edit /boot/grub/menu.lst and add the above kernel options to the kernel line. For GRUB2, edit /etc/default/grub and add the kernel options to the GRUB_CMDLINE_LINUX_DEFAULT="..." line, then update GRUB:

update-grub

The initial RAM filesystem (basically a device/software loader for items that need to be initialized during kernel boot) may need this information as well. For Ubuntu, define the kernel options by doing:

echo "resume=UUID=the-root-partition-UUID resume_offset=the-swap-file-physical-address" | sudo tee /etc/initramfs-tools/conf.d/resume
sudo update-initramfs -u

Exit the chroot, unmount, and reboot into the new system:

exit
for i in /sys /proc /dev/pts /dev; do sudo umount /mnt/$root_part$i; done
sudo umount /mnt/$root_part

Test now if hibernation works. If it doesn’t, you can try switching to the ‘userspace’ suspend framework instead.

Userspace Suspend/Hibernation

uswsusp is a rewrite of the kernel suspend framework as a ‘userspace’ tool. It has better support for suspending to a swap file, so it is usually necessary here.

Reboot into the new operating system and install uswsusp.

Ubuntu pre-configures uswsusp (it defines the root partition, gets the swap file size, runs sudo swap-offset /swapfile, places these values in the configuration file /etc/uswsusp.conf, then creates a new initramfs), so all that is needed is to install it. Other distributions may need to configure it manually. Once installed and configured, reboot again and test.
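For distributions that don’t pre-configure it, /etc/uswsusp.conf would need roughly these entries (the device and offset here are placeholders for your own root partition and filefrag value):

```
resume device = /dev/sda2
resume offset = 38912
```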


Converting Ext4 to JFS

Because I have an older laptop and disk I/O can really bottleneck on the motherboard, I decided to move from the ext4 filesystem to JFS. Recently I’d used ext4 because it was fairly fast and definitely reliable; however, with the kernel moving to 2.6.30, new data-integrity features were added that slow it fairly noticeably on an eight-year-old computer. Moving to JFS has made a fair difference in improving the speed of the system, the caveat being that it journals only metadata (not metadata and data like ext3/4).

Backup, Convert, Restore

The JFS filesystem utilities will be needed (for Debian/Ubuntu):

sudo apt-get install jfsutils

Reboot to a rescue CD and back up the partition(s)/disks onto another drive. This example uses two partitions: one for root, one for home. Mount root, home, and then the backup drive:

mkdir /mnt/{root,home,backup}
mount /dev/<root-partition> /mnt/root
mount /dev/<home-partition> /mnt/home
mount /dev/<backup-partition> /mnt/backup

Create the backup directories:

mkdir -p /mnt/backup/backup-rsync/{root,home}

Backup both partitions:

rsync -axS /mnt/root/ /mnt/backup/backup-rsync/root
rsync -axS /mnt/home/ /mnt/backup/backup-rsync/home

Check integrity of backup, then create a JFS filesystem on both partitions:

mkfs.jfs /dev/<root-partition>
mkfs.jfs /dev/<home-partition>

Restore the backup contents back to the root and home partitions; first method:

rsync -axS /mnt/backup/backup-rsync/root/ /mnt/root
rsync -axS /mnt/backup/backup-rsync/home/ /mnt/home

Or use this method to be sure files are defragmented (JFS is somewhat prone to fragmentation; heavy use may require occasional defragmenting):

(cd /mnt/backup/backup-rsync/root/ && tar -cS -b8 --one -f - .) | (cd /mnt/root && tar -xS -b8 -p -f -)
(cd /mnt/backup/backup-rsync/home/ && tar -cS -b8 --one -f - .) | (cd /mnt/home && tar -xS -b8 -p -f -)

Updating the System

The system needs to know of the filesystem changes. Changing apparent root from the rescue CD to the current Linux install is done by:

cd /mnt/root
mount -t proc proc proc/
mount -t sysfs sys sys/
mount -o bind /dev dev/
chroot . /bin/bash

Update the chrooted system current mounts file:

grep -v rootfs /proc/mounts > /etc/mtab

The fstab file (the static filesystem configuration) needs to be updated. The information that will need adjusting is the UUID (possibly), the filesystem type, and the options. The UUIDs (unique disk identifiers) may have changed; they can be appended onto the fstab file (so that they can be easily moved into place) like this:

blkid /dev/<root-partition> | cut -d "\"" -f 4 >> /etc/fstab
blkid /dev/<home-partition> | cut -d "\"" -f 4 >> /etc/fstab
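Note that the field number in the cut depends on which attributes blkid prints; with a LABEL present, the UUID is the fourth quoted field (and, where supported, blkid -s UUID -o value avoids the guesswork entirely). A quick check on a sample line:

```shell
# blkid prints quoted key=value pairs; splitting on double quotes, the
# UUID lands in field 4 when a LABEL is also printed.
sample='/dev/sda2: LABEL="Arch" UUID="5d9753dd-f45f-425a-85e2-25746897fdfa" TYPE="jfs"'
echo "$sample" | cut -d '"' -f 4

# A less fragile alternative (where supported):
#   blkid -s UUID -o value /dev/sda2
```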

The edited /etc/fstab with the UUID, type, and options set:

# /dev/sda2
UUID=5d9753dd-f45f-425a-85e2-25746897fdfa / jfs   noatime,errors=remount-ro 0 1
# /dev/sda4
UUID=d3f9eafd-1117-4c75-a309-b21dece655d1 /home jfs noatime                 0 2

noatime lessens disk writes by not writing a timestamp every time a file is accessed (atime isn’t seen as very useful anymore, since it was developed primarily for servers with statistics in mind).

JFS supposedly works very well with the deadline scheduler; the Grub configuration needs to specify to use it. This example is for Grub2, though it is similar with the original Grub; edit /etc/default/grub and append:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=deadline"

The other Grub configuration files need to be updated with the new information:

update-grub

Then the Grub bootloader will have to be re-installed to the MBR (I think this is because the version of Grub put on the MBR has directions on how to find its configuration files on a specific filesystem).

grub-install /dev/<disk>  # The whole disk (e.g. /dev/sda), not a partition

Exit the chroot and unmount temporary filesystems:

exit
umount {proc,sys,dev}


Ubuntu Oneiric: Initial Musings

Update: Because of hardware problems, the information about Oneiric’s speed is off; please ignore those mentions. Correction: Also, Unity is a collaboration of Gnome 3.0 and the Ubuntu Launcher, with the Launcher generally replacing Gnome 3’s Activities start menu.


The first thing I noticed as Oneiric booted up was how pretty it was: from the unassuming theme to the colorful launcher, Oneiric’s looks are sweet. The second thing I noticed, however, was how slow it was. Upon logging in, the desktop took about 60 seconds to become usable, the application menu took 10 seconds to open, and the file browser another seven. My first impression: a bit scared (no worries, read on).

Note: A quick background to explain my experience: I have an eight-year-old laptop that I’d like to be able to hold onto. I know a good number of other Linux users with older computers because, basically, I think we feel these computers are good enough for what we need to do. Up to this point, I’ve used the original Gnome (Gnome Classic, pre-3.0) fine on this computer (many Firefox tabs, Gimp, and Inkscape concurrently) and it ran adequately. We are at a time, though, where it is certain that Gnome is changing (Gnome re-engineered the desktop with Gnome 3.0, a more "modern", though more resource-intensive, desktop). Shuttleworth (Ubuntu’s high commander) was like many, though, and couldn’t understand its ergonomics, and announced a split from Gnome 3 with the Ubuntu-designed Unity desktop (basically a Gnome 2.x desktop with some tweaks and a new application bar). Unity, though, is also more resource-intensive than Gnome Classic, and judging by other posts I’ve seen, I am not the only one questioning whether I need new hardware.


Ubuntu certainly is putting good thinking into creating an efficient desktop. The colorful icons on the launcher distinguish differing programs very well. When they are clicked they provide nice feedback so you know the program is loading. I think that going the route of icons only was a nice touch (as I generally know what I have open in a program). The theme too is a design that is well thought out and works well for applications that run full screen. Unity saves screen real estate by combining the title bar, gnome panel and program options (File Edit …) into one. Since I don’t usually need the program options visible this works well for me.

The scrollbar is re-engineered too and is just hinted (a small four pixel-wide color bar) and pops-up on roll over. I’ve found this useful since it is something that I don’t always use. Other new niceties are an improved system font that has great readability and tabs have been made much smaller from the typically roomy Gnome originals.


The launcher appearing too basic originally worried me, but I began to like it because it was so. It is nice that the colorful icons stand out but I wonder if a bit later on they won’t stand out too much. If they matched Oneiric’s notification icons (monotone icons) they might be less distracting (the bright colors attract my eyes easily). I like how the launcher simply explains how many windows belong to an application by arrows to the left of the icon, and which application is focused by an arrow on the right:

The launcher, though, does have an Achilles’ heel in its auto-hide functionality. This feature probably has its reasoning based in Unity’s netbook origins, where screen real estate was the foremost concern. On a normal desktop, though, auto-hide takes away the direct route I am typically used to. For one, applications often open up under the launcher, causing it to auto-hide. This meant that I would have to go from a visual representation to a mnemonic one for my open applications. I found myself having to put my pointer on the left edge and wait for the launcher to reappear a good many times. Later on, I just moved applications away from the launcher, but since most applications launched there, this got tedious too. This behavior added a lot of work for me, and there is no direct option to fix it.

The application menu on the launcher is very thorough. Its most useful feature, in my opinion, is the search box, where you can search applications and documents (the cursor even starts there). It is slow to load on this old computer (10 s cold start, 3 s warm), but I find it so useful I can take the wait.

Red Zone Issues

A few things happened that caused me a good amount of concern. First, after loading up the desktop I installed Gparted to format a USB flash drive (the new Ubuntu Software Center is very nice, though very slow), only to have Gparted crash on me mid-format. I’ve never seen Gparted crash before and this really threw me (note: running the last several days, though, no other application has crashed on me except Firefox once, though I haven’t tried Gparted again). Other bugs were: resuming from suspend failing two times (out of about twenty), and having the mouse freeze up once. The big adjustment I’ve had to make is due to a bug (I think) in how I normally perform my tasks: I’ve had to learn to look for a blinking cursor. There is something about Oneiric where I’ve clicked text boxes a good number of times and typed, only to have the first keypress missed. I believe this is because the first keypress actually selects the text box. I’m not sure why this behavior occurs (never seen or heard of it before) but I hope it gets fixed soon.

Ups and Downs

Up: Desktop now volume-less, leaving it available just for my work files.
Down: Flash installed by default… groan.
Down: Firefox not pgo yet.
Down: Mail Notification requires Thunderbird to be open.
Down: File manager started from launcher opens behind Firefox.


I did manage to get most of my problems fixed over the last few days. The speed can be improved a good deal, making it about on par with Natty; the dock can become just about as usable as the Gnome panel Application Switcher; and the missing key presses… well.

Tomorrow I’m going to write Ubuntu Oneiric: Tuning the Desktop on adjustments I made that improved my desktop experience.

MPD Locally

Recently I updated the Ubuntu wiki to add information on using MPD locally and cleaned up the page of the same name on the Arch wiki a bit. MPD on the Arch wiki is a good source of information, but it needs help: organization, some technical things, grammar… though it’s holding together for now. Because I am mainly using Ubuntu now, it is wiser for me to use MPD locally (clean installs are still recommended over upgrades (Ubuntu is just engineered that way)), and having a home partition simplifies things greatly. Anyways, I’ve added .desktop file information and a fix for PulseAudio too.

REO Speedwagon to Ario MPD-wagon

One of the reasons that the MPD page on Ubuntu’s wiki is so sparse, I believe, is that Ubuntu uses Banshee. Banshee is a nice MP3 player with about every feature I’d want from an MP3 player. It also has a really nice layout. For my tastes, though, I’d like my MP3 player to be more responsive and lighter (MP3s aren’t incredibly resource-intensive files to play), and that’s why I like MPD. Banshee on my eight-year-old computer takes about thirty seconds to load and has a slight (very slight, but noticeable to me) delay when changing tracks.

Take a look at this:

This is Ario, an MPD client I hadn’t heard of before. The flow is beautiful, very logical to me. Works great, gonna be using it for a bit.

Missed Touchpad Button Clicks

I had gotten this laptop as a gift/hand-me-down from someone else. Since the first thing I did was install Linux, I hadn’t thought about how the buttons hadn’t been treated too well: left-click was very stubborn, often missing on some very obvious pushes. The action/response of the button resembled a sticky button. Because right-click was better, I created a script that would switch/toggle left and right click. I toggled it twice to test it (so that it reverted back to the original) and found that left-click was working normally. I’m not sure why this fixed the problem, and I have yet to see it again, but I’m glad it’s good again. I created a script to do this quickly, then added a .desktop file to have it load on login. The script:
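The original script isn’t reproduced here; a minimal sketch of the idea using xmodmap (an assumed implementation) would be:

```shell
#!/bin/sh
# touchpad-button-fix: swap the left and right pointer buttons.
# Running xmodmap with "pointer = 1 2 3" instead restores the default map.
xmodmap -e "pointer = 3 2 1"
```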

Then I created a desktop file, touchpad-button-fix.desktop, in ~/.config/autostart to start it on login:
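Such an autostart entry might look like this (the Exec path is an assumption):

```
[Desktop Entry]
Type=Application
Name=Touchpad Button Fix
Exec=/home/username/bin/touchpad-button-fix
```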

Additionally, the touchpad button may revert to its original behavior after resuming from sleep. To run the script upon resume, it will need to be defined for pm-utils. Put this in /etc/pm/sleep.d/90_touchpad-button-fix:
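A sketch of such a hook (hypothetical; pm-utils passes the action as the first argument, and the username and DISPLAY value are assumptions for your own system):

```shell
#!/bin/sh
# /etc/pm/sleep.d/90_touchpad-button-fix
case "$1" in
    resume|thaw)
        # Re-apply the swapped button map on the user's X display.
        su username -c 'DISPLAY=:0 xmodmap -e "pointer = 3 2 1"'
        ;;
esac
```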

Then make them executable:

sudo chmod +x ~/.config/autostart/touchpad-button-fix.desktop
sudo chmod +x /etc/pm/sleep.d/90_touchpad-button-fix

Older Computer: Streaming Media Servers

Recently, after I bought my PS3, I had a notion about media servers. The PlayStation 3 is pretty neat. Being just a little computer (with a big graphics card), it is able to play audio, play videos, and display pictures. The PS3 has categories for its differing abilities: Music, Video, Game, Network… In the Music, Video and Picture categories I noticed there is an option to find ‘Media Servers’. This got me intrigued: I have a basic wired/wireless home network that connects my PS3, laptop and printer (this also is pretty neat; a miniature Cisco router does this seamlessly), and I wondered if the media I had on my laptop could be shared with my PS3. With it I’d be better able to view/listen to my media on my TV, but would it run decently on an eight-year-old laptop?… Yes.


This is what was recommended first when I asked about media servers. I think this may be because it is the most commonly used media server on Linux. MediaTomb was easy to install and configure (all three media servers I tried are just basic daemons with easy-to-edit configurations), and it didn’t bog down my system when it ran normally. MediaTomb provides nice thumbnail support, and after editing the configuration and restarting the daemon it showed up immediately on my PS3. After running MediaTomb for a while, however, I gave up on it because at times it would get heavy. MediaTomb appears to rescan the library from time to time and then do some parsing of files. Doing this would run up the fan on my laptop, which is generally reserved for heavier tasks like working with ffmpeg.


Not sure I want to mention much here, as it probably isn’t worth the time. A bit (a day) after installing uShare, I discovered it wasn’t being developed anymore. uShare ran nicely for one day, but after adding a video that wasn’t supported (or maybe just a restart of the PS3), the PS3 gave a “DLNA Protocol Error (501)” that I could never fix. I tried waiting for the library to fully scan on my laptop before turning on the PS3 and removed any questionable media files (unsupported codecs and DRM have been reported to cause problems), with no luck. uShare has not been maintained since 2007. When it did run, it ran well and light. uShare does not have support for thumbnails, and it does not monitor (or rescan) directories while running (the daemon will need to be restarted if you add new music, for instance).


Never got this to work, but I heard it is fast and cool.


Since I’ve written this article I’ve been using Rygel, which is an OK media server. At least it is doing the trick for now.

A more Desirable rm

Update: Added the mv options -t and --backup=t (thanks to Matt) to prevent file collisions from same-named files. Thanks Matt! A bashrc version and a bash script are both available.

Warning: Currently I am not using this script. It works well in most instances, but I discovered it has problems when compiling. During a compile, some files get force-removed (the rm -f, -fr, or -rf options) in a way that mv will not reproduce. When this happens, files don’t get removed and compile errors can occur. I am still trying to figure out how to handle this.

I’ve deleted files before that I wish I could have back and resorted to file saving utilities to try and save them but I have never done this before:

rm -rf ~/ .Arachnophilia

I was doing a bit of spring (fall?) cleaning and, as you may have guessed, the space between ~/ and .Arachnophilia was not intended. An accidental slip of the thumb caused this, and yes, it caused my computer to delete my whole home directory! I remembered a post I had read at cyberciti, and quickly did sudo init 1 to drop into single-user mode. The post was for recovering a text file, though, and single-user mode didn’t accept any input anyhow, so I rebooted into my beloved Parted Magic.

R! and R? (Request!! and Recover?)

Parted Magic now has Clonezilla once again, and luckily I had backed up several days ago. I wrote down the files I had been working on since then to try to recover. The Arch Wiki has a good page on this. I tried extundelete first, and while it’s probably the best and most thorough way to recover all files, it recovers them like this:

ls . | head -n 2

Since the files I’ve been working on were just text files and scripts, Photorec was a better application for this. Photorec is able to analyze and determine certain file types including text files and script files.

Afterward, I was left with directories and files that looked like recup_dir/f1029394.txt — 14,000-some text files, if I remember right. To find which ones were the correct ones, grep is awesome, and quick too. I just had to remember some text within the file and search for it like this:

grep -rn --include=*.txt "the text here" .

Using grep alone instead of combining it with a program like find is much quicker. The -r does a recursive search, the -n prints the line number, and --include specifies the file type (in this case, text files). Piping grep works well for more refined searches, like matching two patterns on a string or excluding a string:

grep -rn --include=*.txt "the text here" . | grep -n "this too"
grep -rn --include=*.txt "the text here" . | grep -v "string not with this"

I found the files I needed, thankfully, and was able to get them back.

~/Never\ Again\?/

First thing I did after restoring from my backup was to see if I could prevent this again. The answer was pretty simple: move the files to the trash instead. By adding:

alias rm="mv -t ~/.local/share/Trash/files --backup=t --verbose"

to ~/.bashrc, files will get moved to the trash instead of deleted. The ~/.local/share/Trash/files location is defined by the freedesktop.org trash specification, so it interoperates with both KDE and Gnome. With the -t and --backup=t options, mv will rename files that have duplicate names to ones with appended numbers (e.g. abc.~1~).
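The collision handling can be seen with a quick experiment in a scratch directory:

```shell
mkdir -p demo-trash
echo first  > abc
mv -t demo-trash --backup=t abc    # demo-trash now holds abc
echo second > abc
mv -t demo-trash --backup=t abc    # the old copy is renamed abc.~1~
ls demo-trash                      # abc  abc.~1~
```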

Here is a more detailed version in a bash script that includes feedback:
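The actual script isn’t reproduced here; a minimal bash sketch with feedback (an assumed implementation along the same lines as the alias) could be:

```shell
#!/bin/bash
# trash: move files to the freedesktop trash instead of deleting them,
# with numbered backups for name collisions and per-file feedback.
trash() {
    local dir="$HOME/.local/share/Trash/files"
    mkdir -p "$dir"
    local f
    for f in "$@"; do
        case "$f" in
            -*) continue ;;            # ignore rm-style flags like -rf
        esac
        mv -t "$dir" --backup=t --verbose -- "$f"
    done
}
```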

