
Hacker Public Radio

Your ideas, projects, opinions - podcasted.

New episodes Monday through Friday.



Host ID: 131

episodes: 39

hpr2382 :: A Non Spoilery Review of "git commit murder" and "Forever Falls" by Michael Warren Lucas

Released on 2017-09-19 under a CC-BY-SA license.


I met Michael at Kansas Linux Fest 2017 where he was a speaker. Turns out we've probably been walking past each other in the halls at Penguicon the last three years that I have attended. Michael is a BSD guy and one of us. As well as being an open source advocate, he works professionally as a systems admin and network engineer. I bought his texts "SSH Mastery" (because I've always needed help getting my head around reverse IP tunneling), "Networking for Systems Administrators", and "$ git commit murder", his latest novel. Because I was a good customer, Michael threw in "Forever Falls" for free.

"git commit murder" takes place at a BSD convention. The gathering in the novel is slightly more formal than the Linux conferences I've attended. The conference is targeted at the users, contributors, and managers of the fictional "SkyBSD". Our protagonist, Detroit native Dale Whitehead, has come to Canada to deliver a talk on his mesh networking project. The conference is disrupted when attendees start to die in what appear to be unrelated accidents. Dale is unwilling to accept these deaths as accidents, and puts his analytical mind to discovering the killer. He also employs his hacking skills, having already created an admin account on the host university's server within minutes of checking in. This makes him understandably reluctant to discuss his theories with the authorities until he has positively identified a culprit.

The SkyBSD community is not without contention. A significant number of contributors want to move from Subversion to git for version control, and just as many are vehemently opposed. Also, the recent release of candid photos meant to embarrass a contributor has many calling for a Code of Conduct and the banning of violators. Others think this is going too far. Dale has to contemplate: could either of these be a reason for murder? Or is it a struggle by an old guard that is not ready to surrender leadership to a younger generation?

At first, it was hard to get to like Michael's protagonist, Dale Whitehead. Dale suffers from an extreme form of Attention Deficit Disorder which requires medication and causes him to actively shun the company of other people. The same affliction that allows him to get "in the zone" when programming also makes being in crowds a fresh hell for Dale. He is in constant terror that some aberrant behavior on his part will reveal his condition to his companions, and he finds it much easier to deal with other humans via e-mail or IRC. It's clear Michael Lucas has an understanding of the condition, either via research or contact with someone who suffers from ADD.

At least one character in the story seemed to me to bear a passing resemblance to a familiar conference fixture in real life. Michael told me the sequel might be set at an open source/Sci-Fi convention in a city near the Great Lakes. Time will tell if the Tuesday Afternoon Solaris Overview or a kilt-wearing organizer will make an appearance.

"Forever Falls" is also a mystery, as well as a SciFi story. Ella Forecourt is a recruit right out of college for the Montague Corporation. As a corporate security officer, she is assigned to investigate the death of a Montague research scientist at the Freefall installation. In the course of the novel, you learn that Montague has proprietary technology that allows them to "portal" into other universes or dimensions where the laws of physics are different from those of our universe. In Freefall, gravity runs parallel to the surface of the world. In other words, you don't fall down, you fall sideways, and with no ground to stop you, if you fall, you fall forever.

Montague has a research facility built into the "Cliff". With gravity travelling sideways, the surface of the planet appears as an endless cliff. "Above" the facility is a huge metal awning to deflect falling boulders. On top of the awning is where the security team discovers the body of Dr. Devin Grupper. The damage to the body suggests Dr. Grupper impacted at terminal velocity. Even in the lighter gravity of Freefall, constant acceleration means terminal velocity is governed by air resistance. Montague does use airships for transport, but there are no records of how Grupper could have secured transportation and a pilot to wind up smashed on the awning without a ship going missing. Thus Security Second Ella Forecourt is assigned to the case. "Forever Falls" is but one in a series of Montague Portal novels by Michael Lucas. I look forward to reading the rest of the series.

hpr1945 :: The Quassel IRC System

Released on 2016-01-15 under a CC-BY-SA license.

Quassel is a centralized IRC hub that allows several client computers to appear as only one connection to the IRC server, e.g., Freenode. About the same time NYBill posted Episode 1869, "IRSSI Connectbot", I was wondering how to merge all my simultaneous IRC connections from multiple hosts to the same channel on the same server into one connection. I did a search on "GUI front end IRSSI" and came up with Quassel instead. I think NYBill and I are trying to solve pretty much the same problem. I'm not trying to say my solution is better than NYBill's; I'm just saying it's the one that appeals the most to me.

Problem: IRC servers (or at least Freenode) do not allow simultaneous connections from multiple hosts using the same user identifier. E.g., if I was logged in on the PC on my desk via XChat as FiftyOneFifty, and at the same time I was connected to IRC via a PC on the kitchen counter, I would have to use "Kitchen5150" as my identifier. If I was away from home but left a computer connected to IRC back home, and I connected again over Android, I'd have to be Andro5150. I could adopt all these other personas as aliases, which protected them from theft and allowed me to keep admin rights on channels where I was admin despite using a different login. These multiple versions of me running in IRC inevitably led to confusion about which was the "real" FiftyOneFifty, a situation with which MrJackson is all too familiar, I'm sure.

IRSSI Solution: Connect to a server via ssh, then log into IRC using the irssi terminal client inside a GNU screen or tmux session. When moving between local hosts, disconnect from the current screen or tmux session, ssh into the server from the new host, and reconnect to the session running irssi. The irssi ncurses interface may not be as pretty or easy for some users as a GUI, but I understand it is quite functional.

Quassel Solution: Connect to the IRC server via a single host running quassel-core. Connect multiple simultaneous clients to the core via quassel-client. All clients share the same IRC display at the same time, all the while transparent to the server (e.g., Freenode), which only sees the one login from the host running quassel-core.

There are two components to this system, quassel-core and quassel-client. You want to install quassel-core onto a system with a persistent Internet connection, say a home or cloud server. I first used Arch on an RPi Model 2, so quassel-core setup for Arch may be found here:

A. Install the core

  1. Install quassel-core on the server [sudo pacman -S quassel-core]

  2. Generate a certificate

  3. Start core (e.g., sudo systemctl start quassel)

  4. Enable quassel on every startup (sudo systemctl enable quassel)

    • There is something in the wiki about a bug preventing the enable function from working. "systemctl enable" just creates a symlink into the proper startup directory, so the wiki replaces it with a copy command: "cp /usr/lib/systemd/system/quassel.service /etc/systemd/system/"
  5. Set up Port Forwarding on your router. I suggest you use an external port other than the default 4242 (Security Through Obscurity, see my Port Forwarding episode).
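The steps above can be sketched as a small shell helper. This is a sketch, not a turnkey installer: it assumes Arch package names, systemd, and root privileges, and the setup_quassel_core function is just a wrapper I'm inventing for illustration (set DRY_RUN=1 to print the commands without running anything). Certificate generation (step 2) is distro-specific and omitted.

```shell
# Sketch of steps A.1, A.3, and A.4 above (Arch, systemd assumed; run as root).
run() { echo "+ $*"; [ -n "${DRY_RUN:-}" ] || "$@"; }   # DRY_RUN=1 prints only

setup_quassel_core() {
    run pacman -S --noconfirm quassel-core    # A.1: install the core
    run systemctl start quassel               # A.3: start the core now
    # A.4: enable at boot; if the wiki's enable bug bites, copy the unit file
    run systemctl enable quassel \
        || run cp /usr/lib/systemd/system/quassel.service /etc/systemd/system/
}
```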

All the configuration is done by the client!

B. Install quassel-client

  1. All you need to connect is an IP address and the external port number. The first account you create will be the master and the only account with the ability to create other users. In other words, if someone else had your server's IP address and the port quassel-core is listening on, they could beat you to establishing a master account and control Quassel on your server.

  2. Once you have established a connection to a core and set your password, you can set up the default IRC servers and channels. It's a GUI interface, so I'm not going to walk you through the menus and various inputs. I only had success setting up one IRC server (Freenode) in the initial setup on the first client (as you connect additional clients, you will find your channels are already configured), and then only if I avoided SSL connections. Channels are entered into a list in the normal way (#channel_1, #channel_2, etc.), but once you connect to a server, /join commands become persistent. I added a second IRC server, tllts, once I finished the initial setup.

The user interface is similar to XChat, but not quite as polished.

  1. You get popup notifications when someone uses your handle in a chat, but when you scroll back to find the mention, it shows up in garish reverse video rather than a different color. Easier to spot, but not as elegant.

  2. There is no way to search back through posts for your handle or anything else.

  3. Links posted by others only have a "copy this link" function, not "open this link in default browser".

  4. I don't seem to have spellchecking enabled in my IRC client. I discovered spell check was centralized in Linux, rather than every app having its own version (e.g., I assume Firefox under Windows has its own spellcheck libraries, as Office has its own library). I wonder if I installed hunspell on the Quassel core server, whether I would suddenly get spellcheck.

There is a perfectly adequate Android client for Quassel. Like AndChat, YAAIC, and the others, it seems to drop the connection unless you are actively participating, but since the server is persistent, you never miss out on what was said while your client was disconnected.

The last time I was away for the weekend, I shut off all my PCs and network devices. One drawback of a local Quassel server is that my LAN and Quassel core server would need to be up even when I was away from home.

Migrating Quassel from my local server to the cloud: About a week after I'd set up Quassel, a buddy announced he had secured a Digital Ocean Droplet ($5 a month, limited storage, limited bandwidth). He was open to letting his friends use the service, as long as their requirements were low impact. I jumped on the opportunity to move my quassel-core over to the "cloud". Remember the five and a half steps to setting up quassel-core under Arch? According to my friend who manages the Digital Ocean Droplet running Ubuntu Server, it was pretty much "sudo aptitude install quassel-core". Once the core was running, I configured the new core from one of the clients (i.e., pointed quassel-client to a new IP and port number, then created an account and password). Since I was on a new server, I had to set up connections to my IRC channels again. After that, every client I migrated to the new core inherited those channels from the server.

A week or so after moving the core to the cloud, I came home to find my Internet had been down for a few hours. Cycling the power on the ISP's transceiver and my router fixed my Internet connection, and since Digital Ocean had experienced no interruption, I was still able to scroll back to the five hours of IRC I missed.

hpr1944 :: sshfs - Secure SHell FileSystem

Released on 2016-01-14 under a CC-BY-SA license.

This is a topic Ken Fallon has been wanting someone to do for some time, but I didn't want to talk about sshfs until the groundwork for ssh in general was laid. Fortunately, other hosts have recently covered the basics of ssh, so I don't have to record a series of episodes just to get to sshfs.

From the sshfs man page: SSHFS (Secure SHell FileSystem) is a file system for Linux (and other operating systems with a FUSE implementation, such as Mac OS X or FreeBSD) capable of operating on files on a remote computer using just a secure shell login on the remote computer. On the local computer where the SSHFS is mounted, the implementation makes use of the FUSE (Filesystem in Userspace) kernel module. The practical effect of this is that the end user can seamlessly interact with remote files being securely served over SSH just as if they were local files on his/her computer. On the remote computer the SFTP subsystem of SSH is used.

In short, sshfs offers a dead simple way of mounting remote network volumes from another system at a specified mount point on your local host, with encrypted data communications. It's perfect for ad hoc connections on mobile computers or more permanent links. This tutorial is going to be about how I use sshfs, rather than covering every conceivable option. I really think my experience will cover the vast majority of use cases without making things complicated; besides, I don't like to discuss options I haven't used personally.

There are other ways to mount remote storage, most notably SAMBA, but unless you are trying to connect to a Windows share, sshfs is far less trouble to set up, especially since most distros come with an ssh server already installed.

The first thing to do when preparing to use sshfs is to create a mountpoint on your local computer. For most purposes, you should create a folder inside your home folder. You should plan to leave this folder empty, because sshfs won't mount inside a folder that already has files in it. If I was configuring sshfs on a machine that had multiple users, I might set up a mount point under /media, then put symlinks in every user's home folder.
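The multi-user layout described above can be sketched as follows. The user names and paths are just examples, and the prefix argument is something I'm adding so you can try it somewhere harmless before doing it for real under / as root:

```shell
#!/bin/sh
# One shared mountpoint under /media, with a symlink in each user's home.
# usage: setup_mountpoint <prefix> <user>...   (use prefix "" for the real thing)
setup_mountpoint() {
    prefix=$1; shift
    mkdir -p "$prefix/media/storage"          # the real (empty) mountpoint
    for u in "$@"; do
        mkdir -p "$prefix/home/$u"
        # -sfn: symbolic, replace an existing link, don't follow it
        ln -sfn "$prefix/media/storage" "$prefix/home/$u/storage"
    done
}
```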

The sshfs command syntax reminds me of many of the other extended commands based on ssh, like scp. The basic format is: sshfs username@<remote_host>: mountpoint

To put things in a better perspective, I'll use my situation as an example. My home server is on . If you have a hostname set up, you can use that instead of an IP. For the sake of argument, my mountpoint for network storage is /home/fifty/storage . So, I can mount the storage folder on my server using:

sshfs fifty@ /home/fifty/storage

By default, your whole home directory on the remote system will be mounted at your mountpoint. You may have noticed the colon after the IP address; it is a necessary part of the syntax. Let's say you don't wish to mount your whole remote home folder, perhaps just the subdirectory containing shared storage. In my case, my server is a Raspberry Pi 2 with a 5 TB external USB drive which is mounted under /home/fifty/storage . Say I only want to mount my shared storage, not everything in my home folder; I modify my command to be:

sshfs fifty@ /home/fifty/storage   or   sshfs fifty@ /home/fifty/storage

Except that generally doesn't work for me, and I'll come to that presently. The 5Tb USB drive on the server isn't actually mounted in my home folder, it automounts under /media. The directory /home/fifty/storage on the server is actually a symlink to the actual mountpoint under /media. To make sshfs follow symlinks, you need to add the option '-o follow_symlinks', so now my sshfs command looks like:

sshfs fifty@ /home/fifty/storage -o follow_symlinks

You may have noticed the "-o" switch comes at the end of the command. Usually switches come right after the command and before the arguments.

This will allow sshfs to navigate symlinks, but I've discovered not all distros are comfortable using a symlink as the top-level folder in an sshfs connection. For example, in Debian Wheezy, I could do:

sshfs fifty@ /home/fifty/storage -o follow_symlinks

Other distros (Ubuntu, Mint, and Fedora so far) don't like to connect to a symlink at the top level. For those distros, I need to use:

sshfs fifty@ /home/fifty/storage -o follow_symlinks

and walk my way down to storage.

Other related options and commands I haven't used, but you may be interested in, include -p, for port. Let's say the remote server you want to mount is not on your local network, but a server out on the Internet; it probably won't be on the default ssh port. Syntax in this case might look like:

sshfs -p 1022 fifty@ /home/fifty/storage -o follow_symlinks

Reading the man page, I also find "-o allow_root", which is described as "allow access to root". I would expect, combined with a root login, this would mount all of the storage on the remote system, not just a user's home directory, but without direct experience, I wouldn't care to speculate further.

The mount can be broken with 'fusermount -u <mountpoint>'.

At this point, I could explain to you how to modify /etc/fstab to automatically mount a sshfs partition. The trouble is, /etc/fstab is processed for local storage before any network connections are made. Unless you want to modify the order in which services are enabled, no remote storage will ever be available when /etc/fstab is processed. It makes far more sense to encapsulate your sshfs command inside a script file and either have it autoloaded with your desktop manager or manually loaded when needed from a terminal.
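Such a wrapper script might look like the sketch below. The remote spec, mountpoint, and the follow_symlinks option come from the examples above, but the user name, server address, and paths are stand-ins for your own:

```shell
#!/bin/sh
# mount/unmount helper for an sshfs share; REMOTE and MOUNTPOINT are examples
REMOTE="fifty@192.168.1.50:storage"     # hypothetical user, server IP, subdir
MOUNTPOINT="$HOME/storage"

mount_storage() {
    mkdir -p "$MOUNTPOINT"
    # sshfs refuses a non-empty mountpoint, so check before trying
    [ -z "$(ls -A "$MOUNTPOINT")" ] || { echo "$MOUNTPOINT not empty" >&2; return 1; }
    sshfs "$REMOTE" "$MOUNTPOINT" -o follow_symlinks
}

umount_storage() { fusermount -u "$MOUNTPOINT"; }

case "${1:-}" in
    mount)  mount_storage ;;
    umount) umount_storage ;;
esac
```

Saved as, say, ~/bin/storage, "storage mount" attaches the share and "storage umount" releases it.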

One thing to watch out for is saving files to the mountpoint when the remote storage is not actually mounted, i.e., you save to a default path under a mountpoint you expect to be mounted but is not, and all of a sudden you have files in a folder that is supposed to be empty. Before you can remount the remote storage, you have to delete or move the files created at your designated mountpoint, to leave a pristine, empty folder again.

Weihenstephaner Vitus - The label says it's a Weizenbock, so we know it's a strong, wheat-based lager

IBU 17 ABV 7.7%

hpr1924 :: Port Forwarding

Released on 2015-12-17 under a CC-BY-SA license.


In Episode 1900, Ahuka advised you not to expose the ssh service to the Internet on the default port 22, and there we agree. This is called "Security Through Obscurity". Whenever possible, server functions exposed to the Internet should be on non-default port numbers (the exception being HTTP on a public web server). I disagree, however, with Ahuka's method of changing the port. He said you should change the port on the server itself:


Open the /etc/ssh/sshd_config file, look for the line Port 22, and change it to Port 2222. Restart the sshd server: systemctl restart sshd

With sshd running on a non-standard port, connection attempts to the system will fail. You need to connect using the following command:

$ ssh -p 2222 user@your-ip OR $ ssh -p 2222

This could make sense if you manage a business or school network, where you have numerous users within your network with whom you share varying levels of trust. Still, I don't think anyone who can brute force your ssh login or shared keys would be stymied by a simple change of ports. But Ahuka also mentioned home networks, and I think we would rather keep things simple. I would humbly suggest keeping ssh servers set to port 22 internally, and using a technology called "port forwarding" available on most consumer routers. Port forwarding is simply an administrator-configured table that redirects incoming traffic on one IP port to a specific internal IP address and IP port on your internal network. In fact, unless you have only one PC connected directly to your ISP with no router or firewall, you will still need to set up port forwarding to tell the router which machine on your network the incoming communication is intended for.

In other words, let's say you've enabled ssh on port 40001 of a machine with an internal address of You try to login remotely via ssh on port 40001 using the external IP assigned to you by your ISP (which is taken from a range assigned to them by the IANA). The external IP of your router should be displayed on your router's status page, or you could type "what is my IP" into Google. Instead of an IP in the range 192.168.x.y, like you are probably using internally, your external address will be in the Class A or B range, for instance

So let's say you have an ssh server running on port 40001 on a machine with IP address on your home network. Your server has an external address of . You are at work or on vacation or whatever, and you want to ssh into that machine on your home network, i.e.:

ssh -p 40001 you@

Unless the router itself supports an ssh server (entirely possible with third-party Linux-based firmwares like OpenWrt and DD-WRT), if you haven't configured port forwarding, the router won't have any idea what to do with an incoming request on port 40001. You need to set up the port forwarding table in your router (don't worry, it's all point and click). Port forwarding may be under Advanced, Security, or Firewall in the menus, or a combination of the above.

You will be asked to enter the external port number (in our example, 40001), TCP or UDP or both (ssh uses TCP; if your router makes you pick, choose TCP or both), the internal IP address (in our example ), and the internal port number. If you changed the port internally as Ahuka recommended, that would be 40001 in our example; but, and this is the whole point of this podcast, you are going to have to set up port forwarding anyway, so why change the port number locally in the first place? If the terms TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are unfamiliar to you, the difference can easily be explained. Using TCP, the computer transmitting data stops every few packets (I think the default is three, but don't hold me to it) until it gets an acknowledgment from the receiver that the packets were successfully received, then the sender continues. With UDP, the sender blurts out the whole transmission without caring whether the receiver got it or not.
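Once the forwarding rule is in place, a client-side ~/.ssh/config entry saves retyping the port every time. Everything here (the "home" alias, the address, the user) is a made-up example:

```
# ~/.ssh/config on the machine you are connecting FROM
# (ssh_config takes whole-line comments only)
# HostName is your router's external IP or dynamic-DNS name;
# Port is the external port you forwarded
Host home
    HostName 203.0.113.5
    Port 40001
    User you
```

After that, "ssh home" is equivalent to "ssh -p 40001 you@203.0.113.5".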

Wikipedia has a great article on official and unofficial standardized port numbers. Once you get into five digits, conflicts with already-assigned ports are rare, but it's still best to consult the Wiki. The higher numbers are generally not officially assigned; some particular software product is just "squatting" on the number. In fact, using the port number for a technology you are certain will never be used on your network may further obfuscate the service for which you are actually using it. You may think port 40001 is surely high enough to be free of conflict, but the Wiki says 40000 is used by the "SafetyNET p Real-time Industrial Ethernet protocol".

Another advantage of port redirection is you could use a different external port number with every host on your network, i.e., 40001 redirects to your server, 40002 redirects to your desktop, 40003 redirects to the old laptop in the kids' room, etc. Personally, I'd only have port redirection into a single machine that is connected persistently (like a server), and then ssh from it into other hosts on the network (yes, this would be a connection of at least three nested shells). You can even run graphical programs over ssh with the -X argument, but I'm leaving that for later discussion. Of course, we will lose that functionality when we move from the X server to Wayland, so if you need a GUI you may have to investigate technologies like VNC or VPN.

Of course, everything depends on having a static IP locally on the ssh server (either set on the host itself or via manual assignment of the IP on the router, if possible). You either need a static external address on the WAN (i.e., external address as seen from the Internet) side or to employ a domain forwarding service. Also keep in mind, once we get IPv6, everything above goes out the window.

hpr1873 :: TiT Radio 21 - I Thought I Had Better Links

Released on 2015-10-07 under a CC-BY-SA license.

Another installment of TiT Radio with Kevin Wisher, pegwole, netminer, and FiftyOneFifty

Some of these links may have been discussed during the show:

hpr1860 :: FiftyOneFifty interviews Chris Waid of Save WiFi

Released on 2015-09-18 under a CC-BY-SA license.

The Save WiFi program has been instituted to combat the greatest threat the open source movement has faced from government over-regulation. If you have listened to The Linux Link Tech Show, Linux for the Rest of Us, or HPR recently, you may already be aware that recent decisions by the FCC have already forced router manufacturers to lock down their equipment against the installation of non-factory firmware. My guest, Chris Waid, CEO of Think Penguin and a leader in the Save WiFi project, joins me to explain how Linux on the desktop may also become subject to FCC regulation. As manufacturers incorporate more Software Defined Radio into PCs, the FCC may feel it has no choice but to lock down (or lock out) not only open source software, but any software that is not pre-vetted and pre-certified, even on proprietary OSes.

Right now, there is a narrow window where the FCC has invited comment from the public, and Hacker Public Radio invites all our listeners to add their voices against this ill advised course of action.

There is one small saving grace. Kevin Wisher found an Ars Technica article where an unnamed FCC spokesman seems to be saying that locking open source firmware out of routers was not the intended consequence (even though OpenWrt was mentioned by name in the updated rules). I think the FCC might prefer manufacturers avoid incorporating radio hardware that is so easily manipulated:

I want to give special thanks to Chris Waid for going above and beyond by recording our conversation, because I was having ISP problems. I want to apologize in advance for any audio problems; I was way low and had to fix it in post.

hpr1842 :: TiT Radio 20 You've Been Pwned (probably)

Released on 2015-08-25 under a CC-BY-SA license.

Longtime listeners of Hacker Public Radio will remember 'TiT Radio', a semi-weekly FOSS "news" and commentary show that appeared on HPR, recorded by the cast of "Linux Cranks" on the off schedule weeks. "Linux Cranks" eventually morphed into the "Kernel Panic Oggcast". While Peter is on walkabout, the cast of KPO has resurrected "Tit Radio" on a temporary basis. The listener is cautioned, while KPO is family friendly, "TiT Radio" makes no such commitment. Please join netminer, FiftyOneFifty, and pegwole as they drag you down the rabbit hole that has always been "TiT Radio".

Our show topics were drawn from these links. Not all these topics made it into the show, but feel free to browse anyway:

hpr1823 :: Kansas Linux Fest 2015, March 21-22, Lawrence KS, Interview 2 of 2

Released on 2015-07-29 under a CC-BY-SA license.

Ryan Sipes: KLF Organizer; Systems Administrator, Northeast Kansas Library System; Organizer of Lawrence (KS) Linux User Group; with Ikey Doherty, Ryan is a developer for Solus (formerly Evolve OS); a contributor to Vulcan text editor, written in Vala (Ryan's KLF talk, "How to Write a GTK/Gnome Application", was pretty much a tutorial in Vala)

Ryan's projects and employer

KLF related interviews with Ryan Sipes

Evolve OS related interviews

KLF sponsors:

The beers:

hpr1820 :: Kansas Linux Fest 2015, March 21-22, Lawrence KS, Interview 1 of 2

Released on 2015-07-24 under a CC-BY-SA license.

From the LAMP Stack break-fix competition, to the breakfast buffet they funded on Sunday, the Rackspace crew presented their organization as the managed hosting company that puts the customer first, by making sure no customer has to wait in a long queue before talking to a human, and by staying on the line as long as it takes to make sure all problems are solved and all questions are answered. This kind of commitment to service naturally requires a larger number of people working tech support, and by the end of the weekend I think it was clear to everyone that Rackspace was in Kansas to recruit. I was impressed when one of the Rackspace representatives told me, "We can teach people tech. We can't teach people to want to help other people". Rackspace dedicates a significant part of employee time to training and improving the skills of their help desk staff. If there is a drawback, it's that when one shift is training, the other two are expected to pull extra hours to cover the third shift.

hpr1818 :: Review of HPR's Interview Recorder: Zoom H1

Released on 2015-07-22 under a CC-BY-SA license.

The Hacker Public Radio network owns a Zoom H1 digital voice recorder. If you are going to attend an open source event and think you would like to record interviews for Hacker Public Radio, make inquiries to the mailing list, and the correspondent with the recorder in their possession (currently FiftyOneFifty) will send it to you. This episode is a review of the device's features and how to use them.

Manufacturer page:

How to use the H1 as a USB Mic

hpr1742 :: How to Get Yourself On an Open Source Podcast - Presentation for Kansas Linux Fest, 22 March 2015

Released on 2015-04-07 under a CC-BY-SA license.

Howdy folks, this is 5150 for Hacker Public Radio. What you are about to hear is a presentation titled "How to Get Yourself on an Open Source Podcast" that I delivered at Kansas Linux Fest on 22 March 2015. Since it was not recorded (I was told the SD card was full), and there has been interest expressed by my fellow podcasters, I thought it might be worth re-recording. I am afraid Mike Dupont is not satisfied with any of the video from KLF 2015, so this may be the only talk from that event you get to hear. However, the show notes are extensive. All I can tell you is, three out of the four audience members seemed to enjoy my presentation. I shall deliver the rest of this podcast as if you gentle listeners were my live audience.

A. Howdy folks, my name is Don Grier. I'm an IT consultant and farmer from South Central Kansas. I am also a podcaster. You might recognize my voice from such podcasts as Hacker Public Radio, the Kernel Panic Oggcast, or Linux LUG Cast, where I use the handle, FiftyOneFifty.

I. When fellow Hacker Public Radio host Mike Dupont told me KLF would be a reality, I struggled to find a topic that I knew well enough to give a talk about. It was almost in jest that I said I could talk about "How to Get Yourself on an Open Source Podcast". Actually, since that was as far as my proposal went, I was shocked and honored to find myself on the same roster with so many other speakers with impressive credentials and technical topics.

II. This afternoon, I hope not only to chronicle my personal history with Linux and open source related podcasts, but to show you why I believe podcasting can be as an important part of giving back to the community as contributing code, or documentation, or cash. Linux podcasts bind the community by providing education, both as basic as Linux Reality or as specific as GNU World Order. Podcasts announce new innovations, and tell us of Free and Open Source software adoption and opposition in corporations and governments. Podcasts herald community events like this one, and provide a little humor at the end of a long day.

B. Some of you may wonder why I'm using old school technology to organize my notes at a high tech conference. At this point, 5150 holds up several stapled sheets of paper in large print. The plain and simple truth is that I can't read my phone or tablet with my glasses on, and I'm already using bifocals. It just seems every time I get new glasses, the lower lenses work for about two weeks, then I have to take them off to see the phone. But this last time I figured I'd outsmart the system and just order single-focus lenses. I was still congratulating myself on my thriftiness when I put my new glasses on, sat down at the computer, and realized I couldn't read the keyboard.

C. Before I talk about my history as a podcaster, I think I should tell you my history with Linux.

I. My first experience with Linux was with a boxed set of Mandrake 7.2 around 2002. I always maintain at least a second running system in the house, in case the primary machine coughs up a hairball. I'd always been a geek for alternative OSes, and I wanted a tertiary machine on my network that wouldn't be affected by the propagation of Windows viruses.

a. There wasn't much flash to Linux apps in those days; I recall I was not impressed by whichever browser shipped with Mandrake. I don't recall what I knew about installing additional applications from repositories, but in any case I was still on dialup.

b. The Pentium I that I installed Mandrake on had both a modem and an Ethernet card. The installer asked which one I used to reach the Internet, and only set up one of the two devices. This annoyed me, as I'd planned to use the Linux box as a gateway to see if it would save a few CPU cycles on the P4 I used as a gaming machine back then. I really wouldn't have known where to go on the Internet for help, and I expect help would not have been as forthcoming 13 years ago.

II. My next experience with Linux came around 2007. The school I consulted for had several Windows 98 machines not compatible with the software they wanted to run. Even though the machines were P4s, we determined the cost of XP plus memory upgrades could better be applied to new machines. As a result, I was able to bring several of the machines home. Over time, I boosted their memory with used sticks from eBay, and even the odd faster processor. As a noob, I installed Feisty Fawn on a system out in the machine shed, and spent a lot of that winter hacking on that box when I should have been overhauling tractors. Just as I was delving into NDIS wrappers, Gutsy brought support for my Gigabyte wireless card, which, combined with a double fork isolating power box, gave me reasonable certainty that the box out in the shed was safe from lightning storms. About six months later, I rescued a refugee from a major meteorological event and set it up in my house running Mint. For the first time I didn't have to leave the house to get my Linux on.

D. Just before I set up that first Linux box, we finally got broadband out to the farm, and I'd discovered podcasts. I figured there must be Linux podcasts to go along with the general tech and computing podcasts I followed, as well as a fondly remembered weekly SciFi review show. That show started out as a Sunday afternoon program on a Wichita radio station, was canceled twice, and re-emerged as a semi-weekly podcast, only to disappear forever a couple months after I started listening again, but not before I downloaded all the episodes I missed.

I. In my initial search for Linux related content, all I came up with were four drunk-off-their-ass Scots discussing the minutiae of Ruby on Rails. While I liked the format, I lacked the commitment to become a Ruby programmer so I could understand the show.

II. A few days later I came across "The Techie Geek". Russ Wenner mixed tutorials with reviews of new applications and upcoming events. Better yet, he introduced me to a world of other Linux podcasts. Through "The Techie Geek", I learned of the irreverent banter of the "Linux Outlaws", the subdued studiousness of what was then called "The Bad Apples", the contained chaos of the "Linux Cranks", the classroom-like atmosphere of the "Linux Basement" during Chad's Drupal tutorial period, tech hints and movie reviews delivered at the speed of 75 miles per hour by Dave Yates of "Lotta Linux Links", the auditory dissonance of "The Linux Link Tech Show", and the constant daily variety of "Hacker Public Radio".

E. In 2010, I made my first contribution to Hacker Public Radio. The great thing about HPR is that there is no vetting process; we only ask that your audio be intelligible (not polished, not even good, we just have to be able to understand you) and that the topic be of interest to geeks. If you consider yourself a geek, any topic that interests you is welcome. There is no maximum or minimum runtime, just get the show uploaded on time. While topics tend to concern open source, this is not a requirement. I believe my second HPR episode concerned how to migrate Windows wireless connection profiles between systems. I'd spent a few hours figuring it out one day for a customer, and I thought I should consolidate what I learned in one place. HPR provides a podcasting platform at no cost to the podcaster. It serves as a venue both for broadcasters without the resources to host their own site and for those without the time to commit to a regular schedule. It can also serve as an incubator for hosts trying to find their own audience. It's never been easier to become a podcaster than with HPR. I would start with an e-mail introduction (as a courtesy). Next, record your audio. When you have a file ready to upload, select an open slot in the calendar page and follow the instructions; be prepared to paste in your shownotes.

F. I also credit HPR for getting me my first invitation to participate in a podcast with multiple hosts. Once a month, Hacker Public Radio records a Community News podcast, recorded on the first Saturday afternoon after the end of the previous month (exact times and server details are published in the newsletter). All HPR hosts, and indeed listeners, are invited to participate; it is just asked that you have listened to most of the past month's shows so you can participate in the discussion.

I. Like most multi-host audio podcasts, HPR uses Mumble to record shows, including the annual New Year's Eve show, which has dozens of participants. There is a Mumble tutorial to help you get started.

II. I started to take part in Hacker Public Radio's Community News a few months after recording my first podcast. I did it because I wanted to take a greater part in HPR, not because I considered it an audition, but it is a good way to show other people that you can politely and intelligently participate in a group discussion. (Actually, I have a tendency to wander off into tangents and unintentionally dominate the topic, something I struggle with to this day).

III. Another way to join in a round table discussion on HPR is to participate in the HPR Book Club. Once a month, we take an audio book that is freely available on the Internet and share our opinions. Recording schedules and the next book to be reviewed are available in the HPR newsletter.

G. I believe sharing one or more Community News episodes with Patrick Dailey (aka pokey) influenced him to invite me into the cast of Dev Random, the semi-weekly show that recorded on the Saturdays Kernel Panic didn't. While we sometimes accidentally talked about tech and open source, we always saved the most disturbing things we'd seen on the Internet in the previous two weeks for discussion on the show, things that could not be discussed on other podcasts. Despite rumors to the contrary, Dev Random is not dead, only resting, and shall one day rise again to shock and disgust new generations of listeners.

H. Sometimes you just have to be in the right place at the right time. I won't insult the Kernel Panic Oggcast by calling it a sister show to Dev Random; it just recorded on opposite Saturdays and had some of the same cast members in common. Anyway, I'd been participating in the forum for a while, suggesting topics from FOSS stories I'd come across in social media during the week. I was idling in #oggcastplanet on Freenode when Peter Cross asked for people from the channel to participate in the show on a day when only a couple of the regular cast showed up. Dev Random used the same Mumble server, so I used my existing credentials to take Peter up on his offer, and for better or worse I've been a KPO cast member ever since.

I. While we are on the topic, having a presence on Freenode IRC chat is a great way to get your name or handle known in the podcasting world. Many podcasts have their own channel set up that listeners participate in during live streaming podcasts. Saying something helpful (or more likely smart-alecky) might get you mentioned on the show and make you familiar to the show's audience. I've seen several individuals move from regular forum or chat participants to hosts of their own show or contributors to HPR. From my own experience, after spending several weeks as silent participants in Podbrewers, listening to the stream and commenting in the chat, RedDwarf and I were invited to bring our own beers and join the cast.

I. While many podcasts still have their own IRC channels, other than providing a conduit between the hosts, they are most active during live broadcasts. Between shows, many of the podcasters I listen to gravitate to Freenode's #oggcastplanet, since podcasters typically have a chat client open during work and leisure hours. In fact, at KPO we use #oggcastplanet as our primary communications channel during live streaming.

II. I still recall the day monsterb and Peter64 asked me about the origin of my handle, given its similarity to that of their colleague, threethirty. I'd heard both on podcasts I followed, and I felt like I was talking to rock stars.

III. Now that I am a podcaster in my own right, with a presence in #oggcastplanet, I try to make a point to say hello when I see an unfamiliar handle in the channel. I expect the spambots consider me the nicest guy in IRC.

IV. As it happens, IRC was also responsible for my involvement in the Linux LUG Cast. LLC was conceived after the re-imagining and final demise of Steve McLaughlin's project, "Linux Basix". Kevin Wisher, chattr, and honkeymaggo wanted to do a show along the same lines while incorporating the spirit of the unrecorded online LUG that always preceded it on the Mumble server. I was brought along by the simple expediency of never having closed the #LinuxBasix channel in my chat client. We have been going for a little more than a year and have attracted a following, but frankly we have not found the listener participation we were looking for. This was meant to be a true online Linux Users Group for people who couldn't travel to a LUG. So far, it's usually been the same four or five guys talking about what Linux projects succeeded, what failed, and what we're going to try next. I've learned a lot in the past year, and I expect the listeners have as well, but we are always hoping to get more live participation. Rural areas like the midwest are our target audience. The details of the Mumble connection are posted at, we always monitor the IRC channel #linuxlugcast while recording, and the Feedback link is posted on the website.

Thank you for your time and attention this afternoon, especially considering the caliber of talks running in the other two channels. I can be contacted at . Are there any questions?

hpr1730 :: 5150 Shades of Beer 0005 River City Brewing Company Revisited

Released on 2015-03-20 under a CC-BY-SA license.

The great thing about brew pubs is that they are always trying new beers, so the customer experience doesn't become as stale as a half-finished can of Budweiser left out overnight. That means I can return to the same place and experience a whole new vista of flavors. Such was the case last Sunday, when a social affair brought me within blocks of the River City Brewing Company in Wichita, Kansas. I had the forethought to bring my three growlers for refilling, and by the time the meeting was over it was time for a burger and a beer anyway. Let's talk about the meal first.

Having already tried their pizza and amazing Cuban sandwich on previous trips, this time I went for a burger. From the River City menu ( ): The Memphis Burger is topped with sweet pepper bacon, cheddar cheese, crispy onion strings, and chipotle BBQ sauce. On top of all that, the hamburger was grilled to perfection, in my case that being exceedingly rare. (One of my Dad's friends, every time he sees me eating a steak or a burger, always comments, "You know, I've seen a critter hurt worse than that and live.") I was most impressed by the onion strings. These are not the French fried onion rings that you find atop your green beans on Thanksgiving, but rather the most delicate strings of onion imaginable, battered and fried. I found myself wishing I'd thought to order extra BBQ sauce for my French fries, which were hearty and sprinkled with fresh ground black pepper. I'd never thought of peppering my fries before, but be assured I'll do so in the future.

To accompany my burger, I selected the Breckenridge Bourbon Smoked Imperial Stout. It weighs in at 9.0% ABV, so you get a smaller than average portion in an 11oz brandy snifter. While stouts are usually nearly as bitter as IPAs, I don't notice it as much when coupled with the beer's bold flavor. Unlike IPAs, stouts tend to have enough malty richness to add balance. In the case of this beer, the barley is smoked over hazelnuts before fermentation, giving this beer its flavor and its name. I've wanted to try a smoked stout since I heard Tracy Hotlz speak of them back on the old Podbrewers show. I don't think I'd want to be restricted to an exclusive diet of smoked beers, but this was a welcome change from the ordinary, and a great complement to my beefy repast. Truly an excellent brew.

Now, on to the contents of my three growlers. I wish I could give you first impressions, but come on, I just couldn't wait for you folks. It was hard enough to wait for the containers to chill overnight in the fridge.

The first beer is even more unique than the smoked stout. Donut Whole Love Affair #3 Pineapple Wit is made with actual pineapple donuts (from River City's Facebook page). The first taste you encounter is tart pineapple on the tip of your tongue, joined by powdered sugar as the beer washes towards the back of your mouth. The sugar taste tends to stay with you between sips, but the whole effect is subtle and wonderful, not fruit juicy like a shandy. The wheat beer hovers in the background, not enough to obscure the donut, but blending the pastry taste into the breadiness of the beer. I didn't know what to expect of this beer when I ordered it, but I am most pleased I did. 5.65% ABV, 11 IBUs, 16oz Weizen

Next, we have Pryze Fyter Red Rye. By far, this is the smoothest and richest rye beer I've ever tasted. I'm a big fan of rye beers, but they tend to be a little more harsh than wheat beers, and are of course more bitter. Like rye whiskey, rye beer is an acquired taste for many people, and best suited for those with a palate that craves bold flavors. According to the menu: "Caramel malts, a copious amount of rye. Spicy, floral, earthy, and ready to smack you in the kisser." 5.6% ABV, 55 IBUs, 16oz Nonic

Finally, we have the Buffeit Bourbon Baltic Porter. Of the two bourbon barrel aged porters on the menu, my barman described this as the slightly sweeter one. While I've never been a fan of the woody tasting bourbons of Tennessee, barrel aging lends a roundness to beers, and complements the roasted malts and the hops. This is the strongest of the beers I brought home, at 7.2% ABV, 47 IBUs, and would be served in a 13oz Tulip glass.

I made the mistake of not taking a beer menu home with me for documentation, as a list of currently available beers no longer appears online. Chris Arnold took the time to scan a copy and send it to my e-mail. Thanks, Chris. I don't think River City Brewing Company will mind me attaching the menu to my notes for you listeners to salivate over. There are two in particular I'm sorry to have missed, the Stinky Pete Plum Saison (they always seem to be out of the raisin and plum beers) and the Emerald City Stout (a man has only so many growlers).

That brings me to my next topic. Among the many interviews I want to do from Linux Fest next week, I'm also going to visit the Free State Brewery, only a couple blocks away. I called ahead, and they won't fill other pubs' growlers (that's going to cost you some points, Free State). On the upside, I'll have a couple new growlers to add to my collection.


hpr1722 :: Kansas Linux Fest 2015, March 21-22, Lawrence KS

Released on 2015-03-10 under a CC-BY-SA license.

We are pleased to announce the first annual Kansas Linux Fest, hashtag #KLF15. It will be hosted by the Lawrence Public Library, Lawrence, Kansas, March 21-22, 2015. The Kansas Linux Fest is a project of the Free/Libre Open Source and Open Knowledge Association of Kansas and other organizations.

Special recognition needs to be paid to Hacker Public Radio contributor James Michael DuPont for taking point in making a community event in the central United States a reality. Speakers ( ) include Open Source Advocate Dave Lester; Hal Gottfried, Open Hardware Evangelist and cofounder of the Kansas City Open Hardware Group; David Stokes, MySQL Community Manager at Oracle; Ben C. Roose, Technology Consultant for Live Performance; Kevin Lane, Technical Consultant IV at HP Enterprise Services; Jonathan George, CEO @boxcar; and podcaster and open source evangelist, FiftyOneFifty.

Registration for conference tickets can be found on the KLF website. Fan tickets are free, but supporter level tickets may be purchased with a free-will donation, which will go towards marketing and food.

You will find links on the homepage that will allow you to follow the conference on social and other media, as well as an RSS feed. There is also information on how to become involved with Free/Libre Open Source and Open Knowledge Association of Kansas.

hpr1718 :: What's In My Pickup Toolbox

Released on 2015-03-04 under a CC-BY-SA license.

The mystery of my pickup toolbox.

hpr1692 :: Boulevard Brewing Company "Sample Twelve"

Released on 2015-01-27 under a CC-BY-SA license.

Unrelated tech stuff: Recently, Knightwise showed me a link to use a Raspberry Pi as a streaming music box, much like a Sonos player. I looked at the enclosures people had come up with and saw transistor radios from the 40s and 50s, which were true works of art but don't provide a great selection of controls. It was then I remembered seeing a 1950s jukebox wallbox control ( ) in a local "antique" shop. I'm never sure when addressing our European friends what parts of the American experience they are familiar with, but from the 40s to the 70s, in just about every American diner with a jukebox, at every booth there would be a remote console with a coin slot. Usually, you would have card tiles that could be rotated by a knob or by tabs, and each song would have a code made up of a letter and a number. Dropping in the required currency and making a selection would cause the song to be played on the jukebox (and sometimes on a set of stereo speakers in the wall unit). As you may see from the eBay link in the shownotes, wallboxes progressed from just a dozen titles in the 40s to far more complex systems, some with digital readout in the 80s. Most were marvels of late art deco design.

My parents were far too frugal to let me drop coins into one of these pioneering marvels of analog networking, but thanks to a couple modders who have tied their panels into a Raspberry Pi, I can give you a general overview of how these units communicated with the central jukebox via primitive serial protocols. First off, if you have the expectation of following in Phil Lavin's or Steve Devlin's footsteps, be prepared to pay more for a wallbox certified to be ready to connect and work with the same brand's jukebox (while all wallboxes seemed to communicate by serial pulse, each company employed a different scheme). Wallboxes of all conditions seem to start around $50 on eBay, but can go into the thousands. As I said, all of the wallboxes are marvels of art deco design, even if they have no other purpose than to occupy your space and become a conversation piece. Right now on eBay, there is an example of a wallbox converted into a waiterless ordering system (this looks like it is from the 70s; only now do we have this functionality with iPads at every table). In other words, where once was "Stairway to Heaven", now there was "Steak and Eggs: $4.95". The add-on plaque covering the face of the unit identified the system as T.O.B.Y., for Totally Order By Yourself. I could find nothing on the tech on Google, but I really hope it was successful, because it truly would have been a master hack.

First step: most wallboxes were powered from the jukebox, so you can't just plug them into 120v alternating current; you will likely need a 25 or 30v adapter (research your model). If everything works, you should be able to drop your quarter and punch a letter-number combo (which will stay down); then a motor will whir and your selected keys will punch back out. What happens in the background: the motor causes an energized arm to sweep in a circle, making a circuit with electrodes in its path. The keys selected determine how many pulses go down the output line, like a finger dialing a rotary phone.

Each manufacturer used a different code. In the case of Steve Devlin's Rowe AMI, there would be an initial set of pulses for the number, a pause, then a more complex set for characters A-V (earlier wallboxes had 10 letters and 0-9 to create 100 selections; later boxes had as many as 200). Phil Lavin's Seeburg uses pulses corresponding to two base-20 digits. Both protocols were discovered through trial and error. Each gentleman uses a different method to protect his Pi from overvoltage. Devlin uses a 3.5v voltage regulator, which also makes the pulses appear more "square"; Lavin uses an optical relay to electrically separate the Pi from the Seeburg console entirely.
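Since neither project's code appears in these notes, here is a minimal, hypothetical Python sketch of the kind of decoding described above, assuming a Seeburg-style scheme: pulses arrive as timestamps, a longer pause splits them into two groups, and the two group counts are read as base-20 digits. The function name, grouping logic, and timing threshold are my own assumptions, not taken from either modder's work.

```python
# Hypothetical decoder for a Seeburg-style wallbox pulse train.
# A selection arrives as two groups of pulses separated by a longer
# pause; each group's pulse count encodes one base-20 digit.
# GROUP_GAP is an illustrative assumption, not a measured value.

GROUP_GAP = 0.20  # seconds of silence that separates the two digit groups

def decode_selection(pulse_times):
    """Turn a list of pulse timestamps (in seconds) into a selection number.

    Pulses closer together than GROUP_GAP belong to the same digit
    group; the two group counts are then read as base-20 digits.
    """
    if not pulse_times:
        raise ValueError("no pulses received")
    groups = [1]  # the first pulse starts the first group
    for prev, cur in zip(pulse_times, pulse_times[1:]):
        if cur - prev >= GROUP_GAP:
            groups.append(1)    # long pause: start the next digit group
        else:
            groups[-1] += 1     # short gap: count the pulse in this group
    if len(groups) != 2:
        raise ValueError(f"expected 2 pulse groups, got {len(groups)}")
    tens, ones = groups
    return (tens - 1) * 20 + (ones - 1)  # two base-20 digits -> 0..399
```

On real hardware you would collect the timestamps from a GPIO edge-detect callback, behind the voltage protection described above; the decoding itself stays the same.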

Both Lavin and Devlin use their wallboxes to control Sonos streaming players. My idea is more flexible: I'd like the Pi to be able to launch streaming podcasts, play the latest episode of a selection of podcasts, or launch various home automation processes. I didn't think this talk warranted its own podcast yet because it is clearly an unfinished idea, but I thought this application of old tech was too cool to wait until I was actually motivated to do something with it. If I get a wallbox, I might be inclined instead to connect each button to a momentary switch and wire each in turn to one of the Pi's 40 I/O pins for an even more flexible instruction set.

Boulevard Brewing Company "Sample Twelve" K.C. Mo

This is a unique marketing campaign from my favorite K.C. brewer. The twelve pack contains four varieties of beer; two are established Boulevard offerings, and the other two are bottled with non-gloss "generic" labels that appear to have been hand typed. In other words, we are to believe we have been sold two prototype beers for our approval.

80 Acre "Hoppy" Wheat Beer (the quotes are mine). The graphics consist of an old Farmall tractor towing a pickup trailer carrying a gigantic hops bud. From this presentation, one would expect an oppressively hoppy beer; fortunately for the hop timid, this is a rather satisfying libation that only registers 20 IBUs. I detect a distinct citrus taste, so I suspect Citra or related hops, but Boulevard is keeping the exact specs closer to the vest than some other brewers. The brewer's description of the beer may be found here (link in the shownotes). Pours corn silk yellow with lots of head but not a lot of lacing. Damp wheat aroma.

Oatmeal Stout: This is the first of the "generic" label "test" beers. Pours opaque dark brown with a very small light brown head that disappears. Milk chocolate aroma. Thin mouthfeel, with a chocolatey aftertaste that lingers rather than a flavor washing over your tongue (i.e., you drink it, then you taste the chocolaty/coffee-like essence). For locally brewed Oatmeal Stouts, I'd give the nod to Free State in Lawrence, KS, but I wouldn't turn down the brew from K.C. if they decide to produce it. As it is not yet an "official" offering, they don't document this beer on the Boulevard web page.

Unfiltered Wheat Beer: There is a graphic of a farmer gathering wheat bundles to build shocks, surrounded by hops vines. Pours the color of cloudy golden wheat straw, lots of persistent head that leaves little lacing. Slight biscuity aroma. Distinctly more citrusy than the 80 Acre. Not much malt and just a little hops bitterness. Despite the name, you can safely drink this beer to the bottom without winding up with a mouthful of particulates.

Mid Coast IPA: The last "experimental" beer. At 104 IBUs, this is where all the hops you expected from 80 Acre went. Pours wheat straw golden, thick white head that leaves little lacing, with a hoppy aroma. Even at 104 IBUs, it has a slightly sweet taste and doesn't seem to be one of those "my hops can beat up your hops" beers. The label states: "The hoppiest thing we have ever brewed. Pretty nervy for a bunch of midwesterners". It's a great complement to the baked ham and spicy glaze I'm having for dinner (link in the show notes, even though I had to improvise somewhat).

Before I leave you, I wanted to play the sounds of dusk from my new homesite. I can think of no more eloquent argument why living on the lake is better than living in town.

Note: Recorded with a 2.4GHz Creative Labs GH0220B headset. I am not happy with the result.

hpr1684 :: 5150 Shades of Beer Jacob Leinenkugels Winter Explorer Pack

Released on 2015-01-15 under a CC-BY-SA license.

Jacob Leinenkugel's Winter Explorer Pack "Chippewa Falls, WI since 1867"

Winter's Bite - Do you know what it smells like when you open a tin of cocoa (the semi-sweet kind, not the unsweetened) and no matter how you do it, a little of the powder puffs out? The best description I can give this beer is it tastes just like that smell, even down to the dryness. Neither cloyingly sweet nor leaving you wondering who mixed the chocolate syrup into your beer, just a subtle taste of dry cocoa. This lager pours dark with very little head. This beer (my favorite in this group) is only available in the Explorer pack, and its ABV and ingredients are not featured on

Helles Yeah - (German blonde lager; Helles means "light" in German, but unlike American beers, it refers only to color). Straw color, very clear, moderate head that disappears w/o lacing. Subtle flavor, a hint of hops and just slightly more than a pinch of pepper. 5.5% ABV Malts: Pale malts Hops: Five All-American hops including Simcoe and Citra

Cranberry Ginger Shandy - [From Wikipedia: Shandy is beer mixed with a soft drink, carbonated lemonade, ginger beer, ginger ale, or apple or orange juice.] Pours cloudy yellow amber, moderate head that disappears w/o lacing. Leinenkugel managed to resist the urge to color it red. Not as syrupy as Shock Top's Cranberry Belgian Ale, but unlike many fruit adjunct brews, neither is the flavor so subtle you have to go searching for it. I like to use ginger in cooking, and I can also detect the taste of that sweet spice in this weiss beer as well. 4.2% ABV Malts: Pale and Wheat Hops: Cluster Other: Natural cranberry and ginger flavors

Snowdrift Vanilla Porter - Pours dark brown with just a little caramel-colored head that dissipates immediately. Vanilla bean aroma. The vanilla flavor is perhaps more subtle than Breckenridge's Vanilla Porter, but there will be no doubt you are enjoying a beer flavored by vanilla and roasted malts, with a hint of chocolate to keep it from being too sweet. 6.0% ABV Malts: Two- and six-row Pale Malt, Caramel 60, Carapils, Special B, Dark Chocolate and Roasted Barley Hops: Cluster & Willamette Other: Real vanilla

BONUS ROUND - Leinenkugel's Orange Shandy - Wheat beer, likely exactly the same one that's in the Cranberry Ginger Shandy, but in this case the tart/sweet orange juice taste doesn't completely obscure the flavor of the beer. I like them both, but I think I would grab the orange shandy on a hot day. 4.2% ABV Malts: Pale and Wheat Hops: Cluster Other: Natural orange flavor

hpr1650 :: OCPLive2014 Night Life In Elysburg PA

Released on 2014-11-28 under a CC-BY-SA license.

A running commentary by FiftyOneFifty and Tankenator on the nightlife in Elysburg PA

hpr1647 :: Oggcast Planet Live 2014: The Cooking Show

Released on 2014-11-25 under a CC-BY-SA license.

OggCast 2014: we cook dinner, I drink beer, a time is had by all. I'd like to amp this, but Audacity won't let me, so listen carefully.

Broam, Briptastic, and FiftyOneFifty talk about the meal they are making for Saturday Night at Oggcast Planet Live 2014 from when they thought about it until dinner was served, as well as that day's fun at Knoebels theme park at Elysburg PA and the plans to visit the ghost town of Centralia the following day.

hpr1646 :: 5150 Shades of Beer 0003 River City Brewing Company and Wichita Brewing Company

Released on 2014-11-24 under a CC-BY-SA license.
Image of beer


hpr1643 :: Unison Syncing Utility

Released on 2014-11-19 under a CC-BY-SA license.

Unison is a file syncing/backup utility, similar to SyncBack on Windows, available in most repos.

  1. The graphical interface requires the installation of unison and unison-gtk. Unison may be installed without the graphical component, but all operations must then be initiated from a system running the GUI.
    • Network backups require RSH or SSH to be installed on both machines
  2. The standard wisdom seems to be that rsync does not do a true two-way sync, i.e., to sync to the newest file version going both ways you would have to do rsync ~/LocalFolder you@server:/home/you/RemoteFolder and then turn around and do rsync you@server:/home/you/RemoteFolder ~/LocalFolder. Add to that the fact that, like cp or scp, rsync requires separate commands for files with extensions, files without, and hidden files, and creating a bash script for syncing files is more complex than creating a Unison profile.
  3. Step One: If, like me, you are syncing only Documents, make your subfolder structure the same on both machines; e.g., if one PC has /home/you/Documents/recipe and the second PC has /home/you/Documents/Recipes, edit your folder structure to be the same on both PCs to avoid duplicate files and folders.
  4. Launch Unison and create a backup profile. On first use, you will be asked to create a profile:
    • Name of profile
    • Synchronization kind (Local, SSH, RSH, TCP)
    • "First" Directory (you can browse your mounted volumes)
    • "Second" Directory, if you chose Local
    • Host Machine Name (or IP Address)
    • User Name (If you haven't registered SSH keys, you will be prompted for a password on every synchronization.)
    • Check whether you want to use compression (on fast networks or slow processors, compression may create more overhead than it's worth).
    • Target directory (If it's on a remote server, you will need to type the full path, there is no browsing to the folder.)
    • Tell Unison if either folder uses FAT (say an un-reformatted USB stick)
    • If you are backing up to another system, Unison needs to be installed on both. If you are backing up to a server with no GUI desktop manager, you can install just the unison package without unison-gtk, but all the syncs will have to be initiated from the machine with a GUI. (Of course, if you back up to a remote volume that is mounted locally, it should be completely transparent to Unison). If you choose to sync via ssh (recommended), you will need ssh and ssh-server installed appropriately on each machine.
  5. Select and run your profile.
    • The first time, expect to get a warning that no archive files (index files that speed up the synchronization scan) were found. They will be created on the first sync.
    • Unison will look for differences between the files in the two selected directories. The differences will be displayed graphically, with arrows pointing left or right, indicating which directory contains the most current version of the file (by modification date). You can choose to propagate files either left or right (a conventional backup), do a merge (i.e., Unison itself decides how to combine data from files with the same name; obviously, that could be messy), or do a sync (i.e., the most current version of a file overwrites the older version, regardless of location). Click "Go" to do a true sync.
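For reference, the choices the GUI walks you through end up in a plain-text profile stored under ~/.unison/. A sketch of what a profile like the one described might look like; the file name, paths, and hostname are placeholders of my own, not from the episode:

```
# ~/.unison/documents.prf -- example profile; paths and host are placeholders
root = /home/you/Documents
root = ssh://you@server//home/you/Documents

# skip editor backup files
ignore = Name *~
```

With a profile in place, running "unison documents" from the command line performs the same synchronization the GUI profile would.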

hpr1637 :: Communities Are Made of People

Released on 2014-11-11 under a CC-BY-SA license.

hpr1632 :: 5150 Shades of Beer: 0002 Wichita Brewing Company

Released on 2014-11-04 under a CC-BY-SA license.


hpr1627 :: 5150 Shades of Beer: 0001 He'Brew Hops Selection from Smaltz Brewing Company

Released on 2014-10-28 under a CC-BY-SA license.

Smaltz Brewing Company - He'Brew (The Chosen Beer) Hops Collection

David's Slingshot - Pours golden, like an American lager, large head that subsides, rye aroma. Blend of multi-grain malts, an emphasis on hops w/o being excessively hoppy. Citrus taste from the hops. Malts: Specialist 2-row, Carmel Pils, Rye Ale, Crystal Rye, Vienna, Wheat, Flaked Oats Hops: Cascade, SAAZ, Summit, Citra, Crystal

Genesis - Dry, so dry you could be excused for wanting a glass of water to go with your beer. Bready, not biscuity, like a fresh sourdough loaf; almost makes you want to spread butter over your beer. Just enough hops to be interesting rather than annoying. Just a little sweet on the back end, so subtle you'll likely miss it on the first sip. Watery mouth feel. 5.5% ABV. Malts: Specialty 2-row, Munich, Core Munich 40, Wheat, Dark Crystal Hops: Warrior, Centennial, Cascade, Simcoe

Bittersweet Lenny's R.I.P.A. Double Rye (an ode to comedian Lenny Bruce). Pours very dark amber, small head. Aroma of sweet rye bread. Sweet honey taste w/o being cloying, washed away by the hops. Strong rye flavor, much more than Slingshot. Malts: 2-row, Rye Ale Malt, Torrified Rye, Crystal Rye 75, Crystal Malt 80, Wheat, Kiln Amber, Core Munich 60 Hops: Warrior, Cascade, Simcoe, Saaz, Crystal, Chinook, Amarillo, Centennial

Hop Manna IPA - Pours medium amber with a good head. Little distinct aroma. For the hops enthusiast who doesn't want other flavors getting in the way, but still not so hoppy that the hops get in the way of the hops. Hoppy enough to satisfy most hops heads without making your tongue feel like it is under assault from the Hop High Command. Malts: Specialty 2-row, Wheat, Munich, Vienna, Core Munich 60 Hops: Warrior, Cascade, Citra, Amarillo, Crystal, Centennial Dry Hop: Centennial, Cascade, Citra

Even though hoppy beers aren't my preference, Smaltz/He'Brew were 4 out of 4 winners. If you see this brand, grab it with both hands. Even if I hated the beer, I'd be a fan because each bottle lists the malts and hops, giving the home brewer a shot at replicating the brew and the expert consumer a hint of what the beer is going to taste like before purchasing.

hpr1364 :: Vintage Tech Iron Pay Phone Coin Box

Released on 2013-10-24 under a CC-BY-SA license.

A review of vintage tech, in the form of an iron pay phone coin box.

photo of Vintage_Tech_Iron_Pay_Phone_Coin_Box

hpr1363 :: Some pacman Tips By Way of Replacing NetworkManager With WICD

Released on 2013-10-23 under a CC-BY-SA license.

A while back, I used my Arch laptop to pre-configure a router for a customer, which of course required me to set up a static IP on my eth0. I should have done this from the command line; instead I used the graphical Network Manager. I had a lot of trouble getting the graphical application to accept a change in IP, and in getting it to go back to DHCP when I was done, and I wound up going back and forth between Network Manager and terminal commands. I've mentioned before that my ISP connection is behind two NATed networks: the router in the outbuilding where the uplink to the ISP is (this is also the network my server is on) and the router in my house. The static IP I used for the customer router configuration was in the same address range as my "outside" network. Though I successfully got eth0 back on DHCP, there was a phantom adapter still out there on the same range as the network my server was on, preventing me from ssh'ing in. I did come across a hack: if I set eth0 to an IP and mask of all zeros, then stopped and started dhcpcd on eth0, I could connect. I had also used the laptop on a customer's WiFi recently, and the connection was horrible.

I decided to see if just installing the wicd network manager would clear everything up (and it did), but before installing Wicd, I had to update the system, so first a little bit about pacman.

Arch's primary package manager is pacman. The -S operator is for sync operations, including package installation, for instance:

# sudo pacman -S <package_name>
..... installs a package from the standard repos and is more or less equivalent to the Debian instruction ....
# sudo apt-get install <package_name>
The option -y used with -S refreshes the master package list and -u updates all out of date packages, so the command

# sudo pacman -Syu .... is equivalent to the Debian instruction .... 
# sudo apt-get update .... followed by .... 
# sudo apt-get upgrade
# sudo pacman -Syu <package_name1> <package_name2>

would update the system, then install the selected packages.
Perhaps because of my slow Internet, the first time through, a few of the update packages timed out without downloading, so nothing installed. The second time through, one of the repos didn't even refresh. Thinking this was a connectivity problem, I kept trying the same update command over and over. Finally, I enlisted the help of Google.
'pacman -Syy' forces a refresh of all package lists "even if they appear to be up to date". This seems to automagically fix the timeout and connection problems, and the next time I ran the update, it completed without complaint. I was mad at myself when I found the solution, because I remember I'd had the exact same problem and the exact same solution before and had forgotten them. Podcasting your errors is a great way of setting them in your memory.
About the same time, I ran out of space on my 10Gb root partition. I remembered Peter64 had a similar problem, but I found a different solution than he did.
# sudo pacman -Sc
.... cleans packages that are no longer installed from the pacman cache as well as currently unused sync databases to free up disk space. I got 3Gb back! 'pacman -Scc' removes all files from the cache.
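To recap the pacman operations from this episode in one place (nothing new here, just the commands discussed above gathered together):

```shell
sudo pacman -Syu   # refresh the master package list and upgrade out-of-date packages
sudo pacman -Syy   # force a refresh of all package lists, even if they appear up to date
sudo pacman -Sc    # remove no-longer-installed packages and unused sync databases from the cache
sudo pacman -Scc   # remove all files from the cache
```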
Use pacman to install the package 'wicd' and if you want a graphical front end, 'wicd-gtk' or 'wicd-kde' (in the AUR). For network notifications, install 'notification-daemon', or the smaller 'xfce4-notifyd' if you are NOT using Gnome.
None of this enables wicd or makes it your default network manager on reboot; that you must do manually. First, stop all previously running network daemons (like netctl, netcfg, dhcpcd, or NetworkManager; you probably won't have them all). Let's assume for the rest of the terminal commands that you are root, then do:
# systemctl stop <package_name>, e.g. # systemctl stop NetworkManager

Then we have to disable the old network tools so they don't conflict with wicd on reboot.
# systemctl disable <package_name>, e.g. # systemctl disable NetworkManager

Make sure your login is in the users group
# gpasswd -a USERNAME users

Now, we have to initialize wicd
# systemctl start wicd.service
# wicd-client

Finally, enable wicd.service to load on your next boot up
# systemctl enable wicd.service

hpr1356 :: So, you've just installed Arch Linux, now what? Arch Lessons from a Newbie, Ep. 01

Released on 2013-10-14 under a CC-BY-SA license.

Manually installing packages from the AUR

Since completing my conversion from Cinnarch to Antergos (the published tutorial didn't work for me the first time, but the new Antergos forums were most helpful), a few utilities I installed under Cinnarch seem to be unavailable, notably 'yaourt' (Yet AnOther User Repository Tool), the package manager for the AUR (Arch User Repository). [The AUR is an unofficial, "use at your own risk" repository, roughly analogous to using a PPA in Ubuntu.] I tried 'sudo pacman -S yaourt' and learned it wasn't found in the repositories (I should note that when I removed the old Cinnarch repos from /etc/pacman.conf, I must have missed including the new Antergos repos somehow). I have since completed the transition.

Anyway, some experienced Arch users like Peter64 and Artv61 had asked me why I was using yaourt anyway instead of installing packages manually, which they considered to be more secure. I decided to take the opportunity to learn how to install packages manually, and to my surprise, it was not nearly as complex as I had feared. I had promised a series of podcasts along the theme, "So, you've just installed Arch Linux, now what?" This may seem like I've jumped ahead a couple steps, but I wanted to bring it to you while it was fresh in my mind.

Your first step may be to ensure you really have to resort to the Arch User Repositories to install the app you are looking for. I'd found Doc Viewer allowed me to access PDFs in Arch, but I really preferred Okular that I'd used in other distros. When 'sudo pacman -S okular' failed to find the package, I assumed it was only available from the AUR. However, a Google search on [ arch install okular ] revealed the package I needed was kdegraphics-okular, which I installed from the standard Arch repos.

Once you've determined the package you need exists in the AUR and not in the standard repos, you need to locate the appropriate package build; your Google search will probably take care of that. The URL will end in the package name. For the sake of example, let's use google-chrome: Chromium is already in the standard Arch repos, but if you want Chrome, you will have to find it in the AUR. Find the link labeled "Download the tarball"; it will be a file ending in .tar.gz. Before downloading a file, the Arch Wiki instructions for manually installing packages from the AUR recommend creating a designated folder to put them in; they suggest creating a "builds" folder in your home directory.

If you have a multi-core machine, you may be able to take advantage of a slight compiler performance increase by making adjustments to /etc/makepkg.conf. Look for "CFLAGS="; it should have a first parameter that looks like -march=x86_64 or -march=i686. Whichever it is, change it to -march=native and eliminate the second parameter that reads -mtune=generic. This will cause gcc to autodetect your processor type. Edit the next line, which begins with "CXXFLAGS", to read CXXFLAGS="${CFLAGS}"; this just causes the CXXFLAGS setting to echo CFLAGS. Details are in the Arch Wiki.
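As a sketch, the edited lines in /etc/makepkg.conf would look something like this (the -O2 and -pipe flags shown are just typical defaults and may differ on your system):

```shell
# /etc/makepkg.conf (excerpt)
# before: CFLAGS="-march=x86_64 -mtune=generic -O2 -pipe"
CFLAGS="-march=native -O2 -pipe"   # gcc autodetects the local CPU type
CXXFLAGS="${CFLAGS}"               # C++ flags simply echo the C flags
```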

Before installing your first AUR package, you will have to install base-devel [ pacman -S base-devel, as root or with sudo ]. Look for that .tar.gz file you downloaded; still using Chrome as an example, it's google-chrome.tar.gz. Unravel the tarball with "tar -xvzf google-chrome.tar.gz". Now, in your ~/builds folder you should have a new directory named "google-chrome". Drop down into the new folder. Since user repos are not as trusted as the standard ones, it is a good idea to open PKGBUILD and look for malicious Bash instructions. Do the same with the .install file. Build the new package with "makepkg -s". The "-s" switch lets makepkg resolve any unmet dependencies via pacman, prompting you for your sudo password.

You will have a new tarball in the format <application name>-<application version number>-<package revision number>-<architecture>.pkg.tar.xz; in our google-chrome example, the file name was google-chrome-27.0.1453.110-1-x86_64.pkg.tar.xz. We install it with pacman's upgrade function: "pacman -U google-chrome-27.0.1453.110-1-x86_64.pkg.tar.xz". This command installs the new package and registers it in pacman's local database.
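The whole manual build can be gathered into one sequence. This is only a sketch of the steps above; the tarball and package names are from the google-chrome example and will differ for other packages:

```shell
sudo pacman -S base-devel                  # one-time: compilers and build tools
mkdir -p ~/builds && cd ~/builds           # the Arch Wiki's suggested build folder
tar -xvzf google-chrome.tar.gz             # unpack the AUR tarball
cd google-chrome
less PKGBUILD                              # audit for malicious Bash before building
makepkg -s                                 # build; -s resolves dependencies via pacman
sudo pacman -U google-chrome-*.pkg.tar.xz  # install the freshly built package
```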

Before running Arch, I did not realize spell checking was centrally configured in Linux; I always assumed each application had its own spell checker. After installing Arch, I noticed auto-correct wasn't working anywhere. At length, I looked for a solution. I found LibreOffice and most browsers rely on hunspell for spell-checking functions. To get it working, you just need to install hunspell and the hunspell library appropriate for your language, e.g. "pacman -S hunspell hunspell-en"

StraightTalk/TracFone, a quick review.

Before leaving for Philadelphia last spring, I decided I needed a cheap smartphone on a prepaid plan. The only one with reliable service in my area is StraightTalk, or TracFone, sold in Walmart. For $35 a month, they advertise unlimited data, talk, and text. The one drawback: any form of tethering, wired or wireless, violates StraightTalk's TOS (frankly, I missed that condition before buying the phone). Hmm, would Chromecast count? Anyway, for some people, no tethering would be an immediate deal breaker. Frankly, I can see the advantages to tethering, but the one scenario I'm most interested in is isolating an infected system from a customer's network while still being able to access anti-malware resources. The budget phone I bought only supports 3G, and I'm not in the habit of streaming media to it, much less sharing it to another device.

That doesn't mean I don't use the bandwidth. I put a 16 gig SD card in my phone and started using it as an additional pipeline to download Linux ISOs. Anything I download, I can transfer to my network with ES File Explorer. I downloaded several gigs in the first month to test the meaning of Unlimited. Towards the end of the month, after I had bought a prepaid card for the next month, I had an off-again, on-again data connection. I thought the provider was punishing me for being a hog; it turns out the phone was glitchy, and turning it off and back on again always re-establishes the data connection. Therefore, I am happy to report that StraightTalk actually seems to mean what they say when they advertise "Unlimited". Unfortunately, many of my direct downloads fail the md5sum check. Direct downloads on 3G come down as fast as 75-100 KBps, but torrents seem to top out at 45 KBps, the same as my home connection.

hpr1290 :: MultiSystem: The Bootable Thumb Drive Creator

Released on 2013-07-12 under a CC-BY-SA license.

MultiSystem is a tool for creating bootable USB thumb drives that give you the option of launching multiple ISO images and other built-in diagnostic utilities. It can be an invaluable tool for system repair techs. Not to mention the many recovery and repair Live CDs that are available to fix Linux; most bootable Windows repair and anti-virus utilities run from a Linux-based ISO. The tech can even create ISO images of Windows installation media and replace a stack of DVDs with one thumb drive. Besides the installable package, there is also a MultiSystem LiveCD that, if I understand correctly, contains some recommended ISOs to install on your thumb drive.

MultiSystem Icon

For complete episode show notes please see

hpr1228 :: Utilizing Maximum Space on a Cloned BTRFS Partition

Released on 2013-04-17 under a CC-BY-SA license.

Utilizing Maximum Space on a Cloned BTRFS Partition

by FiftyOneFifty

  1. If you clone a disk to a disk, Clonezilla will increase (or decrease) the size of each partition in proportion to the relative sizes of the drives.
    1. I wanted to keep my / the same size and have no swap (new drive was SSD), so I did a partition to partition clone instead
    2. Created partitions on the new SSD with a GParted Live CD, 12Gb root (Ext4) and the remainder for /home (btrfs, because I planned to move to SSD from the start, and last summer only btrfs supported TRIM)
  2. After cloning /dev/sda1 to /dev/sdb1 and /dev/sda2 to /dev/sdb2 using Clonezilla, I inspected the new volumes with the GParted Live CD
    1. /dev/sdb2 had 40% inaccessible space, i.e., the usable space was the same size as the old /home volume
    2. GParted flagged the error and said I could correct it from the menu (Partition->Check) but btrfs doesn't support fsck, so it didn't work
    3. Tried shrinking the volume in GParted and re-expanding it to take up the free space, also didn't work.
  3. Discovered the 'btrfs' utility and found it is supported by the GParted Live CD
    1. Make a mount point
      • sudo mkdir /media/btrfs
    2. Mount the btrfs volume
      • sudo mount /dev/sdb2 /media/btrfs
    3. Use btrfs utility to expand the btrfs file system to the maximum size of the volume
      • sudo btrfs filesystem resize max /media/btrfs
    4. Unmount the btrfs volume
      • sudo umount /dev/sdb2
  4. Rechecked /dev/sdb2 with GParted, I no longer had inaccessible space
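The same recheck can be sketched from a terminal instead of GParted (device and mount point as in the example above):

```shell
sudo mount /dev/sdb2 /media/btrfs
sudo btrfs filesystem show /media/btrfs   # device size vs. bytes actually used
df -h /media/btrfs                        # free space should now reflect the full volume
sudo umount /media/btrfs
```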

hpr1224 :: Podio Book Report on Jake Bible's "Dead Mech"

Released on 2013-04-11 under a CC-BY-SA license.
In today's show FiftyOneFifty shares his review of the PodioBook "Dead Mech" by Jake Bible, and Reflections Upon Podcasting from the Bottom of a Well

hpr1220 :: Cinnarch 64 bit, Installation Review

Released on 2013-04-05 under a CC-BY-SA license.

Howdy folks, this is FiftyOneFifty, and today I wanted to talk about my experiences installing the 64-bit version of Cinnarch net edition on a dual core notebook. Cinnarch, of course, is a relatively new Arch-based distro running the Cinnamon fork of Gnome. I had previously installed Arch proper on this notebook, but when I rebooted to the hard drive, I lost the Ethernet connection. This is not uncommon, but there the notebook sat until I had time to work the problem. I wanted to start using the notebook, and I'd heard good things about Cinnarch, so it seemed like a simple solution. I went in knowing Cinnarch was in alpha, so I shouldn't have been surprised when an update broke the system less than a week after the install, but that comes later in my story.

Complete show notes are available here:

hpr1194 :: Copying a Printer Definition File Between Systems

Released on 2013-02-28 under a CC-BY-SA license.

I recently learned where Linux stores the PPD created when you set up a printer and how to copy it between PCs.  I'd like to briefly share that information with you.

This is how to copy a printer definition file (equivalent of a printer driver) from a system where the printer is already configured to another system that you want to be able to access the same printer.  Reasons you might need to do this:

a.  The normal CUPS (Common UNIX Printing System) setup doesn't have the right definition file for your printer.  In rare instances, you might have to download a ppd from the manufacturer or another source.  If so, copying the ppd may be easier than downloading it again.

b.  You configure CUPS and find there are no pre-provided printer drivers.  I thought this was the case when I first tried to configure CUPS under Linaro on my ODroidX.  For all intents and purposes, Linaro is an ARM port of mainline Ubuntu (Unity included).  I installed CUPS via Aptitude and tried to configure a printer as I would on any Linux system.  When I got to printer selection, the dropdown to select a manufacturer (the next step would be to choose a model) was greyed out, as was the field to enter a path to a ppd file.  I closed the browser and tried again, and the same thing happened.  This is what prompted me to find out where to find a PPD file on another system and copy it.  I never got to see how it would work, because when I had the ppd file copied over and ready to install, the manufacturers and models in CUPS were already populated.  There had been an update between my first and second attempts to configure CUPS on the ODroidX, but I'd rather say it was a glitch the first time, instead of the ppd's suddenly showing up in the repo.

c.  When I installed Arch on another system, I found there were far fewer options for choosing models; in my instance, there was only one selection for HP Deskjets.  I suspect borrowing the model-specific ppd from another distro will increase the functionality of the printer.

Copying the ppd

1.  On the computer where the printer is already configured, find the .ppd (PostScript Printer Description) file you generated (the filename will be the same as the printer name) in /etc/cups/ppd/model (or possibly just /etc/cups/ppd; neither my ODroidX nor my Fedora laptop has the "model" folder).
2. Copy it to your home folder on the new system (you can't place the file in its final destination yet, unless you are remoted in as root)
3. According to the post I found on LinuxQuestions, CUPS looks for a GZipped file [ gzip -c myprinter.ppd > myprinter.ppd.gz ; the '-c' argument writes the compressed data to standard output, leaving the original file intact, and you use redirection to generate the new file.]  Recall that I never got to try this, because when I re-ran CUPS, the printer selections were already populated.
4. Copy the archived file to /etc/cups/ppd/model on the machine that needs the printer driver
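The gzip -c behaviour from step 3 can be sketched as follows (the filename and contents are just an example):

```shell
printf 'sample ppd contents' > myprinter.ppd
gzip -c myprinter.ppd > myprinter.ppd.gz   # -c writes to stdout; redirection creates the .gz
gunzip -c myprinter.ppd.gz                 # prints the original contents; the .ppd is untouched
```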

Configure CUPS (IP Printer)
1. Open localhost:631 in a browser
2. Click Administration tab
3. Click "Add a Printer" button
4. Log in as an account with root privileges
5. For Ethernet printers, select "AppSocket/HP JetDirect" button and click "Continue"
6. From the examples presented, " socket://PRINT_SERVER_IP_ADDRESS:9100  " works for me, click continue
7. On the next page, fill in a printer name; this will be the file name for the PPD generated, as well as how the printer is labeled in the printer select dialog.  The other fields are optional.  Click continue.
8. (I am assuming, if the LinuxQuestions post was right, that CUPS will find the gz file and show the manufacturer and model as options) From the list, select a manufacturer, or input the path to your PPD file
9. Select the printer model
9a. I think you could copy over the ppd as is and type the path to it in the field where it asks for a ppd file.
10. Modify or accept the default printer settings

Or just copy the ppd and compare the settings in /etc/cups/printers.conf

hpr1167 :: Kernels in the Boot, or What to Do When Your /boot folder Fills Up

Released on 2013-01-22 under a CC-BY-SA license.

Synopsis of the Problem

You may have heard me mention that I purchased a used rack server a couple of years ago to help teach myself Linux server administration. It's an HP DL-380 G3 with dual single-core Xeons and 12Gb of RAM. It came with two 75Gb SCSI drives in RAID1, dedicated to the OS. Since the seller wanted more for additional internal SCSI drives, and those old server drives are limited to 120Gb anyway, I plugged in a PCI-X SATA adapter, connected a 750Gb drive externally, and mounted it as /home. I moved over the 2Gb USB drive I had on my Chumby (as opposed to transferring the files) and it shows up as /media/usb0. I installed Ubuntu Server 10.04 (recently updated to 12.04) because CentOS didn't support the RAID controller out of the box, and I had been frustrated by the lack of up-to-date packages on Debian Lenny on the desktop.

With 75Gb dedicated to the OS and application packages, you can imagine my surprise when, after an update and upgrade, I had a report that my /boot was full. It wasn't until I looked at the output from fdisk that I remembered the Ubuntu installer created a separate partition for /boot. At the risk of oversimplifying the purpose of /boot, it is where your current and past kernel files are stored. Unless the system removes older kernels (most desktop systems seem to), the storage required for /boot will increase with every kernel upgrade.

This is the output of df before culling the kernels

Filesystem                         1K-blocks      Used Available Use% Mounted on
/dev/mapper/oriac-root              66860688   6593460   56870828  11% /
udev                                 6072216         4    6072212   1% /dev
tmpfs                                2432376       516    2431860   1% /run
none                                    5120         0       5120   0% /run/lock
none                                 6080936         0    6080936   0% /run/shm
cgroup                               6080936         0    6080936   0% /sys/fs/cgroup
/dev/cciss/c0d0p1                     233191    224953          0 100% /boot
/dev/sda1                          721075720 297668900  386778220  44% /home
/dev/sdb1                         1921902868 429219096 1395056772  24% /media/usb0

This directory listing shows I had many old kernels in /boot


The Solution I Found

I ran across some articles that suggested I could use 'uname -r' to identify my current running kernel (3.2.0-31; the -32 kernel apparently ran out of space before it completed installing) and just delete the files with other numbers. That didn't seem prudent, and fortunately I found what seems to be a more elegant solution.

Verify your current running kernel

uname -r

Linux will often keep older kernels so that you can boot into an older version from Grub (at least on a desktop). Fedora has an environment setting to tell the OS just how many old kernels you want to maintain [installonly_limit in /etc/yum.conf]. Please leave a comment if you know of an analog in Debian/Ubuntu.

List the kernels currently installed on your system.

dpkg --list | grep linux-image

Cull all the kernels but the current one

The next line is the key; make sure you copy and paste exactly from the show notes. I'm not much good with regular expressions, but I can see it's trying to match all the packages starting with 'linux-image' but containing a number string different from the one returned by 'uname -r', and remove those packages. Obviously, this specific command will only work on Debian/Ubuntu systems, but you should be able to adapt it to your distro. The '-P' is my contribution, so you can see what packages you are eliminating before the change becomes final.

sudo aptitude -P purge ~ilinux-image-\[0-9\]\(\!`uname -r`\)
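If you'd rather preview the candidates by hand before trusting the pattern, a simple filter over the dpkg list (a sketch, assuming a Debian/Ubuntu system) shows the kernels other than the running one:

```shell
dpkg --list | grep linux-image | grep -v "$(uname -r)"   # kernel packages the purge would target
```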

Make sure Grub reflects your changes

Finally, the author recommends running 'sudo update-grub2' to make sure Grub reflects your current kernel status (the above command seems to do this after every operation anyway, but better safe than sorry).

It's worth noting I still don't have my -32 kernel update, so I'll let you know if there is anything required to get kernel updates started again.

My df now shows 14% usage in /boot and a directory listing on /boot only shows the current kernel files.

Filesystem                         1K-blocks      Used Available Use% Mounted on
/dev/mapper/oriac-root              66860688   5405996   58058292   9% /
udev                                 6072216        12    6072204   1% /dev
tmpfs                                2432376       516    2431860   1% /run
none                                    5120         0       5120   0% /run/lock
none                                 6080936         0    6080936   0% /run/shm
cgroup                               6080936         0    6080936   0% /sys/fs/cgroup
/dev/cciss/c0d0p1                     233191     29321     191429  14% /boot
/dev/sda1                          721075720 297668908  386778212  44% /home
/dev/sdb1                         1921902868 429219096 1395056772  24% /media/usb0


hpr1113 :: TermDuckEn aptsh - screen - guake

Released on 2012-11-07 under a CC-BY-SA license.

I recently discovered apt shell (aptsh), a pseudo-shell which gives users of distributions that use apt for package management quick access to the functionality of apt-get. You should find aptsh in the repositories of Debian-based distros. Once installed, you can launch 'aptsh' as root from the command prompt (i.e. 'sudo aptsh').

One of the drawbacks of installing software from the terminal is that sometimes you don't know the exact name of the package you want to install. From the aptsh> prompt, 'ls' plus a search string will show all the packages that have that string in their names. You can type 'install' plus a partial package name and use TAB completion to finish the instruction. The function of the 'update' and 'upgrade' commands are self explanatory, unfortunately, you can't string them together on the same line like you can in bash:

sudo apt-get update && sudo apt-get -y safe-upgrade

Instead, you use the backtick [ ` ] key to put aptsh into queue mode. In queue mode, you can enter commands one by one to be launched in sequence at a later time. To bring your system up to date, you could run:

aptsh> `

aptsh> update

aptsh> upgrade

aptsh> `

aptsh> queue-commit-say yes

Backtick toggles queue entry, and queue-commit runs the queue. “queue-commit-say y” tells aptsh to answer in the affirmative to any queries from the commands executed in the queue in much the same way “apt-get -y safe-upgrade” confirms software updates without user interaction. Apt shell is capable of other apt related tasks, but I think I've covered the most useful ones.

The trouble with running aptsh is that unless you start it in a terminal with the computer and leave it running all day (as opposed to opening it as a new shell within your terminal every time you want to update or install), despite the convenience of package name search and TAB completion, it really won't save you any keystrokes. With that in mind, I started looking for ways to have the apt shell available at a keystroke (we will leave the wisdom of leaving a shell open with a subset of root privileges for another day). I had guake installed, but rarely used it because I usually have multiple terminal tabs open since I am logged into my server remotely. [Actually, guake supports tabbed terminals quite well. You can open a new tab with <Shift><Ctrl>T and switch between terminal tabs with <Ctrl><PgUp> and <Ctrl><PgDn>, or by clicking buttons that appear at the bottom of the guake window. I had, however, forgotten this until doing further research for this story. Since this revelation ruins my story, we will forget about tabbed terminal support in guake and not mention it again.]

I am also going to assume everyone is familiar with guake. If not, suffice it to say guake is a terminal that pops down in the top third of the screen when you hit a hotkey, <F12> being the default. It returns to the background when you press <F12> again or click the lower part of the desktop. It is patterned after the command shell in the game Quake that let you input diagnostic and cheat codes, hence the name. Since I wasn't using guake as a terminal anyway, I wanted to see if I could make it run apt shell by default. I found you can access guake's graphical configuration manager by right clicking inside the open terminal and selecting preferences.

On the first preferences tab, I found “command interpreter”, but since aptsh is only a pseudo-shell, it isn't found in the dropdown list. However, one option was “screen”, which would give me a way to run multiple terminals that I thought guake lacked. Next, I had to look up how to configure screen. I figured there must be a way to make screen run aptsh in one session by default, and I found it. In the show notes I've included my .screenrc file from my home folder, which I made with the help of this article from the online Red Hat Magazine:


hardstatus alwayslastline

hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{=kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B}%Y-%m-%d %{W}%c %{g}]'

# Default screens

screen -t shell1 0

screen -t apt-shell 1 sudo aptsh

screen -t server 2 ssh 5150server

screen -t laptop 3 ssh 5150@Redbook


The first two lines set up the screen status line: the first puts it at the bottom of the terminal, the second sets up the status line to display the hostname and date, and an indicator that highlights which screen window you are looking at. The # Default screens section below sets up the sessions screen opens by default. The first line opens up a regular terminal named “shell1” and assigns it to session zero. The second opens a window called “apt-shell” (this is how it's identified on the status line) and launches apt shell. The last two log me into my server (host name aliasing made possible by configuring my homefolder/.ssh/config, thanks Ken Fallon) and my laptop running Fedora, respectively. I still have to cycle through my screen windows and type in my passwords for sudo and ssh. The configuration could be set up to launch any bash command or script by default. The cited article doesn't include any more configuration tips, but I'm certain there are ways to set up other options, such as split windows by default.

Since I also run screen on my remote connection to my server, I have to remember the command prefix is <Ctrl>a,a. Ergo, if I want to move to the next window in the screen session (running under guake) on the local PC, the command is <Ctrl>a, then n. To go to the next screen window in the screen session on my server, running inside another screen session on my local PC, it's <Ctrl>a,a,n.

So, that's how I learned to run apt shell inside screen inside guake. I can be contacted at or by using the contact form on TheBigRedSwitch.DrupalGardens.Com

hpr1106 :: Of Fuduntu, RescaTux (or the Farmer Buys a Dell)

Released on 2012-10-29 under a CC-BY-SA license.

This is another one of my How I Did It Podcasts (or How I Done It if you rather) where my goal is to pass along the things I learn as a common Linux user administering my home computers and network, and engaging in the types of software tinkering that appeal to our sort of enthusiast.

I'd been thinking for a while about replacing the small computer on my dinner table. I had been using an old HP TC1000, one of the original active stylus Windows tablets, of course now upgraded to Linux. With the snap-in keyboard, it had a form factor similar to a netbook, with the advantage that all the vulnerable components were behind the LCD, up off the table and away from spills. It had served my purpose of staying connected to IRC during mealtimes, and occasional streaming of live casts, but I wanted more. I wanted to be able to join Mumble while preparing meals, I wanted to be able to load any website I wanted without lockups, and I wanted to stream video content and watch DVDs.

I was concerned that putting a laptop on the table was an invitation to have any spilled beverage sucked right into the air intakes, and I never even considered a desktop system in the dining room until I saw a refurbished Dell Inspiron 745 from GearXS (I wouldn't normally plug a specific vendor, but GearXS is now putting Ubuntu on all its used corporate castoff systems). This Dell had the form factor that is ubiquitous in point-of-sale: a vertical skeleton frame with a micro system case on one side and a 17” LCD on the other, placing all the electronics several inches above the surface on which it is placed. I even found a turntable intended for small TVs that lets me smoothly rotate the monitor either to my place at the table or back towards the kitchen where I am cooking. I already had a sealed membrane keyboard with an integrated pointer and wireless-N USB dongle to complete the package. Shipped, my “new” dual core 2.8GHz Pentium D system with an 80GB hard drive and Intel graphics was under $150. [The turntable was $20 and an upgrade from 1GB to 4GB of used DDR2 was $30, but both were worth it.] Since the box shipped with Ubuntu, I thought installing the distro of my choice would be of no consequence, and that is where my tale begins.

I'm going to start my story towards the end, as it is the most important part. After the installation of four Linux distros in as many days (counting the Ubuntu 10.04 LTS the box shipped with, a partial installation of SolusOS 2r5, Fuduntu, and finally Lubuntu 12.04), I discovered I couldn't boot due to Grub corruption (the machine POSTed, but where I should have seen Grub, I got a blank screen with a cursor in the upper left corner).

A. I thought I would do a total disk wipe and start over, but DBAN from the UBCD for Windows said it wasn't able to write to the drive (I'd never seen that before).

B. Started downloading the latest RescaTux ISO. Meanwhile, I found an article that told me I could repair Grub with an Ubuntu CD, so I tried booting from the Lubuntu 12.04 CD (using the boot device selector built into the hardware). Same black screen, preceded by a message that the boot device I had selected was not present. Same thing with the Fuduntu DVD that had worked the day before. With the exception of UBCD, I couldn't get a live CD to boot.

C. Now having downloaded the RescaTux ISO, and suspecting a problem with the optical drive, I used Unetbootin to make a RescaTux bootable thumb drive. RescaTux has a pre-boot menu that lets you choose between 32- and 64-bit images, but that was as far as I got; nothing happened when I made my selection.

D. At this point, I am suspecting a hardware failure that just happened to coincide with my last install. This is an Ultra Small Form Factor Dell, the kind you see as point of sale or hospital systems, so there weren't many components I could swap out. I didn't have any DDR2 laying around, but I did test each of the two sticks the system came with separately, with the same results. I then reasoned a Grub error should go away if I disabled the hard drive, so I physically disconnected the drive and disabled the SATA connector in the BIOS. I still couldn't boot to a live CD. Deciding there was a reason this machine was on the secondary market, I hooked everything back up and reset the BIOS settings to the defaults; still no luck.

E. As a Hail Mary the next day, I burned the RescaTux ISO to a CD and hooked up an external USB optical drive. This time, I booted to the live CD, did the two-step Grub repair, and when I unplugged the external drive, I was able to boot right into my Lubuntu install. Now booting to live CDs from the original optical drive and from the thumb drive worked. RescaTux FTW.

Now a little bit on how I got in this mess. As I said, the Dell shipped with 10.04, but I wanted something less pedestrian than Ubuntu (ironic that I wound up there anyway). I tried Hybride, but once again, like my trial on the P4 I mentioned on LinuxBasix, the Live CD booted, but the icons never appeared on the desktop (I think it's a memory thing; the Dell only shipped with a gig, shared with the integrated video). After Hybride, I really wanted to be one of the cool kids and run SolusOS, but the install hung twice transferring boot/initrd.img-3.3.6-solusos. I cast around for a 64-bit ISO I had on hand, and remembered I'd really wanted to give Fuduntu a try. Fuduntu is a rolling release fork of Fedora, with a Gnome 2 desktop, except that the bottom bar is replaced with a Mac style dock, replete with bouncy icons (cute at first, but I could tell right away they would get on my nerves). However, I found I liked the distro, despite the fact I found the default software choices a little light for a 900MB download (Google Office, Chromium, no Firefox, no Gimp). Worst of all, no Mumble in the repos at all (really, Fuduntu guys? While trying to install Mumble, do you know how many reviews I found that can be summed up as "Fuduntu is great, but why is there no Mumble?"). Unfortunately, I put Mumble on the back burner while I installed and configured my default set of comfort apps from the repos (Firefox, XChat, Gimp, VLC, LibreOffice, etc.). [BTW, with the anticipated arrival of a 2.4GHz headset, I hope to be able to use the new machine to join the LUG/podcast while preparing, and dare I say eating, dinner.]

I visited the Mumble installation page on SourceForge, and found they no longer linked to .deb files and Fedora .rpms, as they assume you can install from your repositories. Thinking someone must have found an easy solution, I hit Google. The best answer I found was a page on the Fuduntu forums ( ) that suggested downloading the Mumble and a dozen prerequisite library .rpms from a third party site called I visited, and found that when I looked up each library, I got a dozen different links to versions of the file. Then I saw a link that seemed to offer the promise of simplifying my task: if I subscribed to, I could add their whole catalog as a repo. While researching the legitimacy of, I found them mentioned in the same sentence as RPMFusion as an alternate repository for Fedora. I decided to install the RPMFusion repos as well, thinking I might find some of the needed libraries there. I registered with pbone, and discovered I would only have access to their repository for 14 days free, after which it would cost $3 a month (after all, hosting such a service must cost money). I figured the free trial would at least get Mumble installed, and went through the setup. Among the questions I had to answer were which Fedora version I was running (I picked 17, since Fuduntu is rolling) and 32 or 64 bit. generated a custom .repo file to place in my /etc/yum.repos.d directory. By this time, I'd already set up RPMFusion.

The fun started when I ran 'yum update'. I got "Error: Cannot find a valid baseurl for repo: rpmfusion-free". It turns out ( ) the locations of the RPMFusion servers are usually commented out in the .repo files; Fedora must know where they are, but I guess Fuduntu does not. I uncommented each of the baseurl statements (there are three) in each of the RPMFusion .repo files (there are four files: free, non-free, free-testing, and non-free-testing). I then re-ran 'yum update', and this time I was told the paths for the RPMFusion baseurls didn't exist. I opened up the path in a browser and confirmed it was indeed wrong. I pruned subdirectories from the path one by one until I found a truncated URL that actually existed on the RPMFusion FTP server. I looked at the .repo files again and figured out the paths referenced included global environment variables that were inconsistent between Fedora and Fuduntu. For instance, $release in Fedora would return a value like 15, 16, or 17, where in Fuduntu it resolves to 2012. I figured if I took the time, I could walk up and down the FTP server and come up with literal paths to put in the RPMFusion .repo files, but instead I just moved the involved .repo files into another folder to be dealt with another day.
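The uncommenting chore (three baseurl lines in each of four files) can be done in one pass with sed. This is my own sketch, not from the episode; the demo below works on a scratch copy, while on the real system the target would be /etc/yum.repos.d/rpmfusion-*.repo, run via sudo:

```shell
# Demonstrate uncommenting baseurl lines on a scratch copy of a repo file.
# The file name and contents are mocked-up assumptions for illustration.
mkdir -p /tmp/repos.d
printf '[rpmfusion-free]\n#baseurl=http://example/path\nmirrorlist=x\n' \
  > /tmp/repos.d/rpmfusion-free.repo

# Strip the leading '#' from every baseurl line, in place.
sed -i 's/^#baseurl=/baseurl=/' /tmp/repos.d/rpmfusion-free.repo

grep '^baseurl=' /tmp/repos.d/rpmfusion-free.repo
```

On the real box the same sed expression pointed at /etc/yum.repos.d/rpmfusion-*.repo would handle all four files at once.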

I again launched 'yum update'. This time there were no errors, but I was getting an excessive number of new files from my new repo ('yum update' updates your sources and downloads changed files all in one operation). It's possible the rolling Fuduntu is closer to Fedora 16, so when I told I was running 17, all the files in the alternate repo were newer than what I had. In any case, I had no wish to be dependent on a repo I had to rent at $3 a month, so I canceled the operation, admitted defeat, and started downloading the 64-bit version of Lubuntu. I know I said I would rather have a more challenging distro, but because of its location, this needs to be a just-works PC, not a hack-on-it-for-half-a-day box. I would have liked to give Mageia, Rosa, or PCLinuxOS a shot, but too many packages from outside the repos (case in point, Hulu Desktop) are only available in Debian and Fedora flavors. You know the rest: I installed Lubuntu, borked my Grub, loop back to the top of the page.

hpr1101 :: Recovery of an (en)crypted home directory in a buntu based system

Released on 2012-10-22 under a CC-BY-SA license.

Recovery of an (en)crypted home directory in a 'buntu based system

by 5150

This is going to be the archetypal “How I Did It” episode because it fulfills the criterion of dealing with an issue most listeners will most likely never have to resolve, but might be invaluable to those few who some day encounter the same problem: how to recover an encrypted home folder on an Ubuntu system.

I enabled home folder encryption on installation of a Linux Mint 8 system some years back and it never gave me trouble until the day that it did. Suddenly, my login would be accepted, but then I would come right back to GDM. Finally I dropped into a text console to try to recover the contents of my home folder, and instead found two files, Access-Your-Private-Data.desktop and README.txt. README.txt explained that I had arrived in my current predicament because my user login and password for some reason were no longer decrypting my home folder (Ubuntu home folder encryption is tied to your login, no additional password is required). Honestly, until I lost access to my files, I'd forgotten that I'd opted for encryption. I found two articles that described similar methods of recovery. I'd tried following their instructions and failed, probably because I was mixing and matching what seemed to be the easiest steps to implement from the two articles. When I took another look at the material weeks later, I discovered I had missed a link in the comments that led me to an improved method added in Ubuntu 11.04 that saves several steps:

  1. Boot to an Ubuntu distribution CD (11.04 or later)

  2. Create a mount point and mount the hard drive. Of course, if you configured your drive(s) with multiple data partitions (root, /home, etc.) you would have to mount each separately to recover all the contents of your drive, but you only have to worry about decrypting your home directory. If you use LVM, and your home directory spans several physical drives or logical partitions, I suspect things could get interesting.

    1. $sudo mkdir /media/myhd

      1. /media is owned by root, so modifying it requires elevation

    2. You need to confirm how your hard drive is registered with the OS. I just ran Disk Utility and confirmed that my hard drive was parked at /dev/sda, which meant that my single data partition would be at /dev/sda1

    3. $sudo mount /dev/sda1 /media/myhd

    4. Do a list on /media/myhd to confirm the drive is mounted

      1. $ls /media/myhd

    5. The new recovery command eliminates the need to re-create your old user

      1. $sudo ecryptfs-recover-private (yes, ecrypt not encrypt)

      2. You will have to wait a few minutes while the OS searches your hard drive for encrypted folders

        1. When a folder is found, you will see

          INFO: Found [/media/myhd/home/.ecryptfs/username/.Private].

          Try to recover this directory? [Y/n]

          • Respond “Y”

        2. You will be prompted for your old password

        3. You should see a message saying your data was mounted read-only at /tmp/ecryptfs.{SomeStringOfCharacters}

          • I missed the mount point at first; I was looking for my files in /media/myhd/home/myusername

    6. If you try to list the files in /tmp/ecryptfs.{SomeStringOfCharacters}, you will get a “Permission Denied” error. This is because your old user owns these files, not your distribution CD login

      1. [You will probably want to copy “/tmp/ecryptfs.{SomeStringOfCharacters}” into your terminal buffer, as you will need to reference it in commands. You can select it with your mouse in the “Success” message and copy it with <Ctrl><Alt>c, then paste it later with <Ctrl><Alt>v.]

      2. I tried taking ownership of /tmp/ecryptfs.{SomeStringOfCharacters}; I should have guessed that would work.

        1. From my command prompt, I can see my user name is “ubuntu”

        2. $ sudo chown -R ubuntu /tmp/ecryptfs.{SomeStringOfCharacters}

          • -R takes ownership of subdirectories recursively

          • It's a good time to get a cup of coffee

    7. Next, we need to copy the files in our home directory to another location; I used an external USB drive (it was automounted under /media when I plugged it in). If you had space on the original hard drive, I suppose you could create a new user and copy the files to the new home folder. I decided to take the opportunity to upgrade my distro. Some of the recovered files will wind up on my server and some on my newer laptop.

      1. One could run Ubuntu's default file manager as root by issuing “sudo nautilus &” from the command line (the “&” sends the process to the background so you can get your terminal prompt back)

        1. Before copying, be sure to enable “View Hidden Files” so the configuration files and directories in your home directory will be recovered as well. As I said, there are select configuration files and scripts in /etc I will want to grab as well.

      2. I had trouble with Nautilus stopping on a file it couldn't copy, so I used cp from the terminal so the process wouldn't stop every time it needed additional input.

        1. $ cp -Rv /tmp/ecryptfs.{SomeStringOfCharacters} /media/USBDrive/Recovered

          • Of course the destination will depend on what you've named your USB drive and what folder (if any) you created to hold your recovered files

          • -Rv copies subdirectories recursively and verbosely, otherwise the drive activity light may be your only indication of progress. The cp command automatically copies hidden files as well.

          • Because of the file ownership difficulties, I could only copy the decrypted home folder in its entirety.

      3. I still had trouble with access due to ownership once I detached the external drive and remounted it on my Fedora laptop, but I took care of that with:

        1. $ su -c 'chown -R mylogin /media/USBDrive/Recovered'

hpr1094 :: Linux, Beer, and Who Cares?

Released on 2012-10-11 under a CC-BY-SA license.

By BuyerBrown, RedDwarf, and FiftyOneFifty

This is a recording of an impromptu bull session that came about one night after BuyerBrown, RedDwarf, and I had been waiting around on Mumble for another host to join in. After giving up on recording our scheduled podcast, we stayed up for about an hour talking and drinking when Buyer suddenly asked Red and me to find current events articles concerning Linux. When that task was completed, Buyer announced he was launching a live audiocast over with us as his guests. You are about to hear the result. Topics range from the prospects of Linux taking over the small business server market now that Microsoft has retreated from the field, to Android tablets and the future of the desktop in general, and the (at the time) revelation that Steam would be coming to Linux (on the last point, let me be the first to say that I am glad some of the concerns in my rant appear to be unfounded; apparently, after a lot of work, Left 4 Dead 2 runs faster under Linux than it does under Windows on equivalent hardware). This podcast was recorded on a whim, but I can't promise it won't happen again.

hpr1000 :: Episode 1000

Released on 2012-05-31 under a CC-BY-SA license.

Hacker Public Radio commemorated its 1000th episode by inviting listeners, contributors, and fellow podcasters to send in their thoughts and wishes on the occasion. The following voices contributed to this episode.

FiftyOneFifty, Chess Griffen, Claudio Miranda, Broam, Leo LaPorte and Dick DeBartolo, Dan Lynch, Becky and Phillip (Corenominal) Newborough, Dann Washko, Frank Bell, Jezra, Fabian Scherschel, k5tux, CafeNinja, imahuph, Johan Vervloet, Kevin Granade, Knightwise, MrX, NYBill, Quvmoh, pokey, MrGadgets, riddlebox, Saturday Morning Linux Review, Scott Sigler, Robert E. Wooden, Sigflup, BrocktonBob, Trevor Parsons, Ulises Manuel López Damián, Verbal, Ahuka, westoztux, Toby Meehan, Chris Garrett, winigo, Ken Fallon, Lord Draukenbleut, aukondk, Full Circle Podcast

hpr0938 :: Cloning Windows WiFi Profiles and Installing Skype Under 64-bit Fedora

Released on 2012-03-06 under a CC-BY-SA license.

The other day I was copying a customer's files and settings from an old laptop to a new one. Much of this tedious task was handled automatically by Fab's Autobackup (, and 25% off until Valentine's Day, BTW), but I was disappointed that his dozen WiFi access point profiles and passwords were not among the settings that Fab's copied for me. For a family laptop, you usually just have to re-enter the password for the home router, and maybe once again for your work wireless. If you are a tech for an enterprise, and the new mobile workstation needs to connect to multiple access points, you always wind up walking around the business or campus, connecting to each SSID in turn and entering a different key. This time, the laptop would be used in multiple remote offices. The user would have been able to re-create those connections as he traveled to each office, but he asked me if it wouldn't be possible instead to transfer the profiles with the rest of his data.

I had no doubt that I would be able to find a free tool to back up and restore wireless connections, but I have become wary of Windows utilities that can be found at the end of a Google search but have not been recommended by other techs or a trusted website. I was surprised to find my answer in some functions added to the DOS netsh (or "net shell") command, starting with Windows Vista.

Open a Windows command prompt on the laptop that already has the WiFi keys set up, ergo the old one, and type:

netsh wlan show profiles

then press return. This will give you a list of your existing wireless connection profiles by name (i.e. by SSID). Now you can pick a WiFi profile name and enter on the command line:

netsh wlan export profile name="SSID_above_in_quotes" folder="C:\destination"

Quotes are required for the WiFi profile name, but not for the destination folder unless you use spaces in your Windows directory names. If you want to create export files for all your wireless connections, you may omit the "name=" part.

netsh wlan export profile folder=<destination_path>

Omitting "folder=" of course creates export files in the current directory.

The netsh wlan export profile command generates an .XML export file for each selected profile. Each export file contains an SSID, channel, encryption type, and a hash of the encryption key to be transferred to the new laptop, except that it doesn't work, at least not for me and several others who posted articles to the web. On my first try, I was able to import everything but the encryption key; all the access points showed up in "Manage Wireless Networks", but I was prompted for a key when I tried to connect. I thought maybe this was Microsoft's attempt at security, but I could see a field for the hash in the .XML, and when I went back to the article on netsh it was clear I was supposed to get the keys too. A little more Googling revealed a second article on netsh that gave me an argument the first one omitted: adding key=clear at the very end of the netsh command causes the keys to be exported in clear text! Our command now looks like:

netsh wlan export profile folder=<destination_path> key=clear
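As an aside (my own suggestion, not part of the original workflow): with key=clear the plain-text key lands in a keyMaterial element of each export file, so you can audit what you are carrying around with a quick grep once the .xml files reach a Linux box. The file name and contents below are mocked-up assumptions for the demo:

```shell
# Build a mocked-up export file so the grep can be demonstrated anywhere;
# a real file would come from the netsh wlan export command above.
mkdir -p /tmp/wifi && cd /tmp/wifi
printf '<sharedKey><keyMaterial>MyHomeKey</keyMaterial></sharedKey>\n' \
  > 'Wireless Network connection-HomeAP.xml'

# Pull the clear-text key out of every exported profile in the directory.
grep -o '<keyMaterial>[^<]*</keyMaterial>' Wireless\ Network\ connection-*.xml
```

If a profile was exported without key=clear, the keyMaterial element holds an opaque blob instead of the readable key, which is a quick way to spot which exports will fail to import cleanly.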

Copy your .XML profile files to the new laptop (I am assuming via USB key). The filenames will be in the format:

Wireless Network connection-<profile-name-same-as-SSID>.xml

You understood me correctly, this DOS command generates file names with spaces in them. Copy the .XML files to the new system and import the profiles with:

netsh wlan add profile filename="<file name in quotes to account for spaces>.xml"

It's not quite as odious as it looks because DOS now supports TAB completion, so you just have to type:

netsh wlan add profile filename="Wi and press <TAB>

The rest of the name of the first profile will be filled in, complete with the terminating quote. Press <ENTER> and you should get a message that wireless profile has been imported. To import the remaining profiles, just use <F3> or the up arrow and edit the last command. Since it was set to auto-connect, the laptop I was working on made a connection to the local access point the instant the corresponding profile was imported.
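The <F3>-and-edit cycle could also be scripted. A hypothetical cmd.exe one-liner (an assumption on my part, not from the article; double the percent sign to %%f if you put it inside a .bat file):

```bat
:: Untested sketch: import every exported WiFi profile in the current
:: directory in one pass; the wildcard also matches the space-laden names.
for %f in (Wireless*.xml) do netsh wlan add profile filename="%f"
```

The quotes around %f keep the spaces in the generated file names from breaking the filename= argument.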

Learning these new netsh functions may make configuring WiFi more convenient (I can maintain a library of wireless profiles for the organizations I service, or I could implement an encryption key update via a batch file). I can also see ominous security implications for networks where users aren't supposed to be privy to the connection keys and have access to pre-configured laptops, such as schools. One could whitelist the MAC addresses of only the organization's equipment, but there is always that visiting dignitary to whom you are expected to provide unfettered network access. Besides, anyone with access to the command line can use ipconfig to display the laptop's trusted MAC address, which can be cloned for access from the parking lot or from across the street. The only way I see to secure the connection from someone with physical access to a connected laptop is to install kiosk software that disables the command line.

Installing Skype on 64-bit Fedora

Last week I decided to install Skype as an alternative way to contact people with land lines. I hadn't played with Skype since I had it on my Windows workstation, so I downloaded and installed the .rpm for Fedora 13+. All Skype offers is a 32-bit package for Fedora, and sure enough, when I tried to launch Skype, the icon bounced around Compiz fashion, then the application item on the taskbar closed without doing anything. I looked for information on troubleshooting Skype from the logs, and an Arch wiki article told me I might have to create ~/.Skype/Logs, which I did. The application continued to crash without generating a log. I heard someone mention once in a call-in podcast that they'd had to perform additional steps to make 32-bit Skype work in 64-bit Fedora 15, and a Google search took me to the khAttAm blog (link below). I experienced some trepidation because the steps involve installing additional 32-bit libraries (if you heard me on the Hacker Public Radio New Years Eve shows, you might have heard me say I've experienced a bit of dependency hell over conflicts between 32 and 64 bit libraries), but the instructions in the article went flawlessly (I don't know if khAttAm represents one person or more than one, but you rock!).

First, as root run yum update

Next, add the following line to /etc/rpm/macros (create it if it doesn't exist):

%_query_all_fmt %%{name}-%%{version}-%%{release}.%%{arch}

Finally, install these 32-bit libraries:

yum install qt.i686 qt-x11.i686 libXv.i686 libXScrnSaver.i686

After that, I was able to launch the application and log into my Skype account.

hpr0594 :: Using FFMPEG To Convert Video Shot With An Android Phone

Released on 2010-11-11 under a CC-BY-NC-SA license.
This episode comes with detailed shownotes which can be found on the hpr site
