I recently installed Xubuntu 11.04 on an old notebook laptop, and wanted to run a dual-monitor setup using an external monitor.
The laptop ships with Intel Graphics Media Accelerator 900, and I’m using this display controller:
kenneho@laptop:~$ lspci |grep Display
00:02.1 Display controller: Intel Corporation Mobile 915GM/GMS/910GML Express Graphics Controller (rev 04)
My external monitor is running at a 1280×1024 resolution, and is placed on the left-hand side of the laptop.
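A sketch of the xrandr commands involved (the output names `LVDS1` and `VGA1` are typical examples and vary with the driver, so check the query output first):

```shell
# List the outputs the driver detected and their supported modes
xrandr --query

# Run the external monitor at 1280x1024, placed to the left of the laptop panel
xrandr --output VGA1 --mode 1280x1024 --left-of LVDS1
```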
At work we have a few scripts that we would like to monitor, and define a few messages that should trigger an SMS to be sent to the person on call.
As most of these scripts log to syslog, and we have our Linux servers set up to forward syslog messages to a central log host, we're going to monitor the central syslog for important messages originating from the scripts. In order to avoid message storms, however, we need a way of throttling duplicate messages. On our central log host we're running swatch for real-time analysis of the incoming syslog messages.
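A sketch of what such a throttled rule can look like in the swatch configuration (the pattern, the SMS script, and the ten-minute window are made-up examples):

```
# Alert on critical messages from our scripts, but fire the SMS script
# at most once per 600 seconds for identical messages
watchfor /CRITICAL/
    exec "/usr/local/bin/send_sms.sh $0"
    threshold track_by=$0,type=limit,count=1,seconds=600
```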
At work the other day I was creating regular expressions (regexps) in Perl for use with swatch. Instead of testing the regexps by entering them into the swatch config file, restarting swatch, and using the "logger" command to trigger swatch into action, I thought I'd simplify things by creating a small script. Since swatch is written in Perl, I wrote the script in Perl too, so that there wouldn't be any mismatch between how the script interprets regexps and how swatch interprets them.
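A minimal version of the same idea can be done straight from the shell, feeding a sample log line through Perl so the matching semantics are Perl's, just as in swatch (the log line and pattern here are made-up examples):

```shell
# Test a candidate regexp against a sample log line using perl
line='Oct 12 14:01:22 loghost myscript[1234]: CRITICAL: disk full on /var'
regexp='CRITICAL:.*disk full'
printf '%s\n' "$line" | perl -ne "print qq(match\n) if /$regexp/"
# prints "match" when the regexp matches the line
```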
Most computers today have support for virtual memory. An application (i.e. process) running on such computers sees its address space as one large range of contiguous addresses, even if its memory chunks may be scattered around the physical memory (RAM). This means that when the process requests a particular memory location, the computer must figure out which physical memory location this corresponds to. The mapping of virtual memory to physical memory is stored in the page tables.
For processes that use lots of memory, the virtual-to-physical memory mapping (i.e. the page table) will need to hold a lot of mappings (called page table entries, or PTEs), and may grow very large. Very large page tables claim extra resources on the system, as more memory is needed to hold them, and more CPU cycles must be spent searching them. The system may therefore benefit from keeping the number of page table entries at a minimum.
This is where hugepages come in handy. Using hugepages, we increase the size of the memory chunks allocated by the process, thus reducing the number of memory mappings the process needs. Instead of one page table entry mapping for example 4 kB of data (which is the default page size on many systems), each entry may map for example 4 MB of data.
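To put numbers on this, here is the entry count needed to map a 19 GB allocation with each of the two example page sizes:

```shell
# Page table entries needed to map 19 GB of memory
echo $(( 19 * 1024 * 1024 / 4 ))   # with 4 kB pages -> 4980736 entries
echo $(( 19 * 1024 / 4 ))          # with 4 MB hugepages -> 4864 entries
```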
To test how a process's memory usage affects the page table on one of our Linux servers at work, I created two tiny applications written in C. The first allocates 19 GB of memory using regular memory allocations, and the second allocates the same amount of memory using hugepages.
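Allocating from hugepages normally requires reserving them first. A sketch of the checks and setup on the server (the reservation count of 128 is an arbitrary example, and changing it requires root):

```shell
# Show the hugepage size and current pool on this system
grep -i huge /proc/meminfo

# Reserve a pool of hugepages for the test program (root; example count)
sysctl -w vm.nr_hugepages=128

# Compare the kernel's page table memory before and after each test run
grep PageTables /proc/meminfo
```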
Subversion client software has traditionally stored (i.e. cached) plaintext user passwords, meaning that your password is accessible by anyone who can access the files in your ~/.subversion/auth folder. With Subversion 1.6, however, support for KWallet and GNOME Keyring has been added, allowing these to store your Subversion password encrypted. Of course it's possible to turn off password caching, but then you'll have to type in your password for most svn commands you issue.
As I'm using GNOME-based environments, I'll outline the steps needed to get the svn client and GNOME Keyring working within an SSH session, without needing to log in using the graphical interface. I'm sure much of it applies to KWallet too, but I haven't tested this.
To have your svn client use the password stored in GNOME Keyring, your svn client must be compiled with this option. You can compile the svn client yourself, or simply download it from http://www.open.collab.net/downloads/subversion/.
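With a keyring-enabled client, the relevant setting lives in the Subversion runtime config. A sketch of the `[auth]` section of `~/.subversion/config`:

```
# ~/.subversion/config
[auth]
# Use GNOME Keyring for cached credentials
password-stores = gnome-keyring
```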
My mom is running Ubuntu 9.10 on her laptop, but for some reason there's been a problem with getting sound to work. The laptop is running the Intel 82801I chipset, and the sound server running is the default one (PulseAudio).
The other day I had to transfer an approximately 1.5 GB file from my laptop to my desktop, which are not connected via any network. I thought I’d use my USB pen drive for this task, but found that it’s only a 1 GB drive.
My Linux guru co-worker recommended looking into the Linux utility "split", which, and this may not come as a big surprise, splits files into smaller chunks. So using this excellent utility, I transferred the file by doing this:
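A self-contained sketch of the approach, scaled down to a tiny demo file (for the real 1.5 GB transfer I'd use a chunk size like `-b 700M` so each chunk fits on the 1 GB drive):

```shell
# Create a demo file and split it into 5-byte chunks
printf 'some large payload' > bigfile
split -b 5 bigfile bigfile.part_        # produces bigfile.part_aa, _ab, ...

# On the receiving machine, reassemble the chunks in order
cat bigfile.part_* > bigfile.restored
cmp bigfile bigfile.restored && echo OK  # prints OK
```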
I recently bought myself a new external USB hard drive to hold backups of my laptop and one of my desktop computers, running Ubuntu and Fedora respectively. I wanted to run encrypted backups of both computers individually, so that they were protected by separate key phrases.
After reading up on a few different solutions, I came across two great tools for this purpose:
- “encfs” to create encrypted folders
- “rdiff-backup” to create the backups
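A sketch of how the two tools fit together, assuming the drive is mounted at /media/usbdisk (the paths are made up for illustration, and encfs prompts for the key phrase interactively):

```shell
# Mount (or, on first run, create) the encrypted folder for the laptop backups
encfs /media/usbdisk/.laptop.enc /media/usbdisk/laptop

# Back up the home directory into the encrypted mount
rdiff-backup /home/kenneho /media/usbdisk/laptop/home

# Unmount the encrypted folder when the backup is done
fusermount -u /media/usbdisk/laptop
```

The desktop gets its own encfs folder with its own key phrase, so the two backups stay independently protected.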
When you access data or load applications on your computer, the data blocks containing the data or applications are typically fetched from your local hard drive. These data blocks are loaded into memory, and are then ready for your computer to process. Since fetching data blocks from the hard drive or other external sources is very expensive in terms of time, your operating system typically implements a disk cache – data blocks that are loaded into memory remain there for some period of time, just in case they need to be accessed again. If a data block is accessed again while already resident in the disk cache, your computer saves time by not having to load it from disk.
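The effect is easy to observe by timing the same read twice; a sketch (dropping the cache requires root, and the actual timings depend on your hardware):

```shell
# Create a 64 MB test file
dd if=/dev/zero of=testfile bs=1M count=64

# Drop the disk cache so the first read really hits the disk (root)
sync && echo 3 > /proc/sys/vm/drop_caches

time cat testfile > /dev/null   # cold read: fetched from disk
time cat testfile > /dev/null   # warm read: served from the disk cache, much faster
```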
rsync is a great tool for running backups. When running multiple backups of the same file, rsync stores the data only once unless the data have changed. In the latter case, only the file deltas, i.e. the differences between the two files, are transferred, which can save both transfer time and bandwidth.
One really favorable effect of running rsync like this is that you can browse the backup folders afterward, and each backup will look like a full backup. So just as easily as browsing any folder structure on your disk, you can browse the different backups of your system.
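A sketch of a snapshot-style run that gives this behavior, using rsync's `--link-dest` option to hard-link unchanged files against the previous snapshot (the directory layout is just an example):

```shell
# Each run goes into a dated folder; unchanged files are hard-linked against
# the previous snapshot, so every folder browses like a full backup
TODAY=$(date +%Y-%m-%d)
rsync -a --link-dest=/backup/latest /home/kenneho/ "/backup/$TODAY/"

# Point "latest" at the snapshot we just made, for the next run to link against
ln -sfn "/backup/$TODAY" /backup/latest
```

Because hard links share the underlying data blocks, a file that never changes is stored once no matter how many snapshots reference it.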