Byte-swap an entire file using perl
I recently needed to byte-swap an input file, and came up with an easy way to do it with perl:
$ perl -0777ne 'print pack(q{V*},unpack(q{N*},$_))' inputfile > outputfile
This byte-swaps 4-byte sequences (N is a big-endian and V a little-endian 32-bit unsigned integer). If you need to byte-swap 2-byte sequences instead, just switch the pack/unpack format characters to their lower-case 16-bit equivalents, like so:
$ perl -0777ne 'print pack(q{v*},unpack(q{n*},$_))' inputfile > outputfile
(Of course there are more efficient ways to handle this, but for a quick and dirty job this may just be what you need.)
We use the -0777 option so that perl slurps the entire input file in one read instead of processing it line by line; that way newlines in the data are treated as ordinary bytes rather than as record separators.
perl
Deconstructing an OO Blog Design in Ruby 1.9
I’ve become interested in Avdi Grimm’s new book Objects on Rails, but I found the code to be terse. Avdi is an expert Rubyist, and he makes extensive use of Ruby 1.9 with minimal explanation. In all fairness, he lobbies you to buy Peter Cooper’s Ruby 1.9 Walkthrough. Instead of purchasing the videos, I wanted to try to deconstruct the code myself.
In his first chapter featuring code, Mr. Grimm creates a Blog and Post class. For those of you who remember the original Rails blog demo, the two couldn’t look more different.
Blog#post_source
In an effort to encourage Rails developers to think about relationships between classes beyond ActiveRecord::Relation, he creates his own interface for defining how a Blog should interact with a “post source”.
# from http://objectsonrails.com/#sec-5-2
class Blog
  # ...
  attr_writer :post_source

  private

  def post_source
    @post_source ||= Post.public_method(:new)
  end
end
The code above defines the Blog class and makes post_source= available via the attr_writer method. Additionally, it defines the attribute reader as a private method. The idea is that a private method can be changed without breaking the class’s API. If we decide we want …
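To see why the injectable post source is useful, here is a small sketch that fills in the elided parts with a hypothetical new_post method (the Post class and the stub are invented for the demo; this is not Avdi’s full code):

```ruby
require 'ostruct'

# Minimal stand-in for the real Post class
class Post
  attr_accessor :title
end

class Blog
  attr_writer :post_source

  # Hypothetical consumer of the post source: whatever callable
  # @post_source holds is used to build new posts.
  def new_post
    post_source.call
  end

  private

  def post_source
    @post_source ||= Post.public_method(:new)
  end
end

# By default, new_post builds a real Post...
blog = Blog.new
puts blog.new_post.class        # => Post

# ...but a test (or another class) can inject any callable instead,
# without Blog knowing anything about the replacement:
blog.post_source = -> { OpenStruct.new(title: 'stub post') }
puts blog.new_post.title        # => stub post
```

Because the reader is private, callers only ever depend on the behavior of new_post, so the default source can change later without breaking the public API.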
ruby
UTOSC, here I come
Recently the Utah Open Source Foundation announced the schedule for this year’s UTOSC, to be held May 3-5 at Utah Valley University. I’m not sure I’ve ever before felt ambitious enough to submit two talk proposals for a conference, but I did this time, and both were accepted. So I’ll give one talk on database constraints, from simple to complex, and another on geospatial data visualization of the sort we’ve been doing a lot of lately. Jon Jensen will also be there with two talks of his own: one on website performance, the other a “screen vs. tmux faceoff”. We use screen pretty heavily company-wide, but I’ve wanted to learn tmux for quite a while, so this one is on my list to see.
Database Constraints
Database constraints are something I’ve always strongly encouraged, and my commitment to clearly constrained data has only deepened after recent experiences with various clients and inconsistent data. The idea of a database constraint is to ensure all stored data meets certain criteria: for example, that a matching record exists in another table, that a “begin” date is prior to its “end” date, or simply that a particular field is not empty. Applications …
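The examples above can be sketched concretely. Here is a minimal, hypothetical schema (table and column names invented, using SQLite via Python purely so the demo is self-contained; the same CHECK and NOT NULL constraints apply in any SQL database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        name       TEXT NOT NULL CHECK (name <> ''),  -- field must not be empty
        begin_date TEXT NOT NULL,
        end_date   TEXT NOT NULL,
        CHECK (begin_date < end_date)                 -- "begin" must precede "end"
    )
""")

# Consistent data is accepted...
conn.execute("INSERT INTO events VALUES ('launch', '2012-05-03', '2012-05-05')")

# ...but a row whose dates are out of order is rejected by the database itself,
# no matter which application tries to write it.
try:
    conn.execute("INSERT INTO events VALUES ('bad', '2012-05-05', '2012-05-03')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```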
conference
Integrating Propel ORM
Most of us have worked in environments that are organized in a less than desirable manner. A common recurring problem is the organization of, and communication between, the business logic and the database model. One helpful way to tackle a problem like this is to introduce an ORM (Object-Relational Mapping) layer. There are many to choose from, but I would like to discuss the use and integration of Propel ORM.
Propel itself is PHP-only, but it supports many different databases: MySQL, PostgreSQL, SQLite, MSSQL, and Oracle.
Installation and Setup
The main point of this post is to show how easily you can start integrating an ORM into your working environment. The explanation and examples below assume that you have installed the correct packages and configured Propel to work with your environment properly.
The Propel website offers great documentation on how to do that.
Integration
After you have set everything up, in particular the build.properties file, you can generate your schema.xml file. This generated file describes your database in XML, everything from datatypes …
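As a rough illustration of the two files involved (all names and values here are invented; check the Propel documentation for the options your setup needs), a minimal build.properties might look like:

```ini
# build.properties (illustrative values only)
propel.project      = bookstore
propel.database     = mysql
propel.database.url = mysql:host=localhost;dbname=bookstore
```

And the schema.xml describing the database is plain XML along these lines, one `<table>` element per table with its columns and types:

```xml
<!-- schema.xml (illustrative): one table with two columns -->
<database name="bookstore" defaultIdMethod="native">
  <table name="book">
    <column name="id" type="integer" required="true"
            primaryKey="true" autoIncrement="true"/>
    <column name="title" type="varchar" size="255" required="true"/>
  </table>
</database>
```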
database php
An Introduction to Google Website Optimizer
Jon and I recently discussed trying out Google Website Optimizer to run a few A/B tests on content and various updates to End Point’s website. I’ve worked with a couple of clients who use Google Website Optimizer, but I’ve never installed it from start to finish. Here are a few basic notes that I made during the process.
What’s the Point?
Before I get into the technical details of the implementation, I’ll give a quick summary of why you would want to A/B test something. A basic A/B test compares user experiences of content A versus content B. The goal is to decide which of the two leads to higher conversion (or to higher user interactivity that indirectly leads to conversion), and then to continue using the higher-converting content. An example of this in ecommerce might be testing product titles or descriptions.
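The arithmetic behind “which variant converts better” is simple enough to sketch with made-up numbers (Google Website Optimizer does this bookkeeping for you, including the statistical significance a real decision would also need):

```python
# Invented sample data: (visitors, conversions) per variant.
variants = {"A": (1000, 38), "B": (1000, 52)}

for name, (visitors, conversions) in sorted(variants.items()):
    print(f"Variant {name}: {conversions / visitors:.1%} conversion rate")

# Naive pick of the winner: highest raw conversion rate.
best = max(variants, key=lambda v: variants[v][1] / variants[v][0])
print("Higher-converting variant:", best)  # B in this made-up example
```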
A/B tests in Google Website Optimizer
I jumped right into the Google Website Optimizer sign-up and wanted to set up a simple A/B test of variations on our home page content. Unfortunately, I found right away that basic A/B tests in Google Website Optimizer require two different URLs to test. In test A, the user would see …
analytics seo testing
Liquid Galaxy Website Launch
We have just launched a new site to promote our Liquid Galaxy turn-key systems and our suite of Liquid Galaxy services, as well as to present the current range of the Liquid Galaxy’s capabilities.
Check the Liquid Galaxy Website here.
End Point has been developing the Google Liquid Galaxy project into a commercially available platform over the last two years. In case you are unfamiliar with the Liquid Galaxy project, here is a brief rundown:
- Originally developed by engineers at Google on their 20% time
- Provides an immersive viewing environment for Google Earth by running multiple instances of the software synced across any number of displays
- The core software is available as open source
So far the majority of the people using this system (only a small number to date) are advanced hackers and hobbyists who have set up mini versions using computer display monitors. Some of these talented developers have completed projects like porting open source video games or experimenting with different display configurations.
Meanwhile, End Point has been hard at work developing a standardized, portable, and robust turn-key version of the Liquid Galaxy system. Through trial and error and the …
company visionport
Monitoring cronjob exit codes with Nagios
If you’re like me, you’ve got cronjobs that make email noise if there is an error. While email based alerts are better than nothing, it’d be best to integrate this kind of monitoring into Nagios. This article will break down how to monitor the exit codes from cronjobs with Nagios.
Tweaking our cronjob
The monitoring plugin depends on being able to read some sort of log output file which includes an exit code. The plugin also assumes that the log will be truncated with every run. Here’s an example of a cronjob entry which meets those requirements:
rsync source dest > /var/log/important_rsync_job.log 2>&1; echo "Exit code: $?" >> /var/log/important_rsync_job.log
So let’s break down a couple of the more interesting points in this command:
- Notice the single > which truncates the log every time the job runs
- 2>&1 sends the stderr output to the same place as stdout so it is captured in our log file; note that it must come after the > redirection to take effect
- $? returns the exit code of the last command run
- Notice the double >> which appends our exit code line to the log file
Setting up the Nagios plugin
The check_exit_code plugin is available on GitHub, and couldn’t be easier to set up. Simply specify the log file to …
monitoring
Easily creating a ramdisk on Ubuntu
Hard drives are extremely slow compared to RAM. Sometimes it is useful to use a small amount of RAM as a drive.
However, there are some drawbacks to this solution. All the files will be gone when you reboot your computer, so in fact it is suitable only for storing some temporary files—those which are generated during some process and are not useful later.
I will mount the ramdisk in my local directory. I use Ubuntu 11.10, my user name is ‘szymon’, and my home directory is ‘/home/szymon’.
I create the directory for mounting the ramdisk in my home dir:
mkdir /home/szymon/ramdisk
When creating the ramdisk, I have a couple of possibilities:
- ramdisk—there are sixteen standard block devices at /dev/ram* (from /dev/ram0 to /dev/ram15) which can be used as RAM-backed disks. I can format one with any filesystem I want, but usually this is more complication than necessary.
- ramfs—a virtual filesystem stored in RAM. It can grow dynamically, and in fact it can use all available RAM, which could be dangerous.
- tmpfs—another virtual filesystem stored in RAM, but one with a fixed maximum size, so it cannot grow unchecked like ramfs.
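For the tmpfs option, the mount itself can be sketched like this (the 512m size is just an example, the mount point is the directory created above, and mounting requires root):

```shell
# One-off mount: a 512 MB tmpfs at the directory created earlier
sudo mount -t tmpfs -o size=512m tmpfs /home/szymon/ramdisk

# Or make it survive reboots (contents still won't!) with an /etc/fstab entry:
# tmpfs  /home/szymon/ramdisk  tmpfs  size=512m  0  0
```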
I want to have a ramdisk that won’t be able to use all of my ram, and I want to …
hosting linux ubuntu