Three Things: Times Two
It’s been a while since I’ve written up a “Three Things” article where I share a few featured web development tidbits picked up recently. So I made this a double episode!
1. event.stopPropagation() and event.stopImmediatePropagation()
I recently came across these two methods in jQuery, described here and here. Both methods "prevent the event from bubbling up the DOM tree, preventing any parent handlers from being notified of the event"; event.stopImmediatePropagation() additionally keeps any remaining handlers on the current element from running. In my web application, my $('html') element had a listener on it, but I added listeners to child elements that call event.stopPropagation() when clicked, so the event never reaches the $('html') handler. See the code below for a simplified example:
jQuery(function() {
  // Any click that bubbles up to <html> hides the popup.
  jQuery('html').click(function() {
    jQuery.hideSomething();
  });
  // Clicks inside the popup stop here and never reach the <html> handler.
  jQuery('.popup').click(function(event) {
    event.stopPropagation();
  });
});
2. alias_attribute
The alias method in Rails is one that I use frequently, but I recently came across the alias_attribute method as well. It might make the most sense when shared views serve multiple models with varying attribute names.
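As a sketch (the Book and Download models here are hypothetical), alias_attribute takes the new name first and the existing attribute second, and generates a reader, a writer, and a predicate for the alias:

class Book < ActiveRecord::Base
  # Underlying column is "title"; expose it to shared views as "name".
  alias_attribute :name, :title   # adds name, name=, and name?
end

class Download < ActiveRecord::Base
  # Already has a "name" column; nothing to alias.
end

# A view shared by both models can now simply call record.name.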
3. Excel behavior in …
jquery rails tips
Cannot parse Cookie header in Ruby on Rails
Yesterday I resolved a client emergency for a Ruby on Rails site that continues to leave me scratching my head, even with follow-up investigation. In short, the emergency came up after an email marketing campaign was sent out in the morning, and it resulted in server errors (HTTP 500) for every customer who clicked on the email links. Although Rails exception emails are sent to the client and me, these errors never reached the exception email code, so I was unaware of the emergency until the client contacted me.
Upon jumping on the server, I saw this in the production log repeatedly:
ArgumentError (cannot parse Cookie header: invalid %-encoding (...)):
ArgumentError (cannot parse Cookie header: invalid %-encoding (...)):
ArgumentError (cannot parse Cookie header: invalid %-encoding (...)):
The URLs that the production log was complaining about had a bunch of Google Analytics tracking variables:
- utmcmd=Email
- utmcct=customeremail
- utmccn=New Site Sale 70% off
- etc.
After a user visits the site, these variables are typically stored as cookies for Google Analytics tracking. Upon initial investigation, the issue appeared to be triggered by any Google …
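The failure is easy to reproduce in isolation: a literal % not followed by two hex digits is an invalid escape, and the "invalid %-encoding" message comes from Ruby's URI.decode_www_form_component, which recent Rack versions use when unescaping cookie values. A quick sketch:

require 'uri'

# Properly escaped, "70%25 off" decodes cleanly:
URI.decode_www_form_component('New Site Sale 70%25 off')
# => "New Site Sale 70% off"

# But a raw "%" followed by a space is not a valid escape sequence:
URI.decode_www_form_component('New Site Sale 70% off')
# ArgumentError: invalid %-encoding (New Site Sale 70% off)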
analytics ecommerce piggybak rails
Enforcing Transaction Compartments with Foreign Keys and SECURITY DEFINER
In support of End Point’s evolving offering for multi-master database replication, from the precursor to Bucardo through several versions of Bucardo itself, our code solutions depended on the ability to suppress the actions of triggers and rules through direct manipulation of the pg_class table. Most PostgreSQL developers are probably familiar with the construct we used, which pg_dump at one time emitted in its DDL scripts.
Disable triggers and rules on table "public"."foo":
UPDATE pg_class SET
    relhasrules = false,
    reltriggers = 0
FROM pg_namespace
WHERE pg_namespace.oid = pg_class.relnamespace
    AND pg_namespace.nspname = 'public'
    AND pg_class.relname = 'foo';
Re-enable all triggers and rules on "public"."foo" when finished with DML that must not fire triggers and rules:
UPDATE pg_class SET
    reltriggers = (
        SELECT COUNT(*) FROM pg_trigger
        WHERE pg_class.oid = pg_trigger.tgrelid
    ),
    relhasrules = (
        SELECT COUNT(*) > 0
        FROM pg_rules
        WHERE schemaname = 'public' …
bucardo database postgres
PL/Perl multiplicity issues with PostgreSQL: the Highlander restriction
I came across this error recently for a client using PostgreSQL 8.4:
ERROR: cannot allocate multiple Perl interpreters on this platform
Most times when you see this error, it indicates that someone was trying to use both a PL/Perl function and a PL/PerlU function on a server in which Perl’s multiplicity flag is disabled. In such a case, only a single Perl interpreter can exist in each Postgres backend, and trying to create a second one, as happens when you execute functions written in both PL/Perl and PL/PerlU, throws the error above.
However, in this case it was not a combination of PL/Perl and PL/PerlU; I confirmed that only PL/Perl was installed. The error was caused by a slightly less-known limitation of non-multiplicity Perl and Postgres. As the docs mention at the very bottom of the page, “…so any one session can only execute either PL/PerlU functions, or PL/Perl functions that are all called by the same SQL role”. So we had two roles, both trying to execute some PL/Perl code in the same session. How is that possible, when each session is tied to a single role at login? The answer is the SECURITY DEFINER flag for functions, which causes the …
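A sketch of how that plays out (the role and function names here are hypothetical): because trusted PL/Perl keeps a separate interpreter per SQL role, a SECURITY DEFINER function owned by one role plus a plain PL/Perl call by another role asks the backend for a second interpreter.

-- Owned by role alice and marked SECURITY DEFINER, so it runs as alice:
CREATE FUNCTION fn_owned_by_alice() RETURNS integer AS 'return 1;'
    LANGUAGE plperl SECURITY DEFINER;

-- A plain PL/Perl function that runs as the calling role:
CREATE FUNCTION fn_plain() RETURNS integer AS 'return 2;'
    LANGUAGE plperl;

-- Connected as role bob, in a single session, on a non-multiplicity Perl build:
SELECT fn_owned_by_alice();  -- interpreter allocated for alice
SELECT fn_plain();           -- needs a second interpreter for bob:
                             -- ERROR: cannot allocate multiple Perl interpreters on this platform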
database perl postgres
Musica Russica Launches with Piggybak
The new home page for Musica Russica.
Last week, we launched a new site for Musica Russica. The old site was running on an outdated version of Lasso and FileMaker and was approximately 15 years old. Although it was still chugging along, finding hosting support and developers for an outdated platform becomes increasingly challenging as time goes on. The new site runs on Ruby on Rails 3 with Nginx and Unicorn and uses the open source Rails gems RailsAdmin, Piggybak, CanCan, and Devise. RailsAdmin is a great open source Rails admin tool that I’ve blogged about before (here, here, and here). Piggybak is End Point’s homegrown lightweight ecommerce platform, also blogged about several times (here, here, and here). Below are a few more details on the site:
- The site includes Rails 3 goodness such as an elegant and thorough MVC architecture, advanced routing to encourage clean, user-friendly URLs, the ability to integrate modular elements (Piggybak, RailsAdmin) with ease, and several built-in performance options. The site also features a few other popular Rails gems such as Prawn (for printing order and packing slip PDFs), Rack-SSL-Enforcer (a nice tool for enforcing SSL pages), …
clients ecommerce piggybak rails
DevCamps: Creating new camps from a non-default Git branch
I recently set up part of a DevCamps installation for a new Rails project with a unique Git repo setup, and discovered a trick for creating camps from a Git branch other than master. Admittedly, the circumstances that led to me discovering this trick are a bit specific to this project, but the trick itself can be useful in other situations as well.
The Git repo specified in local-config had a master branch with nothing in it but the standard “initial commit.” This relatively new project uses a simplified git-flow workflow, and as such, all its code was still in the “develop” branch.
In my case, this empty-ish master branch meant there were no tracked files in the CAMP_PATH/public directory, so Git did not create that directory when mkcamp cloned the repo, and apache2 would therefore refuse to start. Camping without a web server makes my back hurt, so I snooped around a little bit…
I discovered two things:
- You can tell git clone which branch to check out initially by passing it a --branch $your_non_default_branch switch (see the sketch after this list)
- The mkcamp command will happily pass that switch (as well as any other spicy options you include) along to …
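For example, cloning a repo with its develop branch checked out from the start (the repository URL here is illustrative):

$ git clone --branch develop git@example.com:project.git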
camps git hosting
Automatically kill process using too much memory on Linux
Sometimes on Linux (and other Unix variants) a process will consume way too much memory. This is more likely if you have a fair amount of swap space configured, even an amount within the normal range: for example, as much swap as you have RAM.
There are various methods to try to limit trouble from such situations. You can use the shell’s ulimit setting to put a hard cap on the amount of RAM allowed to the process. You can adjust settings in /etc/security/limits.conf on both Red Hat- and Debian-based distros. You can wait for the OOM (out of memory) killer to notice the process and kill it.
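For instance, a per-shell ulimit cap might look like this (the 1 GB figure and the program name are arbitrary examples; the value is in kilobytes):

$ ulimit -v 1048576          # cap virtual memory for child processes at ~1 GB
$ ./some_memory_hungry_program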
But none of those remedies helps in situations where you want a process to be able to use a lot of RAM sometimes, when there’s a point to it and it’s not just stuck in an infinite loop that will eventually use all memory.
Sometimes such a bad process will bog the machine down horribly before the OOM killer notices it.
We put together the following script about a year ago to handle such cases:
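The script itself was embedded separately and isn’t reproduced here, but a minimal sketch of the same approach looks like this (the 2 GB threshold and the bare TERM signal are illustrative assumptions, not the original’s choices):

#!/usr/bin/env perl
use strict;
use warnings;
use Proc::ProcessTable;

# Illustrative threshold: flag anything over 2 GB of resident memory.
my $limit = 2 * 1024**3;

my $t = Proc::ProcessTable->new;
for my $p ( @{ $t->table } ) {
    next unless $p->rss > $limit;
    printf "killing pid %d (%s), rss %d bytes\n", $p->pid, $p->fname, $p->rss;
    kill 'TERM', $p->pid;
}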
It uses the Proc::ProcessTable module from Perl’s CPAN to do the heavy lifting. We invoke it once per minute in cron. If you have processes eating up memory so quickly that they bring down the machine in less than a …
hosting linux perl
Git: Delete your files and keep them, too
I was charged with cleaning up a particularly large, sprawling set of files comprising a Git repository. One whole “wing” of that structure consisted of files that needed to stay around in production: various PDFs, PowerPoint presentations, and Windows EXEs that were only ever needed by the customer’s partners and downloaded from the live site. Our developer camps never wanted local copies of these files, which amounted to over 280 MB; since we have dozens of camps shadowing this repository, all on the same server, this will save a few GB at least.
I should point out that our preferred deployment is to have production, QA, and development all be working clones of a central repository. Yes, we even push from production, especially when clients are the ones making changes there. (Gasp!)
So: the aim here is to make the stuff vanish from all the other clones (when they are updated), but to preserve the stuff in one particular clone (production). Also, we want to ensure that no future updates in that “wing” are tracked.
# From the "production" clone:
$ cd stuff
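# --cached removes the files from the index only; the working-tree copies stay put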
$ git rm -r --cached .
$ cd ..
$ echo "stuff" …
git