Heroku: dumping production database to staging
If you need to dump the production database locally, Heroku has a nice set of tools to make this as smooth as humanly possible. In short, remember these two magic words: pg:pull and pg:push. This article details the process: https://devcenter.heroku.com/articles/heroku-postgresql#pg-push-and-pg-pull
However, when I first tried it I had to resolve a few issues.
My first problem was:
pg:pull not found
To fix this:
- Uninstall the “heroku” gem with gem uninstall heroku (select “All Versions”)
- Find your Ruby “bin” path by running gem env (it’s under “EXECUTABLE DIRECTORY:”)
- cd to the “bin” folder
- Remove the Heroku executable with rm heroku
- Restart your shell (close the Terminal tab and re-open)
- Type heroku version; you should now see something like:
heroku-toolbelt/2.33.1 (x86_64-darwin10.8.0) ruby/1.9.3
Now you can proceed with the transfer:
- Type heroku config --app production-app and note the DATABASE_URL. For example, let’s imagine that the production database URL is HEROKU_POSTGRESQL_KANYE_URL and the staging database URL is HEROKU_POSTGRESQL_NORTH
- Run the following (the complete round trip is sketched below):
heroku pg:pull HEROKU_POSTGRESQL_KANYE rtwtransferdb --app production-app
heroku config --app staging-app …
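pg:push is the mirror image of pg:pull, so a minimal sketch of the complete round trip, using the hypothetical app and database names from above, looks like this (note that pg:pull requires that the local database not exist yet, and pg:push requires that the remote target database be empty):
# Pull production data down into a new local database
heroku pg:pull HEROKU_POSTGRESQL_KANYE rtwtransferdb --app production-app
# Push the local copy up to the staging database
heroku pg:push rtwtransferdb HEROKU_POSTGRESQL_NORTH --app staging-app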
database heroku
Google Maps JavaScript API LatLng Property Name Changes
Debugging Broken Maps
A few weeks ago I had to troubleshoot some Google Maps related code that had suddenly stopped working. Some debugging revealed the issue: the code adding markers to the page was attempting to access properties that did not exist. This seemed odd, because the latitude and longitude values were the result of a geocoding request which was completing successfully. The other thing that stood out to me was the property names themselves:
var myLoc = new google.maps.LatLng(results[0].geometry.location.k, results[0].geometry.location.D);
It looked like the original author had inspected the geocoded response, found the ‘k’ and ‘D’ properties which held the latitude and longitude values, and used them in their maps code. This had all been working fine until Google released a new version of their JavaScript API. Sites that did not specify a particular version of the API were upgraded to the new version automatically. If you have Google Maps code that stopped working recently, this might be the reason why.
The Solution: Use the built-in methods in the LatLng class

I recalled there being some helper methods for LatLng objects and confirmed this with a visit to the docs for …
html javascript api
The Portal project — Jenkins Continuous Integration summary
This post describes some of our experiences at End Point in designing and working on comprehensive QA/CI facilities for a new system which is closely related to the Liquid Galaxy.
Due to the design of the system, the full deployment cycle can be rather lengthy, which gives us extra reason to invest heavily in unit test development. Because of the very active ongoing development on the system, we benefit greatly from running the tests in an automated fashion on the Jenkins CI (Continuous Integration) server.
Our Project’s CI Anatomy
Our Jenkins CI service defines 10+ job types (a.k.a. Jenkins projects) that cover our system. These job types differ in the source code branches they build, as well as in the combinations of target environments the builds are executed on.
The skeleton of a Jenkins project is what one finds under the Configure section on the Jenkins service webpage. The source code repository and branch are defined here. Each of our Jenkins projects also fetches a few more source code repositories during the build pre-execution phase.
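As a sketch, such a pre-build fetch might amount to a couple of extra clone steps (the repository URLs and target directories below are hypothetical, invented purely for illustration):
# Hypothetical pre-build step: fetch auxiliary repositories beside the main checkout
git clone --depth 1 git@example.com:portal/test-fixtures.git fixtures
git clone --depth 1 git@example.com:portal/deploy-scripts.git scripts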
The environment variables are defined in a flat text file:
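For example, such a flat file might look like this (all variable names here are hypothetical, invented for illustration):
# Hypothetical contents of the flat environment file
TARGET_ENV=staging
BUILD_BRANCH=develop
DEPLOY_HOST=portal-staging.example.com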
Another configuration file is in the JSON …
chef browsers jenkins visionport python testing
MediaWiki complete test wiki via cloning
Being able to create a quick copy of your MediaWiki site is an important skill that has many benefits. Any time you are testing an upgrade, whether major or minor, it is great to be able to perform the upgrade on a test site first. Tracking down bugs becomes a lot easier when you can add all the debugging statements you need and not worry about affecting any of the users of your wiki. Creating and modifying extensions also goes a lot smoother when you can work with an identical copy of your production wiki. I will outline the steps I use to create such a copy, also known as a “test wiki”.
Before creating a copy, there are two things that should be done to an existing MediaWiki installation: use git, and move the images directory. By “use git”, I mean to put your existing mediawiki directory (i.e. where your LocalSettings.php file lives) into version control. Because the MediaWiki software is not that large, it is simplest to just add nearly everything into git, with the exception of the images and the cache information. Here is a recipe to do just that:
$ cd /var/www/mediawiki
$ git init .
Initialized empty Git repository in …
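From here, one way to add “nearly everything” while leaving out the images and cache directories is a small .gitignore (a sketch using MediaWiki’s default directory names; adjust the paths to your own layout):
$ cat > .gitignore <<'EOF'
images/
cache/
EOF
$ git add -A
$ git commit -m "Initial import of MediaWiki tree, minus images and cache"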
mediawiki
Updated NoSQL benchmark: Cassandra, MongoDB, HBase, Couchbase
Back in April, we published a benchmark report on a number of NoSQL databases, including Cassandra, MongoDB, HBase, and Couchbase. We endeavored to keep things fair and configured as identically as possible between the database engines. But a short while later, DataStax caught two incorrect configuration items, in Cassandra and HBase, and contacted us to verify the problem. Even with the effort we put into keeping everything even, a couple of erroneous parameters slipped through the cracks! I’ll save the interesting technical details for another post coming soon, but once the problem was confirmed we jumped back in and started work on getting corrected results.
With the configuration fixed, we re-ran the full suite of tests for both Cassandra and HBase. The updated results have been published in a revised report that you can download in PDF format from the DataStax website (or see the overview link).
The revised results still show Cassandra leading MongoDB, HBase, and Couchbase in the various YCSB tests.
For clarity, the paper also includes a few additional configuration details that weren’t in the original report. We regret any confusion caused by the prior report, and worked as quickly as possible …
benchmarks big-data database nosql cassandra mongodb couchdb
Postfix Address Verification
We recently upgraded some mail servers, moving from Exim to Postfix in the process. These servers work as a front-line spam/RBL filter, rejecting invalid messages and relaying valid ones to different SMTP servers based on the destination domain.
While looking for the best configuration layout to achieve this, we found that Postfix has a very useful and interesting feature: Address Verification. This technique allows the Postfix server to check that a sender or a recipient address is valid before accepting a message, preventing junk messages from entering the queue.
How does Address Verification work?
Upon receiving a message, Postfix will probe the preferred MTA for the address. If that address is valid the message is accepted and processed; otherwise it is rejected.
Probe messages do not actually go through the whole delivery process; Postfix will just connect to the MTA, send a HELO + MAIL FROM + RCPT TO sequence, and check its response. Probe results are cached on disk, minimizing network and resource impact. During this check the client is put “on hold”; if the probe takes too long, a temporary reject is given; a legitimate mail server will have no problem retrying the delivery …
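As a minimal sketch, enabling recipient verification on such a relay might look like the following in main.cf (reject_unverified_recipient and address_verify_map are real Postfix parameters, but the restriction list here is illustrative rather than our exact production configuration):
$ cat >> /etc/postfix/main.cf <<'EOF'
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    reject_unverified_recipient
# cache probe results on disk so they survive restarts
address_verify_map = btree:$data_directory/verify_cache
EOF
$ postfix reload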
email sysadmin
Postgres “unsupported frontend protocol” mystery
The wonderful tail_n_mail program continues to provide me with new mysteries from our Postgres clients. One of the main functions it provides is to send an immediate email to us when an unexpected FATAL (or ERROR or PANIC) message appears in the Postgres logs. While these are often simple application errors, or deeper problems such as running out of disk space, once in a blue moon you see something completely unexpected. Some time ago, I saw a bunch of these messages appear in one of those tail_n_mail emails:
[1] From files A to B Count: 2
First: [A] 2015-12-01T06:30:00 server1 postgres[1948]
Last: [B] 2015-12-01T06:30:00 server2 postgres[29107]
FATAL: unsupported frontend protocol 65363.19778: server supports 1.0 to 3.0
I knew what caused this error in general, but decided to get to the bottom of the problem. Before we go into the specific error, let’s review what causes this particular message to appear. When a Postgres client (such as psql or DBD::Pg) connects to Postgres, the first thing it does is to issue a startup message. One of the things included in this request is the version of the Postgres protocol the client wishes to use. Since …
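As an aside, you can decode a bogus version number like that by hand. The client sends the protocol version as a single 32-bit integer: the major version in the high 16 bits and the minor version in the low 16 bits, so the normal 3.0 packs as 0x00030000 (decimal 196608). Converting the two numbers from the log entry above back to hex is a one-liner:
$ printf '%04X %04X\n' 65363 19778
FF53 4D42
Whether those raw bytes look like random line noise or like the magic header of some entirely different protocol is a strong clue about what was actually knocking on the port.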
database perl postgres
Connected to PgBouncer or Postgres?
Determining if your current database connection is using PgBouncer, or going directly to Postgres itself, can be challenging, as PgBouncer is a very low-level, transparent interface. It is possible, however, and here are some detection methods you can use.
This was inspired by someone asking on the Perl DBD IRC channel if it was possible to easily tell if your current database handle (usually “$dbh”) is connected to PgBouncer or not. Since I’ve seen this question asked in other venues, I decided to take a crack at it.
There are actually two questions to be answered: (1) are we connected to PgBouncer, and if so, (2) what pool_mode is being run? The quickest and easiest way I found to answer the first question is to try to connect to a non-existent database. Normally, this produces a FATAL message, as seen here:
$ psql testdb -p 5432
testdb=# \c ghostdb
FATAL: database "ghostdb" does not exist
Previous connection kept
testdb=#
However, a slightly different ERROR message is returned if the same thing is attempted while connected to PgBouncer:
$ psql testdb -p 6432
testdb=# \c ghostdb
ERROR: No such database: ghostdb
Previous connection kept
testdb=#
Thus, an ERROR …
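That difference in error class is easy to script. Here is a minimal sketch that classifies an endpoint by the error returned when connecting to a database name we assume does not exist (the exact message wording can vary between Postgres and PgBouncer versions, so treat the patterns as illustrative):
$ cat > pgbouncer_check.sh <<'EOF'
#!/bin/sh
# Classify a Postgres-speaking endpoint by the error returned
# for a connection attempt to a (presumably) non-existent database.
PORT=${1:-6432}
MSG=$(psql -X -p "$PORT" -d surely_no_such_db -c 'SELECT 1' 2>&1)
case $MSG in
  *[Nn]"o such database"*) echo "Looks like PgBouncer" ;;
  *"does not exist"*)      echo "Looks like a direct Postgres connection" ;;
  *)                       echo "Inconclusive: $MSG" ;;
esac
EOF
$ sh pgbouncer_check.sh 6432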
database postgres scalability