    Selectively firing Postgres triggers

    By Greg Sabino Mullane
    July 15, 2015

    Being able to disable Postgres triggers selectively can be an important skill when doing tasks like bulk updates, in which you only want a subset of the triggers on the table to be fired. Read below for the long explanation, but the TL;DR version of the best solution is to set a WHEN clause on the trigger you wish to skip, making it conditional on a variable such as session_replication_role or application_name:

    CREATE TRIGGER mytrig
      AFTER INSERT OR UPDATE ON foobar
      FOR EACH ROW
      WHEN (current_setting('session_replication_role') <> 'local')
      EXECUTE PROCEDURE myfunc();

    BEGIN;
    SET LOCAL session_replication_role = 'local';
    UPDATE foobar SET baz = 123;
    COMMIT;
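
    The post also mentions application_name as a possible control variable; a minimal sketch of that variant (the bulk_updater name is a placeholder of mine, not from the post):

    -- Hypothetical variant: key the WHEN clause on application_name instead
    CREATE TRIGGER mytrig2
      AFTER INSERT OR UPDATE ON foobar
      FOR EACH ROW
      WHEN (current_setting('application_name') <> 'bulk_updater')
      EXECUTE PROCEDURE myfunc();

    -- A bulk job that sets its application_name then skips the trigger:
    SET application_name = 'bulk_updater';
    UPDATE foobar SET baz = 123;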

    I decided to spin up a free Heroku “Hobby Dev” database to illustrate the solutions. I generated the test tables with the Pagila project, as it has tables which contain triggers. Heroku gives you a randomly generated user and database name. To install the Pagila schema, I did:

    $ export H="postgres://vacnvzatmsnpre:2iCDp-46ldaFxgdIx8HWFeXHM@ec2-34-567-89.compute-1.amazonaws.com:5432/d5q5io7c3alx9t"
    $ cd pagila-0.10.1
    $ psql $H -q -f pagila-schema.sql
    $ psql $H -q …

    database heroku postgres

    How fast is pg_upgrade anyway?

    By Greg Sabino Mullane
    July 1, 2015

    Back in the old days, upgrading Postgres required doing a pg_dump and loading the resulting logical SQL into the new database. This could be a very slow, very painful process, requiring a lot of downtime. While there were other solutions (such as Bucardo) that allowed little (or even zero) downtime, setting them up was a large, complex task. Enter the pg_upgrade program, which attempts to upgrade a cluster with minimal downtime. Just how fast is it? I grew tired of answering this question from clients with vague answers such as “it depends” and “really, really fast” and decided to generate some data for ballpark answers.

    Spoiler: it’s either about 3.5 times as fast as pg_dump, or insanely fast at a flat 15 seconds or so. Before going further, let’s discuss the methodology used.

    I used the venerable pgbench program to generate some sample tables and data, and then upgraded the resulting database, going from Postgres version 9.3 to 9.4. The pgbench program comes with Postgres, and simply requires an --initialize argument to create the test tables. There is also a --scale argument you can provide to increase the amount of initial data: each …
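
    For reference, initializing a scaled test database with pgbench looks like this (the database name is a placeholder):

    $ createdb pgbench_test
    $ pgbench --initialize --scale=10 pgbench_test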


    postgres

    Python string formatting and UTF-8 problems workaround

    Recently I worked on a program which required me to filter hundreds of lines of blog titles. Throughout the assignment I stumbled upon a few interesting problems, some of which are outlined in the following paragraphs.

    Non-Roman characters issue

    During the testing session one title went missing, and when I investigated why, I found that it was simply because the title contained non-Roman characters.

    Here is the code’s snippet that I was previously using:

    for e in results:
        simple_author = e['author'].split('(')[1][:-1].strip()
        if freqs.get(simple_author, 0) < 1:
            print parse(e['published']).strftime("%Y-%m-%d"), "--", simple_author, "--", e['title']

    And here is the fixed version:

    for e in results:
        simple_author = e[ …
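
    The excerpt cuts off here, but the usual Python 2 remedy (my assumption, not necessarily the post's exact code) is to encode unicode strings to UTF-8 explicitly before printing:

    # -*- coding: utf-8 -*-
    # Hypothetical minimal example (Python 2): encoding to UTF-8 before
    # printing avoids UnicodeEncodeError on non-Roman characters.
    title = u"café"
    print title.encode('utf-8')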

    python

    Documenting web services with Perl POD and AJAX

    By Jeff Boes
    June 26, 2015

    Perl POD is a handy, convenient, but low-tech approach to embedded documentation. Consider a web service in Dancer:

    get '/time' => sub {
      return scalar(localtime());
    };
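
    Hitting the route (assuming the Dancer development server on its default port 3000) returns the current time as plain text, for example:

    $ curl http://localhost:3000/time
    Wed Jun 24 09:15:02 2015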

    (Disclaimer: my actual use-case of this technique was even more legacy: I was documenting Interchange Actionmaps that returned images, JSON, etc.)

    Your application might have several, or even dozens of these, with various parameters, returning data in JSON or TXT or CSV or who-knows-what. I chose to document these in Perl POD (Plain Old Documentation) format, e.g.,

    =pod
    
    =head1 time
    
    Retrieves the current time
    
    =over 3
    
    =item Parameters
    
    None.
    
    =item Example
    
    =begin html
    
    <script src="/js/example-time.js" type="text/javascript"></script>
    
    =end html
    
    =back
    
    =cut

    This block gets inserted right in-line with the web service code, so it’s immediately obvious to anyone maintaining it (and thus has the best chance of being maintained if and when the code changes!). Now I can generate an HTML page directly from my Perl code:

    $ pod2html MyPackage.pm

    Your output looks something like this (excerpted for clarity):

    time

    Retrieves the current time

    Parameters

    None.

    Example

    Where the magic comes in …
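
    The /js/example-time.js script itself is not shown in this excerpt; a minimal sketch of what such a script might do (jQuery and the placement selector are my assumptions) is:

    // Hypothetical sketch: call the /time service and append the live
    // response right after the documentation's example script tag.
    $.get('/time', function (data) {
      $('<pre>').text(data).insertAfter('script[src="/js/example-time.js"]');
    });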


    dancer javascript perl

    IPython Tips and Tricks

    By Kannan Ponnusamy
    June 18, 2015

    Recently I have been working on Python automation scripts. Very often I use IPython to develop and debug the code.

    IPython is an advanced interactive Python shell. It is a powerful tool with many features; here I would like to share some of its cool tricks.

    Getting help

    Typing object_name? will print all sorts of details about any object, including docstrings, function definition lines (for call arguments) and constructor details for classes.

    In [1]: import datetime
    In [2]: datetime.datetime?
    Docstring:
    datetime(year, month, day[, hour[, minute[, second[, microsecond[,tzinfo]]]]])
    
    The year, month and day arguments are required. tzinfo may be None, or an
    instance of a tzinfo subclass. The remaining arguments may be ints or longs.
    File:      /System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/datetime.so
    Type:      type
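
    A related trick not shown in this excerpt: a double question mark also displays the object's source code when IPython can locate it.

    In [3]: import os
    In [4]: os.path.join??  # docstring plus the function's source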

    Magic commands

    Edit

    This brings up an editor where you can type multiline code; when you save and exit, IPython executes the resulting code.

    In [3]: %edit
    IPython will make a temporary file named: /var/folders/xh/2m0ydjs51qxd_3y2k7x50hjc0000gn/T/ipython_edit_jnVJ51/ipython_edit_NdnenL.py
    In [3]: %edit -p

    This will bring up the editor with the …


    python tips tools

    Heroku: dumping production database to staging

    By Marina Lohova
    June 17, 2015

    If you need to dump the production database locally, Heroku has a nice set of tools to make this as smooth as humanly possible. In short, remember these two magic words: pg:pull and pg:push. This article details the process: https://devcenter.heroku.com/articles/heroku-postgresql#pg-push-and-pg-pull

    However, when I first tried it I had to resolve a few issues.

    My first problem was:

    pg:pull not found

    To fix this:

    1. Uninstall the “heroku” gem with

       gem uninstall heroku (select 'All Versions')

    2. Find your Ruby “bin” path by running

       gem env

       (it’s under “EXECUTABLE DIRECTORY:”)

    3. Cd to the “bin” folder.

    4. Remove the heroku executable with

       rm heroku

    5. Restart your shell (close the Terminal tab and re-open).

    6. Type

       heroku version

       You should now see something like:

       heroku-toolbelt/2.33.1 (x86_64-darwin10.8.0) ruby/1.9.3

    Now you can proceed with the transfer:

    1. Type

       heroku config --app production-app

       Note the DATABASE_URL; for example, let’s imagine that the production database URL is HEROKU_POSTGRESQL_KANYE_URL, and the staging database URL is HEROKU_POSTGRESQL_NORTH.

    2. Run

       heroku pg:pull HEROKU_POSTGRESQL_KANYE rtwtransferdb --app production-app
       heroku config --app staging-app …
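
    The excerpt cuts off here; the companion push into staging, reusing the names above, would be a pg:push (local database first, Heroku target second):

    heroku pg:push rtwtransferdb HEROKU_POSTGRESQL_NORTH --app staging-app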

    database heroku

    Google Maps JavaScript API LatLng Property Name Changes

    By Greg Davidson
    June 11, 2015

    Debugging Broken Maps

    A few weeks ago I had to troubleshoot some Google Maps related code that had suddenly stopped working. Some debugging revealed the issue: the code adding markers to the page was attempting to access properties that did not exist. This seemed odd because the latitude and longitude values were the result of a geocoding request which was completing successfully. The other thing that stood out to me was the property names themselves:

    var myLoc = new google.maps.LatLng(results[0].geometry.location.k, results[0].geometry.location.D);

    It looked like the original author had inspected the geocoded response, found the ‘k’ and ‘D’ properties which held latitude and longitude values and used them in their maps code. This had all been working fine until Google released a new version of their JavaScript API. Sites that did not specify a particular version of the API were upgraded to the new version automatically. If you have Google Maps code which stopped working recently this might be the reason why.

    The Solution: Use the built-in methods in the LatLng class

    I recalled there being some helper methods for LatLng objects and confirmed this with a visit to the docs for …
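
    Those helpers are the documented lat() and lng() accessor methods, so the fix amounts to replacing the minified property names with stable API calls:

    // Use the documented accessors rather than minified internal properties:
    var loc = results[0].geometry.location;
    var myLoc = new google.maps.LatLng(loc.lat(), loc.lng());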


    html javascript api

    The Portal project — Jenkins Continuous Integration summary

    By Zdeněk Maxa
    June 9, 2015

    This post describes some of our experiences at End Point in designing and working on comprehensive QA/CI facilities for a new system which is closely related to the Liquid Galaxy.

    Due to the design of the system, the full deployment cycle can be rather lengthy, which gives us extra reason to invest heavily in unit test development. Because of the very active ongoing development on the system, we benefit greatly from running the tests in an automated fashion on the Jenkins CI (Continuous Integration) server.

    Our Project’s CI Anatomy

    Our Jenkins CI service defines 10+ job types (a.k.a. Jenkins projects) that cover our system. These job types differ in the source code branches they build, as well as in the combinations of target environments on which the project builds are executed.

    The skeleton of a Jenkins project is what one finds under the Configure section on the Jenkins service webpage. The source code repository and branch are defined here. Each of our Jenkins projects also fetches a few more source code repositories during the build pre-execution phase.

    The environment variables are defined in a flat text file:
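
    The file's contents are not reproduced in this excerpt; a flat key=value file of the kind described might look like this (variable names entirely hypothetical):

    PORTAL_BRANCH=master
    TARGET_ENV=staging
    HEADLESS_DISPLAY=1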

    Another configuration file is in the JSON …


    chef browsers jenkins visionport python testing