More Code and SEO with the Google Analytics API


    By Steph Skardal
    February 22, 2010

The inspiration for my latest blog article came from an SEOmoz pro webinar on Actionable Analytics. This time around, I wrote the article, and it was published on SEOmoz’s YOUmoz Blog; I thought I’d summarize and extend it here with some technical details more appealing to our audience. The article is titled Visualizing Keyword Data with the Google Analytics API.

In the article, I discuss and show examples of how the number of unique keywords receiving search traffic has diversified and expanded over time, and how our SEO efforts (including writing blog articles) have likely driven this diversification. Some snapshots from the article:

    [The unique keyword (keywords receiving at least one search visit) count per month (top) compared to the number of articles available on our blog at that time (bottom).]

I also briefly examined how the unique keywords receiving at least one visit overlapped from month to month, and saw about 10-20% overlap (likely the short tail of SEO).

    [The keyword overlap per month, where the keywords receiving at least one visit in consecutive months are shown in the overlap section.]
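The monthly unique-keyword counts and month-over-month overlap described above can be computed directly from exported keyword data. A minimal sketch in Python, using hypothetical sample records rather than the article’s actual data:

```python
# Sketch: given (month, keyword) visit records exported from analytics,
# count unique keywords per month and the overlap between consecutive months.
# The sample records below are hypothetical, for illustration only.
from collections import defaultdict

records = [
    ("2010-01", "postgres tips"), ("2010-01", "ecommerce consulting"),
    ("2010-02", "postgres tips"), ("2010-02", "rails hosting"),
    ("2010-02", "gnu diff labels"),
]

by_month = defaultdict(set)
for month, keyword in records:
    by_month[month].add(keyword)

months = sorted(by_month)
for prev, cur in zip(months, months[1:]):
    overlap = by_month[prev] & by_month[cur]
    pct = 100.0 * len(overlap) / len(by_month[prev] | by_month[cur])
    print(f"{prev} -> {cur}: {len(by_month[cur])} unique keywords, "
          f"{len(overlap)} overlapping ({pct:.0f}% of the union)")
```

The same per-month sets can then feed whatever charting tool you prefer for the visualizations shown above.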

    Now, on to things that End Point’s audience …


    analytics ecommerce seo

    PostgreSQL tip: dump objects into a new schema


    By David Christensen
    February 16, 2010

Sometimes the need arises to export a PostgreSQL database and put its contents into its own schema; say you’ve been busy developing things in the public schema. People sometimes suggest manipulating the pg_dump output, either manually or with a tool such as sed or perl, to explicitly schema-qualify all table objects, but this is error-prone depending on your table names, and can be more trouble than it’s worth.

One trick that may work for you, if your current database is not in use by anyone else, is to rename the default public schema to your desired schema name before dumping, then optionally rename it back to public afterward. This has the benefit that all objects (sequences, etc.), not just tables, are properly dumped in the new schema, and you don’t have to worry about trying to parse SQL with regexes.

    $ psql -c "ALTER SCHEMA public RENAME TO new_name"
    $ pg_dump --schema=new_name > new_name_dump.sql
    $ psql -c "ALTER SCHEMA new_name RENAME TO public"
    $ # load new_name_dump.sql elsewhere

    Cheers!


    postgres tips

    GNU diff: changing the output filenames


    By David Christensen
    February 15, 2010

I was working on a script to monitor remote versions of a file and compare them against our local mirror; part of this work involved fetching the remote file to a temporary location and running diff -u against the local file to see if there were any changes. This worked fine, but the temporary filename was less than descriptive.

The man page for diff is somewhat cryptic when it comes to changing the displayed output filenames themselves, but after some figuring-out: you can pass the -L (--label) flag to diff. You need to pass it twice; the first -L replaces the filename in the --- output line, and the second -L replaces the filename in the +++ line.

    $ wget -qO /tmp/grocerylist http://mirrorsite.com/grocerylist
    $ diff -u /path/to/local/grocerylist -L /path/to/local/grocerylist /tmp/grocerylist -L http://mirrorsite.com/grocerylist
    
    --- /path/to/local/grocerylist
    +++ http://mirrorsite.com/grocerylist
    @@ -11,7 +11,7 @@
     potatoes
     bread
     eggs
    -milk
    +soda
     oranges
     apples
     celery

An obvious shortcoming of this approach is that you need to specify a redundant -L for the first file; in my case, this was all handled programmatically, so I just substituted the same parameter in both places. …
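Building the command programmatically makes that redundancy painless. A minimal sketch (the build_diff_cmd helper is mine, not part of the original script):

```python
# Sketch: build a `diff -u` argv where the local path doubles as its own
# label and the fetched temp file is labeled with the URL it came from.
# build_diff_cmd is a hypothetical helper, not from the original script.
def build_diff_cmd(local_path, temp_path, url):
    return ["diff", "-u",
            "-L", local_path,   # first -L: label shown on the --- line
            "-L", url,          # second -L: label shown on the +++ line
            local_path, temp_path]

cmd = build_diff_cmd("/path/to/local/grocerylist",
                     "/tmp/grocerylist",
                     "http://mirrorsite.com/grocerylist")
print(" ".join(cmd))
```

The resulting argv can be passed straight to subprocess.run without shell quoting concerns.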


    tips

    MRM: The power of ‘random’, program maintenance, weave and electricity


    By Selena Deckelmann
    February 14, 2010

    Time for another installment of Monday Reading Material!

    I’m in New Zealand (and across the dateline!) so this is appearing a day early for many of you :)

    Reading material from last week:


    tips

    Safari 4 Top Sites feature skews analytics


    By Jon Jensen
    February 13, 2010

    Safari version 4 has a new “Top Sites” feature that shows thumbnail images of the sites the user most frequently visits (or, until enough history is collected, just generally popular sites).

Martin Sutherland describes this feature in detail and shows how to detect these requests, which set the X-Purpose HTTP header to “preview”.

    The reason this matters is that Safari uses its normal browsing engine to fetch not just the HTML, but all embedded JavaScript and images, and runs in-page client JavaScript code. And these preview thumbnails are refreshed fairly frequently—​possibly several times per day per user.

Thus every preview request looks just like a regular user visit, and this skews analytics, which show a much higher than average number of views from Safari 4 users, with lower time-on-site averages and higher bounce rates, since no subsequent visits are registered (at least as part of the preview function).

    The solution is to simply not output any analytics code when the X-Purpose header is set to “preview”. In Interchange this is easily done if you have an include file for your analytics code, by wrapping the file with an [if] block such as this:

    [tmp x_purpose][env …
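The same check works in any framework that exposes request headers. A minimal sketch in Python (the function name is mine, for illustration):

```python
# Sketch: suppress analytics output when Safari 4's Top Sites preview
# fetches the page; such requests set the X-Purpose header to "preview".
def should_emit_analytics(headers):
    # Header names are case-insensitive in HTTP; normalize before lookup.
    purpose = {k.lower(): v for k, v in headers.items()}.get("x-purpose", "")
    return purpose.lower() != "preview"

# Usage: only render the analytics include for real visits.
if should_emit_analytics({"X-Purpose": "preview"}):
    print("<!-- analytics snippet here -->")
```

In Django, Rails, or PHP the equivalent is a one-line conditional around the analytics include, keyed on the same header.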

    analytics browsers django ecommerce interchange php rails

    MySQL Ruby Gem CentOS RHEL 5 Installation Error Troubleshooting


    By Adam Vollrath
    February 9, 2010

    Building and installing the Ruby mysql gem on freshly-installed Red Hat based systems sometimes produces the frustratingly ambiguous error below:

    # gem install mysql
    /usr/bin/ruby extconf.rb
    checking for mysql_ssl_set()... no
    checking for rb_str_set_len()... no
    checking for rb_thread_start_timer()... no
    checking for mysql.h... no
    checking for mysql/mysql.h... no
    *** extconf.rb failed ***
    Could not create Makefile due to some reason, probably lack of
    necessary libraries and/or headers.  Check the mkmf.log file for more
    details.  You may need configuration options.

    Searching the web for info on this error yields two basic solutions:

    1. Install the mysql-devel package (this provides the mysql.h file in /usr/include/mysql/).
    2. Run gem install mysql -- --with-mysql-config=/usr/bin/mysql_config or some other additional options.

    These are correct but not sufficient. Because this gem compiles a library to interface with MySQL’s C API, the gcc and make packages are also required to create the build environment:

    # yum install mysql-devel gcc make
    # gem install mysql -- --with-mysql-config=/usr/bin/mysql_config

    Alternatively, if you’re using your distro’s ruby (not a custom build …


    database hosting mysql redhat rails

    On Linux, noatime includes nodiratime


    By Jon Jensen
    February 9, 2010

    Note to performance-tweaking Linux sysadmins, pointed out to me by Selena Deckelmann: On Linux, the filesystem attribute noatime includes nodiratime, so there’s no need to say both “noatime,nodiratime” as I once did. (See this article on atime for details if you’re not familiar with it.)

Apparently the nodiratime attribute was added later as a subset of noatime, applying only to directories, to still offer a bit of a performance boost in situations where noatime on files would cause trouble (as with mutt and a few other applications that care about atimes).

    See also the related newer relatime attribute in the mount(8) manpage.
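For reference, a typical fstab entry using the option; the device, filesystem type, and mount point here are placeholders:

```
# /etc/fstab: noatime alone suffices; adding nodiratime would be redundant
/dev/sda1  /home  ext3  defaults,noatime  0  2
```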


    hosting optimization tips

    Monday Reading Material


    By Selena Deckelmann
    February 8, 2010

    Just a few links from the past week that are worth checking out:

    • “Spices: the internet of the ancient world!” — Planet Money Podcast. Great storytelling about the ancient spice trade and how information about where certain spices came from eventually leaked out and popped the spice trading bubble/monopoly.

    • Enterprise software is entirely bereft of soul. Reading this reminded me of antifeatures and the competitive advantages of open source software.

    • Emulating Empathy. Nice summary of how interacting with users of software (customers) on non-technical issues, or over high-level requirements, provokes creativity. Also, good customer communication is a skill, not an innate talent, meaning it can be taught and learned. :)

    • Interaxon. Other than the cute name, this is a fascinating company and concept based in Vancouver, BC. Thought controlled computing! Looking forward to seeing what comes out of their “Bright Ideas” exhibit during the Winter Olympics.


    tips