Our Blog

    Ongoing observations by End Point Dev people

    Automatically kill process using too much memory on Linux

    By Jon Jensen
    August 30, 2012

    Sometimes on Linux (and other Unix variants) a process will consume way too much memory. This is more likely if you have a fair amount of swap space configured, but still within the normal range, for example, as much swap as you have RAM.

    There are various ways to try to limit the trouble from such situations. You can use the shell’s ulimit setting to put a hard cap on the amount of RAM allowed to the process. You can adjust settings in /etc/security/limits.conf on both Red Hat- and Debian-based distros. You can wait for the OOM (out of memory) killer to notice the process and kill it.
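
    For instance, a cap can be set for one shell session with ulimit, or persistently per user in limits.conf (the values and the user name below are illustrative, not from the post):

    # Limit this shell session's processes to ~2 GB of address space (value in kB)
    $ ulimit -v 2097152

    # Rough equivalent for one user in /etc/security/limits.conf
    someuser  hard  as  2097152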

    But none of those remedies helps when you sometimes want a process to be able to use a lot of RAM for a legitimate reason, and only want it stopped when it’s stuck in an infinite loop that will eventually use all memory.

    Sometimes such a bad process will bog the machine down horribly before the OOM killer notices it.

    We put together the following script about a year ago to handle such cases:
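
    The script itself is cut from this excerpt, but a minimal sketch of the idea it describes (walk the process table with Proc::ProcessTable and kill anything over a memory threshold; the 2 GB limit and the choice of SIGKILL are illustrative, not the original’s) might look like this:

    #!/usr/bin/env perl
    # Sketch: kill any process whose resident set size exceeds a threshold.
    use strict;
    use warnings;
    use Proc::ProcessTable;

    my $limit_bytes = 2 * 1024 * 1024 * 1024;   # 2 GB; adjust to taste

    my $t = Proc::ProcessTable->new;
    for my $p ( @{ $t->table } ) {
        next unless defined $p->rss;
        if ( $p->rss > $limit_bytes ) {
            printf "Killing PID %d (%s), RSS %d bytes\n",
                $p->pid, $p->cmndline, $p->rss;
            kill 'KILL', $p->pid;
        }
    }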

    It uses the Proc::ProcessTable module from Perl’s CPAN to do the heavy lifting. We invoke it once per minute in cron. If you have processes eating up memory so quickly that they bring down the machine in less than a …


    hosting linux perl

    Git: Delete your files and keep them, too

    By Jeff Boes
    August 30, 2012

    I was charged with cleaning up a particularly large, sprawling set of files comprising a git repository. One whole “wing” of that structure consisted of files that needed to stay around in production: various PDFs, PowerPoint presentations, and Windows EXEs that were only ever needed by the customer’s partners and were downloaded from the live site. Our developer camps never wanted local copies of these files, which amounted to over 280 MB, and since we have dozens of camps shadowing this repository, all on the same server, this will save a few GB at least.

    I should point out that our preferred deployment is to have production, QA, and development all be working clones of a central repository. Yes, we even push from production, especially when clients are the ones making changes there. (Gasp!)

    So: the aim here is to make the stuff vanish from all the other clones (when they are updated), but to preserve the stuff in one particular clone (production). Also, we want to ensure that no future updates in that “wing” are tracked.

    # From the "production" clone:
     $ cd stuff
     $ git rm -r --cached .
     $ cd ..
     $ echo "stuff" …
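
    The excerpt cuts off mid-command, but the usual shape of this recipe (my assumption: the truncated line appends the directory name to .gitignore) is to untrack the directory, ignore it going forward, and commit; the files stay on disk only in the clone where the commands were run:

    # Sketch of the complete recipe, from the "production" clone:
    $ cd stuff
    $ git rm -r --cached .          # untrack everything under stuff/ but keep the files on disk
    $ cd ..
    $ echo "stuff" >> .gitignore    # ignore the directory from now on
    $ git add .gitignore
    $ git commit -m "Stop tracking stuff/"
    # Other clones delete stuff/ on their next update; this clone keeps it as untracked files.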

    git

    Company Update August 2012

    By Zed Jensen
    August 24, 2012

    Everyone here at End Point has been busy lately, so we haven’t had as much time as we’d like to blog. Here are some of the projects we’ve been knee deep in:

    • The Liquid Galaxy Team (Ben, Adam, Kiel, Gerard, Josh, Matt) has been working on several Liquid Galaxy installations, including one at the Monterey Bay National Marine Sanctuary Exploration Center in Santa Cruz, and one for the Illicit Networks conference in Los Angeles. Adam has also been preparing Ladybug panoramic camera kits for clients to take their own panoramic photos and videos. The Liquid Galaxy team welcomed new employees Aaron Samuel in July, and Bryan Berry just this week.
    • Brian B. has been improving a PowerCLI script to manage automated cloning of VMware vSphere virtual machines.
    • Greg Sabino Mullane has been working on various strange PostgreSQL database issues, and gave a riveting presentation on password encryption methods.
    • Josh Tolley has been improving panoramic photo support for Liquid Galaxy and expanding a public health data warehouse.
    • David has been at work on a web-scalability project to support customized content for a Groupon promotion, while continuing to benefit from nginx caching. He has also been …

    company

    Paginating API call with Radian6

    By Marina Lohova
    August 24, 2012

    I wrote about Radian6 in my earlier blog post. Today I will review one more aspect of the Radian6 API: call pagination.

    Most Radian6 requests return paginated data. This introduces the extra complexity of making the request several times in a loop in order to get all the results. Here is one simple way to retrieve paginated data from Radian6 using Ruby blocks.

    I will use the following URL to fetch data:

    /data/comparisondata/1338958800000/1341550800000/2777/8/9/6/

    Let’s decipher this.

    • 1338958800000 is start_date and 1341550800000 is end_date for the document search. They correspond to June 6, 2012 through July 6, 2012, formatted with date.to_time.to_i * 1000.

    • 2777 is topic_id, a Radian6 term, denoting a set of search data for every customer.

    • 8 stands for the Twitter media type. There are various media types in Radian6; they reflect where the data came from. The media_types parameter can include a comma-separated list of values for different media types.

    • 9 and 6 are page and page_size respectively.

    First comes the method to fetch a single page.

    In the Radian6 wrapper class:

    def page(index, &block)
      data = block.call(index) 
      articles, count = data['article'], data[ …
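
    The method is cut off above, but a sketch of how such a paging loop might be driven (the fetch_json helper and the assumption that page returns the array of articles for a single page are mine, not part of Radian6’s API) could look like this:

    # Sketch only: keep requesting pages until Radian6 returns an empty batch.
    def all_articles(start_date, end_date, topic_id, media_type, page_size = 100)
      articles = []
      index = 1
      loop do
        batch = page(index) do |i|
          # hypothetical HTTP helper building the comparisondata URL shown above
          fetch_json("/data/comparisondata/#{start_date}/#{end_date}/" \
                     "#{topic_id}/#{media_type}/#{i}/#{page_size}/")
        end
        break if batch.empty?
        articles.concat(batch)
        index += 1
      end
      articles
    end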

    rails api

    Merging Two Google Accounts: My Experience

    By Steph Skardal
    August 21, 2012

    Before I got married, I used a Gmail account associated with my maiden name (let’s call this account A). After I got married, I switched to a new Gmail address (let’s call this account B). This caused daily annoyances, as my use of various Google services was split between the two accounts.

    Luckily, some Google services let you easily toggle between two accounts, but there is no easy way to define which account to use as the default for which service, so I found myself toggling back and forth frequently. Unfortunately, Google doesn’t provide functionality to merge multiple Google accounts. You would think they might, especially given my particular situation, but I can see how logically determining how to merge the data would be tricky. So instead, I set off on migrating all my data to account B, as described in this post.

    Consider Your Google Services

    First things first, I took a look at the Google services I used. Here’s how things broke down for me:

    • Gmail: Account A forwards to account B. I always use account B.
    • Google+: Use through account A.
    • Google Analytics: Various accounts divided between account A and account B. …

    tools

    Using Different PostgreSQL Versions at the Same Time

    By Szymon Lipiński
    August 20, 2012

    When I work for multiple clients on multiple different projects, I usually need a bunch of different stuff on my machine. One of those things is having multiple PostgreSQL versions installed.

    I use Ubuntu 12.04, where installing PostgreSQL is quite easy. Currently two versions are available out of the box: 8.4 and 9.1. To install them I used the following command:

    ~$ sudo apt-get install postgresql-9.1 postgresql-8.4 postgresql-client-common
    

    Now I have the above two versions installed.

    Starting the database is also very easy:

    ~$ sudo service postgresql restart
     * Restarting PostgreSQL 8.4 database server   [ OK ]
     * Restarting PostgreSQL 9.1 database server   [ OK ]
    

    The problem I had for a very long time was using the proper psql version. Both databases installed their own programs like pg_dump and psql. Normally you can use the pg_dump from the higher PostgreSQL version; however, using different psql versions can be dangerous, because psql issues a lot of queries that dig deep into PostgreSQL’s internal tables to get information about the database. Those internals sometimes change from one database version to another, so the best solution is to use the psql from the …
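
    On Ubuntu each version’s client binaries live under /usr/lib/postgresql/<version>/bin, and the pg_wrapper scripts from postgresql-client-common let you choose a cluster explicitly. One way to make sure psql matches the server (the cluster name main is Ubuntu’s default; treat this as a sketch) is:

    ~$ psql --cluster 8.4/main            # pg_wrapper runs the 8.4 psql against the 8.4 cluster
    ~$ psql --cluster 9.1/main            # and the 9.1 psql against the 9.1 cluster
    ~$ /usr/lib/postgresql/9.1/bin/psql   # or call a specific version's binary directly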


    postgres ubuntu

    Hidden inefficiencies in Interchange searching

    By Jeff Boes
    August 13, 2012

    A very common, somewhat primitive approach to Interchange searching looks like this:

    The search profile contains something along the lines of

      mv_search_type=db
      mv_search_file=products
      mv_column_op=rm
      mv_numeric=0
      mv_search_field=category
    
    [search-region]
      [item-list]
        [item-field description]
      [/item-list]
    [/search-region]
    

    In other words, we search the products table for rows whose column “category” matches an expression (with a single query), and we list all the matches (description only). However, this can be inefficient depending on your database implementation: the item-field tag issues a query every time it’s encountered, which you can see if you “tail” your database log. If your item-list contains many different columns from the search result, you’ll end up issuing many such queries:

    [item-list]
        [item-field description], [item-field weight], [item-field color],
        [item-field size], [item-field ...]
      ...
    

    resulting in:

    SELECT description FROM products WHERE sku='ABC123'
    SELECT weight FROM products WHERE sku='ABC123'
    SELECT color FROM products WHERE sku='ABC123'
    SELECT size FROM products WHERE sku='ABC123'
    ... …

    interchange

    Rails 3 ActiveRecord caching bug ahoy!

    By Brian Gadoury
    August 2, 2012

    Sometimes bugs in other people’s code make me think I might be crazy. I’m not talking Walter Sobchak gun-in-the-air-and-a-Pomeranian-in-a-cat-carrier crazy, but “I must be doing something incredibly wrong here” crazy. I recently ran into a Rails 3 ActiveRecord caching bug that made me feel this kind of crazy. Check out this pretty simple caching setup and the bug I encountered, and tell me: Am I wrong?

    I have two models with a simple parent/child relationship defined with has_many and belongs_to ActiveRecord associations, respectively. Here are the pertinent bits of each:

    class MimeTypeCategory < ActiveRecord::Base
      # parent class
      has_many :mime_types

      def self.all
        Rails.cache.fetch("mime_type_categories") do
          MimeTypeCategory.find(:all, :include => :mime_types)
        end
      end
    end

    class MimeType < ActiveRecord::Base
      # child class
      belongs_to :mime_type_category
    end
    

    Notice how in MimeTypeCategory.all, we are eager loading each MimeTypeCategory’s children MimeTypes because our app tends to use those MimeTypes any time we need a MimeTypeCategory. Then, we cache that entire data structure because it’s a good candidate for caching and we like our app to be fast. …
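
    For reference, a minimal sketch of how this cached class method would typically be exercised (the calling code is my assumption, not from the post):

    # The first call computes the array and stores it in the Rails cache;
    # later calls read the cached categories, eager-loaded children included.
    MimeTypeCategory.all.each do |category|
      category.mime_types.each do |mime_type|
        puts mime_type.inspect
      end
    end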


    ruby rails tips
    Page 134 of 219