Our Blog

Ongoing observations by End Point Dev people

Building responsive websites with Tailwind CSS

By Afif Sohaili
December 3, 2021

Sunset over lake and mountains

Tailwind CSS is a CSS framework, like Bootstrap, Bulma, and Foundation. However, Tailwind does things in a less conventional way than traditional CSS frameworks. Instead of providing CSS classes based on components or functional roles (e.g. .card or .row), Tailwind only provides utility classes, each of which does one specific thing, usually corresponding to a single CSS declaration, such as m-4 for margin: 1rem or mt-8 for margin-top: 2rem.

In Bootstrap, one can simply apply the provided .card CSS class to have a <div> styled like a card the Bootstrap way. In Tailwind, the styles have to be constructed from a string of different atomic classes; e.g., the equivalent of Bootstrap’s .card would be something like relative flex flex-col break-words bg-white bg-clip-border min-w-0 rounded border. Verbose, yes, but this gives developers the flexibility to define the appearance of a card element themselves (e.g. there could be multiple variants of a card’s appearance) without having to worry about overriding inherited/cascading CSS classes, which are typically the cause of many CSS bugs in production.
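
For illustration, here is a minimal sketch of such a card built from those utility classes; the inner padding and text classes are our own additions, not part of any official recipe:

<div class="relative flex flex-col break-words bg-white bg-clip-border min-w-0 rounded border">
  <div class="p-4">
    <h2 class="font-bold">Card title</h2>
    <p class="text-gray-600">Card body text goes here.</p>
  </div>
</div>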

Atomic CSS

The first thing most developers notice when working with Tailwind is how wordy CSS class lists can get. It feels almost like using the style="" attribute to write CSS. In the traditional approach to CSS, suppose there are two elements with identical margins:

.card {
  display: block;
  margin: 1rem;
}

.container {
  margin: 1rem;
}

Here, we can see that margin is declared twice. Those duplicates add a few extra bytes to the final CSS payload.

With Tailwind, however, this is how the equivalent would be written:

<div class="block m-4">
</div>

<div class="m-4">
</div>

Here, both of the <div>s reuse the same m-4 class, which Tailwind provides out of the box. This approach keeps the project’s CSS payload from growing as the project does, which is important for good user experience: parsing CSS is a render-blocking task during page load, so the bigger the CSS, the longer a user waits to see something in the browser. Yes, the HTML payload will grow, but only by a little.

One of the differences between using Tailwind CSS classes and using the style attribute is that the latter cannot style pseudo-classes (e.g. :hover, :disabled). With CSS classes that is achievable, and Tailwind provides special prefixes for each of these variants to take effect.

For example:

<!-- margin: 1rem by default, margin: 2rem on hover -->
<div class="m-4 hover:m-8"></div> 

<!-- white background by default, light gray background when disabled -->
<input class="bg-white disabled:bg-gray-100"/> 

Keep in mind that not all variants/pseudo-classes are supported by default, as catering to every possible variant would make the development output of a Tailwind CSS file really big. To use the others, they have to be enabled inside the project’s tailwind.config.js:

// tailwind.config.js
module.exports = {
  variants: {
    extend: {
      backgroundColor: ['active'],
      // ...
      borderColor: ['focus-visible', 'first'],
      // ...
      textColor: ['visited'],
    }
  },
}

Or, if the project is on Tailwind CSS v2.1+, the developers can enable Just-in-Time mode, which grants access to all variants out of the box while also generating styles on demand, keeping builds fast during development.
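
In v2.x, enabling it is a one-line change in the config (the purge globs here are placeholders for wherever your templates actually live):

// tailwind.config.js
module.exports = {
  mode: 'jit',
  // JIT scans these files to decide which styles to generate
  purge: ['./src/**/*.html', './src/**/*.js'],
}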

See Just-in-Time mode for more details.

Shaking off the unused CSS

By default, Tailwind generates the whole set of Tailwind CSS styles, with declarations for almost every possible CSS rule. There is a lot in there that a developer might never use. To put that into context, there are 105 different values just for grid and flexbox gaps. Most projects aren’t likely to use them all, so Tailwind needs a way to remove the unused CSS when generating the final CSS build for production use.

This is where PurgeCSS comes in. PurgeCSS is a plugin that analyzes all CSS, HTML, and JavaScript files in the project and removes unused CSS declarations from the final build. This tool is available as a PostCSS, Webpack, Gulp, Grunt, or Gatsby plugin.
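
In Tailwind v2, PurgeCSS is wired in via the purge option in the config; a typical setup (the globs are assumptions about your project layout) looks like:

// tailwind.config.js
module.exports = {
  // Files PurgeCSS scans for class names when building for production
  purge: [
    './src/**/*.html',
    './src/**/*.jsx',
  ],
}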

Because PurgeCSS analyzes the project’s source code for exact matches of each class name, a CSS class cannot be assembled through string concatenation, or PurgeCSS will not be able to detect that the given Tailwind class is used.

// SomeComponent.jsx
const SomeComponent = (props) => {
  return (
    <div className={'text-' + props.color}>
      Some text
    </div>
  )
}

// <SomeComponent color='gray-400'/>
// In this case, `text-gray-400` will be removed by PurgeCSS in the final CSS 
// production build because it does not know that the component is using it.
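
The usual workaround is to write every class name out in full somewhere in the source, for example by mapping prop values to complete class strings. Here is a sketch (the color values and mapping are hypothetical):

// SomeComponent.jsx
const colorClasses = {
  'gray-400': 'text-gray-400',
  'red-500': 'text-red-500',
};

const SomeComponent = (props) => {
  // The full class names appear verbatim in colorClasses above,
  // so PurgeCSS can find them when scanning the source.
  return (
    <div className={colorClasses[props.color]}>
      Some text
    </div>
  )
}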

DRYing it up

Suppose we have two cards on the page and we want their appearance to be consistent. In the traditional CSS approach, both cards would simply get the .card CSS class, and the same styles would apply to each. In Tailwind we can’t do that, however, and it doesn’t make sense to repeat seven or eight or more class names on both HTML elements on the page. The solution in a component-based framework is to extract the repeated classes into a single component:

const AppButton = () => (
  <button className='py-2 px-4 font-semibold rounded-lg shadow-md'>
    I'm a button
  </button>
)

// Just use <AppButton> everywhere and their appearances will be consistent
// No need to repeat the CSS classes for all three components
render(
  <Form>
    <AppButton/>
    <AppButton/>
    <AppButton/> 
  </Form>
)

Therefore, Tailwind can be easier to use in component-based frameworks, such as Vue or React. But even if you’re not using any of them and are just building a plain HTML file, Tailwind provides a way to compose these classes together using @apply:

.button {
  @apply py-2 px-4 font-semibold rounded-lg shadow-md;
}

.button.success {
  @apply py-2 px-4 font-semibold rounded-lg shadow-md text-white bg-green-400;
}

Then the .button and .button.success classes will be available for reuse, as in traditional CSS.

<!-- py-2 px-4 font-semibold rounded-lg shadow-md gets applied when using "button" -->
<button class="button">I'm a button</button>

<!-- py-2 px-4 font-semibold rounded-lg shadow-md text-white bg-green-400 gets applied -->
<button class="button success">I'm a green button</button>

Building a responsive page using Tailwind

Now let’s look at responsive design. Suppose we want to implement a page with a navigation bar, a sidebar, a content area, and a footer:

Layout with Tailwind CSS - desktop view

And the sidebar and content area should collapse into one column on mobile devices, like this:

Layout with Tailwind CSS - mobile view

First, let’s have the basic page layout:

<nav class="p-4 bg-gray-100">
  <ul class="flex gap-2 justify-end">
    <li>Home</li>
    <li>About</li>
    <li>Contact</li>
  </ul>
</nav>
<div class="flex flex-col">
  <aside class="flex items-center justify-center p-4 bg-red-100">
    Sidebar
  </aside>
  <main class="min-h-screen p-4 bg-green-100">
    <p>
      Sit eos nam quam nemo qui. Quas recusandae praesentium ratione incidunt sunt commodi labore Nemo nemo error molestias saepe ducimus? Porro reprehenderit voluptatibus nihil voluptate quia. Voluptatibus autem maiores vero?
    </p>
    <p>
      Consectetur veniam voluptate esse amet debitis eius? Voluptatem officia quibusdam voluptates cum rerum Odio rem maiores laborum commodi cum. Nobis numquam quia nemo maiores repellendus error fuga Repellendus consequatur laudantium?
    </p>
    <p>
      Elit vitae sit reprehenderit sit laboriosam Ratione iusto numquam corrupti ullam libero! Nisi veritatis facere repudiandae eos perspiciatis recusandae veritatis. Cupiditate temporibus repellat tempore optio numquam id! Perferendis maxime unde
    </p>
    <p>
      Dolor autem dolore tempora atque provident. Maxime quos ipsum porro non suscipit. Consectetur et perspiciatis perspiciatis illum quos Ab nostrum unde facere nemo mollitia, saepe ab? Vitae tempore hic accusamus
    </p>
    <p>
      Elit labore odit error pariatur cupiditate Ex sequi accusantium maxime et vero Unde quo laboriosam illo ipsam modi eaque Delectus dolorem quas quidem reprehenderit fugiat! Exercitationem provident voluptatum perferendis ut.
    </p>
  </main>
</div>
<div class="p-4 bg-yellow-100">
  <h5 class="font-bold">Footer links</h5>
  <ul>
    <li>Home</li>
    <li>About</li>
    <li>Contact</li>
  </ul>
</div>

Tailwind’s official documentation recommends taking a mobile-first approach: all classes without a screen-size variant apply to all screen sizes. Tailwind then provides several screen-size variants, such as sm, md, lg, xl, and 2xl, that can be used to control the appearance at specific screen-size ranges.

This is what we’re doing here: the base flex flex-col classes give us a flexbox column layout, so on small screens all elements collapse down to a single column.

Responsive variants

Now, let’s add Tailwind’s responsive variants so that the page is responsive to screen sizes. In our case, what we would want is to have aside and main placed side-by-side when there’s enough real estate on the screen. To achieve that, we would need to switch the flex-col class to flex-row on bigger screen sizes.

<div class="flex flex-col md:flex-row">
  <aside class="flex items-center justify-center p-4 bg-red-100 md:flex-none md:w-1/3 lg:w-1/4">
  <!-- sidebar -->
  </aside>
  <main class="min-h-screen p-4 bg-green-100">
  <!-- main content -->
  </main>
</div>

flex-col md:flex-row here does the trick for us. md variants, by default, kick in when the screen width is at a minimum of 768px. At that point, our flexbox will change from the column layout to the row layout, displaying our aside and main elements side by side in one row. To better distribute the width, we add the md:w-1/3 and lg:w-1/4 classes to the sidebar. w-1/3 and w-1/4 set the width of the element to one-third and one-fourth of the parent container, respectively. The md and lg prefixes control at which screen sizes Tailwind applies each style.
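
For reference, these are Tailwind’s default breakpoints; they can be customized under theme.screens in tailwind.config.js:

// tailwind.config.js
module.exports = {
  theme: {
    screens: {
      sm: '640px',
      md: '768px',
      lg: '1024px',
      xl: '1280px',
      '2xl': '1536px',
    },
  },
}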

Conclusion

It can be a bit daunting to start, but once you get a handle on it, Tailwind CSS is a great option for rapidly building user interfaces with total control over the styles. Unlike other frameworks, Tailwind does not attempt to provide default styling for any component, allowing every site that uses Tailwind to be truly distinct from the others.

These responsive variants can be applied to any other Tailwind CSS class, providing a powerful way to build responsive user interfaces. The very thin abstraction over CSS gives developers greater flexibility and control over the design while acting as a useful constraint to guide the development process.

Happy styling!


css tailwindcss design

Building a search suggestions feature with Node.js and Vue

By Greg Davidson
November 27, 2021

Old Dog Photo by Kasper Rasmussen on Unsplash

The backstory

Some time ago, I worked on a project to improve the usability of a search component for one of our clients. Similar to Google and other search interfaces, the user was presented with a number of suggested search terms as they typed into the search box. We wanted to add keyboard support and give the component a visual facelift. With the up, down, Esc, and Enter/Return keys, the customer could choose a particular search term, clear their search, or navigate to the results for their chosen term.

This is what the new and improved UI looked like: Search interface with suggested search terms

As developers, it can sometimes feel like we’re stuck when working on older, well established projects. We gaze longingly at newer and shinier tools. Part of my objective while building this feature was to prove the viability of a newer approach (Node.js and Vue) to the other engineers on the project as well as the client.

The feature existed already but we wanted to improve the UX and performance. Having added several Vue-powered features to this site in the past, I was very comfortable with the idea and have written about that previously. It would also be very easy to roll back if needed since this project was limited in scope, and very easy to compare the new solution with the code it replaced.

Picking a route and configuring it

This project was running on Interchange, nginx, and MySQL, but our approach would work in other stacks (e.g. with Apache). One key concept is that we’re using nginx as a reverse proxy to route requests to our various apps, serve static files, etc. Using nginx in this way allows us to stitch together the different services for a given project. In this case I created a location block for the search suggestions endpoint in our nginx config file. This enabled requests made to /suggestions/ to be passed along to our Node.js app. You can read more about reverse proxying with nginx if you like.

# Node.js powered endpoint for search suggestions
location /suggestions/ {
    proxy_pass http://0.0.0.0:8741/;
}

If your project is running on Apache or another platform, there will likely be a similar configuration option to route requests to your Node.js app.
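
For example, with Apache’s mod_proxy enabled, the equivalent of the nginx block above would be something like this (a sketch; adjust the port to match your app):

# Apache equivalent of the nginx location block
ProxyPass /suggestions/ http://127.0.0.1:8741/
ProxyPassReverse /suggestions/ http://127.0.0.1:8741/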

Node.js time

I wrote a little app with the MySQL driver for Node.js and Express. The app connects to MySQL and uses the nifty built-in connection pooling. It accepts POST requests from the site and responds with an array of search suggestion objects in JSON format.
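
The app itself isn’t shown in the post, but the shape of it might look like this minimal sketch using the mysql package; the suggestions table, its columns, and the credentials are assumptions:

const express = require('express');
const mysql = require('mysql');

// Connection pool: connections are created lazily and reused across requests
const pool = mysql.createPool({
  connectionLimit: 10,
  host: 'localhost',
  user: 'app',
  password: process.env.DB_PASSWORD,
  database: 'store',
});

const app = express();
app.use(express.json());

// Accepts { "term": "..." } and responds with matching suggestions as JSON
app.post('/', (req, res) => {
  const term = (req.body.term || '') + '%';
  pool.query(
    'SELECT term, hits FROM suggestions WHERE term LIKE ? ORDER BY hits DESC LIMIT 10',
    [term],
    (err, rows) => {
      if (err) return res.status(500).json({ error: 'query failed' });
      res.json(rows);
    }
  );
});

app.listen(8741);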

Enter Vue

For the front-end part of the feature I created Vue components for the search input and for the display of the results. As the customer types, results are fetched and displayed. The arrow keys navigate up and down through the suggestions, and Esc clears the search. Once the customer has the search they want, they can either press Enter or click the Search button.
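
The project’s actual component isn’t reproduced here, but a rough sketch of that keyboard handling in Vue 2 might look like this (the /suggestions/ endpoint matches the nginx location above; the term field on each suggestion is an assumption):

// Template (not shown) binds @input="fetchSuggestions" and @keydown="onKeydown"
// on the search input.
new Vue({
  el: '#search',
  data: {
    term: '',
    suggestions: [],
    highlighted: -1,
  },
  methods: {
    async fetchSuggestions() {
      const res = await fetch('/suggestions/', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ term: this.term }),
      });
      this.suggestions = await res.json();
    },
    onKeydown(e) {
      if (e.key === 'ArrowDown') {
        this.highlighted = Math.min(this.highlighted + 1, this.suggestions.length - 1);
      } else if (e.key === 'ArrowUp') {
        this.highlighted = Math.max(this.highlighted - 1, 0);
      } else if (e.key === 'Escape') {
        this.term = '';
        this.suggestions = [];
      } else if (e.key === 'Enter' && this.highlighted >= 0) {
        this.term = this.suggestions[this.highlighted].term;
      }
    },
  },
});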

It’s worth mentioning that this search feature works without JavaScript. When you enter a search term and press Enter or click the Search button, you will get results. However, if you do have JavaScript enabled (and if our suggestions app is up and running) you’ll get suggestions as you type. This is a good thing.

Performance wins

The new endpoint returns results in less than 100 ms, a 3x or 4x improvement over the Perl script it replaced. We get a response faster, making the experience much smoother for the user!

Managing the Node.js process

We used pm2 and systemd to manage the Node.js processes and ensure they are started up when the server is rebooted. In my experience this has been very stable and has not required any babysitting by our operations folks.
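
The exact setup varies by host, but the basic pm2 workflow looks roughly like this:

# run the app under pm2
pm2 start app.js --name suggestions

# generate a systemd unit so pm2 (and the app) come back after a reboot
pm2 startup systemd
pm2 save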

Great success!

Have you done anything similar in your projects? Do you have a vintage/legacy app that could benefit from learning some new tricks? Let us know!


nodejs vue javascript development

Salesforce data migration: promoting data changes from a sandbox to production

By Dylan Wooters
November 24, 2021

Manhattan skyline Photo by Dylan Wooters, 2021

Intro

End Point recently completed a new e-commerce website built using Nuxt, Node.js, and Algolia and backed by Salesforce data. As part of the project, the client also wanted to re-categorize their products. They had already updated the data in a Salesforce sandbox and needed a way to move their changes to production in conjunction with the launch of the new website.

This data migration initially seemed like a simple request, but it ended up being one of the more challenging parts of the project. While there are clear instructions on how to migrate code changes from a sandbox to production (via change sets), there is very little information on how to migrate data.

Here I will outline the solution we developed, which involved writing a custom migration script using the JSForce library, testing using additional sandboxes, and planning for a rollback. This can serve as a blueprint for developers faced with the same task.

Why not use Data Loader to migrate?

The Salesforce Data Loader is a clunky yet reliable application that allows you to load large amounts of data directly into a Salesforce org. It will work for migrating data if the requirements are not complex, for example, if you only need to load data into a few objects and there are no connected objects or parent-child relationships. We chose not to use Data Loader for a few important reasons:

  • The data we were migrating was complex. Three levels of hierarchical categories were stored in Salesforce as separate custom objects, attached to another custom object representing products. To make things worse, the category objects were also all attached to each other in a parent-child hierarchy. Loading this data using the Data Loader and CSV files would have been very time-consuming and error-prone.
  • We needed a fast and reproducible migration process. We had a strict cutover time for the launch of the new website, and the migration therefore had to run quickly.

We did end up using Data Loader as part of our rollback process, but more on that in step 6.

Our solution

1. Export JSON data from the sandbox

For our source data, I exported JSON from the Salesforce sandbox using the JSForce CLI, specifically the query function. Having the source data in JSON was easier than parsing CSV files exported directly from Salesforce. I committed the query commands to our Git repo for easy access during the actual production migration.

Here is a basic example of how to export data using the JSForce CLI.

jsforce -c [email protected] \
  -e "query('SELECT Id, Name, CreatedDate FROM Account')" > sandbox-export.json

2. Write a custom migration script

We were already using JSForce in our project to sync Salesforce data with the Algolia search engine, so we decided to also use the library to migrate data. I won’t dive into the specifics of how to use JSForce in this article, since we have another post that does that well.

My basic approach to writing the script started with asking the question, “What would we do if we had direct access to the Salesforce database?” Once I had all of the set-based INSERT and UPDATE operations mapped out, I translated them into procedural TypeScript code.

Here is a basic example from the migration script, where we are reading data from the JSON exported from the sandbox and loading it into Salesforce production.

import * as fs from 'fs';
import * as path from 'path';
import * as jsforce from 'jsforce';

const openConnection = async () => {
  let conn = new jsforce.Connection({
    loginUrl : process.env.instanceURL
  });
  await conn.login(process.env.username as string, process.env.password as string);
  console.log('info', `Connected to Salesforce.`);
  return conn;
}

const getJson = (filename: string) => {
  return JSON.parse(
    fs.readFileSync(
      path.resolve(__dirname, 'sandbox-data/' + filename)
    ).toString()
  );
};

const run = async () => {
  const segmentData = getJson('new-segments.json');
  const connection = await openConnection();

  for (const segment of segmentData.records) {
    // Ignore the Marine Equipment segment since it already exists
    if (segment.Name !== 'Marine Equipment') {
      await connection.sobject('Segment__c').create({
        Name: segment.Name,
        Description__c: segment.Description__c,
        Meta_Description__c: segment.Meta_Description__c,
        Meta_Keywords__c: segment.Meta_Keywords__c,
        SubHeader__c: segment.SubHeader__c
      });
    }
  }
};

// Top-level await isn't available in a plain script, so kick off an async entry point
run().catch(console.error);

3. Come up with a plan for testing data changes

After the migration script runs, you’ll need a way to verify that the data was loaded successfully and according to the requirements. For us, this meant working with our client to develop a smoke testing plan. After running our script, we switched a test version of the website to run against the migrated data in a Salesforce sandbox, and then used the plan to test all of the website functionality to make sure everything worked and the data looked correct. You’ll want to develop a plan based on your specific use case, whether that means verifying changes in Salesforce or an integrated system/​website.

4. Test the migration script against a copy of production

Once you have the migration script and test plan completed, you’ll want to run the script against a copy of Salesforce production. The easiest way to do this is to create a new Full sandbox in Salesforce. However, if you’re short on Full sandbox licenses, you can create a Partial Copy, choosing only the options that are targeted in your script. Both sandboxes can be created by navigating to the Setup page in Salesforce, and then going to Platform Tools > Environments > Sandboxes.

Salesforce sandboxes

After the script runs successfully, you can evaluate the data in the target sandbox to make sure everything looks correct. In our case, this involved pointing a test version of the new site to the migrated data.

One important caveat in this step is the sandbox refresh interval. Full sandboxes have a refresh interval of 30 days, and Partial Copy sandboxes have an interval of 5 days. Essentially, this means that you can’t just press a button and refresh the sandbox after running the migration script if you need to make a change or fix an error. As a result, it’s best to have the data exported from Salesforce production and to understand how to roll back the script’s updates, which I’ll explain more later on.

5. Export data from Salesforce production

Obtain a copy of the Salesforce production data by using the Data Export feature. This can be used as source data for a rollback, if errors occur in either testing the migration script or the real run against production.

To export data from Salesforce, first make a list of all the objects that will be affected by the migration. Then, navigate to the Salesforce Setup page, and go to Administration > Data > Data Export. Click the “Export Now” button and choose the applicable objects (or choose “Include All Data”), and hit “Start Export”. The data export will take a bit, but you should get an email from Salesforce once it is complete.

6. Perform a test rollback

After you test the migration script, if you find that the data is incorrect, you can perform a rollback using the Salesforce Data Loader. Even if the script runs successfully, you’ll want to test a rollback in case an error occurs during the real run. Here’s how I did it:

  • Download and install the Data Loader for your platform
  • Delete all the data for each affected object. To do this, log into Salesforce and open the Developer Console, then click on the “Debug” menu and choose “Open execution anonymous window”. Use the following code snippet to delete all data, replacing Yourobject__c with the name of the target object. Again, you’ll need to do this for each object that was affected by your script.
List<Yourobject__c> SobjLst = [select id from Yourobject__c];
delete SobjLst;
  • Finally, re-insert the data using the CSV backups from step 5. In the Data Loader, choose the Insert method, and log into Salesforce. Be sure to use an account with the proper write permissions. Then go through the Data Loader wizard, choosing the applicable CSV backup file, and load the data back into Salesforce. Repeat for each affected object.

Salesforce Data Loader
Not the prettiest application, at least on a Mac. Circled in red is the Insert button.

This process turned out to be quite cumbersome for our project, due to the complexity of the data. It ended up taking several rounds of manual work in Excel (vlookups) to align all of the connected object ids correctly. It took me back to my first job after college. Hopefully your attempt will be easier! If it proves to be challenging, check out the online version of the Data Loader. It has expanded features that make it easier to import objects with relationships. Note, however, that the online version is only free for up to 10,000 records.

7. Run the script against production

Launch in slack
Starting the “livestream” of our website launch in Slack!

Finally it’s time for the main event. Before you run your script against Salesforce production, be sure to export both the data from the sandbox and the data from production (steps 1 and 5). This will ensure that the most recent data gets migrated and will set you up for a rollback if something goes wrong.

After your migration script runs without errors, use the testing plan from step 3 to verify that the data is correct. Once you’ve verified everything, give yourself a pat on the back for successfully navigating the arcane and treacherous world of a Salesforce data migration!

Summary

If your Salesforce data model is complex, consider using the steps above to migrate data from your sandbox to production. It will ensure a successful migration, especially if you need a fast and repeatable process. If your data is simple and you are only migrating a few objects, check out the Salesforce Data Loader application or DataLoader.io.

For more info about the tech we used in this project, check out:


salesforce typescript migration

From Liquid Galaxy to VisionPort

By Alejandro Ramon
November 23, 2021

A VisionPort system

We are rebranding! Meet the future of Liquid Galaxy: VisionPort.

We are proud to announce the official launch of VisionPort, the next phase for Liquid Galaxy. We have spent the past six months taking steps to rebrand, expand, and reposition our product to combine modern working necessities with the traditional kiosk-style, shared immersive experience familiar to our clients. These efforts have produced a robust product encompassing an entire room of enhanced features, screens, and conference-enabling applications.

While our core product will remain consistent with what our clients know and love, current and future clients can look forward to significant updates to the content management system and user experience. We are also proud to announce advanced add-on features that will allow our current and future clients to make their systems more collaborative, interactive, and adaptable to their needs.

Our core offerings include:

  • Extensive preparation and customization of screens, servers, and frames
  • Google Earth, Cesium, Street View, etc.
  • Content Management System
  • Ongoing support service
  • Custom installation, system, and content consulting
  • Comprehensive system and content training

Our add-on offerings now include:

  • Video Conference Integration: Allows users to join video conferences or host meetings with a native, software-level view of the Liquid Galaxy.
  • Screen Share Integration: Allows users to share their content to the main displays. Supports sharing presentation files and media from a laptop, phone, or tablet to any screen on the VisionPort.
  • Media Stream Integration: Allows users to share any HDMI video source onto any of the main displays on VisionPort.
  • Support for integrating all the above and the Content Management System with any additional side displays to complement the main screens.
  • VisionPort Remote: Full remote control of the content and camera view, intended for touch devices. Also supports remote view-only sharing.

All of these enhancements are detailed at our new website, www.visionport.com.

Please note that we also recently made a major domain change for our organization: Our new domain name is www.endpointdev.com — feel free to look through that site as well!

In the same vein, all existing and future clients can now reach us at the www.visionport.com domain. Our team will be proactively reaching out to all of our clients to inform them of these changes, and will be available to answer any questions about the changes or upgrades available.

We are very excited about these new advancements, offerings, and rebranding opportunities. We look forward to engagement with current and future clients alike.


visionport company

Forwarding Google Forms responses to an external API

By Afif Sohaili
November 16, 2021

Sunrise over the Wasatch mountains

Google Forms is a great form service that many people use for surveys, research, questionnaires, etc. It has an intuitive and flexible interface for building forms and is fairly easy to use for everyone. Once you get a response, you can view the results in the admin section of the form or in a Google Sheets document in which Google will automatically insert all your responses.

However, you may need to do something else with the responses. For example, what if you want to have the response printed in your Slack channel or Discord server? Or what if you want to use the raw data to make more complex visualizations than Google Sheets is capable of?

Google Apps Script to the rescue!

Google Apps Script is a development platform for building add-ons for Google products, such as Google Sheets, Google Docs, and Google Forms. You write your JavaScript in the code editor that Google provides for you, so there is nothing to install on your local machine to start developing. This code then gets executed on Google’s servers. These Google Apps Script projects can then be published as a Google Workspace add-on that others can use or shared within your organization.

Even though Google Apps Script is basically just JavaScript, there are a few key differences from a Node.js project. For example, in a Node.js environment you could use the https module to send an HTTP request to an external service (or you could grab a package like axios or node-fetch that does the same thing). In Google Apps Script, however, you cannot use the https module, because it is not a Node.js project and does not let you install external npm packages. Instead, a limited set of standard libraries comes built in with every Google Apps Script project.
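
One of those standard libraries is UrlFetchApp, which we’ll use later in this post. For a quick taste (the URL here is a placeholder):

// Instead of require('https') or an npm HTTP client, Apps Script provides UrlFetchApp
var response = UrlFetchApp.fetch('https://example.com/api', {
  method: 'post',
  contentType: 'application/json',
  payload: JSON.stringify({ hello: 'world' }),
});
Logger.log(response.getContentText());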

1. Start a new Google Apps Script project

  1. Head to Google Forms and create a new form.
  2. Click the triple-dots icon at the top-right corner of the form and choose “Script Editor”.
  3. You should now see the Google Apps Script editor. Great! Let’s change the project name to something more descriptive (e.g. “Forward to API”).

2. Add triggers to the script for initialization

Google Apps Script allows you to install triggers on the current form. The ones we are interested in right now are onOpen, which runs when a user with edit access opens the form, and onInstall, which runs when a user installs the add-on in the Google Forms form. There are other events you might be interested in, listed here.

We will also need to provide a way for users to set the outgoing URL for the responses. The easiest approach would be to hardcode it, but we can make our add-on a little more configurable: let’s create a sidebar containing the configuration interface, so users can change the settings for our add-on, in this case the destination URL for the responses.

Now, in the Google Apps Script code editor, paste the following code:

/**
 * Adds a custom menu to the active form to show the add-on sidebar.
 */
function onOpen(e) {
  FormApp.getUi()
    .createAddonMenu()
    .addItem('Configure', 'showSidebar')
    .addToUi();
}

/**
 * Runs when the add-on is installed.
 */
function onInstall(e) {
  onOpen(e);
}

/**
 * Opens a sidebar in the form containing the add-on's user interface for
 * configuring the notifications this add-on will produce.
 */
function showSidebar() {
  var sidebarPage = HtmlService.createHtmlOutputFromFile('sidebar')
    .setTitle('Your add-on configuration');
  FormApp.getUi().showSidebar(sidebarPage);
}

// Save settings

// Load settings

First, we would like a menu item on the form for accessing the add-on. This is done through the createAddonMenu().addItem call within the onOpen trigger. We also have the onInstall trigger, which simply calls onOpen. With these, both installing the add-on for the first time and opening the form create an add-on menu item called “Configure”.

createAddonMenu().addItem accepts two arguments: a label (Configure) and the name of a function to execute when the item is selected (showSidebar). In the code above, it will run the showSidebar function. showSidebar initializes the add-on’s view by loading and rendering the HTML file we specified in createHtmlOutputFromFile. Since we pass sidebar to the function, it will automatically assume the filename sidebar.html and load that file from our project.

3. Adding the configuration sidebar

In the previous step, we specified our menu item to show the sidebar. Now, let’s go ahead and create a new file called sidebar.html.

  1. On the Files panel, click the + icon and choose HTML in the dropdown menu.
  2. Name the file sidebar.html when prompted. This is important as we have specified the name sidebar to the createHtmlOutputFromFile function within showSidebar.
  3. Open the HTML file and paste the following HTML content:
<!DOCTYPE html>
<html>
  <head>
    <base target="_top">
    <style>
      body {
        font-family: sans-serif;
        font-size: 14px;
      }

      input {
        border: 1px solid #3f3f3f;
        padding: 0.25rem;
        width: 100%;
      }

      #error {
        margin-top: 0.5rem;
      }

      .input-field {
        margin-bottom: 1rem;
      }
    </style>
  </head>
  <body>
    <form>
      <div class="input-field">
        <label for="url">URL to send responses to:</label>
        <input id="url" type="text" name="url" placeholder="e.g. https://some-api.com/accept/responses"/>
      </div>
      <div class="block" id="button-bar">
        <button class="action" id="save-settings">Save</button>
        <p id="response"></p>
      </div>
    </form>
    
    <script src="//ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js">
    </script>
    <script>
    // Our JavaScript code
    </script>
  </body>
</html>
Test your sidebar
  1. Go back to your Google Forms form and refresh. You should now get an add-on menu on your form, with your add-on listed in the dropdown.
  2. Choose your add-on and click “Configure”. This is the add-on menu item that we declared in step 1.
  3. You should now be prompted to authorize the script. Follow the instructions to allow the script permission to run on your Google Forms forms.
  4. Once you’re done, go to the add-on menu, choose your add-on, and choose “Configure” again. You should now see a sidebar on the right side of the form with a text field to fill in the destination URL and a submit button to save the settings.

Authorization

Your sidebar

4. Saving your configuration

Now we need to be able to save the URL we want to send the responses to and create a form trigger out of that. Let’s go ahead and add that:

Add the following script in the <script> block under the // Our JavaScript code comment.

// sidebar.html
$(function() {
  // Load settings from the server
  loadSettingsAndPopulateForm();

  // Listen to 'submit' event
  $('form').submit(saveSettingsToServer);
});

function loadSettingsAndPopulateForm() {
  google.script.run
      .withSuccessHandler(
        function(settings) {
          $('#url').val(settings.url)
        })
      .withFailureHandler(
        function(msg) {
          $('#response').text('Failed to fetch settings. ERROR: ' + msg);
        })
      .fetchSettings();
}

function saveSettingsToServer(event) {
  event.preventDefault();
  var button = $(this).find('button');
  button.attr('disabled', 'disabled');
  var settings = {
    'url': $('#url').val(),
  };

  // Save the settings on the server
  google.script.run
      .withSuccessHandler(
        function(msg, button) {
          button.removeAttr('disabled');
          $('#response').text('Saved settings successfully');
        })
      .withFailureHandler(
        function(msg, button) {
          button.removeAttr('disabled');
          $('#response').text('Failed to save settings. ERROR: ' + msg);
        })
      .withUserObject(button)
      .saveSettings(settings);
}
// Code.gs

/**
 * Used by the client-side via `google.script.run` to save settings from the form.
 */
function saveSettings(settings) {
  PropertiesService.getDocumentProperties().setProperties(settings);
  // adjustFormSubmitTrigger();
}

/**
 * Used by the client-side via `google.script.run` to load saved settings.
 */
function fetchSettings() {
  return PropertiesService.getDocumentProperties().getProperties();
}

Now, let’s go through the code:

The $(function() {}) block is run at document load. Two things happen here:

  • The loadSettingsAndPopulateForm function runs the fetchSettings function from the backend. Then it populates the text input field #url with the saved settings.
  • It installs an event listener on the configuration form submit to save the settings to the server. This listener, the saveSettingsToServer function, gathers all the values from the input fields, and runs the saveSettings function on the backend, with the input field values as the arguments.
google.script.run

google.script.run is the glue between the frontend (the *.html files) and the backend (the *.gs files). Once you declare a function (e.g. doStuff) on the backend, you can call it from the frontend through google.script.run.doStuff. withSuccessHandler and withFailureHandler are where you supply the callbacks for successful and failed calls to the backend, respectively.

To see more, read the documentation.

5. Installing the form submit event handler

Now that we’re able to pinpoint the address we want to send the form data to, we can start instructing Google Forms to send the data our way when someone submits the form. In order to do that, we:

  1. Code the form submit trigger to submit the data on the form response.
  2. If there’s no existing form submit trigger and the URL is set, install the form submit trigger.
  3. If there is an existing form submit trigger and the URL is unset, remove the trigger.

Let’s write the form submit trigger first. Paste this code at the end of Code.gs:

function sendResponse(e) {
  var data = {
    "form": {
      "id": e.source.getId(),
      "title": e.source.getTitle() ? e.source.getTitle() : "Untitled Form",
      "is_private": e.source.requiresLogin(),
      "is_published": e.source.isAcceptingResponses(),
    },
    "response": {
      "id": e.response.getId(),
      "email": e.response.getRespondentEmail(),
      "timestamp": e.response.getTimestamp(),
      "data": e.response.getItemResponses().map(function(y) {
        return {
          h: y.getItem().getTitle(),
          k: y.getResponse()
        }
      }, this).reduce(function(r, y) {
        r[y.h] = y.k;
        return r
      }, {}),
    }
  };

  var options = {
    method: "post",
    payload: JSON.stringify(data),
    contentType: "application/json; charset=utf-8",
  };

  var settings = PropertiesService.getDocumentProperties();
  UrlFetchApp.fetch(settings.getProperty('url'), options);
};

Let’s break it down. First, we compile the data that we want to send to our application. Form triggers receive an event object as their parameter, which we’ve named e here. e provides us access to the form’s information (via e.source) as well as the responses to the form (via e.response). We gather all of this in an object (called data in this case) and submit it to our app via UrlFetchApp.

UrlFetchApp is a standard library in Google Apps Script. It allows us to make HTTP requests to other applications. Here, we are making a POST HTTP request to the URL that we set (obtained via PropertiesService.getDocumentProperties().getProperty('url')).

Adjusting the form trigger

Right now, sendResponse is not hooked to any function. Let’s hook it to the form trigger with this code:

function adjustFormSubmitTrigger() {
  var form = FormApp.getActiveForm();
  var triggers = ScriptApp.getUserTriggers(form);
  var settings = PropertiesService.getDocumentProperties();
  var url = settings.getProperty('url')
  var triggerNeeded = url && url.length > 0;
  
  // Create a new trigger if required; delete existing trigger
  //   if it is not needed.
  var existingTrigger = null;
  for (var i = 0; i < triggers.length; i++) {
    if (triggers[i].getEventType() == ScriptApp.EventType.ON_FORM_SUBMIT) {
      existingTrigger = triggers[i];
      break;
    }
  }
  if (triggerNeeded && !existingTrigger) {
    var trigger = ScriptApp.newTrigger('sendResponse')
      .forForm(form)
      .onFormSubmit()
      .create();
  } else if (!triggerNeeded && existingTrigger) {
    ScriptApp.deleteTrigger(existingTrigger);
  }
}

Don’t forget to also uncomment // adjustFormSubmitTrigger in the saveSettings function.

This function is pretty straightforward. It checks whether there is a URL saved in the settings and installs the form submit trigger if needed. If there is no URL saved, or if it’s an empty string, any existing form submit trigger is deleted. Changing the URL does not affect an existing form submit trigger, since sendResponse always pulls the latest URL from the settings.

6. Test the triggers

Now we can test the submission. Let’s configure our add-on with a valid URL to our app, add a couple of questions to our form, then hit Preview in the top navigation bar to test our form submit trigger.

Form

Once we’ve filled out the form, hit Submit. You should now see your form response get sent to your destination URL as a POST request.

Form submit

Here’s what I get from a simple Express app I developed to receive the response:

Example app listening at http://localhost:4000
{
  "form": {
    "id": "1zm_anJLsYnnmI-MlRqLcEhK8Eemd90rg_oJmsThiONw",
    "title": "HR Benefits Survey",
    "is_private": true,
    "is_published": true
  },
  "response": {
    "id": "2_ABaOnufmy5dDzYsUl3-G5vlkQaOPoW-Jf5Wskk9SZHhXvpgnTzx6LGVq9YGDqivvo6TXpko",
    "email": "",
    "timestamp": "2021-11-07T08:56:08.299Z",
    "data": {
      "What's the most important thing for you?": "4-day workweek",
      "Select games you'd like to have at the office": [
        "Foosball",
        "Dart"
      ]
    }
  }
}
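
A receiver like that can be just a few lines of Express; here is a minimal sketch, not the exact app:

const express = require('express');
const app = express();
app.use(express.json());

app.post('/', (req, res) => {
  // Print the forwarded Google Forms response
  console.log(JSON.stringify(req.body, null, 2));
  res.sendStatus(200);
});

app.listen(4000, () => console.log('Example app listening at http://localhost:4000'));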

Conclusion

That’s it! We can now send Google Forms responses to our app for better data processing and visualization.


google-apps-script javascript integration google-forms

Liquid Galaxy Media Stream Integration

By Alejandro Ramon
November 11, 2021

Media Stream Integration

End Point’s Immersive and Geospatial Division is proud to announce the rollout of our new Media Stream Integration as an extension to the Liquid Galaxy platform’s capabilities. This additional hardware can be added to existing installations or included in a new solution provided by our sales team.

The Media Stream Integration (“MSI”) is a collection of hardware additions to the Liquid Galaxy that allows a user to stream from any HDMI-capable device in a window of up to 4K (3840×2160 pixels) on the main displays of the system.

With the MSI, a user can connect and share any media source directly to the system through the touchscreen. Examples include a video game console, cable TV box, DVR device, laptop or desktop computer, Plex server, and more. In other words, users can showcase media content that may not be natively supported on the Liquid Galaxy platform.

How it works

A user simply ensures that the device is on, and it will appear on the touchscreen as an option to share to the Liquid Galaxy screens. The stream can be overlaid on top of existing content in pre-defined windows on the display wall. When finished, the overlay window can be removed at any time.

Why we created this

We recognize the value offered in having an impressive, high-resolution display wall, and are often asked during an installation, “Can we watch the game on those screens?” In an effort to expand the flexibility of the system, we asked ourselves, “Why not?” As a dynamic presentation system, we want to equip our clients with all the tools necessary to share their stories, including supporting external content on the system.

Who this benefits

This is useful to anyone looking for a way to share content from a device not natively supported by the system. We are always looking for new ways to interact with the system and build content, and would love to hear your ideas.

If you are an existing client and have any questions about this new capability, or if you are considering a Liquid Galaxy platform for your organization and would like to learn more, please contact us!


visionport

New Jersey Liquid Galaxy Installation

By Ben Witten
November 10, 2021

Another successful Liquid Galaxy conference room

End Point Dev installed a Liquid Galaxy system at the New Jersey office of one of our clients this past March. This marks the fifth office that our client is using to showcase a Liquid Galaxy, joining offices in 4 other states. This new seven-screen Liquid Galaxy system is built into a conference room wall, and will be used as a technological showpiece to allow their team and clients to view different locations, information, and datasets in an immersive and interactive environment.

As our team is headquartered in New York City, this was a relatively local installation. Our End Point Dev engineers initially spent three days installing this system at the client’s new office; however, due to unforeseen circumstances there were a couple of return trips made to finalize details and ensure the best possible product. We also provided one day of on-site system training, walking the team through using the system and creating presentations with the Content Management System.

All Liquid Galaxy content for this client has been prepared by their global marketing team, who build region-focused content for each of the different Liquid Galaxy systems. The team effectively builds interactive content that allows their staff to home in on geographic locations and share in-depth research and datasets on their seven-screen systems.

Our client has seen great success with its growing fleet of Liquid Galaxy systems. They compare the Liquid Galaxy to being in a helicopter, due to the ability to zoom in and look at real estate properties of interest.

Our client’s in-house research team also effectively uses Liquid Galaxy to visualize opportunities and trends. The team combines information from their industry dataset with insights from daily dialogue with leaders in manufacturing and distribution.

While our client often uses Liquid Galaxy in dedicated conference rooms specifically designed to house and experience the system, they have also found success using the platform portably in exhibits and trade fair expo booths.

To learn more about Liquid Galaxy, please visit our VisionPort website.


visionport clients

.NET/C# developer job opening

By Jon Jensen
November 9, 2021

programmer at keyboard on desk
Photo by #WOCinTech Chat · CC BY 2.0, modified

We are seeking a full-time .NET/C# software developer based in the United States to work with us on our clients’ applications.

End Point Dev is an Internet technology consulting company based in New York City, with 50 employees serving many clients ranging from small family businesses to large corporations. The company is going strong after 26 years in business!

Even before the pandemic most of us worked remotely from home offices. We collaborate using SSH, Git, project tracking tools, Zulip chat, video conferencing, and of course email and phones.

What you will be doing:

  • Develop new web applications and support existing ones for our clients.
  • Work together with End Point Dev co-workers and our clients’ in-house staff.
  • Use your desktop operating system of choice: Windows, macOS, or Linux.
  • Enhance open source software and contribute back as opportunity arises.

You’ll need professional development experience with:

  • 3+ years of development with .NET and C#
  • Databases such as SQL Server, PostgreSQL, Redis, Solr, Elasticsearch, etc.
  • Front-end web development with HTML, CSS, JavaScript and frameworks such as Vue, React, Angular
  • Security consciousness such as under PCI-DSS for ecommerce or HIPAA medical data
  • Git version control
  • Automated testing
  • HTTP, REST APIs

You have these important work traits:

  • Strong verbal and written communication skills
  • An eye for detail
  • Tenacity in solving problems and focusing on customer needs
  • A feeling of ownership of your projects
  • The ability to work both independently and as part of a team
  • A good remote work environment

For some of our clients you will need to submit to and pass a criminal background check.

What work here offers:

  • Collaboration with knowledgeable, friendly, helpful, and diligent co-workers around the world
  • Work from your home office, or from our offices in New York City and the Tennessee Tri-Cities area
  • Flexible, sane work hours
  • Paid holidays and vacation
  • Health insurance subsidy and 401(k) retirement savings plan
  • Annual bonus opportunity

Get in touch with us:

Please email us an introduction to [email protected] to apply. Include your location, a resume/​CV, your Git repository or LinkedIn URLs, and whatever else may help us get to know you.

We look forward to hearing from you! Direct work seekers only, please; this role is not for agencies or subcontractors.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of sex/​gender, race, religion, color, national origin, sexual orientation, age, marital status, veteran status, or disability status.


jobs dotnet remote-work