The Complete Video Solution for Web and Mobile Developers

SitePoint - Thu, 11/30/2017 - 17:33

This article was originally published on Cloudinary Blog. Thank you for supporting the partners who make SitePoint possible. Videos in web sites and apps are starting to catch up with images in terms of popularity and they are a constantly growing part of the media strategy for most organizations. This means bigger challenges for developers […]

Continue reading The Complete Video Solution for Web and Mobile Developers on SitePoint.

Amazon made a camera to give its image recognition tech to developers

Amazon wants to make it much easier for developers to use its image recognition capabilities.

Amazon Web Services just introduced a new $249 camera with computer vision and AI-smarts baked right in. Called DeepLens, the camera is intended for developers who want to learn to build applications that leverage Amazon's artificial intelligence technology.


The camera is available for pre-order now and will ship next year, the company says.

It's a boxy device that's obviously designed with the AI tech in mind rather than looks or pure specs. The 4MP camera shoots 1080p HD video and is equipped with Wi-Fi, a micro SD slot, and 8 GB of memory.


PHP-FPM tuning: Using ‘pm static’ for Max Performance

SitePoint - Wed, 11/29/2017 - 12:00

Let's take a very quick look at how best to set up PHP-FPM for high throughput, low latency, and more stable use of CPU and memory. By default, most setups have PHP-FPM's PM (process manager) string set to dynamic, and there's also the common advice to use ondemand if you suffer from memory availability issues. Let's compare these two management options based on the official documentation, and then compare them with my favorite setting for high-traffic setups --- pm static:

  • pm = dynamic: the number of child processes is set dynamically based on the following directives: pm.max_children, pm.start_servers, pm.min_spare_servers, pm.max_spare_servers.

  • pm = ondemand: the processes spawn on demand when requested, as opposed to dynamic, where pm.start_servers are started when the service is started.

  • pm = static: the number of child processes is fixed by pm.max_children.

See the full list of global php-fpm.conf directives for further details.
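For reference, these directives live in a PHP-FPM pool configuration file (the path varies by distro, e.g. /etc/php-fpm.d/www.conf). A minimal sketch with illustrative values, not recommendations:

```ini
; Illustrative values only; size pm.max_children for your own server
[www]
pm = static
pm.max_children = 100

; Consulted only when pm = dynamic:
;pm.start_servers = 20
;pm.min_spare_servers = 10
;pm.max_spare_servers = 30

; Consulted only when pm = ondemand:
;pm.process_idle_timeout = 10s
```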

PHP-FPM Process Manager (PM) Similarities to CPUFreq Governor

Now, this may seem a bit off topic, but I hope to tie it back into our PHP-FPM tuning topic. Okay, we’ve all had slow CPU issues at some point, whether it be a laptop, VM or dedicated server. Remember CPU frequency scaling? (CPUFreq governor.) These settings, available on both *nix and Windows, can improve the performance and system responsiveness by changing the CPU governor setting from ondemand to performance. This time, let's compare the descriptions and look for similarities:

  • Governor = ondemand: scales CPU frequency dynamically according to current load. Jumps to the highest frequency and then scales down as the idle time increases.

  • Governor = conservative: scales the frequency dynamically according to current load. Scales the frequency more gradually than ondemand.

  • Governor = performance: always run the CPU at the maximum frequency.

See the full list of CPUFreq governor options for further details.

Notice the similarities? I've used this comparison first to make the case for recommending pm static as your first choice for PHP-FPM.

With CPU governor, the performance setting is a pretty safe performance boost because it’s almost entirely dependent on your server CPU’s limit. The only other factors would be things such as heat, battery life (laptop) and other side effects of clocking your CPU frequency to 100% permanently. Once set to performance, it is indeed the fastest setting for your CPU. For example read about the ‘force_turbo’ setting on Raspberry Pi, which forces your RPi board to use the performance governor where performance improvement is more noticeable due to the low CPU clock speeds.

Using ‘pm static’ to Achieve Your Server’s Max Performance

The PHP-FPM pm static setting depends heavily on how much free memory your server has. Basically, if you are suffering from low server memory, then pm ondemand or dynamic may be better options. On the other hand, if you have the memory available, you can avoid much of the PHP process manager (PM) overhead by setting pm to static at the max capacity of your server. In other words, when you do the math, pm.max_children should be set to the maximum number of PHP-FPM processes that can run without creating memory availability or cache pressure issues --- but not so high as to overwhelm the CPU(s) and leave a pile of pending PHP-FPM operations.
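As a rough sketch of that math (the numbers below are made up for illustration; measure your own average child process size with top or ps first):

```javascript
// Hypothetical sizing: memory you can dedicate to PHP-FPM divided by
// the average resident size of one child gives a ceiling for
// pm.max_children. Both numbers below are illustrative.
const availableMb = 8192; // memory budgeted for PHP-FPM children
const perChildMb = 80;    // average RSS of one php-fpm child
const maxChildren = Math.floor(availableMb / perChildMb);
console.log(maxChildren); // 102
```

Leave headroom below that ceiling for the opcode cache, the OS page cache and traffic spikes.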

In the screenshot above, this server has pm = static and pm.max_children = 100, which uses a max of around 10GB of the 32GB installed. Take note of the self-explanatory highlighted columns. When that screenshot was taken there were about 200 ‘active users’ (past 60 seconds) in Google Analytics. At that level, about 70% of PHP-FPM children are still idle. This means PHP-FPM is always set to the max capacity of your server's resources regardless of current traffic: idle processes stay online waiting for traffic spikes and respond immediately, rather than having to wait on the pm to spawn children and then kill them off once pm.process_idle_timeout expires.

I have pm.max_requests set extremely high because this is a production server with no PHP memory leaks. You can use pm.max_requests = 0 with static if you have 110% confidence in your current and future PHP scripts, but it's recommended to restart scripts over time. Set the number of requests high, since the point is to avoid pm overhead --- for example, at least pm.max_requests = 1000, depending on your pm.max_children value and the number of requests per second.

The screenshot uses Linux top filtered by the ‘u’ (user) option and the name of the PHP-FPM user. The number of processes displayed is only the ‘top’ 50 or so, since top shows only the processes that fit in your terminal window --- in this case, sorted by %CPU. To view all 100 PHP-FPM processes you can use something like:

top -bn1 | grep php-fpm

Continue reading PHP-FPM tuning: Using ‘pm static’ for Max Performance on SitePoint.

23 Development Tools for Boosting Website Performance

SitePoint - Tue, 11/28/2017 - 12:00

When dealing with performance, it's hard to remember all the tools that might help you out during development. For that purpose, we've compiled a list of 23 performance tools for your reference. Some you'll have heard of, others probably not. Some have been covered in detail in our performance month, others are yet to be covered in future articles; but all are very useful and should be part of your arsenal.

Client-side Performance Tools

1. Test your Mobile Speed with Google

Google’s Test My Site is an online tool powered by the popular website performance tool WebPagetest.

You can either view your report on the site or have it emailed to you.

The tool gives you your website loading time (or Speed Index) calculated using a Chrome browser on a Moto G4 device within a 3G network. It also gives you the estimated percentage of visitors lost due to loading time. Among other things it also:

  • compares your site speed with the top-performing sites in your industry
  • gives you top fixes that can help you speed up your website loading time.
2. Sitespeed.io

Sitespeed.io is an open-source tool --- or rather, a set of tools --- that can help you measure your website's performance and improve it.


  • Coach: gives you performance advice and fixes for your website based on best practices.
  • Browsertime: collects metrics and HAR files from your browser.
  • Chrome-HAR: helps you compare HAR files.
  • PageXray: extracts different metrics (from HAR files) such as size, number of requests, and so on.

You can install these tools using npm:

npm install -g sitespeed.io

Or Docker:

docker run --shm-size=1g --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io <url> --video --speedIndex

3. Lighthouse by Google

Lighthouse is an open-source tool for running audits to improve web page quality. It's integrated into Chrome's DevTools and can also be installed as a Chrome extension or a CLI-based tool. It's an indispensable tool for measuring, debugging and improving the performance of modern, client-side apps (particularly PWAs).

You can find the extension in the Chrome Web Store.

Or you can install Lighthouse from npm with:

npm install -g lighthouse

Then run it with:

lighthouse <url>

You can use Lighthouse programmatically to build your own performance tool or for continuous integration.

Make sure to check these Lighthouse-based tools:

  • webpack-lighthouse-plugin: a Lighthouse plugin for Webpack
  • treo: Lighthouse as a service with a personal free plan.
  • calibreapp: a paid service, based on Lighthouse, that helps you track, understand and improve performance metrics using real Google Chrome instances.
  • lighthouse-cron: a module which can help you track your Lighthouse scores and metrics over time.

We've got an in-depth look at Lighthouse in our PWA performance month post.

4. Lightcrawler

You can use Lightcrawler to crawl your website and then run each page it finds through Lighthouse.

Start by installing the tool via npm:

npm install --save-dev lightcrawler

Then run it from the terminal by providing the target URL and a JSON configuration file:

lightcrawler --url <url> --config lightcrawler-config.json

The configuration file can be something like:

{
  "extends": "lighthouse:default",
  "settings": {
    "crawler": {
      "maxDepth": 2,
      "maxChromeInstances": 5
    },
    "onlyCategories": [
      "Performance"
    ],
    "onlyAudits": [
      "accesskeys",
      "time-to-interactive",
      "user-timings"
    ]
  }
}

5. YSlow

YSlow is a JavaScript bookmarklet that can be added to your browser and invoked on any visited web page. This tool analyzes web pages and helps you discover the reasons for slowness based on Yahoo's rules for high-performance websites.


You can install YSlow by dragging and dropping the bookmarklet to your browser’s bookmark bar. Find more information here.

6. GTmetrix

GTmetrix is an online tool that gives you insights into your website performance (fully loaded time, total page size, number of requests etc.) and also practical recommendations on how to optimize it.

7. Page Performance

Page Performance is a Chrome extension that can be used to run a quick performance analysis. If you have many tabs open, the extension will be invoked on the active tab.

8. The AMP Project

The AMP (Accelerated Mobile Pages) project is an open-source project that aims to make the web faster. It enables developers to create fast, high-performing websites with great user experiences across all platforms (desktop browsers and mobile devices).

The AMP project consists of three core components:

  • AMP HTML: it's HTML but with some restrictions to guarantee reliable performance.
  • AMP JS: a JavaScript library that takes care of rendering AMP HTML.
  • AMP Cache: a content delivery network for caching and delivering valid AMP pages.

You can use tools such as AMP Validator or amphtml-validator to check whether your pages are valid AMP pages.

Once you add AMP markup to your pages, Google will discover them automatically and cache them for delivery through the AMP CDN. You can learn how to create your first AMP page here.

Continue reading 23 Development Tools for Boosting Website Performance on SitePoint.

JavaScript Performance Optimization Tips: An Overview

SitePoint - Mon, 11/27/2017 - 12:00

In this post, there's lots of stuff to cover across a wide and wildly changing landscape. It's also a topic that covers everyone's favorite: The JS Framework of the Month™.

We'll try to stick to the "Tools, not rules" mantra and keep the JS buzzwords to a minimum. Since we won't be able to cover everything related to JS performance in a 2000 word article, make sure you read the references and do your own research afterwards.

But before we dive into specifics, let's get a broader understanding of the issue by answering the following: what is considered performant JavaScript, and how does it fit into the broader scope of web performance metrics?

Setting the Stage

First of all, let's get the following out of the way: if you're testing exclusively on your desktop device, you're excluding more than 50% of your users.

This trend will only continue to grow, as the emerging market's preferred gateway to the web is a sub-$100 Android device. The era of the desktop as the main device to access the Internet is over, and the next billion internet users will visit your sites primarily through a mobile device.

Testing in Chrome DevTools' device mode isn't a valid substitute for testing on a real device. Using CPU and network throttling helps, but it's a fundamentally different beast. Test on real devices.

Even if you are testing on real mobile devices, you're probably doing so on your brand spanking new $600 flagship phone. The thing is, that's not the device your users have. The median device is something along the lines of a Moto G1 --- a device with under 1GB of RAM, and a very weak CPU and GPU.

Let's see how it stacks up when parsing an average JS bundle.

Addy Osmani: Time spent in JS parse & eval for average JS.

Ouch. While this image only covers the parse and compile time of the JS (more on that later) and not general performance, it's strongly correlated and can be treated as an indicator of general JS performance.

To quote Bruce Lawson, “it's the World-Wide Web, not the Wealthy Western Web”. So, your target for web performance is a device that's ~25x slower than your MacBook or iPhone. Let that sink in for a bit. But it gets worse. Let's see what we're actually aiming for.

What Exactly is Performant JS Code?

Now that we know what our target platform is, we can answer the next question: what is performant JS code?

While there's no absolute classification of what defines performant code, we do have a user-centric performance model we can use as a reference: The RAIL model.

Sam Saccone: Planning for Performance: PRPL


Response

If your app responds to a user action in under 100ms, the user perceives the response as immediate. This applies to tappable elements, but not when scrolling or dragging.


Animation

On a 60Hz monitor, we want to target a constant 60 frames per second when animating and scrolling. That results in around 16ms per frame. Out of that 16ms budget, you realistically have 8–10ms to do all the work, the rest taken up by the browser internals and other variances.
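The 16ms budget is just frame-time arithmetic:

```javascript
// At 60 frames per second, each frame gets 1000 ms / 60 of wall time.
const frameBudgetMs = 1000 / 60;
console.log(frameBudgetMs.toFixed(1)); // 16.7
// Browser internals (style, layout, paint, compositing) eat roughly
// half of that, which is where the 8-10ms figure comes from.
```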

Idle

If you have an expensive, continuously running task, make sure to slice it into smaller chunks to allow the main thread to react to user inputs. You shouldn't have a task that delays user input for more than 50ms.
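One common way to slice such a task is to process a small batch, then yield back to the event loop before continuing. A minimal sketch (the function name and batch size are made up for illustration):

```javascript
// Process `items` in small batches, yielding between batches so the
// main thread can respond to user input in the gaps.
function processInChunks(items, handleItem, chunkSize, onDone) {
  let i = 0;
  (function next() {
    const end = Math.min(i + chunkSize, items.length);
    while (i < end) handleItem(items[i++]);
    if (i < items.length) {
      setTimeout(next, 0); // yield to the event loop before continuing
    } else {
      onDone();
    }
  })();
}

// Example: sum 1..100 in four slices of 25 items each.
let total = 0;
processInChunks(
  Array.from({ length: 100 }, (_, n) => n + 1),
  (v) => { total += v; },
  25,
  () => console.log(total) // 5050
);
```

In modern browsers, requestIdleCallback is often a better fit than setTimeout for scheduling this kind of background work.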


Load

You should target a page load in under 1000ms. Anything over, and your users start getting twitchy. This is a pretty difficult goal to reach on mobile devices, as it relates to the page being interactive, not just painted on screen and scrollable. In practice, it's even less:

Fast By Default: Modern Loading Best Practices (Chrome Dev Summit 2017)

In practice, aim for the 5s time-to-interactive mark. It's what Chrome uses in their Lighthouse audit.

Now that we know the metrics, let's have a look at some of the statistics:

  • 53% of visits are abandoned if a mobile site takes more than three seconds to load
  • 1 out of 2 people expect a page to load in less than 2 seconds
  • 77% of mobile sites take longer than 10 seconds to load on 3G networks
  • 19 seconds is the average load time for mobile sites on 3G networks.

And a bit more, courtesy of Addy Osmani:

  • apps became interactive in 8 seconds on desktop (using cable) and 16 seconds on mobile (Moto G4 over 3G)
  • at the median, developers shipped 410KB of gzipped JS for their pages.

Feeling sufficiently frustrated? Good. Let's get to work and fix the web. ✊

Context is Everything

You might have noticed that the main bottleneck is the time it takes to load up your website. Specifically, the JavaScript download, parse, compile and execution time. There's no way around it but to load less JavaScript and load smarter.

But what about the actual work that your code does aside from just booting up the website? There has to be some performance gains there, right?

Before you dive into optimizing your code, consider what you're building. Are you building a framework or a VDOM library? Does your code need to do thousands of operations per second? Are you writing a time-critical library for handling user input and/or animations? If not, you may want to shift your time and energy somewhere more impactful.

It's not that writing performant code doesn't matter, but it usually makes little to no impact in the grand scheme of things, especially when talking about microoptimizations. So, before you get into a Stack Overflow argument about .map vs .forEach vs for loops by comparing microbenchmark results, make sure to see the forest and not just the trees. 50k ops/s might sound 50× better than 1k ops/s on paper, but it won't make a difference in most cases.

Continue reading JavaScript Performance Optimization Tips: An Overview on SitePoint.

Progressive Web Apps: A Crash Course

SitePoint - Fri, 11/24/2017 - 12:00

Progressive Web Apps (PWAs) try to overlap the worlds of mobile web apps and native mobile apps by offering the best features of each to mobile users.

They offer an app-like user experience (splash screens and home screen icons), they're served from HTTPS-secured servers, they can load quickly (thanks to page load performance best practices) even in low quality or slow network conditions, and they have offline support, instant loading and push notifications. The concept of PWAs was first introduced by Google, and is still supported by many Chrome features and great tools, such as Lighthouse, an open-source tool for accessibility, performance and progressiveness auditing which we'll look into a bit later.

Throughout this crash course, we'll build a PWA from scratch with ES6 and React and optimize it step by step with Lighthouse until we achieve the best results in terms of UX and performance.

The term progressive simply means that PWAs are designed in such a way that they can be progressively enhanced in modern browsers, where many new features and technologies are already supported, but should also work fine in old browsers with no cutting-edge features.

Native vs Mobile = Progressive

A native app is distributable and downloadable from the mobile OS's respective app store. Mobile web apps, on the other hand, are accessible from within a web browser by simply entering their address or URL. From the user's point of view, launching a browser and navigating to an address is much more convenient than going to the app store and downloading, installing, then launching the app. From the developer/owner's point of view, paying a one-time fee for getting an app store account and then uploading their apps to become accessible to users worldwide is better than having to deal with the complexities of web hosting.

A native app can be used offline. In the case of remote data that needs to be retrieved from some API server, the app can easily be designed to support some sort of SQLite caching of the latest accessed data.

A mobile web app is indexable by search engines like Google, and through search engine optimization you can reach more users. This is also true for native apps, as the app stores have their own search engines where developers can apply different techniques --- commonly known as App Store Optimization --- to reach more users.

A native app loads instantly, at least with a splash screen, until all resources are ready for the app to execute.

These are the most important perceived differences. Each approach to app distribution has advantages for the end user (regarding user experience, availability etc.) and app owner (regarding costs, reach of customers etc.). Taking that into consideration, Google introduced PWAs to bring the best features of each side into one concept. These aspects are summarized in this list introduced by Alex Russell, a Google Chrome engineer. (Source: Infrequently Noted.)

  • Responsive: to fit any form factor.
  • Connectivity independent: progressively-enhanced with service workers to let them work offline.
  • App-like-interactions: adopt a Shell + Content application model to create appy navigations & interactions.
  • Fresh: transparently always up-to-date thanks to the service worker update process.
  • Safe: served via TLS (a service worker requirement) to prevent snooping.
  • Discoverable: are identifiable as “applications” thanks to W3C Manifests and service worker registration scope allowing search engines to find them.
  • Re-engageable: can access the re-engagement UIs of the OS; e.g. push notifications.
  • Installable: to the home screen through browser-provided prompts, allowing users to “keep” apps they find most useful without the hassle of an app store.
  • Linkable: meaning they’re zero-friction, zero-install, and easy to share. The social power of URLs matters.

Lighthouse is a tool for auditing web apps created by Google. It's integrated with the Chrome Dev Tools and can be triggered from the Audits panel.

You can also use Lighthouse as a NodeJS CLI tool:

npm install -g lighthouse

You can then run it with:

lighthouse <url>
Lighthouse can also be installed as a Chrome extension, but Google recommends using the version integrated with DevTools and only use the extension if you somehow can't use the DevTools.

Please note that you need to have Chrome installed on your system to be able to use Lighthouse, even if you're using the CLI-based version.

Building your First PWA from Scratch

In this section, we'll be creating a progressive web app from scratch. First, we'll create a simple web application using React and Reddit's API. Next, we'll be adding PWA features by following the instructions provided by the Lighthouse report.

Please note that the public no-authentication Reddit API has CORS headers enabled so you can consume it from your client-side app without an intermediary server.

Before we start, this course will assume you have a development environment set up with NodeJS and NPM installed. If you don't, start with the awesome Homestead Improved, which is running the latest versions of each and is ready for development and testing out of the box.

We start by installing Create React App, a project boilerplate created by the React team that saves you from the hassle of webpack configuration.

npm install -g create-react-app
create-react-app react-pwa
cd react-pwa/

The application shell architecture

The application shell is an essential concept of progressive web apps. It's simply the minimal HTML, CSS and JavaScript code responsible for rendering the user interface.

This app shell has many benefits for performance. You can cache the application shell so that the next time users visit your app, it loads instantly, because the browser doesn't need to fetch assets from a remote server.
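As a sketch of the idea, a service worker would precache a small list of shell assets at install time (the file names here are assumptions, not the actual Create React App build output):

```javascript
// The "app shell": the minimal set of assets needed to render the UI.
const SHELL_ASSETS = ['/index.html', '/static/js/main.js', '/static/css/main.css'];

// Inside a real service worker's install handler you would run:
//   caches.open('shell-v1').then((cache) => cache.addAll(SHELL_ASSETS));
// After that, repeat visits can serve these files without the network.
console.log(SHELL_ASSETS.length); // 3
```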

For building a simple UI we'll use Material UI, an implementation of Google Material design in React.

Let's install the package from NPM:

npm install material-ui --save

Next, open src/App.js and add:

import React, { Component } from 'react';
import MuiThemeProvider from 'material-ui/styles/MuiThemeProvider';
import AppBar from 'material-ui/AppBar';
import { Card, CardActions, CardHeader, CardTitle, CardText } from 'material-ui/Card';
import FlatButton from 'material-ui/FlatButton';
import IconButton from 'material-ui/IconButton';
import NavigationClose from 'material-ui/svg-icons/navigation/close';
import logo from './logo.svg';
import './App.css';

// Note: the el.data.* field names below (title, author, selftext, url)
// come from Reddit's listing API; they were restored where the original
// markup was garbled.
class App extends Component {
  constructor(props) {
    super(props);
    this.state = {
      posts: []
    };
  }

  render() {
    return (
      <MuiThemeProvider>
        <div>
          <AppBar
            title={<span>React PWA</span>}
            iconElementLeft={<IconButton><NavigationClose /></IconButton>}
            iconElementRight={
              <FlatButton
                onClick={() => this.fetchNext('reactjs', this.state.lastPostName)}
                label="next"
              />
            }
          />
          {this.state.posts.map(function (el, index) {
            return (
              <Card key={index}>
                <CardHeader
                  title={el.data.title}
                  subtitle={el.data.author}
                  actAsExpander={el.data.is_self === true}
                  showExpandableButton={false}
                />
                <CardText expandable={el.data.is_self === true}>
                  {el.data.selftext}
                </CardText>
                <CardActions>
                  <FlatButton label="View" onClick={() => { window.open(el.data.url); }} />
                </CardActions>
              </Card>
            );
          })}
          <FlatButton
            onClick={() => this.fetchNext('reactjs', this.state.lastPostName)}
            label="next"
          />
        </div>
      </MuiThemeProvider>
    );
  }
}

export default App;

Next, we need to fetch the Reddit posts using two methods, fetchFirst() and fetchNext():

fetchFirst(url) {
  var that = this;
  if (url) {
    // Base URL restored: Reddit's public listing endpoint is
    // https://www.reddit.com/r/<subreddit>.json
    fetch('https://www.reddit.com/r/' + url + '.json')
      .then(function (response) {
        return response.json();
      })
      .then(function (result) {
        that.setState({
          posts: result.data.children,
          lastPostName: result.data.children[result.data.children.length - 1].data.name
        });
        console.log(that.state.posts);
      });
  }
}

fetchNext(url, lastPostName) {
  var that = this;
  if (url) {
    fetch('https://www.reddit.com/r/' + url + '.json' + '?count=' + 25 + '&after=' + lastPostName)
      .then(function (response) {
        return response.json();
      })
      .then(function (result) {
        that.setState({
          posts: result.data.children,
          lastPostName: result.data.children[result.data.children.length - 1].data.name
        });
        console.log(that.state.posts);
      });
  }
}

componentWillMount() {
  this.fetchFirst("reactjs");
}
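The setState() calls above read fields from Reddit's listing JSON. A minimal, made-up response illustrating the shape they rely on:

```javascript
// Minimal made-up Reddit listing: posts live in data.children, and each
// post's "name" (its fullname, e.g. t3_...) is used as the ?after= cursor
// for fetching the next page.
const listing = {
  data: {
    children: [
      { data: { name: 't3_abc123', title: 'Hello /r/reactjs', author: 'someuser' } }
    ]
  }
};

const posts = listing.data.children;
const lastPostName = posts[posts.length - 1].data.name;
console.log(lastPostName); // t3_abc123
```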

You can find the source code in this GitHub Repository.

Before you can run audits against your app, you'll need to make a production build and serve it using a local server:

npm run build

This command invokes the build script in package.json and produces a build in the react-pwa/build folder.

Now you can use any local server to serve your app. On Homestead Improved you can simply point the nginx virtual host to the build folder and open the site in the browser, or you can use the serve package via NodeJS:

npm install -g serve
cd build
serve

With serve, your app will be served locally from http://localhost:5000/.

You can audit your app without any problems, but in case you want to test it on a mobile device you can also use services like Surge to deploy it with one command!

npm install --global surge

Next, run surge from within any directory to publish that directory onto the web.

You can find the hosted version of this app online.

Now let's open Chrome DevTools, go to the Audits panel and click on Perform an audit.

From the report we can see we already have a score of 45/100 for Progressive Web App and 68/100 for Performance.

Under Progressive Web App we have 6 failed audits and 5 passed audits. That's because the generated project already has some PWA features added by default, such as a web manifest, a viewport meta tag and a <noscript> tag.

Under Performance we have diagnostics and different calculated metrics, such as First meaningful paint, First Interactive, Consistently Interactive, Perceptual Speed Index and Estimated Input Latency. We'll look into these later on.

Lighthouse suggests improving page load performance by reducing the length of critical request chains, either by reducing the download size or deferring the download of unnecessary resources.

Please note that the Performance score and metrics values can change between different auditing sessions on the same machine, because they're affected by many varying conditions such as your current network state and also your current machine state.

Continue reading Progressive Web Apps: A Crash Course on SitePoint.

Black Friday: 50% off the best library in web development and design!

SitePoint - Fri, 11/24/2017 - 11:30

Here at SitePoint it's our mission to keep you informed, so you can stay on the cutting edge of web development and design to produce some pretty awesome things!

That's why today, we're giving you two years of SitePoint Premium for the price of one! To put it simply, that's $20,000 worth of SitePoint web development and design books and courses for just $99.

If you don't know already, SitePoint Premium guides you through topics like HTML, CSS, JavaScript, Angular, Node, React, PHP, Responsive Web Design, UX, Project Management and much more so you can get ahead of the game, and learn how to build better, faster, and more responsive websites and apps.

Here's what you get:
  • Download ALL (100+) SitePoint ebooks and keep them forever
  • Download ALL (115+) SitePoint courses
  • Easy to follow paths, exclusive member discounts, an ad-free SitePoint, new content monthly, shareable certificates
  • Access to every book and course we release in 2018 and 2019.
Did we mention priority access to everything we release in 2018 and 2019?

Here's a taster of what we have planned for the next few months:

  • Creating a REST API with Node.js
  • Node.js security, authentication and deployment
  • Getting started with React
  • Beginner Sass
  • Version Control with Git
  • HTML5 Games: Novice to Ninja
  • Beginner to advanced HTML and CSS
  • iOS and Android app development

And that's just the next few months! You'll have a whole two years of web tech books, courses and tutorials to look forward to.

Grab this deal now, because the offer ends at midnight on Black Friday.


Continue reading Black Friday: 50% off the best library in web development and design! on SitePoint.

24 Productivity Tools to Help You with Almost Everything

SitePoint - Fri, 11/24/2017 - 10:30

This article was sponsored by Mekanism. Thank you for supporting the partners who make SitePoint possible.

With such high competition in the market, how do you make sure your project, product or idea cuts through? And in a timely manner? The difference is made by talent, by learning new technologies and, last but not least, by the tools used. There are a heap of productivity tools out there helping web designers and developers to design logos, build phone apps without the need for coding skills, or even build a database quickly and efficiently.

In this showcase, we will take you through 24 of our favourite tools to help you overcome almost anything. We hope that you will find these helpful. Let us know your thoughts, or if you have some other recommendations to add to the list.

1. Tailor Brands - Automated Logo Maker and Brand Builder 

Tailor Brands has quickly grown to be one of the best tools on the market to help you design logos and build your brand. After offering up a great logo maker, the company now offers a full branding suite that makes marketing and creating unique company visuals a breeze.

The company’s logo maker uses AI and pairs it with an expansive template library to create unique and effective designs in a matter of minutes. You can then fine-tune aspects of your logo until it's perfect.

Now that you have your logo, you can move on to making your mark with a powerful online brand presence. Tailor Brands is easy to use and gives you a fully automated weekly social planner that sets a schedule and auto-generates posts and ads to upload on a regular basis. You can also add more posts manually if you'd like.

If you want to launch a holiday campaign, you can take advantage of Tailor Brands’ seasonal logos, which let you give your branding a quick boost. For your business needs, you can create business cards and decks to go along with letterheads, and even presentation templates. You can also download your logo in EPS format to print it on shirts, bags, and other gear.

With a basic monthly subscription of $2.99 or a full package for $10.99, Tailor Brands is a quick and easy way to give your brand a professional touch.

Pricing: Monthly subscription from $2.99.

2. iGenApps - Build Your Phone App Without Any Coding Skills

iGenApps is a powerful and affordable app builder that allows people without programming skills to build and publish a fully customized app. According to the company, it has been downloaded 2 million times and has more than 1.5 million registered users building their apps with it already. And in April 2017, it was named the Best Productivity App.

You start the app creation with a wizard that guides you step by step through building apps for mobile phones and tablets. You can build and publish your apps in minutes, all from your mobile device.

iGenApps has a free trial you can start playing with that will show you how it all works and what it can do.

Pricing: Check their website for detailed pricing.

3. Kohezion - Online Database Software

Kohezion is an advanced online database tool that allows you to quickly create your own customized web-based database without any coding or programming. The database can be used for:

  • Clients – Easily manage your clients, leads, organization and more. You can create reminders so you never lose an opportunity, keep a record of all communications, and attach files to clients' records
  • Manage contracts – Collaborate and share ideas about contracts, and never miss an expiration date with reminders. Easily track all required information, attach files, apply calculations, or email contracts directly from Kohezion
  • Manage tasks
  • Schedules

It is also highly customizable. You can create custom quotes and invoices for your clients, custom reports with your own look and feel, integrate with Dropbox™, Google Drive™ or Box™, create complex calculation fields, or even embeddable online forms.

There are three plan types: standard, non-profit organization, and enterprise. Pricing starts at $50/user/year. They also have a Free Forever plan, which covers basic needs.

Pricing: Starts from $50/user/year for full access, or there's also a Free Forever plan.

4. Visme

When it comes to visual communication, Visme is a game changer. Imagine taking the key features of PowerPoint and Photoshop and marrying them together in an easy-to-use tool. It allows people without design experience to tap into hundreds of professional templates and an extensive library of assets to create engaging and memorable presentations, infographics, charts and reports, ebooks, websites and social graphics. You can even make your content interactive with the ability to insert videos and audio, or embed external content such as forms, polls, and maps.

You can publish your content online, embed to your website, make it private and password protected or download it as an Image, PDF or even HTML5, so you can present offline.

Pricing: Paid plans start at $10/month and unlock premium features including custom branding, uploading your own fonts, and access to all premium templates and images. There's also a free plan.

5. Ultra Theme

Ultra Theme is a powerful and flexible WordPress theme created by the well-reputed Themify. It's easy to use and makes creating beautiful, responsive sites a breeze, giving you full control of your theme design from header to footer.

Pricing: The standard license is $49.


6. Codester

Codester is a fast-growing platform where web designers and developers buy and sell premium PHP scripts, app templates, themes and plugins to create amazing websites and apps. They even run "flash sales" where products are available for a limited period at a 50% discount.

Pricing: It's a marketplace, so will vary on products.

7. GrapeCity - JavaScript Solutions

GrapeCity's JavaScript solutions provide all you'll need for a full web app: dependency-free, fast, flexible, true JavaScript components that enable you to build basic websites, full enterprise apps, and Excel-like spreadsheet web apps. And if you need it, expert support is available by forum, direct ticket or phone.

There are two products, SpreadJS and Wijmo, and both offer free trials.

Pricing: SpreadJS is $999/developer and Wijmo is $895/developer.


8. ThemeFuse

ThemeFuse is one of the most impressive WordPress theme developers in the market. They cover most domains, such as automotive, blogging, e-commerce, events and portfolios. Their themes are professional-looking and include everything you need to get started, and installation and setup takes only a couple of minutes, so you can get going with your next big idea.

Pricing: They offer a free plan, but the premium ones start from $45, paid once.
You can also use this code BLKFRY2017 and get a 70% discount.

9. Blaskan

Blaskan is a responsive and professional WordPress theme that's built for many kinds of screens. It was built by Colorlib, a new WordPress developer that is quickly becoming one of the best in the market. The theme is free to download and use, so give it a try.

Pricing: Free


10. VectorStock

VectorStock is the world’s premier vector-only image marketplace, with more than 10,000 vectors added daily. It’s a favorite among web designers because there's a heap to be found here and the pricing is budget friendly.

Pricing: Free plan with access to 41,000 free vectors and millions of royalty free images.

11. wpKube

wpKube specializes in WordPress themes, hosting, plugins and everything related to this platform. They've even made a complete guide about how you can start a WP website from scratch, which tools, themes and hosting to choose from, so you can have an awesome website.

They're also investing lots of money in developing new themes. All of their templates are fully responsive, easy to install, setup and customize. At this point, there are 5 themes and the pricing is between $49-$59/theme.

Pricing: Varies depending on service you are after


12. Host-Tracker

Host-Tracker is one of the best website monitoring systems on the market. It provides instant notifications about failures, domain and certificate expiration monitoring, content checks and many other cool things. They recently launched a new feature which automatically pauses your AdWords campaigns if any problems with the site are detected, then reactivates them once resolved.

Pricing: $5/month with a 50% discount for new customers

Continue reading %24 Productivity Tools to Help You with Almost Everything%

How to Ship & Validate New Projects Fast

SitePoint - Thu, 11/23/2017 - 11:30

This article was sponsored by MOJO Marketplace. Thank you for supporting the partners who make SitePoint possible.

Launching a new project can be costly, both in terms of time and money. After all, if it’s worth doing, it’s worth doing well. But is that always the best approach? There can be real value in shipping a new project fast.

The risk of a new endeavour isn’t getting the technical stuff right. It’s your customers—or lack of them. They’re looking for effective solutions to their problems; does your product provide one? Will they pay for it? How much? Do they have any feedback?

You need to validate your idea before you start building it—that way you can make sure you’re providing something your customers truly need. So launch quickly. Rather than investing a lot of time and money up front, get something out the door, gauge interest early, and validate your idea.

How Non-Technical Founders Can Ship Projects

Your customers don’t need to see a finished product. They need to know you’re passionate, and they need to have confidence you’ll finish what you start. So start with a prototype—a minimum viable product (MVP)—that will give them a taste of what you have in mind.

Creating an MVP doesn’t necessarily require technical skill or knowledge. In fact, even if you’re a coder it's best to keep things lean and mean at the beginning. Instead, rapidly launch your product using existing web technologies. You can jerry-rig a lot of functionality by linking web services together with Zapier or IFTTT. We’re talking minutes, hours or days, not weeks and months.

Ramli John tells the stories of some non-technical founders who successfully launched startups without writing a line of code. The first is Ryan Hoover, the co-founder of Product Hunt, a site that curates interesting new products. By using an existing link-sharing tool (the now defunct Linky Dink), he was able to launch a prototype in just 20 minutes from a coffee shop, then organized 30 quality contributors to share links. Within two weeks the fledgling project had over 170 subscribers.

Then there’s Rolling Tree, an online community for skateboard enthusiasts. The three founders were unable to find a suitable developer in three months, so they turned to Facebook instead. They set up a group in less than a day, and attracted 500 keen members in less than five days. The community met weekly on Google Hangouts, where they started to design skateboards together.

A lot of new entrepreneurs think that to launch a startup, you need to build a product, and to do that you need someone that can write code. At that early stage, however, the biggest risk for most startups is not the technology, but the market. Will someone out there actually pay for it? (Ramli John)

Developing an MVP is a great approach to launching products, and it’s not just for non-technical entrepreneurs. Even developers can use these techniques to test ideas for market response before building out apps.

The founders I mentioned created their prototypes using popular web services. WordPress is another approach—it’s versatile, easy and inexpensive. The right WordPress theme will get you started in minutes, then you can add functionality with plugins and embedded web apps. And because WordPress is so popular there’s plenty of help available for customization.

One place to get that support is MOJO Marketplace. They sell WordPress themes and plugins, offer help and personalized coaching with WP Live support, and provide a wide range of professional services for those jobs you don’t have the time or inclination to do yourself.

Let’s have a look at how MOJO can help you launch your project.

Get Started with a Landing Page & Email List

Get started right away by putting together a landing page. That way you can let the world know about your upcoming project from Day 0, all the while collecting email addresses for your mailing list.

Failing to build your list from the start is one of the most common missed opportunities. It will enable you to gauge interest now, and communicate your passion down the track.

“Your passion and belief to build your idea is central to everything that you are doing. Learn from the data and move on.” (Nitesh Agrawal)

For most online businesses, a good email list is one of the most important assets for converting customers.

Get started quickly with a landing page theme with an embedded contact widget. Here are some great choices from MOJO Marketplace.

Advanced Coming Soon - Landing Page PHP Script

This PHP script lets you add a beautiful “Under Construction” page to your website in just five minutes. Its Twitter, subscribe and contact forms will help you grow your follower base.

Smooth - Animated Coming Soon Template

Smooth lets you build anticipation and trust even before you launch your website. The plugin’s Mailchimp integration and contact form will help you build your contact base.

Level Up - Responsive Coming Soon Template

Level Up is a responsive coming soon template with a countdown timer. A simple form allows you to collect subscribers and gauge interest.

Continue reading %How to Ship & Validate New Projects Fast%

Optimizing CSS: Tweaking Animation Performance with DevTools

SitePoint - Thu, 11/23/2017 - 07:00

This article is part of a series created in partnership with SiteGround. Thank you for supporting the partners who make SitePoint possible.

CSS animations are known to be super performant. Although this is the case for simple animations on a few elements, add more complexity, and if you didn't code your animations with performance in mind, website users will soon take notice and possibly get annoyed.

In this article, I introduce some useful browser dev tools' features that will enable you to check what happens under the hood when animating with CSS. This way, when an animation looks a bit choppy, you'll have a better idea why and what you can do to fix it.

Developer Tools for CSS Performance

Your animations need to hit 60 fps (frames per second) to run fluidly in the browser — the lower the rate the worse your animation will look. This means the browser has no more than about 16 milliseconds to do its job for one frame. But what does it do during that time? And how would you know if your browser is keeping up with the desired framerate?
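The per-frame budget follows directly from the target frame rate, and it's worth keeping the arithmetic in mind; a quick sketch:

```javascript
// Frame budget: at 60 fps the browser gets 1000 ms / 60 ≈ 16.7 ms per frame
// to run JavaScript, recalculate styles, lay out, paint and composite.
function frameBudgetMs(fps) {
  return 1000 / fps;
}

console.log(frameBudgetMs(60).toFixed(1)); // 16.7
console.log(frameBudgetMs(30).toFixed(1)); // 33.3 — a visibly less smooth animation
```

Any frame whose work exceeds that budget gets dropped, which is what you perceive as jank.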

I think nothing beats user experience when it comes to assessing the quality of an animation. However, developer tools in modern browsers, while not always 100% reliable, have been getting smarter and smarter, and there's quite a bit you can do to review, edit and debug your code using them.

This is also true when you need to check framerate and CSS animation performance. Here's how it works.

Exploring the Performance Tool in Firefox

In this article I use the Firefox Performance tool; the other big contender is the Chrome Performance tool. You can pick your favorite, as both browsers offer powerful performance features.

To open the developer tools in Firefox, choose one of these options:

  • Right-click on your web page and choose Inspect Element in the context menu
  • If you use the keyboard, press Ctrl + Shift + I on Windows and Linux or Cmd + Opt + I on OS X.

Next, click on the Performance tab. Here, you'll find the button that lets you start a recording of your website's performance:

Press that button and wait for a few seconds or perform some action on the page. When you're done, click the Stop Recording Performance button:

In a split second Firefox presents you with tons of well-organized data that will help you make sense of which issues your code is suffering from.

The result of a recording inside the Performance panel looks something like this:

The Waterfall section is perfect for checking issues related to CSS transitions and keyframe animations. Other sections are the Call Tree and the JS Flame Chart, which you can use to find out about bottlenecks in your JavaScript code.

The Waterfall has a summary section at the top and a detailed breakdown. In both the data is color-coded:

  • Yellow bars refer to JavaScript operations
  • Purple bars refer to calculating HTML elements’ CSS styles (recalculate styles) and laying out your page (layout). Layout operations are quite expensive for the browser to perform, so if you animate properties that involve repeated layouts (also known as reflows), e.g., margin, padding, top, left, etc., the results could be janky
  • Green bars refer to painting your elements into one or more bitmaps (Paint). Animating properties like color, background-color, box-shadow, etc., involves costly paint operations, which could be the cause of sluggish animations and poor user experience.
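One way to sidestep those layout and paint costs is to animate only properties the browser can typically handle on the compositor: transform and opacity. Here's a minimal sketch of that idea using the Web Animations API (the element selector and distances are made up for illustration):

```javascript
// Build keyframes that animate only transform and opacity, which usually
// skip layout (reflow) and paint — unlike left, top, margin or box-shadow.
function slideInKeyframes(distancePx) {
  return [
    { transform: `translateX(-${distancePx}px)`, opacity: 0 },
    { transform: 'translateX(0)', opacity: 1 },
  ];
}

// In the browser, you'd hand these keyframes to the Web Animations API:
// document.querySelector('.card').animate(slideInKeyframes(100), {
//   duration: 300,
//   easing: 'ease-out',
// });
```

Recording the same animation in the Performance tool should then show far fewer purple (layout) and green (paint) bars in the Waterfall.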

You can also filter the type of data you want to inspect. For instance, I'm interested only in CSS-related data, therefore I can deselect everything else by clicking on the filter icon at the top left of the screen:

The big green bar below the Waterfall summary represents information on the framerate.

A healthy representation would look quite high, but most importantly, consistent, that is, without too many deep gaps.

Let's illustrate this with an example.

The Performance Tool In Action

Continue reading %Optimizing CSS: Tweaking Animation Performance with DevTools%

Case Study: Optimizing CommonMark Markdown Parser with Blackfire

SitePoint - Thu, 11/23/2017 - 01:12

As you may know, I am the author and maintainer of the PHP League's CommonMark Markdown parser. This project has three primary goals:

  1. fully support the entire CommonMark spec
  2. match the behavior of the JS reference implementation
  3. be well-written and super-extensible so that others can add their own functionality.

This last goal is perhaps the most challenging, especially from a performance perspective. Other popular Markdown parsers are built using single classes with massive regex functions. As you can see from this benchmark, it makes them lightning fast:

Library                    Avg. Parse Time   File/Class Count
Parsedown 1.6.0            2 ms              1
PHP Markdown 1.5.0         4 ms              4
PHP Markdown Extra 1.5.0   7 ms              6
CommonMark 0.12.0          46 ms             117

Unfortunately, because of the tightly-coupled design and overall architecture, it's difficult (if not impossible) to extend these parsers with custom logic.

For the League's CommonMark parser, we chose to prioritize extensibility over performance. This led to a decoupled object-oriented design which users can easily customize. This has enabled others to build their own integrations, extensions, and other custom projects.

The library's performance is still decent --- the end user probably can't differentiate between 42ms and 2ms (you should be caching your rendered Markdown anyway). Nevertheless, we still wanted to optimize our parser as much as possible without compromising our primary goals. This blog post explains how we used Blackfire to do just that.

Profiling with Blackfire

Blackfire is a fantastic tool from the folks at SensioLabs. You simply attach it to any web or CLI request and get this awesome, easy-to-digest performance trace of your application's request. In this post, we'll be examining how Blackfire was used to identify and optimize two performance issues found in version 0.6.1 of the league/commonmark library.

Let's start by profiling the time it takes league/commonmark to parse the contents of the CommonMark spec document:

Later on we'll compare this benchmark to our changes in order to measure the performance improvements.

Quick side-note: Blackfire adds overhead while profiling things, so the execution times will always be much higher than usual. Focus on the relative percentage changes instead of the absolute "wall clock" times.

Optimization 1

Looking at our initial benchmark, you can easily see that inline parsing with InlineParserEngine::parse() accounts for a whopping 43.75% of the execution time. Clicking this method reveals more information about why this happens:

Here we see that InlineParserEngine::parse() is calling Cursor::getCharacter() 79,194 times --- once for every single character in the Markdown text. Here's a partial (slightly-modified) excerpt of this method from 0.6.1:

public function parse(ContextInterface $context, Cursor $cursor)
{
    // Iterate through every single character in the current line
    while (($character = $cursor->getCharacter()) !== null) {
        // Check to see whether this character is a special Markdown character
        // If so, let it try to parse this part of the string
        foreach ($matchingParsers as $parser) {
            if ($res = $parser->parse($context, $inlineParserContext)) {
                continue 2;
            }
        }

        // If no parser could handle this character, then it must be a plain text character
        // Add this character to the current line of text
        $lastInline->append($character);
    }
}

Blackfire tells us that parse() is spending over 17% of its time checking every. single. character. one. at. a. time. But most of these 79,194 characters are plain text which don't need special handling! Let's optimize this.

Instead of adding a single character at the end of our loop, let's use a regex to capture as many non-special characters as we can:

public function parse(ContextInterface $context, Cursor $cursor)
{
    // Iterate through every single character in the current line
    while (($character = $cursor->getCharacter()) !== null) {
        // Check to see whether this character is a special Markdown character
        // If so, let it try to parse this part of the string
        foreach ($matchingParsers as $parser) {
            if ($res = $parser->parse($context, $inlineParserContext)) {
                continue 2;
            }
        }

        // If no parser could handle this character, then it must be a plain text character
        // NEW: Attempt to match multiple non-special characters at once.
        // We use a dynamically-created regex which matches text from
        // the current position until it hits a special character.
        $text = $cursor->match($this->environment->getInlineParserCharacterRegex());

        // Add the matching text to the current line of text
        $lastInline->append($text);
    }
}

Once this change was made, I re-profiled the library using Blackfire:

Okay, things are looking a little better. But let's actually compare the two benchmarks using Blackfire's comparison tool to get a clearer picture of what changed:

This single change resulted in 48,118 fewer calls to that Cursor::getCharacter() method and an 11% overall performance boost! This is certainly helpful, but we can optimize inline parsing even further.
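The effect of that change is easy to reproduce outside the library. The sketch below (plain JavaScript rather than the library's PHP, with a made-up sample string and a simplified set of "special" characters) counts how many loop iterations each strategy needs on mostly-plain text:

```javascript
// Strategy A: advance one character at a time, like Cursor::getCharacter().
function stepsCharByChar(text) {
  return text.length; // one iteration (and one method call) per character
}

// Strategy B: at each position, greedily consume a run of plain text with a
// single regex match (like Cursor::match()) before checking parsers again.
// "Special" characters here are a simplified subset: * _ ` [ ] \
function stepsWithBatching(text) {
  const plainRun = /[^*_`\[\]\\]+/y; // sticky: must match exactly at lastIndex
  let steps = 0;
  let i = 0;
  while (i < text.length) {
    steps++;
    plainRun.lastIndex = i;
    const m = plainRun.exec(text);
    i += m ? m[0].length : 1; // special character: advance one position
  }
  return steps;
}

const sample = 'Some *emphasised* text with `code` and lots of plain words.';
console.log(stepsCharByChar(sample));   // 59 — one step per character
console.log(stepsWithBatching(sample)); // 9 — plain runs consumed in one go
```

The batching version only pauses at the handful of genuinely special characters, which is exactly why the call count to Cursor::getCharacter() dropped so dramatically.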

Continue reading %Case Study: Optimizing CommonMark Markdown Parser with Blackfire%

Using Preact as a React Alternative

SitePoint - Fri, 09/29/2017 - 13:00

Preact is an implementation of the virtual DOM component paradigm just like React and many other similar libraries. Unlike React, it's only 3KB in size, and it also outperforms it in terms of speed. It's created by Jason Miller and available under the well-known permissive and open-source MIT license.

Why Use Preact?

Preact is a lightweight version of React. You may prefer to use Preact as a lightweight alternative if you like building views with React but performance, speed and size are a priority for you --- for example, in case of mobile web apps or progressive web apps.

Whether you're starting a new project or developing an existing one, Preact can save you a lot of time. You don't need to reinvent the wheel trying to learn a new library, since it's similar to, and compatible with, React --- to the point that you can use existing React packages with it with only some aliasing, thanks to the compatibility layer preact-compat.

Pros and Cons

There are many differences between React and Preact that we can summarize in three points:

  • Features and API: Preact includes only a subset of the React API, and not all available features in React.
  • Size: Preact is much smaller than React.
  • Performance: Preact is faster than React.

Every library out there has its own set of pros and cons, and only your priorities can help you decide which library is a good fit for your next project. In this section, I'll try to list the pros and cons of the two libraries.

Preact Pros
  • Preact is lightweight, smaller (only 3KB in size when gzipped) and faster than React (see these tests). You can also run performance tests in your browser via this link.
  • Preact is largely compatible with React, and has the same ES6 API as React, which makes it dead easy either to adopt Preact as a new library for building user interfaces in your project or to swap React with Preact for an existing project for performance reasons.
  • It has good documentation and examples available from the official website.
  • It has a powerful and official CLI for quickly creating new Preact projects, without the hassle of Webpack and Babel configuration.
  • Many features are inspired by all the work already done on React.
  • It has also its own set of advanced features independent from React, like Linked State.
React Pros
  • React supports one-way data binding.
  • It's backed by a large company, Facebook.
  • Good documentation, examples, and tutorials on the official website and the web.
  • Large community.
  • Used on Facebook's website, which has millions of visitors worldwide.
  • Has its own official developer debugging tools extension for Chrome.
  • It has the Create React App project boilerplate for quickly creating projects with zero configuration.
  • It has a well-architectured and complex codebase.
React Cons
  • React has a relatively large size in comparison with Preact or other existing similar libraries. (React minified source file is around 136KB in size, or about 42KB when minified and gzipped.)
  • It's slower than Preact.
  • As a result of its complex codebase, it's harder for novice developers to contribute.

Note: Another con I listed while writing this article was that React had a grant patent clause paired with the BSD license, making it legally unsuitable for some use cases. However, in September 2017, the React license switched to MIT, which resolved these license concerns.

Preact Cons
  • Preact supports only stateless functional components and ES6 class-based component definition, so there's no createClass.
  • No support for context.
  • No support for React propTypes.
  • Smaller community than React.
Getting Started with Preact CLI

Preact CLI is a command line tool created by Preact's author, Jason Miller. It makes it very easy to create a new Preact project without getting bogged down with configuration complexities, so let's start by installing it.

Open your terminal (Linux or macOS) or command prompt (Windows), then run the following commands:

npm i -g preact-cli@latest

This will install the latest version of Preact CLI, assuming you have Node and NPM installed on your local development machine.

You can now create your project with this:

preact create my-app

Or with this, if you want to create your app interactively:

preact init

Next, navigate inside your app's root folder and run this:

npm start

This will start a live-reload development server.

Finally, when you finish developing your app, you can build a production release using this:

npm run build

Continue reading %Using Preact as a React Alternative%

Extracting Website Data and Creating APIs with WrapAPI

SitePoint - Fri, 09/29/2017 - 11:00

Today, almost all services we use have some sort of API. Some web applications are even built from API points alone, being passed to some kind of front-end view. If you're a consumer of a service that provides an API, you'll sometimes need more features or find limits to what the API can offer. In this article, we'll cover a service that's useful both for API consumers and creators.

I always go with the saying that, if there's a web interface, you can build your own API over it. WrapAPI tries to make this process easier. If you're familiar with the process of web scraping/crawling (or extracting data from websites), you'll see the magic of WrapAPI.

WrapAPI offers a service that allows you to easily extract information from websites and create APIs from the data. It provides an easy, interactive way of selecting what information you want to get. With just a few clicks, you can have your API online.

To follow along with this tutorial, I recommend you head over to and create an account.

How To Get Around WrapAPI

On the WrapAPI site, you'll see that you can start to build your project right away --- although, unless you create an account, your work won't be saved.

Once you've signed up, click the Try building an API button.

You'll be presented with a browser-like interface. At the top of the site we're presented with a URL bar. As an example, WrapAPI uses Hacker News. If you click the URL to change it to something else, you'll see more options related to the request you want to make. We'll use the default options, and only change the URL to SitePoint's JavaScript channel. We're covering only the GET method, as we only want to get data in this example.

Below the URL bar there are four buttons that give you different information regarding the site you're viewing. Browser view displays the site as you would visit it from your browser. Code view displays the source code of the site. Headers shows the response you get from the server. This is useful if you want to see what response you get from the server: it gives you information like the HTTP status codes (200, 404, 400 etc.), content types, web servers and so on. You can also view the request's Cookies directly from the builder.

Getting the Data

By now you should be able to see SitePoint inside the Browser View frame.

Let's create a very simple API that shows us the latest post titles of the JavaScript channel. If you hover over the titles, images or any other element in the site, you'll notice a selection color covering it. Let's scroll down a bit, to the LATEST articles part. Hover over the title from one of the articles and click on that title. You'll notice that it doesn't switch to that particular link we clicked. We see that every title in this section is highlighted. WrapAPI guessed that these are all the titles we want. Sometimes it can also select parts of the sites we don't want. That's usually the case when the CSS class selectors are not well-defined or used by other elements in the site.

Besides CSS selectors, WrapAPI supports regular expressions, JSON selectors, headers, cookies, form outputs, and a bunch more options. You can use them all together and extract exactly what you're aiming for. In this example, we'll only use CSS selectors.
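Conceptually, a CSS-selector output does what you'd do yourself when scraping: fetch the page, then pull the text out of matching elements. Here's a rough plain-JavaScript equivalent (the HTML and class names are invented for illustration, and a real scraper would use a proper DOM parser rather than a regex — WrapAPI handles all of this for you):

```javascript
// Toy "extract text by CSS class" helper: finds elements whose class
// attribute contains className and returns their text content.
function extractByClass(html, className) {
  const re = new RegExp(
    `class="[^"]*\\b${className}\\b[^"]*"[^>]*>([^<]+)<`, 'g');
  const results = [];
  let m;
  while ((m = re.exec(html)) !== null) results.push(m[1].trim());
  return results;
}

const html = `
  <a class="article-title" href="/a">Optimizing CSS Animations</a>
  <a class="article-title" href="/b">Using Preact as a React Alternative</a>`;

console.log(extractByClass(html, 'article-title'));
// [ 'Optimizing CSS Animations', 'Using Preact as a React Alternative' ]
```

WrapAPI's point-and-click builder generates the equivalent of that selector for you and serves the results as JSON.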

In the right part of the interface, you'll see three tabs. Let's take a look at the current Build tab. Outputs will show us the selectors (in our case CSS selectors), and you'll get more details on what you would like to select. We're interested only in extracting the title, which is text. There are more options on cleaning the result output, but we won't get into these details. If you'd like to create another selector, to select description, author, date, etc., just click the Create a new collection/output. Naming your selectors is also important, as this will make it easier if you use multiple selectors in the site. By clicking the pencil icon, you can edit your selectors.

The Preview tab will show a representation of our data in JSON, and you probably get the idea of what the API will look like. If you're happy with the results, you can click the Save button to save a version of the API.

You'll need to enter the repository and the endpoint name of the API. It helps you manage and organize your APIs. That will also be part of your API's name in the end. After entering the information, you'll return to the builder. Our API is saved, but now we need to test and publish it.


  • If the site has pagination (previous/next pages), you can use the query string options. (More on that here.)
  • Name your selectors correctly, as they'll be part of the JSON output.

Continue reading %Extracting Website Data and Creating APIs with WrapAPI%

Conditionally Applying a CSS Class in Vue.js

SitePoint - Thu, 09/28/2017 - 11:00

There are times you need to change an element's CSS classes at runtime. But when changing classes, it's sometimes best to apply style details conditionally. For example, imagine your view has a pager. Pagers are often used to navigate larger sets of items. When navigating, it can be helpful to show the user the page they're currently on. The style of the item is conditionally set, based on the current page that's being viewed.

A pager in this case may look something like this:

In this example, there are five pages. Only one of these pages is selected at a time. If you built this pager with Bootstrap, the selected page would have a CSS class named active applied. You'd want this class applied only if the page was the currently viewed page. In other words, you'd want to conditionally apply the active CSS class. As discussed in my Vue.js tutorial, Vue provides a way to conditionally apply a CSS class to an element. I'm going to show you this technique in this article.

To conditionally apply a CSS class at runtime, you can bind to a JavaScript object. To successfully complete this task, you must complete two steps. First, you must ensure that your CSS class is defined. Then, you create the class bindings in your template. I'm going to explain each of these steps in detail in the rest of this article.

Step 1: Define Your CSS Classes

Imagine, for a moment, that the five page items shown in the image above were defined using the following HTML:

<div id="myApp">
  <nav aria-label="Page navigation example">
    <ul class="pagination">
      <li class="page-item"><a class="page-link" href="#">1</a></li>
      <li class="page-item"><a class="page-link" href="#">2</a></li>
      <li class="page-item active"><a class="page-link" href="#">3</a></li>
      <li class="page-item"><a class="page-link" href="#">4</a></li>
      <li class="page-item"><a class="page-link" href="#">5</a></li>
    </ul>
  </nav>
</div>

Notice that each page in this code snippet has a list item element (<li …). That element references the page-item CSS class. In the code for this article, this class is defined in the Bootstrap CSS framework. However, if it weren't defined there, it would be your responsibility to ensure that it was defined somewhere. The second CSS class is the one that's most relevant to this article, though.

The active CSS class is used to identify the currently selected page. For this article, this CSS class is also defined in the Bootstrap CSS. As shown in the snippet above, the active class is only used in the third list item element. As you can probably guess, this is the CSS class that you want to apply conditionally. To do that, you need to add a JavaScript object.
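To make the shape of that object concrete before touching the template, here's a plain-JavaScript sketch of what such a class binding evaluates to. The `pageClasses` helper and the `currentPage` value are hypothetical, for illustration only:

```javascript
// Sketch: the kind of object a Vue :class binding resolves to.
// Keys are CSS class names; truthy values mean "apply this class".
function pageClasses(page, currentPage) {
  return {
    'page-item': true,            // always applied
    active: page === currentPage  // applied only for the selected page
  };
}

console.log(pageClasses(3, 3)); // { 'page-item': true, active: true }
console.log(pageClasses(1, 3)); // { 'page-item': true, active: false }
```

In a template this object would be bound with something like `:class="pageClasses(page, currentPage)"`, and Vue would toggle the classes as the values change.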

Continue reading %Conditionally Applying a CSS Class in Vue.js%

120+ Places To Find Creative Commons Media

SitePoint - Thu, 09/28/2017 - 10:00

The number of images, audio files, movies and other files available under a Creative Commons license is enormous, as Sean demonstrates in this post. Check out his list of over 30 useful sites for sourcing Creative Commons media.

Continue reading %120+ Places To Find Creative Commons Media%

How to Design Highly Memorable Experiences, and Why

SitePoint - Wed, 09/27/2017 - 11:00

According to Gartner, 89% of marketers expected customer experience to be their primary differentiator by 2017. In order to create terrific customer experiences that set our apps and websites apart, we need to learn a bit more about how our brains work, and how we can create experiences that are memorable.

Fact: human brains are lazy. We love a shortcut.

Let's take a look at how that impacts on the way we design user experiences, and how we can design for lazy brains.

The Peak–End Rule

Nobel Prize winner Daniel Kahneman suggested that modern-day humans employ a psychological heuristic (basically, a mental shortcut) called the peak–end rule, which states:

People judge an experience largely based on how they felt at its peak (i.e., its most intense point) and at its end, rather than based on the total sum or average of every moment of the experience. The effect occurs regardless of whether the experience is pleasant or unpleasant.

Let's think about that for a second. It's a big deal.

When we remember experiences, we tend to recall only snapshots of the key events that happened. This means that we might easily recall a singular negative event (like a rude customer service representative) and forget the better but smaller aspects of the experience (like a well-designed website). Or, vice versa, we might dislike an experience overall (bad website UX), but what we'll remember later is the terrific customer service received.

The Peak–End Rule: an Everyday Example

An everyday example of this is movies. Have you ever watched a brilliant movie, only for it to be spoiled by a disappointing ending? Two hours of spellbinding suspense can be rendered useless with a bad ending, much like an exciting online shopping experience can be ruined by a confusing/frustrating checkout.

Even if the middle of the experience was faultless, that's not the aspect of the experience that users will remember.

Boost Peak Moments with Friction

So we know that our brains like shortcuts. We know they remember the end and the most intense moments of an experience more than any other moment. In addition to that, we should also remember that our memories are faulty; they aren't always correct. People won't always remember what you said to them, but they'll remember how you made them feel.

So, with that in mind, we can then make changes to the experience to ensure that users forget negative moments, and remember positive ones. Some menial tasks, such as filling out a form, users won't want to remember. By simplifying the experience and removing friction, users can breeze through this step. We don't want the peak moment to be a horrendous one.

Airbnb Example

The same applies to positive experiences. Let's say you've booked an apartment on Airbnb. That's pretty exciting, right? Of course it is: you're going on holiday! To ensure the possibly frustrating search experience doesn't overshadow the excitement of your booking, Airbnb adds friction to keep you excited for a little longer. Here's what Airbnb does:

  • shows you things to do in the area
  • lets you read the house manual
  • lets you send the itinerary to your travel buddies
  • helps you find directions to the address
  • sends you an exciting "You're going away!" message

Not only does this often overshadow the somewhat long/boring search for an Airbnb, but it improves the user experience towards the end as well. Now, when the user remembers Airbnb, they'll remember how exciting it all was. Even though Airbnb bothers us with sending itineraries and recommendations, this is the sort of friction we're happy to engage in.

In short: stretch out positive moments, and relieve the user of negative pain points quickly by removing friction.

Uber Example

Remember taxis? Remember arriving at your destination and then fiddling around for cash? Yeah, this can be awkward. You realize you don't have the right change, so you pay with credit card; the card machine isn't working, so you have to drive to the ATM.

It's a rather awful, frustrating, embarrassing experience.

Your Uber account is linked to your bank card. Once you've arrived at your destination, you hop out of the car and you're done. Fiddling around for cash is not necessary; that pain point has been removed, and so the user walks away with their final experience with Uber being one of delight.

Embrace "Flat" Moments

Flat moments are moments that are neither fun nor boring.

An excellent example of a "flat moment turned memorable" might be from way back in the early 2000s, from an e-commerce website called CD Baby. Typically, when you make a purchase online, you receive an email confirmation to notify you that your purchase went through smoothly. This is fairly standard, and important.

Derek Sivers at CD Baby knew how flat this experience would be, and didn't want to end it with something that wasn't memorable, so he thought he'd have some fun. He put on his best copywriting mitts and came up with the following confirmation email:

People loved it. It went viral. Derek had turned a boring aspect of the experience into an unexpected delight. People were suddenly purchasing from CD Baby just to see the email (remember, this was the early 2000s!). If we map out the customer journey, we'll find that the email had become a peak moment, and a surefire way to create a memorable experience as the user --- hopefully temporarily --- departs from CD Baby.

Continue reading %How to Design Highly Memorable Experiences, and Why%

React Router v4: The Complete Guide

SitePoint - Tue, 09/26/2017 - 13:00

React Router is the de facto standard routing library for React. When you need to navigate through a React application with multiple views, you'll need a router to manage the URLs. React Router takes care of that, keeping your application UI and the URL in sync.

This tutorial introduces you to React Router v4 and a whole lot of things you can do with it.


React is a popular library for creating single-page applications (SPAs) that are rendered on the client side. An SPA might have multiple views (aka pages), and unlike the conventional multi-page apps, navigating through these views shouldn't result in the entire page being reloaded. Instead, we want the views to be rendered inline within the current page. The end user, who's accustomed to multi-page apps, expects the following features to be present in an SPA:

  • Each view in an application should have a URL that uniquely specifies that view. This is so that the user can bookmark the URL for reference at a later time --- e.g.
  • The browser's back and forward button should work as expected.
  • The dynamically generated nested views should preferably have a URL of their own too --- e.g., where 101 is the product id.

Routing is the process of keeping the browser URL in sync with what's being rendered on the page. React Router lets you handle routing declaratively. The declarative routing approach allows you to control the data flow in your application, by saying "the route should look like this":

<Route path="/about" component={About}/>

You can place your <Route> component anywhere that you want your route to be rendered. Since <Route>, <Link> and all the other React Router API that we'll be dealing with are just components, you can easily get used to routing in React.

A note before getting started. There's a common misconception that React Router is an official routing solution developed by Facebook. In reality, it's a third-party library that's widely popular for its design and simplicity. If your requirements are limited to routers for navigation, you could implement a custom router from scratch without much hassle. However, understanding the basics of React Router will give you better insight into how a router should work.


This tutorial is divided into different sections. First, we'll be setting up React and React Router using npm. Then we'll jump right into React Router basics. You'll find different code demonstrations of React Router in action. The examples covered in this tutorial include:

  1. basic navigational routing
  2. nested routing
  3. nested routing with path parameters
  4. protected routing

All the concepts connected with building these routes will be discussed along the way. The entire code for the project is available on this GitHub repo. Once you're inside a particular demo directory, run npm install to install the dependencies. To serve the application on a development server, run npm start and head over to http://localhost:3000/ to see the demo in action.

Let's get started!

Setting up React Router

I assume you already have a development environment up and running. If not, head over to “Getting Started with React and JSX”. Alternatively, you can use Create React App to generate the files required for creating a basic React project. This is the default directory structure generated by Create React App:

react-routing-demo-v4
├── .gitignore
├── package.json
├── public
│   ├── favicon.ico
│   ├── index.html
│   └── manifest.json
├── src
│   ├── App.css
│   ├── App.js
│   ├── App.test.js
│   ├── index.css
│   ├── index.js
│   ├── logo.svg
│   └── registerServiceWorker.js
└── yarn.lock

The React Router library comprises three packages: react-router, react-router-dom, and react-router-native. react-router is the core package for the router, whereas the other two are environment specific. You should use react-router-dom if you're building a website, and react-router-native if you're on a mobile app development environment using React Native.

Use npm to install react-router-dom:

npm install --save react-router-dom

React Router Basics

Here's an example of how our routes will look:

<Router>
  <Route exact path="/" component={Home}/>
  <Route path="/category" component={Category}/>
  <Route path="/login" component={Login}/>
  <Route path="/products" component={Products}/>
</Router>

Router

You need a router component and several route components to set up a basic route as exemplified above. Since we're building a browser-based application, we can use two types of routers from the React Router API:

  1. <BrowserRouter>
  2. <HashRouter>

The primary difference between them is evident in the URLs that they create:

// <BrowserRouter> // <HashRouter>

The <BrowserRouter> is the more popular of the two because it uses the HTML5 History API to keep track of your router history. The <HashRouter>, on the other hand, uses the hash portion of the URL (window.location.hash) to remember things. If you intend to support legacy browsers, you should stick with <HashRouter>.

Wrap the <BrowserRouter> component around the App component.

index.js

/* Import statements */
import React from 'react';
import ReactDOM from 'react-dom';

/* App is the entry point to the React code. */
import App from './App';

/* Import BrowserRouter from 'react-router-dom' */
import { BrowserRouter } from 'react-router-dom';

ReactDOM.render(
  <BrowserRouter>
    <App />
  </BrowserRouter>,
  document.getElementById('root'));

Note: A router component can only have a single child element. The child element can be an HTML element --- such as div --- or a React component.

For the React Router to work, you need to import the relevant API from the react-router-dom library. Here I've imported the BrowserRouter into index.js. I've also imported the App component from App.js. App.js, as you might have guessed, is the entry point to React components.

The above code creates an instance of history for our entire App component. Let me formally introduce you to history.


history is a JavaScript library that lets you easily manage session history anywhere JavaScript runs. history provides a minimal API that lets you manage the history stack, navigate, confirm navigation, and persist state between sessions. --- React Training docs

Each router component creates a history object that keeps track of the current location (history.location) and also the previous locations in a stack. When the current location changes, the view is re-rendered and you get a sense of navigation. How does the current location change? The history object has methods such as history.push() and history.replace() to take care of that. history.push() is invoked when you click on a <Link> component, and history.replace() is called when you use <Redirect>. Other methods --- such as history.goBack() and history.goForward() --- are used to navigate through the history stack by going back or forward a page.
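As a rough mental model of that history stack, here's a toy sketch only, not the real history library's API, which is considerably richer:

```javascript
// Toy model of a history stack: an array of locations plus an index.
// push() and replace() mirror what <Link> and <Redirect> trigger.
function createToyHistory(initial = '/') {
  const stack = [initial];
  let index = 0;
  return {
    get location() { return stack[index]; },
    push(path) {
      stack.splice(index + 1);  // drop any "forward" entries
      stack.push(path);
      index++;
    },
    replace(path) {
      stack[index] = path;      // overwrite the current entry
    },
    goBack() { if (index > 0) index--; }
  };
}

const history = createToyHistory();
history.push('/category');
history.push('/products');
history.goBack();
console.log(history.location); // "/category"
```

When the current location changes (via push, replace or goBack), the router re-renders the matching view, which is exactly the "sense of navigation" described above.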

Moving on, we have Links and Routes.

Links and Routes

The <Route> component is the most important component in React router. It renders some UI if the current location matches the route's path. Ideally, a <Route> component should have a prop named path, and if the pathname is matched with the current location, it gets rendered.

The <Link> component, on the other hand, is used to navigate between pages. It's comparable to the HTML anchor element. However, using anchor links would result in a browser refresh, which we don't want. So instead, we can use <Link> to navigate to a particular URL and have the view re-rendered without a browser refresh.

We've covered everything you need to know to create a basic router. Let's build one.

Demo 1: Basic Routing

src/App.js

/* Import statements */
import React, { Component } from 'react';
import { Link, Route, Switch } from 'react-router-dom';

/* Home component */
const Home = () => (
  <div>
    <h2>Home</h2>
  </div>
)

/* Category component */
const Category = () => (
  <div>
    <h2>Category</h2>
  </div>
)

/* Products component */
const Products = () => (
  <div>
    <h2>Products</h2>
  </div>
)

/* App component */
class App extends React.Component {
  render() {
    return (
      <div>
        <nav className="navbar navbar-light">
          <ul className="nav navbar-nav">
            {/* Link components are used for linking to other views */}
            <li><Link to="/">Home</Link></li>
            <li><Link to="/category">Category</Link></li>
            <li><Link to="/products">Products</Link></li>
          </ul>
        </nav>
        {/* Route components are rendered if the path prop matches the current URL */}
        <Route path="/" component={Home}/>
        <Route path="/category" component={Category}/>
        <Route path="/products" component={Products}/>
      </div>
    )
  }
}

We've declared the components for Home, Category and Products inside App.js. Although this is okay for now, when the component starts to grow bigger, it's better to have a separate file for each component. As a rule of thumb, I usually create a new file for a component if it occupies more than 10 lines of code. Starting from the second demo, I'll be creating a separate file for components that have grown too big to fit inside the App.js file.

Inside the App component, we've written the logic for routing. The <Route>'s path is matched with the current location and a component gets rendered. The component that should be rendered is passed in as a second prop.

Here, / matches both / and /category, so both routes are matched and rendered. How do we avoid that? Pass the exact={true} prop to the route with path='/':

<Route exact={true} path="/" component={Home}/>

If you want a route to be rendered only if the paths are exactly the same, you should use the exact prop.

Nested Routing

To create nested routes, we need to have a better understanding of how <Route> works. Let's do that.

<Route> has three props that you can use to define what gets rendered:

  • component. We've already seen this in action. When the URL is matched, the router creates a React element from the given component using React.createElement.
  • render. This is handy for inline rendering. The render prop expects a function that returns an element when the location matches the route's path.
  • children. The children prop is similar to render in that it expects a function that returns a React element. However, children gets rendered regardless of whether the path is matched with the location or not.
Path and match

The path is used to identify the portion of the URL that the router should match. It uses the Path-to-RegExp library to turn a path string into a regular expression. It will then be matched against the current location.

If the router's path and the location are successfully matched, an object is created and we call it the match object. The match object carries more information about the URL and the path. This information is accessible through its properties, listed below:

  • match.url. A string that returns the matched portion of the URL. This is particularly useful for building nested <Link>s.
  • match.path. A string that returns the route's path string --- that is, <Route path="">. We'll be using this to build nested <Route>s.
  • match.isExact. A boolean that returns true if the match was exact (without any trailing characters).
  • match.params. An object containing key/value pairs from the URL parsed by the Path-to-RegExp package.
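Putting those properties together, here's a hand-constructed example of what a match object might look like when a hypothetical path "/products/:id" matches the URL "/products/101", and how its properties feed nested links and routes. The values below are illustrative, not produced by React Router itself:

```javascript
// Hand-built example of the match object's shape, mirroring the list above.
const match = {
  url: '/products/101',    // matched portion of the URL, for nested <Link>s
  path: '/products/:id',   // the route's path string, for nested <Route>s
  isExact: true,           // the whole URL was consumed by the match
  params: { id: '101' }    // key/value pairs parsed by Path-to-RegExp
};

// Nested links build on match.url; nested routes build on match.path:
const nestedLink = `${match.url}/reviews`;   // "/products/101/reviews"
const nestedRoute = `${match.path}/reviews`; // "/products/:id/reviews"
```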

Now that we know all about <Route>s, let's build a router with nested routes.

Switch Component

Before we head for the demo code, I want to introduce you to the <Switch> component. When multiple <Route>s are used together, all the routes that match are rendered inclusively. Consider this code from demo 1. I've added a new route to demonstrate why <Switch> is useful.

<Route exact path="/" component={Home}/>
<Route path="/products" component={Products}/>
<Route path="/category" component={Category}/>
<Route path="/:id" render={() => (
  <p>I want this text to show up for all routes other than '/', '/products' and '/category'</p>
)}/>

If the URL is /products, all the routes that match the location /products are rendered. So, the <Route> with path :id gets rendered along with the Products component. This is by design. However, if this is not the behavior you're expecting, you should add the <Switch> component to your routes. With <Switch>, only the first child <Route> that matches the location gets rendered.
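To see why this matters, here's a simplified, hypothetical model of inclusive matching versus first-match-wins. Real React Router matches paths with Path-to-RegExp, so this prefix-based matcher is only an approximation of the behavior:

```javascript
// Toy matcher: '/' matches everything (non-exact), '/:id' matches any
// single URL segment, anything else matches by prefix.
const routes = ['/', '/products', '/category', '/:id'];

function matches(path, url) {
  if (path === '/') return true;
  if (path === '/:id') return /^\/[^/]+$/.test(url);
  return url.startsWith(path);
}

// Without <Switch>: every matching route renders
const inclusive = routes.filter(p => matches(p, '/products'));
console.log(inclusive); // [ '/', '/products', '/:id' ]

// With <Switch>: only the first matching route renders
const firstOnly = routes.find(p => matches(p, '/products'));
console.log(firstOnly); // '/'
```

Note that even with first-match-wins, the non-exact '/' route wins here, which is why <Switch> is usually combined with exact on the root route.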

Continue reading %React Router v4: The Complete Guide%

Getting Started with Redux

SitePoint - Tue, 09/26/2017 - 12:00

A typical web application is usually composed of several UI components that share data. Often, multiple components are tasked with the responsibility of displaying different properties of the same object. This object represents state which can change at any time. Keeping state consistent among multiple components can be a nightmare, especially if there are multiple channels being used to update the same object.

Take, for example, a site with a shopping cart. At the top we have a UI component showing the number of items in the cart. We could also have another UI component that displays the total cost of items in the cart. If a user clicks the Add to Cart button, both of these components should update immediately with the correct figures. If the user decides to remove an item from the cart, change quantity, add a protection plan, use a coupon or change shipping location, then the relevant UI components should update to display the correct information. As you can see, a simple shopping cart can quickly become difficult to keep in sync as the scope of its features grows.

In this guide, I'll introduce you to a framework known as Redux, which can help you build complex projects in a way that's easy to scale and maintain. To make learning easier, we'll use a simplified shopping cart project to learn how Redux works. You'll need to be at least familiar with the React library, as you'll later need to integrate it with Redux.


Before we get started, make sure you're familiar with the following topics:

Also, ensure you have the following setup on your machine:

You can access the entire code used in this tutorial on GitHub.

What is Redux?

Redux is a popular JavaScript framework that provides a predictable state container for applications. Redux is based on a simplified version of Flux, a framework developed by Facebook. Unlike standard MVC frameworks, where data can flow between UI components and storage in both directions, Redux strictly allows data to flow in one direction only. See the below illustration:

Figure 1: Redux Flow Chart

In Redux, all data --- i.e. state --- is held in a container known as the store. There can only be one of these within an application. The store is essentially a state tree where states for all objects are kept. Any UI component can access the state of a particular object directly from the store. To change a state from a local or remote component, an action needs to be dispatched. Dispatch in this context means sending actionable information to the store. When a store receives an action, it delegates it to the relevant reducer. A reducer is simply a pure function that looks at the previous state, performs an action and returns a new state. To see all this in action, we need to start coding.
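The dispatch-to-reducer-to-store cycle described above can be sketched with a toy store. This is not the real Redux createStore, just a minimal illustration of the one-way data flow:

```javascript
// Minimal sketch of the flow: dispatch -> reducer -> new state -> notify.
function createToyStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' }); // seed initial state
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // reducer returns the next state
      listeners.forEach(fn => fn());  // UI components re-render here
    },
    subscribe(fn) { listeners.push(fn); }
  };
}

// A pure reducer: (previous state, action) -> new state
const counter = (state = 0, action) =>
  action.type === 'INCREMENT' ? state + 1 : state;

const store = createToyStore(counter);
store.dispatch({ type: 'INCREMENT' });
console.log(store.getState()); // 1
```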

Understand Immutability First

Before we start, I need you to first understand what immutability means in JavaScript. According to the Oxford English Dictionary, immutability means being unchangeable. In programming, we write code that changes the values of variables all the time. This is referred to as mutability. The way we do this can often cause unexpected bugs in our projects. If your code only deals with primitive data types (numbers, strings, booleans), then you don't need to worry. However, if you're working with Arrays and Objects, performing mutable operations on them can create unexpected bugs. To demonstrate this, open your terminal and launch the Node interactive shell:


node

Next, let's create an array, then later assign it to another variable:

> let a = [1,2,3]
> let b = a
> b.push(9)
> console.log(b)
[ 1, 2, 3, 9 ] // b output
> console.log(a)
[ 1, 2, 3, 9 ] // a output

As you can see, updating array b caused array a to change as well. This happens because Objects and Arrays are known as referential data types --- meaning that such data types don't actually hold values themselves, but are pointers to a memory location where the values are stored. By assigning a to b, we merely created a second pointer that references the same location. To fix this, we need to copy the referenced values to a new location. In JavaScript, there are three different ways of achieving this:

  1. using immutable data structures created by Immutable.js
  2. using JavaScript libraries such as Underscore and Lodash to execute immutable operations
  3. using native ES6 functions to execute immutable operations.

For this article, we'll use the ES6 way, since it's already available in the NodeJS environment. Inside your NodeJS terminal, execute the following:

> a = [1,2,3] // reset a
[ 1, 2, 3 ]
> b = Object.assign([], a) // copy array a to b
[ 1, 2, 3 ]
> b.push(8)
> console.log(b)
[ 1, 2, 3, 8 ] // b output
> console.log(a)
[ 1, 2, 3 ] // a output

In the above code example, array b can now be modified without affecting array a. We've used Object.assign() to create a new copy of values that variable b will now point to. We can also use the spread operator (...) to perform an immutable operation like this:

> a = [1,2,3]
[ 1, 2, 3 ]
> b = [...a, 4, 5, 6]
[ 1, 2, 3, 4, 5, 6 ]
> a
[ 1, 2, 3 ]

The spread operator works with object literals too! I won't go deep into this subject, but here are some additional ES6 functions that we'll use to perform immutable operations:
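The same three-dot syntax copies object literals into a fresh object, leaving the original untouched. A quick check:

```javascript
// Copying an object with ... produces a new object; later keys override
// earlier ones, so cartItems is updated in the copy only.
const user = { name: 'Ann', cartItems: 2 };
const updated = { ...user, cartItems: 3 };

console.log(updated); // { name: 'Ann', cartItems: 3 }
console.log(user);    // { name: 'Ann', cartItems: 2 } -- unchanged
```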

In case the documentation I've linked isn't useful, don't worry, as you'll see how they're used in practice. Let's start coding!

Setting up Redux

The fastest way to set up a Redux development environment is to use the create-react-app tool. Before we begin, make sure you've installed and updated nodejs, npm and yarn. Let's set up a Redux project by generating a redux-shopping-cart project and installing the Redux package:

create-react-app redux-shopping-cart
cd redux-shopping-cart
yarn add redux # or npm install redux

Delete all files inside the src folder except index.js. Open the file and clear out all existing code. Type the following:

import { createStore } from "redux";

const reducer = function(state, action) {
  return state;
}

const store = createStore(reducer);

Let me explain what the above piece of code does:

  • 1st statement. We import a createStore() function from the Redux package.
  • 2nd statement. We create an empty function known as a reducer. The first argument, state, is current data held in the store. The second argument, action, is a container for:
    • type --- a simple string constant e.g. ADD, UPDATE, DELETE etc.
    • payload --- data for updating state
  • 3rd statement. We create a Redux store, which can only be constructed using a reducer as a parameter. The data kept in the Redux store can be accessed directly, but can only be updated via the supplied reducer.

You may have noticed I mentioned current data as if it already exists. Currently, our state is undefined or null. To remedy this, just assign a default value to state like this to make it an empty array:

const reducer = function(state=[], action) { return state; }

Now, let's get practical. The reducer we created is generic. Its name doesn't describe what it's for. Then there's the issue of how we work with multiple reducers. The answer is to use a combineReducers function that's supplied by the Redux package. Update your code as follows:

// src/index.js
…
import { combineReducers } from 'redux';

const productsReducer = function(state=[], action) {
  return state;
}

const cartReducer = function(state=[], action) {
  return state;
}

const allReducers = {
  products: productsReducer,
  shoppingCart: cartReducer
}

const rootReducer = combineReducers(allReducers);

let store = createStore(rootReducer);

In the code above, we've renamed the generic reducer to cartReducer. There's also a new empty reducer named productsReducer that I've created just to show you how to combine multiple reducers within a single store using the combineReducers function.
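Under the hood, combineReducers essentially routes each slice of the state tree to its own reducer. Here's a toy approximation, not the real Redux implementation, to make the idea concrete:

```javascript
// Toy combineReducers: each slice reducer manages its own key of state.
function combineToyReducers(reducers) {
  return function rootReducer(state = {}, action) {
    const next = {};
    for (const key of Object.keys(reducers)) {
      next[key] = reducers[key](state[key], action); // delegate per slice
    }
    return next;
  };
}

const productsReducer = (state = [], action) => state;
const cartReducer = (state = [], action) => state;

const rootReducer = combineToyReducers({
  products: productsReducer,
  shoppingCart: cartReducer
});

console.log(rootReducer(undefined, { type: '@@INIT' }));
// { products: [], shoppingCart: [] }
```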

Next, we'll look at how we can define some test data for our reducers. Update the code as follows:

// src/index.js
…
const initialState = {
  cart: [
    { product: 'bread 700g', quantity: 2, unitCost: 90 },
    { product: 'milk 500ml', quantity: 1, unitCost: 47 }
  ]
}

const cartReducer = function(state=initialState, action) {
  return state;
}
…
let store = createStore(rootReducer);

console.log("initial state: ", store.getState());

Just to confirm that the store has some initial data, we use store.getState() to print out the current state in the console. You can run the dev server by executing npm start or yarn start in the console. Then press Ctrl+Shift+I to open the inspector tab in Chrome in order to view the console tab.

Figure 2: Redux Initial State

Currently, our cartReducer does nothing, yet it's supposed to manage the state of our shopping cart items within the Redux store. We need to define actions for adding, updating and deleting shopping cart items. Let's start by defining the logic for an ADD_TO_CART action:

// src/index.js
…
const ADD_TO_CART = 'ADD_TO_CART';

const cartReducer = function(state=initialState, action) {
  switch (action.type) {
    case ADD_TO_CART: {
      return {
        ...state,
        cart: [...state.cart, action.payload]
      }
    }
    default:
      return state;
  }
}
…

Take your time to analyze and understand the code. A reducer is expected to handle different action types, hence the need for a switch statement. When an action of type ADD_TO_CART is dispatched anywhere in the application, the code defined here will handle it. As you can see, we're combining the information provided in action.payload with the existing state in order to create a new state.

Next, we'll define an action, which is needed as a parameter for store.dispatch(). Actions are simply JavaScript objects that must have a type and may carry an optional payload. Let's go ahead and define one right after the cartReducer function:

…
function addToCart(product, quantity, unitCost) {
  return {
    type: ADD_TO_CART,
    payload: { product, quantity, unitCost }
  }
}
…

Here, we've defined a function that returns a plain JavaScript object. Nothing fancy. Before we dispatch, let's add some code that will allow us to listen to store event changes. Place this code right after the console.log() statement:

…
let unsubscribe = store.subscribe(() =>
  console.log(store.getState())
);

unsubscribe();

Next, let's add several items to the cart by dispatching actions to the store. Place this code before unsubscribe():

…
store.dispatch(addToCart('Coffee 500gm', 1, 250));
store.dispatch(addToCart('Flour 1kg', 2, 110));
store.dispatch(addToCart('Juice 2L', 1, 250));

For clarification purposes, I'll illustrate below how the entire code should look after making all the above changes:

// src/index.js
import { createStore } from "redux";
import { combineReducers } from 'redux';

const productsReducer = function(state=[], action) {
  return state;
}

const initialState = {
  cart: [
    { product: 'bread 700g', quantity: 2, unitCost: 90 },
    { product: 'milk 500ml', quantity: 1, unitCost: 47 }
  ]
}

const ADD_TO_CART = 'ADD_TO_CART';

const cartReducer = function(state=initialState, action) {
  switch (action.type) {
    case ADD_TO_CART: {
      return {
        ...state,
        cart: [...state.cart, action.payload]
      }
    }
    default:
      return state;
  }
}

function addToCart(product, quantity, unitCost) {
  return {
    type: ADD_TO_CART,
    payload: { product, quantity, unitCost }
  }
}

const allReducers = {
  products: productsReducer,
  shoppingCart: cartReducer
}

const rootReducer = combineReducers(allReducers);

let store = createStore(rootReducer);

console.log("initial state: ", store.getState());

let unsubscribe = store.subscribe(() =>
  console.log(store.getState())
);

store.dispatch(addToCart('Coffee 500gm', 1, 250));
store.dispatch(addToCart('Flour 1kg', 2, 110));
store.dispatch(addToCart('Juice 2L', 1, 250));

unsubscribe();

After you've saved your code, Chrome should automatically refresh. Check the console tab to confirm that the new items have been added:

Figure 3: Redux Actions Dispatched

Continue reading %Getting Started with Redux%

15 Top Prototyping Tools Go Head-to-Head

SitePoint - Tue, 09/26/2017 - 11:00

As the number and variety of prototyping tools continues to grow, it’s becoming harder and harder to figure out which tools meet what needs, and who they’re suitable for. Since we first wrote this article back in 2015, countless design apps have dominated (and changed) the prototyping space.

Stakeholder feedback and user testing is now taking a far greater role in UI design and this new generation of tools aims to connect these two previously separate stages of the design process. Clients want to be involved, and email isn’t cutting it anymore. Some apps like UXPin are also taking care of the wireframing stages, whereas others like InVision App are bridging the gap between designer and developer by offering design handoff tools.

Plus, there’s now a clear divide between desktop tools with cloud sharing (Adobe XD, Axure, Balsamiq, Sketch+InVision) and collaborative online tools (Figma, UXPin, Fluid UI, and others).

Many of these tools appear to be converging on a feature set that defines the role of the modern user experience designer. TL;DR—here’s a swift comparison of prototyping tools.

Adobe XD

Adobe may have been caught napping during the rise of Sketch, but they’re rapidly making up for it with Adobe XD. Launched in March 2016, and still in beta as of July 2017, it’s the latest addition to Adobe’s Creative Cloud Suite. While you can prototype interactions in Sketch with the help of the Craft Plugin, Adobe XD impressively offers these tools right out of the box, so designers are already comparing Adobe XD to Sketch like longtime rivals.

It’s definitely worth a look if you’re interested in a tool that covers all your bases (low-fidelity prototyping, high-fidelity prototyping, user flows, sharing and feedback) in a single app.


Pros:

  • Available for macOS and Windows
  • Everything you need in a single app

Cons:

  • Still in beta (although pleasantly mature as a design app)
  • Plugin workflow non-existent, you’re locked into the Adobe ecosystem
InVision App (with Sketch and Craft)

InVision App is the largest and most successful design collaboration tool currently on the market, the primary go-to tool for serious designers and enterprise teams alike. With tools like whiteboard collaboration, dynamic prototyping, comments, device preview, video calling, user testing, version control and design handoff, InVision is already a colossal force in the prototyping space, but when you factor in its Sketch and Photoshop integrations, it becomes an all-in-one design suite, especially when you throw in Craft, the Sketch/Photoshop Plugin that brings a lot of that functionality directly into your design app of choice.


Pros:

  • Powerful, mature platform
  • Fully-integrated with Sketch for high-fidelity design
  • Constantly being updated with new features

Cons:

  • Feature-set can be a little overwhelming at first
  • Sketch is only available for macOS (but you can pair InVision with Photoshop on Windows, although Photoshop isn’t strictly a UI design tool)
Marvel App

A very strong favourite for those looking for simpler, friendlier alternatives to InVision App. Marvel App has excelled at creating a prototyping tool that works for both advanced UX designers and those simply looking to communicate high and low fidelity concepts. Plus, while they champion working with Sketch, they also offer component libraries to allow for a complete online workflow in Marvel. Marvel App also recently integrated fan-favourite POP, which allows designers to transform their pen/paper ideas into iPhone and Android apps.


Pros:

  • Great support for transitions for additional realism
  • Friendlier for non-designers, especially when giving feedback

Cons:

  • Web based only
  • No offline designing

UXPin

UXPin is the most complete online solution for UX designers in terms of their offering. While you can import from Sketch and Photoshop, you can also design complex and responsive interfaces with UXPin’s built-in libraries, making UXPin something of a wireframing tool as well. With their design systems features, UXPin becomes one of the most complex tools in terms of automated documentation, designer/developer handoffs and collaborative features.


Pros:

  • Responsive design with breakpoints
  • Powerful animations (not just linking screens)
  • Complete design collaboration and handoff solution

Cons:

  • A little pricey versus the competition at $49/user/month
  • Additional features increase the complexity of use

Webflow

Webflow is a visual tool for designing responsive websites that also exports clean code—it removes the headache of going from design to published on the web. Competing as much with WordPress as it does with Sketch App, Webflow lets you design fully functional responsive websites incorporating back-end (API) data and can automatically deploy to fully scalable, worry-free hosting with a single click of a button. It’s basically Adobe Dreamweaver for the modern-day designer who cares about clean code and mobile-friendly web design.


Pros:

  • Real data can be included (from APIs/JSON/etc)
  • Creates high-quality, reusable code
  • Responsive websites can be designed and deployed with ease

Cons:

  • Not useful for designing native mobile apps
  • Requires some knowledge of HTML/CSS to be at its most effective

Figma

A somewhat recent addition to the prototyping space, Figma boasts the most mesmerising real-time design collaboration features of any prototyping tool while modelling its feature-set on many of the intuitive design tools of Sketch and Adobe XD (such as symbols and device preview), along with a bunch of tools usually reserved for the online crowd (such as versioning and design handoff). Version 2.0, launched in July 2017, includes a prototyping mode with hotspots and developer handoffs to further streamline the design workflow. It works in the browser, on macOS, and on Windows, although it can sometimes be slow.


Pros:

  • Real-time collaborative design features are second-to-none
  • Fully-featured, ideal for designers from start to finish

Cons:

  • Figma can be laggy at times, especially with real-time collaboration
9 More Prototyping Tools Worth Considering

Fluid UI

With a strong focus on simplicity and communication, Fluid UI includes built-in high and low fidelity component libraries, live team collaboration, device previews and video presentations making it a top-notch solution for designers, product managers and founders alike.

Mature and feature-rich, is best used by designers looking to create high-fidelity and highly-animated prototypes in the browser.

Axure RP 8

Established way back in 2003, Axure is an excellent choice for UX designers who need to create specifications for designs and animations in supreme detail. Axure includes support for conditional flow interactions, dynamic content and adaptive/responsive design, as well as high and low-fidelity prototyping. Axure is a serious tool for serious designers.

Continue reading %15 Top Prototyping Tools Go Head-to-Head%

CSS font-display: The Future of Font Rendering on the Web

SitePoint - Tue, 09/26/2017 - 09:00

One of the downsides of using web fonts is that if a font is not available on a user’s device, it must be downloaded. This means that before the font becomes available the browser has to decide how to handle the display of any block of text that uses that font. And it needs to do so in a way that doesn’t significantly impact the user experience and perceived performance.

In the course of time, browsers have adopted several strategies to mitigate this problem. But they do this in different ways and out of the control of developers, who have had to devise several techniques and workarounds to overcome these issues.

Enter the font-display descriptor for the @font-face at-rule. This CSS feature introduces a way to standardize these behaviors and provide more control to developers.

Using font-display

Before looking in detail at the various features offered by font-display, let’s briefly consider how you might use the feature in your CSS.

First of all, font-display is not a CSS property but, as mentioned in the intro, it is a descriptor for the @font-face at-rule. This means that it must be used inside a @font-face rule, as showed in the following code:

[code language="css"]
@font-face {
  font-family: 'Saira Condensed';
  src: url(fonts/sairacondensed.woff2) format('woff2');
  font-display: swap;
}
[/code]

In this snippet I’m defining a swap value for the font Saira Condensed.

The keywords for all the available values are:

  • auto
  • block
  • swap
  • fallback
  • optional

The initial value for font-display is auto.

In later sections I’ll go over each of these values in detail. But first, let’s look at the time period that the browser uses to determine the font to be rendered. When discussing each of the values, I’ll explain the different aspects of the timeline and how these behave for each value.

The font-display Timeline

At the core of this feature is the concept of the font-display timeline. The font loading time, starting from its request and ending with its successful loading or failure, can be divided into three consecutive periods that dictate how a browser should render the text. These three periods are as follows:

  • The block period. During this period, the browser renders the text with an invisible fallback font. If the requested font successfully loads, the text is re-rendered with that requested font. The invisible fallback font acts as a blank placeholder for the text. This reduces layout shifting when the re-rendering is performed.
  • The swap period. If the desired font is not yet available, the fallback font is used, but this time the text is visible. Again, if the loading font comes in, it is used.
  • The failure period. If the font does not become available, the browser doesn’t wait for it, and the text will be displayed with the fallback font for the duration of the current page visit. Note that this doesn’t necessarily mean that the font loading is aborted; instead, the browser can decide to continue it, so that the font will be ready for use on successive page visits by the same user.
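As a concrete sketch of how these periods play out, consider the swap value from the earlier snippet. The body rule and the sans-serif fallback stack below are assumptions added for illustration; they are not part of the original example:

```css
@font-face {
  font-family: 'Saira Condensed';
  src: url(fonts/sairacondensed.woff2) format('woff2');
  /* swap: a very short (or zero) block period, then an effectively
     unlimited swap period -- the text is shown immediately in the
     fallback font and re-rendered with Saira Condensed whenever
     it finishes loading. */
  font-display: swap;
}

body {
  /* sans-serif is the visible fallback during the swap period */
  font-family: 'Saira Condensed', sans-serif;
}
```

With swap there is effectively no failure period: however late the font arrives, the browser will still swap it in.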

Continue reading %CSS font-display: The Future of Font Rendering on the Web%
