I’ve been asked a lot of questions and have done a lot of work recently around security hardening for HTTP streaming with Adobe Media Server (AMS) and Apache. Content protection and server hardening are an evolving beast, and the best thing to do is to keep in mind what needs to be secure and how it could possibly be circumvented. That said, there are some basic things to know and a couple of tips I can shed light on within the span of a blog post.

First, with HTTP streaming I think of security in three major categories:

  1. Server security
  2. Content protection over the wire
  3. Content protection while at rest and preventing unauthorized access

Server Security

When considering the origin of your content, you need to follow the general server hardening and security processes:

  • Decreasing access to root level accounts.
  • Protecting authentication info such as passwords and certs. Changing them from time to time as well.
  • Keeping the Operating System and server applications patched.
  • Using firewalls to decrease the network attack surface of your server.
  • Auditing the server files and logs and using some IDS systems.
  • The list goes on…

After you’ve done due diligence when it comes to your server, next you need to concern yourself with AMS and Apache as well. Here are a couple of tips to keep in mind:

Adobe Media Server

Apache Server

The version of Apache bundled with AMS is 2.2.x. Unfortunately, due to the modules needed for HTTP streaming, you can’t upgrade to a newer version of Apache such as 2.4. However, you can lock 2.2 down as far as you need. Here are some tips on that:
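As a starting point, here is a hedged sketch of a few common Apache 2.2 hardening directives. Treat this as illustrative rather than a complete hardening guide, and adjust paths and policies for your own httpd.conf:

```apache
# Hide version details in response headers and error pages
ServerTokens Prod
ServerSignature Off

# Disable HTTP TRACE to mitigate cross-site tracing
TraceEnable Off

# Deny everything by default; open up only the directories you actually serve from
<Directory />
    Options None
    AllowOverride None
    Order deny,allow
    Deny from all
</Directory>

# Cap request body size to blunt some abuse attempts (value is an example)
LimitRequestBody 102400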

AMS and Apache – Ongoing

A really good way to see how well your lockdown efforts are going is to run a vulnerability scanner against your server. Not only will this give you an idea of what’s still exposed, it’s also a good way to re-check your server from time to time as new vulnerabilities are discovered. Here’s a scanner that I like using:

Content Protection Over the Wire

Now that your server is secure, you need to figure out how to protect your content as it traverses the network between your AMS/Apache origin and the end-user’s video player. SSL is always an option, but did you know that AMS has some built-in DRM protection that doesn’t need to use SSL?

Content Protection While at Rest and Preventing Unauthorized Access

How do we prevent unauthorized access and protect the content that the end user has streamed to their local machine?

Prevent Unauthorized Access

There are a number of things you can do to prevent unauthorized access. Without going too far into implementation details, this step requires:

  1. Some coordination with the application developers on your team to create a binding between the video player and the wrapping application. For instance, the video player would require some kind of token to be passed in before it will play back content. This token can be anything from a shared secret to some information acquired through a valid SSO sign-on.
  2. If you’re using PHDS, once the player is bound to your system, then you can leverage Protected SWF Verification for PHDS to make sure only your player can play back the PHDS content:
  3. If you’re using HLS, it’s much trickier and not quite as all-encompassing, but something you might keep in mind is locking down requests for content through token rewrites that have a short expiration TTL:

Content Protection While at Rest

This one’s easy…for now. If you use PHDS or PHLS as mentioned in the previous section, the data itself is protected with DRM. Basically, a simple AMS-bundled version of Adobe Access DRM. :)

Closing thoughts

Don’t consider this article and the referenced links an end-all, be-all for HTTP streaming security with AMS/Apache. It’s just a quick summary of some of the things to consider.

In my consulting experience, I’ve had a wide variety of clients, each with varying security needs. Some implement everything, some a subset, and most of the time there’s custom development, consulting, and testing involved. Security is also a trade-off: the more secure you make something, the less functionality there will be for you to leverage. So implement your security while keeping your required functionality in mind. And test, Test, TEST your configurations against your production use cases.

Hope you enjoyed the read. If you’re ever in need of advice or help with implementing your HTTP Streaming Security, feel free to drop us a line:



Back in 2012, Adobe proposed CSS Shapes as a new feature to the W3C. Earlier this spring, a portion of the proposed features reached Candidate Recommendation status. While that is still a far cry from becoming official by any definition, it is a very encouraging step. Some elements of the functionality are also becoming available to experiment with in the “bleeding edge” versions of major browsers: Chrome Canary and the WebKit Nightly Builds.

CodePen Example

An example on CodePen by Adobe Web Platform


CSS Shapes are an attempt to bring some of the benefits of working with type and layout in print to the modern web. Module 1 focuses primarily on how text wraps around a shape (the shape-outside property).


Despite both being aspects of CSS that relate to shapes, the CSS Shapes discussed in this article are actually quite different from the very cool little examples showcased in CSS-Tricks’ The Shapes of CSS.

Those shapes are created almost entirely through the clever manipulation of individual border and border-radius properties, with the main intent of rendering shapes in the browser with pure CSS code.

The CSS Shapes defined in the spec are in fact full geometric shapes:

inset() = inset( <shape-arg>{1,4} [round <border-radius>]? )

circle() = circle( [<shape-radius>]? [at <position>]? )

ellipse() = ellipse( [<shape-radius>{2}]? [at <position>]? )

polygon() = polygon( [<fill-rule>,]? [<shape-arg> <shape-arg>]# )

They were created with the main intention of wrapping and flowing text around and through the elements of a page, while providing the flexibility required to responsively accommodate the ever-growing ecosystem of devices and screen resolutions in use today.
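As a small, hedged example of how these functions are meant to be used (syntax per the current draft; experimental builds may still require the -webkit- prefix, and the class name here is made up):

```css
/* Text wraps along the circle's edge instead of the float's bounding box */
.float-left-circle {
  float: left;                /* shape-outside only applies to floats in Module 1 */
  width: 200px;
  height: 200px;
  shape-outside: circle(50%); /* inline content flows around the circle */
  shape-margin: 10px;         /* extra space between the shape and the text */
}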

The most concise explanation I’ve found is this set of illustrations from the current W3C document:




Chrome Canary

  • Open up Chrome Canary and type “chrome://flags” into the address bar
  • You should see a large list of experimental features, with the option to Enable or Disable each one
    • Be very careful on this page because you actually can break things here
  • Find the entry titled “Enable experimental Web Platform features.” and set it to Enabled
  • Restart Canary and the following properties should be available to use:
    • shape-image-threshold: 0;
    • shape-margin: 0px;
    • shape-outside: none;

Webkit Nightly

  • As of the current version at the time of writing (r168728), some CSS Shapes properties are available by default in the standard download:
    • -webkit-shape-image-threshold: 0;
    • -webkit-shape-margin: 0px;
    • -webkit-shape-outside: none;

Once you’re set up with a browser that supports these experimental features, you can check out the Web Platform live example of shape-outside. Instead of the layout that everyone else sees:

Without Shapes

you should see that block of text fit nicely between the two triangles:

With Shapes


It will probably still be quite a while before these features reach a level of implementation and adoption where they can actually be used, but it’s always good to keep an eye on what’s coming.

Here are some great resources for learning more, and seeing what sort of experiments people are trying already:




Last month we had the pleasure of attending the NAB 2014 conference in Las Vegas, Nevada.

I even came out of Lost Wages a little ahead, so for now I can still call it Las Vegas. We’ll just have to wait until my next visit to see what I call it after I leave.

Since this was my first time in attendance, I wanted to see as much as I possibly could without saturating my brain with products and information. When you’re at a conference with 90,000+ attendees and hundreds of exhibitors, it’s easy to get overwhelmed. With that in mind, I made a little mental agenda to focus on relevance versus irrelevance.

What’s really cool about working for a small software development, consulting, training, and integration company like RealEyes, is that we get to deal firsthand with many real world use cases that are never the same. It keeps all of us on our toes and helps us be more knowledgeable in our niche.

Now back to NAB. Anyone reading this blog post who has been there before, or who is familiar with the conference overall, knows that it’s the big leagues. Companies from all over the world that either already have an impact or are trying to have an impact are there. From hardware to software and everything in between, they’re there.

One thing I had never considered was how many moving parts go into broadcasting. It’s insane. Since my primary focus deals in streaming solutions, encoding and web collaboration, I never think about what it takes to produce the content, just how to get it out there and deliver it successfully. And since I don’t think we’re going to be delving into video and/or broadcast production any time soon (or are we?), we have to be resourceful with best practices for content delivery.

Varnish Plus is what we feel will offer the proverbial “icing on the cake” for our end-to-end solution approach. What is it exactly? It’s an HTTP accelerator. Simply put, it’s like supercharging your car. While it’s already been established as the web accelerator of choice in Europe, we are excited to be the premier Varnish Plus resale, implementation, and training partner here in the United States.

Please contact us directly to find out how you can supercharge your content delivery.

Adobe Experiment “Project Parfait”

Posted on April 30, 2014 at 5:39 pm in Blog, Development

Project Parfait

Photoshop has been popular for building pixel-perfect web-design comps for many years now, and has had at least some level of support intended specifically for that purpose for just about as long. With the end of development on Fireworks, and the introduction of the Edge line of products, Adobe has been working on integrating Photoshop more and more seamlessly with tools that are built for developers.

Last week Adobe unveiled its latest experiment to the public, which, for the time being, is called Project Parfait.

Project Parfait Screenshot


Project Parfait is a new experiment from Adobe that’s currently online and free to try out and use. The general idea is that it’s an online service that allows you to log in with your Adobe ID and upload a PSD comp of an app or website (without any extra formatting, organizing, or labelling), which it will then display in your browser almost exactly as it did in Photoshop*.

* All the files I’ve experimented with so far have rendered exactly as I’d expected them to, but the current FAQ does call out that there may be occasional discrepancies. 

Instead of any elements or layers being editable, however, clicking on any of them will display all sorts of incredibly useful information specific to your selected element. Selecting two elements at the same time will give you an exact pixel measurement between those two items.

The information panel will also display some overall specs of the file which have traditionally been a bit more difficult to compile in Photoshop than one would like.



Project Parfait - Fonts

My personal favorite so far is the complete list of all fonts and font styles used in the file. A complex design comp can easily contain well over a hundred layers, nested and hidden and generally difficult to assess all at once, even when filtering for certain types. Compiling a list of all the fonts used, and at what weights and styles, can be tedious even with a PSD you created yourself. If you’re working with a PSD provided by another designer, it can border on maddening. While there are plugins to address this issue, I haven’t been completely satisfied with the ones I’ve tried. Project Parfait, though, does a grand job.

Project Parfait - Fonts

If you click on and select any text layer, you will be able to view the exact properties of that instance, as well as get the CSS styling rules, in a remarkably clean, and a la carte format.


Another very useful bit of information that Project Parfait provides as soon as you load a file, is a clear list of colors used in the file. Any element or layer with a solid fill color will be sampled for the list of swatches, while image layers will not. This gives you an immediate overview of the site’s palette.

Project Parfait - Colors

Clicking on an individual swatch will give you the numeric color values, available in RGB, Hex, or HSL, as well as place marker overlays on the comp to point out each instance of that specific color.

Project Parfait - Colors

One great use for this is tightening up your color palette, as it allows you to be much more conscious of the exact color values implemented. In this instance, I have several grey values that are only a few ticks away from each other, and can probably be pulled into one single value for a more defined overall palette.

As with the text elements, directly selecting any of these elements will provide you with a clean, a la carte list of CSS rules, which you can drop directly into your style sheet. Transparent layers are even accounted for, and will provide CSS using RGBA values, as opposed to HEX values.


This functionality is more or less identical to the color functionality discussed above, except that it will identify elements with a gradient fill.

Project Parfait - Gradients

The major difference is in the CSS generated. Even simple gradients can require some fairly sizeable chunks of code to be rendered with CSS, and while using any number of available CSS3 gradient generators will provide that code, they require you to recreate the specific values used in your comp in order to render your gradient correctly. Project Parfait will generate that CSS directly from your PSD, to save you that step and cut down on the chances of code straying from your design.


Project Parfait - Measurements

Opinion seems a bit divided on this feature in the Project Parfait forums so far, but I personally would call this out as my other favorite feature released so far. Clicking to select any element in your document will display clear and prominent pixel dimensions for the width and height, and the x/y position, of that element. This same information is readily available in Photoshop, but the ease of selecting elements and the prominence with which the information is displayed are extra convenient.

Project Parfait - Measurements

The magic happens (for me at least) once you already have one element selected and then shift-click to select a second element. Regardless of the two elements’ placement, shape, or (lack of) alignment with one another, you get exact pixel dimensions of their offset from one another. You don’t have to add any guides, zoom in to verify that your ruler is set precisely, or worry about any of the little quirks involved with using the Photoshop Ruler Tool. It’s even easy to determine and double-check the exact alignment of text elements within their containers.

Exporting Image Assets

By default, when you load up Project Parfait, you will see the Styles Tab on the right-hand side of the screen, which contains everything discussed thus far. There are also tabs titled “Layers” and “Assets” which can be explored. The Layers Tab functions more or less the way the Layers Panel in Photoshop does. When you first open a file, the visibility of the layers will be in the same state they were when you saved the PSD. Using the familiar eye icon you can toggle the visibility of any layers or layer groups that contain different pages or states of the design.

Project Parfait - Layers

The two main differences in Project Parfait are a Reset Layers button, which will set all the layers back to the state they were in when you opened the file, and a down arrow visible on the right side of a selected layer’s row in the list.

Project Parfait - Extract

This relatively inconspicuous little down arrow will in fact allow you to extract and generate an image element in a variety of formats. In essence, this is a slickly streamlined workflow replacement for selecting an element’s layer, hiding all other layers, trimming transparent pixels, and then starting the “Save For Web and Devices” dialog.

Closing thoughts and Caveats

Despite its still-very-experimental and very-much-in-development status, Project Parfait offers some incredibly useful little tools and timesavers for anyone working from PSD comps. This seems to be an incredibly well-formed example of the capabilities being given to JavaScript as a fully-implemented language for Adobe Extensions and Scripting.

It is well worth giving the official FAQ at least a quick read to see the latest status of the project, and what features are currently in development.

360Flex 2014 – Why you should register

Posted on April 21, 2014 at 12:23 pm in Blog, Development, Training

Here are a number of reasons why you should consider registering for 360Flex this year:

  1. There will be a series of mobile development sessions and labs given by our very own Jun Heider and OmPrakash Muppirala to help you build applications for both Android and iOS devices.
  2. Alex Harui will be there to talk about FlexJS which allows you to leverage your existing Flex skills to build JavaScript based applications.
  3. Ted Patrick will be there to help you make the move from Flex to Web Standards if that is how you choose to go.
  4. As always, we have multiple sessions from Michael Labriola. He’s always got something slick up his sleeve to discuss.
  5. There’s a number of other great sessions and speakers as well.

So what are you waiting for? Register and experience the awesome this May!

Using the AngularJS $logProvider

Posted on April 16, 2014 at 12:11 am in Blog, Development

Logging and debugging are daily occurrences when developing JavaScript-based applications. AngularJS provides a simple, effective way to enable logging in your applications: the $logProvider. $logProvider is used to configure logging, and the $log object is used to log from within your application’s objects.


Enabling logging is as simple as injecting the $log object into your object and then calling $log.debug() to send logs to the console.
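A minimal sketch of that injection, with made-up module and controller names (assumes angular.js is loaded on the page):

```javascript
// $log mirrors the console API: log, info, warn, error, debug.
angular.module('demoApp', [])
  .controller('DemoController', ['$log', function ($log) {
    $log.debug('DemoController initialized'); // only shown while debug logging is enabled
    $log.error('This is always logged, even when debug logging is off');
  }]);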

Turning off logging

You can disable logging (AngularJS 1.1.2+) by calling
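The call in question is $logProvider.debugEnabled(), which AngularJS 1.1.2+ provides for exactly this; a minimal sketch with an assumed module name:

```javascript
// Turn $log.debug() calls into no-ops application-wide.
angular.module('demoApp', [])
  .config(['$logProvider', function ($logProvider) {
    $logProvider.debugEnabled(false);
  }]);
```

Error, warning, and info logging are unaffected; only debug output is suppressed.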


AngularJS provides a mechanism to extend the built-in $logProvider: $provide.decorator(). An example will get the point across much more quickly than me trying to explain it all.

Basically, we intercept the call using the decorator so we can add the features and functionality we need to the $log.debug() call.
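Here is a hedged sketch of such a decorator that prefixes a timestamp onto every $log.debug() call (module name assumed):

```javascript
angular.module('demoApp', [])
  .config(['$provide', function ($provide) {
    $provide.decorator('$log', ['$delegate', function ($delegate) {
      var debugFn = $delegate.debug; // keep a reference to the original
      $delegate.debug = function () {
        var args =;
        args.unshift(new Date().toISOString()); // prepend a timestamp
        debugFn.apply($delegate, args);
      };
      return $delegate; // hand the augmented $log back to the injector
    }]);
  }]);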

AngularJS version 1.2.16 was used as a basis for this article.
Thanks to Burleson Thomas and this post for the extension example.

Women Who Code JavaScript: A Successful Study Group

Posted on March 26, 2014 at 12:11 pm in Blog, Development

This week RealEyes hosted the first Women Who Code Denver JavaScript Study Group with 11 women in attendance. Many different experience levels were represented making for a great opportunity to share and learn together.

Women Who Code Study Groups provide the opportunity for an evening of dedicated work time on a specific programming topic, in this case JavaScript. Participants bring their own computers and either come with something to work on or work with the group leader to get started. This particular study group also includes a short presentation on a related topic at the beginning of each meeting.

RealEyes will continue to host the WWC JavaScript Study Group. The next meeting will be in April. Check out the meetup group for more information.

Mysteries of AMS Stream Recording

Posted on March 19, 2014 at 12:11 pm in Blog, Development

When delving into the deeper realms of Adobe Media Server, sometimes you find some interesting gems. Sometimes you unleash code balrogs. In a recent project where we were setting up recording of a continuously running live video stream, we stumbled upon one of the latter: a bug so bedeviling that I am compelled to write about it in the hopes that I might save fellow coders from falling prey to such a fiendish bug.

Here’s the situation: we had a client who wanted to consume many different live video feeds that were running 24/7. They wanted to record them for later review, and also provide limited DVR functionality on the live stream. In the AMS server-side application we wrote, we would consume a stream from a remote server and spawn a Stream object to record it locally on the server using the Stream class’s record method. At the same time, AMS was republishing the stream from the Stream object with DVR functionality for a Flash client app. We would record the stream for 30 minutes, then stop recording on that Stream object, clean that object up, update the file name for the next segment, and then start recording again on a different Stream object. After 24 hours, we would end up with a day’s worth of video broken up into 30-minute chunks with minimal gaps in playback. That’s pretty cool. In addition to the recording, we had an FFMPEG job running on the recorded videos to generate thumbnails. Because of the server load, we staggered the recording intervals of the different streams so recordings wouldn’t finish at the same time.
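A simplified Server-Side ActionScript sketch of that rotation logic (the approach described above, which we later had to abandon); stream names and file paths are made up, and error handling is omitted:

```javascript
// SSAS sketch of 30-minute segmented recording via Stream.record().
var SEGMENT_INTERVAL = 30 * 60 * 1000; // 30 minutes
var segmentIndex = 0;
var recStream = null;

function startSegment(sourceStreamName) {
  recStream = Stream.get("mp4:recordings/segment_" + segmentIndex + ".mp4");
  recStream.record();                       // start writing to disk, -1, -1); // pull the live feed into this stream
}

function rotateSegment(sourceStreamName) {
  if (recStream) {
    recStream.record(false); // stop recording the finished segment;    // stop pulling the live feed
    recStream = null;        // clean up the old Stream object
  }
  segmentIndex++;            // update the file name for the next segment
  startSegment(sourceStreamName);
}

setInterval(function () { rotateSegment("livefeed"); }, SEGMENT_INTERVAL);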

Everything seemed to be working OK for the most part. Videos were recorded. Thumbnails were generated. The client app could view video. The only confounding thing was that occasionally, without rhyme or reason, the video would play back in the client app with audio, but no video. The thumbnails generated were blank. Even more confounding was that if we played back a problem video as live, we could get video. Only with DVR playback we would not be able to see the video. What was going on?

We found there were some NetStream.Record.NoAccess status messages in the application logs. That can happen if the client doesn’t have file permissions, or if the stream is still playing when you try to record it. Unfortunately, that did not always correspond to a file that had audio but no video. We tried the recommended methods for stopping a recording. We called streamObj.record( false ); and false );. We also nulled out the Stream object. No luck. The problem would still happen randomly and seemingly without cause.

We ran one of the problem video files through the FLVCheck tool and received this message: error: -18 Truncated box. Some research showed that this is from the recording not stopping properly. The question was, why wasn’t our stopping of the recording working?

In short, we never found out. However, after much debugging and researching, we did find another way. We simplified things down by splitting our app in two. One app did the recording and another did the DVR work. The live stream would be published into our recording app, which would republish it to the DVR playback app using ns.publish( republishStreamName, "record" );. This way we would end up with a recording and the DVR app could make the stream available for playback on the client app. To stop, we would just do ns.publish( false ); and null out the reference to the NetStream object. That solved the issue and we no longer encountered truncated boxes or videos that played back without video.

So, the short of it is, using the NetStream’s publish method with its “record” parameter solved the issue, rather than using the Stream class’ record method. Arm yourself with this knowledge as you tread the lower reaches of AMS and hopefully you can tell this particular code balrog, “You shall not pass!”

WebVTT Captions and Subtitles

Posted on March 12, 2014 at 11:31 am in Blog, Development

Using WebVTT for Captions and Subtitles

WebVTT can be used to support accessibility by providing captions or can support localization by adding subtitles to a movie. This article will explain how to set up basic closed captions and subtitles.

To start, we need to create a video tag with a video source. I’ve set up an example, and all the code for this article is also available. I’m using Sintel as my video source. For now, this is a basic video player that uses the built-in controls for playback.

Set up a captions track

First, let’s set up some English Closed Captions. In order to have your Closed Captions display correctly as the video plays back, you want to create what is called a WebVTT file. WebVTT stands for Web Video Text Tracks. The Sintel video short already has caption files we can use. I’ve saved the file in my src/captions folder and given it an extension of ‘.vtt’. Additionally, I had to make some formatting changes to conform to WebVTT standards.

The ‘.vtt’ file looks like this (check out this introduction to WebVTT for more info):


WEBVTT

00:01:47.250 --> 00:01:50.500
This blade has a dark past.

00:01:51.800 --> 00:01:55.800
It has shed much innocent blood.

00:01:58.000 --> 00:02:01.450
You're a fool for traveling alone,
so completely unprepared.

Now that we have a WebVTT file, we want to make it display with the video. Here is our video tag currently:

<video id="videoMain"
    type="video/mp4" width="640" height="360"
    autoplay controls>
</video>

To get the closed captions working, we need to add what is called a track tag. It looks like this:

<track id="enCaptions" kind="captions"
    label="English Captions"
    src="captions/sintel-en-us.vtt" />

Great, we made the video accessible! Your video tag should now look like:

<video id="videoMain"
    type="video/mp4" width="640" height="360"
    autoplay controls>
      <track id="enCaptions" kind="captions"
      label="English Captions"
      src="captions/sintel-en-us.vtt" />
</video>

Subtitle Tracks

Now let’s add some French and Italian subtitles. This time we will pull the captions from the Sintel site and save them as a ‘.vtt’ file just as we did with the caption file. However, the track tag for subtitles behaves somewhat differently.

<track id="frSubtitles" kind="subtitles"
    label="French" src="captions/sintel-fr.vtt"
    srclang="fr" />

Note that we have changed the kind of this track to subtitles. This attribute is the most important part of your track tag for determining how the WebVTT file will be used. Possible kinds include:

  • captions
  • subtitles
  • descriptions
  • chapters
  • metadata – enables adding thumbnails or other script based logic

We have also added a property called srclang to this track tag. It is only required for subtitles, but can be added to other track kinds as well.

Now the video tag should look like this:

<video id="videoMain"
    type="video/mp4" width="640" height="360"
    autoplay controls>
      <track id="enCaptions" kind="captions"
      label="English Captions"
      src="captions/sintel-en-us.vtt" />
      <track id="frSubtitles" kind="subtitles"
      label="French"
      src="captions/sintel-fr.vtt" srclang="fr" />
      <track id="itSubtitles" kind="subtitles"
      label="Italian"
      src="captions/sintel-it.vtt" srclang="it" />
</video>

There are now three track tags that will let the browser know what options to display under the Closed Caption/Subtitles option. In Safari this looks like:

Screenshot 2014-03-10 00.06.29

In Safari 6.1 or later and Internet Explorer 11, the browser’s default control bar will display all the track options we have added to the tag. Unfortunately, not all browsers have fully implemented this functionality. That’s where a custom JavaScript solution can come in handy.

Adding Manual Control

In order to improve cross-browser compatibility, we need to manage the track options via JavaScript. Below the video is a list of language options: English, French, and Italian. I’ve added a basic click handler for each element that allows us to change the current text track.

document.getElementById('en').onclick = function() {
	updateTextTracks( 'enCaptions' );

document.getElementById('fr').onclick = function() {
	updateTextTracks( 'frSubtitles' );

document.getElementById('it').onclick = function() {
	updateTextTracks( 'itSubtitles' );

Each click handler calls the updateTextTracks function, which gets the textTracks property of the video element and then swaps the mode values so that the selected language is ‘showing’ and the other tracks are ‘disabled’.

var updateTextTracks = function( id ) {
	var textTracks = document.getElementById( 'videoMain' ).textTracks;
	for (var i = textTracks.length - 1; i >= 0; i--) {
		if( textTracks[i].id === id ) {
			textTracks[i].mode = 'showing';
		} else {
			textTracks[i].mode = 'disabled';
The caveat here is that only the latest browsers will fully support the textTracks property of the video element. Check out Captionator for backwards compatibility.

A Quick Guide to Chrome’s New Built-in Device Emulation

Posted on February 19, 2014 at 12:00 pm in Blog, Development

Mobile device testing is necessary, but presents many challenges

When you consider that over the past 5 years mobile devices have risen from 0.9% to over 15% of all internet traffic (and that number will continue to climb (source - slide #32)), it’s become increasingly important to make sure that there is at least a basic level of support for mobile users in anything you build. While there are new tools and technologies popping up daily to help make that easier, there are still an incredible number of challenges involved. One of the most common is finding devices to test your creations with. Even for those of us who tend to collect far more gadgets than the average bear, there are almost certainly a great number of devices that will not be available to us. The recent phenomenon of Open Device Labs can definitely offer some help with that (for instance, the Rocky Mountain Open Device Lab here in Denver), but there isn’t always going to be one that is convenient, or even available.

When devices simply aren’t available, it’s still important to try to test the best that you can with some fallback options. There are many available options for emulating various mobile devices on a desktop, ranging from some small browser extensions that simply resize your window to match the pixel dimensions of a few devices, to subscription services that offer many more features, to full-blown device simulators available in Xcode or the Android SDK. Any of these options are far better than nothing, but there always seems to be a compromise. Most free and lightweight options tend to be lacking in features, the subscription services can be quite pricey, and the 9.6GB footprint of Xcode (on my machine at least) can seem a bit ridiculous, especially if you don’t actually tend to build native iOS or Mac apps.

Chrome’s Dev Tools now offer a solution

Device Emulator in Action

Luckily, as of version 32, Google Chrome has added a rather impressive, and built-in, set of capabilities for Mobile Device Emulation to DevTools. By and large, this new offering addresses all of the compromises I listed above. There is a rather comprehensive level of functionality as compared to any device emulator, the tools are free and built right into Chrome, and while Chrome can eat up a lot of resources (especially if you tend to open and use as many tabs as I do), it is still much, much lighter than Xcode and is probably one of your primary browsers anyway.

Enabling device emulation

So, with that bit of Chrome DevTools fanboyism finished, here’s a quick introduction on how to enable and use the new mobile device emulation features.  They are a bit hidden, so here’s how to turn them on and get to them:

  • Open DevTools (Menu>View>Developer>Developer Tools – OR – CMD(CTRL)+ALT+I)
  • Open DevTools Settings (Click on the Gear icon near the right-hand side of the DevTools menu bar)
  • Click on the “Overrides” tab
    • If you’re using Chrome Canary, stay on the “General” tab and look under the heading “Appearance”
  • Tick the checkbox for “Show ‘Emulation’ view in console drawer”
  • Close the settings



That will enable the device emulation features, or at least the menu for them. To get to them, all you have to do is open the console drawer (hit ESC in any DevTools tab other than the Console tab), and you’ll see a new tab available titled “Emulation”.

Emulation Tab Added

Emulating a device

When you first open that tab, “Device” should be selected by default in the list on the left side, and will allow you to select from a fairly impressive list of devices; I’ll be using the Google Nexus 4 in this example. Selecting a device will display a few key data points specific to that device, which Chrome will then emulate.

  • Viewport
    • Pixel Dimensions (768×1280)
    • Pixel Ratio (2)
    • Font Scale Factor (1.083)
  • User Agent
    • Mozilla/5.0 (Linux; Android 4.2.1; en-us; Nexus 4 Build/JOP40D) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.166 Mobile Safari/535.19

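A quick way to confirm that emulation has taken effect is to log those same data points from the page itself. This is just an illustrative sketch; `describeViewport` is a hypothetical helper name of my own, not part of DevTools:

```javascript
// Summarize viewport data points like the ones DevTools lists above.
// (Hypothetical helper for illustration; not part of Chrome.)
function describeViewport(vp) {
  return vp.width + 'x' + vp.height + ' @' + vp.dpr + 'x';
}

// In the browser, read the live values the emulator is reporting:
if (typeof window !== 'undefined') {
  console.log(describeViewport({
    width: window.screen.width,
    height: window.screen.height,
    dpr: window.devicePixelRatio
  }));
  console.log(navigator.userAgent); // should show the emulated UA string
}
```

With the Nexus 4 preset active, you would expect something like `768x1280 @2x` and the Android user agent string shown above.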

At that point, all you have to do is click the “Emulate” button, and Chrome will resize its view and reload the current page as though it were the device you selected.

By default, emulating a device will turn on a standard set of options, though you can easily add or remove the available capabilities, as well as tweak and fine-tune the settings for almost all of them.

Manipulating your emulated device

Drilling down through the remainder of the list on the left side of the “Emulation” tab will allow you to see and customize all the details that Chrome uses to emulate your selected device. The “Screen” section seems the most immediately useful, but the “Sensors” section seems the coolest.

There is one other very important use for all these customization options worth calling out. Since you can fine-tune so many different device properties, it is entirely possible to emulate almost any device you can find the specs for. Granted, DevTools provides presets for just about all of the popular devices out there, but it’s good to know that you’re not limited to their list.

Working with screen properties

The “Screen” section allows several options for fine-tuning the way Chrome emulates the selected device’s display. By default, the Resolution values will be set to match the chosen device’s real-world pixel resolution. In general, when emulating tablets, Chrome will set the initial Resolution values as though the tablet is in Landscape (horizontal) orientation. When emulating phones, they will initially be shown in Portrait (vertical) orientation. You can easily swap the two values by clicking the button between the two resolution text fields.


One thing to be aware of is that swapping these values will make the device viewport appear as though it has rotated, though in terms of device behavior it has really just resized. What this means for your debugging is that any styling that uses breakpoints based on width should behave just fine. If, on the other hand, you happen to have JavaScript listening for an orientationchange event in iOS, it won’t fire, because no accelerometer activity is emulated when you swap those values. This is a prime example of why, as impressive as these tools are, it’s still important to test on actual devices whenever possible.
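You can see the difference in practice by listening both ways: width-based detection keeps working when you swap the resolution values, while the orientationchange listener stays silent. A minimal sketch (the `orientationFromSize` helper name is mine):

```javascript
// Derive orientation purely from the viewport dimensions.
function orientationFromSize(width, height) {
  return height >= width ? 'portrait' : 'landscape';
}

if (typeof window !== 'undefined') {
  // Fires when the emulated viewport resizes (e.g. when you swap the values):
  window.addEventListener('resize', function () {
    console.log('resize →', orientationFromSize(window.innerWidth, window.innerHeight));
  });
  // Will NOT fire in the emulator when swapping values,
  // since no accelerometer activity is being faked:
  window.addEventListener('orientationchange', function () {
    console.log('orientationchange fired');
  });
}
```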

It is also important to note that enabling the “Shrink to fit” option in this panel can override the resolution values you’ve set. The aspect ratio will be maintained, but if your browser window is smaller than the defined dimensions, the emulated display will be resized to fit within it. While this is definitely useful in some instances, you’ll want to remember to disable the option before you measure anything.

Changing the user agent

Next in our list is the “User Agent” section, which is fairly straightforward. It allows you to toggle away from Chrome’s default User Agent string, which provides (relatively) accurate information about your browser and hardware setup to the sites you visit, with the thought that they may serve up different content and experiences depending on your configuration. With that in mind, it makes sense that when attempting to emulate the Nexus 4 from our examples earlier, you probably don’t want to provide a User Agent string that identifies your setup as a Mac desktop running the latest version of OS X. Conveniently, if you’re using one of the default device presets from the list in the “Device” section, Chrome will have already selected and enabled the corresponding User Agent from its list. If you would like to edit the string for some reason, simply make your change in the textbox and hit Enter. If you are emulating a custom device other than the provided presets, you can replace the entire User Agent string; online user agent string databases are usually a good resource for finding strings from any number of browser versions.
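If your site serves different experiences based on the UA, a crude check like the one below is what the emulated string would trigger. The `isMobileUA` helper is a deliberately simplistic sketch for illustration, not a recommended sniffing strategy:

```javascript
// Naive mobile check against a user agent string (illustrative only;
// real-world UA sniffing needs far more care, or is best avoided).
function isMobileUA(ua) {
  return /\bMobile\b/.test(ua) || /\bAndroid\b/.test(ua);
}

// The Nexus 4 preset string from the Emulation panel:
var nexus4 = 'Mozilla/5.0 (Linux; Android 4.2.1; en-us; Nexus 4 Build/JOP40D) ' +
  'AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.166 Mobile Safari/535.19';

console.log(isMobileUA(nexus4)); // → true
```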


Emulating sensor data

Last in the list is the “Sensors” section, which offers some settings that are possibly a bit less commonly needed in day-to-day web development, but are extremely cool. Only the first option, “Emulate touch screen”, is enabled by default. When it is active, your cursor will render as a semi-transparent circle that is just large enough to help you keep touch targets in mind. Paul Irish has a nice demo available on his site for experimenting with touch events.


At this point in time, the capabilities for emulating multi-touch interactions are limited. Currently only a simple implementation of the pinch-to-zoom action is available, though it seems likely that functionality for other common multi-touch gestures will be added in future updates. To use this action, hold down the SHIFT key, then either click and drag, or scroll.
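Under the hood, a pinch is just two touch points moving together or apart, and the scale factor is the ratio of the distances between them. A sketch of how a page might compute that from touch events (the helper names are mine):

```javascript
// Distance between two touch points.
function touchDistance(a, b) {
  return Math.hypot(b.clientX - a.clientX, b.clientY - a.clientY);
}

// Scale factor of a pinch: ratio of current to starting distance.
function pinchScale(startDist, currentDist) {
  return currentDist / startDist;
}

if (typeof window !== 'undefined') {
  var startDist = null;
  window.addEventListener('touchstart', function (e) {
    if (e.touches.length === 2) {
      startDist = touchDistance(e.touches[0], e.touches[1]);
    }
  });
  window.addEventListener('touchmove', function (e) {
    if (startDist && e.touches.length === 2) {
      console.log('scale:', pinchScale(startDist, touchDistance(e.touches[0], e.touches[1])));
    }
  });
}
```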

As with most of the options available in the emulation panel, it is possible to turn on or off the touch screen option independent of any other settings.

On the back end of things, touch events will be dispatched alongside some of your mouse events. Using this option will not disable mouse events; it simply adds touch events. As an example, while “Emulate touch screen” is active, clicking with your mouse causes the page to receive a touchstart event in addition to the mousedown event. To see this illustrated, you can visit this Event Listener Test and turn touch screen emulation on and off.
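You can also verify this behavior on your own page with a couple of listeners; with emulation on, a single click should log both event types. The small log collector below is just a sketch for illustration:

```javascript
// Collect event type names in the order they arrive.
function makeEventLog() {
  var entries = [];
  return {
    handle: function (e) { entries.push(e.type); },
    entries: entries
  };
}

if (typeof window !== 'undefined') {
  var log = makeEventLog();
  ['touchstart', 'mousedown'].forEach(function (type) {
    document.addEventListener(type, function (e) {
      log.handle(e);
      console.log(log.entries); // e.g. ['touchstart', 'mousedown'] after one emulated click
    });
  });
}
```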

The Sensors section also has Geolocation and Accelerometer properties. I think these properties are best explained by pointing you to some of the cool little demos that have been created; I encourage you to experiment with them.
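If you want to poke at these sensors from your own page rather than a demo, the standard APIs they feed are `navigator.geolocation` and the deviceorientation event. A minimal sketch (the `formatCoords` helper is mine):

```javascript
// Format a latitude/longitude pair for logging.
function formatCoords(lat, lon) {
  return lat.toFixed(4) + ', ' + lon.toFixed(4);
}

if (typeof window !== 'undefined') {
  // Reports whatever coordinates you enter in the Geolocation fields:
  navigator.geolocation.getCurrentPosition(function (pos) {
    console.log('position:', formatCoords(pos.coords.latitude, pos.coords.longitude));
  });
  // Fires with the rotation values you set in the Accelerometer fields:
  window.addEventListener('deviceorientation', function (e) {
    console.log('orientation:', e.alpha, e.beta, e.gamma);
  });
}
```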

Wrapping your head around the Accelerometer values can be a rather daunting task, especially when looking at text-only values, which is what’s currently available in the mainstream version of Chrome (32.0.1700.107). If you are interested in working more with the accelerometer, I would highly recommend downloading Chrome Canary; that version of the Device Emulation panel includes a small 3D representation of your device, which rotates to illustrate the accelerometer values. The good news is that since this is currently available in Canary, it will probably show up in regular Chrome relatively soon.


Getting your normal browser back

Once you’ve finished testing (or playing) with a device and are ready to exit device emulation and get Chrome back to normal, just go back to the “Device” section and click the “Reset” button. Everything will return immediately to the normal desktop browser state, with a whole set of mobile devices quickly and easily available to emulate whenever you need them again.

Keep calm and continue testing

As I’ve already mentioned, these tools should not replace actual device testing by any means, but they should augment the process nicely, and provide a very convenient method for doing quick checks during development.