Development

WebVTT Captions and Subtitles

Posted on March 12, 2014 at 11:31 am in Blog, Development

Using WebVTT for Captions and Subtitles

WebVTT can support accessibility by providing captions, or localization by adding subtitles to a movie. This article will explain how to set up basic closed captions and subtitles.

To start, we need to create a video tag with a video source. I’ve set up an example at http://elyseko.github.io/web-vtt-example/. All the code for this article is also at https://github.com/elyseko/web-vtt-example. I’m using Sintel as my video source. For now, this is a basic video player that uses the built-in controls for playback.

Set up a captions track

First, let’s set up some English Closed Captions. In order to have your Closed Captions display correctly as the video plays back, you want to create what is called a WebVTT file. WebVTT stands for Web Video Text Tracks. The Sintel video short already has caption files we can use. I’ve saved the file in my src/captions folder and given it an extension of ‘.vtt’. Additionally, I had to make some formatting changes to conform to WebVTT standards.

The ‘.vtt’ file looks like this (check out this introduction to WebVTT for more info):

WEBVTT

00:01:47.250 --> 00:01:50.500
This blade has a dark past.

00:01:51.800 --> 00:01:55.800
It has shed much innocent blood.

00:01:58.000 --> 00:02:01.450
You're a fool for traveling alone,
so completely unprepared.

Now that we have a WebVTT file, we want to make it display with the video. Here is our video tag currently:

<video id="videoMain"
    src="http://mirrorblender.top-ix.org/movies/sintel-1024-surround.mp4"
    type="video/mp4" width="640" height="360"
    autoplay controls>
</video>

To get the closed captions working, we need to add what is called a track tag. It looks like this:

<track id="enCaptions" kind="captions"
    label="English Captions"
    src="captions/sintel-en-us.vtt" />

Great, we made the video accessible! Your video tag should now look like:

<video id="videoMain"
    src="http://mirrorblender.top-ix.org/movies/sintel-1024-surround.mp4"
    type="video/mp4" width="640" height="360"
    autoplay controls>
      <track id="enCaptions" kind="captions"
      label="English Captions"
      src="captions/sintel-en-us.vtt" />
</video>

Subtitle Tracks

Now let’s add some French and Italian subtitles. This time we will pull the subtitle files from the Sintel site and save them as ‘.vtt’ files, just as we did with the caption file. However, the track tag for subtitles behaves somewhat differently.

<track id="frSubtitles" kind="subtitles"
    label="French" src="captions/sintel-fr.vtt"
    srclang="fr" />

Note that we have changed the kind of this track to subtitles. The kind attribute is the most important part of your track tag for determining how the WebVTT file will be used. Possible kinds include:

  • captions
  • subtitles
  • descriptions
  • chapters
  • metadata – enables adding thumbnails or other script-based logic (see the sketch below)

We have also added an attribute called srclang to this track tag. It is required only for subtitle tracks but can be added to the other track kinds as well.
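For a kind like metadata, the cues never render on screen; instead you read them from script. Here’s a minimal sketch of that idea, assuming a hypothetical <track id="thumbs" kind="metadata" src="captions/thumbs.vtt" /> inside the video tag; the showThumbnail() helper is also hypothetical, but the mode property, cuechange event, and activeCues list are all part of the standard text track API:

var metaTrack = document.getElementById( 'thumbs' ).track;
metaTrack.mode = 'hidden'; // load the cues without rendering them

metaTrack.addEventListener( 'cuechange', function() {
	var cue = this.activeCues[0];
	if ( cue ) {
		// the cue text carries whatever payload we authored, e.g. a thumbnail URL
		showThumbnail( cue.text ); // hypothetical helper
	}
});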

Now the video tag should look like this:

<video id="videoMain"
    src="http://mirrorblender.top-ix.org/movies/sintel-1024-surround.mp4"
    type="video/mp4" width="640" height="360"
    autoplay controls>
      <track id="enCaptions" kind="captions"
      label="English Captions"
      src="captions/sintel-en-us.vtt" />
      <track id="frSubtitles" kind="subtitles"
      label="French"
      src="captions/sintel-fr.vtt" srclang="fr" />
      <track id="itSubtitles" kind="subtitles"
      label="Italian"
      src="captions/sintel-it.vtt" srclang="it" />
</video>

There are now three track tags that let the browser know which options to display in its closed captions/subtitles menu.


In Safari 6.1 or later and Internet Explorer 11, the browser’s default control bar will display all the track options we have added to the tag. Unfortunately, not all browsers have fully implemented this functionality. That’s where a custom JavaScript solution can come in handy.

Adding Manual Control

To improve cross-browser compatibility, we need to manage the track options via JavaScript. Below the video is a list of language options – English, French, and Italian. I’ve added a basic click handler for each element that allows us to change the current text track.

document.getElementById('en').onclick = function() {
	updateTextTracks( 'enCaptions' );
};

document.getElementById('fr').onclick = function() {
	updateTextTracks( 'frSubtitles' );
};

document.getElementById('it').onclick = function() {
	updateTextTracks( 'itSubtitles' );
};

Each click handler calls the updateTextTracks function which gets the textTracks property of the video element and then swaps the mode values so that the selected language is ‘showing’ and the other tracks are ‘disabled’.

var updateTextTracks = function( id ) {
	var textTracks = document.getElementById( 'videoMain' ).textTracks;
	for (var i = textTracks.length - 1; i >= 0; i--) {
		if( textTracks[i].id === id ) {
			textTracks[i].mode = 'showing';
		} else {
			textTracks[i].mode = 'disabled';
		}
	}
};

The caveat here is that only the latest browsers will fully support the textTracks property of the video element. Check out Captionator for backwards compatibility.
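If you want to decide at runtime whether the native API is even available, a simple feature check (a minimal sketch) does the trick:

var video = document.getElementById( 'videoMain' );

if ( video.textTracks ) {
	// native text track support: wire up updateTextTracks as above
} else {
	// no native support: fall back to a polyfill such as Captionator
}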

A Quick Guide to Chrome’s New Built-in Device Emulation

Posted on February 19, 2014 at 12:00 pm in Blog, Development

Mobile device testing is necessary, but presents many challenges

When you consider that over the past 5 years mobile devices have risen from 0.9% to over 15% of all internet traffic (and that number will continue to climb; see slide #32 of the source), it’s become increasingly important to make sure that there is at least a basic level of support for mobile users in anything you build. While there are new tools and technologies popping up daily to help make that easier, there are still an incredible number of challenges involved. One of the most common is finding devices to test your creations with. Even for those of us who tend to collect far more gadgets than the average bear, there is almost certainly a great number of devices that will not be available to us. The recent phenomenon of Open Device Labs can definitely offer some help with that (for instance, the Rocky Mountain Open Device Lab here in Denver), but there isn’t always going to be one that is convenient, or even available.

When devices simply aren’t available, it’s still important to try to test the best that you can with some fallback options. There are many available options for emulating various mobile devices on a desktop, ranging from some small browser extensions that simply resize your window to match the pixel dimensions of a few devices, to subscription services that offer many more features, to full-blown device simulators available in Xcode or the Android SDK. Any of these options are far better than nothing, but there always seems to be a compromise. Most free and lightweight options tend to be lacking in features, the subscription services can be quite pricey, and the 9.6GB footprint of Xcode (on my machine at least) can seem a bit ridiculous, especially if you don’t actually tend to build native iOS or Mac apps.

Chrome’s Dev Tools now offer a solution


Luckily, as of version 32, Google Chrome has added a rather impressive, and built-in, set of capabilities for Mobile Device Emulation to DevTools. By and large, this new offering addresses all of the compromises I listed above. There is a rather comprehensive level of functionality as compared to any device emulator, the tools are free and built right into Chrome, and while Chrome can eat up a lot of resources (especially if you tend to open and use as many tabs as I do), it is still much, much lighter than Xcode and is probably one of your primary browsers anyway.

Enabling device emulation

So, with that bit of Chrome DevTools fanboyism finished, here’s a quick introduction to how to enable and use the new mobile device emulation features. They are a bit hidden, so here’s how to turn them on and get to them:

  • Open DevTools (Menu>View>Developer>Developer Tools – OR – CMD(CTRL)+ALT+I)
  • Open DevTools Settings (Click on the Gear icon near the right-hand side of the DevTools menu bar)
  • Click on the “Overrides” tab
    • If you’re using Chrome Canary, stay on the “General” tab and look under the heading “Appearance”
  • Tick the checkbox for “Show ‘Emulation’ view in console drawer”
  • Close the settings


That will enable the device emulation features, or at least the menu for them. To get to them, all you have to do is open up the console drawer (hit ESC in any DevTools tab other than the Console tab) and you’ll see a new tab available titled “Emulation”.


Emulating a device

When you first open that tab, “Device” should be selected by default in the list on the left side, and it will allow you to select from a fairly impressive list of devices; I’ll be using the Google Nexus 4 in this example. Selecting a device will display a few key data points specific to that device, which Chrome will then emulate.

  • Viewport
    • Pixel Dimensions (768×1280)
    • Pixel Ratio (2)
    • Font Scale Factor (1.083)
  • User Agent
    • Mozilla/5.0 (Linux; Android 4.2.1; en-us; Nexus 4 Build/JOP40D) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.166 Mobile Safari/535.19


At that point, all you have to do is click the “Emulate” button, and Chrome will resize its view and reload the current page as though it were the device you selected.

By default, emulating a device will turn on a standard set of options, though you can easily add or remove the available capabilities, as well as tweak and fine-tune the settings for almost all of them.

Manipulating your emulated device

Drilling down through the remainder of the list on the left-side of the “Emulation” tab will allow you to see and customize all the available details that Chrome is using to emulate your selected device. The “Screen” section seems the most immediately useful, but the “Sensors” section seems the coolest.

There is one other very important thing to call out as a use for all these customization options. Since we have the ability to fine-tune so many different device properties, it means that it is entirely possible to emulate almost any device you can find the specs for. Granted, DevTools provides presets for just about all of the popular devices out there, but it’s good to know that you’re not limited to their list.

Working with screen properties

The “Screen” section allows several options for fine-tuning the way Chrome emulates the selected device’s display. By default, the Resolution values will be set to match the chosen device’s real-world pixel resolution. In general, when emulating tablets, Chrome will set the initial Resolution values as though the tablet is in Landscape (horizontal) orientation. When emulating phones, they will initially be shown in Portrait (vertical) orientation. You can easily swap the two values by clicking the button between the two resolution text fields.


One thing to be aware of is that swapping these values will make the device viewport appear as though it has rotated, though in terms of device behavior it has really just resized. What this means for your debugging is that any styling that uses breakpoints based on width should behave just fine. If, on the other hand, you happen to have JavaScript listening for an orientationchange event in iOS, it won’t fire, because there isn’t any accelerometer activity being emulated when you swap those values. This is a prime example of why, as impressive as these tools are, it’s still important to test on actual devices whenever possible.
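If you need rotation-aware logic that still responds under emulation, one workaround (a sketch, not specific to DevTools) is to listen for resize as well and derive the orientation from the viewport dimensions:

// fires on real device rotation, but not when the emulator swaps dimensions
window.addEventListener('orientationchange', function() {
	console.log('orientation:', window.orientation);
});

// fires in both cases, so width-based logic keeps working under emulation
window.addEventListener('resize', function() {
	var isPortrait = window.innerHeight > window.innerWidth;
	console.log('portrait?', isPortrait);
});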

It is also important to note that if you enable the “Shrink to fit” option in this panel, it can override the resolution values that are set. The aspect ratio will be maintained, but if your browser window is smaller than the defined dimensions, the emulated display will be resized to fit within it. While this is definitely useful in some instances, you’ll want to remember to disable this option before you measure anything.

Changing the user agent

Next in our list is the “User Agent” section, which is fairly straightforward. It allows you to toggle between Chrome’s default User Agent string, which provides (relatively) accurate information about your browser and hardware to the sites you visit (with the thought that they may serve up different content and experiences depending on your configuration), and a spoofed string of your choosing. With that in mind, it makes sense that when attempting to emulate the Nexus 4 from our examples earlier, you probably don’t want to provide a User Agent string that identifies your setup as a Mac desktop with the latest version of OS X. Conveniently, if you’re using one of the default device presets available from the list in the “Device” section, Chrome will have already selected and enabled the corresponding User Agent from its list. If you would like to edit the string for some reason, simply make your change in the textbox and hit Enter. If you are emulating a custom device other than the provided presets, you can replace the entire User Agent string. I find that UserAgentString.com is usually a good resource for finding strings from any number of browser versions.


Emulating sensor data

Last in the list is the “Sensors” section, which offers up some settings that are possibly a bit less commonly needed in day-to-day web development, but are extremely cool. Only the first option, “Emulate touch screen”, is enabled by default. When it is active, your cursor renders as a semi-transparent circle that is just large enough to help you keep touch targets in mind. Paul Irish has a nice demo available on his site for experimenting with touch events.


At this point in time, there are some limited capabilities for emulating multi-touch interactions. Currently only a simple implementation of the Pinch to Zoom action is available, though it seems likely to me that functionality for other common multi-touch gestures may be added in future updates. To use this action, hold down the SHIFT key, then either click and drag, or scroll.

As with most of the options available in the emulation panel, it is possible to turn on or off the touch screen option independent of any other settings.

On the back end of things, touch events will be added in alongside your mouse events. Using this option will not disable mouse events; it simply adds in touch events. As an example, while “Emulate touch screen” is active, when you click with your mouse, the page will receive a touchstart event in addition to the mousedown event. To see this illustrated, you can visit this Event Listener Test and turn touch screen emulation on and off.
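You can watch that pairing yourself by pasting a couple of listeners into the console (a quick sketch):

// with "Emulate touch screen" active, a single click logs both events
document.addEventListener('touchstart', function(e) {
	console.log('touchstart at', e.touches[0].clientX, e.touches[0].clientY);
});

document.addEventListener('mousedown', function(e) {
	console.log('mousedown at', e.clientX, e.clientY);
});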

The sensors section also has Geolocation and Accelerometer properties. I think these properties are best explained by pointing you to some of the cool little demos that have been created; I encourage you to experiment with them.
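If you just want to see the overridden values come through, a couple of snippets in the console will echo back whatever the Sensors panel is emulating (standard APIs, nothing DevTools-specific):

// logs the coordinates set in the Geolocation override
navigator.geolocation.getCurrentPosition(function(pos) {
	console.log('lat:', pos.coords.latitude, 'lon:', pos.coords.longitude);
});

// logs the rotation angles set in the Accelerometer override
window.addEventListener('deviceorientation', function(e) {
	console.log('alpha:', e.alpha, 'beta:', e.beta, 'gamma:', e.gamma);
});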

Wrapping your head around the Accelerometer values can be a rather daunting task, especially when looking at text-only values, which is what’s currently available in the mainstream version of Chrome (32.0.1700.107). If you are interested in working more with the accelerometer, I would highly recommend downloading Chrome Canary, as that version of the Device Emulation panel includes a small 3D representation of your device, which rotates to illustrate the accelerometer values. The good news is that since that’s currently available in Canary, it will probably show up in regular Chrome sometime relatively soon.


Getting your normal browser back

Once you’ve finished testing (or playing) with a device, and are ready to exit device emulation and get Chrome back to normal, just go back to the “Device” section and click the “Reset” button. Everything will return immediately to the normal desktop browser state, with a whole set of Mobile Devices that are quickly and easily available to emulate whenever you need them again.

Keep calm and continue testing

As I’ve already mentioned, these tools should not replace actual device testing by any means, but they should augment the process nicely, and provide a very convenient method for doing quick checks during development.

Development & Testing at RealEyes Media

Posted on January 08, 2014 at 11:47 am in Blog, Development

Quality is never an accident; it is always the result of intelligent effort. – John Ruskin

The team at RealEyes creates a lot of code, and we want to make sure the code we create for our clients is the best it can be. This means the project and project code need to be defect-free to ensure our clients are happy and get exactly what they need and want. To ensure this quality, our development (DEV) and quality assurance (QA) processes are always evolving. Part of the DEV and QA evolution includes the addition of automated testing.

What is automated testing?

In software testing, test automation is the use of special software (separate from the software being tested) to control the execution of tests and the comparison of actual outcomes to predicted outcomes. Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or add additional testing that would be difficult to perform manually. [1]

At RealEyes, we want to make sure defects found by QA during exploratory testing don’t return. We also don’t want to spend time testing and retesting the same things if we don’t have to. In order to make sure we don’t release or re-release defects and to cut down on time spent manually verifying old defects, we create automated tests. These tests verify that a particular defect is fixed. Then we automate that test in our build process.
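As an illustration only (this isn’t our actual test suite, and formatPrice and the defect it guards are hypothetical), a defect-regression test written in a JavaScript framework like Jasmine might look like this:

// regression guard for a hypothetical, previously fixed rounding defect
describe('formatPrice', function() {
	it('rounds half-cents up instead of truncating them', function() {
		expect(formatPrice(10.005)).toBe('$10.01');
	});

	it('still formats whole-dollar amounts correctly', function() {
		expect(formatPrice(10)).toBe('$10.00');
	});
});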

Our build process

RealEyes leverages Jenkins, a continuous integration (CI) server, to build applications and code deliverables, and to run our automated tests. If a test fails during the build process, the build is aborted and the team is notified via email. In addition to the alerts, the team also gets reports for every build that detail how the tests went – which ones passed, which ones failed, and so on. This gives our team the chance to review the changes, figure out what re-introduced the defect, and fix it before anything is released to the client or public. This build process provides a nice “checkpoint” for our deliverables.

How does RealEyes work tests into the DEV and QA process?

The concept is pretty simple:

  1. Identify the issue
  2. Write a test for it
  3. Integrate the test into CI

Whenever a defect is discovered and reported, that issue is evaluated for testability.

Can it be reproduced?

Ask if the defect can be reproduced. Open the application and see if you can get the defect to happen. This is the first step in making sure the defect is valid, can be fixed, and tested.

Can it be automated?

Can we write a test to verify the defect is fixed? If the answer is no, then ask why. Is the fix too big? Does it need to be broken up into multiple fixes and tests? Is it something that can’t be tested?

Is the fix contained and small enough to be tested?

Similar to the “can it be automated?” question, this check ensures that we are fixing and testing the right problems. We don’t want to apply a fix that touches code across the entire application and then try to write a single test for it. Writing that kind of test wouldn’t be an easy task, and the result wouldn’t be a valid test. If the answer to this question is no, break the defect into smaller parts and identify what needs to be tested in order to verify the fix. Then we can apply the fix and run the tests to verify it.

Tools & resources

Depending on the project and the client, the RealEyes team works with lots of different technologies and tools. From ActionScript to JavaScript and ColdFusion to PHP, there are testing libraries and frameworks available to set up our testing system. The JavaScript applications we are currently testing are built with AngularJS, RequireJS, and Backbone.js, as well as plain old jQuery & jQuery UI.

The importance of testing can’t be overstated. It ensures that our clients are happy and that we are able to use our time in the best possible way — creating instead of debugging.

Want to hear more?

We’ll be posting more about our development and testing processes.

Make sure you sign up for email alerts to be notified about new posts.


Or, you can follow us on Facebook and Twitter.

Wowza is at it again, folks. But this time they’re teaming up with Microsoft to bring content protection to the next level and reach even more users than they already do. But in this case, they’re using MPEG-DASH (Dynamic Adaptive Streaming over HTTP) for live broadcast streaming delivery with Common Encryption. This is a big plus in the streaming media world and a big win for Wowza and its customers.

Why is this so important? Because it’s shaping the next level of delivery standards for both live and on-demand content.

When I discuss use cases with customers, content protection (also referred to as DRM) comes up about 95% of the time.

Way back when, the Internet was in its infant stages (remember Prodigy?!), and I will admit that I wasn’t very familiar with DRM (or any other three-letter acronyms, aka TLAs, for that matter) until about eight years ago. I knew about the emergence of file-sharing networks such as Kazaa, LimeWire, and eDonkey2000, but that’s about as far as it went. Oh, and who could forget Metallica’s lawsuit against Napster?

So here we are now, and the immediate need for content protection seems to be at an all-time high. Why is that? YouTube doesn’t need to leverage any encryption on their content, right? Their motto is ‘Broadcast Yourself.’ Well, not so fast. Have you ever wondered what kind of rights you’re potentially surrendering if you upload a video that you recorded of your kid smacking a baseball into your nether regions to publicly available sites like YouTube? You did take the video – and the shot to the groin – so it’s safe to say that it’s your intellectual property. However, my father always taught me to think before I speak and to consider the consequences before I take action, and that’s how I approach life. So even with something like uploading a video to YouTube, I might want to take a gander at the digital rights management that applies on YouTube.

In the last few years, I have become a little obsessed with DRM schemes and technologies, mainly because documentation on them has been few and far between. But since online delivery of content is at an all-time high and definitely not going away any time soon, knowing how the content I upload and put online is going to be protected is paramount.

When an entire digital business model revolves around on-demand content being at your fingertips, it isn’t that simple.  The gang at Netflix has managed to garner an exponentially higher customer base due to their reach with an online distribution business model. With that said, the majority of the content that they’re distributing is not proprietary, so they apply an industry-approved DRM technology to the content to ensure protection. And it works.

One thing I’ve learned in my research on DRM is that it truly isn’t just about applying protection. It’s about protecting your proprietary intellectual property.

I had the pleasure of speaking to Danielle Grivalsky at Encoding.com a couple of weeks ago about DRM, and she directed me to their descriptions of the available DRM technologies, and I must say that they nailed it.

Don’t hesitate to contact us to find out more about how leveraging Wowza and MPEG-DASH can work for you!

Working with Speech Recognition ANEs for Android

Posted on November 26, 2013 at 9:57 am in Blog, Development

Adding speech recognition to a mobile application sounds like something that would be too difficult to be worth your while. Well, that may not necessarily be the case. If you’re building an AIR application for Android, there are some very easy ways to add in voice commands or speech recognition. If you want to take your app hands-free, add in some richness, put in voice chat, or make a game with voice controls, speech recognition ANEs (AIR Native Extensions) can help you do that.

What’s an ANE?

An ANE, or AIR Native Extension, is like a plugin for an AIR application that exposes native platform functionality as an API in ActionScript. ANEs can add functionality like native alerts, push notifications, in-app purchases, advertising, and sharing. These extensions are written in native code and then compiled into an ANE file. Unfortunately, this means that ANEs are platform specific. However, the functionality they offer makes it well worth it to track down one that provides the features you need. There are plenty of free ANEs and some commercially available ones as well.

Are There Speech Recognition ANEs?

If there weren’t, this would be a very short post. The good news is that there are two free speech recognition ANEs available at the moment. The bad news is that they are for Android only. The first one I’ll mention is Immanuel Noel’s Speech Recognition ANE. It activates Android’s native speech recognition dialog and simply returns its results to AIR. The second one is Michelle Rueda’s Voice Command ANE. Her ANE exposes a bit more functionality by letting you run continuous voice commands and returns the top 5 guesses Android speech recognition comes up with.

Samples

So let’s take a look at some sample projects that use these two ANEs. You can download a zip of the two projects from http://www.realeyesmedia.com/realeyesjordan/poc/SpeechRecognitionPOCs.zip. You’ll need a copy of Flash Builder 4.6 or higher and an Android mobile device to deploy the projects to. That’s one of the downsides of developing using an ANE. For many of them, you have to develop on a device, because the ANEs require the native OS to interact with. If you haven’t previously set up your Android device to debug from Flash Builder, that’s beyond the scope of this blog post, but you can check out the instructions here: http://help.adobe.com/en_US/flex/mobileapps/WSa8161994b114d624-33657d5912b7ab2d73b-7fdf.html.

Setting Up

Let’s get these projects set up:

  1. Download the SpeechRecognitionPOCs.zip and unzip the file into an empty directory.

  2. Open Flash Builder. In the Project Explorer window, right-click (ctrl-click) and select Import.

  3. In the dialog, open the General set of options and select the Existing Projects Into Workspace option.

  4. Click the Browse button next to Select Root Directory, and in the file dialog, select the directory that you unzipped both projects into.

  5. Check both projects and click OK.

  6. You can use the default options in the rest of the dialogs. If prompted about which Flex SDK to use, use 4.6.0 or higher.

You should now have two projects set up in your workspace. Let’s take a look at where the ANEs get hooked up:

  1. In the Project Explorer for either project, open up the assets/ANEs directory.

  2. Note the ANE file with the .ane extension

  3. Right click (ctrl-click) on the project and select Properties from the menu

  4. Select Flex Build Path in the navigation, then click the Native Extensions tab. Here is where you can add either a single ANE or an entire folder of ANEs.

  5. Click on the arrow next to the ANE to see its details, such as whether the AIR simulator is supported and the minimum AIR runtime version.

  6. Click on the arrow next to Flex Build Packaging from the left navigation, and select Google Android from the submenu.

  7. Select the Native Extensions tab.

  8. Make sure the Package checkbox is checked for the ANE, then click Apply and then click OK. This makes sure that the ANE is included when the project builds.

Next, let’s deploy an app to a device. (Hopefully, you’ve already set your Android device up for debugging, because that can be a tricky task and is beyond the scope of this post.):

  1. Connect your Android device with its USB connector to your computer

  2. Under the Debug menu in Flash Builder, select Debug Configurations

  3. In the left menu, select Mobile Application and then click the New button at the top of the menu.

  4. Select the Project you want to debug from the Browse button next to the Project field

  5. Under launch method, select On Device and Debug via USB.

  6. Click Apply, then Click Debug. If your device is asleep, make sure to wake it up so the debugging can start.

  7. The application should launch on your Android device and you can try out the speech recognition or voice commands ANEs.

The Code

Speech Recognition ANE:

Let’s take a look at the code for these two apps and how they implement the ANEs. The first one we’ll look at is the Speech Recognition ANE used in the SpeechRecognitionPOC project. This project is pretty straightforward. In the handler for applicationComplete, we instantiate a new Speech object, and pass a string to its constructor. That string is the prompt that shows up in the native Android speech recognition dialog when it appears.

To listen for data or errors from the Speech object, we add listeners for the speechToTextEvent.VOICE_RECOGNIZED and speechToTextEvent.VOICE_NOT_RECOGNIZED events. On the speechToTextEvent object, there is a data property that holds the string of text that was Android’s best guess at what was said.

All we have to do to trigger the Android speech recognition dialog is call the listen() method on the speech object. That opens the native dialog, which will fire one of the two events and then close automatically. In the _onSpeechResult() method, we output the data the API returns.

The upside of this ANE is that it is pretty solid and uses the native dialog. The downside is that you only get one result back, and you can only catch one phrase at a time.

Voice Command ANE:

Next let’s look at the Voice Command ANE, which is used in the VoiceCommandsPOC project. It has a bit more depth to it. When the application starts up, we create a new SpeechService object. The SpeechService class exposes a static isSupported property which tells us whether our device supports the ANE. If it does, then we listen for a StatusEvent.STATUS event from the service. Calling the listen() method of the SpeechService object starts the service listening for voice input. When it hears input, it dispatches a status event. The stopListening() method of the object ends the listening for input.

The status event dispatched by the service handles both data and errors. The code property on the StatusEvent object lets us tell whether there was an error in the speech recognition. If there was no error, the level property of the event gives us a comma-separated list of up to 5 of the results returned by the service. Sometimes there are fewer than 5, but usually it has the full amount of guesses.

In the _onSpeechStatus() method that handles the status event, we split the level string into an array and then loop through it to analyze the strings. In my experience with the ANE, the service does a great job of parsing my speech. Usually, what I said is the first or second result in the list. This POC project looks for a “stop recording” string and executes a method stopping the service. The Voice Command ANE doesn’t stop listening automatically like the Speech Recognition ANE does.

And that’s one of the great things about the Voice Command ANE. It opens up the possibility of an app always listening for voice commands. The unfortunate thing about the Voice Command ANE is that it is buggy. In working with it, I found that if you don’t give it speech input right away, or have a gap in your speech input, it will freeze up and not recognize any more speech input. Also, even if I spoke without long pauses, after about 3 to 5 statuses it would freeze up. Bummer.

This leads us to another part of the code. I found that if I set up a refresh timer to restart the service on an interval, I could keep the service running and recognizing speech input. In the _onRefreshTimer() method, I simply call stopListening() and then immediately call listen() to keep the service going. Unfortunately, this results in continuous dinging from the Android API and imperfect listening. However, for my purposes, it’s good enough. Hopefully that is true for you too.

One other thing to note is that if you are using the device’s microphone in your application, that can interfere with the Voice Command ANE’s speech recognition. I did not find a reliable or quick way to clear the application’s hold on the mic to free it up for the SpeechService.

ANEs offer a lot of cool functionality to mobile application developers. In general, they are quite easy to work with. As these ANEs demonstrate, with little effort (and no cash) you can add powerful features to your applications.

Sample Files

LBA.zip


Late-Binding Audio

OSMF 1.6 and higher supports the inclusion of one or more alternative audio tracks with a single HTTP video stream. This practice, referred to as “late-binding audio”, allows content providers to deliver video with any number of alternate language tracks without having to duplicate and repackage the video for each audio track. Users can then switch between the audio tracks either before or during playback. OSMF detects the presence of the alternate audio from an .f4m manifest file, which has been modified to include bootstrapping information and other metadata about the alternate audio tracks.

The following article will guide you through the process of delivering an alternate language audio track to an on-demand video file (VOD) streamed over HTTP using the late-binding audio feature. You should be familiar with the HTTP Dynamic Streaming (HDS) workflow before beginning. Please refer to the following articles on HDS for more information:


Getting Started

To get the most out of this article, you will need the following:


Overview

After completing this article you should have a good understanding of what it takes to stream on-demand video with alternate audio tracks over HTTP. At a high level, this process includes:

  • Packaging your media files into segments and fragments (.f4f) and index files (.f4x)
  • Creating a master (set-level) manifest file (.f4m)
  • Editing the media tags for the alternate audio tracks within the master .f4m to prepare them for late-binding audio
  • Uploading the packaged files to Adobe Media Server
  • Including a cross-domain.xml file on the server if the media player is hosted on a separate domain from Adobe Media Server
  • Playing back the video using the master .f4m as the video source, and switching between audio tracks using OSMF


Packaging the media files

When streaming media using HDS, files first need to be “packaged” into segments and fragments (.f4f), index files (.f4x), and a manifest file (.f4m). Adobe Media Server 5.0 or later can automatically package your media files for both normal on-demand and live streaming with the included Live Packager application (live), and JIT HTTP Apache module (vod). However, in order to achieve late-binding audio, the manifest for the main video file needs to be modified so that it includes information about the alternate audio tracks.

To package media that supports late-binding audio, you use the f4fpackager, a command line tool built into Adobe Media Server. The f4fpackager accepts .f4v, .flv, or other mp4-compatible files, and is located in the rootinstall/tools/f4fpackager folder within Adobe Media Server.

Next, you will use the f4fpackager to create packaged media files. You can use your own video and audio assets for this step, or you can use the “Obama.f4v” (video), and “Spanish_ALT_Audio.mp4”(alternate audio) included in the exercise files.


Running the f4fpackager

The process for packaging media files on Windows and Linux is similar:

  1. From the command line, target the f4fpackager executable in [Adobe Media Server Install Dir]/tools/f4fpackager (Windows), or set the LD_LIBRARY_PATH to the directory containing the File Packager libraries (Linux).

  2.  Enter the name of the tool, along with any arguments. For this example, you’ll only need to provide the following arguments for each input file:

  • The name of the input file
  • The file’s overall bitrate (Alternatively, you could add this information manually later)
  • The location where you’d like the packaged files to be output (If you omit this argument, the File Packager simply places the packaged files in the same directory as the source files)

  3.  Run the packager on the main video .f4v file. At the command prompt, enter arguments similar to: 

 

C:\Program Files\Adobe\Adobe Media Server 5\tools\f4fpackager\f4fpackager --input-file="C:\Obama.f4v" --bitrate="546" --output-path="E:\packaged_files"

 

   4.  Next, run the packager again, this time to package the alternate audio track:


C:\Program Files\Adobe\Adobe Media Server 5\tools\f4fpackager\f4fpackager --input-file="C:\Spanish_ALT_Audio.mp4" --bitrate="209" --output-path="E:\packaged_files"


   5.  Press Enter to run the f4fpackager.


*Note: Individual media files are packaged separately, meaning you run the packager once for the main video file, “Obama.f4v”, and then again for the alternate audio file, “Spanish_ALT_Audio.mp4”.



Figure 1.0: Packaging media with the f4fpackager tool


You should now see the packaged output files generated by the File Packager in the directory you supplied to the arguments in step #2. Packaging the source media included in the exercise files should output:

  • Obama.f4m
  • ObamaSeg1.f4f
  • ObamaSeg1.f4x
  • Spanish_ALT_Audio.f4m
  • Spanish_ALT_AudioSeg1.f4f
  • Spanish_ALT_AudioSeg1.f4x


Creating the “master” manifest file

Next, you will modify “Obama.f4m” to create a master (set-level) manifest file that will reference the alternate audio track.

  1. Using a text editor, open the file “Spanish_ALT_Audio.f4m”



Note: If you skipped the previous section on packaging media, you can use the manifests included with the exercise files (LBA/_START_/PackagedMediaFiles_START).



   2.  Copy everything within the bootstrapInfo and media tags in “Spanish_ALT_Audio.f4m”.

<bootstrapInfo
    profile="named"
    id="bootstrap4940">

    AAAB+2Fic3QAAAAAAAAAGQA...

</bootstrapInfo>

<media
    streamId="Spanish_ALT_Audio"
    url="Spanish_ALT_Audio"
    bitrate="209"
    bootstrapInfoId="bootstrap4940">

<metadata>

    AgAKb25NZXRhRGF0YQgAAAAA....

</metadata>

</media>

Figure 1.1: Copy the bootstrapInfo and media tags from the alternate audio manifest file



3.  Paste the bootstrapInfo and media tags into “Obama.f4m” to reference the Spanish language track.


<?xml version="1.0" encoding="UTF-8"?>

<manifest xmlns="http://ns.adobe.com/f4m/1.0">

    <id>Obama</id>
    <streamType>recorded</streamType>
    <duration>133</duration>

<bootstrapInfo
    profile="named"
    id="bootstrap4744">

    AAAAq2Fic3QAAAAAAAAABAAAAAPoAAA....

</bootstrapInfo>

<media

    streamId="Obama"
    url="Obama"
    bitrate="546"
    bootstrapInfoId="bootstrap4744">

<metadata>
    AgAKb25NZXRhRGF0YQgAAAAAAAhk....
</metadata>

</media>

<bootstrapInfo
    profile="named"
    id="bootstrap4940">

AAAB+2Fic3QAAAAAAAAAGQAAAAPo....

</bootstrapInfo>

<media
    streamId="Spanish_ALT_Audio"
    url="Spanish_ALT_Audio"
    bitrate="209"
    bootstrapInfoId="bootstrap4940">

<metadata>

AgAKb25NZXRhRGF0YQgAAAA....

</metadata>

</media>

</manifest>

Figure 1.2: Paste the bootstrapInfo and media tags from the alternate audio .f4m into the main video’s manifest file to create the master .f4m



4.  Add the following attributes to the media tag for the Spanish language track within “Obama.f4m”:

  • alternate="true"
  • type="audio"
  • lang="Spanish"

<media
   streamId="Spanish_ALT_Audio"
   url="Spanish_ALT_Audio"
   bitrate="209"
   bootstrapInfoId="bootstrap4940"
   alternate="true"
   type="audio"
   lang="Spanish">

Figure 1.3: Edit the alternate audio’s media tag to prepare it for late-binding audio



In the above step, alternate="true" and type="audio" allow OSMF to parse “Obama.f4m” and see that there is alternate audio available. Logic within the included example player, which you’ll be using to play the video in a later step, uses lang="Spanish" to populate a dropdown menu with the available alternate audio stream.

5.  Save the file “Obama.f4m”. This is now the master manifest file, and it is what you will reference to play the video and alternate audio content with OSMF.


Upload the packaged files to the server

6.  Next, you will need to upload all of the packaged files to a folder within the webroot/vod directory of Adobe Media Server. On Windows this default location is C:\Program Files\Adobe\Adobe Media Server 5\webroot\vod. Later on you will point OSMF to the master manifest within this directory in order to play the video.


Figure 1.4: Upload all of the packaged media files to a location within the webroot/vod directory of Adobe Media Server


Verify the delivery of the .f4m file

At this point, all of the packaged files should be uploaded to a directory on the server within /webroot/vod. It’s a good idea to test whether or not the server is delivering the manifest file properly, and you can do that by entering the path to the .f4m file in the address bar of a browser.

To test the delivery of the manifest, open the .f4m in a browser from the webserver. On a local development machine the URL would be something like:

http://127.0.0.1/vod/Obama.f4m

If you’ve entered the URL correctly, and the server is properly serving up the .f4m, you should see the manifest’s xml. Notice the alternate audio’s media and bootstrap info tags you added earlier, as well as the additional attributes in the media tag:

*Note: Safari will not display XML by default


<manifest xmlns="http://ns.adobe.com/f4m/1.0">

<id>Obama</id>
<streamType>recorded</streamType>
<duration>133</duration>

<bootstrapInfo profile="named" id="bootstrap4744">

AAAAq2Fic3QAAAAAAAAABAAAAAPoAAAAAAACB3QAAA…

</bootstrapInfo>

<media streamId="Obama" url="Obama" bitrate="546"

bootstrapInfoId="bootstrap4744">

<metadata>

AgAKb25NZXRhRGF0YQgAAAAAAAhkd…

</metadata>

</media>

<bootstrapInfo profile="named" id="bootstrap4940">

AAAB+2Fic3QAAAAAAAAAGQAAAAPoAAAAAAACCwAAAAAA…

</bootstrapInfo>

<media 
 streamId="Spanish_ALT_Audio" 
 url="Spanish_ALT_Audio" 
 bitrate="209" 
 bootstrapInfoId="bootstrap4940" 
 alternate="true" 
 type="audio" 
 lang="Spanish">

<metadata>

AgAKb25NZXRhRGF0YQgAAAAAAAhkdXJh…

</metadata>

</media>

</manifest>

Figure 1.5: Verify that the server is delivering the .f4m properly by entering the path to the manifest in your browser’s address bar



*Note: The above example URL does not point to “/hds-vod” like it would for HDS content that needs to be packaged just-in-time as the client application requests it. This is because “/hds-vod” is a location directive for Apache that tells the server to look for content in the /webroot/vod directory and package it for delivery. The jithttp module in Apache responsible for just-in-time packaging isn’t needed for this example, as the source files have already been packaged manually.


Include a cross-domain policy file (Optional)

In order to access content from Adobe Media Server using a Flash-based media player that is hosted on a separate domain from the server, the player needs permission in the form of a cross-domain policy file hosted on the server. Below is an example of a cross-domain policy file that allows access from any domain. You may want to use a more restrictive cross-domain policy for security reasons. For more information on cross-domain policy files, see Setting a crossdomain.xml file for HTTP streaming.
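For reference, a fully permissive policy in the standard Adobe format (your exercise file may differ slightly) looks like this:

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
    <!-- "*" allows players hosted on any domain; restrict this in production -->
    <allow-access-from domain="*" />
</cross-domain-policy>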


Figure 1.6: Include a crossdomain.xml file in the webroot directory of Adobe Media Server


  1. Open “crossdomain.xml” from the LBA/_START_/ folder in the included exercise files in a text editor.

  2. Examine the permissions, and edit them if you wish to limit access to include only specific domains.

  3. Save the file, and upload “crossdomain.xml” to the /webroot directory of Adobe Media Server.


Test the video using an OSMF-based media player

Now it’s time to test the video and the alternate audio streams using the included sample media player. The sample player is provided as a Flash Builder project, and is based on the LateBindingAudioSample application that comes as part of the OSMF source download (OSMF/samples/LateBindingAudioSample). You can find the included sample player at LBA/_COMPLETED_/LateBindingAudio_VOD.fxp in the exercise files folder.

  1. Import the file “LateBindingAudio_VOD.fxp” into Flash Builder and run the project.

  2. Enter the URL of the master manifest located on your server in the “.f4m source” field.

  3. If the player can see the path to the .f4m file, the Play button will be enabled, and the alternate languages dropdown menu will show a Spanish audio option.

  4. In no particular order, click “Play”, and choose “Spanish” from the alternate languages dropdown menu.

  5. The video should start to play, and you should see “Switching Audio Track” being displayed in the player under the languages menu.

  6. The audio should switch to the Spanish track, while the video continues to play normally.


Figure 1.7: Play the media within the included sample application. Enter the URL for the manifest file and click Play. Use the language selection dropdown menu to switch to the alternate audio stream.


Where to go from here

This article covered the steps necessary to attach late-binding audio to an HTTP video stream using a pre-built OSMF-based media player. You should now have a good understanding of how to package media files for delivery over HTTP, as well as what needs to be done on the server side to deliver late-binding audio. In the next article, you will learn all about the media player, and the code within it that makes late-binding audio possible.

In addition to on-demand video, OSMF supports late-binding audio for live video, multi-bitrate video, and DVR. For more information about HTTP Dynamic Streaming, late-binding audio, and OSMF, please refer to the following resources:



Contact us to learn more.

The following article has been translated into Serbo-Croatian by Anja Skrba of Webhostinggeeks.com: http://science.webhostinggeeks.com/html5-boilerplate

HTML5 Boilerplate: Making Web Development Easier

Posted on April 24, 2012 at 11:38 am in Blog, Development


Boilerplate: Web design and development ain’t as easy as it used to be – it’s easier!

NOTE: This look at Boilerplate is part of an upcoming look at the Roots WordPress Theme and, as such, it focuses mostly on v2. Keep in mind that Boilerplate is under constant development (v3 was released in February). In fact, you could think of the Boilerplate changelog as the pulse of HTML5 development. Stay tuned for a look at some fascinating changes in v3.

Ah… life used to be so much simpler.

In 1998 I picked up a book called ‘Teach yourself HTML 4 in 24 hours’. A couple of days and 350 pages later I had designed, coded and validated my first site.

Of course, that site didn’t do very much or even look very good by today’s standards.

All of this can be overwhelming and the good news has been that there is an incredible community of developers furiously creating fantastic (and free!) tools to make all of this easier.

But this leads to another problem – which tools do I use and trust?

For example, hop into a front-end developer discussion group or forum and ask what HTML5 framework you should use and see how many different recommendations you get..whew..!!

So, what if you wanted a default template for your development that already had all the tried-and-true, up-to-date tools installed and ready to be adapted to your project’s needs – a tool-kit, if you will.

Well, we have those too.

And probably the most popular right now is called HTML5 Boilerplate.

HTML5 Boilerplate (H5BP) is the brain-child of superstar developers Paul Irish and Divya Manian.
I won’t go into all of H5BP’s features (they are covered much better here), but the bottom line is that H5BP is like having a team of developers work for several years to give you an HTML5 template with all the best practices learned the hard way baked in.

H5BP seems especially suited for designers with deadlines who want to focus on presentation and not have to monkey around with a lot of project set-up. Just dump the H5BP files into your project and get to work. Depending on which version you’re using – 1, 2, or (new as of February) 3 – here’s what you’ll be starting with:

  • Reset CSS with normalized fonts (Eric Meyer’s reset reloaded with HTML5 Baseline and YUI CSS fonts) or Nicolas Gallagher’s Normalize.css.
  • Basic print and mobile styles
  • .htaccess and other server config files (full of really clever snippets), empty crossdomain policy file for flash, robots.txt, favicon, apple-touch-icon, and 404 files
  • HTML5-ready. H5BP uses a tool called Modernizr that includes another tool called the HTML5 Shim (among other things like feature detection) to make sure your HTML5 code looks fine across all browsers including IE6
  • jQuery loaded from the Google CDN or locally if the user is offline.
  • ddbelated png for an IE6 png fix
  • yui profiling
  • Optimized Google Analytics script
  • Cool little things like fixes to avoid console.log errors in IE, a fix for document.write issues, etc.

The latest H5BP is version 3 and over the past couple of years the development team has grown and the product has been continuously improved. Recently the focus has been on web site performance. To this end, Paul and the crew have developed the H5BP ‘Build Script’. This is something that you run when you’ve finished your design/development work that handles optimizing and minification to make your site a lean and mean web machine.

Ultimately we live in a world of paradox. While the world of web design and development is more complex than ever, there has also never been a better time to work in this field thanks to well thought-out and free tools like HTML5 Boilerplate.

Want to learn more?

Check out this video where Paul Irish walks through the entire Boilerplate template; it’s a great resource.

Or contact us to learn more.

Javascript Selector API – Should I care?

Posted on April 09, 2012 at 12:09 pm in Development, Training


What is it?

Using JavaScript with CSS selectors, particularly classes, has traditionally been a little awkward. You end up needing dozens of lines of code, with fun stuff like regular expressions, to do something simple like toggle a class. Looking for a better way to do this is how many of us got introduced to jQuery and its easy access to the DOM.

Browsers have now shown up to the party by implementing the W3C Selectors API.

What does it look like?

It looks a lot like jQuery.

The following example would select all p elements in the document that have a class of either “error” or “warning”.

var alerts = document.querySelectorAll("p.warning, p.error");

(Example above taken from the API Examples)

I’ve created a demo that shows this example in action. It uses the classList property, so don’t try this in IE. :)
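The heart of that demo boils down to just a few lines (a sketch – the class names are made up, but querySelectorAll and classList are standard):

// toggle a highlight class on every warning or error paragraph
var alerts = document.querySelectorAll('p.warning, p.error');

for (var i = 0; i < alerts.length; i++) {
	alerts[i].classList.toggle('highlight');
}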

In addition to querySelectorAll, we can use querySelector, which returns only the first matching descendant element. Also, querySelector is not restricted to CSS IDs and classes – you can use it with HTML5 elements as well:

document.querySelector('footer');

Um…What About Browser Support?

Both querySelector and querySelectorAll are supported by all the major browsers from IE8 and up. Of course, you need to be careful which CSS selectors you are querying, because not all browser versions recognize all selectors.

Should I care?

Poke around inside jQuery and you’ll find references to querySelector – looks like jQuery is using this native API too (when it can). So, if you’re already using jQuery in your project and you’re more comfortable with jQuery selectors this new API isn’t going to rock your world. If you’re not using jQuery, are not worried about old pre-IE8 browsers and are trying to keep your project super-lightweight then these new selectors will make your coding much easier. So it looks like it is up to you and your situation.

Want to Improve your JavaScript Chops?

Sencha Animator: A Test Drive

Posted on March 27, 2012 at 12:55 pm in Blog, Development

What is Sencha Animator?
Sencha Animator is a new tool that makes it easy to create CSS3 transformation-based animations. So easy that you don’t even need a whiff of CSS3 skills!
Actually, working with Animator will look very familiar to anyone who’s used the Flash IDE (or any tool that uses timelines) to create animations.

Let’s walk through a simple Sencha Animator project.

Our finished project will look like this.

First, you’ll need to download and install Animator – get it here.

1. Set Up Your Project
Once you get it up and running you’ll select File–>New Project and set the size (ours is 600×320). Next, save your project (File –> Save) where you can find it again.

2. Add Images
For our project we’ll be fading in each of the four elements of our logo. Assuming we’ve already separated the logo into PNGs, the first step is to place the images onto the Canvas.
Select the Image Tool and then click anywhere on the Canvas.

Now we have a placeholder graphic on the Canvas. Let’s link this to our image. Click the button next to the default image name in the General Object panel and browse to your image.

While in the Object Panel with your image selected you’ll also want to set the image Name and Position.

Repeat these steps with the additional images. You should now have 3 layers in your Object Tree. You can rearrange these so that the layers are stacked correctly.

3. Well, that’s great – LET’S ANIMATE!
Set the Playhead between 0s and 1s and double-click in the timeline of the bottom layer (ours is called ‘LeftThing’).
This will create a white Keyframe and the Properties for this Keyframe will be displayed.

Under Properties, change the Easing to ‘Linear’. This will connect the Keyframe to another Keyframe at 0s.
Select the Keyframe at 0s and change the Opacity to 0% so that this element will appear to fade in to the scene.
(You can scrub the playhead to watch it fading in — ooohhh, aaahhh!)
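Under the hood, the fade Animator produces is ordinary CSS3. Hand-written, the equivalent looks roughly like this (the class name is hypothetical, and the -webkit- prefix reflects the WebKit browsers Animator targets):

/* fade the layer in over the first second of the scene */
@-webkit-keyframes fade-in {
	from { opacity: 0; }
	to { opacity: 1; }
}

.left-thing {
	-webkit-animation: fade-in 1s linear;
}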

Repeat this process with the other two images so that each element fades-in on top of each other. Your timeline should look similar to this.

4. Add Some Text
Let’s create some text and fade that in too.
Select the Text Tool and click on the Canvas.
Just like with the Image Tool, we need to adjust the properties of our new Text Element.
See the screenshot below to see the settings that we used.

To simulate our logo, we duplicated the text layer (Ctrl-D) and changed the Content to a left parenthesis and then repeated to create a right parenthesis. We then positioned and changed the Fill Color of these new text layers to match our RealEyes logo.

Next we’ll animate these layers to fade-in like the previous layers.

5. Add Interactivity
Excellent. Now we have a logo whose various elements fade-in and then the animation stops.
So, how hard would it be to add some interactivity and make the animation repeat if the user clicked on the logo?
Easy!
Here’s how.
Select the top-most image layer (‘Yellow Thing’ in our example) and open the Actions panel. You’ll notice several interactions to choose from.
Select ‘click’ and then ‘Go to next scene’ from the drop-down menu.

6. Export the Project
Almost done! Lastly, we need to select File–>Export Project and then FTP this to our favorite web-server or simply open the html file that Animator creates as it exports the project.
Voila – you have some snappy animation that looks a whole lot like Flash – but isn’t!

Conclusion:
With browser support for CSS3 animation growing every day, designers and developers have been turning to frameworks, libraries, and plugins like transform.js, paper.js, move.js, and JSAnim to simplify their workflow. However, making convincing animations with pure code can be a frustrating and ultimately disappointing process. Because successful animation depends on nuance and timing, creating them with some kind of IDE or GUI has always been the natural solution (Flash owes a lot of its success to its easy-to-use and powerful timeline controls).

Without getting into advanced easing, multiple scenes, z-axis rotations, etc., we’re really just scratching the surface of what this tool is capable of. While Sencha Animator is still a work in progress and will never be able to offer the power of the Flash IDE, we’ve seen that Animator is intuitive, easy to learn, and offers a time-saving GUI for modifying CSS properties over time.
Another plus – the version that we used (1.2) seemed very stable.

Interested in learning more about the power of Sencha or their tools? Contact us!