Wowza is at it again, folks. This time they’re teaming up with Microsoft to take content protection to the next level and reach even more users than they already do, using MPEG-DASH (Dynamic Adaptive Streaming over HTTP) with Common Encryption for live broadcast streaming delivery. This is a big plus in the streaming media world and a big win for Wowza and its customers.

Why is this so important? Because it’s shaping the next level of delivery standards for both live and on-demand content.

When I discuss use cases with customers, content protection (also referred to as DRM) comes up about 95% of the time.

Think back to when the Internet was in its infancy (remember Prodigy?!). I will admit that I wasn’t very familiar with DRM (or any other three-letter acronyms, aka TLAs, for that matter) until about eight years ago. I knew about the emergence of file-sharing networks such as Kazaa, LimeWire, and eDonkey2000, but that’s about as far as it went. Oh, and who could forget Metallica’s lawsuit against Napster?

So here we are now, and the need for content protection seems to be at an all-time high. Why is that? YouTube doesn’t need to apply any encryption to its content, right? Its motto is ‘Broadcast Yourself.’ Well, not so fast. Have you ever wondered what kind of rights you’re potentially surrendering when you upload a video you recorded of your kid smacking a baseball into your nether regions to a publicly available site like YouTube? You did take the video – and the shot to the groin – so it’s safe to say that it’s your intellectual property. However, my father always taught me to think before I speak and to consider the consequences before I act, and that’s how I approach life. So even with something like uploading a video to YouTube, I want to take a gander at the digital rights terms that apply.

In the last few years, I guess I have become a little obsessed with DRM schemes and technologies, largely because documentation on them was few and far between. But since online delivery of content is at an all-time high and definitely not going away any time soon, knowing how the content I put online is going to be protected is paramount.

When an entire digital business model revolves around on-demand content being at your fingertips, it isn’t that simple. The gang at Netflix has managed to build an exponentially larger customer base thanks to the reach of their online distribution model. That said, the majority of the content they distribute is not proprietary, so they apply an industry-approved DRM technology to the content to ensure protection. And it works.

One thing I’ve learned in researching DRM is that it truly isn’t just about applying protection. It’s about protecting your intellectual property.

I had the pleasure of speaking with Danielle Grivalsky a couple of weeks ago about DRM, and she directed me to her company’s descriptions of the available DRM technologies. I must say that they nailed it.

Don’t hesitate to contact us to find out more about how leveraging Wowza and MPEG-DASH can work for you!

Working with Speech Recognition ANEs for Android

Posted on November 26, 2013 at 9:57 am in Blog, Development

Adding speech recognition to a mobile application sounds like something that would be too difficult to be worth the effort. Well, that may not necessarily be the case. If you’re building an AIR application for Android, there are some very easy ways to add voice commands or speech recognition. Whether you want to take your app hands-free, add some richness, put in voice chat, or make a game with voice controls, speech recognition ANEs (AIR Native Extensions) can help you do it.

What’s an ANE?

An ANE, or AIR Native Extension, is like a plugin for an AIR application that exposes native platform functionality as an API in ActionScript. ANEs can add functionality like native alerts, push notifications, in-app purchases, advertising, and sharing. These extensions are written in native code and then compiled into an ANE file. Unfortunately, this means that ANEs are platform specific. However, the functionality they offer makes it well worth it to track down one that provides the features you need. There are plenty of free ANEs and some commercially available ones as well.

Are There Speech Recognition ANEs?

If there weren’t, this would be a very short post. The good news is that there are two free speech recognition ANEs available at the moment. The bad news is that they are for Android only. The first one I’ll mention is Immanuel Noel’s Speech Recognition ANE. It activates Android’s native speech recognition dialog and simply returns its results to AIR. The second one is Michelle Rueda’s Voice Command ANE. Her ANE exposes a bit more functionality by letting you run continuous voice commands and returns the top 5 guesses Android speech recognition comes up with.


So let’s take a look at some sample projects that use these two ANEs. You can download a zip of the two projects from the Sample Files link at the end of this post. You’ll need a copy of Flash Builder 4.6 or higher and an Android mobile device to deploy the projects to. That’s one of the downsides of developing with ANEs: for many of them, you have to develop on a device, because the ANE needs the native OS to interact with. Setting up your Android device for debugging from Flash Builder is beyond the scope of this post, but instructions are available in Adobe’s Flash Builder documentation.

Setting Up

Let’s get these projects set up:

  1. Download the sample projects zip (see Sample Files below) and unzip it into an empty directory.

  2. Open Flash Builder. In the Project Explorer window, right-click (ctrl-click) and select Import.

  3. In the dialog, open the General set of options and select the Existing Projects Into Workspace option.

  4. Click the Browse button next to Select Root Directory, and in the file dialog, select the directory that you unzipped both projects into.

  5. Check both projects and click OK.

  6. You can use the default options in the rest of the dialogs. If prompted about which Flex SDK to use, use 4.6.0 or higher.

You should now have two projects set up in your workspace. Let’s take a look at where the ANEs get hooked up:

  1. In the Project Explorer for either project, open up the assets/ANEs directory.

  2. Note the ANE file with the .ane extension.

  3. Right-click (ctrl-click) on the project and select Properties from the menu.

  4. Select Flex Build Path in the navigation, then click the Native Extensions tab. Here is where you can add either a single ANE or an entire folder of ANEs.

  5. Click on the arrow next to the ANE to see its details, such as whether the AIR simulator is supported and the minimum AIR runtime version.

  6. Click on the arrow next to Flex Build Packaging from the left navigation, and select Google Android from the submenu.

  7. Select the Native Extensions tab.

  8. Make sure the Package checkbox is checked for the ANE, then click Apply and then click OK. This makes sure that the ANE is included when the project builds.

Next, let’s deploy an app to a device. (Hopefully, you’ve already set your Android device up for debugging, because that can be a tricky task and is beyond the scope of this post.)

  1. Connect your Android device to your computer with its USB cable.

  2. Under the Debug menu in Flash Builder, select Debug Configurations.

  3. In the left menu, select Mobile Application and then click the New button at the top of the menu.

  4. Select the project you want to debug using the Browse button next to the Project field.

  5. Under launch method, select On Device and Debug via USB.

  6. Click Apply, then Click Debug. If your device is asleep, make sure to wake it up so the debugging can start.

  7. The application should launch on your Android device and you can try out the speech recognition or voice commands ANEs.

The Code

Speech Recognition ANE:

Let’s take a look at the code for these two apps and how they implement the ANEs. The first one we’ll look at is the Speech Recognition ANE used in the SpeechRecognitionPOC project. This project is pretty straightforward. In the handler for applicationComplete, we instantiate a new Speech object, and pass a string to its constructor. That string is the prompt that shows up in the native Android speech recognition dialog when it appears.

To listen for data or errors from the Speech object, we add listeners for the speechToTextEvent.VOICE_RECOGNIZED and speechToTextEvent.VOICE_NOT_RECOGNIZED events. On the speechToTextEvent object, there is a data property that holds the string of text that was Android’s best guess at what was said.

All we have to do to trigger the Android speech recognition dialog is call the listen() method on the Speech object. That opens the native dialog, which fires one of the two events and then closes automatically. In the _onSpeechResult() method, we output the data the API returns.
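
Pulled together, the flow looks roughly like this. This is a minimal sketch based on the description above; check the SpeechRecognitionPOC project for the exact class, event, and handler names.

private var _speech:Speech;

protected function _onApplicationComplete(event:Event):void
{
    // The string passed to the constructor is the prompt shown in
    // Android's native speech recognition dialog.
    _speech = new Speech("Say something...");
    _speech.addEventListener(speechToTextEvent.VOICE_RECOGNIZED, _onSpeechResult);
    _speech.addEventListener(speechToTextEvent.VOICE_NOT_RECOGNIZED, _onSpeechError);

    // Opens the native dialog; it closes itself after dispatching one of the two events.
    _speech.listen();
}

private function _onSpeechResult(event:speechToTextEvent):void
{
    // data holds Android's best guess at what was said.
    trace("Heard: " + event.data);
}

private function _onSpeechError(event:speechToTextEvent):void
{
    trace("Speech not recognized.");
}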

The upside of this ANE is that it is pretty solid and uses the native dialog. The downside is that you only get one result back, and you can only catch one phrase at a time.

Voice Command ANE:

Next let’s look at the Voice Command ANE, which is used in the VoiceCommandsPOC project. It has a bit more depth to it. When the application starts up, we create a new SpeechService object. The SpeechService class exposes a static isSupported property which tells us whether our device supports the ANE. If it does, then we listen for a StatusEvent.STATUS event from the service. Calling the listen() method of the SpeechService object starts the service listening for voice input. When it hears input, it dispatches a status event. The stopListening() method of the object ends the listening for input.

The status event dispatched by the service handles both data and errors. The code property on the StatusEvent object lets us tell whether there was an error in the speech recognition. If there was no error, the level property of the event gives us a comma-separated list of up to 5 results returned by the service. Sometimes there are fewer than 5, but usually you get the full set of guesses.

In the _onSpeechStatus() method that handles the status event, we split the level string into an array and then loop through it to analyze the strings. In my experience with the ANE, the service does a great job of parsing my speech. Usually, what I said is the first or second result in the list. This POC project looks for a “stop recording” string and executes a method stopping the service. The Voice Command ANE doesn’t stop listening automatically like the Speech Recognition ANE does.
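
Here is a rough sketch of that flow, again based on the description above; the VoiceCommandsPOC project has the exact implementation, and the error-code check below is a placeholder.

private var _service:SpeechService;

protected function _onApplicationComplete(event:Event):void
{
    // Bail out if this device doesn't support the ANE.
    if (!SpeechService.isSupported)
        return;

    _service = new SpeechService();
    _service.addEventListener(StatusEvent.STATUS, _onSpeechStatus);
    _service.listen(); // start listening for voice input
}

private function _onSpeechStatus(event:StatusEvent):void
{
    if (event.code == "error") // placeholder check – see the POC for the actual error codes
        return;

    // level is a comma-separated list of up to 5 guesses at what was said.
    var guesses:Array = event.level.split(",");
    for each (var guess:String in guesses)
    {
        if (guess.toLowerCase().indexOf("stop recording") != -1)
        {
            _service.stopListening(); // stop listening for input
            return;
        }
    }
}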

And that’s one of the great things about the Voice Command ANE: it opens up the possibility of an app that is always listening for voice commands. The unfortunate thing about the Voice Command ANE is that it is buggy. In my work with it, I found that if you don’t give it speech input right away, or if there is a gap in your speech input, it will freeze up and not recognize any more speech input. Also, even if I spoke without long pauses, it would freeze up after about 3 to 5 status events. Bummer.

This leads us to another part of the code. I found that if I set up a refresh timer to restart the service on an interval, I could keep the service running and recognizing speech input. In the _onRefreshTimer() method, I simply call stopListening() and then immediately call listen() to keep the service going. Unfortunately, this results in continuous dinging from the Android API and imperfect listening. However, for my purposes, it’s good enough. Hopefully that is true for you too.
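
The workaround looks something like this; the 10-second interval is arbitrary, so tune it to your app.

private var _refreshTimer:Timer = new Timer(10000); // restart the service every 10 seconds

// After calling _service.listen() for the first time:
// _refreshTimer.addEventListener(TimerEvent.TIMER, _onRefreshTimer);
// _refreshTimer.start();

private function _onRefreshTimer(event:TimerEvent):void
{
    // Stop and immediately restart the service so it keeps recognizing speech.
    _service.stopListening();
    _service.listen();
}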

One other thing to note is that if you are using the device’s microphone in your application, that can interfere with the Voice Command ANE’s speech recognition. I did not find a reliable or quick way to clear the application’s hold on the mic to free it up for the SpeechService.

ANEs offer a lot of cool functionality to mobile application developers. In general, they are quite easy to work with. As these ANEs demonstrate, with little effort (and no cash) you can add powerful features to your applications.

Sample Files


Late-Binding Audio

OSMF 1.6 and higher supports the inclusion of one or more alternative audio tracks with a single HTTP video stream. This practice, referred to as “late-binding audio”, allows content providers to deliver video with any number of alternate language tracks without having to duplicate and repackage the video for each audio track. Users can then switch between the audio tracks either before or during playback. OSMF detects the presence of the alternate audio from an .f4m manifest file, which has been modified to include bootstrapping information and other metadata about the alternate audio tracks.

This article will guide you through the process of delivering an alternate-language audio track alongside an on-demand video file (VOD) streamed over HTTP using the late-binding audio feature. You should be familiar with the HTTP Dynamic Streaming (HDS) workflow before beginning. Please refer to the following articles on HDS for more information:

Getting Started

To get the most out of this article, you will need the following:


After completing this article you should have a good understanding of what it takes to stream on-demand video with alternate audio tracks over HTTP. At a high level, this process includes:

  • Packaging your media files into segments and fragments (.f4f) and index files (.f4x)
  • Creating a master (set-level) manifest file (.f4m)
  • Editing the media tags for the alternate audio tracks within the master .f4m to prepare them for late-binding audio
  • Uploading the packaged files to Adobe Media Server
  • Including a cross-domain.xml file on the server if the media player is hosted on a separate domain from Adobe Media Server
  • Playing back the video using the master .f4m as the video source, and switching between audio tracks using OSMF

Packaging the media files

When streaming media using HDS, files first need to be “packaged” into segments and fragments (.f4f), index files (.f4x), and a manifest file (.f4m). Adobe Media Server 5.0 or later can automatically package your media files for both normal on-demand and live streaming with the included Live Packager application (live), and JIT HTTP Apache module (vod). However, in order to achieve late-binding audio, the manifest for the main video file needs to be modified so that it includes information about the alternate audio tracks.

To package media that supports late-binding audio, you use the f4fpackager, a command line tool built into Adobe Media Server. The f4fpackager accepts .f4v, .flv, or other mp4-compatible files, and is located in the rootinstall/tools/f4fpackager folder within Adobe Media Server.

Next, you will use the f4fpackager to create packaged media files. You can use your own video and audio assets for this step, or you can use the “Obama.f4v” (video) and “Spanish_ALT_Audio.mp4” (alternate audio) files included in the exercise files.

Running the f4fpackager

The process for packaging media files on Windows and Linux is similar:

  1. From the command line, change to the [Adobe Media Server Install Dir]/tools/f4fpackager directory (Windows), or set LD_LIBRARY_PATH to the directory containing the File Packager libraries (Linux).

  2.  Enter the name of the tool, along with any arguments. For this example, you’ll only need to provide the following arguments for each input file:

  • The name of the input file
  • The file’s overall bitrate (Alternatively, you could add this information manually later)
  • The location where you’d like the packaged files to be output (If you omit this argument, the File Packager simply places the packaged files in the same directory as the source files)

  3.  Run the packager on the main video .f4v file. At the command prompt, enter arguments similar to: 


C:\Program Files\Adobe\Adobe Media Server 5\tools\f4fpackager\f4fpackager --input-file="C:\Obama.f4v" --bitrate="546" --output-path="E:\packaged_files"


   4.  Next, run the packager again, this time to package the alternate audio track:

C:\Program Files\Adobe\Adobe Media Server 5\tools\f4fpackager\f4fpackager --input-file="C:\Spanish_ALT_Audio.mp4" --bitrate="209" --output-path="E:\packaged_files"

   5.  Press Enter to run the f4fpackager.

*Note: Individual media files are packaged separately, meaning you run the packager once for the main video file, “Obama.f4v”, and then again for the alternate audio file, “Spanish_ALT_Audio.mp4”.


Figure 1.0: Packaging media with the f4fpackager tool

You should now see the packaged output files generated by the File Packager in the output directory you supplied in step 2. Packaging the source media included in the exercise files should output:

  • Obama.f4m
  • ObamaSeg1.f4f
  • ObamaSeg1.f4x
  • Spanish_ALT_Audio.f4m
  • Spanish_ALT_AudioSeg1.f4f
  • Spanish_ALT_AudioSeg1.f4x

Creating the “master” manifest file

Next, you will modify “Obama.f4m” to create a master (set-level) manifest file that will reference the alternate audio track.

  1. Using a text editor, open the file “Spanish_ALT_Audio.f4m”

Note: If you skipped the previous section on packaging media, you can use the manifests included with the exercise files in LBA/_START_/PackagedMediaFiles_START.

   2.  Copy the bootstrapInfo and media tags from “Spanish_ALT_Audio.f4m”.

Figure 1.1: Copy the bootstrapInfo and media tags from the alternate audio manifest file

3.  Paste the bootstrapInfo and media tags into “Obama.f4m” to reference the Spanish language track.

<?xml version="1.0" encoding="UTF-8"?>

<manifest xmlns="">

Figure 1.2: Paste the bootstrapInfo and media tags from the alternate audio .f4m into the main video’s manifest file to create the master .f4m

4.  Add the following attributes to the media tag for the Spanish language track within “Obama.f4m”:

  • alternate=”true”
  • type=”audio”
  • lang=”Spanish”


Figure 1.3: Edit the alternate audio’s media tag to prepare it for late-binding audio

In the above step, alternate=”true”, and type=”audio” allow OSMF to parse through “Obama.f4m” and see that there is alternate audio available. Logic within the included example player, which you’ll be using to play the video in a later step, uses lang=”Spanish” to populate a dropdown menu with the available alternate audio stream.
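
With those attributes in place, the master manifest ends up looking something like the sketch below. The structure follows the general F4M format; the bootstrap data is abbreviated, and your IDs, metadata, and stream names will come from your own packager output.

<?xml version="1.0" encoding="UTF-8"?>
<manifest xmlns="http://ns.adobe.com/f4m/1.0">
    <id>Obama</id>
    <streamType>recorded</streamType>

    <!-- Main video stream -->
    <bootstrapInfo profile="named" id="bootstrap4744">...</bootstrapInfo>
    <media streamId="Obama" url="Obama" bitrate="546" bootstrapInfoId="bootstrap4744" />

    <!-- Copied from Spanish_ALT_Audio.f4m, with the late-binding attributes added -->
    <bootstrapInfo profile="named" id="bootstrap4940">...</bootstrapInfo>
    <media streamId="Spanish_ALT_Audio" url="Spanish_ALT_Audio" bitrate="209"
           bootstrapInfoId="bootstrap4940" alternate="true" type="audio" lang="Spanish" />
</manifest>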

5.  Save the file “Obama.f4m”. This is now the master manifest file, and it will be what you will reference to play the video and alternate audio content with OSMF.

Upload the packaged files to the server

6.  Next, you will need to upload all of the packaged files to a folder within the webroot/vod directory of Adobe Media Server. On Windows this default location is C:\Program Files\Adobe\Adobe Media Server 5\webroot\vod. Later on you will point OSMF to the master manifest within this directory in order to play the video.


Figure 1.4: Upload all of the packaged media files to a location within the webroot/vod directory of Adobe Media Server

Verify the delivery of the .f4m file

At this point, all of the packaged files should be uploaded to a directory on the server within /webroot/vod. It’s a good idea to test whether or not the server is delivering the manifest file properly, and you can do that by entering the path to the .f4m file in the address bar of a browser.

To test the delivery of the manifest, open the .f4m in a browser directly from the web server. On a local development machine, the URL is simply the path to the manifest under the webroot/vod directory.

If you’ve entered the URL correctly, and the server is properly serving up the .f4m, you should see the manifest’s XML. Notice the alternate audio’s media and bootstrapInfo tags you added earlier, as well as the additional attributes in the media tag:

*Note: Safari will not display XML by default

<manifest xmlns="">
<bootstrapInfo profile="named" id="bootstrap4744">
<media streamId="Obama" url="Obama" bitrate="546"
<bootstrapInfo profile="named" id="bootstrap4940">

Figure 1.5: Verify that the server is delivering the .f4m properly by entering the path to the manifest in your browser’s address bar

*Note: The above example URL does not point to “/hds-vod” like it would for HDS content that needs to be packaged just-in-time as the client application requests it. This is because “/hds-vod” is a location directive for Apache that tells the server to look for content in the /webroot/vod directory and package it for delivery. The jithttp Apache module responsible for just-in-time packaging isn’t needed for this example, as the source files have already been packaged manually.

Include a cross-domain policy file (Optional)

In order to access content from Adobe Media Server using a Flash-based media player that is hosted on a separate domain from the server, the player needs permission in the form of a cross-domain policy file hosted on the server. Below is an example of a cross-domain policy file that allows access from any domain. You may want to use a more restrictive cross-domain policy for security reasons. For more information on cross-domain policy files, see Setting a crossdomain.xml file for HTTP streaming.
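
A minimal, fully permissive policy file looks like this:

<?xml version="1.0"?>
<cross-domain-policy>
    <!-- Allows Flash-based players served from any domain to access content on this server -->
    <allow-access-from domain="*" />
</cross-domain-policy>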


Figure 1.6: Include a crossdomain.xml file in the webroot directory of Adobe Media Server

  1. Open “crossdomain.xml” from the LBA/_START_/ folder in the included exercise files in a text editor.

  2. Examine the permissions, and edit them if you wish to limit access to include only specific domains.

  3. Save the file, and upload “crossdomain.xml” to the /webroot directory of Adobe Media Server.

Test the video using an OSMF-based media player

Now it’s time to test the video and the alternate audio streams using the included sample media player. The sample player is provided as a Flash Builder project, and is based on the LateBindingAudioSample application that comes as part of the OSMF source download (OSMF/samples/LateBindingAudioSample). You can find the included sample player in LBA/_COMPLETED_/LateBindingAudio_VOD.fxp in the exercise files folder.

  1. Import the file “LateBindingAudio_VOD.fxp” into Flash Builder and run the project.

  2. Enter the URL of the master manifest located on your server in the “.f4m source” field.

  3. If the player can see the path to the .f4m file, the Play button will be enabled, and the alternate languages dropdown menu will show a Spanish audio option.

  4. In no particular order, click “Play”, and choose “Spanish” from the alternate languages dropdown menu.

  5. The video should start to play, and you should see “Switching Audio Track” being displayed in the player under the languages menu.

  6. The audio should switch to the Spanish track, while the video continues to play normally.


Figure 1.7: Play the media within the included sample application. Enter the URL for the manifest file and click Play. Use the language selection dropdown menu to switch to the alternate audio stream.

Where to go from here

This article covered the steps necessary to attach late-binding audio to an HTTP video stream using a pre-built OSMF-based media player. You should now have a good understanding of how to package media files for delivery over HTTP, as well as what needs to be done on the server side to deliver late-binding audio. In the next article, you will learn all about the media player, and the code within it that makes late-binding audio possible.

In addition to on-demand video, OSMF supports late-binding audio for live video, multi-bitrate video, and DVR. For more information about HTTP Dynamic Streaming, late-binding audio, and OSMF, please refer to the following resources:

Contact us to learn more.

HTML5 Boilerplate: Učiniti Web programiranje lakše

Posted on April 24, 2012 at 11:40 am in Blog, Development

Boilerplate: Web dizajn i razvijanje nije tako lako kao što je nekada bilo – još je lakše

NAPOMENA: Ovaj pregled Boilerplate-a je deo predstojećeg pregleda Roots WordPress Theme-a i, kao takav, uglavnom se fokusira na v2. Imajte na umu da se Boilerplate konstantno razvija (v3 je pušten u februaru). U stvari, možete misliti o Boilerplate promenama kao o pulsu HTML5 razvoja. Ostanite sa nama da vidite neke fascinantne promene u v3.

Ah… život je nekada bio mnogo jednostavniji.

Godine 1998. sam uzeo knjigu pod nazivom “Naučite HTML 4 za 24 sata”. Par dana i 350 stranica kasnije ja sam dizajnirao, kodirao i validirao svoj prvi sajt.

Naravno, taj sajt nije mnogo uradio ili čak nije dobro izgledao po današnjim standardima.

Sve ovo može biti previše, a dobra vest je bila da postoji neverovatna zajednica programera koja furiozno stvara fantastične (i besplatne!) alatke da bi sve ovo bilo jednostavnije.

Međutim, to dovodi do drugog problema – koju alatku da koristim i da joj verujem?

Na primer, skočite do front-end developer diskusione grupe ili foruma i pitajte koji HTML5 framework( radni okvir) treba da koristite i vidite koliko ćete različitih preporuka dobiti .. Uf ..!!

Dakle,šta ako ste želeli podrazumevani šablon za vaše programiranje koji već ima sve isprobane-i-tačne, ažurirane alatke instalirane i spremne da se prilagode potrebama vašeg projekta -komplet alatki – ako hoćete.

Pa,i mi ih imamo takođe.

I verovatno najpopularnija trenutno se zove HTML5 Boilerplate.

HTML5 Boilerplate (H5BP) je izum superstar programera Paul Irish-a i Divya Manian-a.

Neću ući u sve H5BP-ove karakteristike (koje su mnogo bolje pokrivene оvde ) ali je zaključak da je H5BP rad tima programera od nekoliko godina da vam da HTML5 template sa najboljim iskustvom naučenim na teži način.

H5BP je posebno pogodan za dizajnere sa rokovima, koji žele da se fokusiraju na prezentaciju i ne moraju da se zanimaju sa mnogim postavkama projekta. Samo stavite H5BP fajlove u vaš projekat i počnite sa radom. U zavisnosti od verzije koju koristite – 1,2, ili (od februara novu ) 3 – evo sa čime ćete počinjati :

  • Resetujte CSS normalizovanim fontovima (Eric Meyer-ov resetovan ponovo učitan HTML5 Baseline i YUI CSS fonts) ili Nicolas Gallagher-ov Normalize.css .
  • Osnovni štampani i mobilni stilovi
  • .htaccess i drugi fajlovi konfig. servera ( dosta pametnih isečaka), prazan fajl više domenskih pravila za flash, robots.txt, favicon, apple-dodir-ikona, i 404 fajlove
  • HTML5 – spreman. H5BP koristi alatku koja se zove Modernizr a koja obuhvata drugu alatku pod nazivom HTML5 Shim (između ostalih stvari kao funkciju detekcije) da proveri da li vaš HTML5 kod dobro izgleda na svim pretraživačima, uključujući IE6
  • jQuery učitan sa Google CDN-a ili lokalno ako je korisnik offline
  • ddbelated png za IE6 png fix
  • yui profilisanje
  • Optimizovana Google Analytics skripta
  • Kul male stvari poput ispravka grešaka za izbegavanje console.log u IE & ispravka problema pisanja dokumenata, itd.

Najnoviji H5BP je verzija 3 i proteklih nekoliko zadnjih godina tim programera je porastao i proizvod se stalno poboljšava. Nedavno je bio fokus na performansama veb sajta. U tom cilju, Paul i ekipa su razvili H5BP ‘Build Script’. To je nešto što pokrenete kada završite svoj dizajnerski / programerski rad koji se odnosi na optimizaciju i minifikaciju a u cilju pravljenja vašeg sajta moćnom veb mašinom.

Na kraju, mi živimo u svetu paradoksa. Dok je svet veb dizajna i programiranja kompleksniji nego ikada, nikad nije bilo bolje vreme za rad u ovom polju zahvaljujući dobro osmišljenim i besplatnim alatkama kao što je HTML5 Boilerplate.

Želite li da saznate više?

Pogledajte ovaj video u kom Paul Irish objašnjava ceo Boilerplate templat i veliki je resurs.

This article was translated into Serbo-Croatian by Anja Skrba.

HTML5 Boilerplate: Making Web Development Easier

Posted on April 24, 2012 at 11:38 am in Blog, Development

HTML5 Boilerplate Site

Boilerplate: Web design and development ain’t as easy as it used to be – it’s easier!

NOTE: This look at Boilerplate is part of an upcoming look at the Roots WordPress Theme and, as such, it focuses mostly on v2. Keep in mind that Boilerplate is under constant development (v3 was released in February). In fact, you could think of the Boilerplate changelog as the pulse of HTML5 development. Stay tuned for a look at some fascinating changes in v3.

Ah… life used to be so much simpler.

In 1998 I picked up a book called ‘Teach yourself HTML 4 in 24 hours’. A couple of days and 350 pages later I had designed, coded and validated my first site.

Of course, that site didn’t do very much or even look very good by today’s standards.

All of this can be overwhelming and the good news has been that there is an incredible community of developers furiously creating fantastic (and free!) tools to make all of this easier.

But this leads to another problem – which tools do I use and trust?

For example, hop into a front-end developer discussion group or forum and ask what HTML5 framework you should use and see how many different recommendations you get..whew..!!

So, what if you wanted a default template for your development that already had all the tried-and-true, up-to-date tools installed and ready to be adapted to your project’s needs – a tool-kit, if you will.

Well, we have those too.

And probably the most popular right now is called HTML5 Boilerplate.

HTML5 Boilerplate (H5BP) is the brain-child of superstar developers Paul Irish and Divya Manian.
I won’t go into all of H5BP’s features (that is covered much better here) but the bottom-line is H5BP is like having a team of developers work for several years to give you an HTML5 template with all the best practices learned the hard way baked in.

H5BP seems especially suited for designers with deadlines who want to focus on presentation and not have to monkey around with a lot of project set-up. Just dump the H5BP files into your project and get to work. Depending on which version you’re using – 1,2, or (new as of February) 3 – here’s what you’ll be starting with:

  • Reset CSS with normalized fonts (Eric Meyer’s reset reloaded with HTML5 Baseline and YUI CSS fonts) or Nicolas Gallagher’s Normalize.css.
  • Basic print and mobile styles
  • .htaccess and other server config files (full of really clever snippets), empty crossdomain policy file for flash, robots.txt, favicon, apple-touch-icon, and 404 files
  • HTML5-ready. H5BP uses a tool called Modernizr that includes another tool called the HTML5 Shim (among other things like feature detection) to make sure your HTML5 code looks fine across all browsers including IE6
  • jQuery loaded from the Google CDN or locally if the user is offline (see the snippet just after this list).
  • ddbelated png for an IE6 png fix
  • yui profiling
  • Optimized Google Analytics script
  • Cool little things like fixes to avoid console.log errors in IE & a fix for document.write issues, etc.

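The jQuery bullet above, for example, is handled with the familiar CDN-with-local-fallback pattern; the version number and local path below are illustrative, since H5BP’s file names change between releases:

<script src="//ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script>window.jQuery || document.write('<script src="js/libs/jquery-1.7.1.min.js"><\/script>')</script>
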
The latest H5BP is version 3 and over the past couple of years the development team has grown and the product has been continuously improved. Recently the focus has been on web site performance. To this end, Paul and the crew have developed the H5BP ‘Build Script’. This is something that you run when you’ve finished your design/development work that handles optimizing and minification to make your site a lean and mean web machine.

Ultimately we live in a world of paradox. While the world of web design and development is more complex than ever, there has also never been a better time to work in this field thanks to well thought-out and free tools like HTML5 Boilerplate.

Want to learn more?

Check out this video, where Paul Irish walks through the entire Boilerplate template. It’s a great resource.



Javascript Selector API – Should I care?

Posted on April 09, 2012 at 12:09 pm in Development, Training

Javascript Selector API – Should I care?

What is it?

Using JavaScript with CSS selectors, particularly classes, has traditionally been a little awkward. You end up needing dozens of lines of code with fun stuff like regular expressions to do something simple like toggle a class. Looking for a better way to do this is how many of us got introduced to jQuery and its easy access to the DOM.

JavaScript has now shown up to the party, with browsers implementing the W3C Selectors API natively.

What does it look like?

It looks a lot like jQuery.

The following example would select all p elements in the document that have a class of either “error” or “warning”.

var alerts = document.querySelectorAll("p.warning, p.error");

(Example above taken from the API Examples)

I’ve created a demo that shows this example in action. It uses the classList property, so don’t try this in IE. :)

In addition to querySelectorAll, we can use querySelector, which returns only the first matching element. Also, querySelector is not restricted to CSS IDs and classes – you can use it with HTML5 elements as well.
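
For example (the markup and class names here are hypothetical):

// First <article> element in the document – an HTML5 element, no ID or class required
var firstArticle = document.querySelector("article");

// First paragraph with a class of "warning"
var warning = document.querySelector("p.warning");

// And toggling a class no longer needs regular expressions
firstArticle.classList.toggle("highlighted");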


Um…What About Browser Support?

querySelector and querySelectorAll are supported by all the major browsers from IE8 and up. Of course, you need to be careful which CSS selectors you are querying, because not all browser versions recognize all selectors.
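
If older browsers are in the mix, a simple feature check keeps things safe:

if (document.querySelectorAll) {
    var errors = document.querySelectorAll("p.error");
} else {
    // Fall back to getElementsByTagName plus manual class checks, or to jQuery,
    // for pre-IE8 browsers.
}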

Should I care?

Poke around inside jQuery and you’ll find references to querySelector – looks like jQuery is using this native API too (when it can). So, if you’re already using jQuery in your project and you’re more comfortable with jQuery selectors this new API isn’t going to rock your world. If you’re not using jQuery, are not worried about old pre-IE8 browsers and are trying to keep your project super-lightweight then these new selectors will make your coding much easier. So it looks like it is up to you and your situation.

Want to Improve your JavaScript Chops?

Sencha Animator: A Test Drive

Posted on March 27, 2012 at 12:55 pm in Blog, Development

What is Sencha Animator?
Sencha Animator is a new tool that makes it easy to create CSS3 transformation-based animations. So easy that you don’t even need a whiff of CSS3 skills!
Actually, working with Animator will look very familiar to anyone who’s used the Flash IDE (or any tool that uses timelines) to create animations.

Let’s walk through a simple Sencha Animator project.

Our finished project will look like this.

First, you’ll need to download and install Animator – get it here.

1. Set Up Your Project
Once you get it up and running you’ll select File–>New Project and set the size (ours is 600×320). Next, save your project (File –> Save) where you can find it again.

2. Add Images
For our project we’ll be fading in each of the four elements of our logo. Assuming we’ve already separated the logo into PNGs, the first step is to place the images onto the Canvas.
Select the Image Tool and then click anywhere on the Canvas.

Now we have a placeholder graphic on the Canvas. Let’s link this to our image. Click the button next to the default image name in the General Object panel and browse to your image.

While in the Object Panel with your image selected you’ll also want to set the image Name and Position.

Repeat these steps with the additional images. You should now have 3 layers in your Object Tree. You can rearrange these so that the layers are stacked correctly.

3. Well, that’s great – LET’S ANIMATE!
Set the Playhead between 0s and 1s and double-click in the timeline of the bottom layer (ours is called ‘LeftThing’).
This will create a white Keyframe and the Properties for this Keyframe will be displayed.

Under Properties, change the Easing to ‘Linear’. This will connect the Keyframe to another Keyframe at 0s.
Select the Keyframe at 0s and change the Opacity to 0% so that this element will appear to fade in to the scene.
(You can scrub the playhead to watch it fading in — ooohhh, aaahhh!)
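
Under the hood, the exported animation is plain CSS3. A hand-written equivalent of this fade-in would look roughly like the following; the #leftThing selector is illustrative, matching the ‘LeftThing’ layer above.

#leftThing {
    opacity: 0;
    -webkit-animation: fade-in 1s linear forwards;
    animation: fade-in 1s linear forwards;
}

@-webkit-keyframes fade-in {
    from { opacity: 0; }
    to   { opacity: 1; }
}

@keyframes fade-in {
    from { opacity: 0; }
    to   { opacity: 1; }
}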

Repeat this process with the other two images so that each element fades-in on top of each other. Your timeline should look similar to this.

4. Add Some Text
Let’s create some text and fade that in too.
Select the Text Tool and click on the Canvas.
Just like with the Image Tool, we need to adjust the properties of our new Text Element.
See the screenshot below to see the settings that we used.

To simulate our logo, we duplicated the text layer (Ctrl-D) and changed the Content to a left parenthesis and then repeated to create a right parenthesis. We then positioned and changed the Fill Color of these new text layers to match our RealEyes logo.

Next we’ll animate these layers to fade-in like the previous layers.

5. Add Interactivity
Excellent. Now we have a logo whose various elements fade-in and then the animation stops.
So, how hard would it be to add some interactivity and make the animation repeat if the user clicked on the logo?
Here’s how.
Select the top-most image layer (‘Yellow Thing’ in our example) and open the Actions panel. You’ll notice several interactions to choose from.
Select ‘click’ and then ‘Go to next scene’ from the drop-down menu.

6. Export the Project
Almost done! Lastly, we need to select File–>Export Project and then FTP this to our favorite web-server or simply open the html file that Animator creates as it exports the project.
Voila – you have some snappy animation that looks a whole lot like Flash – but isn’t!

With browser support for CSS3 animation growing every day, designers and developers have been turning to frameworks, libraries and plugins like transform.js, paper.js, move.js and JSAnim to simplify their workflow. However, making convincing animations with pure code can be a frustrating and ultimately disappointing process. Because successful animation depends on nuance and timing, creating animations with some kind of IDE or GUI has always been the natural solution (Flash owes a lot of its success to its easy-to-use and powerful timeline controls).

Without getting into advanced easing, multiple scenes, z-axis rotations, etc., we’re really just scratching the surface of what this tool is capable of. While Sencha Animator is still a work in progress and will never be able to offer the power of the Flash IDE, we’ve seen that Animator is intuitive, easy to learn and offers a time-saving GUI for modifying CSS properties over time.
Another plus – the version that we used (1.2) seemed very stable.

Interested in learning more about the power of Sencha or their tools? Contact us!

The past week has brought a series of announcements from Adobe that has elicited myriad speculation and concern from the Flash Platform and Adobe community.  As a leading Adobe Solutions provider for Flash Platform solutions, RealEyes wants to address these announcements and how we see them impacting our focus in the technological ecosystem.

Before we begin this analysis, from our vantage point, the largest issue with these announcements is the way in which they were communicated—to the public, to partners, everyone.  There was much good news in what Adobe announced; unfortunately, their public relations team chose to focus largely on what was being deprecated, which colored the resulting dialog.

We’d like to take a moment to refocus this conversation for our customers and community.  Contrary to popular debate, Flash is NOT dead.  And here’s why:

Adobe Focus on Mobile Applications

Adobe announced that it would be more aggressively contributing to HTML5, with future Flash Platform development to focus on personal computer and mobile applications.  Great!  Our clients who are developing mobile experiences are universally doing so with the intention of making installable applications.  More Adobe focus in this area will only enhance the experiences that we are able to work with them to deliver.

The Flash Platform is still the best way to develop mobile application experiences intended to be deployed across the major application marketplaces: Apple, Android, and Blackberry.

However, what got the most attention in this announcement was that Adobe is discontinuing development of Flash Player for the mobile browser.  While this got many people up in arms, declaring the general demise of the Flash Player, we at RealEyes can respect this decision and see the validity of it.  For Adobe, the return on investment for this runtime simply wasn’t there, and with the fragmented nature of Android (and a few other issues that contribute to delivering an application to all browser, OS, and mobile hardware configurations) the continued development of the mobile Flash Player would be exponentially complex.

For application developers, the mobile Flash Player was never as good a runtime as the desktop one.

So, how is the discontinuation of mobile Flash Player affecting our clients? Really, it isn’t.

Because mobile device users are more likely to look exclusively toward installable applications for rich media content—and RealEyes’ Flash Platform applications largely deliver rich media content—our customers have been developing applications built using the Flash Platform and relying less on the mobile web.  Mike Chambers does a nice job of discussing the differences in how users consume rich content on mobile devices compared to the desktop, and we agree wholeheartedly that this is the way to go.

Because Flash Player doesn’t have the same ubiquity on mobile devices as it does in desktop browsers, RealEyes was already advising our clients to create fallback experiences for their Flash content for mobile browsers.  For most of them we could achieve the same functionality in HTML as in Flash (video being the exception, as you’ll see below).  Why not forgo Flash entirely and have a single HTML codebase to support?  Seems like a decision that makes good business sense.

Not that we aren’t sad to see mobile Flash Player go: we are.

If only because we don’t want the web to have missing plugin alerts. Having the Flash Player plugin available to Android and Blackberry mobile browsers was a convenience that offered a great marketing pitch but, truthfully, delivered very little.  This is due, in large part, to the fact that the majority of the web was designed for the desktop and was not meant for (nor is it very functional on) mobile phones – period, full stop.

In truth, we’ve seen only a very few Flash applications developed specifically for the mobile browser.  We at RealEyes have developed just one of these for commercial release, and that application was built before AIR for Android and was always intended to be a stop-gap until that runtime was available.

Now, tablets make a better use case for Flash’s place in the mobile ecosystem; however, the number of tablets that support Flash is under 30% of market share.  Given this and Apple’s seeming prohibition of Flash, the Flash Player was just never going to achieve the same ubiquity on tablets, or on mobile phones for that matter, as it has on the desktop.

Adobe Supports HTML5 Development

As Adobe is a multimedia creation company it will want to be at the forefront of whatever technology is defining exceptional user experiences for multimedia delivery.  And, for a few years now, Adobe’s been looking toward HTML5.  Unfortunately, the announcement from Adobe that contains the information about the discontinuation of the mobile Flash Player makes it sound like Adobe’s just jumping on HTML as a development platform.  That’s just not true.

Even more unfortunate in the present debate is a perception that Steve Jobs’ thoughts on Flash have somehow won and that this was just fallout from an Apple v. Adobe war.  Not so fast.  Apple and, to some degree, Microsoft have done much to market HTML5 development, to the point that its perception overpromises what it can deliver.  Although Adobe has been working to educate its community about the benefits of the Flash Player over HTML5 and was backed by legions of developers, animators, designers, and content creators, it couldn’t overcome the tactics of such powerful and cunning marketing machines.  While standing its ground on the mobile Flash Player, Adobe was, in many ways, able to achieve what critics said was not possible with Flash Player on mobile devices.

So, if Steve didn’t win, who did?

Well, Adobe is still poised to win and … more importantly so is its community of developers and customers.  Look at tools like Adobe Edge and the new mobile enhancements to Dreamweaver.  Also, with Adobe’s acquisition of PhoneGap, Adobe developers are poised to deliver the best HTML5 experiences out there.  Yeah, it’s not Flash … but that’s OK. While it seems like Adobe’s making a sharp turn toward HTML5, from where we sit, they are more fully committing to a direction that Macromedia, and then Adobe, started in some time ago.  Remember the HTML and Flash being friends video from Adobe MAX last year?

And, with other recent innovations for mobile AIR such as the availability of native extensions, the future of mobile development is exhilarating for any Flash Platform developer.  We’re hopeful that Adobe will use this opportunity to sharpen their focus on native mobile functionality and continue the path of making the Flash Platform the best choice for developing multi-platform mobile applications with a single code base.

However, the perception that Adobe’s making a rash decision is very damaging and something that we’re working with our clients to help them understand.  The reality of the situation is that not much has changed; however, poor communication, horrible messaging, and virtually no community outreach from Adobe regarding this messaging has made the perception the accepted reality in the short term.

And, if that weren’t enough news for one week …

Adobe Really Open Sources Flex

In clarifying its future plans for the Flex SDK, Adobe announced that the Flex SDK will be contributed to an open source foundation.  The good news in this move is that the Flex community is mature enough to take on the governance of this robust framework moving forward.  This wasn’t the case in February of 2008 when Adobe released Flex 3 as open source (Adobe had been planning to open source it since April of 2007).

For several years now, Adobe has been moving towards a more open standard with their development, and this decision to contribute the Flex SDK to an open source foundation isn’t something Adobe has done in isolation, nor is it limited to the Flash Platform.  Some other projects that are on this path are:

  • PhoneGap
  • BlazeDS
  • Flex SDK

And, in reading Adobe’s clarification to this open source announcement, we see even more reason to be excited.  They are also open sourcing tools that support Flex including an experimental one (Falcon JS) that cross-compiles MXML and ActionScript to HTML and JavaScript.  Now, that’s exciting!  And, we’re sure that more is on the horizon.  Maybe HTML and Flash can be friends after all.

And, let’s be honest, the original model that Adobe used to open source Flex didn’t go as planned.  While Adobe always said they welcomed contributions from the community to grow and improve the Flex SDK, the process for getting a change accepted was unclear and many community contributions were rejected for any number of reasons (valid or invalid).  Adobe simply did not have the process or the resources to handle the influx of developers who wanted to contribute.  It was a frustrating situation for the Flex development community (and arguably Adobe as well).

So, the vibrant Flex community answered back earlier this year by creating the Spoon Project to better organize and test Flex SDK modifications submitted by the Flex community.  It proved to be an excellent model, drove innovation of the Framework, and was an initial step toward the full open source move that Adobe just announced.

Who’s governing the future of Flex? We are!

In case the nuance in what’s different now versus Adobe’s 2007 decision to open source Flex isn’t apparent, the major difference is that the Flex community will extend the Flex code base without needing Adobe’s permission to do so.  A new governance, following Apache’s well-established community rules, will be formed to determine the future direction of the codebase.

Since our inception, RealEyes has been in close contact with Adobe’s Flash Platform team, and we’re excited about this change in governance. RealEyes has always been super excited about the Spoon Project, and our Development Manager (Jun Heider) is very active in this community as the Infrastructure Chairman.  We’ve seen that this is truly a community-driven initiative that is supported by Adobe to increase the volume, speed (and maybe even the quality) with which the Flex framework can grow.

We are excited to contribute further to the future of Flex and confident that, like other successful open source communities, the language will continue to evolve.

Also … Flex isn’t all of the Flash Platform

Sadly, many of the announcements that we’ve been talking about, including the open sourcing of Flex, led many to say that Flash is dead. That simply isn’t true.  Let’s talk about what the Flex framework actually is: a particular framework used to structure Flash Platform development.  Do you have to use it to develop Flash Platform applications? No. And, to be honest, RealEyes doesn’t use Flex in every Flash Platform project, because sometimes the framework can make applications too “heavy”.  If performance is of paramount concern for a Flash Platform application, Flex often cannot replace pure ActionScript.

Flash and Flex are not going away.  Adobe is still committed to developing tooling to support development for the Flash Platform. Further, Adobe hasn’t open sourced the Flash Player, the most installed piece of software in the history of the internet.  Adobe plans on steadily contributing to the Flex SDK in its open sourced project and we are working with the Flex community to make us contributors as well.

Adobe and Enterprise Applications

In a week of poorly handled communication, probably RealEyes’ largest concern was Adobe’s statement that “In the long-term, we believe HTML5 will be the best technology for enterprise application development.” Ouch.  Big enterprises have invested millions upon millions of dollars in the development and maintenance of Flash Platform applications.  At the very least, that statement can erode the confidence that large companies (or companies of any size, really) have when building systems based upon Adobe technology, which we feel is probably a bit of an over-reaction.

Also, without context this statement is very misleading.  Currently, HTML5 does not have full functional parity with the Flash Platform.  A few days after making this statement, Adobe clarified it by indicating the timeframe it expects HTML5 to need before it can truly compete with Flash Platform development: three to five years. That timeframe could be heavily extended when considering corporate browser adoption timelines.

There’s no enterprise that can wait three to five years for functionality.

As Adobe stated, “Flex has now, and for many years will continue to have, advantages over HTML5 for enterprise application development – in particular:

  • Flex offers complete feature-level consistency across multiple platforms
  • The Flex component set and programming model makes it extremely productive when building complex application user interfaces
  • ActionScript is a mature language, suitable for large application development
  • Supporting tools (both Adobe’s and third-party) offer a productive environment with respect to code editing, debugging and profiling and automation.”

We see all of that as being the case, and then some:

  • Enterprise clients tend to have slower adoption rates for software, meaning that not all enterprises support the advanced HTML5 features that exist.
  • In particular, the video capabilities in HTML5 are not as robust as what is available in the Flash Platform including multicasting with integrated hardware acceleration and advanced security models.
  • The testing issues for supporting browser fragmentation can be daunting to enterprises, compared with supporting a Flash Platform application that can be deployed across desktop browsers with consistent display and functionality.

RealEyes will continue to recommend Flex and Flash Platform development to our clients where it makes real business sense to do so.  That said, there are reasons to use HTML over (or alongside) the Flash Platform, and we have plenty of clients we support who do that as well.

The Impact to RealEyes

So, what does all of this mean to RealEyes?  In the short term, it has meant a challenge to bring context to Adobe’s announcements and dispel rumors and misinformation to our clients. In the long run, it probably doesn’t mean a lot.

We have already been on a path of technology diversification with continued focus and adoption of HTML5, its supporting technologies, and native mobile development. Many of us are in the technology space because we enjoy the challenge of evolving our skills as the industry grows.  However, for the next few years, we anticipate that the Flash Platform will continue to be our predominant focus.

Our development specialty has been in delivering industry-leading streaming media solutions and multiscreen development. Flash and AIR are still the best solutions for this and will be for a while.  The timeline for that largely depends on Adobe and, as a valued Adobe Solutions Partner, we will continue to support them in as educated and balanced a way as possible.

We are actively involved in the future of the Flex framework through the Spoon Project and excited about the potential for future growth for that project.  We are now even more apt to contribute to the betterment of this already robust framework for the benefit of the Flex community.

Finally, RealEyes has always helped our clients choose the best technology to power a given project, and we will continue to do this.  As HTML5 becomes a more comprehensive solution, we will likely recommend it more frequently. It is truly about what is right now and in the future, on a case-by-case basis. Our clients and projects will continue to be industry leaders, no matter the technology behind them.


Now, we can’t see all of the news in a positive light.  And not all of it is positive – certainly not for the 750 Adobe employees who were laid off and their families. However, this degree of restructuring in the fourth quarter isn’t unprecedented for Adobe.  We’ve seen this over the past couple of years.  This year, as in years past, we lost meaningful relationships with Adobe employees that we’ve been happy to collaborate with on community and development projects.  We at RealEyes have close contact with Adobe and tend to focus on how individuals shape the platforms, products, and communities that we work with instead of quarterly earnings and fiscal projections.  While adjusting to this restructuring, we wish all of the affected employees only the best in their next moves and hope that they will continue to make positive contributions to the technical community they have helped to shape.

Additional Links:

The JavaScript-based rich text editor CKEditor is a great tool to use in projects that require you to give your clients the ability to edit their own HTML pages. Implementation is dead simple, and the list of configuration options is long, giving you a great deal of flexibility in terms of the functionality you can implement in the HTML editor. For example, in addition to editing text, you can configure CKEditor to give your clients the ability to add images to their pages, either by linking to existing ones on the web or by uploading them from their own computers.
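
As a taste of how simple the basic setup is, replacing a textarea with an editor and pointing it at an upload handler takes only a couple of lines; the element ID and upload URL below are placeholders for your own page and server-side script.

// Replace the <textarea id="pageBody"> on the page with a CKEditor instance
CKEDITOR.replace('pageBody', {
    // Enables the Upload tab in the image dialog; point this at your own upload handler
    filebrowserUploadUrl: '/admin/upload.php'
});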

If this sounds like a tool you’d like to use in your own workflow, be sure to check out what Realeyes developer Nils Thingvall has to say about it. Nils’ article gives you helpful tips on configuring your editor to work with image uploading, a topic that is unfortunately under-documented on the CKEditor site. Thanks Nils!

Load testing service APIs got you down? How about load testing PHP-based AMF service APIs? Thought so. Fear not, because John Crosby recently posted his findings about two AMF load testing tools he says are great! He’s talking about soapUI and loadUI, the free-of-charge, open-source tools created by the fine people at SmartBear.

John shows you how to use these tools, walking you through step-by-step as you set up a project, configure an AMF request, and set up load testing using soapUI. He also walks you through load testing with loadUI.

It’s clear that John is pretty excited about the handiness of these two load testing applications, and he’s already looking forward to integrating them with our Continuous Integration (CI) system. Stay tuned for more on that soon! For now, happy testing!

Read the original article