Strategic Consulting

I love using Vagrant in my development environment, and during my pet projects I love to leverage community boxes. However, many of the clients I work with have very specific needs. Preventing the “works in my dev environment” scenario often requires creating a Vagrant box from scratch to mirror the QA and production environments as closely as possible. Many of my clients require specific versions of Oracle Linux (OEL) and Red Hat Enterprise Linux (RHEL) running specific kernels, packages, and configurations. If you’re new to providing these environments, here are some tips and links that I’ve leveraged to get me through a solid execution of “vagrant up”.

Standard Vagrant Box Creation Steps

I’m most appreciative of the technologists out there who spend time writing posts to help others, thank you! These posts are often very good, but realistically can’t account for every situation out there. Here are a couple of really good posts on custom Vagrant box creation that I’ve leveraged:

These posts provide outstanding step-by-step details on how to build a custom Vagrant box and should be your general reference for creating yours. That said, this post will piggyback off them with a list of the things I encounter with OEL and RHEL on a frequent basis and how I’ve resolved them.

Issues “Seemingly” Specific to RHEL and OEL and What I’ve Done to Resolve Them

Here are the common issues I’ve run into and what I’ve done to fix them. These aren’t necessarily the best ways to resolve them, just what has worked for me. If you’ve been down this road and have better solutions, I’m more than happy to hear them. :)

RHEL Requires a Subscription

RHEL requires a subscription. Although you can install RHEL without one, the first time you run yum to get new or updated packages you’ll run into a message indicating that a subscription is required. To resolve this:

  • If you’re going to be working a lot with RHEL it’s not too expensive to get an RHEL Developer Suite Subscription. I highly recommend it for those of you that work with RHEL.
  • From there you’ll want to ssh into your RHEL VM and utilize the subscription-manager tool to register and attach a subscription to your Vagrant base box.
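For reference, the registration dance inside the VM looks roughly like this (run as root; the username is a placeholder for your own Red Hat developer account):

```shell
# Register the VM against your Red Hat account (credentials are placeholders).
subscription-manager register --username you@example.com --password 'yourpassword'

# Automatically attach the best-matching available subscription.
subscription-manager attach --auto

# Confirm yum can now see the subscribed repositories.
yum repolist
```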

The eth0 Interface is Disabled on a Fresh Install

Vagrant utilizes the eth0 interface to communicate with and configure your box when you “vagrant up”. Maybe I’ve missed a step during the install dialogs, but both my OEL and RHEL instances have spun up with eth0 disabled (probably for security reasons). The fix was simple sysadmin work: enable eth0 on boot.
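On RHEL/OEL that means making sure the interface’s config script, /etc/sysconfig/network-scripts/ifcfg-eth0, has ONBOOT enabled; a minimal sketch (your file may have more entries):

```
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp
```

Follow it up with a network service restart (or a reboot) and eth0 should come up on its own from then on.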

If You Need a Specific Kernel – Don’t yum update!

Sometimes there will be a requirement to use a specific Linux kernel. If you run yum update, you run the risk of accidentally downloading and enabling an unexpected, updated kernel. If for some reason you do, you can revert; it’s just kind of a pain and an unnecessary time expenditure. 😉 No one’s perfect and I’ve done it myself, so if you’re like me and need to revert sometime, here’s an example of what I’ve done with RHEL (OEL is most likely very similar):

  • Revert the damage of accidentally running yum update by downgrading.
  • One thing I had to do after running through downgrade was to modify my /etc/grub.conf to make sure the boot kernel selection was properly updated as well.
  • Make sure you reboot to verify your changes.
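A sketch of that recovery, assuming the old kernel is still installed alongside the new one (the version number below is a placeholder; use the one rpm reports on your box):

```shell
# List the installed kernels; an accidental update leaves both versions in place.
rpm -q kernel

# Remove the unwanted newer kernel (version string is a placeholder).
yum remove kernel-2.6.32-573.el6

# Check that the default entry in /etc/grub.conf points at the intended kernel.
grep -n 'default' /etc/grub.conf

# Reboot, then confirm you're back on the expected kernel.
reboot
# ...after the box comes back:
uname -r
```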

VirtualBox Guest Additions on OEL and RHEL

I use VirtualBox as my virtualization software. It provides some OS add-on functionality called Guest Additions. I can never install them out of the box with OEL or RHEL; getting around this usually requires installing additional packages via yum. The nice thing is that when you try to install Guest Additions, you usually get some helpful console output about which packages are missing:

  • On RHEL I had to install kernel-devel* and some additional packages such as perl. When working with OEL I had to install kernel-devel* as well.
  • If you install any kind of kernel packages to support the proper installation of Guest Additions you will need to reboot your box before the Guest Additions installer will see that they’re there.
  • Here’s a post I referenced in the past when working through my issues:
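In practice the prerequisite install has looked roughly like this for me (the exact package set may vary with what your console output complains about, and the ISO mount point depends on how you attach it):

```shell
# Install the build prerequisites Guest Additions typically needs.
# kernel-devel must match the running kernel, hence the $(uname -r) suffix.
yum install -y gcc make perl bzip2 kernel-devel-$(uname -r)

# Reboot so the installer can see the freshly installed kernel packages.
reboot

# After the reboot, attach the Guest Additions ISO from the VirtualBox
# Devices menu, mount it, and run the installer.
mount /dev/cdrom /mnt
sh /mnt/VBoxLinuxAdditions.run
```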

Help! Vagrant is Having Issues Configuring My Box!

You’ve painstakingly created a custom Vagrant box and used “vagrant package” to create your box file. Now you’re testing it via “vagrant up” and are excited to upload it and share it with your dev team. All goes well until Vagrant starts trying to connect to your box via SSH, set the hostname, or configure additional adapters that you’ve defined in your Vagrantfile. Sometimes the solution is trivial; sometimes it requires a lot of googling. Here’s what I’ve found to help me through these issues:

Configuring SSH Access for Vagrant

Vagrant uses an insecure key to connect to the box (at least initially; more on that below). For this to work, Vagrant requires a “vagrant” user with sudo access (which you’ll have if you followed one of the Vagrant box tutorials) and the ability to connect to the box via SSH without requiring a tty. To fix the tty requirement, I log into my box and use visudo to change the line “Defaults requiretty” to “Defaults !requiretty” before creating my box with “vagrant package”.
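The relevant sudoers entries end up looking something like this (always edit the file with visudo, never directly; the NOPASSWD line is the passwordless sudo the box tutorials have you set up):

```
Defaults !requiretty
vagrant ALL=(ALL) NOPASSWD: ALL
```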

Fixing – Can’t set the Hostname

In this case, just make sure that the openssh-clients package is installed on your box image via yum before creating your box with “vagrant package”.
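For completeness, that’s just:

```shell
# Vagrant shells out to the ssh client when setting the hostname,
# so the client tools must be present in the box image.
yum install -y openssh-clients
```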

Supporting Multiple Adapters

“I want more than one network adapter configured, but every time I ‘vagrant up’, eth1 or above doesn’t seem to get configured.” This one was a pain to track down, required multiple repackages while debugging, and it seems many people run into it with Vagrant. In my work, I’ve found a pretty consistent way to make it disappear. Before you run “vagrant package” you need to:

  • Remove the config script for the adapter that’s problematic. For instance with eth1 you would type:
    • rm /etc/sysconfig/network-scripts/ifcfg-eth1
  • Remove the persistent net rules file:
    • rm /etc/udev/rules.d/70-persistent-net.rules
  • I make it a practice to run the two commands above right before creating a box with “vagrant package”

Insecure Key – Repackaging a Vagrant Box

I’m not sure which version the feature was implemented in, but recent versions of Vagrant will replace the insecure key in the authorized_keys file of the vagrant user with a more secure key generated during “vagrant up”. The fix is to update the authorized_keys file by removing the generated secure key and replacing it with the insecure public key. You should do this whenever you need to “vagrant package” an existing Vagrant box that has been upped.
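A sketch of restoring the insecure key (the URL is where Vagrant’s well-known public key lives in its GitHub repo at the time of writing; verify before relying on it):

```shell
# Inside the box, as the vagrant user: replace the generated key with
# Vagrant's well-known insecure public key before repackaging.
curl -fsSL \
  https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant.pub \
  > ~/.ssh/authorized_keys

# Permissions must stay strict or sshd will ignore the file.
chmod 0700 ~/.ssh
chmod 0600 ~/.ssh/authorized_keys
```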

In Conclusion

Maybe you find these tips useful, maybe you know a better way to create Vagrant base boxes. Whichever boat you’re in, these are the tips that have worked for me throughout my OEL and RHEL custom box creations. I love using Vagrant and very much appreciate all the developers out there who have shared their source code, tips, and tricks. If you have anything you’d like to share as well, I look forward to hearing about it.


When I was tasked with determining the maximum number of users that an m4.xlarge AWS instance running Wowza Streaming Engine could reliably handle while delivering HLS content, I quickly found I had a fairly difficult task ahead of me. Luckily, I found a few blog posts that pointed me in the right direction and provided the base JMeter test plan to work with.

Understanding HLS Protocol

HLS, or HTTP Live Streaming, is an HTTP-based media streaming protocol developed by Apple, Inc. It works by breaking a media resource into a sequence of smaller chunks, which may be encoded at a variety of different data rates. Unlike other protocols such as RTMP, HLS is HTTP-based, so it passes through any firewall or proxy server that lets through standard HTTP traffic. This allows content to be delivered by standard HTTP servers and from a wide variety of CDNs and streaming engines such as Wowza.
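To make that chunked structure concrete, here is a minimal, hypothetical master playlist of the kind a client requests first; each entry points at a chunk-list for one encoding rate, and each chunk-list in turn enumerates the media fragments:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1280000,RESOLUTION=640x360
chunklist_w640.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2560000,RESOLUTION=1280x720
chunklist_w1280.m3u8
```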

Testing Platforms and Plugins

When first researching the best methods to successfully test our service I came across a few options:

The first option was to follow the directions in a blog post I found. This provided some good information, but since the project was fairly small it did not seem worth paying extra money for this load test. Maybe in the future, if we need to perform load tests more often, this might be a good route for us.

The second option I found was an HLS plugin for JMeter developed by Ubik Ingénierie. Ubik wrote a great blog post describing how most load testing of HLS content can be very unrealistic, and how time consuming it would be to create an exact real-world simulation of how a browser handles the HLS protocol. After downloading their plugin and starting some initial testing, I found it very easy to set up, and it provided some great test results. However, their free plugin only allows a very limited number of concurrent users, and the price for the plugin with the number of users we wanted was too much for this project. If we ever run into a very large project that needs the most reliable test data, this plugin would be a great choice for us.

The third option, which I ultimately followed through with, was to create my own test script using JMeter. The results would not be as accurate as the first two options, but they would be good enough for the size of the project I was working on. I started with the code base I found here, along with the steps mentioned by Itay Mendel in his blog post on blazemeter.com, then modified the test plan to match my needs.

Creating the Test Plan

The first step is to make a standard HTTP Request for the contents playlist manifest.


Once the playlist is requested, I created a Gaussian Random Timer with a deviation of 3000 ms and a constant delay of 1000 ms. This creates a delay between subsequent requests of roughly 0 to 4 seconds, following a Gaussian distribution (similar to a symmetric bell curve) around the constant delay.


The next step is to parse the response of the playlist request for a list of chunk-lists using a Regular Expression Extractor.
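Outside JMeter, the same extraction can be sketched in shell; the playlist below is a made-up sample, and the grep pattern plays the role of the Regular Expression Extractor:

```shell
# A made-up playlist response; real chunk-list names come from your server.
playlist='#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1280000
chunklist_w640.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2560000
chunklist_w1280.m3u8'

# Extract every chunk-list reference, as the extractor regex would.
printf '%s\n' "$playlist" | grep -o '[^[:space:]]*\.m3u8'
```

The test plan does exactly this against the live response, storing the matches in a variable for the next controller to iterate over.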


Then a ForEach Controller loops through each chunk-list extracted from the playlist and makes another HTTP Request for the chunk-list manifest.



Now we perform the same process to extract the stream fragments from the chunk manifests.





Overall the structure of the requests and variable extractions looks like:


Finally, once the test structure is complete and all bugs are worked out, the last step is to configure the Thread Group for the concurrency and duration of the test. For our load testing we wanted to start with 300 users, with a 60 second ramp-up, each streaming video for 5 minutes. Then, after each test, we bumped the number of users up by 300. To do this I set up the Thread Group options as such:


One interesting point: in order to control how long each user streamed the video, a scheduler was created with a random start date in the past. This starts the test instantly and makes the duration option available for use.

Cost and Server Optimization

Since we were testing against an Amazon EC2 instance, we wanted to limit costs during the load testing. To avoid paying for large amounts of bandwidth (we ultimately wanted to test against more than 1000 users streaming content), we launched a second m4.large instance and installed JMeter on it. This new instance became the server from which we ran the tests, which allowed us to capitalize on AWS pricing for data transfer over private IP addresses, which is exactly $0.

While optimizing, I found a post by Philippe Mouawad here, which explains the best practices and tuning tips for using JMeter with high concurrency. Here are the tips that I integrated into my testing environment:

  • Remove all listeners and instead generate results after the test is complete.
  • Log all results in CSV format.
  • Run JMeter from the CLI instead of the GUI: <JMETER_HOME>/bin/jmeter -t <Path to Test Plan> -n -l <path to results>/results.csv

Some other tunings we performed on both the JMeter instance and the Wowza instance were:

  • Increased the number of files allowed to be opened on the server, “nofile”
  • Increased the maximum file handles that can be allocated, “fs.file-max”
  • Increased the max amount of file handles that can be opened, “fs.nr_open”
  • Increased how many connections the NAT can keep track of in the “tracking” table before it starts to drop packets and just break connections, “net.ipv4.netfilter.ip_conntrack_max”

We did this by adding the following to


root soft nofile 1000000
root hard nofile 1000000
* soft nofile 1000000
* hard nofile 1000000
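The kernel-level settings above went into sysctl along these lines; the fs values mirror the nofile limits, while the conntrack value is illustrative (the exact key name varies by kernel version), and the changes are applied with `sysctl -p` after editing /etc/sysctl.conf:

```
fs.file-max = 1000000
fs.nr_open = 1000000
net.ipv4.netfilter.ip_conntrack_max = 1000000
```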




We’re grateful to know that there are so many resources available on the Internet to help work through new challenges. With a little creativity and the ability to piece together information from multiple sources there’s so much that can be accomplished. For instance, even if you’re testing VOD HLS on Adobe Media Server or some other streaming solution, you should be able to take some pieces from our article and the resources we leveraged to jumpstart your own load testing initiative.

We enjoyed working through our challenge, and hopefully this article helps you work through your challenge in the way that the articles we found helped us.

JWPlayer states:


“JW Player supports both formats [WebVTT and SRT] across all browsers though, in both Flash and HTML5 mode. The only exception is playback in fullscreen on iOS, since the native fullscreen mode does not allow JW Player to print captions over the video. Since the iPhone can only display video in full screen, this means that captions will not function on this device.”

This didn’t make any sense to us, since the HTML5 video element has built-in closed caption support.

So in order to change this we created a simple workaround.




Example Code:

jwplayer("media").setup( mediaSetupObject );

// iOS Closed-Captions fix
var video, track;

jwplayer('media').onReady(function() {

    // determine if on an iOS Device
    var iOS = (navigator.userAgent.match(/(iPad|iPhone|iPod)/g) ? true : false);

    // 'type' is a parameter we pass in with the page url (see note below)
    if (iOS && type === 'mpl') {

        video = document.getElementsByTagName('video')[0];

        track = document.createElement('track');
        track.kind = mediaSetupObject.playlist[0].tracks[0].kind;
        track.label = mediaSetupObject.playlist[0].tracks[0].label;
        // TODO: Need to account for different languages
        track.srclang = track.label === 'English' ? 'en' : 'en';
        track.src = mediaSetupObject.playlist[0].tracks[0].file;

        // add the track to the video element, then keep captions hidden by default
        video.appendChild(track);
        video.textTracks[0].mode = 'hidden';

        jwplayer('media').onFullscreen(function(data) {
            if (data.fullscreen) {
                if (video.textTracks[0]) {
                    video.textTracks[0].mode = 'showing';
                }
            } else {
                if (video.textTracks[0]) {
                    video.textTracks[0].mode = 'hidden';
                }
            }
        });
    }
});

Example Config File:



{
    "playlist": [
        {
            "image": "splash.png",
            "title": "Sintel",
            "description": "Video created by: Jun Annotations: LOL",
            "sources": [
                {
                    "file": ""
                }
            ],
            "tracks": [
                {
                    "file": "",
                    "label": "English",
                    "kind": "captions",
                    "default": "true"
                }
            ]
        }
    ]
}

1. Listen for JWPlayer ‘onReady’ api callback.



Fired when the player has initialized in either Flash or HTML5 and is ready for playback. Has no attributes.

jwplayer('media').onReady(function() {});


2. Determine if on an iOS device.


var iOS = (navigator.userAgent.match(/(iPad|iPhone|iPod)/g) ? true : false);


3. Get reference to html video element. Create track element. Add kind, label, srclang, and src attributes to the track element.


video = document.getElementsByTagName('video')[0];

track = document.createElement('track');

track.kind = mediaSetupObject.playlist[0].tracks[0].kind;

track.label= mediaSetupObject.playlist[0].tracks[0].label;

// TODO: Need to account for different languages

track.srclang = track.label === 'English' ? 'en' : 'en';

track.src = mediaSetupObject.playlist[0].tracks[0].file;


In the above example these values are brought in from an external config file used for setting up the video element.


*Note: type === ‘mpl’ is referring to a parameter we passed in with the page url. This can be omitted for your personal use.


4. Add the track element to the video element and set the textTracks mode to hidden.



video.appendChild(track);

video.textTracks[0].mode = 'hidden';


Since by default JWPlayer will still play closed captions on iPads without dropping into the native player, we need to default the textTracks mode to hidden.


5. Listen for JWPlayer onFullscreen api callback.



Fired when the player toggles to/from fullscreen. Event attributes:

  • fullscreen (Boolean): new fullscreen state.


jwplayer('media').onFullscreen(function(data) {
    if (data.fullscreen) {
        if (video.textTracks[0]) {
            video.textTracks[0].mode = 'showing';
        }
    } else {
        if (video.textTracks[0]) {
            video.textTracks[0].mode = 'hidden';
        }
    }
});


If the device is in fullscreen, set the video’s textTracks mode to showing; set it back to hidden when the device leaves fullscreen.




Through our testing we noticed that the SRT format does not work for iOS native playback, so be sure to use the WebVTT format for your closed captions. If you need to convert the format, there are a number of tools out there, such as this one:




Until JWPlayer decides to use HTML5’s built-in closed caption support instead of overlaying text across the video, this workaround will need to be included for your website to support closed captions on iOS devices when playing back video in fullscreen. Luckily, the fix is very simple and unobtrusive.


Reference Links


JWPlayer Closed Captions:

JWPlayer API:

HTML5 Closed Captions:


Live streaming video is not an easy thing to take cross-platform at the moment. Flash can do really well on the desktop using RTMP and HDS, but there’s no Flash on mobile devices. HLS is Apple’s horse in the live streaming race, and it does very nicely on iOS and occasionally works on some Android devices. But take HLS to the desktop and you’re limited to Safari on Macs. Wouldn’t it be nice if we could consolidate a bit?

flashls is a plugin for the Open Source Media Framework and other frameworks that allows you to play back HLS in Flash. There are actually several plugins that allow this now, but flashls is the one we’ve settled on as the best performing and best supported. This is great! Now you can have a little more consistency in how you stream. But here’s a downside: a number of the fancy, fun things you could do with RTMP or HDS are missing. If you want to inject stream metadata for example, all that gets tossed out when the source file gets chopped into fragments for the HLS stream by the live-packager app on Adobe Media Server.

An important piece of data that got discarded for us was the time codes in the stream. We use GMT time codes to synchronize video with time-based data feeds as well as with other videos. With most encoders, including Flash Media Live Encoder, you can choose to inject time codes into your video stream. For RTMP and HDS streams, you can pick out the data by adding an onFI method onto a NetStream’s client object property. But switch to a live HLS stream and that all disappears.

Fortunately, a few things have changed recently. First off, Adobe made some changes to AMS in version 5.0.8. The included Apache server got upgraded to 2.4, and with that came a change to the live packaging of streams. Now you can inject ID3 data into HDS and HLS streams! Adobe has a good article about this. This new feature provides a lot of different ways to now put useful metadata into the live HTTP stream of your choice.

Unfortunately, the flashls plugin didn’t pick up on the injected data. The way Adobe included the ID3 data was a bit different than expected. However, the wonderful folks behind flashls took notice and added in support for extracting this ID3 data from the HLS stream. This is why continuing support of a plugin is important! With the changes we now can get an ID3 frame out of an HLS stream.

Here’s the high-level workflow:

  1. On the Flash Media Server, make a copy of the live-packager app in the applications directory and roll up your sleeves for some Server-Side ActionScript.
  2. In the live-packager copy’s main.asc file, we need to add an onFI listener to the incoming F4F stream in the application.publish() method. In this listener we pull the time code out of the metadata object; one property carries the date and another the timestamp.
  3. Next we need to inject the metadata into the stream as an ID3 frame. The Adobe article lists the supported ID3 frames. I decided to use the Comment ID3 frame to send the information along. This frame only has a single data property, so I join the time code parts together into an easily parseable string. Then we send the metadata object through the stream using the NetStream.send() method.
    s.onFI = function( info ) {
        var metaData = {};
        var comment = { language: "eng" };
        // The date and timestamp fields injected by the encoder are joined
        // into one parseable string, e.g. "2015-06-12|22:03:06.543".
        comment.data = String( info.sd + '|' + info.st );
        metaData.Comment = comment;
        delete comment;
        comment = null;
        s.send( "onMetaInfo", metaData );
        delete metaData;
        metaData = null;
    };
  4. Now we can switch back to AS3 and start working in our OSMF player. I won’t cover the implementation of the OSMF plugin, as the flashls site has a nice sample. The flashls OSMF plugin dispatches events about metadata off of an instance of the HLS class. You can get access to this object by listening for the load trait being added to your media.
    protected function onMediaElementTraitAdd( event:MediaElementEvent ):void
    {
        if( event.traitType == MediaTraitType.LOAD )
        {
            var hlsNSLoadTrait:HLSNetStreamLoadTrait = MediaElement( event.target ).getTrait( event.traitType ) as HLSNetStreamLoadTrait;
            if( hlsNSLoadTrait )
            {
                var hls:HLS = hlsNSLoadTrait.hls;
            }
        }
    }
  5. Then add a listener to the HLS object for the HLSEvent.ID3_UPDATED event.
    hls.addEventListener( HLSEvent.ID3_UPDATED, onID3Data );
  6. This is where things get a little complicated. We need to then pull the ID3 frame out of the string of hexadecimal data that is attached to the ID3_UPDATED event. First step is to convert the hexadecimal string into a ByteArray, which we can then parse using AS3’s ByteArray API. Fortunately, flashls provides us with a utility to access this.
    protected function onID3Data( event:HLSEvent ):void
    {
        var hexData:String = event.ID3Data;
        var hexBytes:ByteArray = Hex.toArray( hexData );
        // parsing of hexBytes continues in the next steps
    }
  7. The next step is to parse the ByteArray to extract all the information. Fortunately, that Adobe article describes how the ID3 headers are constructed. That gives us a guide on how to read byte data out of the ByteArray. We end up with something like this:
    var frameName:String = hexBytes.readUTFBytes( 3 );
    var version:int = hexBytes.readByte();
    var revision:int = hexBytes.readByte();
    var flags:int = hexBytes.readByte();
    var size:int = hexBytes.readInt();
    var frameID:String = hexBytes.readMultiByte( 4, 'utf-8' );
    if( frameID == 'COMM' )
    {
        var frameDataSize:int = hexBytes.readInt();
        var frameFlags:int = hexBytes.readShort();
        var groupID:int = hexBytes.readByte();
    }

    Note that we are checking to see if this is the type of ID3 frame we care about (Comment). The Adobe article references the IDs of the different frame types, and tells us that the Comment frame ID is ‘COMM’.
  8. Now we’ve actually gotten to the data we care about: the ID3 frame’s metadata payload. It appears that the object comes through not as an AMF object, but rather as values separated by a 0 byte character. The first 3 characters we read off the data make up the language code of the ID3 metadata.
    if( frameID == 'COMM' )
    {
        var frameDataSize:int = hexBytes.readInt();
        var frameFlags:int = hexBytes.readShort();
        var groupID:int = hexBytes.readByte();
        var lang:String = hexBytes.readMultiByte( 3, 'utf-8' ); //eng
    }
  9. Then we can read out the 0 byte separator. Then the remaining bytes give us our time code.
    var separator:int = hexBytes.readByte(); //0
    var timestampString:String = hexBytes.readUTFBytes( hexBytes.bytesAvailable ); //2015-06-12|22:03:06.543
    Keep in mind that the seconds come in as a decimal. In order to convert your time code to a Date object, you will need to either round off the decimal or convert the decimal to milliseconds.
  10. Finally, fire up Flash Media Live Encoder, or the encoder of your choice, and make sure it is injecting time codes into your stream. For FMLE, the option is a Timecode checkbox in the lower left. Click the wrench next to it to access the options and check the “Embed system time as Timecode” option.
  11. Then target your custom live-packager app for publishing and consume the outgoing HLS stream in your OSMF player. You should then start to receive your time codes via ID3 events.

Obviously there are many other uses for extracting in-stream metadata from HLS. There are also many other types of ID3 frame types. They may not follow the same structure as our simple Comment frame, and if you venture into making your own custom ID3 frames, that may be even more complex. However, this proves that it can be done in Flash using the existing tools available. Just remember that you need to be on AMS 5.0.8 or greater and also be on the latest codebase of the flashls plugin.

Many great thanks to Guillaume du Pontavice (a.k.a. mangui), who is the man behind flashls and helped immensely in making this functionality available.

Using the PHDS, PHLS, or PRTMP features of Adobe Media Server relies on certificate files provided with the installation. These files are located in the {AMS_INSTALL_ROOT}/creds folder. From time to time these files are set to expire, and new files are provided by new AMS install versions.

The last time this happened was when AMS 5.0.3 was released. At the time you had two options:

  1. Back up your files, uninstall AMS and then re-install using the AMS 5.0.3 updater:
  2. Get a hold of the updated certificates – you can download the linux updater, unzip, and extract them – and use the list of files at this blog post to replace the certificates on your existing installation:

With the release of AMS 5.0.7, it has been noted in the release notes that these certificate files will need to be replaced again before April 6th, 2015:

“We have also refreshed the certificates used for Protecting Streaming workflows – PRTMP, PHDS and PHLS. The certificates in the earlier versions are due to expire on 5:30 AM April 6 2015. The refreshed certificates in this version have an expiry date of 5:30 AM September 24 2016.”

Although it’s likely that you could use option 2 above to simply refresh your certs – especially if you’re still using Adobe Media Gateway, which has been discontinued – there is some other interesting information called out in the release notes that makes me feel AMS 5.0.7 is a worthwhile upgrade:

  • If you’re using SWF Verification for PHDS there’s a fix for when you forget to add your whitelist file: “3704242: SWF verification for PHDS was ignored if whitelist file was missing. Now playback fails and error is logged suggesting user to provide whitelist file or disable SWF verification for PHDS.”
  • If your AMS is on a Windows box and you’re using HDS or HLS with the Apache cache turned on: “3803660: Disk cache cleanup for Apache using htcacheclean even though enabled by default was not functioning on Windows. This is working fine now.”
  • If you’re using SSL: “We have updated the OpenSSL version used by AMS to 1.0.1j. This provides four security fixes including POODLE (CVE-2014-3566). We have disabled SSL 3.0 on the server. The successor protocols to SSL3.0- TLS 1.0, 1.1 and 1.2 can be used for secure communication.”

That said, use your best judgement on whether you upgrade or just swap out the certificates. IMPORTANT NOTE: If you’re using any kind of fragment or manifest caching, the new certificate won’t match up, so you will need to kill your caches and rebuild them after the certificate change.

Quotes are from AMS 5.0.7 Release Notes:

FMS/AMS Updaters:

The idea of being able to upload recordings to Connect can be an attractive one under the right circumstances. The two most common use cases are when a recording needs to be repaired because of an audio issue (this is probably going to be done by Adobe or a savvy support person with your reseller) and when you want multiple versions of the same recording. Since the first use case is a support-based scenario, I won’t dive into it, but the workflow for uploading the repaired recordings is the same as what follows.

So, let’s address the use case of having multiple versions of the same recording. One scenario might be the desire to have a teaser version of your recording (say the first 1-5 minutes), while still having a full version of your recording. Another scenario would be if you would like a recorded session covering multiple topics broken out into unique topic-specific recordings to play back. By default, in Connect you can only have one version of a recording available for playback to authenticated users or the general public.

Working around the single-recording issue prior to Connect 9 was a pretty simple task. All you had to do was add /output/ to your recording’s URL to download a zip file containing the FLV and XML files representing the meeting recording. From there you could upload that zip as a new content object to the desired Content Library folder, and you were good to go.

The above workflow still works for Connect 9+, but you may see an error when trying to move the recording. The error will read: No message for validation error “recording-is-in-progress”, type “string”, field “sco-id”. This error is caused by a field that doesn’t get populated when uploading the recording source files: the recording end date. Resolving this error requires populating the field in the database (DB) so that Connect can properly manage the recording. This can be accomplished by making an API call.

Making an API call can seem scary, but here’s a step-by-step on how to go about it:

  1. Before making an API call, make sure you are logged in with an account that has Administrator credentials. Although lesser permissions may work for some API calls, this particular call assumes Admin rights. You can log in via the API or by going to your Connect server URL and logging in.
  2. Now, to update the missing DB field for the recording, we need to make the following call using sco-update: You can find the SCO ID for the recording in the URL of its management page in Connect Central.
    Recording SCO ID
  3. As stated before, the end date is not populated when you upload a recording which causes the error to be thrown. The end date you choose in step 2 can be any date/time after the upload of the new recording zip, just make sure to use the date format in the step 2 example above.
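Putting that together, the call looks roughly like this; the host and SCO ID are placeholders, and the request must be made while logged in as an administrator (so your session cookie is sent along):

```shell
# Hypothetical host and SCO ID -- substitute your own values.
HOST="https://connect.example.com"
SCO_ID=123456

# Populate the recording's missing end date via sco-update.
# Any date/time after the upload works.
curl "${HOST}/api/xml?action=sco-update&sco-id=${SCO_ID}&date-end=2015-04-01T12:00:00.000-07:00"
```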

Right! So now that you’ve fixed the issue (if you even knew it was an issue), what does it get you? Here’s a cool parting trick. Have you ever had someone ask for multiple versions of their recordings in the Recordings folder of one meeting room? Here is how we can accomplish it:

  1. Using the sco-move call, we can place the recording in a different folder. You don’t need this trick just to move a recording to another folder in the Content Library; the trick is what to put in the folder-id field (for example, folder-id=654321). If you want to move the recording into the Recordings folder for a meeting room, use the sco-id of the meeting room itself as the folder-id.
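The sco-move call is just another URL against /api/xml. Here is a minimal Python sketch; the hostname and both IDs are hypothetical placeholders for illustration:

```python
from urllib.parse import urlencode

def sco_move_url(server, sco_id, folder_id):
    """Build the sco-move URL that relocates a recording.

    folder_id -- the destination's sco-id; to land the recording in a meeting
                 room's Recordings folder, pass the meeting room's own sco-id.
    """
    query = urlencode({
        "action": "sco-move",
        "sco-id": sco_id,
        "folder-id": folder_id,
    })
    return "{0}/api/xml?{1}".format(server, query)

# Move recording 123456 into meeting room 654321 (hypothetical IDs):
url = sco_move_url("https://connect.example.com", 123456, 654321)
print(url)
```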

[Screenshot: Meeting SCO ID shown in the Connect Central URL]


Now the recording will reside in the Recordings folder of the Meeting room!

[Screenshot: the recording listed in the meeting room’s Recordings folder]


Want to learn more about Adobe Connect and how it can help you meet your web collaboration or eLearning needs?

Contact Us

Why we love Wowza

Posted on January 30, 2015 at 1:32 pm in Media Solutions, Products, Strategic Consulting, Training

Wowza has grown into a product well placed to take over the streaming media world. By focusing on ease of use and the ability to reach every screen, they have created the new industry leader in streaming media technology. Because of this, we have come to see the Wowza Streaming Engine (WSE) as the most future-proof option you can purchase. So why do we feel this way? Here are our top four reasons, in no particular order.

  • Will it stream to XYZ device? Yes! By adopting both current and next-gen media formats, WSE ensures that it can deliver your stream to all screens. This alone overcomes one of the biggest challenges we face in deploying a media server. No one wants to exclude or limit their viewers to specific OSes, devices, or browsers. If you aren’t using WSE, it’s time to ask if you can support the following media formats with one server:
    • RTMP
    • HLS
    • HDS
    • MS Smooth
    • RTSP/RTP
    • MPEG-TS
  • Is there an easy-to-deploy player for my clients to view my media on? Yes! Wowza and JWPlayer have formed a partnership and created a workflow to easily deploy a polished and flexible media player as the portal through which your audience will view your media. In seven (pretty simple) steps you can have your media player set up and running. This is a great help in getting your streaming media deployment up and running quickly.
  • Transcoders seem to vary and are complicated; is there a simple solution from Wowza? Yes! Wowza can accept a live stream from any H.264 or RTMP source, so if you have a transcoding solution in place, it will likely work with WSE. However, should you want to use a different format, IP cameras, or another video streaming source, the Wowza Transcoder AddOn can take in that stream and format it to whatever you need. Why is this so amazing?
    • The transcoding is done server side. No more needing encoding software on each device that is streaming to the server. This can be a huge cost savings, not only in software purchasing but also in time. Since WSE can take in almost any media format, you don’t have to spend a large amount of time setting up and training the individuals sending the stream on configuration. Just point it at the WSE server and hit go!
    • The transcoding is done on the fly. This means that there is very little latency from the transcoding. You can take in one stream setting and output multiple formats and qualities of the same media. You can even have an audio only stream which can be great for those on small devices or very low bandwidth environments.
    • Static and dynamic images can overlay the media stream. Place Ads, calls to action, watermarks, tickers (sports scores or stock tickers), or whatever else you can think to do to enhance the experience of your video.
  • I’ve never managed a media server; is it complicated? Or, I’ve managed media servers in the past; is WSE as complicated? No! WSE was built with an intuitive, easy-to-use management interface. Everyone from novice to advanced users has found it a wonderfully simple and powerful way to set up, manage, and monitor these servers. You can still play in the XML configuration files if you want, but you don’t have to. There are even built-in test players so you can test any of your streams in any media format without having to build your own test page! In the WSE Manager interface you can:
    • Set up streaming apps
    • Manage your streams
    • Monitor the server performance
    • Add and manage other admins and publishers
    • Manage your AddOns

To top it all off, WSE is an extremely flexible tool that really can meet most streaming media needs.

Want to talk more about Wowza? Looking to purchase Wowza? Looking for training on Wowza? Need help with Wowza? We can do it all. Reach out to us and start the conversation today.

Contact Us

Released today (Nov 18, 2014) are three new products for the Varnish Plus suite: unlimited cache sizing, increased caching performance, and customized cache optimization, all supporting content-heavy, high-traffic sites.

“For most consumers, websites are now the pivotal point of interaction with companies. If information and content isn’t delivered instantly, they will seek alternatives that are just a mouse-click away.” – Per Buer, Founder and CTO, Varnish Software.

Product details:

Unlimited cache sizing with Varnish Massive Storage Engine
The new Varnish Massive Storage Engine tackles the problems of content-heavy sites by allowing the Varnish caching layer to handle multi-terabyte data sets. This makes it possible to cache an almost unlimited number of objects while website performance remains stable over time. The Varnish Massive Storage Engine is targeted at businesses with large data sets, such as online retailers, image banks, video distributors, or Content Distribution Networks, and enables them to deliver high-quality content within their current infrastructure while pushing the bounds of modern web experience delivery.

Increased caching performance and resilience with Varnish High Availability
Varnish High Availability is a high performance content replicator that eliminates cache misses (when an item looked up in the cache is not found) and ensures the stability of the Varnish Cache set-up. By protecting the backend infrastructure from overload caused by cache misses, it increases website performance and minimizes the risk of frustrated visitors leaving websites. Varnish High Availability is for Varnish Cache users whose sites are business-critical. It can be installed with any multi-cache Varnish Cache setup, including two/three node CDN POP installations.

Customized cache optimization with Varnish Tuner
Varnish Tuner automates customized cache optimization in both the Varnish and operating system environments. It recommends configuration options for the Varnish Cache set-up, including how the operating system should be tuned and which cache parameters should be changed or replaced, and also explains these recommendations. Varnish Tuner makes it possible for businesses to find the specific set-ups that best match their resources and needs, resulting in better website performance.

Varnish Massive Storage Engine, Varnish High Availability, and Varnish Tuner are all available from today with a Varnish Plus subscription.

Contact us today for all your Varnish purchasing/training/configuration needs!


HTML Video Check-in – iOS 7 vs. iOS 8

Since iOS 8 went live on the 17th and I updated a few of my devices over the weekend, I decided to do some quick testing of web video playback. I wanted to see if there were any little, undocumented changes that would affect our custom, cross-platform video player, or our general approach to working with HTML video – like the changes to exiting fullscreen video that came in the update from iOS 6 -> iOS 7. [1]

Overall, things seem pretty much the same between iOS 7 -> iOS 8, and in a quick run-through, REPlayer looks to be working just fine.

Cannot Access Alternate Audio Tracks

One interesting change to note, especially since it relates directly to our current series on Alternate Audio Streams in HTML Video, is that the native interface (iOS default controls used when video is fullscreen) for selecting Sub-Title/CC tracks – or Alternate Audio tracks when they’re available – no longer seems to recognize/display the audio tracks in iOS 8.

[Screenshot: native track-selection interface, iOS 7 vs. iOS 8]

Sub-Title selection still works just fine, but the Audio section (and Audio Tracks) does not display in iOS 8. We confirmed this by verifying that our test m3u8 still contains Alternate Audio tracks in the manifest. Viewing the same video on a device running iOS 7 will display, and allow the selection of, both Sub-Title and Audio tracks, while iOS 8 will only display the subtitle tracks.
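Scripting that manifest check is straightforward: alternate renditions are declared in an HLS master playlist with #EXT-X-MEDIA tags, and audio renditions carry TYPE=AUDIO. Here is a small sketch; the sample playlist below is made up for illustration:

```python
def has_alternate_audio(m3u8_text):
    """Return True if an HLS master playlist declares alternate audio renditions."""
    for line in m3u8_text.splitlines():
        line = line.strip()
        if line.startswith("#EXT-X-MEDIA:") and "TYPE=AUDIO" in line:
            return True
    return False

# Made-up master playlist with two alternate audio renditions:
sample = """#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",NAME="English",DEFAULT=YES,URI="eng/prog.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",NAME="Commentary",URI="com/prog.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=1280000,AUDIO="aud"
video/prog.m3u8
"""
print(has_alternate_audio(sample))  # → True
```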

Off the bat, I’m assuming this is a bug, not a feature, and that it will be addressed in future updates, though it could also be a result of the transition from QTKit to AVFoundation as the new iOS Media Framework. [2]
One other possible cause for the discrepancy is the different versions of WebKit used between the two. [3]

As of this writing, this does not seem to be a known issue according to the release notes.

Stay Tuned

Be sure to check back on Wednesday 10/1 as we continue our series on Alternate Audio Tracks in HTML Video – addressing some of the options and implementations available for providing user-selectable alternate audio streams using various formats, and suggest solutions for reaching the widest number of browsers and devices.

This week we’ll be featuring an in-depth writeup of alternate audio in HLS and other playlist-based formats.

Notes and non sequiturs

In iOS6 – when you switched to fullscreen video, there were 2 options available for exiting fullscreen:

  • One was to tap the “Exit Fullscreen” icon in the lower right side of the control bar (Two arrows on a diagonal that were pointing inwards towards each other – the inverse of the icon used to enter fullscreen)
    • This would exit fullscreen, and maintain the current playback state of the video, i.e., if the video was playing in fullscreen, it would continue to be playing after leaving fullscreen – if the video was paused in fullscreen, it would remain paused after leaving fullscreen
  • The other was to tap on the text-button “DONE” in the upper left of the fullscreen interface
    • This would exit fullscreen and pause the video, regardless of current playback state

In iOS7 – the “Exit Fullscreen” icon was removed, and the only option was to use “DONE” – this meant that whenever you exited fullscreen in iOS7, the video was paused every time, so an extra tap on the Play button was necessary to resume playback.


AVFoundation was added in iOS 7 and existed alongside QTKit, though developers were strongly encouraged to make the switch. I have not yet found explicit documentation of the status/availability of QTKit in iOS 8.

  • User Agent String of an iPhone 5S running iOS 8.0 reports WebKit v600.1.4
    • Full User Agent String –
      Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12A365 Safari/600.1.4
  • User Agent String of an iPhone 5S running iOS 7.1 reports WebKit v537.51.2
    • Full User Agent String –
      Mozilla/5.0 (iPhone; CPU iPhone OS 7_1 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D167 Safari/9537.53
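If you need to branch on the WebKit build in a player, it is easy to pull out of the user agent string with a regex. A quick sketch using the two strings above:

```python
import re

def webkit_version(user_agent):
    """Extract the AppleWebKit build number from a user agent string, or None."""
    match = re.search(r"AppleWebKit/([\d.]+)", user_agent)
    return match.group(1) if match else None

# The two user agent strings reported above:
ua_ios8 = ("Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) "
           "AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12A365 Safari/600.1.4")
ua_ios7 = ("Mozilla/5.0 (iPhone; CPU iPhone OS 7_1 like Mac OS X) "
           "AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D167 Safari/9537.53")

print(webkit_version(ua_ios8))  # → 600.1.4
print(webkit_version(ua_ios7))  # → 537.51.2
```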

On May 13th, we had the pleasure of being a part of the Varnish Summit in New York. Our own David Hassoun gave a great session on using Varnish Plus to help create your own CDN and had a great time meeting with the other Varnish users there. The event was a great networking opportunity and a fantastic way to get together with other Varnish users to see how they have been using the product. Since no two deployments will be the same, it has been amazing to see how this tool gets used and the creativity everyone brings to their own deployment. If you missed the summit and David’s session, you can view it here:

Now, the new round of summits is about to start, though no US date has been announced yet. However, we are hoping that will change soon! Until something gets solidified for the US, and for those of you who are in Europe, there are currently three dates you can register to attend: Paris on October 16th, Frankfurt on October 30th, and Stockholm on November 20th. With any luck, there will be live streaming available so those of us unable to make the trip can still attend the conference and get some great information. You can register for any of these dates here.

Check back here as we will pass along any information about a US summit as it comes, and keep making your websites fly!