
JWPlayer states:

“JW Player supports all two formats [WebVTT and SRT] across all browsers though, in both Flash and HTML5 mode. The only exception is playback in fullscreen on iOS, since the native fullscreen mode does not allow JW Player to print captions over the video. Since the iPhone can only display video in full screen, this means that captions will not function on this device.”

This didn’t make sense to us, since the HTML5 video element has built-in closed-caption support.

So, in order to change this, we created a simple workaround.

 

Workaround

 

Example Code:

jwplayer("media").setup( mediaSetupObject );

// iOS Closed-Captions fix
var video, track;

jwplayer('media').onReady(function() {
    // determine if on an iOS device
    var iOS = (navigator.userAgent.match(/(iPad|iPhone|iPod)/g) ? true : false);

    if (iOS && type === 'mpl') {
        video = document.getElementsByTagName('video')[0];
        track = document.createElement('track');
        track.kind = mediaSetupObject.playlist[0].tracks[0].kind;
        track.label = mediaSetupObject.playlist[0].tracks[0].label;
        // TODO: Need to account for different languages
        track.srclang = track.label === 'English' ? 'en' : 'en';
        track.src = mediaSetupObject.playlist[0].tracks[0].file;
        video.appendChild(track);
        video.textTracks[0].mode = 'hidden';

        jwplayer('media').onFullscreen(function(data) {
            if (data.fullscreen) {
                if (video.textTracks[0]) {
                    video.textTracks[0].mode = 'showing';
                }
            } else {
                if (video.textTracks[0]) {
                    video.textTracks[0].mode = 'hidden';
                }
            }
        });
    }
});

 

Example Config File:

 

{
  "playlist": [
    {
      "image": "splash.png",
      "title": "Sintel",
      "description": "Video created by: Jun Annotations: LOL",
      "sources": [
        {
          "file": "http://office.realeyes.com:8080/hls-vod/media/sintel-1280-surround.mp4.m3u8"
        }
      ],
      "tracks": [
        {
          "file": "http://code.realeyes.com/jgainfort/webster-player-page/sintel_en.vtt",
          "label": "English",
          "kind": "captions",
          "default": "true"
        }
      ]
    }
  ]
}

 

Steps:

 

1. Listen for the JWPlayer 'onReady' API callback.

 

onReady(callback)

Fired when the player has initialized in either Flash or HTML5 and is ready for playback. Has no attributes.

jwplayer('media').onReady(function() {});

 

2. Determine if on an iOS device.

 

var iOS = (navigator.userAgent.match(/(iPad|iPhone|iPod)/g) ? true : false);

 

3. Get a reference to the HTML video element, create a track element, and add kind, label, srclang, and src attributes to the track element.

 

video = document.getElementsByTagName('video')[0];
track = document.createElement('track');
track.kind = mediaSetupObject.playlist[0].tracks[0].kind;
track.label = mediaSetupObject.playlist[0].tracks[0].label;
// TODO: Need to account for different languages
track.srclang = track.label === 'English' ? 'en' : 'en';
track.src = mediaSetupObject.playlist[0].tracks[0].file;

 

In the above example these values are brought in from an external config file used for setting up the video element.

 

*Note: type === 'mpl' refers to a parameter we passed in with the page URL. You can omit this check for your own use.

 

4. Add the track element to the video element and set the textTracks mode to hidden.

 

video.appendChild(track);
video.textTracks[0].mode = 'hidden';

 

Since JWPlayer will, by default, still display closed captions on iPads without entering the native player, we need to default the textTracks mode to hidden.

 

5. Listen for the JWPlayer onFullscreen API callback.

 

onFullscreen(callback)

Fired when the player toggles to/from fullscreen. Event attributes:

  • fullscreen (Boolean): new fullscreen state.

 

jwplayer('media').onFullscreen(function(data) {
    if (data.fullscreen) {
        if (video.textTracks[0]) {
            video.textTracks[0].mode = 'showing';
        }
    } else {
        if (video.textTracks[0]) {
            video.textTracks[0].mode = 'hidden';
        }
    }
});

 

When the device enters fullscreen, set the video's textTracks mode to showing; when it leaves fullscreen, set it back to hidden.

 

Limitations

 

In our testing we noticed that the SRT format does not work for iOS native playback, so be sure to use the WebVTT format for your closed captions. If you need to convert, there are a number of tools out there, such as this one: https://atelier.u-sub.net/srt2vtt/
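If you just need a quick conversion and don't want another tool, the core of SRT-to-WebVTT is small enough to script yourself. The sketch below is our own minimal version (not one of the linked tools) and assumes simple, well-formed SRT input with no styling or positioning:

```javascript
// Minimal SRT -> WebVTT conversion sketch: prepend the WEBVTT header and
// change the comma decimal separator in timestamps to a period.
function srtToVtt(srt) {
  var body = srt
    .replace(/\r/g, '')                                   // normalize line endings
    .replace(/(\d{2}:\d{2}:\d{2}),(\d{3})/g, '$1.$2');    // 00:00:01,000 -> 00:00:01.000
  return 'WEBVTT\n\n' + body;
}
```

Cue identifiers from the SRT pass through unchanged, which WebVTT permits.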

 

Conclusion

 

Until JWPlayer decides to use HTML5's built-in closed-caption support instead of overlaying text across the video, this workaround will need to be included for your website to support closed captions on iOS devices when playing back video in fullscreen. Luckily, the fix is very simple and unobtrusive.

 

Reference Links

 

JWPlayer Closed Captions:

http://support.jwplayer.com/customer/portal/articles/1407438-adding-closed-captions

JWPlayer API:

http://support.jwplayer.com/customer/portal/articles/1413089-javascript-api-reference

HTML5 Closed Captions:

https://developer.mozilla.org/en-US/Apps/Build/Audio_and_video_delivery/Adding_captions_and_subtitles_to_HTML5_video

 

Live streaming video is not an easy thing to take cross-platform at the moment. Flash can do really well on the desktop using RTMP and HDS, but there’s no Flash on mobile devices. HLS is Apple’s horse in the live streaming race, and it does very nicely on iOS and occasionally works on some Android devices. But if you take HLS to the desktop, you’re limited to Safari on Macs. Wouldn’t it be nice if we could consolidate a bit?

flashls is a plugin for the Open Source Media Framework and other frameworks that allows you to play back HLS in Flash. There are actually several plugins that allow this now, but flashls is the one we’ve settled on as the best performing and best supported. This is great! Now you can have a little more consistency in how you stream. But here’s a downside: a number of the fancy, fun things you could do with RTMP or HDS are missing. If you want to inject stream metadata for example, all that gets tossed out when the source file gets chopped into fragments for the HLS stream by the live-packager app on Adobe Media Server.

An important piece of data that got discarded, for us, was the time codes in the stream. We use GMT time codes to synchronize video with time-based data feeds as well as other videos. With most encoders, including Flash Media Live Encoder, you can choose to inject timecodes into your video stream. For RTMP and HDS streams you can pick out the data by adding an onFI method onto a NetStream’s client object property. But switch to a live HLS stream and that all disappears.

Fortunately, a few things have changed recently. First off, Adobe made some changes to AMS in version 5.0.8. The included Apache server got upgraded to 2.4, and with that came a change to the live packaging of streams. Now you can inject ID3 data into HDS and HLS streams! Adobe has a good article about this. This new feature provides a lot of different ways to now put useful metadata into the live HTTP stream of your choice.

Unfortunately, the flashls plugin didn’t pick up on the injected data. The way Adobe included the ID3 data was a bit different than expected. However, the wonderful folks behind flashls took notice and added in support for extracting this ID3 data from the HLS stream. This is why continuing support of a plugin is important! With the changes we now can get an ID3 frame out of an HLS stream.

Here’s the high-level workflow:

  1. On the Flash Media Server, make a copy of the live-packager app in the applications directory and roll up your sleeves for some Server-Side ActionScript.
  2. In the live-packager copy’s main.asc file, we need to add in an onFI listener to the incoming F4F stream in the application.publish() method. In this listener we pull the time code out of the metadata object. In this case the info.sd property is the date and the info.st property is the timestamp.
  3. Next we need to inject the metadata into the stream as an ID3 frame. The Adobe article lists the supported ID3 frames. I decided to use the Comment ID3 frame to send the information along. This frame only has a single data property, so I join the time code parts together into an easily parseable string. Then we send the metadata object through the stream using the NetStream.send() method.
    s.onFI = function( info ) {
        var metaData = {};
        var comment = { language: "eng" };
        comment.data = String( info.sd + '|' + info.st );
        metaData.Comment = comment;
        delete comment;
        comment = null;
        s.send( "onMetaInfo", metaData );
        delete metaData;
        metaData = null;
    };
  4. Now we can switch back to AS3 and start working in our OSMF player. I won’t cover the implementation of the OSMF plugin, as the flashls site has a nice sample. The flashls OSMF plugin dispatches events about metadata off of an instance of the HLS class. You can get access to this object by listening for the load trait being added to your media.
    protected function onMediaElementTraitAdd( event:MediaElementEvent ):void
    {
        if ( event.traitType == MediaTraitType.LOAD )
        {
            var hlsNSLoadTrait:HLSNetStreamLoadTrait = MediaElement( event.target ).getTrait( event.traitType ) as HLSNetStreamLoadTrait;
            if ( hlsNSLoadTrait )
            {
                var hls:HLS = hlsNSLoadTrait.hls;
            }
        }
    }
  5. Then add a listener to the HLS object for the HLSEvent.ID3_UPDATED event.
    hls.addEventListener( HLSEvent.ID3_UPDATED, onID3Data );
  6. This is where things get a little complicated. We need to then pull the ID3 frame out of the string of hexadecimal data that is attached to the ID3_UPDATED event. First step is to convert the hexadecimal string into a ByteArray, which we can then parse using AS3’s ByteArray API. Fortunately, flashls provides us with a utility to access this.
    protected function onID3Data( event:HLSEvent ):void
    {
        var hexData:String = event.ID3Data;
        var hexBytes:ByteArray = Hex.toArray( hexData );
    }
  7. The next step is to parse the ByteArray to extract all the information. Fortunately, that Adobe article describes how the ID3 headers are constructed. That gives us a guide on how to read byte data out of the ByteArray. We end up with something like this:
    var frameName:String = hexBytes.readUTFBytes( 3 );
    var version:int = hexBytes.readByte();
    var revision:int = hexBytes.readByte();
    var flags:int = hexBytes.readByte();
    var size:int = hexBytes.readInt();
    var frameID:String = hexBytes.readMultiByte( 4, 'utf-8' );
    if ( frameID == 'COMM' )
    {
        var frameDataSize:int = hexBytes.readInt();
        var frameFlags:int = hexBytes.readShort();
        var groupID:int = hexBytes.readByte();
    }

    Note that we are checking to see if this is the type of ID3 frame we care about (Comment). The Adobe article references the IDs of the different frame types, and tells us that the Comment frame ID is ‘COMM’.
  8. Now we’ve actually gotten to the data we care about: the ID3 frame’s metadata payload. It appears that the object comes through not as an AMF object, but rather as values separated by a 0 byte character. The first 3 characters we read off the data make up the language code of the ID3 metadata.
    if ( frameID == 'COMM' )
    {
        var frameDataSize:int = hexBytes.readInt();
        var frameFlags:int = hexBytes.readShort();
        var groupID:int = hexBytes.readByte();
        var lang:String = hexBytes.readMultiByte( 3, 'utf-8' ); //eng
    }
  9. Then we can read out the 0 byte separator. Then the remaining bytes give us our time code.
    var separator:int = hexBytes.readByte(); //0
    var timestampString:String = hexBytes.readUTFBytes( hexBytes.bytesAvailable ); //2015-06-12|22:03:06.543
    Keep in mind that the seconds come in as a decimal. In order to convert your time code to a Date object, you will need to either round off the decimal or convert the decimal to milliseconds.
  10. Finally, fire up Flash Media Live Encoder, or the encoder of your choice, and make sure it is injecting time codes into your stream. For FMLE, the option is a Timecode checkbox in the lower left. Click the wrench next to it to access the options and check the “Embed system time as Timecode” option.
    (Screenshot: the FMLE Timecode options in the lower left)
  11. Then target your custom live-packager app for publishing and consume the outgoing HLS stream in your OSMF player. You should then start to receive your time codes via ID3 events.
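For readers working outside of Flash, the byte-reading steps above translate directly to other languages. Here is a JavaScript sketch of the whole COMM-frame extraction; the function names are ours, and it assumes the same byte layout the steps above describe (including the plain 4-byte sizes that ByteArray.readInt() implies):

```javascript
// Hypothetical JS port of the AS3 parsing steps above. Expects a hex string
// containing an 'ID3' header, version/revision/flags bytes, a 4-byte size,
// then a COMM frame carrying a language code and the timestamp payload.
function hexToBytes(hex) {
  var bytes = [];
  for (var i = 0; i < hex.length; i += 2) {
    bytes.push(parseInt(hex.substr(i, 2), 16));
  }
  return bytes;
}

function parseCommentFrame(hexData) {
  var bytes = hexToBytes(hexData);
  var pos = 0;

  function readString(len) {           // like readUTFBytes()/readMultiByte()
    var s = '';
    for (var i = 0; i < len; i++) s += String.fromCharCode(bytes[pos++]);
    return s;
  }
  function readInt() {                 // 4-byte big-endian, like readInt()
    var v = (bytes[pos] << 24) | (bytes[pos + 1] << 16) |
            (bytes[pos + 2] << 8) | bytes[pos + 3];
    pos += 4;
    return v;
  }

  var header = readString(3);          // 'ID3'
  var version = bytes[pos++];
  var revision = bytes[pos++];
  var flags = bytes[pos++];
  var size = readInt();
  var frameID = readString(4);
  if (header !== 'ID3' || frameID !== 'COMM') return null;

  var frameDataSize = readInt();
  var frameFlags = (bytes[pos] << 8) | bytes[pos + 1]; pos += 2;
  var groupID = bytes[pos++];
  var lang = readString(3);            // 'eng'
  pos++;                               // the 0-byte separator
  // remaining bytes: e.g. '2015-06-12|22:03:06.543'
  var timestamp = readString(bytes.length - pos);
  return { lang: lang, timestamp: timestamp };
}
```

Splitting the returned timestamp on '|' gives the date and time parts; remember to handle the fractional seconds before handing the result to a Date constructor.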

Obviously there are many other uses for extracting in-stream metadata from HLS. There are also many other ID3 frame types. They may not follow the same structure as our simple Comment frame, and if you venture into making your own custom ID3 frames, things may be even more complex. However, this proves that it can be done in Flash using the existing tools available. Just remember that you need to be on AMS 5.0.8 or greater and also on the latest codebase of the flashls plugin.

Many great thanks to Guillaume du Pontavice (a.k.a. mangui), who is the man behind flashls and helped immensely in making this functionality available.

Using the PHDS, PHLS, or PRTMP features of Adobe Media Server relies on certificate files provided with the installation. These files are located in the {AMS_INSTALL_ROOT}/creds folder. From time to time these files are set to expire, and new files are provided with new AMS install versions.

The last time this happened was when AMS 5.0.3 was released. At the time you had two options:

  1. Back up your files, uninstall AMS and then re-install using the AMS 5.0.3 updater: http://www.adobe.com/support/flashmediaserver/downloads_updaters.html
  2. Get a hold of the updated certificates – you can download the linux updater, unzip, and extract them – and use the list of files at this blog post to replace the certificates on your existing installation: http://blogs.adobe.com/ams/2013/07/ams-5-0-3-availability-and-refresh-of-phdsphlsprtmp-certificates.html

With the release of AMS 5.0.7, it has been noted in the release notes that these certificate files will need to be replaced again before April 6th, 2015:

“We have also refreshed the certificates used for Protecting Streaming workflows – PRTMP, PHDS and PHLS. The certificates in the earlier versions are due to expire on 5:30 AM April 6 2015. The refreshed certificates in this version have an expiry date of 5:30 AM September 24 2016.”

Although it’s likely that you could use step 2 as mentioned above to simply refresh your certs – especially if you’re still using Adobe Media Gateway which has been discontinued – there is some other interesting information called out in the release notes that make me feel AMS 5.0.7 is a worthwhile upgrade:

  • If you’re using SWF Verification for PHDS there’s a fix for when you forget to add your whitelist file: “3704242: SWF verification for PHDS was ignored if whitelist file was missing. Now playback fails and error is logged suggesting user to provide whitelist file or disable SWF verification for PHDS.”
  • If your AMS is on a Windows box and you’re using HDS or HLS with the Apache cache turned on: “3803660: Disk cache cleanup for Apache using htcacheclean even though enabled by default was not functioning on Windows. This is working fine now.”
  • If you’re using SSL: “We have updated the OpenSSL version used by AMS to 1.0.1j. This provides four security fixes including POODLE (CVE-2014-3566). We have disabled SSL 3.0 on the server. The successor protocols to SSL3.0- TLS 1.0, 1.1 and 1.2 can be used for secure communication.”

That said, use your best judgement on whether you upgrade or just swap out the certificates. IMPORTANT NOTE: If you’re using any kind of fragment or manifest caching, the new certificate won’t match up, so you will need to kill your caches and rebuild them after the certificate change.

Quotes are from AMS 5.0.7 Release Notes: http://www.adobe.com/support/documentation/en/adobe-media-server/507/AMS_5_0_7_Release_Notes.pdf

FMS/AMS Updaters: http://www.adobe.com/support/flashmediaserver/downloads_updaters.html

The idea of being able to upload recordings to Connect can be an attractive thing under the right circumstances. The two most common use cases are when a recording needs to be repaired because of an audio issue (this is probably going to be done by Adobe or a savvy support person with your reseller) and wanting multiple versions of the same recording. Since the first use case is a support-based scenario, I won’t dive into it, but the workflow for uploading the repaired recordings is the same as what follows.

So, let’s address the use case of having multiple versions of the same recording. One scenario might be the desire to have a teaser version of your recording (say the first 1-5 minutes), while still having a full version of your recording. Another scenario would be if you would like a recorded session covering multiple topics broken out into unique topic-specific recordings to play back. By default, in Connect you can only have one version of a recording available for playback to authenticated users or the general public.

Working around the single recording issue prior to Connect 9 was a pretty simple task. All you had to do was download the recording source files by adding /output/myRecording.zip?download=zip to the URL for your recording to download a zip file containing the recording FLV and XML files representing the meeting recording. From there you can take the zip file and upload it as a new content object to the desired Content Library folder, and you are good to go.

The above workflow still works for Connect 9+, but you may see an error when trying to move the recording. The error will read: No message for validation error “recording-is-in-progress”, type “string”, field “sco-id”. This error is caused by a field that doesn’t get populated when uploading the recording source files: the recording end date. Resolving the error requires populating the field in the database (DB) so that Connect can properly manage the recording. This can be accomplished by making an API call.

Making an API call can seem scary, but here’s a step-by-step on how to go about it:

  1. Before making an API call, make sure you are logged in with an account that has Administrator credentials. Although lesser permissions may work for some API calls, this particular call assumes Admin rights. You can log in via the API or by going to your Connect server URL and logging in.
  2. Now, to update the missing DB field for the recording, we need to make the following call using sco-update:
    http://yourserver.adobeconnect.com/api/xml?action=sco-update&sco-id=123456&date-end=2015-03-15T15:28:37.227-07:00
    You can find the SCO ID for the recording in the URL of the management page for it in Connect Central.
    Recording SCO ID
  3. As stated before, the end date is not populated when you upload a recording which causes the error to be thrown. The end date you choose in step 2 can be any date/time after the upload of the new recording zip, just make sure to use the date format in the step 2 example above.
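If you'd rather build the call in code than type the URL by hand, the request from step 2 is easy to assemble. In this sketch the function name is ours and the host and SCO ID are placeholders, not part of the Connect API:

```javascript
// Build the sco-update URL from step 2. Host, sco-id, and end date are
// placeholders; swap in your own server and recording SCO ID.
function buildScoUpdateUrl(server, scoId, endDate) {
  // Connect expects an ISO-8601 date like 2015-03-15T15:28:37.227-07:00
  return 'http://' + server + '/api/xml' +
         '?action=sco-update' +
         '&sco-id=' + encodeURIComponent(scoId) +
         '&date-end=' + encodeURIComponent(endDate);
}

var url = buildScoUpdateUrl('yourserver.adobeconnect.com', '123456',
                            '2015-03-15T15:28:37.227-07:00');
```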

Right! So now that you’ve fixed the issue (if you even knew it was an issue), what does it get you? Here’s a cool parting trick. Have you ever had someone ask to have multiple versions of their recordings in the Recordings folder for one meeting room? Here is how we can accomplish it:

  1. Using the sco-move call, we can place the recording in a different folder. (This isn’t needed just to move it to a new folder in the Content Library.) Example: http://yourserver.adobeconnect.com/api/xml?action=sco-move&sco-id=123456&folder-id=654321
    The trick is what to put in the folder-id field. If you want to move the recording to the Recordings folder for a Meeting room, just use the sco-id of the meeting room.

Meeting SCO ID

 

Now the recording will reside in the Recordings folder of the Meeting room!

Recording in Meeting folder

 

Want to learn more about Adobe Connect and how it can help you meet your web collaboration or eLearning needs?

Contact Us

Why we love Wowza

Posted on January 30, 2015 at 1:32 pm in Media Solutions, Products, Strategic Consulting, Training

Wowza has been growing into a product well placed to take over the streaming media world. By focusing on ease of use and the ability to reach every screen, they have created the new industry leader in streaming media technology. Because of this, we have come to see the Wowza Streaming Engine (WSE) as the most future-proof option you can purchase. So why do we feel this way? Well, here are our top 4 reasons, in no particular order.

  • Will it stream to XYZ device? Yes! By adopting both current and next-gen media formats, WSE ensures that it can deliver your stream to all screens. This alone overcomes one of the biggest challenges we face in deploying a media server. No one wants to exclude or limit their viewers to specific OSes, devices, or browsers. If you aren’t using WSE, it’s time to ask if you can support the following media formats with one server:
    • RTMP
    • HLS
    • HDS
    • MS Smooth
    • MPEG-DASH
    • RTSP/RTP
    • MPEG-TS
  • Is there an easy-to-deploy player for my clients to view my media on? Yes! Wowza and JWPlayer have formed a partnership and created a workflow to easily deploy a polished and versatile media player as the portal through which your audience will view your media. In seven (pretty simple) steps you can have your media player set up and running. This is a great benefit to getting your streaming media deployment up and running quickly.
  • Transcoders seem to vary and are complicated; is there a simple solution from Wowza? Yes! Wowza can accept a live stream from any H.264 or RTMP source, so if you have a transcoding solution in place, it will likely work with WSE. However, should you want to use a different format, IP cameras, or another video streaming source, the Wowza Transcoder AddOn can take in that stream and format it to whatever you need. Why is this so amazing?
    • The transcoding is done server side. No more needing encoding software on each device that is streaming to the server. This can be a huge cost savings, not only in software purchasing but also in time. Since WSE can take in almost any media format, you don’t have to spend a large amount of time setting up and teaching configuration to those individuals sending the stream. Just point it to the WSE server and hit go!
    • The transcoding is done on the fly. This means that there is very little latency from the transcoding. You can take in one stream setting and output multiple formats and qualities of the same media. You can even have an audio only stream which can be great for those on small devices or very low bandwidth environments.
    • Static and dynamic images can overlay the media stream. Place Ads, calls to action, watermarks, tickers (sports scores or stock tickers), or whatever else you can think to do to enhance the experience of your video.
  • I’ve never managed a media server; is it complicated? Or, I’ve managed media servers in the past; is WSE as complicated? No! WSE was built with an intuitive, easy-to-use management interface. Everyone from novices to advanced users has found it to be a wonderfully simple and powerful way to set up, manage, and monitor these servers. You can still play in the XML configuration files if you want, but you don’t have to. There are even built-in test players so you can test any of your streams in any media format without having to build your own test page! In the WSE Manager interface you can:
    • Set up streaming apps
    • Manage your streams
    • Monitor the server performance
    • Add and manage other admins and publishers
    • Manage your AddOns

To top it all off, WSE is an extremely flexible tool that really can meet most streaming media needs.

Want to talk more about Wowza? Looking to purchase Wowza? Looking for training on Wowza? Need help with Wowza? We can do it all. Reach out to us and start the conversation today.

Contact Us

Released today (Nov 18, 2014) are three new products that add to the Varnish Plus offering: unlimited cache sizing, increased caching performance, and customized cache optimization to support content-heavy, high-traffic sites.

“For most consumers, websites are now the pivotal point of interaction with companies. If information and content isn’t delivered instantly, they will seek alternatives that are just a mouse-click away,” – Per Buer, Founder and CTO, Varnish Software.

Product details:

Unlimited cache sizing with Varnish Massive Storage Engine
The new Varnish Massive Storage Engine tackles the problems of content-heavy sites by allowing the Varnish caching layer to handle multi-terabyte data sets. This makes it possible to cache an almost unlimited number of objects while website performance remains stable over time. The Varnish Massive Storage Engine is targeted at businesses with large data sets, such as online retailers, image banks, video distributors, or Content Distribution Networks, and enables them to deliver high-quality content within their current infrastructure while pushing the bounds of modern web experience delivery.

Increased caching performance and resilience with Varnish High Availability
Varnish High Availability is a high performance content replicator that eliminates cache misses (when an item looked up in the cache is not found) and ensures the stability of the Varnish Cache set-up. By protecting the backend infrastructure from overload caused by cache misses, it increases website performance and minimizes the risk of frustrated visitors leaving websites. Varnish High Availability is for Varnish Cache users whose sites are business-critical. It can be installed with any multi-cache Varnish Cache setup, including two/three node CDN POP installations.

Customized cache optimization with Varnish Tuner
Varnish Tuner automates customized cache optimization in both the Varnish and operating system environments. It recommends configuration options for the Varnish Cache set-up, including how the operating system should be tuned and which cache parameters should be changed or replaced, and also explains these recommendations. Varnish Tuner makes it possible for businesses to find the specific set-ups that best match their resources and needs, resulting in better website performance.

Availability:
Varnish Massive Storage Engine, Varnish High Availability and Varnish Tuner are all available from today with a Varnish Plus subscription.

Contact us today for all your Varnish purchasing/training/configuration needs!


HTML Video Check-in – iOS 7 vs. iOS 8

Since iOS 8 went live on the 17th and I updated a few of my devices over the weekend, I decided to do some quick testing of web video playback. I wanted to see if there were any little, undocumented changes that would affect our custom, cross-platform video player, or our general approach to working with HTML video – like the changes to exiting fullscreen video that came in the update from iOS 6 -> iOS 7. 1

Overall, things seem pretty much the same between iOS 7 -> iOS 8, and in a quick runthrough, REPlayer looks to be working just fine.

Cannot Access Alternate Audio Tracks

One interesting change to note, especially since it relates directly to our current series on Alternate Audio Streams in HTML Video, is that the native interface (iOS default controls used when video is fullscreen) for selecting Sub-Title/CC tracks – or Alternate Audio tracks when they’re available – no longer seems to recognize/display the audio tracks in iOS 8.

iOS7 vs. iOS8

Sub-Title selection still works just fine, but the Audio Section (and Audio Tracks) do not display in iOS8. We confirmed this by verifying our test m3u8 still contains Alternate Audio tracks in the manifest. Viewing the same video on a device running iOS7 will display, and allow the selection of, both Sub-Title and Audio Tracks, while iOS8 will only display the subtitle tracks.

Off the bat, I’m assuming this is a bug, not a feature, and that it will be addressed in future updates, though it could also be a result of the transition from QTKit to AVFoundation as the new iOS media framework. 2
One other possible cause for the discrepancy is the different versions of WebKit used between the two. 3

As of this writing, this does not seem to be a known issue according to the release notes.

Stay Tuned

Be sure to check back on Wednesday 10/1 as we continue our series on Alternate Audio Tracks in HTML Video – addressing some of the options and implementations available for providing user-selectable alternate audio streams using various formats, and suggest solutions for reaching the widest number of browsers and devices.

This week we’ll be featuring an in-depth writeup of alternate audio in HLS and other playlist-based formats.


Notes and non sequiturs
1

In iOS6 – when you switched to fullscreen video, there were 2 options available for exiting fullscreen:

  • One was to tap the “Exit Fullscreen” icon in the lower right side of the control bar (Two arrows on a diagonal that were pointing inwards towards each other – the inverse of the icon used to enter fullscreen)
    • This would exit fullscreen, and maintain the current playback state of the video, i.e., if the video was playing in fullscreen, it would continue to be playing after leaving fullscreen – if the video was paused in fullscreen, it would remain paused after leaving fullscreen
  • The other was to tap on the text-button “DONE” in the upper left of the fullscreen interface
    • This would exit fullscreen and pause the video, regardless of current playback state

In iOS7 – the “Exit Fullscreen” icon was removed, and the only option was to use “DONE”. This meant that whenever you exited fullscreen in iOS7, the video would be paused every time, and an extra tap on the Play button was necessary in order to resume playback.

2

AVFoundation was added in iOS 7 and existed alongside QTKit, though developers were strongly encouraged to make the switch. I have not yet found explicit documentation of the status/availability of QTKit in iOS 8.

3
  • User Agent String of an iPhone 5S running iOS 8.0 reports WebKit v600.1.4
    • Full User Agent String –
      Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12A365 Safari/600.1.4
  • User Agent String of an iPhone 5S running iOS 7.1 reports WebKit v537.51.2
    • Full User Agent String –
      Mozilla/5.0 (iPhone; CPU iPhone OS 7_1 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D167 Safari/9537.53
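If you want to compare WebKit builds programmatically, the version can be pulled out of the UA string with a quick regex. This is a rough sketch of our own; UA sniffing is brittle and the function is illustrative only:

```javascript
// Extract the AppleWebKit build number from a user agent string.
// Returns null when no AppleWebKit token is present.
function webkitVersion(ua) {
  var m = ua.match(/AppleWebKit\/([\d.]+)/);
  return m ? m[1] : null;
}
```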

On May 13th, we had the pleasure of being a part of the Varnish Summit in New York. Our own David Hassoun gave a session on using Varnish Plus to help create your own CDN and enjoyed meeting the other Varnish users there. The event was a great networking opportunity and a fantastic way to get together with other Varnish users to see how they have been using the product. Since no two deployments are the same, it has been amazing to see how this tool gets used and the creativity everyone brings to their own deployment. If you missed the summit and David’s session, you can view it here: http://youtu.be/P7YPFMF5wGo?t=30m25s.

Now the new round of summits is about to start, though no US date has been announced yet. However, we are hoping that will change soon! Until something gets solidified for the US, for those of you who are in Europe, there are currently three dates you can register to attend: Paris on October 16th, Frankfurt on October 30th, and Stockholm on November 20th. With any luck, there will be live streaming available so those of us unable to make the trip can still attend the conference and get some great information. You can register for any of these dates here: http://info.varnish-software.com/varnish-summits-autumn-2014-registration.

Check back here as we will pass along any information about a US summit as it comes, and keep making your websites fly!

If you’re using Varnish as your web accelerator or media caching server and want to learn more about it, we’ll be holding online administrator training next week. It’s not too late to register so see you there!

Recently, I was tasked with building a video player that would play live streams via IP Multicast on a supported network and automagically switch to Unicast on an unsupported network. The problem is, with IP Multicast the clients will make a connection and just wait around for data without bombing out. This is because the clients are connected to the IP Multicast address space via their network hardware, not to a server endpoint as in many other types of streaming.

In the past, this type of configuration might be implemented through a connection timeout in the video player logic. However, I wanted a seamless and immediate way to fall back without making the user have to wait. Enter Apache mod_rewrite.

The general workflow I wanted to follow was this:

  1. The end user hits the video player page on the Apache server
  2. The video player seamlessly and immediately points itself at the right stream.
  3. Everyone’s happy

I accomplished the above with a little mod_rewrite magic in my Apache config.

First, I needed to make sure clients on specific subnets would play back the live stream using Unicast. Second, I needed to properly redirect all other clients to the live stream using IP Multicast. Also, I needed to make sure that VOD requests would be ignored.

Here’s a gist of my rewrites along with some commentary.
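The gist itself isn't reproduced inline here, but the shape of the rules looks roughly like the following sketch. The subnets, paths, and stream names below are illustrative placeholders of ours, not the actual rules from the gist:

```apache
RewriteEngine On

# Leave VOD requests alone
RewriteRule ^/vod/ - [L]

# Clients on the unicast-only subnets (placeholder: 10.1.0.0/16) get
# pointed at the unicast live stream
RewriteCond %{REMOTE_ADDR} ^10\.1\.
RewriteRule ^/live/(.*)$ /live-unicast/$1 [PT,L]

# Everyone else gets the IP Multicast stream
RewriteRule ^/live/(.*)$ /live-multicast/$1 [PT,L]
```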

Enjoy!