Using Animated Lower Thirds in OBS Studio

Posted 14 November 2021, 16:28

I've been playing around with OBS recently, and having fun pushing various tools to their limits.

One that I've found great, but suffering a little from a lack of documentation, is Animated Lower Thirds with Dockable Control Panel, so I thought I'd post a few helpful notes.

General

Often, the tool won't do anything if certain fields are empty. Usually adding a space to a field you want to leave blank is sufficient.

Fonts

The video tutorials show how to import Google Fonts into the tool, but sometimes you want to use a font you've purchased and installed on your machine. Referencing these is actually fairly easy, as the browser control is running the HTML page locally on your machine:

  1. Expand the Main Settings section, and "Show more" so you can see the three buttons.
  2. Select "Customs"
  3. For the custom font fill it in as:
    Font Family: font-family: 'Ale and Wenches BB', serif;
    @import url: (single space)
    It's important to include the ", serif" (or similar) in the Font Family field, otherwise the font name displays as ' in the other editors.
    [Screenshot: Adding custom font]
    You should then see your local font appear as an option to select in the lower third controls - remember that they don't appear in alphabetical order.
    [Screenshot: Selecting a custom font]
    [Screenshot: Using a custom font]

Filed under: obs-studio, Streaming

Connecting Live.com Accounts to Outlook when they use a 3rd party email address

Posted 30 October 2021, 16:37

I've just had to recreate a number of accounts on a Windows PC, and came up against a fairly niche problem:

My family have Live "Personal" accounts that are tied to their primary email addresses, rather than to a hotmail.com or outlook.com account. My webhost provides a nice email service that supports secure IMAP connections for their accounts, so there was no need to set up full mail accounts, but we do use the calendar and contacts options, which are also managed on our phones, and shared between us. However, it's been getting slightly harder to add the live accounts to the desktop version of Outlook, hence this post.

Previously, I'd succeeded by adding the Live account first, but that started failing after my host upgraded their systems and started supporting the autodiscover protocols. I had previously been able to create the account as an Exchange-enabled account and point it at a hotmail.com server, but trying both that and outlook.live.com seemed to connect successfully without actually bringing down any contacts or calendar information.

I did manage to get it to work once I'd found the underlying account that outlook.com had generated for the user. If you log in to outlook.com and hover over the Outlook icon in the top left, you're shown the details of the account - this was an email like outlook_[...]@outlook.com:

[Screenshot: Outlook.com underlying account]

Using that email address, I was able to get the New Account dialog to recognise that I was adding an outlook.com account, but it complained that there wasn't a user with that email address. Entering the user's actual email address and credentials added the account, with the Calendars and Contacts, to Outlook, and I could then add the primary email account from our servers after that.

Filed under: Office365, outlook

When No Cache means Cache - Fun with Azure Front Door

Posted 21 June 2021, 17:30

Azure Front Door is a great product that has only improved since its initial release. At a high level it wraps three core services that most websites can benefit from: Caching (CDN), Routing (both simple Traffic Manager style and more complex rules-based) and Firewalls (WAF). It also works really well behind the bigger full-featured CDN offerings when you need more complex caching rules. It's easy to lock down an App Service to the Front Door infrastructure, providing you with the benefits of a Web Application Firewall and failover if that's what you need.

However, we recently had the following issue with Azure Front Door, caching and cookies: for sites where Front Door has caching enabled, Front Door was stripping the set-cookie header from responses. This was causing form validation to fail, because the Request Verification Token cookie was never set and so wasn't sent back with the subsequent POST.

Based on the documentation around cache expiration:

Cache-Control response headers that indicate that the response won't be cached such as Cache-Control: private, Cache-Control: no-cache, and Cache-Control: no-store are honoured.

We had ensured that our pages were sending Cache-Control: no-cache and were seeing x-cache: TCP_MISS on the responses so we thought we were good, but the cookies weren't being set. Checking the origin, they were being set fine, and disabling caching in Front Door resulted in them being set as expected as well, but none of the site was then cached.

Here's where the limitations of Front Door compared to Azure Premium CDN show: the new Rules Engine in Front Door allows you to modify responses, routing and caching behaviour, but only based on the incoming request (the Azure Premium from Verizon CDN rules engine allows you to modify those things based on the response from the server as well as the incoming request). So as an initial workaround we disabled caching, and then enabled it with a rule for requests that included a file extension:

[Screenshot: Front Door Rules Engine]

In pseudo-code:

IF Condition: "Request Path"
   Operator: "Contains"
   Value: "." Transform: "To Lowercase"
THEN Action: "Routing Configuration"
     Route Type: "Forward" "Backend Pool"
     Backend Pool: // Update as needed
     Forwarding Protocol: // Update as needed
     URL Rewrite: // Update as needed
     Caching: "Enabled"
     Cache behaviour: "Cache every unique URL" // We want cache busting query strings to work
     Dynamic compression: "Enabled"
     Use default cache duration: "Yes"

This gave us a level of caching for static content (CSS, JS, images, etc.) but still meant that cacheable pages were not being cached.

After a bit of to and fro with the very helpful support team, it was pointed out that the HTTP specification has this to say about Cache Headers:

The "no-cache" response directive indicates that the response MUST NOT be used to satisfy a subsequent request without successful validation on the origin server

And the MDN documentation spells it out even plainer:

no-cache The response may be stored by any cache, even if the response is normally non-cacheable. However, the stored response MUST always go through validation with the origin server first before using it.

Because pages with a response of "no-cache" may actually be cached, Front Door automatically strips the set-cookie header from the response, ensuring that the page can be cached and other users don't share the set-cookie header.

What we needed to do was use the Cache-Control: no-store on those pages, which results in a truly non-cacheable page, and then Front Door lets the cookies through.

This basically meant changing our code from:

Response.Cache.SetCacheability(HttpCacheability.NoCache);

To:

Response.Cache.SetNoStore();

Your page will then emit a Cache-Control header with private, no-store and an Expires header set to -1. While this does help you fall into the pit of success, it's a little tedious that NoStore doesn't exist on the HttpCacheability enum, and that attempting to set the cacheability manually to no-store results in an exception.
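If you need this on a number of form pages, a small action filter keeps it in one place. This is just a sketch assuming classic ASP.NET MVC 5 (System.Web); the attribute name is mine:

using System.Web.Mvc;

/// <summary>
/// Illustrative attribute that marks a response as non-cacheable (no-store),
/// so Front Door will pass the set-cookie header through untouched.
/// </summary>
public class NoStoreAttribute : ActionFilterAttribute
{
    public override void OnResultExecuting(ResultExecutingContext filterContext)
    {
        // Emits Cache-Control: private, no-store (and Expires: -1),
        // rather than Cache-Control: no-cache.
        filterContext.HttpContext.Response.Cache.SetNoStore();

        base.OnResultExecuting(filterContext);
    }
}

Applied as [NoStore] on the actions (or controllers) that render anti-forgery tokens, the rest of the site can stay cacheable.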

Filed under: Azure

Setting up Umbraco Azure AD Authentication

Posted 21 June 2019, 08:48

I recently had a requirement to add Azure Active Directory authentication to a client's Umbraco Back Office, with the added request of managing the CMS group membership via Azure AD as well.

To do this, we're going to use an Azure AD Application Registration with some custom roles. By using roles we remove the need to share more information about the user and their memberships with our application, and we make the application more portable: we can deploy the application manifest into any directory and the same role names are returned to the calling web site, rather than a unique group ID. Because they are textual names, we can also easily wire up our Umbraco group aliases to the role names without working with unwieldy GUIDs (Umbraco also requires that a group alias doesn't start with a number).

When following through Shazwazza's post Configuring Azure Active Directory login with Umbraco, I ran into a couple of problems with the basic set-up, mainly that the ID token wasn't included in the responses, and that the user's email address wasn't being populated in the generated claim correctly.

Start by installing the UmbracoCms.IdentityExtensions.AzureActiveDirectory package into a suitable project in your solution. This will add some dependencies and a few sample classes in either an app_start or the app_code folder, depending on your project type; these consist of two classes with OWIN startup attributes and a couple of helper methods, so you could move them if needed. These classes are very well documented, so it should be fairly easy to see what's happening in there - we'll come back to them in a short while.

Then jump over to Azure, where you'll need permissions to create Applications and Enterprise Applications at least.

Open the Azure Active Directory blade for the subscription you are connecting to, and select "App Registrations" and then "New registration":
[Screenshot: Azure AD App Registrations]

As we're granting users access to the back office we'll stick with the default of "Accounts in this organisational directory only", and as the name of the application will be shown to users if they have to grant access to their data, make it meaningful (you can change it later). Finally, add your first "Redirect URI" (if you have more environments that you want to control with a single application we can add these later) - this should be the full path to your Umbraco instance, including the trailing slash:
[Screenshot: New App Registration]
Note: This area has been improved recently so these screenshots may be slightly out of date

Press "Register" and your application will be created.

Switch to the "Authentication" pane for your new App - you need to enable "ID tokens" in the Advanced Settings - "Implicit grant" section. If you have more than one environment you're protecting with same users, you can also add the additional Redirect URIs here.
[Screenshot: Add more redirects and enable ID Tokens]

Next, under "API permissions", you ideally want to "Grant admin consent for [Your Directory]" for the User.Read permission that is added by default to the App:
[Screenshot: Grant Admin Consent]

Which should result in the consent being given for all users:
[Screenshot: Grant Admin Consent Success]

Next you need to set up the Roles that your application is going to grant to users - these are what we're going to map to Umbraco's back office groups. If you don't want to use the new preview UI to create these, you can edit the manifest directly. Open the Manifest pane and find the "appRoles" array:
[Screenshot: App roles in the manifest]

Put your cursor between the braces, and then add at least the following three roles - you'll need to generate a unique GUID for each role, and enter it as 00000000-0000-0000-0000-000000000000 (i.e. hyphens but no curly braces):

{
    "allowedMemberTypes": [
        "User"
    ],
    "description": "Members of the Umbraco Administrators group.", 
    "displayName": "Umbraco Admin", 
    "id": "[UniqueGUID]", 
    "isEnabled": true, 
    "lang": null, 
    "origin": "Application", 
    "value": "admin" 
}, 
{ 
    "allowedMemberTypes": [ 
        "User" 
    ], 
    "description": "Members of the Umbraco Editors group.", 
    "displayName": "Umbraco Editor", 
    "id": "[UniquieGuid]", 
    "isEnabled": true, 
    "lang": null, 
    "origin": "Application", 
    "value": "editor" 
}, 
{ 
    "allowedMemberTypes": [ 
        "User" 
    ], 
    "description": "Members of the Umbraco Writers group.", 
    "displayName": "Umbraco Writer", 
    "id": "[UniqueGuid]", 
    "isEnabled": true, 
    "lang": null, 
    "origin": "Application", 
    "value": "writer" 
} 

Note that the display name can contain spaces and the value parameter will be used to map to the Group Alias in Umbraco.

From the Overview blade of your application, make a note of the Application (client) ID and Directory (tenant) ID, as you'll need them later. You can also update the Branding for your application, which may appear on your users' Applications page - the logo also appears in the Application listings in Azure, so it can be useful to help you spot it amongst all the others.

Having done all that, you can then configure some users - to do this switch to the "Enterprise Applications" blade in your Azure Active Directory and locate your new Application Registration - depending on your configuration you may have many or very few, and so may have to search for it either by name or the Application (client) ID.

Select your application, and open the "Users and groups" blade and select "Add user". Depending on your Azure Active Directory plan, your experience may be better or worse - on the Free plan you can only assign users to roles, but with any of the paid plans (Basic and above) you can add groups to roles (when selecting users, groups and roles, make sure that you do press the "Select" button at the bottom of the blade each time!):
[Screenshot: Assign Roles to Users or groups]

To add a user or group to more than one role, you need to add them multiple times:
[Screenshot: Add user to multiple roles]

Back in your code, first up add the Application (client) Id, Directory (tenant) Id and redirect URI to your appSettings in the web.config - these aren't secrets, so should be safe in source control, but you'll want to ensure that your redirectURI is updated for each environment so that your users are returned to the correct instance.

As I planned on using App_Start\UmbracoStandardOwinStartup.cs as the basis of my application, I updated the owin:appStartup setting to reference UmbracoStandardOwinStartup.

In the UmbracoStandardOwinStartup class, just update the call to ConfigureBackOfficeAzureActiveDirectoryAuth with the values added to the appSettings - note that you can use the Directory (tenant) Id for both the tenant and issuerId properties, which is useful if you're not sure which domain is associated with the directory.
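As a rough illustration (the appSettings key names here are my own, and the exact parameter list depends on the version of the IdentityExtensions package you have installed), the start-up class ends up looking something like this:

using System;
using System.Configuration;
using Owin;
using Umbraco.Web;

public class UmbracoStandardOwinStartup : UmbracoDefaultOwinStartup
{
    public override void Configuration(IAppBuilder app)
    {
        // Ensure the standard Umbraco OWIN middleware is configured first.
        base.Configuration(app);

        // Wire up Azure AD back office authentication from appSettings.
        app.ConfigureBackOfficeAzureActiveDirectoryAuth(
            ConfigurationManager.AppSettings["azureAd:tenantId"],     // tenant
            ConfigurationManager.AppSettings["azureAd:clientId"],     // Application (client) Id
            ConfigurationManager.AppSettings["azureAd:redirectUri"],  // post-login redirect URI
            new Guid(ConfigurationManager.AppSettings["azureAd:tenantId"])); // issuer Id - the tenant Id works here
    }
}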

The main changes are then in the UmbracoADAuthExtensions.cs class. This class contains the single extension method ConfigureBackOfficeAzureActiveDirectoryAuth called by the start-up class and this is where I got to work.

To resolve the issue with the user's email address not being found, I wrote a custom handler for the SecurityTokenValidated notification. To wire that up, add the following to the OpenIdConnectAuthenticationOptions constructor call:

Notifications = new OpenIdConnectAuthenticationNotifications 
{ 
 SecurityTokenValidated = async notification => { AzureActiveDirectory.HandleNotification(notification); }, 
} 

This calls the following custom method, which uses the user's full name (rather than just their first name) and then finds either the Email or UPN claims, which should contain the user's email address.

// Need to handle the case when the email address is returned on the UPN claim. 
internal static void HandleNotification(SecurityTokenValidatedNotification<OpenIdConnectMessage, 
                                                                    OpenIdConnectAuthenticationOptions> notification) 
{ 
   var id = notification.AuthenticationTicket.Identity; 

   // we want to keep name (as a whole name) and roles 
   var name = id.FindFirst(ClaimTypes.Name); 

   var email = id.FindFirst(ClaimTypes.Email) ?? id.FindFirst(ClaimTypes.Upn); 

   var roles = id.FindAll(ClaimTypes.Role); 

   // create new identity and set name and role claim type 
   var nid = new ClaimsIdentity( 
       id.AuthenticationType, 
       ClaimTypes.Name, 
       ClaimTypes.Role); 

   nid.AddClaim(name); 
   nid.AddClaims(roles); 
   nid.AddClaim(id.FindFirst(ClaimTypes.NameIdentifier)); 
   var emailClaim = new Claim(ClaimTypes.Email, email.Value); 
   nid.AddClaim(emailClaim); 

   notification.AuthenticationTicket = new AuthenticationTicket( 
       nid, 
       notification.AuthenticationTicket.Properties); 
} 

That should get basic authentication up and running, but it requires that the users already exist in Umbraco, so we need to enable Auto Linking, and then add and remove groups based on the roles included in the claims from Azure AD.

Back in ConfigureBackOfficeAzureActiveDirectoryAuth, we need to create a new ExternalSignInAutoLinkOptions object and add it to our OpenIdConnectAuthenticationOptions object. Again, this is going to use a couple of custom handlers to configure the values:

// Don't add the user to any groups by default, these should be added by the claims from Azure. 
var autoLinkOptions = new ExternalSignInAutoLinkOptions(true, new string[] { }, defaultCulture: "en-GB"); 
 
// Handle the Roles from the Azure AD Application 
autoLinkOptions.OnAutoLinking = AzureActiveDirectory.OnAutoLinking; 
 
// Check the Roles from the Azure AD Application on subsequent login 
autoLinkOptions.OnExternalLogin = AzureActiveDirectory.OnExternalLogin; 
 
adOptions.SetExternalSignInAutoLinkOptions(autoLinkOptions); 

The OnAutoLinking method is fairly simple as it just calls out to the OnExternalLogin method:

internal static void OnAutoLinking(BackOfficeIdentityUser user, ExternalLoginInfo info) 
{ 
    // Let login handle sorting out the roles. 
    OnExternalLogin(user, info); 
} 

The OnExternalLogin method does all the heavy lifting:

public static bool OnExternalLogin(BackOfficeIdentityUser user, ExternalLoginInfo info) 
{ 
    // check user is still in an editing group 
    var applicationRoles = info.ExternalIdentity 
                               .FindAll(c => c.Type == info.ExternalIdentity.RoleClaimType) 
                               .Select(c => c.Value) 
                               .ToList(); 
 
    if (applicationRoles.Any()) 
    { 
        var groups = user.Groups.ToList(); 
 
        var groupsToRemove = groups.Where(g => !applicationRoles.Contains(g.Alias)) 
                                   .ToArray(); 
        var groupsToAdd = applicationRoles.Where(r => !groups.Any(g => g.Alias.Equals(r))); 
 
        // Remove old groups and reset the group array, then sort out the roles. 
        // Has to be done this way to ensure correct change tracking on the underlying user. 
        foreach (var group in groupsToRemove) 
        { 
            groups.Remove(group); 
        } 
 
        user.Groups = groups.ToArray(); 
 
        foreach (var group in groupsToRemove) 
        { 
            var userRole = user.Roles.FirstOrDefault(r => r.RoleId.Equals(group.Alias)); 
 
            if (userRole != null) 
            { 
                user.Roles.Remove(userRole); 
            } 
        } 
 
        foreach (string group in groupsToAdd) 
        { 
            user.AddRole(group); 
        } 
 
        return true; 
    } 
 
    return false; 
} 

And with that in place, you should then be able to log in to your Umbraco instance using the credentials attached to the account linked to the Azure AD (for example a Windows Live, Office 365 or Federated AD account). As you add or remove roles from the user, these are reflected each time they authenticate through the application.

Filed under: ASP.NET MVC, Azure, Umbraco

Restricting access to Sitecore Media Items

Posted 03 March 2015, 11:00

I recently had a requirement to lock down some media items (PDFs in this case) within Sitecore so that only certain logged in users could access them. In principle this is trivially easy - ensure the users are in the right roles, remove read access from the extranet\anonymous user and grant read access to the specific roles. However, as always, the devil is in the details.

Whilst the above steps did work and users were correctly sent to the login page, there was a problem - once the user logged in, they were just sent to the home page of the site rather than being returned to the item they'd requested.

Checking the web.config I found the following setting, which defaults to false:

<setting name="Authentication.SaveRawUrl" value="true" />

But setting it to true here didn't actually make any difference, because the out-of-the-box MediaRequestHandler ignores this value. I'm not really sure whether that makes sense: if I lock down some images, for example, but then include them on a publicly accessible page, the user isn't going to be prompted to log in - they'd just get broken images as the browser requests an image but gets HTML in response. In the context of a PDF or other document, though, surely you'd want to log in and be returned to the correct place.

Anyway, the solution was fairly straightforward. I created a new RestrictedMediaRequestHandler that inherits from MediaRequestHandler and overrides only the DoProcessRequest method:

/// <summary>
/// Extends the Sitecore MediaRequestHandler to include the requested
/// URL in the redirect to the login page.
/// </summary>
public class RestrictedMediaRequestHandler : MediaRequestHandler
{
  protected override bool DoProcessRequest(HttpContext context)
  {
    Assert.ArgumentNotNull(context, "context");
    MediaRequest request = MediaManager.ParseMediaRequest(context.Request);
    if (request == null) {
      return false;
    }

    Media media = MediaManager.GetMedia(request.MediaUri);
    if (media != null) {
      // We've found the media item, so send it to the user
      return DoProcessRequest(context, request, media);
    }

    using (new SecurityDisabler()) {
      // See if the media item exists but the user doesn't have access
      media = MediaManager.GetMedia(request.MediaUri);
    }

    string str;
    if (media == null) {
      // The media item doesn't exist, send the user to a 404
      str = Settings.ItemNotFoundUrl;
    } else {
      Assert.IsNotNull(Context.Site, "site");
      str = Context.Site.LoginPage != string.Empty ?
          Context.Site.LoginPage : Settings.NoAccessUrl;

      if (Settings.Authentication.SaveRawUrl) {
        var list = new List<string>(new[]
                                    {
                                        "item",
                                        Context.RawUrl
                                    });

        str = WebUtil.AddQueryString(str, list.ToArray());
      }
    }

    HttpContext.Current.Response.Redirect(str);
            
    return true;
  }
}

Then I updated the web.config to tell the Sitecore media handler to use this new handler instead of the default one, and all was well in the world:

<add verb="*" path="sitecore_media.ashx"
     type="Custom.Infrastructure.Sitecore.RestrictedMediaRequestHandler, Custom.Infrastructure"
     name="Custom.RestrictedMediaRequestHandler" />

And now when a user requests a PDF they don't have access to they are sent to a login page that can return them to the PDF afterwards.

Filed under: ASP.NET, Sitecore

Setting up OSMC on a Raspberry Pi

Posted 11 February 2015, 21:50

I've had a Raspberry Pi (original model B) sitting around at home for about a year, and I've been wondering what to do with it for most of that time. I've finally decided that as we're decluttering the house* we need a better way to access all the music that currently sits on the media shares of an original Windows Home Server that we use for backups and media storage.

So I've got together the following hardware:

  • A Windows Home Server, with the Guest user enabled as follows:
    • No password
    • Read Access to the Music, Videos and Photos shares
  • A Raspberry Pi Model B
  • A tiny USB wifi dongle (or one very much like it)
  • An 8GB SD card (they recommend a class 10, mine's currently a class 4)
  • A USB mouse
  • A TV with an HDMI port and its remote (for initial configuration)
  • A stereo amplifier with an AUX port and some speakers (we're running the Pi headless)

The software needed was:

  • The OSMC Installer (I went for the Windows one) - currently Alpha 4
  • An SSH client - what confused me here was the reference to PuTTY in the docs - I thought I had that as part of GitExtensions, but Plink (described as "a command-line interface to the PuTTY back ends") is not really the same thing - so I used the Bash shell installed by Git and that worked a treat - but I'm sure the proper PuTTY client would be fine as well

The process I followed was then (I'm mostly documenting this so that if I have to do it again, I'll have one location to refer to):

  1. Install OSMC on the SD card, setting up the wireless connection during the installer - this is currently the only way to configure this
  2. Insert the SD in the Pi, insert the wifi dongle and mouse and connect it to the TV before powering on
  3. Revel in the wonder that is OSMC running on your telly box.
  4. Change the skin - there are few known issues with the default skin - I'm currently using Metropolis, but might switch to Conq as it seems even lighter (and I'm not really going to be seeing it).
  5. Choose a web server addin of your choice - I'm currently really liking Chorus, which works nicely on Chrome and is responsive enough to have a "remote" view on mobile devices.
  6. Set the Audio output to at least "audio jack" (or possibly both if you want to test it on the TV first)
  7. I wanted to change the webserver port to 80, but that's only been fixed post Alpha 4.
  8. I also turned on AirPlay "just in case" - although most of the devices don't have anything suitable to stream.
  9. On my computer, fire up bash and connect to the Pi via SSH:
    ssh osmc@[pi-ip-address]
  10. Enter the password when prompted and you should be in
  11. Set up the mount points - you need to create local folders to hold the mounted network drives first, so I went for the following steps, which create them under the osmc user's home directory:
    sudo mkdir -p media/music
    sudo mkdir -p media/photos
    sudo mkdir -p media/videos
  12. You can then try mounting your network drive - and I suggest you do so that you can iron out any issues - as a point of note, it appears that you can either use an absolute path or one relative to your current location:
    sudo mount -t cifs -o guest //[servername]/music music
    This basically says:
    1. Mount a device using the CIFS module: mount -t cifs
    2. Pass the option "guest" to use the guest user: -o guest
    3. The network path to mount: //[servername]/music
    4. The mount location: music (in this case a folder relative to the current location - the one created under media above)
  13. If that works, you should be able to change into your music directory and see the folder structure that exists on your server.
  14. You now need to get these to mount every time the Pi boots (hopefully not all that often). OSMC comes with the nano editor pre-loaded, so open your File System Table file as follows:
    sudo nano /etc/fstab
  15. Then I added the following lines:
    //[servername]/music  /home/osmc/media/music  cifs guest,x-systemd.automount,noauto 0 0
    //[servername]/photos /home/osmc/media/photos cifs guest,x-systemd.automount,noauto 0 0
    //[servername]/videos /home/osmc/media/videos cifs guest,x-systemd.automount,noauto 0 0

    These are similar to the mount command above:
    1. The network path to mount: //[servername]/music
    2. The full path to the mount location: /home/osmc/media/music
    3. The mount type: cifs
    4. The comma separated mount options: guest,x-systemd.automount,noauto
      A bit more information about these options: we need to wait until the network is up and running before we can mount the drives, and these options take care of that:
      1. Use "guest" credentials (the standard option for cifs): guest
      2. Use systemd's automount support, so the share is mounted on demand once the network is available: x-systemd.automount
      3. Do not use the standard auto-mount at boot (the automount above handles it instead): noauto
    5. The final two fields are the standard fstab dump and fsck pass settings, both disabled: 0 0
  16. Saved the changes (Ctrl+o), exited nano (Ctrl+x) and exited the console (exit)
  17. Back in OSMC, I then rebooted the Pi. If all's gone well, it should restart without any errors.
  18. I then went and added the new mounted folders to their respective libraries within OSMC, not forgetting to tell it to build the music library from the added folder.

Phew - quite a lot of steps, but I'm now sitting here listening to my music collection on the stereo with a "permanent" solution.

Now that I know it works, what would I change? I'd probably spend more than a fiver on the wifi dongle, and I might get a better SD card too - the playback can be a little stuttery... seeing as we're about to move into our loft room, grabbing a new Raspberry Pi 2 and popping it up there looks like a no-brainer.

Filed under: OSMC, Raspberry Pi

Long Running Sitecore Workflows

Posted 31 March 2014, 15:42

Note: This has been sitting in my queue for nearly a year, mainly because I didn't find a nice solution that worked with workflows - but I thought I'd finish it up and move on - 10/02/2015

I've been looking into some options for informing editors about the state of long running processes when carrying out a Sitecore workflow action. Typically, the UI will freeze while the workflow action is happening - which can cause issues with browsers (I'm looking at you Chrome) that decide that the page has timed out and just kill it.

In our particular case, we are generating a static copy of our site (as XML, html and a packaged .zip container) for use within a Magazine App container - the content is all hosted via a CDN, and only gets updated when a new issue is published. However, processing a number of issues and languages can take a little while.

I'm currently favouring a fairly simple Sitecore Job running in the context of a ProgressBox, which is working, but has a few rough edges.
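As a rough sketch of the shape of this - the class, method and icon names are mine, and the exact ProgressBox.Execute overload may vary between Sitecore versions - the workflow action looked something like:

using Sitecore.Data.Items;
using Sitecore.Shell.Applications.Dialogs.ProgressBoxes;
using Sitecore.Workflows.Simple;

public class PackageMagazineAction
{
    // Entry point invoked by the workflow action definition item.
    public void Process(WorkflowPipelineArgs args)
    {
        Item contentItem = args.DataItem;

        // Run the long-running work under a ProgressBox: the client connection
        // stays alive and the editor can see that something is happening.
        ProgressBox.Execute(
            "PackageMagazine",                      // job name
            "Packaging magazine for the CDN",       // title shown to the editor
            "Applications/32x32/document_gear.png", // icon (illustrative)
            parameters => BuildPackage((Item)parameters[0]),
            contentItem);
    }

    private void BuildPackage(Item item)
    {
        // Generate the static XML/HTML/.zip output for the issue and upload it to the CDN.
    }
}

Inside BuildPackage you should then be able to add messages to the current job's status to surface progress to the dialog.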

The key advantages this method has are:

  • It keeps the connection between the browser and the server active, which keeps Chrome happy.
  • There's a visual indication that "something is happening", which keeps editors happy.

The issues I'm currently looking into however include:

  1. Because the task is running asynchronously, the workflow action "completes" (at least from a code point of view) before the Job finishes.
  2. Because of 1, there's no way to stop the workflow and mark it as "failed" if there are issues with the process.

Not long after I started writing this, the client requested that we remove the various status checks from the workflow conditions (so they could run the process for staging without having to complete the entire magazine), and I came to the conclusion that having this as a Sitecore workflow didn't really work for us, because the editors' workflow was: work on a few pages, package for staging, work on a few more pages, package for staging, and so on, until it was ready to package for production - with the workflow in place they had to keep rejecting the build to staging so they could re-run that step.

We therefore needed to replace the workflow with some custom ribbon buttons, allowing the editors to package the content for staging or production as needed.
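In sketch form (again, the command, job and class names are mine), one of those ribbon buttons just wires a standard Sitecore command up to a background job:

using Sitecore.Data.Items;
using Sitecore.Jobs;
using Sitecore.Shell.Framework.Commands;

public class PackageForStagingCommand : Command
{
    public override void Execute(CommandContext context)
    {
        if (context.Items.Length != 1)
        {
            return;
        }

        Item item = context.Items[0];

        // Kick the packaging off as a background job so the ribbon stays responsive;
        // progress can be reported back through the job's Status.
        var options = new JobOptions(
            "PackageForStaging",          // job name
            "Publishing",                 // category
            Sitecore.Context.Site.Name,   // site name
            new MagazinePackager(),       // instance to invoke
            "Package",                    // method to call
            new object[] { item });       // arguments

        JobManager.Start(options);
    }
}

public class MagazinePackager
{
    public void Package(Item item)
    {
        // Generate the static content for the given issue and push it to staging.
    }
}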

Filed under: Sitecore, Sitecore Jobs, Sitecore Workflow

Deploying Assemblies to the GAC

Posted 15 May 2013, 14:50

Pulling together a couple of helpful posts into one place so that I can find it later...

I wanted to deploy an assembly to the Global Assembly Cache (GAC) on a Windows 2008 server - GacUtil isn't necessarily installed, and I don't really want to install the entire SDK onto the server just to get it.

PowerShell to the rescue!

The first thing I needed to do was enable PowerShell to load .Net 4 assemblies. This was done by adding a config file (powershell.exe.config) to the PowerShell home directory (run $pshome from a PowerShell console to find it), with the following contents:

<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0.30319" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>

You can then run the following commands to load the 4.0 version of EnterpriseServices, which will install your libraries into the correct GAC depending on their target runtime:

[System.Reflection.Assembly]::LoadWithPartialName("System.EnterpriseServices")
$publish = New-Object System.EnterpriseServices.Internal.Publish
$publish.GacInstall("C:\PathTo\Your.Assembly.dll")

Filed under: .Net, Fixes

Microsoft Surface and Windows 8

Posted 19 February 2013, 16:58

I pre-ordered the Windows Surface back in October and have been running it fairly happily ever since, but thought I ought to write up what I've found to be good and bad with it, especially in comparison to other tablets, notably the Nexus 7.

Accounts

This is the key win for me: Windows 8 comes with Family Safety built in, which means I can set up separate accounts for each of my kids, and enable curfews and time limits that just work. On top of that, I can restrict their web access appropriately, along with the game ratings from the store. They then get prompted that something's been blocked and can request permission. This is something that I've not seen on the Apple devices at all, and while the Nexus supports multiple accounts, there are no restrictions or sharing. Which leads me to:

App Sharing

Once you have accounts, the issue of purchasing apps comes up, and Microsoft's answer is that apps you've purchased can be downloaded onto up to 5 machines and onto others' accounts. So I purchase the app on my account, my son logs in, I go to his Store app, go into the settings and switch to my account. I can then see all the apps I've purchased but not installed on here, and install it for him after re-entering my password (I'm not that foolish). In general this works fairly well, and a lot better than it does on Android, where I have to link my account completely with his, including mail, etc.

Content Consumption

There's good and bad elements to the Surface form factor - clearly being a 10" widescreen device, watching HD content on the iPlayer or similar works really well - especially with the HDMI output. Couple that with a USB input and we can easily pull the latest round of pictures off the camera, sort through them and share a few out there (just about - see App ecosystem below). For reading it can depend on the app: in portrait mode pages in Kindle Reader are a little too long, and it's a little top heavy that way, but I've got a nice RSS reader that has a great three column layout with feed categories, feeds and then the actual posts spreading across the landscape device.

Content Creation

Well, words at any rate - the touch screen finger painting style creation works well across all platforms - for long form note taking, the Type cover owns this space, there is no competition from on screen options here. The Touch cover is good for incidental notes, but as others have noticed, it is a bit picky about the angle it works at, which is a shame. The on screen keyboard is however very adaptive, allowing you to choose between a full keyboard for when the Surface is flat on a table or lap, and the split thumb keyboard, which is good for when you're carrying it - the keyboard is split in two allowing your thumbs to reach all the keys even with the 10" screen. On top of that, the on screen keyboard also has the left and right arrows for easy navigation and corrections.

App ecosystem

This is the biggest issue really: My wife was hoping to use the tablet occasionally for her work in Speech and Language Therapy, however all the apps out there are obviously for the iPad (even the Play Store seems a bit light in that respect, although there are some).

Filed under: Tablets, Windows 8

Working with Symplified

Posted 31 August 2012, 19:00

I've been working on a couple of Proof of Concept demos for a client that's looking to implement a Single Sign On solution on their new site, and one of the offerings was from Symplified. Seeing as there doesn't appear to be much out there on this, especially within an ASP.Net context, I thought I'd write up my experience.

Symplified Network Overview

The first thing to realise is that Symplified works as a reverse proxy, sitting between your server and your users (reverse in that it's a proxy you put in place rather than your user's ISP). So all requests hit the Symplified app server first before they are forwarded on to your servers. All authorisation is handled by the Symplified app, so you shouldn't be locking things down with the authorization elements in web.config files.

However, you can still use some of the features that the framework provides you with a bit of care.

Membership Provider

I started off with the idea of implementing a custom Membership Provider to handle the authentication/authorisation aspects (as this had worked well in the previous PoC based on PingFederate).

CreateUser

You can still implement the CreateUser method in a custom membership provider, as you will need to provision users within Symplified, especially if you want to allow direct registration.

In Symplified's world, you will need to make three calls to a REST service:

  1. Create a session token
  2. Create a user
  3. Reset the user's password

You need to reset the password as by default the users appear to be created with an expired password, and resetting it to the same value fixes this - note that Symplified will also send an email to the user informing them that they've reset the password - you may want to suppress this.

Not too bad, however handling errors from the create user service is a little tedious:

  • If any of the parameters don't match the patterns expected, you'll get a 500 Internal Server error returned, with plain text error messages in the XML response.
  • If the user already exists you'll get a 400 Bad Request, again with the error description in the XML.

These plain text error messages will need to be parsed and mapped to MembershipCreateStatus values to get sensible errors back to your controls.
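In the PoC this mapping was little more than string matching - a minimal sketch, with placeholder fragments rather than Symplified's actual error text:

using System;
using System.Web.Security;

public static class SymplifiedErrorMapper
{
    // Maps the plain text error returned in the XML response body to a
    // MembershipCreateStatus, so the standard membership controls can show
    // a sensible message to the user.
    public static MembershipCreateStatus MapCreateUserError(string errorText)
    {
        if (string.IsNullOrEmpty(errorText))
        {
            return MembershipCreateStatus.ProviderError;
        }

        // The matched fragments below are illustrative placeholders.
        if (errorText.IndexOf("already exists", StringComparison.OrdinalIgnoreCase) >= 0)
        {
            return MembershipCreateStatus.DuplicateUserName;
        }

        if (errorText.IndexOf("password", StringComparison.OrdinalIgnoreCase) >= 0)
        {
            return MembershipCreateStatus.InvalidPassword;
        }

        if (errorText.IndexOf("email", StringComparison.OrdinalIgnoreCase) >= 0)
        {
            return MembershipCreateStatus.InvalidEmail;
        }

        return MembershipCreateStatus.ProviderError;
    }
}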

ValidateUser

You can't really implement the ValidateUser method, however, as there's nothing in the API you can call to do this: the user's login credentials need to be sent directly to Symplified's SSO application so it can set its cookies appropriately, and then pass some headers through to your "secure" areas.

So, how do you handle an authenticated user?

When the user is viewing a "secure" area of your site, Symplified will send a number of additional headers along with the request, which will include things like the username. These can then be used to generate a Forms Authentication ticket and a membership principal that you can fill in for the app to use later.

For the PoC I implemented that logic in a custom Module that hooks into the application's AuthenticateRequest event.
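The module looked roughly like the following - only a sketch, and the X-Symplified-User header name is a stand-in for whichever headers your appliance is configured to inject:

using System;
using System.Security.Principal;
using System.Web;
using System.Web.Security;

public class SymplifiedAuthenticationModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.AuthenticateRequest += OnAuthenticateRequest;
    }

    private static void OnAuthenticateRequest(object sender, EventArgs e)
    {
        var application = (HttpApplication)sender;
        var request = application.Context.Request;

        // Hypothetical header name - use whatever the Symplified proxy actually sends.
        var username = request.Headers["X-Symplified-User"];
        if (string.IsNullOrEmpty(username))
        {
            return;
        }

        // Build a forms authentication ticket and principal from the header,
        // so the rest of the application sees an authenticated user as normal.
        var ticket = new FormsAuthenticationTicket(username, false, 30);
        var identity = new FormsIdentity(ticket);
        application.Context.User = new GenericPrincipal(identity, new string[0]);
    }

    public void Dispose()
    {
    }
}

In a fuller implementation you'd also verify that the request really did come through the Symplified appliance before trusting the header (see the next steps below).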

OpenId Users

The one big issue so far has been around users authenticating via OpenId providers. These users are authenticated without a user being provisioned in Symplified, which could well be an issue for you. The solution we put in place within the PoC was to check for the headers stating this was a login from the OpenId provider, and then attempt to create a user within Symplified, ignoring the duplicate username error message - the Symplified engineers were looking at adapting the solution so that if the OpenId user matched a known user it would send additional headers, which would allow me to skip the creation step.

Next Steps

If we decide to go forward with Symplified there are a number of changes I'd like to make:

  • Only create the user context if the request comes from the Symplified app.
  • Implement the GetUser methods using the Symplified API.
  • Redirect requests to the Symplified appliance if they don't come from there.
  • Don't try and create the user on every single request!

Filed under: ASP.NET, SSO