Wednesday, December 28, 2011

ASP.NET MVC 4 Bundling and Minification - Composing Child Views

Of the upcoming ASP.NET MVC 4 features, one that I am really excited about is the Bundling feature contained within the Microsoft.Web.Optimization assembly in the current Developer Preview release. You can get a standalone version of this assembly directly from NuGet.
In this article I want to discuss some of the basics of Bundling and also to outline some thoughts that I have for organizing scripts and code when composing views in your application.

MVC4 Bundling Primer

The Bundling feature allows you to easily combine and minify resources within your application.  Take for example an application which contains the following JavaScript files:
[image: the application's Scripts folder, containing many individual .js files]
In the ‘bad old days’ we may have emitted each of those script files separately, which means that the client would have to make lots of requests to pull down the content.
[image: each script file referenced with its own script tag]
Emitting the files as above would result in the following network traffic on the client:
[image: browser network trace for the page]
As you can see at the bottom of the image, 30 separate requests have been issued.

Default Bundle Behaviour

Using Bundling is as simple as registering the files that we want to optimize with the BundleTable feature.  To achieve that, we can add the following line of code to Application_Start in Global.asax to register all of the files shown above:
BundleTable.Bundles.EnableDefaultBundles();
This default behaviour registers all of the .js and .css files located in their default locations and creates routes so that they can be served up by the application.  Rendering them out in the page is then as simple as pointing a script tag at the route that has been registered – in the default case, that would result in the following script tag being added to the bottom of your main layout file:

<script src="@Url.Content("~/Scripts/js")" type="text/javascript"></script>

Dynamic Bundles

For finer-grained control over the ordering of resources within a Bundle you can create a dynamic bundle and manually add files in the order that you want them to be rendered.
Bundle bundle = new Bundle("~/CoreBundle", typeof(JsMinify));
bundle.AddFile("~/Scripts/jquery-1.7.1.min.js");
bundle.AddFile("~/Scripts/jquery-ui-1.8.16.js");
bundle.AddFile("~/Scripts/jquery.validate-vsdoc.js");
...

BundleTable.Bundles.Add(bundle);
And then, in the page, emit the path that was just registered:
<script src="@Url.Content("~/CoreBundle")" type="text/javascript"></script>
Note: I needed to register the Twitter Bootstrap scripts dynamically like this because I was getting errors when using the default registration.  The errors resulted from the order in which the default registration behaviour was adding the scripts to the bundle.  In the case of Twitter Bootstrap, I needed to order the scripts specifically because of dependencies between Twipsy.js and Popover.js.
[image: script error caused by the default bundle ordering]

File Organization for child views

So far we have seen how to register all of the core scripts for our application but, as our application grows, we will typically introduce lots of individual scripts to manage the behaviour of specific pages.  In my case, I adopt a convention where I separate all of the JavaScript and CSS code out of the view files and into a CSS/JS file with a name which mirrors the view that it represents.  The following table illustrates how this mapping works:

Resource Location          JS Location
Home/Index.cshtml          Scripts/Views/Home_Index.js
Shared/ChildView.cshtml    Scripts/Views/ChildView.js
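This mapping convention is mechanical enough to capture in code.  As a sketch, with a helper name of my own choosing (not part of the post's codebase):

```javascript
// Maps a view path to its companion script path under the convention above.
// Views under Shared/ drop the folder prefix; others use Folder_View.
function scriptPathFor(viewPath) {
    var parts = viewPath.replace(/\.cshtml$/, "").split("/");
    var name = parts[0] === "Shared" ? parts[1] : parts.join("_");
    return "Scripts/Views/" + name + ".js";
}
```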

To give a concrete example of how this works, let’s consider that the shared partial view listed in the above table contains the following code, which allows a user to click on a link:
<h2>Child Content</h2>
<a href="#" id="clickerLink">Click Me</a>
The corresponding JS file would have behavioural code to handle the click event of the anchor tag like so:
$(function () {
        $("#clickerLink").click(function (e) {
            e.preventDefault();
            alert($(this).text());
        });
}); 
When it comes to rendering the output to the browser, it is desirable to have the JS emitted as part of the core bundle, allowing us to have all scripts combined, minified, and in one optimal location at the bottom of the page.

To achieve this desired outcome we simply register the child script from the partial view that depends on it like so:
@{
    var bundle = Microsoft.Web.Optimization.BundleTable.Bundles.GetRegisteredBundles()
        .Where(b => b.TransformType == typeof(Microsoft.Web.Optimization.JsMinify))
        .First();
    
    bundle.AddFile("~/Scripts/Views/ChildContent.js");
}
For the sake of completeness, you would probably abstract that messy BundleTable logic out into a helper which would reduce your child registration code down to the following line of code:
Html.AddJavaScriptBundleFile("~/Scripts/Views/ChildContent.js", typeof(Microsoft.Web.Optimization.JsMinify));
And here's some example code for what the extension class might look like:



public static class HtmlHelperExtensions
{
    public static void AddJavaScriptBundleFile(this HtmlHelper helper, string path, Type type)
    {
        AddBundleFile(helper, path, type, 100);
    }

    /// <param name="index"> 
    /// Added this overload to cater for switching between 
    /// different Script optimizers more easily
    /// e.g. Switching between Microsoft.Web.Optimization 
    /// bundles and ClientLibrary resources could
    /// be done seamlessly to the application
    /// </param>
    public static void AddBundleFile(this HtmlHelper helper, string path, Type type, int index)
    {
        // Note: the index parameter is not used yet; it is reserved for
        // ordering when switching between different script optimizers.
        var bundle = BundleTable.Bundles.GetRegisteredBundles()
            .First(b => b.TransformType == type);

        bundle.AddFile(path);
    }
}


Thursday, December 22, 2011

Social Internet Computing Fabric - Activity Streams

According to Gartner, we are currently experiencing four mega-trends in the world of internet computing that will change the shape of the IT landscape forever.  Those key trends are: Social, Mobility, Cloud, and Context-Aware Computing.

If this is true, then I believe that any application being developed today should consider how each of these aspects will be woven into its core fabric.  In this article I will touch on what I believe is a core piece of the social computing fabric – Activity Streams.

We’ve come to know of Activity Streams through their implementation in common social web applications such as Facebook and LinkedIn. Twitter has recently started exposing Activity Streams through their Activity Tab and Facebook even offers an extensibility plugin to embed their Activity Feeds outside of their own website.

As you plan your own architecture, you may consider designing a custom Activity Stream to display recent activities to users of your site.  You would start planning by thinking about the different types of activities that can occur, like so:

[image: the types of activities a user can perform]

Here we see that a user can perform many actions within the site and that we may wish to display each of these in an Activity Stream.  When the user performs an activity, the details of that event are captured and stored in a database.  When you combine the overall amount of activity from many users within a social network, you can see that the Activity Stream becomes a useful barometer of what’s going on. 

You could also add other activities such as favouriting, liking, member profile updates, etc.  There is no limit to the number of different types of activities that you might want to display; it all comes down to the type of application that you are developing and what it would make sense for the members of the social networks contained within your application to see.

As you design your Activity Stream, many questions will arise.  One of the questions that you will come across is that of permissions – which activities should members be able to see?

[image: which activities should one member be able to see of another's?]

Some of the constraints surrounding permissions might be whether the two users are members of the same group, or whether one of the users has set specific content constraints on an individual content item.

[image: permission constraints between members, groups, and content]

Data Structure

A way to deal with this is to raise events consistently throughout your application and have a set of listeners log the details of those events into a table with columns that describe attributes for each permission boundary that you might want to filter on.  This might include filtering on attributes such as:

  • IsPublic – a flag which determines whether the content is visible to everybody
  • UserId – the identity of the user who posted the item
  • GroupId – the identity of the group that the content was posted to
  • FolderId – an optional identity value indicating which folder within the group that the content was posted to

Other columns that I would suggest including in your Activity Stream table are:

  • RSS Fields – Fields that allow you to display a title, date created, and description without having to run separate queries to get at that information.
  • Item Type – A value which indicates the type of item.  This could be used to display a custom icon to represent the item in the Activity Stream when rendered on a web page for example.
  • Item Id – The underlying identity of the item being represented.
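Putting the columns together, a single row could be represented as follows; the property names mirror the columns described above, and the values are invented for the example:

```javascript
// An illustrative Activity Stream row; nulls mean "not applicable".
var streamItem = {
    StreamItemId: 42,
    Title: "New video posted",
    Description: "Match footage from last weekend.",
    ItemTypeId: 1,                          // e.g. 1 = Post, 2 = Comment
    ItemId: 1234,                           // identity of the underlying item
    IsPublic: false,                        // visible to everybody when true
    AccountId: 7,                           // the posting user
    GroupId: 3,                             // group the item was posted to
    FolderId: null,                         // optional folder within the group
    CreatedDateTime: "2011-12-20T10:15:00Z"
};
```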

The Activity Stream data will end up looking something like the data in the following image:

[image: sample rows from the Activity Stream table]

From this data we can craft a query which has the logic to enforce permissions based on your application’s rules.  An example of such a query might look like this:

CREATE PROCEDURE [dbo].[GetStreamItems]
    @accountId int
AS
BEGIN
    SELECT TOP 100
        [StreamItemId], [Title], [Description], [ItemId],
        'Item Type' =
            CASE
                WHEN [ItemTypeId] = 1 THEN 'Post'
                WHEN [ItemTypeId] = 2 THEN 'Comment'
                ELSE 'System Message'
            END,
        [IsPublic], [AccountId], [GroupId], [FolderId], [CreatedDateTime]
    FROM [StreamItems] AS A
    WHERE IsPublic = 1
       OR (AccountId = @accountId)
       OR (GroupId IN (SELECT GroupId FROM dbo.AccountGroups WHERE AccountId = @accountId) AND A.FolderId IS NULL)
       OR (FolderId IN (SELECT FolderId FROM dbo.AccountFolders WHERE AccountId = @accountId))
    ORDER BY CreatedDateTime DESC;
END



Here we see that the permission logic is encoded in the WHERE clause at the bottom of the query.  The logic only displays items if any of the following rules are true:



  1. The item is publicly visible

  2. The item was created by the user who is invoking the query

  3. The user who is invoking the query is a member of a group that the item was posted to

  4. The user who is invoking the query is a member of a folder that the item was posted to

The other thing to notice about the query is that only the top N rows are retrieved and that the query is ordered by the date of the item.  The reason for this is performance.  Note also that, for performance reasons, you would create an index on the CreatedDateTime column so that the ordering is done via an index seek and not a table scan.
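To make the four rules concrete, here is the same permission check sketched in JavaScript; the function and property names are assumptions mirroring the table columns:

```javascript
// Returns true when `item` should appear in `account`'s stream.
// `account` carries the ids of the groups and folders the user belongs to.
function canSeeItem(item, account) {
    return item.IsPublic === true ||                        // rule 1: public item
           item.AccountId === account.accountId ||          // rule 2: own item
           (account.groupIds.indexOf(item.GroupId) !== -1   // rule 3: shared group...
               && item.FolderId === null) ||                // ...and not in a folder
           account.folderIds.indexOf(item.FolderId) !== -1; // rule 4: shared folder
}
```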


Performance Characteristics


For many smaller applications, performance will not be a significant factor in designing your architecture.  And, as they say, in the world of the web, performance constraints are generally a nice problem to have (because it means that you have lots of users!).  But it pays to do some analysis to see the rate at which your data will grow.  To consider this, let’s assume the following set of user profiles for our website:



[image: user profile segments and their activity rates]


This shows user profile data for an application which has 3 segments of users:



  1. Low Use (avg. 2 activity items per day) – account for 40% of the total user base

  2. Medium Use (avg. 10.7 activity items per day) – account for 40% of the total user base

  3. High Use (avg. 23 activity items per day) – account for 20% of the total user base

Given these usage profile ratios, we can then start to calculate how many items our Activity Stream table will grow to over time based on the total number of users that we expect to have.



[image: expected daily activity for 50, 100, 1,000, and 10,000 users]


Here we can see the calculation of expected activity based on total site users of 50, 100, 1,000, and 10,000.  At 1,000 site users we can see that our profiled user base would generate around 2,050 database entries per day and, at 10,000 site users, this would grow to around 20,500 entries per day.


So, at the top end of these numbers, our Activity Stream database would accumulate around 2.5 million rows of data every 120 days.  This is where the optimization benefit of indexing and only returning the most recent N entries really pays dividends, as you would still be fetching data in a few hundred milliseconds and returning it to your user in an acceptable amount of time.
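For completeness, the projection arithmetic can be sketched as a weighted sum over the segments.  The shares and rates below come from the usage list above; the post's quoted daily totals come from the profile image, which isn't reproduced here, so they may be based on slightly different inputs:

```javascript
// Estimate daily activity rows for a given total user count.
// Each segment: { share: fraction of user base, rate: items per user per day }.
function dailyItems(totalUsers, segments) {
    return segments.reduce(function (sum, s) {
        return sum + totalUsers * s.share * s.rate;
    }, 0);
}

// Rows accumulated in the Activity Stream table over a period of days.
function rowsAfterDays(totalUsers, segments, days) {
    return dailyItems(totalUsers, segments) * days;
}

var segments = [
    { share: 0.4, rate: 2 },    // low use
    { share: 0.4, rate: 10.7 }, // medium use
    { share: 0.2, rate: 23 }    // high use
];
```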

Tuesday, December 13, 2011

Problem Updating NuGet - Online Gallery Offline


While attempting to update to the recently released 1.6 version of NuGet at work, a few of us were having trouble connecting to the Online Extension Gallery in Visual Studio.  As you can see in the following image, the Online Gallery was listed as being 'Offline'.


Thankfully one of my colleagues solved the problem by applying a tweak to the devenv.exe.config file, which is located at C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE:
<configuration>
  ...
  <system.net>
    <settings>
      ...
      <servicePointManager expect100Continue="false" />
    </settings>
  </system.net>
</configuration>
Other things to consider if you are having trouble applying the NuGet update:
  1. If the package won't update, consider uninstalling and reinstalling it
  2. You should run Visual Studio as Administrator
  3. It could be a proxy issue

Monday, December 12, 2011

A result message format for asynchronous web operations

Over the past year I've been working on developing SkillsLibrary, a cloud-based application for doing sports analysis through video on the web.  During that time I've learned a lot about the many and varied challenges of building a modern web application.  Nowadays things such as mobility, page speed, and asynchronous operations rank high on the list of 'must have' requirements.  In this post I thought I'd touch on a simple little standard that I've used to make the job of handling asynchronous operations simpler.

Let's start by looking at a typical piece of code for invoking an asynchronous operation:
$.post("/members/remove", data, function (result) {

});
This method invokes a server-side operation by passing a JSON data object to a controller action at the path '/members/remove'.  As we see from that code, the abstractions in modern JavaScript libraries such as jQuery have really simplified such operations.

The problem I set out to solve was to provide a consistent way to handle the response which comes back from the server.

Think about it for a moment... after we invoke the above operation, typically we will need to update a piece of user interface to reflect the result.  And that result may not necessarily have succeeded!  Let's imagine that the member we attempt to remove cannot be removed because of a business rule.  Or that our logic fails and an exception occurs somewhere in the processing.  Or, perhaps, it did succeed after all?

In SkillsLibrary I've developed a simple little 'interface' pattern for results returned from asynchronous operations, which looks like this:
{
     bool Success;
     object Model;
     string Message;
}
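Serialized to JSON, a failed and a successful response under this pattern might look like the following (the property values are illustrative):

```javascript
// A failed operation: no model, but a message the UI can display.
var failureResult = {
    Success: false,
    Model: null,
    Message: "This member cannot be removed because of a business rule."
};

// A successful operation: the model carries the data the UI needs to update itself.
var successResult = {
    Success: true,
    Model: { Id: 17 },
    Message: null
};
```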
Having a common structure for returning messages has a few benefits:
  1. You can create helper code to create the return messages
  2. You can create common client-side code for handling return messages
Focussing on the second of these benefits, think again about what can potentially happen when an async operation is invoked.  In the first case, let's consider that the operation fails for some reason.  Your client-side code will need to know that the operation failed and be able to give the user some information about the failure.  In such a case it's as simple as querying the Success property and displaying the Message to the user like so:
$.post("/members/remove", data, function (result) {
    if (!result.Success) {
        UIHelper.displayErrorElement(result.Message);
    } else {
        ...
    }
});
Whereas, if the operation succeeds, we might want to do an operation to manipulate the DOM in some way to add or remove an object from the UI.  Again, this is easily achieved by using the Model property of the return message which will contain an object that represents a data item for the operation that was executed.
$.post("/members/remove", data, function (result) {
    if (!result.Success) {
        UIHelper.displayErrorElement(result.Message);
    } else {
        UIHelper.removeDOMItem("#rootDOMElement", result.Model.Id);
    }
});
You can see that, by having a consistent way of returning messages from the server, we are able to streamline our client-side handling code, which improves maintainability: less code that is more readable.