Category Archives: Geek

Contributing to open source, part II

In my previous post I detailed the open source projects we have contributed code to, while this post will highlight the projects we have made publicly available.

We host all our open source projects on GitHub, so feel free to browse our repositories. I would like to highlight just one project in this post: AzureWorkers.

AzureWorkers is a project we started back in August to create a framework for running multiple worker threads within a single Azure Worker Role. It allows us to quickly spin up more workers to do different things. I have already posted on how to use this project, so I won’t repeat myself, but will instead talk about why we chose to open source it rather than keep it closed off.

Looking at the blog post by GitHub president Tom Preston-Werner about this same issue, he basically makes the points for me! So, thank you Tom! I would like to highlight two things though:

  1. We do not open source business critical parts
  2. Open sourcing parts of our stack makes it possible and legal for us to use code written at work for hobby/home projects

Business critical parts

For us, all code related to risk management in any way is business critical; it is the essence of UXRisk and what Proactima can bring to the software world. This needs to be protected, so we do not open source it. So far it has been very easy to determine whether something is business critical or not; presumably it will get harder as we develop more code in the gray areas between technology and risk knowledge.

Use in hobby/home projects

Most employment contracts specify that all work carried out during office hours belongs to your employer, which is only to be expected! But if you open source that work, then you are free to use the code/result in other projects too! A colleague of mine, Anders Østhus, is using our AzureWorkers in his latest project (to be published, I hope!). This would have been hard to do, legally, if we had not open sourced that project.

In summary, I would like to thank my employer for allowing me not only to blog about my work, but also to share the fruits of our labor with the world. So thank you Proactima!

Ninject and Factory Injection

We have a requirement to instantiate a service facade based on a single parameter; this parameter is then used to load the appropriate configuration settings. A fellow team member pointed out that this looked like a factory requirement, so I started to look at Ninject Factory. At first I didn’t quite get the examples, but decided to just try it out and see what would work. Turns out to be pretty simple! This post is mostly a reminder to myself and perhaps a bit of guidance for others looking at doing the same.

There are three requirements for using Ninject Factory:

  1. Install the NuGet package
  2. Create a factory interface
  3. Create a binding to the interface

Point number two looked like this for me:
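
An illustrative sketch (the original snippet is not shown here, so the names are placeholders):

public interface IServiceFacadeFactory
{
    // Ninject Factory generates the implementation of this interface;
    // the parameter name must match the corresponding constructor
    // parameter on the concrete ServiceFacade.
    IServiceFacade Create(string prefix);
}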

The IServiceFacade interface and concrete implementation look like this:
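
Again an illustrative sketch, assuming a single IService dependency besides the prefix:

public interface IService
{
    // Dependency contract; members omitted.
}

public interface IServiceFacade
{
    // Facade members omitted for brevity.
}

public class ServiceFacade : IServiceFacade
{
    private readonly string _prefix;
    private readonly IService _service;

    // 'prefix' is supplied through the factory call; 'service'
    // is resolved from the regular Ninject bindings.
    public ServiceFacade(string prefix, IService service)
    {
        _prefix = prefix;
        _service = service;
    }
}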

Tying it all together is the Ninject Module:
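
A sketch of what such a module could look like (ServiceImplementation is a placeholder name):

using Ninject.Extensions.Factory;
using Ninject.Modules;

public class ServiceFacadeModule : NinjectModule
{
    public override void Load()
    {
        // ToFactory() makes Ninject generate a proxy class that
        // implements IServiceFacadeFactory at runtime.
        Bind<IServiceFacadeFactory>().ToFactory();

        // A regular binding, resolved when the proxy constructs
        // a ServiceFacade.
        Bind<IService>().To<ServiceImplementation>();
    }
}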

The ToFactory binding makes Ninject create a proxy that calls the constructor on ServiceFacade, with the prefix input and the IService implementation (from the regular binding). To make use of the factory I inject it and call Create.
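
An illustrative sketch of that consuming code:

public class FacadeConsumer
{
    private readonly IServiceFacadeFactory _factory;

    // The factory interface is injected like any other binding.
    public FacadeConsumer(IServiceFacadeFactory factory)
    {
        _factory = factory;
    }

    public IServiceFacade BuildFacade()
    {
        // The Ninject-generated proxy maps 'prefix' by name onto
        // the ServiceFacade constructor and resolves IService
        // from the kernel.
        return _factory.Create("somePrefix");
    }
}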

Here I inject the factory interface and call the Create method. The magic happens inside that Create method, since the factory is a proxy generated for me by Ninject! The really cool thing is that all regular bindings still apply, so if you look at the constructor for ServiceFacade it takes the prefix string and an interface (IService) that I bind in my module. Stepping into (F11) the Create method in debug mode I end up in the constructor for ServiceFacade, perhaps as expected, but very cool!

Also worth mentioning: you could have more inputs to the Create method, and only the names matter; ordering does not. So if I needed both a prefix and a postfix I could have them in any order in the factory interface and the constructor; as long as the names match, it’s OK. And finally, you are not restricted to one method on the factory interface, so I could have had a CreateWithPrefix and a CreateWithPostfix method, etc.
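
An illustrative sketch of the name-matching point, assuming the ServiceFacade constructor also took a postfix string:

public interface IServiceFacadeFactory
{
    // Arguments are matched to constructor parameters by name, so
    // this order need not mirror the constructor's. More methods
    // (e.g. CreateWithPrefix) could live on the same interface.
    IServiceFacade Create(string postfix, string prefix);
}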

The complete source for this is not available, as it’s part of a much bigger project known as UXRisk and it is not open source.

Contributing to open source, part I

Proactima values knowledge, and part of that is sharing it. One way of sharing, in software development, is to contribute to open source projects. So in UXRisk we have contributed to the open source projects that we use:

Ninject.Extensions.Azure

We have contributed minor fixes to package versions and the ability to inject on Azure Worker Roles. Just small changes that we needed in our work, but very rewarding to be able to fix them ourselves.

Semantic Logging Application Block

We are using ElasticSearch (ES) for our logging needs, and there was no built-in sink to store logs in ES, so we created our own implementation. We basically reused the code from the Azure Table Storage sink and adapted it to our needs. In cooperation with another member of the Semantic Logging Application Block (SLAB) CodePlex site, our code was accepted into the master branch of SLAB.

Azure Workers

In my new project (UXRisk) we had a requirement to do a lot of background processing, based on messages passed over Azure Service Bus. We did a lot of research online, found good examples of how to do this properly, and the outcome is a project we call AzureWorkers.

AzureWorkers makes it super easy to run multiple workers in one Azure Worker Role, all async and safe. As the implementor you basically just have to inherit from one of three base classes (depending on whether Service Bus or Storage Queue is used), and that class will run in its own thread and will be restarted if it fails.

There are four supported scenarios:

  • Startup Task – Will only be executed when the Worker Role starts. Implement IStartupTask to enable this scenario.
  • Base Worker – Will be called continuously (basically every second), for you to do work and control the timer. Inherit from BaseWorker to enable this scenario.
  • Base Queue Worker – Will call the Do method with messages retrieved from the Azure Storage Queue. Inherit from BaseQueueWorker to enable this scenario.
  • Base ServiceBus Worker – Will call the Do method whenever a message is posted to the topic specified in the TopicName override. Inherit from BaseServiceBusWorker to enable this scenario (see the sketch after this list).
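
As a minimal sketch (the member names below are inferred from the scenario list above, not verified against the actual AzureWorkers API), a Service Bus worker could look something like this:

using System.Threading.Tasks;
// ...plus the AzureWorkers namespace for BaseServiceBusWorker.

public class ExampleServiceBusWorker : BaseServiceBusWorker
{
    // The topic this worker subscribes to.
    public override string TopicName
    {
        get { return "exampletopic"; }
    }

    // Called whenever a message is posted to the topic. Deleting
    // the message is the implementer's responsibility (by design,
    // see the issues list below).
    public override async Task Do(string message)
    {
        // Process the message here...
        await Task.Delay(0);
    }
}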

On GitHub an example project is included to document these scenarios. To get started using AzureWorkers you can get the NuGet package.

Please note

AzureWorkers depends on these projects:

  • Ninject – version 3.0.2-unstable-9038
  • Ninject.Extensions.Azure – version 3.0.2-unstable-9009
  • Ninject.Extensions.Conventions – version 3.0.2-unstable-9010
  • Microsoft.WindowsAzure.ConfigurationManager – version 2.0.1.0
  • WindowsAzure.ServiceBus – version 2.2.1.0

Some of the code has been borrowed from a blog post by Mark Monster and a blog post by Wayne Walter Berry.

There are at least three issues with the code right now:

  • If processing of a message fails, it is re-posted directly and retried with no waiting time (Service Bus worker).
  • The implementer has to delete messages manually (by design).
  • It depends on Ninject rather than a generic IoC abstraction.

We will accept PRs to alleviate these issues.

My first adventures in Git

So up until today I have only heard about, read about and watched Git happen; I have not actually tried it myself. So when I realized today that I had some spare time, I thought I would dig into it and see if I could understand the hype and perhaps find it useful!

My very first step was to find a good tutorial on how to get started. I found one on Seascape Web that was easy to follow and short enough that I didn’t get lost. In the end I managed to set up my repo and commit my Double Key Dictionary source code (more on this later).

The next step will be to work with the code and Git to see how that feels vs. Team Foundation Service that I’ve been using so far.

ASP.Net Web API and Ninject

For the third time I’m using the new Web API in one of my projects, and again I hit the wall when it was time to hook it up to my DI framework of choice: Ninject. So this time I’m documenting it here; hopefully that will make me remember it going forward.

To make DI work with Web API there are three steps to perform:

  1. Create two classes: NinjectDependencyScope and NinjectDependencyResolver
    I used the code from Peter Provost’s blog (a sketch of those classes follows below this list).
  2. Hook up the new classes in NinjectWebCommon:
    GlobalConfiguration.Configuration.DependencyResolver = new NinjectDependencyResolver(kernel);
  3. Create bindings
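
The following sketch is along the lines of Peter Provost’s classes; treat it as an approximation rather than his exact code:

using System;
using System.Collections.Generic;
using System.Web.Http.Dependencies;
using Ninject;
using Ninject.Syntax;

public class NinjectDependencyScope : IDependencyScope
{
    private IResolutionRoot resolver;

    public NinjectDependencyScope(IResolutionRoot resolver)
    {
        this.resolver = resolver;
    }

    // Web API asks for a single service; TryGet returns null
    // instead of throwing when no binding exists.
    public object GetService(Type serviceType)
    {
        return resolver.TryGet(serviceType);
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return resolver.GetAll(serviceType);
    }

    public void Dispose()
    {
        var disposable = resolver as IDisposable;
        if (disposable != null)
            disposable.Dispose();
        resolver = null;
    }
}

public class NinjectDependencyResolver : NinjectDependencyScope, IDependencyResolver
{
    private readonly IKernel kernel;

    public NinjectDependencyResolver(IKernel kernel) : base(kernel)
    {
        this.kernel = kernel;
    }

    // Each request gets its own scope backed by an activation block.
    public IDependencyScope BeginScope()
    {
        return new NinjectDependencyScope(kernel.BeginBlock());
    }
}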

Fairly easy stuff if you just know how, and obviously Peter knew how! There is one caveat though: a NinjectDependencyResolver class also exists in the Ninject.Web.Mvc namespace! That one implements MVC’s IDependencyResolver, while the Web API version needs to build on IDependencyScope. It won’t compile if you reference the wrong class though, so it’s not a big problem really.

ORMs and Performance

In one of my projects, actually Proactima’s project, I’m using Entity Framework (EF) on top of SQL Azure and it works pretty well; data access is relatively fast, dead easy most of the time, and the cost (in $) is OK. However, I have discovered how sensitive EF is to how you write your queries. It appears to be especially true if you have several one-to-many relationships.

Just a quick introduction to Proactima Compliance: any user has access to one or more companies, and each company consists of one or more units. Units can be things like ‘Head Office’, the IT department, a ship, etc. For every unit there can be one or more evaluations, and each evaluation is based on a set of rules that the company measures compliance against. An example of a ruleset is the Petroleum Activities Act in Norway.

In Proactima Compliance the most frequently accessed page had an action that took about 1000ms to complete. This page allows the user to evaluate a rule and then choose ‘Save and Next’, so there are two actions involved:

  1. Save the current rule and find the next one based on the previous rule’s sequence
  2. Load next rule and display it to the user

The second action completes in 200-400ms, but the first takes 800-1200ms. The main reason for the long processing time is what MiniProfiler calls Duplicate Readers: because of the way I had written my queries, a separate reader query was executed for each rule in the ruleset! It looked like this:

[Pre_Optimize: MiniProfiler output before the optimization]

It’s running 103 (!) queries just for that one action; that’s terrible! The actual query looked like this:

var rules = from r in evaluationRule.Evaluation.Rules
            where r.Rule.Sequence > currentSequence
            orderby r.Rule.Sequence
            select r;

The variable ‘evaluationRule’ was already fetched from the database, so I was just navigating to its Evaluation (remember how an evaluation is based on a ruleset, which in turn consists of individual rules) and then the rules for that evaluation. Then I set up the where clause to make sure we only get rules with a sequence higher than the current rule’s. Finally I do an order by and select; from this I will only choose the first rule, thus getting the next rule based on sequence. Possible bug here: sequences could potentially be the same, but I’m 100% in control of the sequences so that’s really not an issue.

I won’t show the rendered query from this; it’s a nightmare, but the end result is correct and dead slow… So I had to fix this! I tried to manipulate the query in a try-fail-retry pattern; no success at all! After a while I figured out that I was addressing the issue from the wrong end; I was just thinking of the code. Thinking back on previous issues I’ve had with generated SQL (from other ORMs) I remembered (finally!) an old lesson: start by writing the query manually in the best possible way you can, and then try to express that same query using the ORM! So that’s what I did! I fired up SQL Server Management Studio Express 2012 Deluxe Gold Plated Edition, or whatever it’s called, and thought really hard about what I really needed. What I ended up with was this:

DECLARE @currentSequence int = 1;
DECLARE @currentEvaluation int = 1;

select	top 1 er.ID, 
	er.RuleID,
	r.Sequence,
	er.EvaluationID
from	EvaluationRules er,
	Rules r
where	er.RuleID = r.ID
and	er.EvaluationID = @currentEvaluation
and	r.Sequence > @currentSequence
order by r.Sequence;

Not that hard to write at all. Now all I needed was to convert it to EF code:

var rules =
    (from er in Database.EvaluationRules
        join r in Database.Rules on er.RuleID equals r.ID
        where er.EvaluationID == evaluationId
        && r.Sequence > currentSequence
        orderby r.Sequence
        select er)
        .Take(1);

It’s definitely a bit more verbose than my original query, but as you can see it looks a whole lot like the manual SQL query. And how does MiniProfiler now report on my page?

[Post_Optimize: MiniProfiler output after the optimization]
Please ignore the minor differences (controller name); I have done some other refactorings as well as the query change.

What an amazing change! 900ms down to 222ms! And 100+ queries down to just 7! That’s a performance optimization the user notices!

But why did it all go so terribly wrong in the first place? I’m not going to over-analyze the two queries or discuss how EF generates the SQL; somebody much smarter than me can probably do that (or has!). But what I learned from all this is that my old skills as a SQL developer aren’t lost even though I’m using an ORM, AND the way you express a query can matter a whole lot when it comes to performance!

On Corporate Values and Developers

I work for a company called Proactima AS; we deliver HSE&Q/risk consultants to our customers, but I have been hired to do some development for them. Proactima is a company that runs close to 100% on knowledge, so our main resource is the consultants working here. Like so many other companies, Proactima has a set of values defined:

  • Knowledge
  • Balance
  • Integrity

And so everything we do is evaluated against them:

  • Can we work with this client?
  • Are we interested in bidding on this job?
  • Can this consultant handle more load?
  • etc.

In theory most companies operate like this, but very few actually adhere to their values. We have turned down job offers because of our integrity, workload is scaled down if a consultant wants to focus more on home than work for a period, etc. I myself have had issues at home where work was not my number one priority, and my manager (resource owner, really) scaled down my workload accordingly. Our chairman of the board even asked me why I was in the office during that difficult time (more on this later IF I’m up for it). So you see, we live and die by our values. We do not steer the ship by financial numbers alone, but rather by how well we manage to live up to our own values.

At this point you might be wondering why I’m telling you all this? Or perhaps you’re not even reading it… The thing is, the last few weeks I’ve been evaluating myself according to these three seemingly simple values: Knowledge, Balance and Integrity. Not just myself really, but people around me as well. And the things that I do. How well does it all stack up against these values?

First of all, I found that I could easily rate most people on the three values, and I realized that I had more respect for those who rated high on these values vs. other values. Because, let’s be honest here, Proactima could’ve chosen completely different values! Statoil has chosen: Courageous, Open, Hands-on and Caring. All fine values, but not what Proactima chose and not what I would have chosen.

Furthermore I found that I could rate my work (programming that is) according to the values:

  • Is the code using the best components and the correct algorithms? (Knowledge)
  • Am I spending too much time on problems that do not warrant it? (Balance)
  • Will the code be maintainable when somebody else takes over responsibility for it? (Integrity)

Are you evaluating yourself and your code according to a set of values? If not; perhaps you should…

ValueInjecter and Expression Magic

In a previous post I talked some about how I use AutoMapper, and I’m using it for all my “Entity to View” model mappings; works like a charm! But what it does not do is map the other way, i.e. take whatever comes from the view and apply it to an entity model. There are some discussions online on the ‘correctness’ of doing this, but I found it to be an easy and productive technique.

But since AutoMapper doesn’t handle this I had to write a lot of LHS-RHS code, right? No I didn’t! Somebody else has already solved this problem in a utility called ValueInjecter. Basically it’s capable of injecting values from one object into another, and this of course is exactly what I need! So how do I use it:

var entityObject = db.GetEntity(viewObject.UniqueId);
entityObject.InjectFrom<CustomInjectionRules>(viewObject);
entityObject.State = EntityState.Modified;
db.Store(entityObject);

The magic happens on line number two; the InjectFrom method will transfer data from my viewObject to the entityObject. The method can be used without specifying a type (generics) to assist in the injection, but I need it to allow for data type and naming variations between the two objects. The CustomInjectionRules class handles those two issues and looks a little like this:

public class CustomInjectionRules : ConventionInjection
{
    protected override bool Match(ConventionInfo c)
    {
        return c.SourceProp.Name.Equals("ViewProp") && c.TargetProp.Name.Equals("EntityProp");
    }

    protected override object SetValue(ConventionInfo c)
    {
        return Convert.ToInt32(c.SourceProp.Value);
    }
}

The class inherits from ConventionInjection and overrides two methods: one for matching properties with unequal names, and a second for converting from whatever type “ViewProp” is to the Int32 of “EntityProp”.

After writing this code I made sure it worked and then came back to look at it: could I improve it? Refactor it somehow? Not much code in there, so my first instinct was: no, this is fine. But then I looked hard at the “Magic Strings” (“ViewProp”/“EntityProp”)… I do not like magic strings! No refactor/compile-time support and all that nonsense. So I had to find a better solution! I immediately thought of another blog post I’d done some time ago, MVC 3, EF 4.1 And Some More… The HtmlHelper methods appear to just take a lambda expression as input and then figure out the name of the property! Which of course is exactly what I needed! So I started Googling… I found a few examples, but none were spot on until I found a question on StackOverflow that was! I did some minor adjustments and now my injection class looks like this:

public class EvaluationRuleTypeInjection : ConventionInjection
{
    protected override bool Match(ConventionInfo c)
    {
        return c.SourceProp.Name.Equals(GetMemberInfo<ViewObject>(r => r.ViewProp))
                && c.TargetProp.Name.Equals(GetMemberInfo<EntityObject>(r => r.EntityProp));
    }

    protected override object SetValue(ConventionInfo c)
    {
        return Convert.ToInt32(c.SourceProp.Value);
    }

    // Extracts the property name from a lambda such as r => r.ViewProp.
    private string GetMemberInfo<T>(Expression<Func<T, object>> method)
    {
        LambdaExpression lambda = method as LambdaExpression;
        if (lambda == null)
            throw new ArgumentNullException("method");

        MemberExpression memberExpr = null;

        // Value-typed properties are wrapped in a Convert node, so unwrap first.
        if (lambda.Body.NodeType == ExpressionType.Convert)
            memberExpr = ((UnaryExpression)lambda.Body).Operand as MemberExpression;
        else if (lambda.Body.NodeType == ExpressionType.MemberAccess)
            memberExpr = lambda.Body as MemberExpression;

        if (memberExpr == null)
            throw new ArgumentException("method");

        return memberExpr.Member.Name;
    }
}

Awesome! I immediately moved the GetMemberInfo method out of this class and into a more appropriate class, but included it here to avoid expanding all the code…

Pretty happy about this change. Also loving how easy it is to get stuff such as ValueInjecter into my projects using NuGet.