In my previous post I detailed the open source projects we have contributed code to; this post highlights the projects we have made publicly available.
We host all our open source projects on GitHub, so feel free to browse our repositories. I would like to highlight just one project in this post: AzureWorkers.
AzureWorkers is a project we started back in August to create a framework for running multiple worker threads within a single Azure Worker Role. It allows us to quickly spin up more workers to do different things. I have already posted on how to use this project, so I won’t repeat myself; instead I will talk about why we chose to open source it rather than keep it closed off.
GitHub’s president Tom Preston-Werner has a blog post about this same issue, and he basically makes the points for me! So, thank you Tom! I would like to highlight two things though:
- We do not open source business critical parts
- Open sourcing parts of our stack makes it possible and legal for us to use code written at work for hobby/home projects
Business critical parts
For us, all code related to risk management in any way is business critical; it is the essence of UXRisk and of what Proactima can bring to the software world. This needs to be protected, so we do not open source it. So far it has been very easy to determine whether something is business critical or not; presumably it will get harder as we develop more code in the gray areas between technology and risk knowledge.
Use in hobby/home projects
In most employment contracts it is specified that all work carried out during office hours belongs to your employer, which is only to be expected! But if you open source that work, then you are free to use the code in other projects too! A colleague of mine, Anders Østhus, is using our AzureWorkers in his latest project (to be published, I hope!). This would have been hard to do, legally, if we had not open sourced that project.
In summary I would like to thank my employer for allowing me to not only blog about my work, but also to share the fruits of our labor with the world. So thank you Proactima!
We have a requirement to instantiate a service facade based on a single parameter; this parameter is then used to load the appropriate configuration settings. A fellow team member reminded me that this looked like a job for a factory, so I started to look at Ninject Factory. At first I didn’t quite get the examples, but I decided to just try it out and see what would work. It turns out to be pretty simple! This post is mostly a reminder to myself, and perhaps a bit of guidance for others looking at doing the same.
There are three requirements for using Ninject Factory:
- Install the NuGet package
- Create a factory interface
- Create a binding to the interface
Point number two looked like this for me:
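The original snippet did not survive here; a minimal sketch of such a factory interface (names are assumed for illustration) would look something like:

```csharp
// Hypothetical names. Ninject.Extensions.Factory generates the
// implementation of this interface for us; we never write it by hand.
public interface IServiceFacadeFactory
{
    // The parameter name "prefix" must match the constructor
    // parameter name on the concrete ServiceFacade.
    IServiceFacade Create(string prefix);
}
```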
The IServiceFacade interface and concrete implementation looks like this:
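The original code block is missing; a sketch consistent with the description (names assumed) might be:

```csharp
public interface IServiceFacade
{
    void Execute();
}

public class ServiceFacade : IServiceFacade
{
    private readonly string _prefix;
    private readonly IService _service;

    // "prefix" comes from the factory call; IService is resolved
    // by Ninject through the regular bindings.
    public ServiceFacade(string prefix, IService service)
    {
        _prefix = prefix;
        _service = service;
    }

    public void Execute()
    {
        // Use _prefix to load the appropriate configuration
        // settings and delegate the work to _service.
    }
}
```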
Tying it all together is the Ninject Module:
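The module itself was lost in formatting; with the hypothetical names above, a minimal version using Ninject.Extensions.Factory would be:

```csharp
using Ninject.Extensions.Factory;
using Ninject.Modules;

public class ServiceFacadeModule : NinjectModule
{
    public override void Load()
    {
        // ToFactory() makes Ninject generate a proxy implementation
        // of the factory interface.
        Bind<IServiceFacadeFactory>().ToFactory();
        Bind<IServiceFacade>().To<ServiceFacade>();
        Bind<IService>().To<ConcreteService>(); // ConcreteService is assumed
    }
}
```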
The ToFactory binding makes Ninject create a proxy that calls the constructor on ServiceFacade, passing in the prefix input and the IService implementation bound in the module. To make use of the factory I did this:
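The original usage snippet is gone; a sketch of the consuming class (names assumed) could look like:

```csharp
public class FacadeConsumer
{
    private readonly IServiceFacadeFactory _factory;

    // The factory interface is injected like any other dependency.
    public FacadeConsumer(IServiceFacadeFactory factory)
    {
        _factory = factory;
    }

    public void Run(string prefix)
    {
        // The Ninject-generated proxy forwards "prefix" to the
        // ServiceFacade constructor and resolves IService itself.
        var facade = _factory.Create(prefix);
        facade.Execute();
    }
}
```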
Here I inject the factory interface and call the Create method. The magic happens inside that Create method, since it is a proxy generated for me by Ninject! The really cool thing is that all regular bindings still apply, so if you look at the constructor for ServiceFacade, it takes the prefix string and an interface (IService) that I bind in my module. Stepping into (F11) the Create method in debug mode, I end up in the constructor for ServiceFacade, perhaps as expected, but very cool!
Also worth mentioning: you can have more inputs to the Create method, and only the names matter; ordering does not. So if I needed both a prefix and a postfix I could have them in any order between the factory interface and the constructor; as long as the names match, it’s OK. And finally, you are not restricted to one method on the factory interface, so I could have had a CreateWithPrefix and a CreateWithPostfix method, etc.
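For example, a factory interface along these lines (hypothetical) would be handled by the same proxy generation:

```csharp
// Hypothetical variant: argument names, not their order, decide
// which constructor parameter each value is matched to.
public interface IServiceFacadeFactory
{
    IServiceFacade CreateWithPrefix(string prefix);
    IServiceFacade CreateWithPostfix(string postfix);
    IServiceFacade Create(string postfix, string prefix); // order irrelevant
}
```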
The complete source for this is not available, as it’s part of a much bigger project known as UXRisk and it is not open source.
Proactima values knowledge, and part of that is sharing it. One way of sharing, in software development, is to contribute to open source projects. So in UXRisk we have contributed to open source projects that we use:
We have contributed minor fixes to package versions and the ability to inject on Azure Worker Roles. These are just small changes that we needed in our work, but it is very rewarding to be able to fix things ourselves.
Semantic Logging Application Block
We are using ElasticSearch (ES) for our logging needs, and there was no built-in sink to store logs in ES, so we created our own implementation. We basically reused the code from the Azure Table Storage sink and adapted it to our needs. In cooperation with another member of the Semantic Logging Application Block (SLAB) CodePlex site, our code was accepted into the master branch of SLAB.
In our project we use ElasticSearch as our search backend. We started seeing some “unable to connect” exceptions and decided to add retry logic to our queries. To do so in an ordered manner we use the Transient Fault Handling Application Block (“topaz”) and a custom transient error detection strategy.
As we were implementing “topaz” we also decided to add a timeout value for our queries, so that the client would not be stuck on long-running queries. Basically, if the query has not completed after X amount of time we return an error message and the client can then retry.
Combined, we wanted simple retry logic with a timeout on each retry. This is what our calling code looks like:
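The actual calling code is not preserved here; a sketch of the pattern described, with assumed names (the query method, the strategy class, and the timeout values are placeholders), might be:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

// Each attempt gets a fixed time budget; transient failures,
// including per-attempt timeouts, trigger a retry.
var retryPolicy = new RetryPolicy<ElasticSearchTransientErrorStrategy>(
    retryCount: 3, retryInterval: TimeSpan.FromSeconds(1));

var result = retryPolicy.ExecuteAction(() =>
{
    var queryTask = Task.Run(() => ExecuteElasticSearchQuery());
    // Fail this attempt if the query does not complete in time;
    // whether that counts as transient is up to the strategy.
    if (!queryTask.Wait(TimeSpan.FromSeconds(5)))
        throw new TimeoutException("Query timed out");
    return queryTask.Result;
});
```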
Our custom retry strategy is super simple:
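The strategy code itself is missing from this copy; a minimal sketch of such a detection strategy (names assumed) would be:

```csharp
using System;
using System.Net;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

// Minimal sketch: treat connection problems and per-attempt
// timeouts as transient so the policy retries them.
public class ElasticSearchTransientErrorStrategy : ITransientErrorDetectionStrategy
{
    public bool IsTransient(Exception ex)
    {
        // Exceptions from Task.Wait arrive wrapped in AggregateException.
        var aggregate = ex as AggregateException;
        if (aggregate != null)
            ex = aggregate.InnerException;
        return ex is WebException || ex is TimeoutException;
    }
}
```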
In my new project (UXRisk) we had a requirement to do a lot of background processing based on messages passed over Azure Service Bus. We did a lot of research online and found good examples of how to properly do this, and the outcome was a project we call AzureWorkers.
AzureWorkers makes it super easy to run multiple workers in one Azure Worker Role, all async and safe. As the implementor you basically just have to inherit from one of three base classes (depending on whether Service Bus or Storage Queue is used), and that class will run in its own thread and be restarted if it fails.
There are four supported scenarios:
- Startup Task – Will only be executed when the Worker Role starts. Implement IStartupTask to enable this scenario.
- Base Worker – Will be called continuously (basically every second), for you to do work and control the timer. Inherit from BaseWorker to enable this scenario.
- Base Queue Worker – Will call the Do method with messages retrieved from the Azure Storage Queue. Inherit from BaseQueueWorker to enable this scenario.
- Base ServiceBus Worker – Will call the Do method whenever a message is posted to the topic specified in the TopicName overload. Inherit from BaseServiceBusWorker to enable this scenario.
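From the descriptions above, a Service Bus worker might be sketched like this; the exact base-class signatures are assumptions, not the actual AzureWorkers API, so consult the example project for the real shape:

```csharp
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

// Hypothetical subscriber: TopicName selects the Service Bus topic,
// and Do is called for each message posted to it.
public class OrderWorker : BaseServiceBusWorker
{
    public override string TopicName
    {
        get { return "orders"; }
    }

    public override async Task Do(BrokeredMessage message)
    {
        var body = message.GetBody<string>();
        // ... process the message body here ...
        await Task.FromResult(0); // placeholder for real async work

        // Deleting (completing) the message is by design left to us.
        message.Complete();
    }
}
```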
An example project is included on GitHub to document these scenarios. To get started using AzureWorkers you can get the NuGet package.
AzureWorkers depends on these projects:
- Ninject – version 3.0.2-unstable-9038
- Ninject.Extensions.Azure – version 3.0.2-unstable-9009
- Ninject.Extensions.Conventions – version 3.0.2-unstable-9010
- Microsoft.WindowsAzure.ConfigurationManager – version 126.96.36.199
- WindowsAzure.ServiceBus – version 188.8.131.52
Some of the code has been borrowed from a blog post by Mark Monster and a blog post by Wayne Walter Berry.
There are at least a few known issues with the code right now:
- If the processing of a message fails, the message is re-posted directly and retried with no waiting time (Service Bus worker).
- The implementer has to delete messages manually (by intent).
- It depends on Ninject rather than a generic IoC abstraction
We will accept PRs to alleviate these issues.