2013-01-03

Publish All the Code

I don’t know about everybody else, but when I publish code on github or even in a mailing list post I try extra hard to make it not suck. Of course it still sucks, but at least I know that it’s because I’m not competent and not because I’m lazy. Putting code out there for others to read forces you to think just that little bit more about how good it is and, hopefully, make it a bit better.

The extra thought is one of the reasons that I’m a big advocate of code reviews. Lately I’ve been thinking a lot about coupling code reviews inside my group with code reviews from outside my group. The culture in the rest of the company I’m with is not really conducive to having other people review my code but that’s never stopped me before. We have a github organization and whenever people are interested in our code I grant them read access and tell them we accept pull requests. Typically this gets the old squint of confusion. Even after explanations the squint usually remains.

The problem is that with most large companies the idea that people who don’t work directly on the product can see the code is totally foreign. If you ask around, nobody is really sure why that is, so they throw out the excuse of security:

If people can see the code then they might be able to break into the system or they might steal the code and give it away.

The first part of this argument is a terrible one. Security through obscurity has never worked very well. The second part is slightly better, but if you don’t trust the people you have working for you why are they working for you? There have been code leaks in the past but I have been unable to find a notable case where the leak was caused by an employee leaking the code.

What I’m proposing is that software development of internal tools in a company should be open within the company. There are a number of tools I use which could be improved with a few minor changes. The developers of these tools are either too busy or too out of touch with their user base to make the changes themselves. If these tools were internally open sourced then I could have submitted a pull request to the team, saving their time and improving the product. If everybody should learn to code then the obvious corollary is that all code should be open.

2013-01-02

Startup Idea - Geek Proxies

I have a lot of crazy ideas while I type things into the glowing boxes which live on my desk. I’m never going to have time to make something of them but maybe somebody else will. Maybe that person will make a bunch of money from the idea. And maybe that person will send me a finder’s fee and I can retire. In a hot climate. With a drink that has a little umbrella in it and some pineapple. A real man’s drink.

[image: umbrella-drink]

Today’s idea is about how businesses interact with the public and more specifically with geeks. As a geek, when I’m dealing with a company I have different needs from what I’m going to call “regular people”. Let me explain with an example: I have a friend who just had a house built. He’s one of us, a geek. So when he interacted with the building company he had geeky requirements. He didn’t want people calling him; he wanted e-mails. When he went to pick out a kitchen he didn’t want to flip through a book of drawer handles; he wanted a tablet app which gave him the ability to see renderings of drawers and handles in his kitchen.

What he needs is a geek proxy service. This proxy transforms regular interactions into geekier ones. Geekier? How? The service would act as an intermediary and provide somebody who will work with the house building company and with my friend so his interactions are what he wants.

I think another great example is around grocery receipts. This is what I typically get:

[image: aldi-receipt]

Yawn

This is just a big long list of what I bought. That is really only useful for a couple of things:

  1. Knowing the price of an individual item
  2. Using the receipt as proof of purchase

I don’t feel like either of these is a common use case. Most stores now allow you to return goods based on the records they have attached to your frequent buyer card, and in a grocery store who cares how much you spend on an individual good?

Wouldn’t it be neater to get

[image: Screen Shot 2013-01-02 at 10.45.08 PM]

And that’s what the startup could do. Figure out geekier ways to communicate information and, instead of waiting for businesses to step up, just step into the gap themselves.

2013-01-01

Fixing Portable Libraries for MonoDroid on OSX

Over the Christmas break I thought I would play a bit with MonoDroid, or Mono for Android as I guess it is now called. Things were going pretty well until I decided to split the data access components into their own library. I was on v3.05 and compiling resulted in an encounter with the diabolical issue 7905. The fix suggested there was altering

/Library/Frameworks/Mono.framework/Versions/2.10.9/lib/mono/xbuild/Microsoft/Portable/v4.0/Microsoft.Portable.CSharp.targets

To change

to

This did get me most of the way there but I ran into the error

targetframeworkversion v1.0 could not be converted to an android api level

Digging around a little bit with that, I found that in the same file altering the properties defined at the top to

corrected the issue. It was a bit disappointing that this didn’t work out of the box. I would have thought that creating portable libraries would be a pretty common use case for Mono for X platforms.

2012-12-19

About

My name is Simon Timms. I work as a freelance software developer with an interest in distributed systems, CQRS, cloud computing and ice cream. Ice cream most of all. I don’t know if you’ve tried pumpkin pie ice cream but you really should.

In the past I’ve worked on GPS systems, been a build master, developed websites, written production accounting software and painted fire hydrants. At the moment I work for a directional drilling company on a bunch of different cool stuff.

I’ve been a Microsoft MVP in ASP.net since April of 2014. I’ve also written two books with Packt Publishing:

[images: book covers (socialData, mastering)]

I’m currently working on my third book with David Paquette, which we’re self-publishing through LeanPub. We’re offering early access to it as we write it; it’s cheaper if you buy it now.

@stimms

2012-07-17

Why we're abandoning SQLCE based tests

For a while now we’ve been using either in-memory SQLite or SQLCE databases for our integration tests. It is all pretty simple: we spin up a new instance of the database, load in a part of our schema, populate it with a few test records and run our repositories against it. The database is created and destroyed for each test. The in-memory version of SQLite was particularly quick and we could run our tests in a few seconds.
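
To make the pattern concrete, here is a minimal sketch of a per-test in-memory database using System.Data.SQLite and Dapper; the Widget table and the data are invented for the example, and the real tests exercised our repositories rather than raw queries.

using System.Data.SQLite;                                   // assumes the System.Data.SQLite package
using System.Linq;
using Dapper;                                               // assumes Dapper for the queries
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class InMemoryDatabaseTests
{
    private SQLiteConnection connection;

    [TestInitialize]
    public void SpinUpDatabase()
    {
        // A fresh in-memory database per test; it vanishes when the connection closes.
        connection = new SQLiteConnection("Data Source=:memory:;Version=3;New=True");
        connection.Open();

        // Load the slice of schema the test needs and a few known records.
        connection.Execute("create table Widget (Id integer primary key, Name text)");
        connection.Execute("insert into Widget (Id, Name) values (1, 'sprocket')");
    }

    [TestMethod]
    public void ShouldFindTheTestRecord()
    {
        // Querying directly keeps the sketch self-contained; a repository would normally sit here.
        var name = connection.Query<string>("select Name from Widget where Id = 1").Single();
        Assert.AreEqual("sprocket", name);
    }

    [TestCleanup]
    public void TearDownDatabase()
    {
        connection.Dispose();   // destroys the in-memory database
    }
}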

The problem was that the dialect of SQL that SQLite and SQLCE speak is slightly different from the dialect that SQL Server 2008 speaks. This wasn’t a big problem when we were building our schemas through NHibernate as it supports outputting tables and queries in a variety of different dialects. In our latest project we moved to using Dapper. Instead of creating our schema through a series of fluent configurations as we did with NHibernate we used a database project. This allowed us much finer-grained control over what was created in the database but it meant that we were more closely tied to the database technology.

For a while we wrote transformations for our create table scripts and queries which would translate them into SQLCE’s version of TSQL. This code grew more and more complicated and eventually we realized that we were trading developer time for computer time, which is a terrible idea. Computers are cheap and developers are expensive. The class of tests we were writing weren’t unit tests; they were integration tests, so they could take longer to run without having a large impact on development. We were also testing against a database which wouldn’t be used in production. We could have all sorts of subtle bugs in our SQL and we would never know about them, at least not until it was too late.

Instead we’ve started mounting a database on a real version of SQL Server 2008 R2, the same thing we’re using in production. The tests do indeed run slower as each one pays a few seconds of start-up cost more than it did with the in-memory database. We consider the move to be a really good one as it gets us closer to having reliable tests of the entire stack.
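
For comparison, here is a rough sketch of the same per-test pattern against a real SQL Server instance; the server name, database name and schema are placeholders rather than our actual setup.

using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RealSqlServerTests
{
    // Connection strings are illustrative; point them at whatever instance the team shares.
    private const string Master = @"Server=.\SQL2008R2;Database=master;Integrated Security=true";
    private const string TestDb = @"Server=.\SQL2008R2;Database=IntegrationTests;Integrated Security=true";

    [TestInitialize]
    public void MountDatabase()
    {
        // The few seconds of start-up cost: drop any leftovers, create a fresh database
        // and apply the schema (in reality this comes from the database project).
        Execute(Master, "if db_id('IntegrationTests') is not null drop database IntegrationTests");
        Execute(Master, "create database IntegrationTests");
        Execute(TestDb, "create table Widget (Id int primary key, Name nvarchar(100))");
    }

    [TestCleanup]
    public void DropDatabase()
    {
        SqlConnection.ClearAllPools();   // release pooled connections so the drop succeeds
        Execute(Master, "drop database IntegrationTests");
    }

    private static void Execute(string connectionString, string sql)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}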

2011-08-18

Configuring the Solr Virtual Appliance

I ran into a situation today where I needed a search engine in an application I was writing. The data I was searching was pretty basic and I could have easily created an SQL query to do the lookups I wanted. I was, however, suspicious that if I implemented this correctly there would be a desire for applying the search in other places in the application. I knew a little bit about Lucene so I went and did some research on it, reading a number of blogs and the project documentation. It quickly became obvious that keeping the index in sync when I was writing new documents from various nodes in our web cluster would be difficult. Being a good service bus and message queue zealot, I saw an obvious choice: throw up some message queues and distribute updates. Done.

I then came across Solr, which is a web version of Lucene. Having one central search server would certainly help me out. There might be scalability issues in some imaginary future world where users made heavy use of search, but in that future world I am rich and don’t care about such things, being far too busy plotting to release dinosaurs onto the floor of the New York Stock Exchange.

I was delighted to find that there exists a virtual appliance with Solr installed on it already. If you haven’t seen a virtual appliance before it is an image of an entire computer which is dedicated to one task and comes preconfigured for it. I am pretty sure this is where a lot of server hosting is going to end up in the next few years.

Once I had the image downloaded and running in VirtualBox I had to configure it to comply with my document schema. The Solr instance which is installed points at the example configuration files. This can be changed in /usr/share/tomcat6/conf/Catalina/localhost/solr.xml; I pointed it at /usr/local/etc/solr. The example configuration files were copied into that directory for hackification.

cp -r /usr/share/apache-solr-1.4.1/example/solr/* /usr/local/etc/solr/

Once you’ve got these files in place you can crack open the schema.xml and remove all the fields in there which are extraneous to your needs. You’ll also want to remove them from the copyField section. This section builds up a super field which contains a bunch of other fields to make searching multiple fields easier. I prefer using the DisMax query to list the fields I want to search explicitly.
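
To give a flavor of what an explicit DisMax query looks like from the application side, here is a small C# sketch; the host, field names and boosts are invented for the example.

using System;
using System.Net;

class SolrDisMaxQuery
{
    static void Main()
    {
        // DisMax lets the query name the fields (and boosts) to search explicitly,
        // rather than relying on a catch-all copyField.
        string url = "http://solr.example.local:8080/solr/select"
            + "?defType=dismax"
            + "&qf=" + Uri.EscapeDataString("title^2 description")
            + "&q=" + Uri.EscapeDataString("pumpkin pie")
            + "&wt=json";

        using (var client = new WebClient())
        {
            Console.WriteLine(client.DownloadString(url));
        }
    }
}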

2011-01-25

The ONE API

My boss was pretty excited when he came back from some sort of mobile technology symposium offered in Calgary last week. I like it when he goes to these things because he comes back full of wild ideas and I get to implement them, or at least I get to think about implementing them, which is almost as good. In this case it was THE ONE API. I kid you not, this is actually the name of the thing. It is an API for sending text messages to mobile phone users, getting their location and charging them money. We were interested in sending SMS messages, not because we’re a vertical marketing firm or have some fantastic news for you about your recent win of 2000 travel dollars; we have legitimate reasons. Mostly.

Anyway I signed up for an account over at https://canada.oneapi.gsmworld.com/ and waited for them to authorize my account. I guess the company is out of the UK so it took them until office hours in GMT to get me the account. No big deal. In I logged and headed over to the API documentation. They offer a SOAP and a REST version of the API so obviously I fired up curl and headed over to the sandbox URL, documentation in hand. It didn’t work. Not at all.

curl https://canada.oneapi.gsmworld.com/SendSmsService/rest/sandbox/ -d version=0.91 -d address=tel:+14034111111 -d message=APITEST

In theory this command should have sent me a message (I changed the phone number so you internet jerks can’t actually call me) or at worst returned a helpful error message, so said the API documentation.

What actually happened was that it failed with a general 501 error, wrapped in XML. Not good: the error should be in JSON, so say the API docs. It also shouldn’t fail, and if it does the error should be specific enough for me to fix. I traced the request and I was sending exactly what I should have been.

No big deal, I’ll try the SOAP API and then take my dinosaur for a walk. The WSDL they provided contained links to other WSDLs, a pretty common practice. However, the URLs in the WSDL were pointing to some machine which was clearly behind their firewall, making it impossible for me to use it.

I gave up at that point. These guys are not the only people who are competing in the SMS space and if they can’t get the simplest part of their service, the API, right then I think we’re done here. Add to this that they only support the three major telcos in Canada (Telus, Bell and Rogers) and there are much better options available. Twilio supports all carriers in Canada and the US and they charge, at current exchange rates, 1.5 cents a message less than these guys. Sorry THE ONE API, you’ve been replaced by a better API, cheaper messaging and better compatibility.

2011-01-12

Test Categories for MSTest

The version of MSTest which comes with Visual Studio 2010 has a new feature in it: test categories. These allow you to put your tests into different groups which can be configured to run or not run depending on your settings. In my case this was very handy for a specific test. Most of my database layer is mocked out and I run the tests against an in-memory instance of SQLite. In the majority of cases this gives the correct results; however, I had one test which required checking that values were persisted properly across database connections. This is problematic as the in-memory SQLite database is destroyed on the close of a connection. There were other possible workarounds for this but I chose to just have that one test run against the actual MSSQL database. Normally you wouldn’t want to do this but it is just one very small test and I’m prepared to hit the disk for it. I don’t want this particular test to run on the CI server as it doesn’t have the correct database configured.

In order to make use of a test category, start by assigning one with the TestCategory attribute.

[TestMethod]
[TestCategory("MSSQLTests")]
public void ShouldPersistSagaDataOverReinit()
{
    FluentNhibernateDBSagaPersister sagaPersister = new FluentNhibernateDBSagaPersister(new Assembly[] { this.GetType().Assembly }, MsSqlConfiguration.MsSql2008.ConnectionString(ConfigurationManager.ConnectionStrings["sagaData"].ConnectionString), true);
    // ...buncha stuff...
    Assert.AreEqual(data, newData);
}

Next, in your TFS build definition, add a rule in the Category Filter box to exclude this category of tests.

The category field has a few options and supports some simple logic: using mstest’s category syntax you can combine categories with & and | and negate them with !, so a filter of !MSSQLTests runs everything except the database-bound tests.

That is pretty much it, good work!

2011-01-07

Updating TCPIPListener

I still have a post on encryption for S3 backups coming, but I ran into this little problem today and couldn’t find a solution listed on the net, so into the blog it goes. I have some code which is using an obsolete constructor on System.Net.Sockets.TcpListener. This constructor allows you to have the underlying system figure out the address for you. It became obsolete in .net 1.1 so this is way out of date. In order to use one of the new constructors and still keep the same behavior, just use IPAddress.Any.

Old:

new System.Net.Sockets.TcpListener(port); // warning: obsolete

New:

new System.Net.Sockets.TcpListener(System.Net.IPAddress.Any, port);
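
For completeness, a little sketch of the replacement constructor in use; the port number and the accept-a-single-client behavior are just for illustration.

using System;
using System.Net;
using System.Net.Sockets;

class ListenerDemo
{
    static void Main()
    {
        int port = 12345;   // illustrative port

        // The non-obsolete constructor: IPAddress.Any keeps the old
        // "let the system pick the address for me" behavior.
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();

        Console.WriteLine("Listening on port {0}", port);
        using (TcpClient client = listener.AcceptTcpClient())
        {
            Console.WriteLine("Client connected from {0}", client.Client.RemoteEndPoint);
        }
        listener.Stop();
    }
}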

2011-01-04

S3 backup - Part II - Bucket Policy

This wasn’t going to become a series of posts but it is kind of looking like it is going to be that way. I was a bit concerned about the access to my S3 bucket in which I was backing up my files. By default only I have access to the bucket but I do tend to be an idiot and might later change the permissions on this bucket inadvertently. Policy to the rescue! You can set some pretty complex access policies for S3 buckets but really all I wanted was to add a layer of IP address protection to it. You can set policies by right-clicking on the bucket in the AWS Manager and selecting properties. In the thing that shows up at the bottom of your screen select “Edit bucket policy”. I set up this policy:

{
  "Version": "2008-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "255.256.256.256"
        }
      }
    }
  ]
}

Yep, policies are specified in JSON; it is the new XML to be sure. Replace the obviously fake IP address with your IP address.

This will keep anybody other than me, or at least somebody at my IP, from getting at the bucket. I am keeping a watch on the cat just in case he is trying to get into my bucket from my IP.

Next part is going to be about SSL and encryption.