Now hiring ASP.NET developers

by Phil on 10. October 2013 10:37

We are once again looking for talented and enthusiastic ASP.NET software developers to join our rapidly expanding team in our brand new Napier office.

We have new clients coming on board, as well as existing clients that are bringing us exciting new projects, and for this we need to extend the team.

Our vision is to build exceptional software applications for web and mobile. We pride ourselves on having a team of people dedicated to developing high quality products for our clients. As a company we want to work with new technology to create fantastic experiences for our customers and their users.

We have permanent full-time positions available for Intermediate and Senior ASP.NET developers.

Skills & Requirements

A Good candidate for these positions:

  • Has a solid background of commercial experience with .NET 3.0 - 4.5 in C#, ASP.NET MVC and SQL Server.
  • Knows how to design and build great web applications with HTML, CSS, and modern JavaScript.
  • Is passionate about great UI and UX.
  • Has good communication skills and is self-motivated.
  • Finds technology genuinely exciting and wants to progress their career in a fast paced environment.

A Great candidate for these positions:

  • Has considerable experience developing solutions for cloud/SaaS web environments.
  • Devours the latest posts, tutorials and books on how to create great web and mobile technology at night -- and can't wait to apply them the next morning.
  • Waxes eloquent about the awesome leaps forward in user experience and productivity that consumers are enjoying -- and loves bringing those improvements to end users.
  • Writes beautiful code that runs quickly and robustly -- and rolls up their sleeves to debug, fix and improve existing code.

** Don’t miss this opportunity to join an innovative software house with a great company culture that understands and encourages a healthy work/life balance. You will be working in an interesting and stimulating environment with the latest web technologies, all while enjoying the fantastic lifestyle benefits that life in Hawke’s Bay brings with it. **

To apply, send us your story, including a clear description of your experience, and we will get in touch.

Apply for an Intermediate ASP.NET developer position.

Apply for a Senior ASP.NET developer position.



Developers, on the move!

by Phil on 17. June 2013 10:57

Recently, as the team has expanded, we have started to outgrow our current space. So when an opportunity for a brand new, larger space which we could tailor to our needs popped up earlier this year, we jumped at the chance -- all without leaving the fantastic environment of the Ahuriri area which we now call home.

So in two weeks' time Red Jungle will be making the move to our nearly completed new office -- but we're not going far. The new premises are approximately 100 metres up the road at 36 Waghorne Street, Ahuriri, just beside the existing Crown Hotel site. So do drop by and say hello if you are nearby!

As our customers grow, so do we, and we're super excited to be heading into a brand new office to accommodate the growing Red Jungle team. In fact, we're still hiring and we'd love to hear from you.

A few photos of our nearly completed office.


Our new location:




Tremains Triathlon 2013

by Phil on 23. March 2013 17:49

Last weekend the Red Jungle team took part again in the Tremains corporate triathlon. The event has been running for many years now, and we have been entering Red Jungle teams for the last couple of years. It’s always a great event, a lot of fun, and a great opportunity for some team building (and friendly office rivalry!).

From the website: “It's the largest event of this type held in Hawke's Bay.  Whilst competitive, it is also a lot of fun and largely about groups of co-workers or friends challenging themselves in the triathlon and then enjoying each other's company over a drink and BBQ afterwards.  Anyone can and does participate.”

There were a total of 453 teams (1359 competitors) entered, and we made up two of those teams this year. We would like to thank the organisers and sponsors for putting on another top notch event this year. We had a lot of fun, pushed ourselves to the point of exhaustion, and ate a lot of sausages afterwards.

Here are a few photos from the day. Thanks to all the Red Jungle team members (and spouses!) for your efforts on the day.


The cyclists, Bryan and Gerard, waiting in transition for their runners to tag them in.


Mandy and Phil waiting for their Kayak wave.


Runners Tarah and Sarah waiting on the shore of Pandora Pond for their kayakers to arrive.


Mandy tags Sarah, off for her run.


An exhausted Phil on his final approach to shore!


Tarah is tagged and off on her running leg.


Sarah amongst the pack of runners and walkers.


Gerard running towards the finish line after his cycle leg.


Bryan off his bike and heading to cross the finish line.


Trading war stories post race.


The team relax and enjoy a well-deserved BBQ after the race.


The NOW cars looking all dapper, lined up in formation by the finish line. Thanks for the free WiFi on the day, guys!



Transitioning a SaaS App to Azure Web Site - Part I

by Matthew Hintzen on 9. March 2013 17:21

This is the first post in a series on transitioning a SaaS application that Red Jungle hopes to offer as a product from its privately hosted Virtual Machine to Windows Azure, where it will run as an Azure Website against SQL Azure.

Currently the application is an ASP.NET MVC 3 application running on .NET Framework 4.5 using Entity Framework. The data is maintained in a SQL Server 2008 R2 database.

This post is about transitioning the database layer from its hosted SQL instance on the VPN server to SQL Azure. This walkthrough depends on your having Visual Studio 2012, all of the latest Azure SDKs, and SSMS 2012 installed on your development machine. Trying to do this with SSMS 2008 is just too much work. Please note this article was correct as of the 9th of March 2013, but be aware that the Azure platform is changing and improving daily (thanks @ScottGu), so these instructions may have been superseded; it may be even easier than it seems here.

The steps necessary to pull this off are:

  1. Log into the Virtual Machine, back up the database, and bring that backup down to your development machine.
  2. Restore the database on your local SQL Server instance.
  3. Open the VS2012 solution that contains your SaaS project (or start a new solution just for the database if you want).
  4. Add a new SQL Server Database Project.
    • Name the project whatever you want the eventual database to be named in SQL Azure.
  5. Right-click on the database project, select Import, then select Database…
  6. Import the database using the Import Database dialog.
    • Personally I prefer to have the folder structure set up by Object Type, especially since with Azure we find everything ends up being dbo.
    • Don't bother importing the referenced logins, the permissions, or the database settings, because they are all going to be changed for Azure.
  7. Once that is done you will have the database structure in a solution that can now be added to source control, modified, deployed, etc.
    • We won't be using the imported security objects on SQL Azure, so delete those.
    • Extended properties are not supported on SQL Azure either, so if you have any, delete those as well.
  8. Double-click the Properties node of the project and, under Project Settings | Target platform:, change to Windows Azure SQL Database.
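Steps 1 and 2 above can be sketched in T-SQL; the database name and file paths below are placeholders for illustration, not the real ones from our project:

```sql
-- Step 1 (run on the VM): back up the database to a file.
BACKUP DATABASE [SaasDb]
TO DISK = N'C:\Backups\SaasDb.bak'
WITH INIT;

-- Step 2 (run on the dev machine after copying the .bak down):
-- restore it, relocating the data and log files to local paths.
-- (The logical file names used by MOVE can be checked with
--  RESTORE FILELISTONLY FROM DISK = N'C:\Backups\SaasDb.bak'.)
RESTORE DATABASE [SaasDb]
FROM DISK = N'C:\Backups\SaasDb.bak'
WITH MOVE N'SaasDb' TO N'C:\Data\SaasDb.mdf',
     MOVE N'SaasDb_log' TO N'C:\Data\SaasDb_log.ldf',
     REPLACE;
```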

Here is where things begin to become different. Some of what I'm about to do is not specific to SQL Azure; it can apply to any SQL database running on any SQL Server 2012 instance. Basically, Microsoft is changing the way we think of databases and has introduced some new terminology. Recognising that versioning is a continuing problem with databases, they are asking us to think of a database as a "Data-Tier Application" that runs on a SQL hosting machine. It's still a database, but they want to support some development scenarios that just don't translate smoothly to how we as developers have been thinking about databases, so they are hoping some new terminology will assist us in making that transition.

So from here on out don't think of your data storage as a database, but rather as a Data-Tier Application that runs on a SQL engine. So why Data-Tier Application? Well, to be honest, a SQL "database" really isn't just a database. We have stored procedures, triggers, functions, etc., which usually encapsulate business rules and logic -- and that, to be totally honest, isn't data, it's process. Microsoft is just trying to get us developers to recognise this and realise that what we call a database is more than just a place we store some bytes; it's a place where actual coding, logic, and versioning is needed. Hence the new name, Data-Tier Application. So, along those lines, from here on out I won't be calling the item we are working on a database; I will be referring to it as a Data-tier.

So now, looking at that Project Settings page, click the Properties… button and a new dialog opens where we get to enter our Data-tier Application properties.


And finally here you can see where they have begun to introduce the concept of a Version Number in the Data-Tier.

The other interesting thing to note is on the Project Settings tab (in the same place where we clicked the Properties… button):


The Output type is specifically noted as Data-tier Application, which has been given a .dacpac file extension. You do have the option of having the output created as a .sql file (the old database way) instead of a .dacpac, but if you read Microsoft's documentation on SSMS 2012 and SQL Azure, the writing on the wall is pretty large: they REALLY want us to start using .dacpac.

The .dacpac can be thought of as the definition of the Data-tier without the data. To read more about the concepts, the way to think of a Data-tier application, and how it relates to a .dacpac file, check out the Data-tier Applications documentation on the Microsoft MSDN website. I'll just be giving you the specific highlights you need to know for this particular part of the operation.

Also of note is that this new type of database project can be "built" like any other project type. If you go to the Build section of the properties for the project


you will see a build output path. The data-tier (remember, data-tier == database, just a newer, more inclusive name) is dropped and then recreated according to the scripts and specifications that have been saved in the project. This uses the new localdb/v10 functionality of SQL Server (Microsoft is also doing away with SQL Express for on-the-fly attached-file (single instance) databases, and having us move to the localdb approach). Once built, this data-tier can then be put through various SQL and code analysis tools that warn you of problems with the Data-tier.

For example, imagine you have created a stored procedure that references the "User" table, but in a future version you decide to rename the table to "Person". Previously, you wouldn't get an error until some piece of code in your business layer tried to access that stored procedure. With this project type, and the database being a Data-Tier, VS2012 will give you an immediate error in the output window on a build. You find out about the mistake before you even get close to deploying it. Think of it as precompiling for databases.
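As a concrete sketch of this scenario (the table, column, and procedure names here are made up for illustration), a leftover procedure like this is exactly what the build now catches:

```sql
-- dbo.[User] has been renamed to dbo.[Person] in the project,
-- but this procedure still references the old name. Building the
-- database project now reports an unresolved-reference error here,
-- long before any business-layer code hits the procedure at runtime.
CREATE PROCEDURE dbo.GetUserByEmail
    @Email nvarchar(256)
AS
BEGIN
    SELECT Id, Email
    FROM dbo.[User]    -- build error: unresolved reference to dbo.[User]
    WHERE Email = @Email;
END
```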

Now, once that is all done and you can successfully build, having cleared all your errors (the build will use the Target platform we set earlier to make sure all the features specified in the data schema, stored procedures, etc. are compatible), we can publish the .dacpac to SQL Azure.


On the Build menu there is a Publish <DatabaseName>… option; you can also find this by right-clicking on the project and finding "Publish…" there.

Now we end up with a dialog that was obviously designed by a developer without a UI consultant getting a chance to review it. Once you understand all its options and in what order to access them, it's pretty easy, but let's just say the UI is not intuitive. The dialog also can't quite make up its mind whether or not it is going to embrace the new terminology, and instead is a mishmash that just confuses the new conceptual approach.


The first place to start is at the bottom left, with the Create Profile button (unless of course you have already created a profile previously in the project, in which case you start with the Load Profile… button).

This will create a new publish profile and add it to the project (for the next time you need to use it).


Next we go up to the top left of the dialog and click the Edit button, and you will get the dialog we have all seen for creating a connection to a database. Here we put in the SQL Azure server URL and the SQL authentication credentials from the SQL Azure website.

Once that is done, we have two options on how to proceed. Staying with the "let's jump around on the dialog" theme the developer who put this together started with, we next need to look at the middle of the dialog, at the "Register as a Data-tier Application" checkbox on the right and the currently enabled "Advanced…" button to the left of it.


Let's start by clicking on the "Register as a Data-tier Application" checkbox. You may notice that the "Advanced…" button has become disabled. The Advanced button would allow you to set all sorts of scripting and publishing rights and defaults. A Data-tier has those rules "built in" to its schema, so we don't need to worry about them for this operation.


The other thing you will notice is that the "Block publish…" checkbox has become active. This will prevent a publish of the changes if the target connection already has a database published whose version number is the same as the one set back in the Data-tier Application dialog, but whose structure has changed. This means you need to go back and modify the version number before it will allow you to publish the changes. Of course, if you're working on your local development machine you may not care, but on a production machine, if the database has changed at all we really do need to make sure we have modified the Data-tier application version number.
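For what it's worth, that version number is stored as a simple property in the .sqlproj file, so bumping it can also be done in a text editor; the value below is just an example:

```xml
<!-- Fragment of a SSDT .sqlproj file; bump DacVersion whenever the
     schema changes so a blocked publish will be allowed through. -->
<PropertyGroup>
  <DacVersion>1.0.1.0</DacVersion>
</PropertyGroup>
```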

If all the settings are right, you should now go back to the bottom left of the dialog and "Save Profile". With that done, you can finally go to the bottom right and click "Publish", and the database and structure will be published to SQL Azure. The database will automatically be set up to keep track of its "versions" as a Data-tier Application.


Here you can see that the Data-tier functionality has added a __RefactorLog where it will keep track of Versioning and changes for you as part of being a Data-tier application.  This is a good thing.

The last thing to do is go into Visual Studio and check your Data-Tier Application database project in to TFS (or GitHub, or SVN, or whatever source control system you use -- just please use source control!).

Now to move the data

Ok, so I have actually been moving a database from our VPN to Azure as I have been writing this walkthrough (this is mostly for my co-workers at Red Jungle, but I thought I might as well share it with you as well). So up to this point we have:

  • Moved and versioned the database from the original hosting Virtual Machine onto my Development machine
  • Converted the Database structure to a SQL Database Project
  • Targeted the SQL Azure platform
  • “Converted” the database to be a Data-Tier Application
  • Set Versioning up.

So that's great: I now have the database that our SaaS product uses deployed to SQL Azure. I could now, in theory, just change the connection string in our SaaS product and it should start using the SQL Azure database (and in fact it does…). There's only one small problem: there is no data in that database, and Red Jungle's clients won't like having to re-enter all their data…

So I guess I should have just restored the backup I took of our SaaS database to SQL Azure. Well, there is a problem there as well: you see, SQL Azure doesn't allow for backups and restores as you and I have come to know them in SSMS. In order for SQL Azure to be massively parallel, scalable, and replicable, Microsoft had to make things work a little bit differently.

This is where we now learn about the .bacpac file format and exporting a Data-tier Application. Again, you can follow that link to read about it in depth, but I'll give you the highlights here.

First, let's go back to that database backup I downloaded and restored to my development machine. First things first: if any changes had to be made to the Data-tier application project in VS2012, I'll need to add a Schema Compare to my project, run it comparing my project to my target database on my machine, and update the database on my machine so it has the correct schema.

Next, if the database I grabbed off the Virtual Machine was not deployed as a Data-tier application (which in this case it wasn't), I will need to "publish" the project again, but this time against the local version of the database. This will add the versioning information to the database, making it a true Data-tier application as well.

Once that is done, I next right-click on the database in my local SQL Server and, under Tasks, look at the Data-tier Application options.


Hang on a moment -- what's that "Deploy Database to SQL Azure…" option, and why didn't we use it before? Well, three reasons:

  1. The database to be moved to Azure STILL needs to be converted to a Data-tier application, so we still would have had to do 80% of the work we already did.
  2. When deploying a database to SQL Azure you can only deploy a NEW database; that is, the <funkystring>,1433 SQL Azure server you have set up through the Azure Portal won't let you overwrite the existing database. You have to add a new one, which, as long as you plan ahead, could work for you. But you still need to learn how to back up and restore in this new context of Data-tier Applications, so this is as good a time to learn and practice as any.
  3. Because when I started this tutorial I had never yet right-clicked in the latest edition of SQL Server Data Tools since the update was installed on my machine, and I didn't know they had added this feature until I got to this point in the article… and I don't feel like re-writing it from scratch. ;-p

You might also notice the "Extract Data-tier Application…" option. The difference between Extract and Export is that Extract will just pull out the Data-tier schema, basically giving you an empty database of the current version, whereas the Export option pulls out the data AS WELL as the schema; it is akin to the old .bak file we are used to.

So, back to the "correct" way. Now that we have everything set up correctly, select the "Export Data-tier Application…" option.


In this case we want to save it to Windows Azure (if you haven’t yet, go set up a Storage Account and a Container in Azure).


Click the Connect… button


Now retrieve your values for Windows Azure from online; you can find the account key by clicking on Manage Keys on the storage home page.


Once you have those values fill out the settings dialog appropriately.


Click Next and you are taken to the "Summary" page; finally, click Finish.

This basically makes what we have known as a .bak file, compresses it all up, and uploads it to the Windows Azure storage account in a blob.

The rest now all happens from the Azure Management portal.

  1. Open the Azure Management Portal and log in.
  2. Click to enter the DB section.
  3. At the bottom of the page, in the ribbon, you should see "Import".
  4. Click Import, then follow the dialog to connect to the storage container where you dropped the .bacpac file, and fill in the settings. You will need to import this database using a new name; for example, in the dialog below I just tacked the word "Backup" onto the original database name.
  5. Making sure the Configure Advanced database settings option is set, click through to the next page, set the edition and database size, and click Go.
  6. Then go get yourself a cup of coffee -- this is going to take a while!

Then, once the database with the data has been properly imported, you rename it using the old Silverlight management console, issuing T-SQL commands into a SQL command prompt window.

-- wait 30 seconds, then rename the imported copy to the original name
WAITFOR DELAY '00:00:30'
ALTER DATABASE Database1_copy_02_01_2012
MODIFY NAME = Database1

Okay, that's the "official" way to do it, but if I have to fall back to the Silverlight management console and do esoteric T-SQL commands, well, I'd rather do PowerShell (and if you know me, you know that means I don't want to do that!).

So instead, the easiest thing to do is just delete the original database and use that "Deploy Database to SQL Azure…" command that I didn't know about! I just specified the name for the database to be the same as the one I had just deleted, and called it a day.


Which is exactly what I did!

Well, that gets the database off the Virtual Machine and onto SQL Azure. The last step was to go onto the virtual machine and change the connection string to connect to the new SQL Azure database.
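For reference, a SQL Azure connection string of this era in a web.config looks roughly like the following; the server name, database name, and credentials are placeholders:

```xml
<connectionStrings>
  <!-- myserver, SaasDb, and the credentials are placeholders -->
  <add name="DefaultConnection"
       providerName="System.Data.SqlClient"
       connectionString="Server=tcp:myserver.database.windows.net,1433;Database=SaasDb;User ID=myuser@myserver;Password={your_password};Encrypt=True;TrustServerCertificate=False;Connection Timeout=30" />
</connectionStrings>
```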

Next up: loading certificates for your Azure Website from a blob.


Azure | SQL Server


Xerocon 2013

by Phil on 25. February 2013 12:33

As an approved Xero API developer and network partner, the Red Jungle team is always keen to keep up with the latest happenings in the Xero world, and to further our knowledge of the development API.

Last week we attended the Xerocon 2013 conference, held across two days at the Viaduct Events Centre, Auckland. For me personally, this was my third Xerocon -- I've attended each one since the first modest gathering in Taupo in 2011. Each year the event gets bigger and better; this year there were over 800 attendees, which is very exciting. It was fantastic to be able to share the event with several of the team this year as well.

Those who follow me on Twitter may have caught a fair bit of this already.

Day 1 - API & Add-on Day

The day kicked off with a catch-up with Tony Rule (Product Manager, API), discussing the last 12 months and laying out the API roadmap for the next 12 to come.

Of particular interest for me, from a developer's point of view, were the following upcoming planned additions to the API functionality:

  • The ability to create new Accounts and Tax Rates via the API.
  • Access to create recurring invoices. This is something we have been keen to see for some time. We have developed several applications for clients that create invoices every month for the same amount; a recurring invoice would fit nicely in this scenario, but today we cannot set these up via the API.
  • Changes to allow for scope of access when connecting to a Xero account via the API. Effectively this allows us - when setting up a connection to Xero - to ask for access to certain API end-points only for that company. I think this will really help with user understanding when they choose to connect to Xero from within another application. They will be able to see exactly what information we will be authorised to access, and can have faith that the application won't be accessing any of their more sensitive account information for example.


Interesting to hear that paging will be introduced soon across end points to help keep request sizes and load against the API down -- though I was disappointed to hear that Webhooks, despite being one of the top requested features, are still not a priority. I would have thought this would go a long way towards solving performance issues.

Next up Ronan Quirke (API Account Manager) then gave a quick update on the add-on ecosystem, discussing changes coming to the add-on directory to help connect potential customers with the right add-on in a more intuitive way. He also highlighted potential opportunities for developers interested in tackling some new and currently untapped verticals.

Andrew Tokeley (Xero Product Manager) followed with an update on overall product direction, and some new planned features which will potentially interest developers and add-on partners in the future.

Rod Drury (CEO) took the stage to finish up the morning by giving a more general Xero ecosystem update: how he sees the market changing, and the opportunities for add-on partners to join Xero on their global journey.

After lunch we spent our time in the technical track sessions: Tony Rule and Owen Evans (Chief Architect) discussed API best practices, scaling out, and security. As you'd imagine, the Xero crew take scale and security very seriously, so it's always great to sit in on these sessions each year and gain some more insight into the Xero approach.

The afternoon continued with a workshop on 'getting funded', with Rod, Vaughan Rowsell (Vend CEO), and several prominent investors from venture capital firms discussing the process of obtaining funding for your software venture, when to start building a relationship with a VC, and even when it's appropriate to start turning a profit (or not!).

The evening ended with drinks and networking amongst the add-on exhibitors, which was a great time to wander around and see what interesting products others have been working on in the last 12 months.

Day 2 - Conference Day

While the second day is primarily aimed at Accounting partners, there's a lot to pick up on in terms of upcoming features and Xero's progress in the market. I spent most of the day with our client Re-leased, in their exhibition booth supporting them with their product launch and answering technical questions. More on this shortly.

Some interesting things coming in the main Xero business application include Purchase Orders, Quotes, Stock (Lite), and updates to Reporting. I'm sure most of these features will be made available to development partners via the API in a timely manner too, which makes them quite exciting for us, as it opens up the scope of what we can help you build.


It was great to see that the Pay Online feature of online invoices will shortly support more payment gateways than just PayPal. New options include Authorize.Net globally; eWay and DPS for Australia and New Zealand; Stripe for the US and Canada; and GoCardless for the UK. And of course there's a 'Custom URL' option for linking to any other existing payment platform you may have.

Updates have been made to the iOS and Android apps to allow bank reconciliations on the go. And Xero mobile for personal is apparently well into development and should be available soon.


Another reason it was a particularly satisfying event for us this year is that one of our clients, Re-Leased, launched their new SaaS product for commercial property management at Xerocon. We have been building the Re-Leased platform for the last 12 months, so it was fantastic serendipity to attend the latest Xero conference and launch a deeply Xero-integrated product on the same day.
I spent much of day 2 on the event floor in the Re-Leased booth and watched with great excitement the warm reception they received. There was barely a two-minute stretch where someone wasn't checking out the product and watching Tom run through a quick demo with intense interest.
It's hard to believe it was only 12 months ago that Tom and I sat down for a coffee before Xerocon 2012 to start seriously discussing the idea. And here we are, exhibiting the fully functional product one year later. We are over the moon that the event was such a big success for Tom and his team, and we look forward to developing the product further over the next 12 months!