codepalousa, conference, review

I had the opportunity this year to attend one of the Midwest’s premier community-run developer conferences, CodePaLOUsa, held in Louisville, KY on April 25th-27th.

The first day of the conference was all pre-compiler workshops, which I did not attend (and therefore cannot review).

April 26th

Keynote

The first day of the conference opened with a keynote by none other than Richard Campbell, from .NET Rocks!, one of the best podcasts available.

Richard did a fantastic job at the keynote and had the audience rolling with his jokes. He told a great story, but what really stuck with me was his endeavor to create software for humanitarian relief efforts through humanitariantoolbox.org. I can already see myself getting involved with this at some point.

Session #1 – The Class That Knew Too Much – Matthew Groves

This session covered refactoring techniques and included a brief introduction to aspect-oriented programming (AOP). Matthew is local to me and has given this talk many times in the area, but this was my first chance to attend it. Although I enjoyed the session, there were recurring technical difficulties with the projector and connection. I don’t believe this was an issue with the presenter’s hardware, as I witnessed the same problem later in the same room. The room was also quite small and quickly overflowed; in fact, Matt had to give a second session the next day to accommodate everyone who couldn’t fit. Overall, this is a fantastic session, and Matt does a great job all around.

Session: 8/10 Location: 5/10

Session #2 – Deeper Dive into the Windows Phone 8 SDK – Michael Crump

This session was all about the new features in the WP8 SDK. Not having tried any WP development before, I was surprised to see some of the items on the list, mostly because I would have expected them to already be there. Michael presented quite well, but ran into some demo issues that couldn’t be resolved during the session. He did, however, make the demos available via GitHub after the fact. The problem seemed to revolve around flaky internet connectivity, though I can’t prove that. This room was a lot larger than the previous one, but attendance was relatively low and didn’t require the space.

Session: 7/10 Location: 5/10

Session #3 – Secure Mobile Application Development – Jamie Ridgway

I hate to say it, but I did not enjoy this session. Jamie did a great job of gathering and presenting the information, but that’s all it was. His slides and talk were all based on OWASP’s top 10 vulnerabilities for mobile applications, and I could have read that information myself. I would have liked to see a few demos scattered in to demonstrate some of the issues. The room held everyone well and was a nice choice, though it did get rather cold.

Session: 5/10 Location: 8/10

Session #4 – Rails for the .NET Developer – Jamie Wright

Let me be clear: I am not a Ruby developer. I am a .NET developer. So why did I choose this session? I love to learn. I enjoyed the beginning of this session but quickly got lost in the demos. Jamie did a side-by-side comparison of the same application being developed in both .NET and Ruby on Rails. He did things a little differently by recording his demos ahead of time and narrating while he played them back for us. This worked out okay, but there were a few issues. First, the video speed was increased, and for people new to Ruby/Rails this made it difficult to follow at times, even with Jamie giving an overview. Second, about halfway through, he stopped showing the .NET videos and only showed the Ruby videos. I suppose by that point we had an idea of what the application was supposed to do, but I would have liked to keep the comparison. This was a smaller room, and relatively full, but it worked nicely. Jamie did have a technical issue or two with the projector, but luckily things got resolved.

Session: 6/10 Location: 7/10

April 27th

Keynote

This keynote presentation was given by Carl Franklin, the other half of .NET Rocks!. While I would love to review this keynote speech, Calvin slept in this morning.

Session #1 – All The Buzzwords: Backbone.js Over Rails In the Cloud – Jared Faris

I have to be careful what I say here, as Jared is one of my managers :)

Jared discussed many of the architecture choices he made while running development at a local start-up for a year and a half. The application was written in Ruby and utilized quite a few frameworks and packages during development. Not being a Ruby developer, as previously mentioned, I enjoyed hearing about their trials, tribulations, and the many decisions that came up along the way. Jared put a lot of extra time into his slides, using 8-bit style imagery throughout, which I loved. His talk was located in the same room where lunch and the keynotes were held. Attendance, while pretty good, did not warrant that much space, and participation and questions from the attendees were minimal at best.

Session: 8/10 Location: 5/10

Session #1.5 – Everyone Sucks at Feedback – Chris Michel

I was actually not expecting this session, since it was in the middle of lunch. The presenter did a great job speaking and used a lot of humor in his slides, which was a nice change. Honestly, I didn’t pay enough attention during this presentation (ummm…food!?) to warrant a full review, but from what I did catch, I would definitely watch him present again.

Session #2 – Open Space

A few people, including myself, decided to skip this session and have an open-space discussion on confidence. There were, at one point, about 8-10 people present (sorry, I don’t remember everyone!). I found this discussion rather enlightening, as I definitely have a confidence problem myself. It was good to hear that I’m not the only one, and that it can definitely be overcome.

Session #3 – Build a Single-Page App with Ember.js and Sinatra – Chris Meadows

Chris did a great job on this presentation, showing off one of the more ‘elusive’ JavaScript frameworks. While Sinatra was used, it was secondary to the main topic, serving only as the back-end. I haven’t had an opportunity (or need) to use Ember.js before, but after seeing Chris’s talk, I’m on the hunt for a project. He described the relationships between the views, controllers, models, and the router. The room was full, but worked out well.

Session: 8/10 Location: 8/10

Session #4 – An Introduction to Genetic Algorithms for Artificial Intelligence Using Rubywarrior – James McLellan

Whoa. I didn’t realize what I was getting myself into with this one. This talk focused heavily on genes and genetic makeup, something I know nothing about. The saving grace was that it was brought into focus using a Ruby application called RubyWarrior. This ‘game’ lets you define your own ‘genes’ (classes that act as AI behaviors, i.e., walk, turn, etc.) and then bundle those genes to try to solve a level in the game. There was a lot of Ruby code involved, which I expected given the title of the session. Overall, though, James’ presentation style was a little dry. The room was pretty full and seemed to be a good match for the session.

Session: 5/10 Location: 8/10

Closing Session

We almost didn’t stay for the closing session, but I’m glad we did. Carl Franklin took over again, asking attendees various development-related questions (he inevitably gave away the answers) and handing out prizes. Trust me, there were tons of prizes (we didn’t win any), and it was just a fun time.

Overall

Overall, I would say that CodePaLOUsa is a great conference. It’s run by intelligent people: by the community, for the community. My biggest complaints are minimal in the grand scheme of things. Some of the projectors and equipment seemed finicky (they may belong to the hotel; I’m not sure). Some of the rooms were a little cramped due to the hotel’s partitioning walls. As a first-time attendee, I would have liked more signage as well. Is it worth the $250? Yeah, I think so. Enough that I’ll be attending next year!

See you at CodePaLOUsa 2014!

webapi, mvc

Alright, you have your MVC 4 website up and running, and you realize you need to add some WebAPI support; not necessarily for the website itself, but for potential external consumers. Your site is configured using Areas for everything, so you decide it would be best to put the WebAPI layer in an Area as well. Makes sense, right? Right. You quickly find out that it isn’t as simple as right-clicking, adding a new area, naming it API, and patting yourself on the back. That’s where this trick comes in.

Now, by default in an MVC 4 project, your Global.asax file calls out to another class to configure WebAPI. It will look something like this:

WebApiConfig.Register(GlobalConfiguration.Configuration);

Guess what? Comment that line out. The class it calls lives in the App_Start directory, aptly named WebApiConfig.cs. You can leave it or delete it. Your call.

Now, head over to your area; we need to make some routing changes.

Look for APIAreaRegistration.cs and open it up.

Bring in another namespace:

using System.Web.Http;

Now, see that route down below? It needs a few minor tweaks to work with WebAPI. Change the method call from:

    context.MapRoute(
        "API_default",
        "API/{controller}/{action}/{id}",
        new { action = "Index", id = UrlParameter.Optional }
    );

to

    context.Routes.MapHttpRoute(
        "API_default",
        "API/{controller}/{id}",
        new { id = RouteParameter.Optional }
    );

In a nutshell, we changed the route to register an HttpRoute, got rid of the {action} segment (WebAPI dispatches by HTTP verb rather than by action name), and swapped UrlParameter.Optional for WebAPI’s RouteParameter.Optional.
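For reference, here’s a rough sketch of what the finished APIAreaRegistration.cs might look like (the namespace is a placeholder for your project’s own):

    using System.Web.Http;
    using System.Web.Mvc;

    namespace MyApp.Areas.API {
        public class APIAreaRegistration : AreaRegistration {
            public override string AreaName {
                get { return "API"; }
            }

            public override void RegisterArea(AreaRegistrationContext context) {
                // Register an HttpRoute so requests under /API/ are dispatched
                // to WebAPI controllers instead of MVC controllers.
                context.Routes.MapHttpRoute(
                    "API_default",
                    "API/{controller}/{id}",
                    new { id = RouteParameter.Optional }
                );
            }
        }
    }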

Boom. You’re done.

Keep in mind that this is now THE WebAPI layer for your application: with the changes we’ve made, you can’t have any other WebAPI controllers outside of your Area. If you find you need both the Area and others, there are a couple of approaches others have posted to make that work. I didn’t need anything like that, so this worked well for me.

nuget, powershell

Recently a need arose to have a few project-level items added to a project via a NuGet package. While this was no big deal, we ran into an issue: the items needed to be marked as Copy if Newer for the Copy to Output Directory property, and we couldn’t find a way to change that through the package alone.

After a bit of research, we determined that an install.ps1 PowerShell script (run as part of the NuGet package installation) could access the project items and set their properties.

A script was written to handle the files added to the project:

    param($installPath, $toolsPath, $package, $project)

    $file1 = $project.ProjectItems.Item("Folder\Item.exe")
    $file2 = $project.ProjectItems.Item("Folder\Item.exe.config")
    $file1.Properties.Item("CopyToOutputDirectory").Value = [int]2
    $file2.Properties.Item("CopyToOutputDirectory").Value = [int]2

Unfortunately, the script didn’t work. Why, you ask? Well, after some more digging, it turns out you can only access top-level items with a single Item() call, so you have to chain the calls together to reach nested items:

    param($installPath, $toolsPath, $package, $project)
    
    $file1 = $project.ProjectItems.Item("Folder").ProjectItems.Item("Item.exe")
    $file2 = $project.ProjectItems.Item("Folder").ProjectItems.Item("Item.exe.config")
    # 2 = Copy if Newer
    $file1.Properties.Item("CopyToOutputDirectory").Value = [int]2
    $file2.Properties.Item("CopyToOutputDirectory").Value = [int]2
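
One readability note: the magic number 2 maps to Copy if Newer. The CopyToOutputDirectory property accepts 0 (Do not copy), 1 (Copy always), or 2 (Copy if newer), so you may want to name the value:

    # CopyToOutputDirectory: 0 = Do not copy, 1 = Copy always, 2 = Copy if newer
    $copyIfNewer = 2
    $file1.Properties.Item("CopyToOutputDirectory").Value = [int]$copyIfNewer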

webapi, mvc, entity-framework

In this post, I’ll show you some of the basics of how to use Entity Framework 5.0’s “Code-First” features to develop a data access layer against an existing database. I’ll be demonstrating these concepts with a new MVC 4 Web API application and SQL Server 2012, but I won’t be covering either of those in this tutorial.

While Code-First is a great paradigm for starting a new application from scratch, you can also use it to map back to an existing database with ease.

Let’s pretend we’re working with a very simplistic Twitter model, as shown below. [Figure: Database Schema]

Not a lot of meat here, a simple structure for Users and their Tweets. Of course, the real Twitter model is more complex, but this will suffice for the purpose of this tutorial.

To demonstrate how to accomplish this, we’re going to create a new MVC 4 Web API application in Visual Studio 2012, using C# as our language. Our database will be running in SQL Server 2012.

After launching Visual Studio, navigate to the FILE | New | Project dialog, select Web from the installed templates section, choose ASP.NET MVC 4 Web Application, give your project a name, and click OK.

I’m going to call mine “Tweeters”. [Figure: New Project Dialog]

Once you’ve hit OK, another dialog will pop up (below), asking what kind of MVC 4 web application you would like to create. Go ahead and choose “Web API” from the list, and press OK. [Figure: New MVC4 Dialog]

Once Visual Studio finishes creating the project, you should have a structure resembling the figure below. [Figure: New Project Finished]

By default, a new Web API project will have quite a number of files put in place for you. For the most part, we’re going to leave them alone.

Visual Studio 2012 automatically pre-installs Entity Framework 5.0 for us, so we’re ready to start coding! If you ever need to install it separately, however, you can find the package for download via NuGet.

The first thing I’m going to do is create two basic classes that represent our database tables. These ‘models’ will be placed in the Models folder of your application. [Figure: New Models]

Let’s run through the code, so you can understand what’s going on.

If you notice the highlighted portions in the previous figure – these are attributes. We are using them to define some of the constraints we need to put on our models. Let’s go through each one in detail.

  • [Key] – This tells EF that the property that directly follows is to be used as the primary key.
  • [Required] – A value for this property must be supplied.
  • [MaxLength(x)] – Sets the length of the string EF will accept when saving.

One thing to note is the property at the bottom of the Users class. You’ll notice we have an ICollection<Tweets> property; this lets EF know how to navigate through the objects to, in this case, the child “Tweets” of “Users”. Essentially, we’re setting up the database relationship between these two objects. So for the one-to-many relationship of Users and Tweets, we use a collection.

If you refer back to our database model in the first figure, you’ll see that the Required and MaxLength attributes just replicate what our database will accept for these tables.
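
Since the models themselves are only shown in the screenshot, here’s a minimal sketch of what they might look like (the property names and lengths are assumptions based on the description, not the exact code from the figure):

    using System.Collections.Generic;
    using System.ComponentModel.DataAnnotations;

    public class Users {
        [Key]
        public int UserId { get; set; }

        [Required]
        [MaxLength(50)]
        public string UserName { get; set; }

        // Navigation property: tells EF that a User has many Tweets.
        public virtual ICollection<Tweets> Tweets { get; set; }
    }

    public class Tweets {
        [Key]
        public int TweetId { get; set; }

        [Required]
        [MaxLength(140)]
        public string Message { get; set; }

        // Foreign key back to the owning user.
        public int UserId { get; set; }
    }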

The next thing we have to do is create a database context. A context tells EF what database to connect to, and what models it expects.

Simply create a new class file (I’m calling mine EntityContext), and make it inherit from DbContext. This is a class provided by EF, so you’ll need to import the namespace to get access to it.

Once you have that, you’ll need a constructor. Notice, in our constructor, that we call into the base constructor and pass it a string. This is the connection string from the web.config that we want EF to use during database connections.

You’ll see that, in our constructor, we set the initializer for our context to null with SetInitializer<>. This is very important. Since we’re connecting to an existing database, we don’t want EF to touch that database schema AT ALL. That’s what this is doing: telling EF NOT to try any database initialization logic; just connect, and leave it alone.

The next thing you’ll notice are the public DbSet<> properties. You want one of these for each model you want to interact with. In our case, we have Users and Tweets. [Figure: Context]
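
Here’s a sketch of the context just described (the connection string name “DefaultConnection” is an assumption; use whatever name you define in your web.config):

    using System.Data.Entity;

    public class EntityContext : DbContext {
        public EntityContext() : base("DefaultConnection") {
            // We're mapping to an existing database, so tell EF
            // not to run any initialization/schema logic against it.
            Database.SetInitializer<EntityContext>(null);
        }

        public DbSet<Users> Users { get; set; }
        public DbSet<Tweets> Tweets { get; set; }
    }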

That’s all for the context. Pretty simple, huh? Let’s go define our connection string in the web.config. [Figure: Web Config]

Your connection string may differ from mine slightly, but I’m just connecting to a running database on my local machine.
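
An entry along these lines would do (the name and server details here are assumptions; adjust them to your environment):

    <connectionStrings>
      <add name="DefaultConnection"
           connectionString="Data Source=.;Initial Catalog=Tweeters;Integrated Security=True"
           providerName="System.Data.SqlClient" />
    </connectionStrings>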

Now we can go work on the controller that will be responsible for serving up some data from our database.

The first thing I’ve done is rename the ValuesController to UsersController, just to make a little more sense. You don’t have to do this if you don’t want to; it doesn’t really matter in the long run. Just remember what you called it when we go to make a request to the method. [Figure: Users Controller]

We start out by creating an EntityContext reference object and instantiating it in the constructor. In the Get method, we simply call the context and grab the Users collection. (This is the DbSet<Users> we talked about in the context section)
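
Since the controller code is only shown in the screenshot, here’s a minimal sketch matching that description (not the exact code from the figure):

    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Http;

    public class UsersController : ApiController {
        private readonly EntityContext _context;

        public UsersController() {
            _context = new EntityContext();
        }

        // GET api/users
        public IEnumerable<Users> Get() {
            return _context.Users.ToList();
        }
    }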

This is all we need to do to start querying our database! Press F5 to start debugging the project (hopefully you have no errors). You should get a default webpage (below) describing how to get started with a Web API project. We’re NOT going to pay any attention to it, since we just want to query our database through the API call. [Figure: Website Landing]

I’m going to fire up Fiddler to make a GET request to our API method. [Figure: Fiddler]

By default, the API runs under an “api” route, followed by the controller name. So to issue a GET for our Users, we just need to construct the URL in Fiddler’s Composer tab, with GET as the method. Once you have that set up, hit “Execute”.
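
With a local instance, the request will look something like this (the port is whatever Visual Studio assigned your project):

    GET http://localhost:{port}/api/users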

In the figure below, we’re now looking at the result of our HTTP request. You should hopefully get a 200 result. If so, double-click that line to see the results of your query. [Figure: Fiddler Results] [Figure: Query Results]

If all has gone well, you should see the result of your query, presented in JSON format! Notice that we got back ALL of our Users, along with a collection of their Tweets. You might be wondering why we got the Tweets back when we only requested the Users; this is due to the collection property we added to the Users class, which lets EF know that each User has a collection of Tweets beneath it. Entity Framework is smart enough to work the magic beneath the covers and populate the collections for us! Awesome!
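
To give you an idea of the shape of the response, a trimmed-down example might look like this (field names follow the model sketch from earlier; the values are made up purely for illustration):

    [
      {
        "UserId": 1,
        "UserName": "calvin",
        "Tweets": [
          { "TweetId": 1, "Message": "Hello, world!", "UserId": 1 }
        ]
      }
    ]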

Conclusion

To recap, I showed you how to create a new ASP.NET MVC 4 Web API project that is backed by an existing database for querying through Entity Framework 5.0 Code-First.

Hopefully this tutorial has given you some basic insight into the capabilities of Entity Framework 5.0 Code First. I encourage you to keep digging into it, as this was only the tip of the iceberg!

webapi, mvc

On a recent project, we had a need to integrate Azure blob storage with our web application (also hosted on Azure) to store images. What we found is that it’s surprisingly easy!

Let’s take a look at what we had to do (and how LITTLE code we had to write) to successfully store our images ‘in the cloud’.

The first thing you’ll need to do is login to the management portal and create your storage account. Then, create your container. I won’t go over how to accomplish these two steps, as they are fairly trivial with plenty of walk-throughs out there already.

Now, to connect to your container, you’re going to need three pieces of information from the Azure website:

  • AccountName – this is what you called the account when you initially set up the storage account. Mine is called ‘asdfasdfadsf’. Pretty memorable, huh?
  • Container Name – this is just the name of the container you created in the second step. I called mine ‘blobs’.
  • AccountKey – At the bottom of the management portal while looking at your storage account, there is an item called ‘Manage Keys’. Click it to open the dialog that will have your key. It will be a long string of random characters.

Aside from that, there is a connection string template that you’ll plug the AccountName and AccountKey into. The format is as follows:

DefaultEndpointsProtocol=https;AccountName={ACCOUNT_NAME};AccountKey={ACCOUNT_KEY}
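
With the sample account name from above plugged in, it looks like this (keep your real AccountKey secret; the key stays a placeholder here):

DefaultEndpointsProtocol=https;AccountName=asdfasdfadsf;AccountKey={ACCOUNT_KEY}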

With this information in hand, we can switch over to Visual Studio while I describe the remainder of the process. Please note, I’ll be using Visual Studio 2012 for this, but you can also use 2010.

You can create any type of project you want, but I’ve wrapped up my Azure blob storage logic into a separate Class Library for easy reuse.

For this project, you’ll need to download two packages from NuGet:

  • Windows Azure Configuration Manager
  • Windows Azure Storage

Note that if you grab the Windows Azure Storage package first, it will pull down the other automatically, as it directly depends on it. You can use the Package Manager Console with the following statement to install the packages:

install-package WindowsAzure.Storage

Once you have these packages in your project, add two using statements to the top of your class file.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

In my class, I’ve utilized the constructor, and four methods: Delete, Update, Create, and Get.

We need some class level variables to start with, so we’ll add:

private readonly CloudStorageAccount _storageAccount;
private readonly CloudBlobContainer _container;

Now let’s move on to the constructor, which will use the private class-level variables we declared above:

public AzureFileService() {
    try {
        // Parse the connection string and connect to the storage account.
        _storageAccount = CloudStorageAccount.Parse("<CONNECTION STRING>");

        // Grab a reference to our container, creating it if it doesn't exist yet.
        CloudBlobClient blobClient = _storageAccount.CreateCloudBlobClient();
        _container = blobClient.GetContainerReference("<CONTAINER NAME>");
        _container.CreateIfNotExist();
    } catch (Exception) {
        // Swallowed for brevity; flesh this out with real error handling.
    }
}

Essentially, what this does is connect us to the container in our Azure Storage account. If the container doesn’t exist, it creates it. (Or tries to; sometimes that fails.) I recommend storing your connection string and container name in a config file somewhere, and fleshing out the catch block for proper error handling.

Moving on to the methods, we’ll start with the Create:

public Guid Create(byte[] content) {
    Guid blobAddressUri = Guid.NewGuid();
    CloudBlob blob = _container.GetBlobReference(blobAddressUri.ToString());
    blob.UploadByteArray(content);
    
    return blobAddressUri;
}

Our method takes a byte array as a parameter; in our case, the bytes of the image we want to store. Each ‘blob’ stored in your container needs a unique name, declared as a string. For our purposes, we just used a GUID, which we store in a relational table back in our application’s database. There are additional properties on the CloudBlob object, such as the content type, which you can set to values like application/octet-stream or image/jpeg. Azure assumes ‘application/octet-stream’ by default, and it doesn’t really matter in the end as long as you know what the correct type is. All of ours are images, so we left the default.

Guess what? That’s ALL the code there is to storing an image in Azure Storage!

For the sake of completeness, however, I’ll show you the other methods, which are actually shorter than the Create method.

The Get method is simple. Give it the name of the blob you want, and it’ll give you back a byte array:

public byte[] Get(Guid id) {
    var blob = _container.GetBlobReference(id.ToString());
    return blob.DownloadByteArray();
}

For the Delete method, we again only need the name of the blob (a GUID in our case), and we call the Delete method on the blob object:

public void Delete(Guid imageId) { 
    CloudBlob blob = _container.GetBlobReference(imageId.ToString());
    blob.Delete();
}

The Update method needs two parameters: the name of the blob you want to update, and the byte array of the new image. Granted, you could omit Update entirely and rely on Delete and Create, but we wanted methods matching the HTTP verbs of a RESTful API.

public void Update(Guid imageId, byte[] content) {
    CloudBlob blob = _container.GetBlobReference(imageId.ToString());
    blob.UploadByteArray(content);
}

That’s ALL the code I had to implement to make storing images in Azure Storage work. I was surprised at how little effort was required, and it makes me love Azure even more.

To be absolutely sure you have all the code correct, here is the entire class file:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

namespace AzureStorage {
    public class AzureFileService {
        private readonly CloudStorageAccount _storageAccount;
        private readonly CloudBlobContainer _container;
        
        public AzureFileService() {
            try {
                _storageAccount = CloudStorageAccount.Parse("<CONNECTION STRING>");
                CloudBlobClient blobClient = _storageAccount.CreateCloudBlobClient();
                _container = blobClient.GetContainerReference("<CONTAINER NAME>");
                _container.CreateIfNotExist();
            } catch (Exception) {
                // Swallowed for brevity; add real error handling here.
            }
        }
        
        public void Delete(Guid imageId) {
            CloudBlob blob = _container.GetBlobReference(imageId.ToString());
            blob.Delete();
        }
        
        public void Update(Guid imageId, byte[] content) {
            CloudBlob blob = _container.GetBlobReference(imageId.ToString());
            blob.UploadByteArray(content);
        }
        
        public Guid Create(byte[] content) {
            Guid blobAddressUri = Guid.NewGuid();
            CloudBlob blob = _container.GetBlobReference(blobAddressUri.ToString());
            blob.UploadByteArray(content);
            
            return blobAddressUri;
        }
        
        public byte[] Get(Guid id) {
            var blob = _container.GetBlobReference(id.ToString());
            
            return blob.DownloadByteArray();
        }
    }
}
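
And if it helps to see it in action, here’s a hypothetical consumer of the class (the file path and the program around it are made up for illustration):

using System;
using System.IO;
using AzureStorage;

class Program {
    static void Main() {
        var blobs = new AzureFileService();

        // Store an image and capture the GUID it was saved under.
        byte[] imageBytes = File.ReadAllBytes(@"C:\images\logo.png");
        Guid id = blobs.Create(imageBytes);

        // Later, pull the same bytes back down with that GUID.
        byte[] roundTripped = blobs.Get(id);
        Console.WriteLine("Stored blob {0} ({1} bytes)", id, roundTripped.Length);
    }
}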

All that beautiful functionality in less than 60 lines of code. I don’t know how you feel about it, but I think the Azure team is doing wonderful things.