mentoring, career, life

When I started at HMB a few years back, one of their policies was that everyone gets a mentor. At first, I wasn't quite on board with the policy, but I didn't have much of a choice.

Fast forward two years: I'm no longer at HMB, but with Heuristic Solutions instead. Heuristics is a much smaller company, so pairing people off like HMB did just doesn't quite work, especially when you add the fact that a lot of us are remote. However, I realized I actually MISSED having a mentor.

I decided to "take action into my own hands", so to speak, and reach out to my network on Twitter to see if there would be someone WILLING to mentor me.

The gist is pretty simple - spend some time with me once a month over lunch (my treat), and we'll chat about what I see as a successful career in technology (still figuring this out, actually).

I got two responses. While that doesn't seem like a lot, my Twitter network isn't all that big - and this is somebody's spare time they're giving away to help another. I consider that a complete success.

I ended up having to choose, which was difficult. Granted, I could have more than one mentor, and might seek that out in the future, but for now, we'll start with one :)

I chose Seth Petry-Johnson, who I also work with at Heuristic Solutions. Seth is the technical architect on LearningBuilder, the product I work on full-time. I won't go into great detail, you can check out his blog/twitter/etc., but Seth is a highly skilled software craftsman whom I respect quite a lot.

We had our first lunch on 8/21, and I have homework :). That, to me, says I found a great mentor.

I'll try to post more often as we meet, and work on getting me on some sort of path :)

jenkins, continuous-integration, ci

So you want to run Jenkins on port 80 on your Windows machine, huh? It's easier said than done. Well, it used to be, before I wrote this blog post.

If you're like me, you've already spent some time trying to get this to work the way the Jenkins documentation suggests it should:

  • Open Jenkins.xml
  • Set --httpPort=80 in the command line section

Seems fairly easy, doesn't it? Well, as you've probably already figured out, it doesn't work.

Jenkins, apparently, cannot bind to a 'system' reserved port (anything below 1024).

Again, if you're like me, you fiddled with this for a while - setting the Windows service to run as Local Account/Network Account/Local Admin/Domain Admin/etc. Nothing works.

Then, you start searching the internet and find out this is a real problem that people have been struggling with for a while, and apparently, you can solve it by installing Apache.

Now, don't get me wrong, I have nothing against Apache - though I haven't used it in a while, I used to use it almost exclusively. For this installation, however, I didn't feel like attempting to get that to work, and wondered if it would be just as possible with IIS - which was already installed.

I finally figured out the secret recipe for getting it to work, and it goes a little something like this:

  1. Configure Jenkins to run on whatever port you like - I left it at the default 8080
  2. Install IIS
  3. Install the IIS URL Rewrite module
  4. Install the IIS Application Request Routing module
  5. Create a 'dummy' website that is bound to port 80 in IIS
  6. Add a 'Reverse Proxy Inbound Rule' to the dummy website that rewrites the requests from port 80 to port 8080 (on the same machine)

Step 6 is a little more involved than the others, so, screenshots!

Pick the rule type to create (Reverse Proxy Inbound Rule)


Configure the basic rule (input the destination IP address/port)


The rule should look similar to this


Then, since we're only using HTTP, I remove the condition and hardcode HTTP in the rewrite URL.
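For reference, the rule the wizard writes into the dummy site's web.config should look roughly like this (a sketch - your rule name and pattern may differ slightly from what you entered):

<system.webServer>
  <rewrite>
    <rules>
      <!-- Forward everything that hits this site on port 80 over to Jenkins on 8080 -->
      <rule name="ReverseProxyInboundRule1" stopProcessing="true">
        <match url="(.*)" />
        <action type="Rewrite" url="http://localhost:8080/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>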

If all is successful, you should be able to hit the 'website' running on port 80 of your server and be transparently proxied through to Jenkins on port 8080 without ever noticing it.

If you run into any issues, or have any feedback, please leave a comment below!

channel9, dotnetconf, general, powershell

Based on my previous script of a nearly identical title, this script will snag the 2014 dotNetConf videos (high quality MP4s) from Channel 9.

Change the $baseLocation to a folder of your choosing, and let it go.

$baseLocation = "V:\Coding\Channel 9\dotNetConf 2014\"

$rssFeed = New-Object -TypeName XML
$rss = (New-Object System.Net.WebClient).DownloadString("http://s.ch9.ms/Events/dotnetConf/2014/RSS/mp4high")

$rssFeed.LoadXml($rss)

$itemCount = $rssFeed.rss.channel.item.Count

for($i = 0; $i -lt $itemCount; $i++)
{
     $fileCount = $i + 1
     Write-Progress -Activity "Downloading Recordings..." -Status "Processing file $fileCount of $itemCount" -PercentComplete (($i/$itemCount)*100)

     $item = $rssFeed.rss.channel.item[$i]

     $fileExtension = $item.enclosure.url.Substring($item.enclosure.url.lastIndexOf('.'), $item.enclosure.url.length - $item.enclosure.url.lastIndexOf('.'))

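     # Strip any characters that are invalid in Windows file names from the title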
     $cleanFileName = [RegEx]::Replace($item.title, "[{0}]" -f ([RegEx]::Escape([String][System.IO.Path]::GetInvalidFileNameChars())), '') 

     $downloadTo = $baseLocation+$cleanFileName+$fileExtension

     If(!(Test-Path $downloadTo)) 
     {
          (New-Object System.Net.WebClient).DownloadFile($item.enclosure.url, $downloadTo)
     }
}

career, general, pluralsight


That's right, I've been approved to be a Pluralsight author!

Well, technically, I was approved over a year ago, but I never ended up producing anything, so I'm getting back on the bandwagon. I submitted 6 different course topics, so hopefully one of them will come to fruition.

Stay tuned for more details!

general, imposter-syndrome, work

As most of you know, I recently started a new job with Heuristic Solutions working on their LearningBuilder product. I’ve been with them now for about a month, and up until about last week, things were going great. Then, out of nowhere, The Imposter Monster made its unwelcome return.

Now, I’ve always been rather hard on myself when it comes to my work. Doubts plague me daily about where I am and where I should be. They never quite overlap in my mind – probably never will – but I’m working on it.


Having said all that, I do believe that part of this is just from the stress of starting a new job. Everybody’s been there, I imagine. You leave a job where you provided value, felt good about the decisions you made, knew everyone, etc. – then, all of a sudden, you’re back on the bottom of the totem pole without any of those happy feelings.

Am I supposed to be here? Do I deserve this job? Do I fit in? Do I actually have what it takes to bring value to this company?

I can’t answer those questions.

All I really wanted to do was get this post out here, in hopes that if someone is feeling the same way, perhaps they’ll come across this post and realize – they’re not alone.

career, general

The last few weeks have been a little stressful here at the Allen household due to the job situation, but things are turning around!

On Tuesday, May 27th, I will be starting a new job that I'm extremely excited about.

What is it you ask?

Well, I have accepted an offer with Heuristic Solutions to work on their LearningBuilder product!

Not only do I get to work on a fantastic team of people on a great product, but I get to do it from home!

general, virtual-machines, virtualbox

I had a really difficult time coming up with a title for this post, but essentially what I'm trying to convey is:

  1. You created a VirtualBox machine
  2. You deleted/removed/transferred the VM files on your HDD
  3. VirtualBox will no longer let you manage that machine - it shows as "inaccessible" and won't even let you remove it.

I found myself in this predicament earlier this evening. All I wanted to do was remove the VM from the list of machines in VirtualBox, but it wouldn't let me because it couldn't find any of the files in the given location. After a few hacks, I found a solution. Hopefully, this might help someone.

Given a VirtualBox machine - let's call him "Test Machine", that resides on your host machine's file system

You, accidentally or purposefully, delete the folder containing the VM

Suddenly, you can no longer access the VM from within the VirtualBox Manager

And, any attempts to remove it are futile

If you can deal with the 'clutter' of an orphaned VM, yay. If not, like me, we gotta get rid of that thing.

Here's how I did it.

  1. Shut down all VMs, and close VirtualBox Manager
  2. Navigate to your personal VirtualBox settings file (Mine is in my Users folder - C:\Users\Calvin\.VirtualBox\VirtualBox.xml)
  3. Edit the file in Notepad/Notepad2/Notepad++/WhateverEtcPad++2
  4. Find the MachineRegistry section, and remove the MachineEntry for the offending machine
  5. Save the file, close, and reopen VirtualBox Manager. If all went well, you should no longer see the machine.
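For reference, the entry you're removing looks something like this (the uuid and src values here are made up for illustration):

<MachineRegistry>
  <MachineEntry uuid="{3a1c0a6b-1c6e-4f2d-9c1d-2f77a0b1c111}" src="C:\Users\Calvin\VirtualBox VMs\Test Machine\Test Machine.vbox"/>
</MachineRegistry>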

That's it! Hope it helps someone!

general, microsoft-build, powershell

Crazy long title, but you get the idea.

Here is a PowerShell script that will download all of the mp4 high quality videos from Channel 9 for Build 2014.

Change the $baseLocation to a folder of your choosing, and let it go.

$baseLocation = "V:/Coding/Channel 9/Build 2014/"

$rssFeed = New-Object -TypeName XML
$rss = (New-Object System.Net.WebClient).DownloadString("http://s.ch9.ms/Events/Build/2014/RSS/mp4high")

$rssFeed.LoadXml($rss)

$itemCount = $rssFeed.rss.channel.item.Count

for($i = 0; $i -lt $itemCount; $i++)
{
     $fileCount = $i + 1
     Write-Progress -Activity "Downloading Recordings..." -Status "Processing file $fileCount of $itemCount" -PercentComplete (($i/$itemCount)*100)

     $item = $rssFeed.rss.channel.item[$i]

     $fileExtension = $item.enclosure.url.Substring($item.enclosure.url.lastIndexOf('.'), $item.enclosure.url.length - $item.enclosure.url.lastIndexOf('.'))

     $cleanFileName = [RegEx]::Replace($item.title, "[{0}]" -f ([RegEx]::Escape([String][System.IO.Path]::GetInvalidFileNameChars())), '') 

     $downloadTo = $baseLocation+$cleanFileName+$fileExtension

     If(!(Test-Path $downloadTo)) 
     {
          (New-Object System.Net.WebClient).DownloadFile($item.enclosure.url, $downloadTo)
     }
}

camtasia, encode, expression-encoder, general, gotowebinar, hmb

At HMB, we hold various training opportunities through an internal program known as "HMB University". Some of the events we hold consist of Lunch & Learns, Deep Dives, Book Clubs, and even a Meet & Code Weekend. Most of the instructional material (topics, slides, etc.) is prepared by employees of HMB, and ranges from technical topics (MVC, Xamarin, etc.) to project management related topics (Agile, Kanban).

Over the last few years, we've come to realize how much information we were losing after the events were over. We had no way to present the event at any other time (without the employee giving the same presentation again). After much debate, we've decided to start recording the sessions (in this case, the Lunch & Learns specifically) using GoToWebinar (as we have employees 'attending' remotely).

Awesome, so now we have recordings. But, how many video recordings are perfect? Not many, if any. I own a copy of Camtasia Studio 8, so I volunteered to help edit the videos (It's actually kind of fun for this old developer).

Now comes the problem.

GoToWebinar records into a WMV format. Camtasia can import a WMV - except one that's produced through GoToWebinar. Doh! Evidently, GoToWebinar formats their file a little differently than a straight native WMV - which is what Camtasia wants.

After a short time researching, I found a method that 'fixes' the WMV so that you can import it into Camtasia for editing.

Essentially, you need to re-encode the video. I did this by using Microsoft Expression Encoder 4 (free version). Now, this program isn't being maintained anymore, and the old paid version (which you can no longer get) would encode to MP4, but the free version only does WMV. For my projects, that's okay, but if you MUST have MP4, there should be other applications out there that can handle it.

To sum everything up, re-encode the video and save to a new file, and you should be able to import it into Camtasia!

Happy editing!

P.S - As part of my 2014 blogging goal, this post goes towards February, so expect another post for March soon!

general, goals

Here again, late getting this posted – I had a good excuse though :)

I have a couple goals I want to get written down and be somewhat accountable for. Let’s get to it.

Blog More

I say this every year, and never actually accomplish anything on it. This year will be different, I swear! To measure whether or not this will be successful, I will post AT LEAST ONE blog entry per month throughout the year.

Be An Awesome Dad

By the end of 2014, Ava will be exactly one year old. While this task can only be measured by her, and she really won’t have the ability to measure it – I’m just going to give it my best shot. That’s better than some of the dads out there (not any of you guys, you guys are cool ;))

Develop Session Abstracts

In relation to giving talks at conferences, I need to develop some session abstracts. By the end of the year, I would like to have at least 4 abstracts – two .NET related, and two more on other topics (right now, it looks like Rust and maybe Xamarin).

Speaking Engagements

Columbus has a thriving development community, with its fair share of community-organized conferences. This goal is to present AT LEAST ONE talk at one of these local conferences this year AND/OR present one at a local user group.

Start a Local Conference

I have come back to this topic time and time again, and this year, we're putting it on the goal list.  By the end of this year, I would like to have a fleshed-out 'business plan' for a local developer conference.  This does not include having HAD the conference, just getting the plans ready.

Promotion to Senior Consultant

In accordance with my work related goals, I would like to make the jump to senior consultant at HMB.  I do believe I hold the technical prowess to obtain the position - I just need to work harder at some of the finer details.  Mainly, get more involved with activities such as recruiting, sales, and the mentoring process.  Our yearly reviews will occur around October, so this goal should happen during that time.

I do believe this is all for now. Come back occasionally and check on my progress; I’ll keep you apprised as the year progresses!

children, general, life

I’m a little late getting this out, but my wife and I just had our first baby (well, two weeks ago)!

On December 31st (Yes, I know…tax deduction) at 2:12pm, we welcomed our baby girl, Ava Elizabeth, to the world! She was 8lbs. 9oz., and 22.5” long.

We did find out that Ava has a biotin deficiency, which means she’ll basically have to take a vitamin supplement for the rest of her life. While it hurts to find this out, we’re relieved that Ohio screens for it during the newborn screening (some states don’t, even though it’s recommended by the March of Dimes). And, it’s a vitamin – not a drug – so it’s completely OTC and natural.

Beyond that, mom and baby were/are healthy and doing well!

conferences, general, review, stirtrek

Oh man, StirTrek was awesome this year! There were a ton of people there – I think around 1200, total. It did, at times, make me a little claustrophobic, which I have never had a problem with before, so that was new :)

Let’s get straight to the details….

Registration was a snap – quick and easy, no problems.

I loved the badges they gave us – awesome Star Trek image on it, a place for your name, and, my favorite – the schedule for the day on the back. You could simply flip it over to see where you were going next. It was a great idea!

Breakfast consisted of donuts, bagels, coffee and water. I’m not much of a breakfast eater, so it worked out perfectly for me.

After some mingling, it was off to the first session of the day!


Javascript Spaghetti - Jared Faris

I had a strong desire this year to hit all Javascript-based talks, so this has nothing to do with the fact that Jared is my boss :) Anyway, this was a great talk. Jared did a wonderful job of incorporating some humorous aspects into his talk, while still making it relevant and interesting. I learned a few things, and had a great time, what more could you ask?


Understanding Prototypal Inheritance - Guy Royse

Guy is another one of those fantastic speakers that you should definitely try and catch any chance you get. The topic of this talk is a difficult one to give, and while I came away still a little confused, Guy made it fun and enjoyable.


LUNCH!

Jimmy John’s. Served in whatever theatre you were already in. Awesome. And Yummy. ‘Nuff said.


Custom Graphics for your Web Application: The HTML5 Canvas and Kinetic.js - Jason Follas

I went to this talk to learn more about Kinetic.js than the Canvas itself, and I was glad to see a nice portion of the talk dedicated to it. I had not heard of Kinetic.js prior to this talk, and man, it’s amazing! The features seem to be pretty robust, and Jason has even contributed back a few features himself to extend it even further. Jason did a great job presenting this – he was extremely clear, concise, and just an all around great speaker. This is the first time I had seen him speak, and I’ll definitely be seeking out his sessions in the future.


JavaScript: Pretty cool guy and doesn’t afraid of anything - Evan Booth

Well, I guess there has to be a bad apple in every bag. This talk was poor, at best. Evan tried to scatter some humor in his talk, and while it was entertaining at first, it quickly got old. He seemed to be reading his slides as he went, talked quickly, rarely made eye contact with the audience, and went completely off-topic into CSS as part of his presentation. I didn’t go to hear about CSS – I went to hear about Javascript. After it was over, he began showing videos of himself making weapons from items bought beyond the TSA security checkpoints at airports. He seemed to believe he was doing a good deed by doing these things, and commented that he does give the info to the TSA. My thoughts? Wrong location to be showing that stuff, bub. It was mildly entertaining at first, but it honestly seemed to make some people a little irritated that he would show that kind of information to the general public. Not a good idea. Fail.


I Didn’t Know JavaScript Could Do That! - David Hoerster

I had kind of forgotten what this session was about until after David actually began, but I’m glad I went! David seemed to be a little nervous at first, not sure if this was his first time presenting or not – and he even mentioned being nervous at one point. But, David, you did an awesome job. I enjoyed your talk very much. You got the audience participating with questions (most of which I got wrong, by the way), had humor woven into the talk, used Prezi (bonus points for that :) ), and you taught me stuff! I walked away from your talk with a better understanding of prototypal inheritance, which I had been trying to understand for some time. Nice job, David.


Overall, Stir Trek was bloody awesome, and I can’t wait to go next year. I would change a few things, but they are mainly minor. For instance, having a difficulty level on the talks so we can better gauge which to attend. Evan’s talk would have been Beginner, while Guy’s would have been Advanced. I would also like to see televisions scattered around monitoring the Stir Trek hashtag from Twitter. They had this at CodePaLOUsa, and I loved reading the comments while moving around. Oh, perhaps even do it on the big screens between sessions :) Honestly, that’s it. That’s all I would change. See? Told you it was minor stuff.

See you at Stir Trek 2014!

codepalousa, conferences, general, review

I had the opportunity this year to attend one of the Midwest's premiere community-run developer conferences, CodePaLOUsa, which was held in Louisville, KY on April 25th-27th.

The first day of the conference was all pre-compiler workshops, which I did not attend (and therefore cannot review).


April 26th

Keynote

The first day of the conference opened with a keynote by none other than Richard Campbell, from .NET Rocks!, one of the best podcasts available.

Richard did a fantastic job at the keynote, and had the audience rolling with all the jokes. He told a great story, but what really stuck with me was his endeavor to create software for humanitarian relief efforts through humanitariantoolbox.org. I already see myself getting involved with this at some point.


The Class That Knew Too Much – Matthew Groves

This session was on refactoring techniques and had a (brief) introduction/overview of aspect-oriented programming (or, AOP, for short). Matthew is local to me, and he’s given this talk many times all around me, but this was the first chance I’d had to attend one of them. Although I enjoyed the session, there were some technical difficulties that kept arising with the projector and connection. I don’t believe this was an issue with the presenter’s hardware, however, as I witnessed the same issue later on in the same room. Not only that, but the room was quite small and quickly overflowed. As a matter of fact, Matt had to give a second session the next day to accommodate the rest of the individuals. Overall, this is a fantastic session/talk, and Matt does a great job all around.


Deeper Dive into the Windows Phone 8 SDK – Michael Crump

This session was all about the new features in the WP8 SDK. Not having tried any WP development before, I was surprised to see some of the items in these new features. Surprised, because I would have expected some of them to already have been there. Michael presented quite well, but did run into some demo issues that were unable to be resolved during the session. He did, however, make them available via GitHub after the fact. The problem seemed to revolve around flaky internet connectivity, though that cannot be proven at this point I suppose. This room was a lot larger than the previous room, but attendance was relatively low and did not require a large room.


Secure Mobile Application Development – Jamie Ridgway

I hate to say it, but I did not enjoy this session. Jamie did a great job of gathering the information, and presenting it, but that’s all it was. His slides and talk were all based around the top 10 vulnerabilities for mobile applications by OWASP. I could have read that information myself. I would have liked to have seen a few demos scattered in that demonstrated some of the issues. The room held everyone well, and was a nice choice – though it did get rather cold.


Rails for the .NET Developer – Jamie Wright

Let me be clear. I am not a Ruby developer. I am a .NET developer. Why did I choose this session? I love to learn. I enjoyed the beginning of this session, but quickly got lost in the demos. Jamie did a side-by-side comparison of the same application being developed in both .NET and Ruby (Rails). He did things a little differently by recording his demos ahead of time, and discussing things while he played them back for us. This worked out okay, but there were a few issues. The first problem is that the video speed was increased – and for people new to Ruby/Rails, this made it difficult to follow at times, even with Jamie giving an overview. The second issue is that about half-way through, he stopped showing the .NET videos and only showed the Ruby videos. I suppose by that point we had an idea of what the application was supposed to do, but I would have liked to have the comparison. This was a smaller room, and was relatively full, but worked nicely. Jamie did have a technical issue or two with the projector, but luckily things got resolved.



April 27th

Keynote

This keynote presentation was given by Carl Franklin, the other half of .NET Rocks!. While I would love to review this keynote speech, Calvin slept in this morning.


All The Buzzwords: Backbone.js Over Rails In the Cloud – Jared Faris

I have to be careful what I say here, as Jared is one of my managers :)

Jared discussed a lot of his architecture choices while he ran the development at a local start-up for 1.5 years. The application was written in Ruby, and utilized quite a few frameworks and packages during development. Not being a Ruby developer, as previously mentioned, I enjoyed hearing about their trials, tribulations, and the many decisions that came up along the way. Jared put a lot of extra time into his slides, utilizing 8-bit style imagery throughout, which I loved. Jared’s talk was located in the same room where lunch and keynotes were held. Attendance, while pretty good, did not warrant that amount of space, and participation/questions from the attendees was minimal to none at best.


Everyone Sucks at Feedback – Chris Michel

I was actually not expecting this session, since it was in the middle of lunch. The presenter did a great job speaking, and used a lot of humor in his slides – which was a nice change. Honestly, I didn’t pay enough attention during this presentation (ummm…food!?) to warrant a full review, but I would definitely see him present again from what I did see.


Open Space

A few people, including myself, decided to skip this session and have an open-space discussion on confidence. There were, at one time, about 8-10 people present for this (sorry, I don’t remember everyone!). I found this discussion rather enlightening, as I definitely have a confidence problem in myself. It was good to hear that I’m not the only one, and it can definitely be overcome.


Build a Single-Page App with Ember.js and Sinatra – Chris Meadows

Chris did a great job on this presentation, showing one of the more ‘elusive’ javascript frameworks. While Sinatra was used, that was secondary to the main topic and was only used as the back-end. I haven’t had an opportunity (or need) to utilize Ember.js before, but after seeing Chris’s talk, I’m on the hunt for a project. He described the relationships between the views, controllers, models, and the router. The room was full, but worked out well.


An Introduction to Genetic Algorithms for Artificial Intelligence Using Rubywarrior – James McLellan

Woah. I didn’t realize what I was getting myself into by going to this one. This talk focused heavily on genes and genetic makeup – something I know nothing about. The only saving grace was that it was brought into focus by utilizing a Ruby application called RubyWarrior. This ‘game’ allowed you to utilize your own ‘genes’ (or classes that act as AI – i.e., walk, turn, etc.) You can then bundle these ‘genes’ to try and solve a level in the game. There was a lot of Ruby code involved, which I did expect given the title of the session. Overall, though, James’ presentation style was a little dry. The room was pretty full, though, and seemed to be a good match for the session.


Closing Session

We almost didn’t stay for the closing session, but I’m kind of glad we did. Carl Franklin took over again, asking attendees various development related questions – he inevitably gave away the answers – to which prizes were given away. Trust me, there were tons of prizes given away (we didn’t win anyway), and it was just a fun time.



Overall

Overall, I would say that CodePaLOUsa is a great conference. It’s run by intelligent people – by the community, for the community. My biggest complaints are actually minimal in the grand scheme of things. Some of the projectors and equipment seem to be finicky – they might be property of the hotel, too, I am unsure. Some of the rooms were a little cramped due to the partitioning walls of the hotel. I would have liked more signage as a first time attendee as well. Is it worth the $250? Yeah, I think so – enough that I’ll be attending next year!

See you at CodePaLOUsa 2014!

general, mvc, web-api

Alright, you have your MVC 4 website up and running, and you realize you need to add some WebAPI support – not necessarily for the website, but potential external consumers. Your site is configured using Areas for everything, so you decide it would be best to put the WebAPI layer in an Area as well. Makes sense, right? Right. You quickly find out that it isn’t just as simple as right-clicking, add new area, name it API, pat self on back, etc. That’s where this trick comes in.

Now, by default in an MVC 4 project, your Global.asax file calls out to another class to configure WebAPI. It will look something like this:

WebApiConfig.Register(GlobalConfiguration.Configuration);

Guess what? Comment that line out. The file this utilizes is in the App_Start directory, aptly named WebApiConfig.cs. You can leave it, or delete it. Your call.

Now, head over to your area, we need to make some routing changes.

Look for APIAreaRegistration.cs and open it up.

Bring in another namespace:

using System.Web.Http;

Now, you see that route down below? It needs two minor tweaks to work with WebAPI. Basically, change the method call from:

context.MapRoute(
        "API_default",
        "API/{controller}/{action}/{id}",
        new { action = "Index", id = UrlParameter.Optional }
    );

to

context.Routes.MapHttpRoute(
        "API_default",
        "API/{controller}/{id}",
        new { id = RouteParameter.Optional }
    );

In a nutshell, we changed the route to register an HttpRoute, and got rid of the {action} part of the route (Web API selects the action by HTTP verb instead).
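For reference, a controller inside the area might look like this minimal sketch (names here are hypothetical). With the route above, a GET request to /API/values would land on its Get method:

using System.Collections.Generic;
using System.Web.Http;

namespace MyApp.Areas.API.Controllers
{
    public class ValuesController : ApiController
    {
        // Web API picks the action by HTTP verb, which is why {action} was dropped from the route
        public IEnumerable<string> Get()
        {
            return new[] { "value1", "value2" };
        }
    }
}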

Boom. You’re done.

Keep in mind that this is THE WebAPI layer for your application – with the changes we’ve made, you can’t have any other WebAPI controllers outside of your area. If you find you need the ability for the Area and others, there are a couple methods that others have posted to make it work. I didn’t need anything like that, so this worked well for me.

endorsements, general, linkedin

Have you ever watched me sling some C#? No? Don’t Endorse Me.

Have you watched me ball out some jQuery and seen how awesome it was? No? Don’t Endorse Me.

Are we friends in life, connected on LinkedIn, and never actually worked together? Don’t Endorse Me.

Unless we have actually worked together and you can confirm my skills without hesitation, Don’t Endorse Me.

The probability that any of the endorsements on LinkedIn are actually valid is slim to none. Just because we know each other doesn’t mean you have to endorse me. At the same time, don’t expect me to endorse you for these ‘skills’ if I’ve never seen you in action.

LinkedIn needs to completely revamp how these endorsements are carried out. Until that time, Don’t Endorse Me.

general, nuget, powershell

Recently a need arose to have a few project-level items added to a project via a NuGet package. While this was no big deal, we needed the items marked as “Copy if Newer” for the “Copy to Output Directory” property, and couldn’t manage to find a way to set this from within the package. After a bit of research, we determined that an install.ps1 PowerShell script (run as part of the NuGet package installation) could access the project items and set their properties.

A script was written to handle the files added to the project:

param($installPath, $toolsPath, $package, $project)

$file1 = $project.ProjectItems.Item("FolderItem.exe")
$file2 = $project.ProjectItems.Item("FolderItem.exe.config")

$file1.Properties.Item("CopyToOutputDirectory").Value = [int]2
$file2.Properties.Item("CopyToOutputDirectory").Value = [int]2

Unfortunately, the script didn’t work. Why, you ask? Well, after some more digging, it turns out you can only access top-level items using the above syntax, so you have to chain the commands together to properly access the items:

param($installPath, $toolsPath, $package, $project)

$file1 = $project.ProjectItems.Item("Folder").ProjectItems.Item("Item.exe")
$file2 = $project.ProjectItems.Item("Folder").ProjectItems.Item("Item.exe.config")

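# CopyToOutputDirectory values: 0 = Do not copy, 1 = Copy always, 2 = Copy if newer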
$file1.Properties.Item("CopyToOutputDirectory").Value = [int]2
$file2.Properties.Item("CopyToOutputDirectory").Value = [int]2

.net, c#, entity-framework, general, mvc, tutorials

In this post, I’ll show you some of the basics on how to utilize Entity Framework 5.0’s “Code-First” features to develop a data access layer against an existing database.  I’ll be demonstrating these concepts with a new MVC 4 Web API application and SQL Server 2012, but I won’t be covering either of those in this tutorial.

While Code-First is a great paradigm for starting a new application from scratch, you can also use it to map back to an existing database with ease.

Let’s pretend we’re working with a very simplistic Twitter model, as shown below.

[Figure: the database schema]

Not a lot of meat here, a simple structure for Users and their Tweets.  Of course, the real Twitter model is more complex, but this will suffice for the purpose of this tutorial.

To demonstrate how to accomplish this, we’re going to create a new MVC 4 Web API application in Visual Studio 2012, using C# as our language.  Our database will be running in SQL Server 2012.

After launching Visual Studio, navigate to the FILE | New | Project dialog, select Web from the installed templates navigation section, select ASP.NET MVC 4 Web Application, give your project a name, and click OK.  I’m going to call mine “Tweeters”.

[Figure: the New Project dialog]

Once you’ve hit OK, another dialog will pop up (below), asking what kind of MVC 4 web application you would like to create.  Go ahead and choose “Web API” from the list, and press OK.

[Figure: the new MVC 4 project type dialog]

Once Visual Studio finishes creating the project, you should have a structure resembling the figure below:

[Figure: the new project structure]

By default, a new Web API project will have quite a number of files put in place for you.  For the most part, we’re going to leave them alone.

Visual Studio 2012 automatically pre-installs Entity Framework 5.0 for us, so we’re ready to start coding!  If you ever need to install it separately, however, you can find the package for download via NuGet.

The first thing I’m going to do is create two basic classes that represent our database tables.  These ‘models’ will be placed in the Models folder of your application:

[Figure: the new model classes]
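In code, the two models look roughly like this (a sketch - the exact property names are assumptions based on the schema above):

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Users
{
    [Key]
    public int UserId { get; set; }

    [Required]
    [MaxLength(50)]
    public string UserName { get; set; }

    // Navigation property - one User has many Tweets
    public virtual ICollection<Tweets> Tweets { get; set; }
}

public class Tweets
{
    [Key]
    public int TweetId { get; set; }

    [Required]
    [MaxLength(140)]
    public string Message { get; set; }

    public int UserId { get; set; }
}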

Let’s run through the code, so you can understand what’s going on.

If you notice the highlighted portions in the previous figure – these are attributes.  We are using them to define some of the constraints we need to put on our models.  Let’s go through each one in detail.

  1. [Key] – This tells EF that the property that directly follows is to be used as the Primary Key.
  2. [Required] – A value for this property must be supplied
  3. [MaxLength(x)] – Sets the maximum length of the string EF will accept when saving

One thing to note is the property at the bottom of the Users class.  You’ll notice we have an ICollection<Tweets> – this lets EF know how to navigate through the objects to, in this case, the children “Tweets” of “Users”.  Essentially we’re setting up the database relationship between these two objects.  So for the one-to-many relationship of Users & Tweets, we use a Collection.

If you refer back to our database model in the first figure, you’ll see that the “Required” and “MaxLength” attributes just replicate what our database will accept for these tables.

The next thing we have to do is create a database context.  A context tells EF what database to connect to, and what models it expects.

Simply create a new class file (I’m calling mine EntityContext), and make it inherit from “DbContext”.  This is a class provided by EF, so you’ll need to import the namespace to get access to it.

Once you have that, you’ll need a constructor.  Notice, in our constructor, that we call into the base constructor and pass it a string.  This is the name of the connection string in the web.config that we want EF to use when connecting to the database.

You’ll see that, in our constructor, we are setting the Initializer for our context to null with SetInitializer<>.  This is very important.  Since we’re connecting to an existing database, we don’t want EF to touch that database schema AT ALL.  That’s what this is doing; telling EF not to try any database initialization logic – just connect, and leave it alone.

The next thing you’ll notice are the public DbSet<> properties.  You want one of these for each model you want to interact with.  In our case, we have Users and Tweets.

[Figure: the EntityContext class]
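Put together, the context looks roughly like this (a sketch - I’m assuming “EntityContext” as the connection string name, which must match the web.config entry below):

using System.Data.Entity;

public class EntityContext : DbContext
{
    public EntityContext() : base("EntityContext")
    {
        // Existing database - tell EF to never run any schema initialization logic
        Database.SetInitializer<EntityContext>(null);
    }

    public DbSet<Users> Users { get; set; }
    public DbSet<Tweets> Tweets { get; set; }
}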

That’s all for the context. Pretty simple, huh?  Let’s go define our connection string in the web.config.

[Figure: the web.config connection string]
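Something along these lines (a sketch - the name must match the string passed to the base constructor; the server and database values are illustrative):

<connectionStrings>
  <add name="EntityContext"
       connectionString="Data Source=localhost;Initial Catalog=Tweeters;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>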

Your connection string may differ from mine slightly, but I’m just connecting to a running database on my local machine.

Now we can go work on the controller that will be responsible for serving up some data from our database.

The first thing I’ve done is rename the “ValuesController” to “UsersController” just to make a little more sense.  You don’t have to do this part if you don’t want to – it doesn’t really matter in the long run – just remember what you call it when we go to make a request to the method.

[Figure: the UsersController]
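A minimal sketch of what that controller ends up looking like:

using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

public class UsersController : ApiController
{
    private readonly EntityContext _context;

    public UsersController()
    {
        _context = new EntityContext();
    }

    // GET api/users
    public IEnumerable<Users> Get()
    {
        return _context.Users.ToList();
    }
}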

We start out by creating an EntityContext reference object and instantiating it in the constructor.  In the Get method, we simply call in the context and grab the Users collection.  (This is the DbSet<Users> we talked about in the context section)

This is all we need to do to start querying our database!  Press F5 to start debugging the project (hopefully you have no errors).  You should get a default webpage (below) discussing how to get started with a Web API project.  We’re not going to pay any attention to this since we just want to query our database through the API call.

[Figure: the default Web API landing page]

I’m going to fire up Fiddler to make a GET request to our API method.

[Figure: composing the GET request in Fiddler]

By default, the API is running under an “api” route, followed by the controller name.  So to issue a GET of our Users, we just need to construct the URL in Fiddler’s Composer tab, with a GET method.  Once you have that set up, hit “Execute”.

In the figure below, we’re now looking at the result of our http request.  You should hopefully get a result of 200.  If so, double click on that line to switch over to see the results of your query.

[Figure: the request results in Fiddler]

[Figure: the query results]

If all has gone well, you should see the result of your query, presented in JSON format!  Notice that we got back all of our Users, along with a collection of their Tweets.  You might be wondering why we got back the Tweets, when we only requested the Users – this is due to the properties we added to the Users class letting it know that it has a collection of Tweets beneath it.  Entity Framework is smart enough to work out the magic beneath the covers to populate the collections for us!  Awesome!

Conclusion

To recap, I showed you how to create a new ASP.NET MVC 4 Web API project that is backed by an existing database for querying through Entity Framework 5.0 Code-First.

Hopefully this tutorial has given you some basic insight as to the capabilities of Entity Framework 5.0 Code First.  I encourage you to keep digging into it, as this was only the tip of the iceberg!

.net, azure, blob-storage, c#, general

On a recent project, we had a need to integrate Azure blob storage with our web application (also hosted on Azure) to store images. What we found is that it’s so easy!

Let’s take a look at what we had to do (and how LITTLE code we had to write) to successfully store our images ‘in the cloud’.

The first thing you’ll need to do is log in to the management portal, create your storage account, and then create your container. I won’t go over how to accomplish these two steps, as they are fairly trivial, with plenty of walkthroughs out there already.

Now, to connect to your container, you’re going to need three pieces of information from the Azure website:

  1. AccountName – this is what you called the account when you initially set-up the storage account. Mine is called, ‘asdfasdfadsf’. Pretty memorable, huh?
  2. Container Name – this is just the name of the container you created in the second step. I called mine ‘blobs’.
  3. AccountKey – At the bottom of the management portal while looking at your storage account, there is an item called ‘Manage Keys’. Click it to open the dialog that will have your key. It will be a long string of random characters.

Aside from that, there is a connection string template that you’ll use the AccountName and the AccountKey in, and that is as follows:

DefaultEndpointsProtocol=https;AccountName={ACCOUNTNAME};AccountKey={ACCOUNTKEY}

With this information in hand, we can switch over to Visual Studio while I describe the remainder of the process. Please note, I’ll be using Visual Studio 2012 for this, but you can also use 2010.

You can create any type of project you want, but I’ve wrapped up my Azure blob storage logic into a separate Class Library for easy reuse.

For this project, you’ll need to download two packages from NuGet – “Windows Azure Configuration Manager” and “Windows Azure Storage”, as shown in the image below. Note that if you grab the Windows Azure Storage package first, it will pull in the other, as it directly depends on it.

[Figure: the NuGet packages]

Feel free to use the Package Manager console, with the following statement to circumvent the GUI:

install-package WindowsAzure.Storage

Once you have these packages in your project, add two using statements to the top of your class file.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

In my class, I’ve utilized the constructor, and four methods: Delete, Update, Create, and Get.

We need some class level variables to start with, so we’ll add:

private readonly CloudStorageAccount _storageAccount;
private readonly CloudBlobContainer _container;

Now let’s move on to the constructor, which will utilize the private class level variables we declared above

public AzureFileService()
{
    try {
        _storageAccount =
            CloudStorageAccount.Parse("<CONNECTION STRING>");

        CloudBlobClient blobClient = _storageAccount.CreateCloudBlobClient();

        _container = blobClient.GetContainerReference("<CONTAINER NAME>");

        _container.CreateIfNotExist();
    } catch (Exception)
    {
    }
}

Essentially, what this does is connect us up to the container for our Azure Storage account. If the container doesn’t exist – it creates it. (Or will try to; sometimes that fails.) I recommend storing your connection string and container name off in a config file somewhere, and fleshing out the Catch block for proper error handling.

Moving on to the methods, we’ll start with the Create:

public Guid Create(byte[] content)
{
    Guid blobAddressUri = Guid.NewGuid();

    CloudBlob blob = _container.GetBlobReference(blobAddressUri.ToString());

    blob.UploadByteArray(content);

    return blobAddressUri;
}

Our method takes a byte array as a parameter, which, in our case, holds the bytes of the image we want to store. Each ‘blob’ that gets stored in your container needs a unique name, declared as a string. For our purposes, we just used a GUID, which we store in a relational database table back in our application’s database. Now, there are additional properties on the CloudBlob object, such as the content type, which you can set to various things, like ‘application/octet-stream’ or ‘image/jpeg’. Azure will assume, by default, that it is ‘application/octet-stream’ – but it doesn’t really matter in the end as long as YOU know what the correct type is. All of ours are images, so we left the default.

Guess what? That’s all the code there is to storing an image in Azure Storage!

For the sake of completeness, however, I’ll show you the other methods, which are actually shorter than the Create method.

The ‘Get’ method is simple. Give me the name of the blob you want, and I’ll give you back a byte array:

public byte[] Get(Guid id)
{
    var blob = _container.GetBlobReference(id.ToString());
    return blob.DownloadByteArray();
}

For the ‘Delete’ method, we again only need the name of the blob (a GUID in our case), and call the Delete method on the blob object:

public void Delete(Guid imageId)
{
    CloudBlob blob = _container.GetBlobReference(imageId.ToString());

    blob.Delete();
}

The Update method needs a couple of parameters to be successful: the blob name of the item you want to update, plus the byte array of the new image. Granted, you could omit the Update method entirely and rely on the Delete and Create methods, but we wanted methods matching the HTTP verbs in a RESTful API.

public void Update(Guid imageId, byte[] content)
{
    CloudBlob blob = _container.GetBlobReference(imageId.ToString());

    blob.UploadByteArray(content);
}
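To tie it all together, calling code would look something like this (a hypothetical sketch):

var service = new AzureFileService();

// Read an image from disk and push it into the container
byte[] imageBytes = System.IO.File.ReadAllBytes(@"C:\temp\photo.jpg");
Guid imageId = service.Create(imageBytes);

// Later, fetch it back, replace it, or remove it by that same name
byte[] stored = service.Get(imageId);
service.Update(imageId, imageBytes);
service.Delete(imageId);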

That’s all the code I had to implement to make storing images in Azure Storage work. I was surprised at how little effort was required, and it makes me love Azure even more.

To be absolutely sure you have all the code correct, here is the entire class file:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

namespace AzureStorage
{
    public class AzureFileService
    {
        private readonly CloudStorageAccount _storageAccount;
        private readonly CloudBlobContainer _container;

        public AzureFileService()
        {
            try {
                _storageAccount =
                    CloudStorageAccount.Parse("<CONNECTION STRING>");

                CloudBlobClient blobClient = _storageAccount.CreateCloudBlobClient();

                _container = blobClient.GetContainerReference("<CONTAINER NAME>");

                _container.CreateIfNotExist();
            } catch (Exception)
            {
            }
        }

        public void Delete(Guid imageId)
        {
            CloudBlob blob = _container.GetBlobReference(imageId.ToString());

            blob.Delete();
        }

        public void Update(Guid imageId, byte[] content)
        {
            CloudBlob blob = _container.GetBlobReference(imageId.ToString());

            blob.UploadByteArray(content);
        }

        public Guid Create(byte[] content)
        {
            Guid blobAddressUri = Guid.NewGuid();

            CloudBlob blob = _container.GetBlobReference(blobAddressUri.ToString());

            blob.UploadByteArray(content);

            return blobAddressUri;
        }

        public byte[] Get(Guid id)
        {
            var blob = _container.GetBlobReference(id.ToString());
            return blob.DownloadByteArray();
        }
    }
}

All that beautiful functionality in less than 60 lines of code. I don’t know how you feel about it, but I think the Azure team is doing wonderful things.

.net, certification, general, mcpd

Upon taking my new job, I decided to start working towards getting my MCPD (Web Applications, .NET 4.0) certification.

As I haven’t been placed at a client site yet, I’ve had a lot of time on the bench to study for the first exam in the series, 70-515 – Web Applications Development with Microsoft .NET Framework 4.

I used different resources in my preparation for the exam:

  1. Bought/Read the Self-Paced Training Kit
  2. Utilized the practice exams on the CD
  3. Utilized Transcender practice exams
  4. Also took a shot at using uCertify practice exams/study material

I utilized all of these practice exams to have a wide range of questions thrown at me. To be completely honest, I took the test once and failed it while only utilizing one of those resources (Transcender). I will say, though, it seems as though the exams from the CD and uCertify were better resources than the Transcender exams.  I can’t argue the fact that after seeing the questions so many times you begin to memorize answers, but for me, it was a great learning experience overall in the .NET 4 world.

Now that this one is out of the way, it’s time to begin studying for the next one, 70-516 – Accessing Data with Microsoft .NET Framework 4.

Onward!

enum, extension, general, reflection, vb.net

I was recently working on a project where we were rewriting a legacy web service to use a WCF service instead. The client would connect, download some data, validate it against a database on the other side, and return a result indicating the outcome. We decided that a simple enum would suffice, and determined how to decorate the enum and its members properly for the web service.

It ended up looking like this with the proper attributes:

<DataContract()>
Public Enum eResultCodes
     <EnumMember()>
     ValidationSuccessful = 1

     <EnumMember()>
     ValidationFailed = -1
End Enum

While we wanted to use this on the client side to indicate BACK to the webservice what the result was, we soon realized that we had a problem. The problem is that the database records the error TEXT, not a value such as the enum. Oh! ToString(), of course! No, wait, it displays the value without spaces or punctuation, not very user friendly, in my opinion. Crap. What can we do? After a little research, we found that we could add some additional decoration to apply a string description to the enum values using the “Description” attribute, which resulted in our enum looking like this:

<DataContract()>
Public Enum eResultCodes
     <EnumMember()>
     <Description("Validation was successful")>
     ValidationSuccessful = 1

     <EnumMember()>
     <Description("Validation failed")>
     ValidationFailed = -1
End Enum

Alright, awesome! Now, how can we get that description back off of the enum member, to insert into the database? Reflection. But, I wanted to also try something a little different and make the function an extension method of the enum.

This would allow me to write a statement like this:

resultCode.getDescription()

resultCode, in this instance, is a property (of our enum type defined above) of a result object coming back to the webservice.

That’s great, Calvin, but show us the extension method already!

Okay, okay, here’s the code:

Imports System.Runtime.CompilerServices
Imports System.Reflection
Imports System.ComponentModel
Imports Services.Objects

Public Module Extensions
     <Extension()>
     Public Function getDescription(ByVal resultCode As eResultCodes) As String
          Dim name As String
          Dim type As Type

          type = resultCode.GetType()

          name = [Enum].GetName(type, resultCode)

          If Not name Is Nothing Then
               Dim field As FieldInfo = type.GetField(name)

               If Not field Is Nothing Then
                    Dim attr As DescriptionAttribute = CType(Attribute.GetCustomAttribute(field, GetType(DescriptionAttribute)), DescriptionAttribute)

                    If Not attr Is Nothing Then
                         Return attr.Description
                    End If
               End If
          End If

          Return Nothing
     End Function
End Module

Let’s discuss some of the code.

First, you can see that we have to decorate our function with the <Extension()> attribute. This basically says that the function listed below is an extension method of the first parameter in the function (an eResultCodes enum, in this case).

The rest of the code goes like this:

  1. Get the Type of the resultCode object
  2. Get the name of the resultCode from the Type (ValidationSuccessful or ValidationFailed from our example)
  3. If we got a valid name, get the FieldInfo from it. This is a collection of various information about the resultCode (again, ValidationSuccessful or ValidationFailed)
  4. If we got valid FieldInfo, we can get back to the Description attribute by getting a CustomAttribute from the field (we know it’s a Description), and then casting it to a DescriptionAttribute type
  5. If the attribute is valid, we’ve got the description

In a case where something is invalid, we simply return Nothing.

Here is an example of using this method:

Dim resultCode as eResultCodes

resultCode = eResultCodes.ValidationSuccessful

MessageBox.Show(resultCode.getDescription())

Put that behind a button on a form in the same project, and once clicked, you should get a message box that simply says “Validation was successful”.