My current setup for one of my projects requires a VM running in Azure – a tiny build server I set up just for myself. I only need it when I’m actually working on the project, so the idea is to shut the VM down whenever I’m not. The fear was that one day I’d forget to shut it down and be billed for time I had not used.

So I’ve created a tiny script in PowerShell:

# Get your publishsettings file here: https://manage.windowsazure.com/publishsettings/
Import-AzurePublishSettingsFile "c:\Path\to\subscription\file.publishsettings"

Select-AzureSubscription -SubscriptionName "MySubscriptionName"

Stop-AzureVM -Name myVmName -ServiceName myVmName -Force

NB: you need the -Force parameter, otherwise the script will prompt for Y/N and require user intervention.
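As a small refinement (not part of my original script), you could check the VM state first and only call Stop-AzureVM when it is actually running. A minimal sketch, assuming the same classic Azure PowerShell module that provides Stop-AzureVM; the exact PowerState values may vary between module versions:

$vm = Get-AzureVM -ServiceName "myVmName" -Name "myVmName"
# Only issue the shutdown if the VM exists and is not already stopped
if ($vm -and $vm.PowerState -ne "Stopped") {
    Stop-AzureVM -Name $vm.Name -ServiceName $vm.ServiceName -Force
}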

And then I hooked the script into the shutdown sequence of my work PC:

  1. Run gpedit.msc
  2. Go to Computer Configuration -> Windows Settings -> Scripts -> Shutdown
     (screenshot: Local Group Policy Editor)
  3. Open the Shutdown option and switch to the PowerShell tab; add your script there.
  4. PROFIT!

Now your VM will be shut down when you shut down your PC.

An alternative was to have a free web site running on Azure and deploy there a WebJob that constantly polls your PC to see whether it is on, and shuts the VM down when the PC goes offline. But that sounds like a lot of work, and you’d have to expose some sort of publicly available endpoint on your PC. So nothing fancy!

I’m an avid user of StackOverflow for questions about Asp.Net Identity, and I attempt to answer most of the interesting ones. The same question comes up quite often: users try to confirm their email via a confirmation link or reset their password via a reset link, and in both cases get an “Invalid Token” error.

There are a few possible solutions to this problem.


TL;DR: Go here for a code sample.

This is a long overdue post – I’ve been too busy these days with contract work and other commitments.

By the nature of my work, I have to do data migrations on a regular basis. Some of the migrations are from one database into another, but most are from Excel spreadsheets into a database.
A database-to-database migration is somewhat nerve-racking – usually the structure is broken, the data is dirty, fields are re-purposed, and all the other sorts of terrible things you read about on The Daily WTF. The best I’ve seen so far are along the lines of a person’s middle name stored in a field called CompanyName2, employee information stored in a table called tbl_Jobs and info about the actual jobs in a table called tbl_Job – see the difference here? That is all to avoid confusion!

Database-to-database migrations are each unique in their own way. But I have found a lot of similarities in the process of exporting from Excel into SQL Server. So I’ll write these things down here – for my own future reference, and you, my dear reader, might find them useful as well.


Today I came across a somewhat annoying bug. On one of the sites I maintain, users could not download files anymore after HTTPS was enabled. The annoying part was that I could, with all the browsers. They could not with IE9. It turned out the issue occurred only on the internal network and only over HTTPS: the same file downloaded fine from the same address over HTTP, but when you went to the same URL over HTTPS, this happened:

(screenshot: “Unable to download from HTTPS” error in IE)

It turned out that some domain policy prohibited storing encrypted files to disk during browsing. IE has a setting, Do not save encrypted pages to disk (in Tools -> Internet Options -> Advanced), that has caused other people to complain. However, in my situation this setting was not turned on, which was strange. I decided to give a recommended trick with headers a go, and it worked:

I modified the Cache-Control header to be no-store, no-cache and the problem went away. Strange!
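For reference, one way to add such a header to an IIS site is via the WebAdministration PowerShell module. This is just a sketch – the site name is a placeholder and I’m not claiming this is exactly how I changed it here; depending on how your application manages caching, a web.config change may be a better fit:

Import-Module WebAdministration
# Add a Cache-Control: no-store, no-cache response header for the site ("MySite" is a placeholder)
Add-WebConfigurationProperty -PSPath "IIS:\Sites\MySite" `
    -Filter "system.webServer/httpProtocol/customHeaders" `
    -Name "." -Value @{ name = "Cache-Control"; value = "no-store, no-cache" }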

Sometimes I have to configure sites that are not actually in my control – I only host and administer them, while site development and updates are done by somebody else. Or it could be a PHP site (yuck!) hosted on IIS.

Today I had to enable a bunch of sites to be served over HTTPS; all these sites sit on one IP, but under different subdomains. Unfortunately, the IIS UI does not let you configure a host header for HTTPS bindings, so you have to drop down to the command line:

appcmd set site /site.name:"<SiteName>" /+bindings.[protocol='https',bindingInformation='*:443:subdomain.domain.com']
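With several subdomains to bind, a small PowerShell loop around the same appcmd call saves some typing. A sketch only – the site names and host headers below are placeholders:

$appcmd = "$env:SystemRoot\system32\inetsrv\appcmd.exe"
# Placeholder site name -> host header pairs; substitute your own
$bindings = @{ "Site1" = "one.domain.com"; "Site2" = "two.domain.com" }
foreach ($pair in $bindings.GetEnumerator()) {
    & $appcmd set site "/site.name:$($pair.Key)" "/+bindings.[protocol='https',bindingInformation='*:443:$($pair.Value)']"
}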

And then in web.config add a redirect to HTTPS, but make sure URL Rewrite is installed on the server (check whether the %SystemRoot%\system32\inetsrv\rewrite.dll file is present):

<configuration>
    <system.webServer>
        <rewrite>
            <rules>
                <clear />
                <rule name="Redirect to https" stopProcessing="true">
                    <match url="(.*)" />
                    <conditions>
                        <add input="{HTTPS}" pattern="off" ignoreCase="true" />
                    </conditions>
                    <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" redirectType="Permanent" appendQueryString="false" />
                </rule>
            </rules>
        </rewrite>
    </system.webServer>
</configuration>

And add the STS (Strict-Transport-Security) header to all responses:

<configuration>
    <system.webServer>
        <httpProtocol>
          <customHeaders>
            <remove name="X-Powered-By"/>
            <add name="Strict-Transport-Security" value="max-age=31536000"/>
          </customHeaders>
        </httpProtocol>
    </system.webServer>
</configuration>

Just writing down the steps for myself, so I don’t have to search for this stuff all over again in 6 months’ time.

As far as I know, the next version of ASP.Net is going to abandon MSBuild and build projects differently. Can’t wait for this to happen!

But for now we are still stuck with the horribleness of XML and MSBuild (did I tell you I hate MSBuild with a passion?).

When I start a new MVC project, I add extra bits of MSBuild script into the *.csproj file.

Missing Content files

The first addition is to fail the build if any *.js or *.css files are missing from the project folder. This has happened before – somebody on the team forgets to check in a new file, site.css, and the build server stays silent about it. Only when somebody gets the latest version of the project, builds it, looks at it in a web browser and notices that the styles are not right does the digging start.

So here is the snippet to add to your project:

<Target Name="ValidateContentFiles">
    <Error Condition="!Exists(%(Content.FullPath))" Text="Missing Content file [%(Content.FullPath)]" />
</Target>

Simples – it issues an error if any of the “Content” files are not on the hard drive.

*.cshtml files marked as “None” for Build Action

This has happened to me a few times now: to create a new view, you copy-paste an existing file, rename it and change the internals. This is much faster than asking Visual Studio to create a new View file for you. The problem is that files created this way are sometimes marked as “None” for Build Action (right-click the file, Properties -> Build Action). This is a very subtle change and is not discoverable at the development stage. But when you deploy the application to a server, files marked as None are not included in the package, so your server is now missing these view files. And only when somebody navigates to a missing page do you get a nasty exception in your face (hopefully your face, not your customer’s).

To avoid this, StackOverflow actually provided me with a very good answer (for a change).
So to fail the build when a *.cshtml file is marked as None, add this snippet to your .csproj file:

<Target Name="EnsureContentOnViews" BeforeTargets="BeforeBuild">
    <ItemGroup>
        <Filtered Include="@(None)" Condition="'%(Extension)' == '.cshtml'" />
    </ItemGroup>
    <Error Condition="'@(Filtered)'!=''" Code="CSHTML" File="$(MSBuildProjectDirectory)\%(Filtered.Identity)" Text="View is not set to [BuildAction:Content]" />
</Target>

I wish these were the default settings in the project template.

Some users still live in the era of Windows 95, where you had to double-click to make something happen. And this double-clicking stays with them when they go online. So they double-click on links, double-click on submit buttons… Wait, did you just double-click that button and submit the form twice? OOPS!

There is a lot of advice online on how to stop this from happening, but I have not found a complete solution. Here is my stab at it (using jQuery):

$('input:submit').click(function () {
    var button = this;
    var oldValue = button.value;
    // Disable asynchronously, so the click still submits the form first
    setTimeout(function() {
        button.disabled = true;
        button.value = 'Working...';
        // Re-enable shortly after, in case client-side validation stopped the submit
        setTimeout(function() {
            button.disabled = false;
            button.value = oldValue;
        }, 500);
    }, 0);
});

What are we doing here? It seems like too much code for such a simple task. A lot of advice points to e.preventDefault(), but that stops the form submission. Some people recommend calling form.submit() after preventing the default action. That smells. What if I did not want to submit the form, but some other event was triggered on the click? And if you simply disable the button in the click handler, your form will not be submitted, because disabled HTML elements are not submitted with the form.

In this snippet I’m delaying the disabling of the button. This gives other events a chance to fire – I don’t need to submit the form manually. And inside that timeout I’m setting up another delayed function to re-enable the button. Because in an era where JavaScript is executed on servers (node.js, I’m looking at you!), it is just plain wrong not to do validation on the client as well. So in case the validation has failed, I re-enable the button so the user can re-submit.

I’ve tried this in the latest (17 Dec 2014) Chrome and in IE9; in both cases it works as expected. Let me know how this can be improved!

I’m a frequent user of RDP, and sometimes you get stuck within a connection window and can’t get unstuck because the bar at the top just does not appear. To make the bar come down, hit CTRL + ALT + HOME; to break free from full-screen mode, hit CTRL + ALT + BREAK.

I’ll leave this here for my future reference.

The title is quite vague, but it’s the best name I could come up with for this post. Bear with me, I’ll explain what I mean.

While working with Asp.Net MVC, I use Html helpers in Razor views a lot. My views end up looking like this:

@Html.TextBoxFor(m => m.Title)
@Html.TextAreaFor(m => m.Description)

And later if I write JavaScript and need to reference one of the input fields, I usually do this:

var title = document.getElementById("Title");
var description = document.getElementById("Description");

or, if you use jQuery, like this:

var title = $("#Title");
var description = $("#Description");

This is fine and works – until you start renaming fields in your model. Then you need to track down every place where you hard-coded the input id, and many times these references slip away (they do from me!) and remain unchanged.

Would it not be great if these changes could be picked up automatically? Or, even better, to have some sort of strongly-typed reference to the element when it is used in JavaScript?

Just use @Html.IdFor(m => m.Name)

And your JS code would look like this:

var title = document.getElementById("@Html.IdFor(m => m.Title)");
var description = $("#@Html.IdFor(m => m.Description)");

Below I’ve re-invented the wheel. [Facepalm]

So I trawled through the MVC source code and figured out how it generates ids for elements; here are my findings:

using System;
using System.Linq.Expressions;
using System.Web.Mvc;

public static class HtmlHelpers
{
    public static String ElementId<T, TResult>(this HtmlHelper<T> htmlHelper, Expression<Func<T, TResult>> selector)
    {
        // The same steps MVC itself performs when generating an id for an input element:
        var text = System.Web.Mvc.ExpressionHelper.GetExpressionText(selector);
        var fullName = htmlHelper.ViewContext.ViewData.TemplateInfo.GetFullHtmlFieldName(text);
        var id = TagBuilder.CreateSanitizedId(fullName);

        return id;
    }
}

As you can see, I have not written any custom logic; all the components are from MVC and are applied in the same order MVC uses when it generates an id for an element in @Html.EditorFor(). So the id returned by this function is guaranteed to match the id in your form.

In the view you use it like this:

@Html.ElementId(m => m.Title)

And your JS code would look like this:

var title = document.getElementById("@Html.ElementId(m => m.Title)");
var description = document.getElementById("@Html.ElementId(m => m.Description)");

A bit of a mouthful, but it gives you the safety net of strong typing. I have not actually used it widely, so I’m not sure how it’ll work out in the longer run.

A note to myself for the next time I try to do view compilation in MSBuild and see the exception

Can’t find the valid AspnetCompilerPath

I should either install VS2013 on my build server, or choose a different version of msbuild.exe:
C:\Program Files (x86)\MSBuild\12.0\Bin\msbuild.exe

The problem is caused by a version mismatch. The new msbuild is part of Visual Studio 2013 and not part of the .Net framework itself, and by default your PATH variable points to the .Net folders where the older msbuild lives. So in your scripts you need to explicitly specify which msbuild to use.
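Something along these lines in a build script pins the build to the VS2013 copy of MSBuild (the project path is just an example):

# Use the MSBuild that ships with VS2013 explicitly, rather than the one found via PATH
$msbuild = "${env:ProgramFiles(x86)}\MSBuild\12.0\Bin\msbuild.exe"
& $msbuild "C:\Projects\MyWebSite\MyWebSite.csproj" /p:Configuration=Release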