When I want to find out which technologies a site is built on, I look at the HTTP headers first, then at the cookies. A combination of these usually gives me pretty detailed information about the underlying technology. Cookie names are especially revealing – search for any framework's default cookie name and you'll get a lot of information about the technology behind the site.

To hide yourself, you can rename the cookies from the standard names to something random. In ASP.NET Identity you can do that via the CookieName property on the CookieAuthenticationOptions class in configuration:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = DefaultAuthenticationTypes.ApplicationCookie,
    LoginPath = new PathString("/Account/Login"),
    Provider = new CookieAuthenticationProvider
    {
        OnValidateIdentity = SecurityStampValidator.OnValidateIdentity<UserManager, ApplicationUser>(
            validateInterval: TimeSpan.FromMinutes(0),
            regenerateIdentity: (manager, user) => manager.GenerateUserIdentityAsync(user))
    },
    CookieName = "jumpingjacks",
});

See the jumpingjacks string? That will be the cookie name when users log in. You can find the full project source code in my GitHub repository.

There are plenty of articles about how to deploy a solution to Azure Web Sites. I'll just leave this here for myself:

msbuild .\SolutionName.sln /p:DeployOnBuild=true /p:PublishProfile=ProfileName /p:Password=passwordForAzurePublishing /p:AllowUntrustedCertificate=true

This builds the solution according to a previously configured publish profile.

  • Good thing: all the configuration parameters are stored within the profile, where you can tell it to pre-compile the views and publish to the folder you need.

  • Bad thing: you can't specify where to deploy to, as that is specified in the publishing profile. But you can configure different profiles to publish to testing and to production.

  • Another very bad thing: you can't give it a pre-built web-deploy package; it builds the solution again every time you execute it. That breaks the rule of Continuous Integration and Deployment: “Build once, deploy everywhere”. Suppose you have a build server with a process that looks like this: Build => Run Unit Tests => Run Integration Tests => Deploy to Testing => Deploy to Production. If you use the line quoted above for deployment, you will be compiling your sources three times: once to run the tests, then once for every deployment. In some sense this is OK, because you are building from the same sources, but it is a waste of time. Building my application with pre-compiled views can take a while: one of the applications I work on has 800 views, and compiling all of them takes about 7 minutes. If I compile them once and re-use the result, I don't need to wait another 7 minutes to deploy to production. Well, OK, OK! MSBuild has some clever stuff in it and probably will not re-compile all the views if nothing has changed, so the second compilation will take less time, but it still bothers me (one workaround is sketched below). Hopefully the new ASP.NET vNext will do something to make my life easier here.
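One workaround is to build a Web Deploy package once and then push that same package to each environment with msdeploy.exe. I'm sketching this from memory, so treat the switches as approximate and check them against the Web Deploy documentation; the site name and paths below are placeholders:

msbuild .\SolutionName.sln /p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:PackageLocation=".\output\Site.zip"

msdeploy.exe -verb:sync -source:package=".\output\Site.zip" -dest:auto,computerName="https://sitename.scm.azurewebsites.net:443/msdeploy.axd?site=sitename",userName="$sitename",password="publishing-password" -allowUntrusted

The package is produced once by the build server, and every subsequent deployment just synchronises the same artifact.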

Last week I was updating one of the applications I work on to ASP.NET Identity. For specific reasons I could not deploy to Azure for a while. But I did run all the tests locally and everything worked just fine.

When I had mostly finished the Identity conversion, I finally managed to deploy the application to Azure Web Sites. And it worked fine… until I tried registering a user.

At that point I had an exception exploding in my face:

System.Security.Cryptography.CryptographicException: The data protection operation was unsuccessful. This may have been caused by not having the user profile loaded for the current thread's user context, which may be the case when the thread is impersonating.

A bit of digging online did not give me any results. Every single solution on Google was talking about Windows Identity Foundation, but I was not using it. I only had ASP.NET Identity.

After a bit more digging it turned out that my application could not generate a token for email confirmation. That was handled like this:

public class UserManager : UserManager<ApplicationUser>
{
    public UserManager() : base(new UserStore<ApplicationUser>(new MyDbContext()))
    {
        // this does not work on azure!!!
        var provider = new Microsoft.Owin.Security.DataProtection.DpapiDataProtectionProvider("ASP.NET IDENTITY");
        this.UserTokenProvider = new DataProtectorTokenProvider<ApplicationUser>(provider.Create("EmailConfirmation"))
        {
            TokenLifespan = TimeSpan.FromHours(24),
        };
    }
}

I had found this snippet somewhere online, but mostly had no idea what it did – cargo culting. My bad! And this very piece was causing the issues when run on Azure Web Sites.

When I replaced that snippet with a simpler one, everything started to work as expected:

public class UserManager : UserManager<ApplicationUser>
{
    public UserManager() : base(new UserStore<ApplicationUser>(new MyDbContext()))
    {
        this.UserTokenProvider = new EmailTokenProvider<ApplicationUser, string>();
    }
}

Basically, if you use ASP.NET Identity and get a CryptographicException when running your site on Microsoft Azure Web Sites, most likely the culprit is your token generation code. Replace it with the framework-provided EmailTokenProvider.
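Another option, if you do want a token provider backed by data protection, is the pattern from the standard ASP.NET Identity 2.0 template: take the data protection provider from the OWIN pipeline instead of constructing a DpapiDataProtectionProvider yourself. A sketch, with class names taken from the default template rather than my project:

public class ApplicationUserManager : UserManager<ApplicationUser>
{
    public ApplicationUserManager(IUserStore<ApplicationUser> store) : base(store)
    {
    }

    public static ApplicationUserManager Create(
        IdentityFactoryOptions<ApplicationUserManager> options, IOwinContext context)
    {
        var manager = new ApplicationUserManager(new UserStore<ApplicationUser>(new MyDbContext()));

        // the OWIN host supplies a machine-independent data protection provider,
        // so this also works on Azure, where the DPAPI user profile is not loaded
        var dataProtectionProvider = options.DataProtectionProvider;
        if (dataProtectionProvider != null)
        {
            manager.UserTokenProvider = new DataProtectorTokenProvider<ApplicationUser>(
                dataProtectionProvider.Create("ASP.NET Identity"));
        }

        return manager;
    }
}

The template wires this up in Startup via app.CreatePerOwinContext<ApplicationUserManager>(ApplicationUserManager.Create).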

Email testing is always a pain. One of the “OOOPS” moments I had with email was when I started resetting user passwords on a testing instance of the application. The application started sending out actual emails to users. People received “Your password reset link is here” emails pointing to the testing instance of the application.

At that point I started wondering how I could isolate emailing in production from emailing in dev and test environments. The production system must send real emails to real users. Testing and development installations must not send emails to users; instead, the emails should be collected somewhere. For a long time I had a very basic configuration: if we are not in the production environment, do not send emails at all. That worked fine for some time, but I had no way to check whether emails were sent out or not.

There are many fake SMTP servers for dev machines: you set up an SMTP server on your machine, point your SMTP credentials to this server, and all emails are collected on that fake server. That works if you are a one-man developer team. If you have a team of devs, every single one of them must install this server, including on their home machines. A bit too much work for my liking. Also, this approach does not work for a testing environment, where the application is deployed to a web server identical to production and actual users are having a go on the system. Where do you install the fake server now? How can you show email messages to the user without sending them?

Recently I came across Mailtrap.io. Mailtrap is a hosted fake SMTP server. When you send email to their SMTP server, it is not sent out anywhere; it is collected in an inbox and can be validated against. Because the service is hosted, you don't have to install anything, just re-point your SMTP credentials. And this works for developers on their local machines, for testers, and for customers checking out the testing environment.

Mailtrap has a decent free tier that allows me to have one fake inbox and one user to access the inbox. That is good enough for now. I'm not building an email-heavy application; the only emails I'm sending are for password resets and email confirmations. I've used the service for a month now and I'm pretty happy with the way it works. As soon as I need more than a few emails per day, I'll subscribe to their paid tier.

We use SendGrid for all our emailing needs and it works great for us: the official C# client is provided via NuGet, and emails are sent out via API requests, not via SMTP (we had issues with network ports in our firewall).

We isolate all email sending into an EmailService class which looks similar to this:

public class EmailService : IEmailService
{
    private readonly ITransport transportSmtp;

    public EmailService(ITransport transportSmtp)
    {
        this.transportSmtp = transportSmtp;
    }


    public Task SendEmail(MailAddress source, MailAddress destination, string htmlContent, string subject)
    {
        var message = new SendGridMessage();
        message.Html = htmlContent;
        message.From = source;
        message.To = new[] { destination };
        message.Subject = subject;

        return transportSmtp.DeliverAsync(message);
    }        
}

Note that an ITransport object is injected into this service. ITransport is SendGrid's interface for an email delivery service. It has two methods, Deliver(ISendGrid message) and DeliverAsync(ISendGrid message), and it does what it says on the tin – delivers email messages.

For a long time my DI container (Autofac) was configured to create the native SendGrid transport and pass it down to the EmailService class, and, if not in the production environment, to replace ITransport with a NullTransport that does not do anything:

public class SmtpModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Create an SMTP transport for sending email.
        builder.Register(c => GetTransport())
            .As<ITransport>()
            .InstancePerLifetimeScope();

        builder.RegisterType<EmailService>().As<IEmailService>();

        base.Load(builder);
    }


    public static ITransport GetTransport()
    {
        if (/* check if not production. Usually a flag in web.config */)
        {
            return new NullTransport();
        }
        var networkCredential = new NetworkCredential("username", "password");

        var transportWeb = new SendGrid.Web(networkCredential);

        return transportWeb;
    }
}    

public class NullTransport : ITransport
{
    public void Deliver(ISendGrid message)
    {
        // do nothing
    }

    public Task DeliverAsync(ISendGrid message)
    {
        // do nothing
        return Task.FromResult(0);
    }
}

Now with Mailtrap.io I can replace the NullTransport with a MailtrapTransport:

public class MailtrapTransport : ITransport
{
    public void Deliver(ISendGrid sendGridMessage)
    {
        var client = new SmtpClient
                     {
                         Host = "mailtrap.io",
                         Port = 2525, // check the port in your Mailtrap settings
                         Credentials = new NetworkCredential("user", "pass"),
                         EnableSsl = true,
                     };

        var mail = new MailMessage(sendGridMessage.From, sendGridMessage.To.First())
                   {
                       Subject = sendGridMessage.Subject,
                       Body = sendGridMessage.Html,
                       IsBodyHtml = true,
                   };

        client.Send(mail);
    }


    public Task DeliverAsync(ISendGrid message)
    {
        return Task.Factory.StartNew(() => Deliver(message));
    }
}
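The SmtpModule's GetTransport from earlier then just hands back the Mailtrap transport outside production – a minimal sketch, using the same placeholder check as above:

public static ITransport GetTransport()
{
    if (/* check if not production. Usually a flag in web.config */)
    {
        // dev and test environments deliver to the Mailtrap inbox
        return new MailtrapTransport();
    }

    // production delivers real email through SendGrid
    return new SendGrid.Web(new NetworkCredential("username", "password"));
}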

If you are not using DI, a similar result can be achieved directly in your EmailService:

    public void SendEmail(MailAddress source, MailAddress destination, string htmlContent, string subject)
    {
        var message = new SendGridMessage();
        message.Html = htmlContent;
        message.From = source;
        message.To = new[] { destination };
        message.Subject = subject;

        if(/*check if production */)
        {
            transportSmtp.Deliver(message);
            return;
        }
        // not in production, use mailtrap
        var client = new SmtpClient
                     {
                         Host = "mailtrap.io",
                         Port = 2525, // check port with Mailtrap settings
                         Credentials = new NetworkCredential("user", "pass"), // credentials for mailtrap inbox
                         EnableSsl = true,
                     };

        var mail = new MailMessage(message.From, message.To.First())
                   {
                       Subject = message.Subject,
                       Body = message.Html,
                       IsBodyHtml = true,
                   };

        client.Send(mail);
    }     

And now all my emails from dev and test environments are delivered to the Mailtrap inbox. Pretty cool, and I no longer have to think about where emails should go or whether I'm sending a test email to a real person.

Mailtrap provides a pretty elaborate API, and I've managed to build a basic user interface for the admin users of my application. That means my users are able to see and read outgoing emails without those emails reaching real people.

In addition to all this, you can write automated tests against their API to check whether sent-out emails were in the correct format. All my existing tests used mock objects to mock out ITransport, and I validated emails against the mocks. However, I can see the appeal of testing actual emails. I'll do that next time I need to validate the correctness of a sent-out email.

I've spent a few hours trying to figure out why my code did not work, and I have not found any explanation for the issue online, so I might as well write it down here.

In ASP.NET Identity there is a concept of user lockout. You can specify how many failed login attempts a user gets before the lockout kicks in and for how long the lockout lasts. This much is widely known from all the articles online. But what those articles don't say is that users can be opted in and opted out of this process.

If you look at the IdentityUser class, there are two fields that relate to lockout: LockoutEnabled and LockoutEndDateUtc. My first reaction was that LockoutEnabled says the user is locked out and LockoutEndDateUtc is the time when the lockout expires.

It turned out I was wrong. LockoutEnabled is a flag saying whether the user can be locked out in principle – i.e. an opt-in flag for locking. So if we have an admin user whom we never want to lock out, we set this flag to false. For the rest of the user base it should be set to true.

To check whether a user is locked out, use UserManager.IsLockedOutAsync(user.Id). The function UserManager.GetLockoutEnabledAsync() checks whether the user is opted in for lockout; it does not check whether the user is actually locked out.

As for the fields on IdentityUser – don't use them to detect whether a user is locked out, they are lies! Use the UserManager functions to detect user state.
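To make the distinction concrete, here is a small sketch using the Identity 2.0 UserManager members:

// configure the lockout policy on the UserManager
userManager.UserLockoutEnabledByDefault = true;                       // opt new users in
userManager.MaxFailedAccessAttemptsBeforeLockout = 5;
userManager.DefaultAccountLockoutTimeSpan = TimeSpan.FromMinutes(15);

// opt an admin out of lockout entirely
await userManager.SetLockoutEnabledAsync(adminUser.Id, false);

// "can this user be locked out?" vs "is this user locked out right now?"
bool optedIn = await userManager.GetLockoutEnabledAsync(user.Id);
bool lockedOutNow = await userManager.IsLockedOutAsync(user.Id);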

Hope this saves some people a bit of head-banging, because it caused me some stress!

Recently I migrated my project to ASP.NET Identity. One of the features I had in the project is “impersonation”: administrators could impersonate any other user in the system. This is a strange requirement, but the business behind the project wanted it.

This is how the old impersonation worked:

  1. When an admin wanted to impersonate, the system would serialise information about the admin account (mostly the username).
  2. Find the account for the impersonated user.
  3. Create a new authentication cookie for the impersonated user.
  4. Add the serialised information about the admin account to the cookie as data.
  5. Set the cookie.
  6. Redirect the admin to the client page.
  7. Bingo, the admin is logged in as a client user.

To de-impersonate, repeat the process in reverse: get the data about the admin from the cookie (if present), delete the cookie for the client user, and log the admin user in again. Bingo, the admin is logged in as an admin again.

Here is the article describing how the old way is implemented.

This exact code did not work with the Identity framework. I tried finding a solution online, but nothing was available. My question on Stack Overflow immediately got 4 up-votes, but no answers. So people are interested in doing this, but nobody has published any material on it. So here I am -)

Claims

The new Identity framework works with claims. A claim is a bit of information (think a string key-value pair) that is attached to a user. You can store a claim in the database, and it is restored every time you get a user from storage. Or you can assign claims to a user before signing them in. Later on you can easily check whether a user has a claim. Think of it like checking a dictionary of strings for a required key with a required value, or extracting a value from a dictionary by key. Probably this is clear as mud, but bear with me, I'll get to the code and it'll all become clear.

Impersonation

Getting an admin user logged in as somebody else is not a problem: delete the old auth cookie, create a new one for the other user, redirect. The problem is to detect that an admin is impersonating, and then to de-impersonate the admin without allowing other users to de-impersonate. So I create a new user identity and add extra claims to it, saying impersonation is going on and who is impersonating. Here is the code:

public async Task ImpersonateUserAsync(string userName)
{
    var context = HttpContext.Current;

    var originalUsername = context.User.Identity.Name;

    var impersonatedUser = await userManager.FindByNameAsync(userName);

    var impersonatedIdentity = await userManager.CreateIdentityAsync(impersonatedUser, DefaultAuthenticationTypes.ApplicationCookie);
    impersonatedIdentity.AddClaim(new Claim("UserImpersonation", "true"));
    impersonatedIdentity.AddClaim(new Claim("OriginalUsername", originalUsername));

    var authenticationManager = context.GetOwinContext().Authentication;
    authenticationManager.SignOut(DefaultAuthenticationTypes.ApplicationCookie);
    authenticationManager.SignIn(new AuthenticationProperties() { IsPersistent = false }, impersonatedIdentity);
}

This is the pretty standard way the Identity framework logs in users. Only here we add extra claims – one for “yes, impersonation is happening” and another for “the original username is admin”.

The first is used to detect whether we are impersonating; the second is used to de-impersonate the user back into admin rights.

To detect whether impersonation is happening, here is an extension method:

public static bool IsImpersonating(this ClaimsPrincipal principal)
{
    if (principal == null)
    {
        return false;
    }

    return principal.HasClaim("UserImpersonation", "true");
}

And this would be used like this:

if(ClaimsPrincipal.Current.IsImpersonating())
{
    // do my stuff for admins
}

To get the original username, use this extension method:

public static String GetOriginalUsername(this ClaimsPrincipal principal)
{
    if (principal == null)
    {
        return String.Empty;
    }

    if (!principal.IsImpersonating())
    {
        return String.Empty;
    }

    var originalUsernameClaim = principal.Claims.SingleOrDefault(c => c.Type == "OriginalUsername");

    if (originalUsernameClaim == null)
    {
        return String.Empty;
    }

    return originalUsernameClaim.Value;
}

And this is how de-impersonation happens:

public async Task RevertImpersonationAsync()
{
    var context = HttpContext.Current;

    if (!ClaimsPrincipal.Current.IsImpersonating())
    {
        throw new Exception("Unable to remove impersonation because there is no impersonation");
    }


    var originalUsername = ClaimsPrincipal.Current.GetOriginalUsername();

    var originalUser = await userManager.FindByNameAsync(originalUsername);

    var originalIdentity = await userManager.CreateIdentityAsync(originalUser, DefaultAuthenticationTypes.ApplicationCookie);
    var authenticationManager = context.GetOwinContext().Authentication;

    authenticationManager.SignOut(DefaultAuthenticationTypes.ApplicationCookie);
    authenticationManager.SignIn(new AuthenticationProperties() { IsPersistent = false }, originalIdentity);
}

That's the basics of my process. I have a few more checks and validations in place, mostly for null references. I also store the return url as a claim – the place where the admin started impersonation – so the admin can be redirected back to the original location afterwards. And userManager is an instance of UserManager<User> injected via the constructor; I skipped that for brevity.
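The return url, for example, is just one more claim added in ImpersonateUserAsync (the claim name and the returnUrl parameter here are illustrative, not from my production code):

// hypothetical extra claim carrying where the admin started from,
// so de-impersonation can redirect them back there
impersonatedIdentity.AddClaim(new Claim("OriginalReturnUrl", returnUrl));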

I’m pretty sure there is a better way to handle this. If you know it, please let me know in comments.

In the office we use StyleCop to make sure everyone writes code in the same style, and all of us have StyleCop installed coupled with ReSharper. So whenever there is a violation of a StyleCop rule, ReSharper highlights the issue with a squiggly line. And all our projects conform to our StyleCop settings.

But I do look at the code of many other projects – I have the source code for most of the dependencies we use, like Entity Framework, NLog, etc. And looking at other people's projects is painful because of StyleCop:

See these lines – they annoy me and just make the code unreadable. And there is no simple way to turn off StyleCop in R# or VS; you'll have to suck it up.

Or follow the instructions in this post: http://stylecop.codeplex.com/discussions/285902
Basically, people there say to create a Settings.StyleCop file with the following contents:

<StyleCopSettings Version="105">
  <GlobalSettings>
    <BooleanProperty Name="RulesEnabledByDefault">False</BooleanProperty>
  </GlobalSettings>
</StyleCopSettings>

and place it in the folder next to the *.sln file. This disables all the StyleCop rules for the solution.

For my own convenience, I'll place the file here: settings.StyleCop
You'll have to remove the .txt extension to make things work; otherwise your antivirus might not like the file.


TL;DR: Watch the video by Mark Seemann about homoiconicity in C#. Appreciate his sample implementation. Then go see my simplistic implementation that is actually used in production.

Homoiconicity is the ability of code to be manipulated as data, and of data to represent code. I'm not sure I completely understand the concept, so I'm not going to try to explain it here. Honestly, just watch the video by Mark Seemann about faking homoiconicity in C#. After the video, everything I talk about here will make much more sense!

In his talk, Mark describes an application that generated loan proposal documents, had complex logic, and had to output documents to a printer. For my project I had to generate a résumé in both Word and PDF formats. I inherited the project, so PDF generation was already done; I only needed to create the Word document. Easier said than done!

The problem was that PDF generation was tightly coupled to the PDF rendering engine and was in a bit of a “state”. When this part of the project was written, the developers did not know a better way to do it. And I could not imagine it being done in a neater way either. Until I saw Mark's talk.

The way my project was generating PDF documents worked fine, until I needed to produce identical documents in Word format. It would have been easier to start from Word and use some converter to PDF, but I did not have that luxury.

Data Structures

Following Mark's lead, I identified the basic parts of what I needed to represent in documents and grouped them into higher-level sections. This gave me a domain data structure that represents my document. Then I fed that structure into renderers, and they produced the actual files in the required formats.

My very basic structures turned out to be Paragraph, Table and Bulleted List. Each of these is a class implementing the IResumeElement marker interface. That interface does not have any methods; it only works as a marker so I can group elements into a collection.

public interface IResumeElement
{
    // represent a small element on a page, like a table or paragraph
}

My Paragraph has properties for text, alignment and font. Table has rows and cells. And Bulleted List consists of a collection of Paragraphs. I'm keeping things very simple here; my production code has table borders for every cell, cells have backgrounds, paragraph fonts can be given colours, and there are many other properties you'd expect from a proper document. There are two reasons I'm omitting the extra details: it makes the sample code easier to understand, and the original production code is not open source.
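For reference, simplified versions of the three element classes might look like this – consistent with how they are used later in this post, but much thinner than the production classes:

public class ResumeParagraph : IResumeElement
{
    public string Text { get; private set; }

    public ResumeParagraph(string text)
    {
        this.Text = text;
    }
}

public class ResumeTable : IResumeElement
{
    // each row is a list of cells; a cell here is just a paragraph
    public List<List<ResumeParagraph>> Rows { get; set; }
}

public class ResumeBulletedList : IResumeElement
{
    public List<ResumeParagraph> Paragraphs { get; set; }
}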

My higher-level structures in the document are sections: IResumeSection. A section of the document consists of elements:

public interface IResumeSection
{
    IEnumerable<IResumeElement> ProduceElements(ResumeData resumeData);
}

A section is a logical grouping of elements. I have a personal details section that presents name, contact details and address. Another example is the “Education” section – a description of all the education the résumé owner has had. Here is a basic implementation of ProduceElements:

public IEnumerable<IResumeElement> ProduceElements(ResumeData resumeData)
{
    yield return new ResumeParagraph("Sample text for demonstration");
    yield return new ResumeBulletedList()
                 {
                     Paragraphs = new List<ResumeParagraph>()
                                  {
                                      new ResumeParagraph("Bullet Point One"),
                                      new ResumeParagraph("Bullet Point Two"),
                                  }
                 };
}

There is a lot of the Composite pattern going on: a Resume is composed of sections; a ResumeSection is composed of elements. In other words, a résumé consists of sections; each section consists of elements.

Talking of composites, I do have a CompositeSection that consists of other sections:

public class CompositeSection : IResumeSection
{
    public List<IResumeSection> Sections { get; set; }

    public IEnumerable<IResumeElement> ProduceElements(ResumeData resumeData)
    {
        var result = new List<IResumeElement>();

        foreach (var section in Sections)
        {
            var sectionElements = section.ProduceElements(resumeData);
            result.AddRange(sectionElements);
        }

        return result;
    }
}

This section is used together with a ConditionalSection that takes a specification class (look up the Specification pattern):

public class ConditionalSection : IResumeSection
{
    public IResumeSection TruthSection { get; private set; }
    public IResumeSectionSpecification SectionSpecification { get; private set; }


    public ConditionalSection(IResumeSectionSpecification sectionSpecification, IResumeSection truthSection)
    {
        this.TruthSection = truthSection;
        this.SectionSpecification = sectionSpecification;
    }


    public IEnumerable<IResumeElement> ProduceElements(ResumeData resumeData)
    {
        if (SectionSpecification.IsSatisfiedBy(resumeData))
        {
            return TruthSection.ProduceElements(resumeData);
        }
        return Enumerable.Empty<IResumeElement>();
    }
}

The specification interface looks like this:

public interface IResumeSectionSpecification
{
    bool IsSatisfiedBy(ResumeData data);
}
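A concrete specification is then a one-method class. The one below is hypothetical – I don't know what the real TopSecretSpecification checks, so the ClearanceLevel property on ResumeData is invented for illustration:

public class TopSecretSpecification : IResumeSectionSpecification
{
    public bool IsSatisfiedBy(ResumeData data)
    {
        // invented property: satisfied only for top-secret résumé owners
        return data.ClearanceLevel == ClearanceLevel.TopSecret;
    }
}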

Rendering

Because I only have three different types of elements, my renderers only need to know how to handle these elements, and it is down to each renderer to correctly translate the domain structure into the required format. My renderers take a collection of sections, extract the elements from each section, and render each element separately. The RendererBase class looks like this:

public abstract class RendererBase
{
    public abstract MemoryStream CreateDocument(IEnumerable<IResumeSection> resumeSections, ResumeData data);
    protected abstract void RenderParagraph(ResumeParagraph resumeParagraph);
    protected abstract void RenderTable(ResumeTable resumeTable);
    protected abstract void RenderBulletedList(ResumeBulletedList bulletedList);


    protected void RenderElements(IEnumerable<IResumeSection> resumeSections, ResumeData data)
    {
        var elementRenderers = GetElementRenderers();

        foreach (var section in resumeSections)
        {
            var elements = section.ProduceElements(data);

            // do the rendering
            foreach (var element in elements)
            {
                var elementRenderer = elementRenderers[element.GetType()];
                elementRenderer.Invoke(element);
            }
        }
    }


    private Dictionary<Type, Action<IResumeElement>> GetElementRenderers()
    {
        var result = new Dictionary<Type, Action<IResumeElement>>()
         {
             { typeof(ResumeParagraph), (element) => RenderParagraph((ResumeParagraph)element) },
             { typeof(ResumeTable), (element) => RenderTable((ResumeTable)element) },
             { typeof(ResumeBulletedList), (element) => RenderBulletedList((ResumeBulletedList)element) }
         };
        return result;
    }
}

To be honest, only one public method is really required here: CreateDocument(). The other methods exist only to make sure that every renderer knows about all the types of elements. The RenderParagraph, RenderTable and RenderBulletedList methods should really be private and never visible to the outside world; this is just an easy way to force each renderer to be able to print every possible element. I don't know another easy way to do that – if you know how, please tell me, I'd like to know!

As Kristian Hellang rightfully suggested, the Template Method pattern is very applicable to the renderer base class. This way I make sure that all the renderers know how to deal with all the element types, and I also pull some logic out of the concrete renderers, like deciding which method renders which element.
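To make the Template Method point concrete, here is what a minimal renderer could look like. This is a plain-text stand-in of my own invention – the real renderers emit OpenXML and PDF primitives instead of strings:

public class PlainTextRenderer : RendererBase
{
    private readonly StringBuilder output = new StringBuilder();

    public override MemoryStream CreateDocument(IEnumerable<IResumeSection> resumeSections, ResumeData data)
    {
        // the base class walks the sections and dispatches to the Render* methods below
        RenderElements(resumeSections, data);
        return new MemoryStream(Encoding.UTF8.GetBytes(output.ToString()));
    }

    protected override void RenderParagraph(ResumeParagraph resumeParagraph)
    {
        output.AppendLine(resumeParagraph.Text);
    }

    protected override void RenderTable(ResumeTable resumeTable)
    {
        // a real implementation would walk the rows and cells here
        output.AppendLine("[table]");
    }

    protected override void RenderBulletedList(ResumeBulletedList bulletedList)
    {
        foreach (var paragraph in bulletedList.Paragraphs)
        {
            output.AppendLine("* " + paragraph.Text);
        }
    }
}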

I'll spare you the details of the real PDF and Word rendering – you can check it out in the sample solution on GitHub (see the link at the end of the article).

Testability

Every element is very small; every bigger element consists of small elements but is itself also small. The whole solution is very testable. I honestly have 100% test coverage on my production classes; even the renderers are covered – there are ways to validate whether the PDF and Word documents are generated correctly. You can find some details about testing OpenXML generation in my previous articles. I'm not including tests in the sample solution because I'm lazy: I've already spent two days putting the sample together – most of the time killing domain-specific stuff and removing a lot of code I don't want to share. So you'll have to trust me on this one -)

If you are really interested in testing the renderers, let me know and I'll try writing about it. Testing the domain models is very simple and not exciting in any way. You can see how Mark does his testing in his sample solution; I trust him to do a good job with TDD!

Homoiconicity

You might ask “where the hell is homoiconicity here?”. It is here:

public static List<IResumeSection> ComposeResume()
{
    return new List<IResumeSection>()
           {
                new PersonalDetailsSection(),
                new EducationSection(),
                new CertificationSection(),
                new EmploymentHistorySection(),

                new ConditionalSection(
                    new CitizenSecretSpecification(), 
                    new MembershipSection()),

                new ConditionalSection(
                    new TopSecretSpecification(), 
                    new CompositeSection()
                    {
                        Sections = new List<IResumeSection>()
                                   {
                                       new TopSecretSection(),
                                       new CitizenSecretSection(),
                                   }
                    }),
           };
}

A résumé is composed of little classes – sections. These classes can be serialized and persisted. Or a list of sections can be read from a database, composed into C# objects, and then fed into the renderers. Or a user can drag and drop sections into place; you then build a C# list of IResumeSection and feed that to a renderer – this gives each user a customised résumé. More than that, you can persist the order of sections from the drag'n'drop area and produce the same customised résumé for that user next time. Or do many other cool things…

Just for kicks, in my sample solution I compose a list of sections, serialize it with standard .NET binary serialization, then de-serialize it; then serialize it as JSON with Json.NET and deserialize it again; and then render the resulting sections into PDF and Word:

public static void Main(string[] args)
{
    //var resumeSections = ResumeComposer.ComposeBasicResume();
    var resumeSections = ResumeComposer.ComposeResumeForTopSecretAgents();

    ResumeSectionsToBinaryFormat(resumeSections, "ResumeSections.cv");

    List<IResumeSection> sectionsFromBinary = ReadResumeSectionsFromBinary("ResumeSections.cv");


    String json = SerializeAsJson(sectionsFromBinary);

    List<IResumeSection> sectionsFromJson = DeserializeJson(json);


    var resumeData = Data.JamesBond;
    CreatePdf(sectionsFromJson, resumeData);

    CreateWord(sectionsFromJson, resumeData);
}

I hope all this makes sense, and that next time I come across PDF generation code it will be nice and testable -)

The sample solution is on GitHub. Have a look through.

p.s. In my other project I'll need to create an interface to customise the résumé format for different users. Drag'n'drop with JSON persistence is what I'll be using. Maybe I'll write about it here…

Yesterday I discovered Package Explorer for opening, validating and editing OpenXML documents. Today I found the OpenXML SDK Tool.

The SDK Tool is great software from Microsoft for every developer working with OpenXML. It is similar to Package Explorer, but without the edit functionality, and the validation messages are not as good. In exchange you get instant access to documentation for any element you can imagine.
And the killer feature is CODE GENERATION! It generates C# code for any element available on the page.

You can create a Word document, open it in the SDK Tool, and it will produce a full C# listing that creates exactly the same document from your code. This is just fantastic! Using this tool today, I discovered how things are done in a simple way: add page numbers to an empty Word doc, open it in the SDK Tool, find the part about page numbers (probably the hardest part), copy the generated code, refactor slightly, and I had page numbers in my generated documents. It all took about 15 minutes and I did not use Google or any other documentation.
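For the page numbers, the generated listing boiled down to something like this – heavily trimmed by me, so treat it as a sketch; the real SDK Tool output is far more verbose:

// a footer paragraph containing a PAGE field
// (types from DocumentFormat.OpenXml.Wordprocessing)
var footerParagraph = new Paragraph(
    new Run(
        new SimpleField() { Instruction = " PAGE " }));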

Anyway, download the SDK Tool from here: http://www.microsoft.com/en-gb/download/details.aspx?id=30425. Select only the SDK Tool to download; you won't need anything else available on the page.

Once it is installed, find Open XML SDK Productivity Tool in your start menu and open a Word document.

You can select any element of the document and see its documentation, with all the available child node types:

Documentation

You can run validation of any element in the document:

Validation

And the promised killer feature – Generated code:

Generated code

Another quite useful function is document comparison. Yesterday I had to use it to find out how a particular feature was implemented. Now that you can generate code this may be less useful, but you can compare two Word documents and it will show you the differences in the underlying XML.