Simple Console Application in .NET Core with DI and Configuration

While the .NET Core documentation and libraries do a good job of providing an easy way to get started with hosted apps (web or otherwise), they offer little of the same guidance for simple run-to-completion console apps. You can write a simple Main() method and do your stuff, but how do you get the advantage of the amazing configuration and dependency injection that you get out of the box with hosted apps? Sure, you could set up all that machinery yourself, or create an IHostedService implementation just to get going. Even then, you are left with a hosted app that you have to shut down after your logic is done.

If you look behind the builder methods that come out of the box with hosted apps, you will find that there is an easy way to get the good DI and configuration stuff and keep your simple Main() method app simple. To that end, I’ve written up a little ProgramRunner class that you can plop in and use like so (example .NET Core 3.1 code follows):

public class ProgramOptions
{
    public string SomeOption { get; set; }
}

public class Program
{
    private readonly ProgramOptions _programOptions;

    public Program(IOptions<ProgramOptions> programOptions) => _programOptions = programOptions.Value;

    public static void Main(string[] args)
    {
        ProgramRunner
            .WithConfiguration(c => c.AddJsonFile("appsettings.json"))
            .AndServices((s, c) => s
                .Configure<ProgramOptions>(o => c.Bind(o))
                .AddSingleton<Program>())
            .Run(p => p.GetService<Program>().Run());
    }

    public void Run() => Console.WriteLine(_programOptions.SomeOption);
}

Easy.

Here is the code for ProgramRunner referenced above (perhaps I will put it into a library at some point):

public class ProgramRunner
{
    private Action<IConfigurationBuilder> _configurationBuilderAction;
    private Action<IServiceCollection, IConfiguration> _serviceCollectionAction;

    private ProgramRunner() { }

    public ProgramRunner AndConfiguration(Action<IConfigurationBuilder> configurationBuilderAction)
    {
        _configurationBuilderAction = configurationBuilderAction;
        return this;
    }

    public ProgramRunner AndServices(Action<IServiceCollection, IConfiguration> serviceCollectionAction)
    {
        _serviceCollectionAction = serviceCollectionAction;
        return this;
    }

    public static ProgramRunner WithConfiguration(Action<IConfigurationBuilder> configurationBuilderAction) => new ProgramRunner().AndConfiguration(configurationBuilderAction);
    public static ProgramRunner WithServices(Action<IServiceCollection, IConfiguration> serviceCollectionAction) => new ProgramRunner().AndServices(serviceCollectionAction);

    private IServiceProvider GetServiceProvider()
    {
        var configurationBuilder = new ConfigurationBuilder();
        _configurationBuilderAction?.Invoke(configurationBuilder);
        
        var configuration = configurationBuilder.Build();

        var serviceCollection = new ServiceCollection();
        _serviceCollectionAction?.Invoke(serviceCollection, configuration);
        
        return serviceCollection.BuildServiceProvider();
    }

    public void Run(Action<IServiceProvider> runAction) => runAction(GetServiceProvider());
    
    public T Run<T>(Func<IServiceProvider, T> runAction) => runAction(GetServiceProvider());
    
    public Task RunAsync(Func<IServiceProvider, CancellationToken, Task> runAction, CancellationToken cancellationToken = default) => runAction(GetServiceProvider(), cancellationToken);
    
    public Task<T> RunAsync<T>(Func<IServiceProvider, CancellationToken, Task<T>> runAction, CancellationToken cancellationToken = default) => runAction(GetServiceProvider(), cancellationToken);
}
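Nothing about this is tied to synchronous code either – the RunAsync overloads let you await your way through. Here is a minimal sketch of the async flavor, assuming a hypothetical RunAsync(CancellationToken) method on the Program class above (not shown earlier):

public static async Task Main(string[] args) =>
    await ProgramRunner
        .WithConfiguration(c => c.AddJsonFile("appsettings.json"))
        .AndServices((s, c) => s
            .Configure<ProgramOptions>(o => c.Bind(o))
            .AddSingleton<Program>())
        .RunAsync((p, ct) => p.GetService<Program>().RunAsync(ct));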

And there you have it.

Published originally at aashishkoirala.com.

An AWS Primer for Azure Developers

Even though AWS has been around for much longer, as is the norm for a lot of people coming from the .NET/Microsoft side of things, my cloud experience started with Azure. I got into AWS when I was a few years into Azure. I remember thinking at that point it would be nice to have something like this primer that would give me a very high-level introduction to AWS based on what I knew of Azure. So here it is.

Obviously, everything can’t be covered. I’ve kept this very high level so you have a starting point based on your needs, and the services covered are geared towards general-purpose application development rather than specialized cases like machine learning, IoT, big data or data warehousing.

Here is what I cover:

  • Accounts, Subscriptions & Resources
  • Administration
  • Security
  • Provisioning and Infrastructure as Code (IaC)
  • Infrastructure as a Service (IaaS) and Networking
  • Platform as a Service (PaaS)- includes Applications and Serverless, Containers and Kubernetes, Database, Cache, Messaging, Storage, Observability, and SCM/CI/CD.

Accounts, Subscriptions & Resources

The major difference here is the concept of Subscriptions in Azure, which does not exist in AWS. When you create an account in AWS, that is already at the level of what you would call a subscription in Azure. So, if you are an organization that has multiple subscriptions in Azure, the corresponding experience in AWS would be to just have multiple accounts. To ease the management overhead this would cause, AWS has an offering called Organizations.

The other difference is the concept of Resource Groups in Azure which, again, does not exist in AWS (or rather exists, but in a very different way). Whereas resource groups are a first-class deployment-level construct in Azure, AWS resource groups are just a tagging mechanism. Locations in Azure correspond to Regions in AWS as you would expect – but the role that resource groups play in resource structuring in Azure is for the most part played by AWS regions.

Resources in Azure get a Resource Identifier; the equivalent concept in AWS is the ARN.
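To make that concrete (names made up): an Azure resource ID spells out the subscription, resource group, provider and resource name – for example, /subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorageaccount – whereas an ARN follows the pattern arn:<partition>:<service>:<region>:<account-id>:<resource>, for example arn:aws:s3:::my-bucket (some services, like S3, leave the region and account segments empty).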

Administration

While there are multiple modes of administration for both, the common ones are a CLI and a web-based GUI. The CLIs operate similarly – you log in to get a context, and then all your operations run against that context. The account/subscription/resource group/region difference mentioned above plays into this as well. Practically, though, the Azure CLI and the AWS CLI are pretty similar in how they work.

The major difference can be seen in the web GUI experience. With Azure Portal, you pick an account and then within that account, you get the blade-based UI that shows you all resources/resource groups in one place, and as you get deeper into various settings, the blades pile on – but for the most part it’s one unified UI. As of this writing, it has Dark Mode support whereas AWS Console does not.

AWS Console is much more segmented – you first sign in to an account, then you pick a region, then you pick a service, and a standalone console opens up specific to only that service, with a UI tailored for it. Since the web GUI is the most likely landing spot when you initially get started with either Azure or AWS, this segmentation causes some disorientation. It has been a long-standing complaint against AWS Console – the lack of a way to look at all your resources in one place regardless of service type or account. On the other hand, once you start getting deep into a specific service’s configuration, some think the Azure blade UI has a tendency to get “out of hand” – so there are pros and cons on both sides.

Security

The repository that drives security and access control in Azure is Azure Active Directory (AAD). All role-based access control (RBAC) is driven off AAD. On AWS, this role is served by Identity and Access Management (IAM). Both have similar security constructs in the way of users, groups, roles, policies and permissions. There are significant differences in implementation and in how they apply to resource access, both for administration and for consumption – so this is a fundamental aspect of both platforms that is worth getting to know in depth before jumping in.

Provisioning and Infrastructure as Code (IaC)

Of course, we would all rather be using Terraform or Pulumi – but that may not always suit your needs, in which case you do have to deal with the native IaC system provided by each. The counterpart to Azure Resource Manager in AWS is CloudFormation. Just as you author ARM templates in Azure, you author CloudFormation templates in AWS (a deployed instance of a template is called a stack). CloudFormation supports both JSON and YAML, whereas ARM only supports JSON.

Infrastructure as a Service (IaaS) and Networking

Azure VMs and VM Scale Sets allow you to provision and scale VMs on the cloud. AWS does this via Elastic Compute Cloud (EC2). You can define virtual networks and subnets where your VMs (as well as some PaaS services) run – in Azure with Azure VNet, and in AWS with Virtual Private Cloud (VPC). Both Azure and AWS provide DNS and DNS-based traffic routing services (Azure with Azure DNS and Traffic Manager, and AWS with Route 53). Load balancing is available on Azure with Load Balancer and on AWS with Elastic Load Balancing. The corresponding service on AWS for Azure’s Front Door is Global Accelerator. Both Azure and AWS have CDN services – Azure CDN and Amazon CloudFront, respectively.

Platform as a Service (PaaS)

If you stick to IaaS, once you have set up your VM and networking, a VM is a VM is a VM. Not much else to it – it gets more interesting when you get into PaaS with all the different services and offerings and, of course, fancy names (more of those on the AWS side). Again, keeping with the “just general-purpose application development” theme, I’ve covered the following “service categories” if you will:

  • Applications and Serverless
  • Containers and Kubernetes
  • Database
  • Cache
  • Messaging and Event Processing
  • Storage and Secrets
  • Observability
  • SCM and CI/CD

Applications and Serverless

If you want to get an application (or a bunch of them) up and running without worrying about infrastructure, you would use App Service on Azure. This purpose is served by Elastic Beanstalk in AWS. Both support all the major mainstream languages. To define and orchestrate workflows, whereas you would use Logic Apps on Azure, you could use Simple Workflow Service (SWF) on AWS. Fully serverless, spin-up-on-demand-via-triggers compute is available on Azure as Functions and on AWS as Lambda. You can string up a bunch of Lambdas to implement workflows on AWS using Step Functions.

Containers and Kubernetes

Serverless container management is available in Azure with Container Instances. Between Elastic Container Service (ECS) and Fargate, AWS has this covered. Both have their versions of a container registry- Azure with Azure Container Registry (ACR) and AWS with Elastic Container Registry (ECR). If you want to be on Kubernetes, Azure has a managed offering with Azure Kubernetes Service (AKS). The corresponding AWS service is Elastic Kubernetes Service (EKS).

Database

Azure supports SQL Server, MySQL and PostgreSQL all as PaaS services. The corresponding service offering on AWS is Relational Database Service (RDS). As far as purpose-built/NoSQL databases are concerned, AWS seems to have more offerings, but a lot of that is owing to several of these being bundled within Cosmos DB on Azure. The closest thing on AWS to native Cosmos DB, as well as to Azure Table Storage, is DynamoDB.

Another common pattern with databases on both Azure and AWS is proprietary database engines that offer compatibility with some standard API. For example, you can set up Cosmos DB with MongoDB API compatibility (the AWS counterpart being DocumentDB) or with Cassandra API compatibility (the AWS counterpart being Keyspaces). In the same vein, switching back to relational, AWS has Aurora – a proprietary database engine offered through RDS that is compatible with either MySQL or PostgreSQL.

Cache

Azure has a managed Redis Cache offering. The corresponding service in AWS is ElastiCache which supports both Redis as well as Memcached.

Messaging and Event Processing

As far as message queues are concerned, Azure Storage has a Queue Storage option. For more advanced usage, Azure Service Bus has Queues as well. The closest equivalent on AWS to these is Simple Queue Service (SQS). The closest equivalent to Topics on Azure Service Bus is Simple Notification Service (SNS). On more of the event processing side, Azure has Event Hubs and there’s also Azure Event Grid – the AWS counterpart to these is the Kinesis set of services. You can’t talk events without talking about Kafka – whereas Azure has Kafka support as part of HDInsight, AWS has Managed Streaming for Apache Kafka (MSK).

Since we brought up HDInsight – even though it is not technically messaging or event processing – HDInsight supports both Hadoop and Spark. The closest AWS equivalent is EMR, which covers both, with Glue as a serverless, Spark-based ETL offering. Finally, only slightly related to all these: Azure does not have a dedicated email service – it promotes SendGrid as a marketplace option. AWS does have Simple Email Service (SES) – so there’s that.

Storage and Secrets

Setting aside Queue Storage (which is a queue more than storage) and Table Storage (which is a database more than storage), both of which I mentioned above, Azure Storage groups two pure storage services – Blob Storage (the AWS counterpart being Simple Storage Service, or S3) and File Storage (the AWS counterpart being Elastic File System, or EFS). In terms of block-level storage, Azure has Managed Disks whereas AWS has Elastic Block Store (EBS).

For secret management, Key Vault is the Azure one-stop-shop. AWS has a bunch of services that deal with secrets and keys- such as KMS, CloudHSM and Secrets Manager. There’s also Parameter Store which is a general-purpose configuration data storage service but also supports encrypted secrets.

Observability

Between Azure Monitor and Application Insights, your logging, tracing and monitoring needs should be covered on Azure. The corresponding services in AWS are CloudWatch and X-Ray.

SCM and CI/CD

Azure DevOps is a set of tools to support SCM and CI/CD (including Azure Repos for source control and Azure Pipelines for CI/CD). AWS has its own source control service in CodeCommit along with CI/CD provided by CodeBuild and CodePipeline.

In conclusion, I hope this helps someone. It is very high level and barely scratches the surface of anything, really. I felt the need for something like this especially when I was building software that needed to run natively on both cloud platforms, or when I was building abstractions for cross-cutting functionality. In any case, I hope it orients you in the right direction if you are in a similar position.

Originally published on aashishkoirala.com.

The Rise of Go

Recently, Go has seen a real uptick in popularity and adoption for a variety of uses. It has been around for a while and has been continually improving. The purpose-built simplicity and the extra focus on making concurrency easy and safe are part of it. The other part I like is the ease with which what you write becomes portable. These aspects especially make it a good fit for writing infrastructure and tooling.

I think the recent spike in popularity can be attributed to two factors- the first being Docker/Kubernetes/CNCF and the second being HashiCorp.

Docker was written in Go, which I assume had some influence on Kubernetes being written in Go (or maybe it was because it incubated at Google). In any case, what that has meant is that a good chunk of the tooling built around Kubernetes has been written in Go. By extension, CNCF, which was born out of Kubernetes, has been incubating a large number of projects, most of which are written in Go. It is almost like an unwritten rule that a CNCF project had better be written in Go (sort of like Apache with Java). Some of the more widely used tools like Helm and Jaeger are good examples.

HashiCorp is a big player in the distributed software and cloud tooling space, and its adoption of Go is probably a factor in Go’s popularity as well. Most of its major products – Terraform, Vault, Consul, Nomad and Packer – are written in Go.

I have been dabbling in Go recently, and I like it. If I had to write something quickly, or something big with lots of moving pieces, I would still fall back on my comfortable cushion that is C#. For a lot of usages, that is still the right language to use (or whatever your mainstream language of choice is – be it Node, Python or Ruby, and if you’re a Java person, well, you do you). I have been trying to force myself to adopt Go at least for any CLI tooling that needs to be written. By extension, I have started to prefer seeing CLI tooling written in Go rather than in Python or Node, I think mainly because of the portability. Go-built tools ship as single executables, which just makes everything so much better.

For infrastructure components, Go strikes a good balance between performance and maintainability. I think a good tag-team combination is Rust for the really low-level stuff combined with Go as an orchestrator.

But even in the domain of writing application services, where my primary choice is C#, I think Go is seeing a lot of popularity, especially because of the rise of microservices and each piece not having to be built up of a very large number of components.

If you are in a position where you would like to broaden your programming language horizon, I would certainly give Go a shot.

Originally published on aashishkoirala.com.

Revisiting Kubernetes vs. Service Fabric

Since I wrote my initial post regarding Kubernetes and Service Fabric, a few things have happened:

  • Kubernetes had a chance to mature a lot more and also, needless to say, has sky-rocketed in adoption.
  • Managed Kubernetes on the major cloud providers (AKS/EKS/GKE) has had a chance to mature a lot more.
  • Adoption of Service Fabric is miniscule in comparison.
  • Microsoft itself seems to be putting (wisely so) much of its firepower behind Kubernetes while Service Fabric sort-of just sits there on the side.
  • The successor to Service Fabric (i.e. Service Fabric Mesh) is going to be container-driven.

Specifically, in terms of where Microsoft is putting its money, I think that got brought home at Ignite 2019. You only need to sit through the major keynotes and peruse the sessions to figure out that as far as these kinds of platforms are concerned, Kubernetes has “won the day”. All things being equal, my suggestion would be to adopt Kubernetes and avoid Service Fabric. If you are starting out, this means making sure you pick a technology that is not bound to any specific OS platform (sure, Kubernetes can run Windows workloads, but it will be a while before it gets parity with Linux if it ever does). If you’re already invested in Service Fabric, put a migration plan in place to move away.

Not to say that Service Fabric is going to be sunsetted any time soon. After all, a lot of Azure services supposedly run on it. That does not preclude Microsoft from winding it down slowly as a public offering though. I would say Service Fabric is in a similar bucket as a lot of other similar platforms- Mesos, Nomad, Borg, or whatever Netflix’s proprietary thing is called. They are all solid platforms and battle tested and are in wide use by companies that built them or adopted them pre-Kubernetes. As a general consumer of tech, though, Kubernetes seems to be the better choice.

This, of course, does not mean you should couple your software to Kubernetes or any specific platform for that matter. If you build it such that you can easily port from one to another, these fluctuations in the technology landscape are a lot less painful.

Originally published on aashishkoirala.com.

Working with Windows Containers in Kubernetes

Even though Docker was built atop Linux containers and that is the majority of Docker usage out there, Windows Containers have been a thing for a while now. They went mainstream in 2016 and, one hopes, became “ready for primetime” with Windows Server 2019. Even though integration with Docker is getting tighter, if you are in the unfortunate position of having to use Windows Containers with Kubernetes, you are going to have issues.

If your language/runtime is platform-independent, and if you’re not invoking any platform-specific libraries, your best bet would be to just stick to Linux containers. For .NET folks, this means using .NET Core and making sure you’re not using anything that is Windows-native (or if you are, making sure there is a fallback or abstraction or what-have-you). Additionally, you should make sure your code is not making any assumptions about what platform it is running on (think file path roots and separators). Staying with Linux containers means – if nothing else – that your Kubernetes deployment is part of the majority – and you are more likely to find resources or support online for issues that may arise.
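On the path point in particular, here is a small illustrative sketch of what I mean by platform-agnostic code – the class and file layout are made up, but the idea is to lean on Path.Combine and explicit runtime checks instead of hard-coded Windows assumptions:

using System;
using System.IO;
using System.Runtime.InteropServices;

public static class PathHelper
{
    public static string GetDataFilePath(string fileName)
    {
        // Build paths from parts instead of hard-coding "C:\..." roots or "\" separators.
        var baseDirectory = AppContext.BaseDirectory;
        return Path.Combine(baseDirectory, "data", fileName);
    }

    public static bool IsRunningOnWindows() =>
        // Prefer an explicit runtime check over assuming the OS.
        RuntimeInformation.IsOSPlatform(OSPlatform.Windows);
}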

If you do decide to embark on your Windows Container-Kubernetes journey, keep the following in mind:

  • There is no good “Developer Machine Testing” option. Docker for Windows supports Windows containers but the instance of Kubernetes that ships with it doesn’t. Minikube does not support Windows containers.
  • This means you have to set up a deployable environment and use that instead of testing locally on your own machine. So the best you can do is maybe test your container directly on Docker on your machine, and then deploy it out to your test cluster and see if it works.
  • As far as a deployable environment is concerned, short of setting up an on-prem cluster on Windows Server 2019 boxes (that only covers the worker nodes – the control plane still needs Linux), your best bet is to go with one of the big three cloud providers.
  • All three – Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE) and Amazon EKS – claim support for Windows Containers in some capacity; however, all of them are in Preview mode currently and thus not ready for production use.
  • This being Windows Containers, I would say the safest bet might be to go with AKS. That is the only one I tried.

Azure Kubernetes Service (AKS)

Assuming you are going with AKS, you have to then keep the following in mind:

  • You’ll be building your containers against some variant of NanoServer. Make sure you use a build of NanoServer that matches the build of Windows Server 2019 that AKS uses. I initially went with 1903 (since it was the latest) and that did not work. As of today, the version that works is build 1809. For ASP.NET Core apps, then, the correct base image to build your container off is mcr.microsoft.com/dotnet/core/aspnet:2.2-nanoserver-1809 (see here for details).
  • Again, realize that Windows Containers are still a preview or “experimental” feature even on AKS. You have to enable experimental/preview features in AKS first, BEFORE you provision your AKS cluster. Once again, the control plane still needs Linux – so you will need a cluster with 2 node pools. The provisioned cluster will include a single node pool based on Linux. You have to then add a Windows node pool where your Windows containers will be deployed. To do all this through the Azure CLI, you also have to install the aks-preview extension on the CLI. This article lays it all out.

All said, though, as of today it’s all experimental, so you can’t really run production workloads. If you’re on Windows and are considering Kubernetes, you can use the above to start preparing for your eventual migration – or, if you have the option, you can start preparing your application so it can run on Linux.

Installing PFX Certificates in Docker Containers

Recently, I came across having to install PKCS12 certificate bundles (i.e. a PFX file with the certificate and private key included, protected with a password) on a Docker container. This is standard fare on normal Windows machines or on PaaS systems such as Azure App Service. Doing this on a container, though, proved to be tricky (perhaps with good reason as I mention later) – so tricky that I ended up writing my own tool to do it. I have written this up in case you have similar needs and are working with .NET Core.

Should you?

Probably not – if you can help it. The most common reason to install a PFX with a private key anywhere is so that some application running there can access it and use it for authentication, signing or other cryptography purposes. Usually, this is on a protected production machine or somewhere more managed where the security is handled by the resource provider (e.g. Azure App Service). By that rationale, if your container image is going to be stored somewhere secure, I guess it’s fine. If your image is published or is going to run some place where anyone can get to it, then that’s almost as bad as just handing out your private key. This is because, with the image in hand, anyone can fire up Docker, get inside the image, and extract the certificate or use it.

Most platforms will allow you to parameterize sensitive information and pass it securely to Docker while running the container. If I have to deploy my application to Kubernetes, for example, I can generate a Kubernetes Secret out of it. That also means, though, that I have to change my application to read this information as though it were in a file. Now, if I have that option, it is the best one to go with.
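As a rough sketch of that approach – the mount path, environment variable and class here are all made up for illustration – the application would load the certificate straight from the mounted secret instead of from a certificate store:

using System;
using System.IO;
using System.Security.Cryptography.X509Certificates;

public static class MountedCertificateLoader
{
    public static X509Certificate2 Load()
    {
        // Hypothetical mount point for a Kubernetes Secret (or a Docker secret/volume).
        var pfxPath = "/var/secrets/app/certificate.pfx";

        // The password should also come in as a secret, not be baked into the image.
        var pfxPassword = Environment.GetEnvironmentVariable("CERT_PASSWORD");

        return new X509Certificate2(File.ReadAllBytes(pfxPath), pfxPassword);
    }
}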

Okay, but I still need to

I found very sparse documentation online about installing PFX certificates on containers. As far as Windows (i.e. NanoServer) is concerned, certoc.exe is the prescribed tool for dealing with certificates. I can’t use PowerShell and the good old Import-PfxCertificate because the .NET Core SDK/Runtime images don’t have PowerShell, since PowerShell has a dependency on .NET Framework. They have PowerShell Core, but that does not have the certificate commands yet. Most online content surrounding certoc, though, deals with installing CA public key certificates to the root store, not with PFX certificates. Getting your hands on the tool is also a bit of a chore. It’s included in the NanoServer base image, but not in the .NET Core Runtime or SDK base images – so I have to create a multi-stage DOCKERFILE to get it from one, copy it to the other to run it, and so on. After all that, even though there are options for it, I just couldn’t get PFX to work with certoc – and the error messages are unhelpful, to say the least. As an added deterrent, it’s a Windows-only thing – so it would not work on Linux containers anyway.

So just write your own

If you’re already trying to run a .NET Core application on a container and are in the space of writing .NET Core code, your best bet therefore is to just use the X509Store API to write a quick little tool that does what you want. Installing a PFX is as easy as (this is the simplest case, i.e. a single certificate with no chain):

// Requires: using System.Security.Cryptography.X509Certificates;
using (var certificate = new X509Certificate2(pfxFileBytes, pfxPassword, X509KeyStorageFlags.Exportable | X509KeyStorageFlags.PersistKeySet))
using (var store = new X509Store(storeName, storeLocation, OpenFlags.ReadWrite))
{
    // PersistKeySet above ensures the private key is persisted along with the certificate.
    store.Add(certificate);
    store.Close();
}

To that end – PFXTOOL

I ended up writing a .NET Core Global Tool that does the above and a bunch more with PFX certificates so that I could use it more easily in containers. You can find PFXTool here. I’ve tested it on .NET Core Runtime Docker images with both Windows NanoServer and Alpine Linux. The following are sample DOCKERFILEs that I used to test it (I’ve included the simpler ones that run on the SDK base images. The Runtime ones are a bit trickier since you need the SDK to install .NET Core Global Tools – I’ve documented the workaround here – and that is what I did for this tool as well).

Windows

FROM mcr.microsoft.com/dotnet/core/sdk:2.2-nanoserver-1803
WORKDIR /app
COPY TestCertificate.pfx ./
RUN dotnet tool install pfxtool -g && pfxtool import --file TestCertificate.pfx --password Password123 --scope user
ENTRYPOINT ["pfxtool", "list", "--scope", "user"]

Linux

FROM mcr.microsoft.com/dotnet/core/sdk:2.2-alpine3.9
WORKDIR /app
COPY TestCertificate.pfx ./
ENV PATH "$PATH:/root/.dotnet/tools"
RUN dotnet tool install pfxtool -g && pfxtool import --file TestCertificate.pfx --password Password123 --scope user
ENTRYPOINT ["pfxtool", "list", "--scope", "user"]

A quick note about X509Store on Linux

On Windows, it is pretty clear where X509Store puts the certificates, since they’re accessible using certmgr. On Linux, though, the whole thing is a bit of a retrofit of the contracts onto how Linux (or OpenSSL, to be precise) handles certificates. While it generally works and will most probably work for your scenario, you should note the following:

  • The only supported stores for LocalMachine are Root and CA (Root points to the OpenSSL certificate directory, and CA I think is a filter on Root that excludes self-issued certificates).
  • OpenSSL does not have the concept of a CurrentUser scope. Anything you put in or get from CurrentUser stores goes to a special .NET-specific directory (~/.dotnet/corefx/x509stores/ – officially undocumented and subject to change); reading it back looks the same as on any other platform, as sketched below.
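For completeness, here is a minimal sketch of reading back whatever was installed into the CurrentUser/My store – this is plain X509Store usage and behaves the same on Windows and Linux:

using System;
using System.Security.Cryptography.X509Certificates;

public static class CertificateLister
{
    public static void ListCurrentUserCertificates()
    {
        // On Linux this reads from the .NET-specific directory mentioned above;
        // on Windows it reads from the regular CurrentUser\My store.
        using (var store = new X509Store(StoreName.My, StoreLocation.CurrentUser))
        {
            store.Open(OpenFlags.ReadOnly);
            foreach (var certificate in store.Certificates)
            {
                Console.WriteLine($"{certificate.Subject} ({certificate.Thumbprint})");
            }
        }
    }
}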

And there you have it. I guess you could summarize it as: try not to do it, here’s how you should probably handle it instead, but if you have to, here’s a way, here’s even a tool, but then look at these caveats if you’re on Linux.

Originally published on aashishkoirala.com.

Running .NET Core Global Tools Without the SDK

.NET Core Global Tools are pretty neat. If you are targeting developers with the .NET Core SDK installed on their machines and need to ship CLI tools, your job is made immensely easier. It is just as easy as shipping a NuGet package. However, once you get used to building these things, it is easy to fall into the trap of treating this shipping mechanism as if it were Chocolatey (or apt-get, or yum, or what-have-you). It is certainly not that. The process of installing and upgrading your tools is handled by the .NET Core SDK – which saves you from having to create a self-contained package as you would if you were shipping a ready-to-go tool – and this makes sense: global tools are a developer-targeted thing. You’re not supposed to use them to distribute end-user applications.

However, there is a use case in the middle that warrants some resolution, and that is – being able to install and run a .NET Core Global Tool on a machine that has the .NET Core Runtime installed but not the SDK. A good example of this is if, as part of my deployment pipeline, I need to build a Docker Container image that is supposed to run in production and therefore has the Runtime installed but not the SDK – and it needs this tool installed in the container.

In this scenario, the following is what I do:

  • As part of the deployment pipeline, install the tool on the deployment machine itself (or hosted agent or whatever if you’re using a CI/CD pipeline tool)- which is going to have the SDK since you’re building and deploying stuff from it.
  • As part of installing the tool, the SDK will then create a folder where the tool is published so as to be self-contained (not TOTALLY self-contained, it will still need the Runtime, of course).
  • As part of your Docker image build, add a COPY line to your DOCKERFILE to copy the tool files to your container image.

Example: let’s say I have a global tool called mytool that is published to NuGet with version 1.0.0. Then, if I run the following:

dotnet tool install -g mytool

The tool ends up getting installed in the UserProfile/.dotnet/tools/ directory (where UserProfile would commonly be C:\Users\username on Windows and /home/username on Linux). You can further control the install path using the --tool-path option rather than the -g (or --global) option – if that makes your job easier.

In the above-mentioned folder, it creates a platform-native executable shim, with the actual files residing inside a .store directory. So, going back to the mytool example, you could have a directory structure as follows:

UserProfile/.dotnet/tools/.store/mytool/1.0.0/mytool/1.0.0/tools/netcoreapp2.2/any

The above is just an example. The actual structure will be similar but could vary depending on what TFMs the tool targets and how the NUPKG is packaged and is thus exploded by the installer. In any case, inside this directory will be the set of files that make up this tool along with the main startup assembly (say mytool.dll) that you can invoke by switching to this directory and running:

dotnet mytool.dll

You can use this mechanism to then set up your DOCKERFILE to copy the files to a well-defined location inside the container. Perhaps you can add the well-defined location to PATH or add a shim shell script to invoke it to make things easier. It’s all a bit cumbersome but it works. Of course, if you OWN the tool, you could just publish it differently with dotnet publish – but in cases where you need to use a published tool in your production runtime environment, this helps.

Azure DevOps for CI and CD

I set up CI and CD for two of my applications using Azure DevOps. It was quite easy. Setting up the build pipeline is as simple as including a YAML file in your source repository. It then just comes down to knowing how the build YAML schema works. As far as the release (or deployment) pipeline is concerned, though, I could not find a similar method. I had to set it up through the Azure DevOps UI. I don’t know if there is some information I am missing, but that would seem to somewhat go against the DevOps principle – you know – infrastructure as code and all.

Anyway, my personal website and blog is a straightforward Razor app based on ASP.NET Core. The build YAML was then very simple:

pool:
  vmImage: 'VS2017-Win2016'

variables:
  buildConfiguration: 'Release'

steps:
- script: |
    dotnet build --configuration $(buildConfiguration)
    dotnet publish --configuration $(buildConfiguration) --output $(Build.BinariesDirectory)/publish
  displayName: 'Build Application'

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.BinariesDirectory)/publish' 
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip' 
    replaceExistingArchive: true 
  displayName: 'Zip Build Output'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Build Artifacts'

Things to note here – at first, it was not obvious to me that I need to actually publish the build artifacts for them to be available further down the pipeline. Once I figured that out, though, the built-in task makes this easy. I did have to zip up my folder, though. It’s a good idea anyway, as it’s easier than uploading a gazillion files, but I couldn’t get it to work without zipping it up. Second, even though I am running the build on a Windows machine, I could just as easily have picked a Linux box since this is .NET Core.

My release pipeline is pretty simple – there are no multiple stages, and it just comes down to one task – the Azure App Service deploy task.

I guess the only difference with my other application, Listor, is that it is a React-based SPA with a .NET Core backend, so the build is a little more involved. I therefore found it easier to write a custom PowerShell script and call that from the build YAML rather than trying to express it all in YAML.

$ErrorActionPreference = "Stop"
Push-Location AK.Listor.WebClient
Write-Host "Installing web client dependencies..."
npm install
If ($LastExitCode -Ne 0) { Throw "npm install failed." }
Write-Host "Building web client..."
npm run build
If ($LastExitCode -Ne 0) { Throw "npm run build failed." }
Pop-Location
Write-Host "Deleting existing client files..."
[System.IO.Directory]::GetFiles("AK.Listor\Client", "*.*") | Where-Object {
	[System.IO.Path]::GetFileName($_) -Ne ".gitignore"
} | ForEach-Object {
	$File = [System.IO.Path]::GetFullPath($_)
	[System.IO.File]::Delete($File)
}
[System.IO.Directory]::GetDirectories("AK.Listor\Client") | ForEach-Object {
	$Directory = [System.IO.Path]::GetFullPath($_)
	[System.IO.Directory]::Delete($Directory, $True)
}
Write-Host "Copying new files..."
[System.IO.Directory]::GetFiles("AK.Listor.WebClient\build", "*.*", "AllDirectories") | Where-Object {
	-Not ($_.EndsWith("service-worker.js"))
} | ForEach-Object {
	$SourceFile = [System.IO.Path]::GetFullPath($_)
	$TargetFile = [System.IO.Path]::GetFullPath([System.IO.Path]::Combine("AK.Listor\Client", $_.Substring(26)))
	Write-Host "$SourceFile --> $TargetFile"
	$TargetDirectory = [System.IO.Path]::GetDirectoryName($TargetFile)
	If (-Not [System.IO.Directory]::Exists($TargetDirectory)) {
		[System.IO.Directory]::CreateDirectory($TargetDirectory) | Out-Null
	}
	[System.IO.File]::Copy($SourceFile, $TargetFile, $True) | Out-Null
}
Write-Host "Building application..."
dotnet build
If ($LastExitCode -Ne 0) { Throw "dotnet build failed." }

The build YAML, then is simple enough:

pool:
  vmImage: 'VS2017-Win2016'

variables:
  buildConfiguration: 'Release'

steps:
- script: |
    powershell -F build.ps1
    dotnet publish --configuration $(buildConfiguration) --output $(Build.BinariesDirectory)/publish	
  displayName: 'Run Build Script'

- task: ArchiveFiles@2
  inputs:
    rootFolderOrFile: '$(Build.BinariesDirectory)/publish' 
    includeRootFolder: false
    archiveType: 'zip'
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip' 
    replaceExistingArchive: true 
  displayName: 'Zip Build Output'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Build Artifacts'

The release pipeline, of course, is identical since both these applications are hosted as app services on Azure. Since deploying these applications to production, I have had to make a few changes and this CI/CD pipeline has proved very helpful.

On Service Fabric, Kubernetes and Docker

UPDATE (Nov 13, 2019) – My views on this have changed since I first wrote this. See this post for where I currently stand.

Let us get Docker out of the way first. Microservices and containers are quite the hype these days. With hype comes misinformation and hysteria. A lot of people conflate the two (fortunately there are wise people out there to set us all straight). If you have done your due diligence and decided to go with microservices, you don’t have to go with containers. In fact, one could argue that using containers in production can be a crutch for applications that have too many tentacles and that there is no appetite to port or rewrite to be “portable”. Containers do have other good use cases too. Docker is the leading container format (although it is starting to face some competition from rkt these days), and all in all I am glad containers exist and I am glad that Docker exists. Just be aware that what you think you must use may not be what you need at all.

Service Fabric was written by Microsoft as a distributed process and state management system to run its own Azure cloud infrastructure. After running Azure on top of it for some years, Microsoft decided to release it as a product, with the marketing pitch geared towards microservices. Whether you are doing microservices or not, if you have a multitude of services or deployment units, Service Fabric is an excellent option. If you are running .NET workloads, it is an even better option, as you get to use its native features such as the Stateful model, among others. Since it is also a process orchestrator – and at the end of the day, a Docker container is just a Docker process – Service Fabric can also act as a decent container orchestrator. Kubernetes, I am told, has a better ingress configuration system, but Service Fabric has improvements coming related to that. The other big promise is around Service Fabric Mesh – a fully managed PaaS offering – so that may sway the decision somewhat.

If you have done your homework and find that containers are the way to go, then Kubernetes is a good choice for the orchestration system. Even Microsoft is all behind Kubernetes now with its Azure Kubernetes Service, which only makes sense given that it is in Microsoft’s best interest that more developer ecosystems adopt Azure. Kubernetes also boasts having run years’ worth of production workloads at Google. If you are completely on board with containers, then even if you are running .NET workloads (especially cross-platform workloads such as .NET Core), perhaps you mitigate a lot of risk by sticking to Kubernetes. However, especially if you have a .NET workload, you have to decide whether you want to pick up what you have, put it in containers, and move into Kubernetes; or, with some effort, just make your application portable enough that you don’t need containers.

Where do I stand? For my purposes (which involve writing primarily .NET applications), I am partial to Service Fabric. However, take that within the context that I have been working with Service Fabric in production for almost a couple of years now, while I have only “played around” with Kubernetes. I do like the fact that Service Fabric can run on a set of homogeneous nodes, as opposed to Kubernetes, which needs a master. However, as with any stance, this one can change given new information and experience.

What should you do? Not listen to quick and easy decision making flowcharts. When you do listen to opinions, be aware of the fact that they are influenced by experience and also sometimes by prejudice. Finally, evaluate your own needs carefully rather than do something because “everybody is doing it”. I could be eating my words years from now, but based on their origins and use, I don’t think either Service Fabric or Kubernetes are at risk of being “abandoned” by their stewards (i.e. Microsoft and Google) at any point.

Reader/Writer Locking with Async/Await

Consider this another pitfall warning. If you are a frequent user of reader/writer locking (via the ReaderWriterLockSlim class) like I am, you will undoubtedly run into this situation. As more and more of the code we write these days is asynchronous with the use of async/await, it is easy to end up in the following situation (an oversimplification, but just imagine write locks in there as well):

async Task MyMethod()
{
	...
	myReaderWriterLockSlim.EnterReadLock();
	var thing = await ReadThingAsync();
	... 
	myReaderWriterLockSlim.ExitReadLock(); // This guy will choke.
}

This, of course, will not work. Reader/writer locks, at least the implementation in .NET, are thread-affine: the very same thread that acquired a lock must be the one to release it. As soon as you hit an await, the rest of the method may resume on a different thread. So this cannot work.

This explains why other thread-synchronization classes such as SemaphoreSlim are async/await-savvy, with methods like WaitAsync, while ReaderWriterLockSlim is not.

So, what are our options?

  1. Carefully write our code such that whatever happens between reader/writer locks is always synchronous.
  2. Relax some of the rules around reader/writer locking that require it to be thread-affine and roll your own.
  3. Look for an already existing, widely adopted, mature library that handles this very scenario.

In the spirit of Option 3, Stephen Cleary has an AsyncEx library that includes this functionality and many others geared towards working efficiently with async/await. If that is too heavy-handed, may I suggest this post by Stephen Toub that lays out a basic implementation that you can build upon?
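For what it’s worth, if you can live without reader concurrency (i.e. an exclusive lock is acceptable for your scenario), the simplest async-friendly option in the box is SemaphoreSlim. Here is a minimal sketch along the lines of the snippet above (ReadThingAsync being the same placeholder method):

private readonly SemaphoreSlim _lock = new SemaphoreSlim(1, 1);

async Task MyMethod()
{
    await _lock.WaitAsync(); // Safe across awaits: SemaphoreSlim is not thread-affine.
    try
    {
        var thing = await ReadThingAsync();
        // ... use thing ...
    }
    finally
    {
        _lock.Release();
    }
}

You do lose the multiple-concurrent-readers benefit, of course – which is exactly the gap that the AsyncEx library and the approach in Stephen Toub’s post are there to fill.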